CN114419064B - Mammary gland area image segmentation method based on RN-DoubleU-Net network

Mammary gland area image segmentation method based on RN-DoubleU-Net network

Info

Publication number
CN114419064B
CN114419064B (application CN202210021366.1A)
Authority
CN
China
Prior art keywords
size
convolution
network
multiplied
coding
Prior art date
Legal status
Active
Application number
CN202210021366.1A
Other languages
Chinese (zh)
Other versions
CN114419064A (en)
Inventor
Chen Yuli (陈昱莅)
Zhou Yao (周耀)
Lu Cheng (陆铖)
Ma Miao (马苗)
Pei Zhao (裴炤)
Wu Jie (武杰)
Current Assignee
Shaanxi Normal University
Original Assignee
Shaanxi Normal University
Priority date
Filing date
Publication date
Application filed by Shaanxi Normal University filed Critical Shaanxi Normal University
Priority to CN202210021366.1A priority Critical patent/CN114419064B/en
Publication of CN114419064A publication Critical patent/CN114419064A/en
Application granted granted Critical
Publication of CN114419064B publication Critical patent/CN114419064B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30068 Mammography; Breast

Abstract

A mammary gland area image segmentation method based on an RN-DoubleU-Net network comprises the steps of preprocessing a data set, constructing an RN-DoubleU-Net network model, training the RN-DoubleU-Net network, saving the model, validating the RN-DoubleU-Net network, and testing the RN-DoubleU-Net network. The DoubleU-Net network model is formed by connecting two U-Net sub-networks in series, each U-Net sub-network being formed by connecting an encoder, a hole convolution, and a decoder in series in sequence. A deep residual network module formed by one common convolution block and three residual convolution blocks is used as the encoder of the second U-Net sub-network to construct the RN-DoubleU-Net network, which segments the mammary gland area image and uses the effective information in the image to segment it accurately. Compared with existing image segmentation methods, the method of the invention has the advantages of accurate segmentation of the mammary gland area image, high segmentation accuracy, and high segmentation speed, and can be used for automatically segmenting mammary gland area images.

Description

Mammary gland area image segmentation method based on RN-DoubleU-Net network
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to the segmentation of mammary gland area images.
Background
Image segmentation refers to detecting the regions or objects in an image that are significant to people. With the popularity of intelligent devices and the advent of the big-data age, people generate, store, and use large numbers of pictures, and image segmentation techniques help to further identify and analyze them. Deep learning has shown strong performance in image segmentation: through continuous training it learns the features of an image and preserves the regions of interest. Deep learning therefore has broad research value and significance in the field of image segmentation.
Many deep learning methods have been applied to image segmentation, such as the U-Net network, and many improvements of the U-Net network have been studied. The segmentation of the mammary gland duct region by the U-Net network is not distinct and is quite unstable; the U-Net network segments common cells well, but its segmentation of the mammary gland duct region still needs improvement. Improved versions of the U-Net network, such as the DoubleU-Net network, achieve higher segmentation accuracy on the mammary gland duct region than the U-Net network, but still fall short of certain requirements.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a mammary gland area image segmentation method based on the RN-DoubleU-Net network that segments regions accurately, with high precision and high segmentation and detection speed.
The technical scheme adopted to solve the technical problems consists of the following steps:
(1) Dataset preprocessing
398 pictures are taken from the mammary gland area data set; each picture is 2000×2000 pixels.
1) The mammary gland area dataset pixel values are normalized to [0,1], and the pictures are cut into 512×512-pixel tiles.
2) The segmented data set is divided into a training set, a validation set, and a test set in the proportion 8:1:1.
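Step (1) can be illustrated with a short Python sketch. The directory layout, file suffix, and tiling policy below are illustrative assumptions; 2000 is not an exact multiple of 512, so this sketch simply discards the right and bottom remainder:

```python
import numpy as np
from pathlib import Path
from PIL import Image

def load_and_tile(image_dir, tile=512):
    """Normalize the 2000x2000 gland pictures to [0, 1] and cut them
    into 512x512 tiles, as in step (1)."""
    tiles = []
    for path in sorted(Path(image_dir).glob("*.png")):
        img = np.asarray(Image.open(path), dtype=np.float32) / 255.0  # -> [0, 1]
        h, w = img.shape[:2]
        for y in range(0, h - tile + 1, tile):
            for x in range(0, w - tile + 1, tile):
                tiles.append(img[y:y + tile, x:x + tile])
    return tiles

def split_8_1_1(tiles, seed=0):
    """Split the tiled dataset into train/validation/test sets at 8:1:1."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(tiles))
    n_tr, n_va = int(0.8 * len(tiles)), int(0.1 * len(tiles))
    train = [tiles[i] for i in idx[:n_tr]]
    val = [tiles[i] for i in idx[n_tr:n_tr + n_va]]
    test = [tiles[i] for i in idx[n_tr + n_va:]]
    return train, val, test
```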
(2) Construction of RN-DoubleU-Net network model
The RN-DoubleU-Net network model is formed by connecting a first U-Net sub-network and a second U-Net sub-network, wherein the output of the first U-Net sub-network is connected with the input of the second U-Net sub-network.
The first U-Net sub-network is formed by sequentially connecting a first sub-network encoder, a first sub-network hole convolution, and a first sub-network decoder in series, and the second U-Net sub-network is formed by sequentially connecting a second sub-network encoder, a second sub-network hole convolution, and a second sub-network decoder in series.
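The series connection can be sketched in PyTorch as below. The six sub-modules are passed in from outside (for instance, the encoder, hole-convolution, and decoder sketches given later in this description); the U-Net skip connections between encoder and decoder stages are omitted for brevity, so this is a structural sketch rather than the full model:

```python
import torch.nn as nn

class RNDoubleUNet(nn.Module):
    """Two U-Net sub-networks in series: the output of the first
    feeds the input of the second."""
    def __init__(self, enc1, hole1, dec1, enc2, hole2, dec2):
        super().__init__()
        self.unet1 = nn.Sequential(enc1, hole1, dec1)  # first U-Net sub-network
        self.unet2 = nn.Sequential(enc2, hole2, dec2)  # second U-Net sub-network

    def forward(self, x):
        return self.unet2(self.unet1(x))
```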
(3) Training RN-DoubleU-Net network
1) Determining an objective function
The objective function includes a loss function L_dice and an evaluation function F1. The loss function L_dice is determined according to the following formula:

L_dice = 1 − 2|X ∩ Y| / (|X| + |Y|)

wherein X represents the true values, X ∈ {x₁, x₂, ..., xₙ}, Y represents the predicted values, Y ∈ {y₁, y₂, ..., yₙ}, and n is the number of elements, a finite positive integer.
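A minimal PyTorch sketch of the Dice loss above; the smoothing term eps is an added assumption to avoid division by zero:

```python
import torch

def dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss L_dice = 1 - 2|X∩Y| / (|X| + |Y|), with pred the
    predicted probabilities Y and target the ground-truth mask X."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```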
The evaluation function F1 is determined by:

P = T / (T + F), R = T / (T + N), F1 = 2PR / (P + R)

wherein P is the precision, P ∈ [0,1]; R is the recall, R ∈ [0,1]; T is the true positives, T ∈ [0,1]; F is the false positives, F ∈ [0,1]; N is the false negatives, N ∈ [0,1]; and P, R, T, F, and N are not all 0 at the same time.
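Equivalently, in Python, with guards against empty denominators as an added assumption:

```python
def f1_score(t, f, n):
    """Evaluation function F1 from true positives t, false positives f,
    and false negatives n: P = t/(t+f), R = t/(t+n), F1 = 2PR/(P+R)."""
    p = t / (t + f) if t + f > 0 else 0.0
    r = t / (t + n) if t + n > 0 else 0.0
    return 2 * p * r / (p + r) if p + r > 0 else 0.0
```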
2) Training RN-DoubleU-Net network
The training set is fed into the RN-DoubleU-Net network for training. During training, the learning rate of the RN-DoubleU-Net network is γ ∈ [10⁻⁵, 10⁻³], the optimizer adopts adaptive moment estimation (Adam), and iteration continues until the loss function converges.
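A minimal training loop under these settings might look as follows. The epoch count, the sigmoid output activation, and the data-loader interface are illustrative assumptions; dice_loss is the function sketched above:

```python
import torch

def train(model, train_loader, epochs=50, lr=1e-3, device="cuda"):
    """Minimal Adam training loop for step (3); lr is taken from the
    stated range [1e-5, 1e-3]."""
    model = model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for img, mask in train_loader:
            img, mask = img.to(device), mask.to(device)
            opt.zero_grad()
            loss = dice_loss(torch.sigmoid(model(img)), mask)
            loss.backward()
            opt.step()
```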
(4) Preservation model
In the process of training the RN-DoubleU-Net network, the weights are continuously updated by the deep learning framework, and the weight file is saved.
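In PyTorch, for example, the weight file of step (4) could be written as below; the file name is an illustrative assumption, and `model` is the network instance from the training sketch above:

```python
import torch

# Persist the continuously updated weights to a weight file.
torch.save(model.state_dict(), "rn_doubleu_net_weights.pth")
```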
(5) Validating RN-DoubleU-Net network
The verification set is input into the RN-DoubleU-Net network for verification.
(6) Testing RN-DoubleU-Net network
The test set is input into the RN-DoubleU-Net network for testing, and the saved weight file is loaded to obtain the segmented mammary gland area image.
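A sketch of step (6), assuming `model` and `test_loader` come from the earlier sketches; the sigmoid activation and the 0.5 threshold for producing the binary gland mask are assumptions, as the output activation is not specified here:

```python
import torch

# Load the saved weight file and segment the test set.
model.load_state_dict(torch.load("rn_doubleu_net_weights.pth"))
model.eval()
with torch.no_grad():
    for img, _ in test_loader:
        mask = (torch.sigmoid(model(img)) > 0.5).float()
```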
In step (2) of the invention, constructing the RN-DoubleU-Net network model, the first sub-network encoder is the VGG19 network of the deep learning framework; the VGG19 network is composed of 16 coding convolution layers connected in series, each coding convolution layer having a 3×3 coding convolution kernel with stride 1.
The first sub-network hole convolution is formed by a hole convolution layer, and the hole convolution layer is formed by sequentially connecting 1 common convolution kernel and 5 hole (dilated) convolution kernels in series.
The first sub-network decoder is composed of 4 decoding convolution blocks, each comprising one 2×2 up-sampling layer, decoding convolution kernels of size 3×3 with stride 1, and 1 attention mechanism block; the input of the first sub-network decoder is connected with the output of the first sub-network hole convolution.
The common convolution kernel is 1×1 with stride 1; the first hole convolution kernel is 1×1 with dilation 1; the second hole convolution kernel is 3×3 with dilation 6; the third hole convolution kernel is 3×3 with dilation 12; the fourth hole convolution kernel is 3×3 with dilation 18; the fifth hole convolution kernel is 1×1 with dilation 1. The input of the hole convolution layer is connected to the output of the first sub-network encoder.
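The hole convolution layer can be sketched as below. The serial connection follows the text above; the channel widths and the interleaved ReLU activations are illustrative assumptions, and padding is chosen so the spatial size is preserved:

```python
import torch.nn as nn

class HoleConvolution(nn.Module):
    """Hole (dilated) convolution layer: one ordinary 1x1 kernel followed
    in series by five hole kernels with the listed sizes and dilations."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        specs = [(1, 1), (3, 6), (3, 12), (3, 18), (1, 1)]  # (kernel, dilation)
        layers = [nn.Conv2d(in_ch, out_ch, kernel_size=1, stride=1)]  # common kernel
        for k, d in specs:
            layers += [nn.Conv2d(out_ch, out_ch, kernel_size=k,
                                 padding=(k // 2) * d, dilation=d),
                       nn.ReLU(inplace=True)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)
```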
In step (2), constructing the RN-DoubleU-Net network model, the second sub-network hole convolution has the same structure as the first sub-network hole convolution, and its input is connected with the output of the second sub-network encoder. The second sub-network decoder is formed by connecting 4 decoding convolution blocks in series, each comprising one 2×2 up-sampling layer, decoding convolution kernels of size 3×3 with stride 1, and 1 attention mechanism block; the input of the second sub-network decoder is connected with the output of the second sub-network hole convolution.
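A sketch of one decoding convolution block, shared by both sub-network decoders. The bilinear 2×2 up-sampling and the squeeze-and-excitation style channel attention are assumptions; the text only names an up-sampling layer, 3×3 stride-1 decoding kernels, and an attention mechanism block:

```python
import torch.nn as nn

class DecodeBlock(nn.Module):
    """One decoding convolution block: 2x2 up-sampling, 3x3 stride-1
    decoding convolutions, and a channel-attention block."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=1, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        self.att = nn.Sequential(  # SE-style channel attention (assumed)
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, max(out_ch // 4, 1), 1), nn.ReLU(inplace=True),
            nn.Conv2d(max(out_ch // 4, 1), out_ch, 1), nn.Sigmoid())

    def forward(self, x):
        x = self.conv(self.up(x))
        return x * self.att(x)
```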
In the step of constructing the RN-DoubleU-Net network in the step (2), the second sub-network encoder is formed by sequentially connecting a common convolution block, a first residual convolution block, a second residual convolution block and a third residual convolution block in series.
The common convolution block is formed by connecting 1 coding convolution kernel of size 7×7 with stride 2 in series with 1 pooling layer of size 3×3 with stride 2.
The first residual convolution block consists of 3 residual units connected in series. The first residual unit is formed by sequentially connecting 4 coding convolution kernels in series: the second coding convolution kernel is 3×3 with stride 1, and the first, third, and fourth coding convolution kernels are 1×1 with stride 1. The second and third residual units are each formed by sequentially connecting 3 coding convolution kernels in series: in each unit the second coding convolution kernel is 3×3 with stride 1, and the first and third coding convolution kernels are 1×1 with stride 1.
The second residual convolution block consists of 4 residual units connected in series. The first residual unit is formed by sequentially connecting 4 coding convolution kernels in series: the first and fourth coding convolution kernels are 1×1 with stride 1, the second is 3×3 with stride 1, and the third is 1×1 with stride 1. The second to fourth residual units are each formed by sequentially connecting 3 coding convolution kernels in series: in each unit the second coding convolution kernel is 3×3 with stride 1, and the first and third coding convolution kernels are 1×1 with stride 1.
The third residual convolution block consists of 6 residual units connected in series. The first residual unit is formed by sequentially connecting 4 coding convolution kernels in series: the first and fourth coding convolution kernels are 1×1 with stride 1, the second is 3×3 with stride 1, and the third is 1×1 with stride 1. The second to sixth residual units are each formed by sequentially connecting 3 coding convolution kernels in series: in each unit the second coding convolution kernel is 3×3 with stride 1, and the first and third coding convolution kernels are 1×1 with stride 1. The input of the second sub-network encoder is connected to the output of the first sub-network decoder.
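The second sub-network encoder maps naturally onto a ResNet-style sketch: a 7×7/2 convolution plus 3×3/2 max-pool stem, then bottleneck blocks of 3, 4, and 6 residual units. BatchNorm/ReLU placement, channel widths, and the stride-2 down-sampling between blocks are standard ResNet assumptions not spelled out here (the text gives stride 1 for all unit kernels):

```python
import torch.nn as nn

class ResidualUnit(nn.Module):
    """Residual unit: 1x1 -> 3x3 -> 1x1 coding kernels; the first unit of
    each block carries a fourth 1x1 kernel as the shortcut projection."""
    def __init__(self, in_ch, mid_ch, stride=1):
        super().__init__()
        out_ch = mid_ch * 4
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1, stride=1), nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, stride=stride, padding=1), nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 1, stride=1), nn.BatchNorm2d(out_ch))
        self.shortcut = (nn.Conv2d(in_ch, out_ch, 1, stride=stride)  # fourth 1x1 kernel
                         if in_ch != out_ch or stride != 1 else nn.Identity())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + self.shortcut(x))

class SecondSubnetEncoder(nn.Module):
    """Common convolution block (7x7/2 conv + 3x3/2 max-pool) followed by
    residual blocks of 3, 4, and 6 units."""
    def __init__(self, in_ch=3, base=64):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, base, 7, stride=2, padding=3),
            nn.BatchNorm2d(base), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1))
        self.block1 = self._block(3, base, base, stride=1)
        self.block2 = self._block(4, base * 4, base * 2, stride=2)
        self.block3 = self._block(6, base * 8, base * 4, stride=2)

    @staticmethod
    def _block(n_units, in_ch, mid_ch, stride):
        units = [ResidualUnit(in_ch, mid_ch, stride)]
        units += [ResidualUnit(mid_ch * 4, mid_ch) for _ in range(n_units - 1)]
        return nn.Sequential(*units)

    def forward(self, x):
        return self.block3(self.block2(self.block1(self.stem(x))))
```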
Because the invention adopts a common convolution block and three residual convolution blocks as the second sub-network encoder, the RN-DoubleU-Net network is constructed for segmenting the mammary gland area image; the RN-DoubleU-Net network can fully utilize the effective information in the image and accurately segment the mammary gland area image. Compared with existing image segmentation methods, the method of the invention has the advantages of accurate segmentation of the mammary gland area image, high segmentation precision, and high segmentation speed, and can be used for automatically segmenting mammary gland area images.
Drawings
Fig. 1 is a flow chart of embodiment 1 of the present invention.
Fig. 2 is a schematic diagram of the structure of an RN-DoubleU-Net network.
Fig. 3 is a schematic diagram of a second sub-network encoder of the RN-DoubleU-Net network.
Detailed Description
The invention will be further illustrated with reference to the drawings and examples, but the invention is not limited to the following examples.
Example 1
Fig. 1 shows the flowchart of this embodiment. As shown in fig. 1, the mammary gland area image segmentation method based on the RN-DoubleU-Net network of this embodiment consists of the following steps:
(1) Dataset preprocessing
398 pictures are taken from the mammary gland area data set; each picture is 2000×2000 pixels.
1) The mammary gland area dataset pixel values are normalized to [0,1]; this embodiment normalizes the mammary gland area dataset pixel values to 0.5. The pictures are cut into 512×512-pixel tiles.
2) The segmented data set is divided into a training set, a validation set, and a test set in the proportion 8:1:1.
(2) Construction of RN-DoubleU-Net network model
As shown in fig. 2, the RN-DoubleU-Net network of this embodiment is formed by connecting a first U-Net sub-network and a second U-Net sub-network, and the output of the first sub-network is connected to the input of the second sub-network.
The first U-Net sub-network is formed by sequentially connecting a first sub-network encoder, a first sub-network hole convolution, and a first sub-network decoder in series, and the second U-Net sub-network is formed by sequentially connecting a second sub-network encoder, a second sub-network hole convolution, and a second sub-network decoder in series.
The first sub-network encoder of this embodiment is the VGG19 network of the deep learning framework; the VGG19 network is composed of 16 coding convolution layers connected in series, each coding convolution layer having a 3×3 coding convolution kernel with stride 1. The first sub-network hole convolution is formed by a hole convolution layer, which is formed by sequentially connecting 1 common convolution kernel and 5 hole convolution kernels in series. The first sub-network decoder is composed of 4 decoding convolution blocks, each comprising one 2×2 up-sampling layer, decoding convolution kernels of size 3×3 with stride 1, and 1 attention mechanism block; the input of the first sub-network decoder is connected with the output of the first sub-network hole convolution.
In this embodiment, the common convolution kernel is 1×1 with stride 1; the first hole convolution kernel is 1×1 with dilation 1; the second hole convolution kernel is 3×3 with dilation 6; the third hole convolution kernel is 3×3 with dilation 12; the fourth hole convolution kernel is 3×3 with dilation 18; the fifth hole convolution kernel is 1×1 with dilation 1. The input of the hole convolution layer is connected to the output of the first sub-network encoder.
In fig. 2, the second sub-network hole convolution is identical in structure to the first sub-network hole convolution, and its input is connected to the output of the second sub-network encoder. The second sub-network decoder is formed by connecting 4 decoding convolution blocks in series, each comprising one 2×2 up-sampling layer, decoding convolution kernels of size 3×3 with stride 1, and 1 attention mechanism block; the input of the second sub-network decoder is connected with the output of the second sub-network hole convolution.
In fig. 3, the second sub-network encoder of the present embodiment is formed by sequentially concatenating a normal convolution block, a first residual convolution block, a second residual convolution block, and a third residual convolution block.
The common convolution block of this embodiment is composed of 1 coding convolution kernel of size 7×7 with stride 2 connected in series with 1 pooling layer of size 3×3 with stride 2. The first residual convolution block consists of 3 residual units connected in series: the first residual unit is formed by sequentially connecting 4 coding convolution kernels in series, where the second coding convolution kernel is 3×3 with stride 1 and the first, third, and fourth coding convolution kernels are 1×1 with stride 1; the second and third residual units are each formed by sequentially connecting 3 coding convolution kernels in series, where the second kernel of each unit is 3×3 with stride 1 and the first and third kernels are 1×1 with stride 1. The second residual convolution block consists of 4 residual units connected in series: the first residual unit is formed by sequentially connecting 4 coding convolution kernels in series, where the first and fourth kernels are 1×1 with stride 1, the second is 3×3 with stride 1, and the third is 1×1 with stride 1; the second to fourth residual units are each formed by sequentially connecting 3 coding convolution kernels in series, where the second kernel of each unit is 3×3 with stride 1 and the first and third kernels are 1×1 with stride 1. The third residual convolution block consists of 6 residual units connected in series: the first residual unit is formed by sequentially connecting 4 coding convolution kernels in series, where the first and fourth kernels are 1×1 with stride 1, the second is 3×3 with stride 1, and the third is 1×1 with stride 1; the second to sixth residual units are each formed by sequentially connecting 3 coding convolution kernels in series, where the second kernel of each unit is 3×3 with stride 1 and the first and third kernels are 1×1 with stride 1. The input of the second sub-network encoder is connected to the output of the first sub-network decoder.
Because the invention adopts a common convolution block and three residual convolution blocks as the second sub-network encoder, the RN-DoubleU-Net network is constructed; the mammary gland area images in the data set are segmented, and the effective information in the images is utilized to segment the mammary gland area images, obtaining higher segmentation precision. Compared with the prior art, the method has the advantages of accurate segmentation of the mammary gland area image, high segmentation precision, and high segmentation speed.
(3) Training RN-DoubleU-Net network
1) Determining an objective function
The objective function includes a loss function L_dice and an evaluation function F1. The loss function L_dice is determined according to the following formula:

L_dice = 1 − 2|X ∩ Y| / (|X| + |Y|)

wherein X represents the true values, X ∈ {x₁, x₂, ..., xₙ}, Y represents the predicted values, Y ∈ {y₁, y₂, ..., yₙ}, and n is the number of elements, a finite positive integer. The value of n in this embodiment is 512; the specific value of n should be determined by the image pixel size, typically a multiple of 512 or a number that divides 512.
The evaluation function F1 is determined by:

P = T / (T + F), R = T / (T + N), F1 = 2PR / (P + R)

wherein P is the precision, P ∈ [0,1]; R is the recall, R ∈ [0,1]; T is the true positives, T ∈ [0,1]; F is the false positives, F ∈ [0,1]; N is the false negatives, N ∈ [0,1]. In this embodiment, P, R, T, F, and N each take the value 0.5.
2) Training RN-DoubleU-Net network
The training set is fed into the RN-DoubleU-Net network for training. During training, the learning rate of the RN-DoubleU-Net network is γ ∈ [10⁻⁵, 10⁻³]; the γ value of this embodiment is 10⁻³. The optimizer adopts the adaptive moment estimation (Adam) optimizer, iterating until the loss function converges.
(4) Preservation model
In the process of training the RN-DoubleU-Net network, the weights are continuously updated by the deep learning framework, and the weight file is saved.
(5) Validating RN-DoubleU-Net network
The verification set is input into the RN-DoubleU-Net network for verification.
(6) Testing RN-DoubleU-Net network
The test set is input into the RN-DoubleU-Net network for testing, and the saved weight file is loaded to obtain the segmented mammary gland area image.
This completes the mammary gland area image segmentation method based on the RN-DoubleU-Net network.
Example 2
The mammary gland area image segmentation method based on the RN-DoubleU-Net network in the embodiment comprises the following steps:
(1) Dataset preprocessing
398 pictures are taken from the mammary gland area data set; each picture is 2000×2000 pixels.
1) The mammary gland area dataset pixel values are normalized to [0,1]; this embodiment normalizes the mammary gland area dataset pixel values to 0. The pictures are cut into 512×512-pixel tiles.
2) The segmented data set is divided into a training set, a validation set, and a test set in the proportion 8:1:1.
(2) Construction of RN-DoubleU-Net network model
This step is the same as in example 1.
(3) Training RN-DoubleU-Net network
1) Determining an objective function
The objective function includes a loss function L_dice and an evaluation function F1. The loss function L_dice is determined according to the following formula:

L_dice = 1 − 2|X ∩ Y| / (|X| + |Y|)

wherein X represents the true values, X ∈ {x₁, x₂, ..., xₙ}, Y represents the predicted values, Y ∈ {y₁, y₂, ..., yₙ}, and n is the number of elements, a finite positive integer. The value of n in this embodiment is 512; the specific value of n should be determined by the image pixel size, typically a multiple of 512 or a number that divides 512.
The evaluation function F1 is determined by:

P = T / (T + F), R = T / (T + N), F1 = 2PR / (P + R)

wherein P is the precision, P ∈ [0,1]; R is the recall, R ∈ [0,1]; T is the true positives, T ∈ [0,1]; F is the false positives, F ∈ [0,1]; N is the false negatives, N ∈ [0,1]. In this example, the value of P is 0.5, and R, T, F, and N are 0.
2) Training RN-DoubleU-Net network
The training set is fed into the RN-DoubleU-Net network for training. During training, the learning rate of the RN-DoubleU-Net network is γ ∈ [10⁻⁵, 10⁻⁴]; the γ value of this example is 10⁻⁵. The optimizer adopts the adaptive moment estimation (Adam) optimizer, iterating until the loss function converges.
The other steps are the same as in Example 1. This completes the mammary gland area image segmentation method based on the RN-DoubleU-Net network.
Example 3
The mammary gland area image segmentation method based on the RN-DoubleU-Net network in the embodiment comprises the following steps:
(1) Dataset preprocessing
398 pictures are taken from the mammary gland area data set; each picture is 2000×2000 pixels.
1) The mammary gland area dataset pixel values are normalized to [0,1]; this embodiment normalizes the mammary gland area dataset pixel values to 1. The pictures are cut into 512×512-pixel tiles.
2) The segmented data set is divided into a training set, a validation set, and a test set in the proportion 8:1:1.
(2) Construction of RN-DoubleU-Net network model
This step is the same as in example 1.
(3) Training RN-DoubleU-Net network
1) Determining an objective function
The objective function includes a loss function L_dice and an evaluation function F1. The loss function L_dice is determined according to the following formula:

L_dice = 1 − 2|X ∩ Y| / (|X| + |Y|)

wherein X represents the true values, X ∈ {x₁, x₂, ..., xₙ}, Y represents the predicted values, Y ∈ {y₁, y₂, ..., yₙ}, and n is the number of elements, a finite positive integer. The value of n in this embodiment is 512; the specific value of n should be determined by the image pixel size, typically a multiple of 512 or a number that divides 512.
The evaluation function F1 is determined by:

P = T / (T + F), R = T / (T + N), F1 = 2PR / (P + R)

wherein P is the precision, P ∈ [0,1]; R is the recall, R ∈ [0,1]; T is the true positives, T ∈ [0,1]; F is the false positives, F ∈ [0,1]; N is the false negatives, N ∈ [0,1]. In this example, P, R, T, F, and N each take the value 1.
2) Training RN-DoubleU-Net network
The training set is fed into the RN-DoubleU-Net network for training. During training, the learning rate of the RN-DoubleU-Net network is γ ∈ [10⁻⁵, 10⁻⁴]; the γ value of this example is 10⁻⁴. The optimizer adopts the adaptive moment estimation (Adam) optimizer, iterating until the loss function converges.
The other steps are the same as in Example 1. This completes the mammary gland area image segmentation method based on the RN-DoubleU-Net network.
Example 4
The mammary gland area image segmentation method based on the RN-DoubleU-Net network in the embodiment comprises the following steps:
(1) Dataset preprocessing
This step is the same as in example 1.
(2) Construction of RN-DoubleU-Net network model
This step is the same as in example 1.
(3) Training RN-DoubleU-Net network
1) Determining an objective function
The objective function includes a loss function L_dice and an evaluation function F1. The loss function L_dice is determined according to the following formula:

L_dice = 1 − 2|X ∩ Y| / (|X| + |Y|)

wherein X represents the true values, X ∈ {x₁, x₂, ..., xₙ}, Y represents the predicted values, Y ∈ {y₁, y₂, ..., yₙ}, and n is the number of elements, a finite positive integer. The value of n in this embodiment is 512; the specific value of n should be determined by the image pixel size, typically a multiple of 512 or a number that divides 512.
The evaluation function F1 is determined by:

P = T / (T + F), R = T / (T + N), F1 = 2PR / (P + R)

wherein P is the precision, P ∈ [0,1]; R is the recall, R ∈ [0,1]; T is the true positives, T ∈ [0,1]; F is the false positives, F ∈ [0,1]; N is the false negatives, N ∈ [0,1]. In this embodiment, the value of P is 0, and R, T, F, and N are each 0.5.
2) Training RN-DoubleU-Net network
This step is the same as in example 1.
The other steps are the same as in Example 1. This completes the mammary gland area image segmentation method based on the RN-DoubleU-Net network.
In order to verify the beneficial effects of the invention, the inventors carried out comparative simulation experiments using the mammary gland area image segmentation method based on the RN-DoubleU-Net network of Example 1, the DoubleU-Net method, and the U-Net method; the experimental conditions are as follows:
the same test set is tested by each trained model, the accuracy of the model is tested by using an evaluation code, an evaluation function F1 is adopted as the quality of the evaluation method, and the larger the value of the evaluation function F1 is, the better the method is, and the experimental result of the evaluation function F1 is shown in a table 1.
TABLE 1. Evaluation function F1 values for Example 1, the DoubleU-Net method, and the U-Net method

Test method            Evaluation function F1
Example 1              0.7271
DoubleU-Net method     0.7167
U-Net method           0.6206
As can be seen from Table 1, the F1 value of the method of Example 1 is 0.7271, that of the DoubleU-Net method is 0.7167, and that of the U-Net method is 0.6206. The F1 value of the method of Example 1 is 1.45% higher than that of the DoubleU-Net method and 17.16% higher than that of the U-Net method.

Claims (4)

1. The mammary gland area image segmentation method based on the RN-DoubleU-Net network is characterized by comprising the following steps of:
(1) Dataset preprocessing
398 pictures are taken from the mammary gland area data set, each of size 2000×2000 pixels;
1) Normalizing the mammary gland area data set pixel values to [0,1], and cutting the pictures into 512×512-pixel tiles;
2) The segmented data set is divided into a training set, a validation set, and a test set in the proportion 8:1:1;
(2) Construction of RN-DoubleU-Net network model
The RN-DoubleU-Net network model is formed by connecting a first U-Net sub-network and a second U-Net sub-network, wherein the output of the first U-Net sub-network is connected with the input of the second U-Net sub-network;
the first U-Net sub-network is formed by sequentially connecting a first sub-network encoder, a first sub-network hole convolution, and a first sub-network decoder in series, and the second U-Net sub-network is formed by sequentially connecting a second sub-network encoder, a second sub-network hole convolution, and a second sub-network decoder in series;
the second sub-network encoder is formed by sequentially connecting a common convolution block with a first residual convolution block, a second residual convolution block and a third residual convolution block in series;
the common convolution block is formed by connecting 1 coding convolution kernel with the size of 7 multiplied by 7 and the step length of 2 in series with 1 pooling layer with the size of 3 multiplied by 3 and the step length of 2;
the first residual convolution block consists of 3 residual units which are sequentially connected in series, the first residual unit is formed by sequentially connecting 4 coding convolution kernels in series, the size of the second coding convolution kernel is 3 multiplied by 3, the step size of the first coding convolution kernel is 1, the size of the third coding convolution kernel is 1 multiplied by 1, and the size of the fourth coding convolution kernel is 1 multiplied by 1; the second residual unit and the third residual unit are formed by sequentially connecting 3 coding convolution kernels in series, the second coding convolution kernel of each residual unit is 1 in step size of 3 multiplied by 3, and the first coding convolution kernel and the third coding convolution kernel are 1 in step size of 1 multiplied by 1;
the second residual convolution block consists of 4 residual units which are sequentially connected in series, the first residual unit is formed by sequentially connecting 4 coding convolution kernels in series, the size of the first coding convolution kernel and the size of the fourth coding convolution kernel are 1 multiplied by 1, the size of the second coding convolution kernel is 3 multiplied by 3, the size of the third coding convolution kernel is 1, and the size of the third coding convolution kernel is 1 multiplied by 1; the second to the fourth residual units are formed by sequentially connecting 3 coding convolution kernels in series, the second coding convolution kernel of each residual unit is 1 in step size of 3 multiplied by 3, and the first and the third coding convolution kernels are 1 in step size of 1 multiplied by 1;
the third residual convolution block consists of 6 residual units which are sequentially connected in series; the first residual unit is formed by sequentially connecting 4 coding convolution kernels in series, the size of the first coding convolution kernel and the fourth coding convolution kernel is 1 multiplied by 1, the size of the second coding convolution kernel is 3 multiplied by 3, the size of the third coding convolution kernel is 1, and the size of the third coding convolution kernel is 1 multiplied by 1; the second to sixth residual units are formed by sequentially connecting 3 coding convolution kernels in series, the second coding convolution kernel of each residual unit is 1 in step size of 3 multiplied by 3, and the first and third coding convolution kernels are 1 multiplied by 1 in step size; an input of the second sub-network encoder is connected with an output of the first sub-network decoder;
(3) Training RN-DoubleU-Net network
1) Determining an objective function
The objective function includes a loss function L_dice and an evaluation function F1, the loss function L_dice being determined according to the following formula:

L_dice = 1 − 2|X ∩ Y| / (|X| + |Y|)

wherein X represents the true values, X ∈ {x₁, x₂, ..., xₙ}, Y represents the predicted values, Y ∈ {y₁, y₂, ..., yₙ}, and n is the number of elements, a finite positive integer;
an evaluation function F1 determined by:

P = T / (T + F), R = T / (T + N), F1 = 2PR / (P + R)

wherein P is the precision, P ∈ [0,1]; R is the recall, R ∈ [0,1]; T is the true positives, T ∈ [0,1]; F is the false positives, F ∈ [0,1]; N is the false negatives, N ∈ [0,1]; and P, R, T, F, and N are not all 0 at the same time;
2) Training RN-DoubleU-Net network
The training set is sent into the RN-DoubleU-Net network for training, and in the training process the learning rate of the RN-DoubleU-Net network is γ ∈ [10⁻⁵, 10⁻³]; the optimizer adopts the adaptive moment estimation (Adam) optimizer, iterating until the loss function converges;
(4) Preservation model
In the process of training the RN-DoubleU-Net network, continuously updating weights by using a deep learning framework, and storing weight files;
(5) Validating RN-DoubleU-Net network
Inputting the verification set into the RN-DoubleU-Net network for verification;
(6) Testing RN-DoubleU-Net network
Inputting the test set into the RN-DoubleU-Net network for testing, and loading the saved weight file to obtain the segmented mammary gland area image.
2. The mammary gland area image segmentation method based on the RN-DoubleU-Net network according to claim 1, wherein: in step (2) of constructing the RN-DoubleU-Net network model, the first sub-network encoder is the VGG19 network in the deep learning framework; the VGG19 network is composed of 16 coding convolution layers connected in series, each coding convolution layer having a 3×3 coding convolution kernel with stride 1;
the first sub-network hole convolution is formed by a hole convolution layer, and the hole convolution layer is formed by sequentially connecting 1 common convolution kernel and 5 hole convolution kernels in series;
the first sub-network decoder is composed of 4 decoding convolution blocks, each comprising one 2×2 up-sampling layer, decoding convolution kernels of size 3×3 with stride 1, and 1 attention mechanism block, and the input of the first sub-network decoder is connected with the output of the first sub-network hole convolution.
3. The mammary gland area image segmentation method based on the RN-DoubleU-Net network according to claim 2, wherein: the common convolution kernel is 1×1 with stride 1; the first hole convolution kernel is 1×1 with dilation 1; the second hole convolution kernel is 3×3 with dilation 6; the third hole convolution kernel is 3×3 with dilation 12; the fourth hole convolution kernel is 3×3 with dilation 18; the fifth hole convolution kernel is 1×1 with dilation 1; the input of the hole convolution layer is connected to the output of the first sub-network encoder.
4. The mammary gland area image segmentation method based on the RN-DoubleU-Net network according to claim 1, wherein: in the step of constructing the RN-DoubleU-Net network model, the second sub-network hole convolution has the same structure as the first sub-network hole convolution, and the input of the second sub-network hole convolution is connected with the output of the second sub-network encoder;
the second sub-network decoder is formed by connecting 4 decoding convolution blocks in series, each comprising one 2×2 up-sampling layer, decoding convolution kernels of size 3×3 with stride 1, and 1 attention mechanism block, and the input of the second sub-network decoder is connected with the output of the second sub-network hole convolution.
CN202210021366.1A 2022-01-10 2022-01-10 Mammary gland area image segmentation method based on RN-DoubleU-Net network Active CN114419064B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210021366.1A CN114419064B (en) 2022-01-10 2022-01-10 Mammary gland area image segmentation method based on RN-DoubleU-Net network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210021366.1A CN114419064B (en) 2022-01-10 2022-01-10 Mammary gland area image segmentation method based on RN-DoubleU-Net network

Publications (2)

Publication Number Publication Date
CN114419064A CN114419064A (en) 2022-04-29
CN114419064B true CN114419064B (en) 2024-04-05

Family

ID=81272311

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210021366.1A Active CN114419064B (en) 2022-01-10 2022-01-10 Mammary gland area image segmentation method based on RN-DoubleU-Net network

Country Status (1)

Country Link
CN (1) CN114419064B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020118618A1 (en) * 2018-12-13 2020-06-18 深圳先进技术研究院 Mammary gland mass image recognition method and device
CN113487615A (en) * 2021-06-29 2021-10-08 上海海事大学 Retina blood vessel segmentation method and terminal based on residual error network feature extraction

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020118618A1 (en) * 2018-12-13 2020-06-18 深圳先进技术研究院 Mammary gland mass image recognition method and device
CN113487615A (en) * 2021-06-29 2021-10-08 上海海事大学 Retina blood vessel segmentation method and terminal based on residual error network feature extraction

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DoubleU-Net: A Deep Convolutional Neural Network for Medical Image Segmentation; Debesh Jha et al.; arXiv; 2020-06-08; full text *
Automatic identification method for mammary gland ducts based on two-step clustering and random forest; Wang Shuai; Liu Juan; Bi Yaoyao; Chen Zhe; Zheng Qunhua; Duan Huifang; Computer Science; 2018-03-15 (No. 03); full text *

Also Published As

Publication number Publication date
CN114419064A (en) 2022-04-29

Similar Documents

Publication Publication Date Title
CN111369563B (en) Semantic segmentation method based on pyramid void convolutional network
CN110533631B (en) SAR image change detection method based on pyramid pooling twin network
CN108230278B (en) Image raindrop removing method based on generation countermeasure network
CN110555399B (en) Finger vein identification method and device, computer equipment and readable storage medium
CN111160229B (en) SSD network-based video target detection method and device
CN112767423B (en) Remote sensing image building segmentation method based on improved SegNet
CN112163520A (en) MDSSD face detection method based on improved loss function
CN114612715A (en) Edge federal image classification method based on local differential privacy
CN108960326B (en) Point cloud fast segmentation method and system based on deep learning framework
CN113516650A (en) Circuit board hole plugging defect detection method and device based on deep learning
CN114639102B (en) Cell segmentation method and device based on key point and size regression
CN113591553A (en) Turbo pump migration learning fault intelligent judgment method based on small sample weight optimization
CN112348830A (en) Multi-organ segmentation method based on improved 3D U-Net
CN115393656A (en) Automatic classification method for stratum classification of logging-while-drilling image
CN115170872A (en) Class increment learning method based on knowledge distillation
CN111739037A (en) Semantic segmentation method for indoor scene RGB-D image
CN114419064B (en) Mammary gland area image segmentation method based on RN-DoubleU-Net network
CN115035408A (en) Unmanned aerial vehicle image tree species classification method based on transfer learning and attention mechanism
CN112990336B (en) Deep three-dimensional point cloud classification network construction method based on competitive attention fusion
CN114596302A (en) PCB defect detection method, system, medium, equipment and terminal
CN114677535A (en) Training method of domain-adaptive image classification network, image classification method and device
CN114970601A (en) Power equipment partial discharge type identification method, equipment and storage medium
CN111008529B (en) Chinese relation extraction method based on neural network
CN112084551A (en) Building facade identification and generation method based on confrontation generation network
CN110852451A (en) Recursive kernel self-adaptive filtering method based on kernel function

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant