CN116205924A - Prostate segmentation algorithm based on U2-Net - Google Patents

Prostate segmentation algorithm based on U2-Net

Info

Publication number
CN116205924A
Authority
CN
China
Prior art keywords
image data
data
net
receiving
algorithm model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211468231.6A
Other languages
Chinese (zh)
Inventor
石勇涛
储志杰
雷帮军
尤一飞
李伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Three Gorges University CTGU
Original Assignee
China Three Gorges University CTGU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Three Gorges University CTGU filed Critical China Three Gorges University CTGU
Priority to CN202211468231.6A priority Critical patent/CN116205924A/en
Publication of CN116205924A publication Critical patent/CN116205924A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/11Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30081Prostate
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Operations Research (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of image processing, and in particular to a prostate segmentation algorithm based on U2-Net, comprising the following steps: receiving image data, analyzing the attributes of the image data, and judging its availability according to those attributes; receiving the available image data and normalizing it so that the processed image data follow a normal distribution with mean 0 and variance 1; and receiving the normalized image data and further performing data-set augmentation on it. The invention can effectively analyze and process the prostate medical images to be handled; the data augmentation provides more, finer and reference-worthy data support for the algorithm at run time, and the augmented data are further divided so that the various kinds of data supporting the algorithm's operation can be applied reasonably, allowing the algorithm to perform fully in practical applications.

Description

Prostate segmentation algorithm based on U2-Net
Technical Field
The invention relates to the technical field of image processing, in particular to a prostate segmentation algorithm based on U2-Net.
Background
Prostate disease is very common among male reproductive diseases. As a gonadal organ specific to men, the prostate is prone to inflammation, and with the increasing pressures of modern life its incidence is generally rising. According to statistics, the number of prostate cancer patients in China has kept rising in recent years; in some developed Western countries the mortality of prostate cancer is second only to that of lung cancer, yet because of differences in medical standards between countries, the survival rate of prostate cancer patients in China is far lower than in other developed countries. Improving men's health in China is therefore of great significance. In clinical practice, prostate disease is mainly diagnosed by transrectal ultrasound (TRUS) guided needle biopsy, and whether the target prostate can be accurately detected from TRUS images directly affects the subsequent treatment.
At present, prostate segmentation is mainly outlined manually by experienced doctors; this process is laborious and time-consuming, and the accuracy of manual delineation of the target is difficult to guarantee, so a fast, automatic segmentation technique is urgently needed in modern clinical practice.
Disclosure of Invention
Technical problem to be solved
In view of the deficiencies of the prior art, the invention provides a prostate segmentation algorithm based on U2-Net, which solves the technical problems described in the background.
Technical Solution
In order to achieve the above purpose, the invention is realized by the following technical solution:
a U2-Net based prostate segmentation algorithm comprising the steps of:
Step1: receiving image data, analyzing the attributes of the image data, and judging the availability of the image data according to those attributes;
Step2: receiving the available image data and normalizing it so that the processed image data follow a normal distribution with mean 0 and variance 1;
Step3: receiving the normalized image data, performing data-set augmentation on it, analyzing the source image of each image newly generated by the augmentation, and distinguishing the newly generated images from their source images accordingly;
Step4: acquiring the image data processed in Step3, building a U2-Net algorithm model and deploying a control system, the control system controlling the U2-Net algorithm model to run periodically and to receive and process the image data;
Step5: reading the operation data of the U2-Net algorithm model stored in the control system in Step4, analyzing the erroneous-operation process data therein, and tracing it to capture the error process data;
Step6: confirming the image data according to the tracing result, packaging the confirmed image data, and feeding it back to the image-data transmitting end corresponding to the image-data receiving end.
Still further, the image data attributes analyzed in Step1 include: image format, image depth, and image pixel dimensions;
when image data are received in Step1, no fewer than eighty-four groups of images are received at a time for the same image-data receiving target; the availability of the image data in Step1 is judged from these attributes, an image being available when it is in bmp format, 768×576 pixels in size, and 8 bits in image depth.
Furthermore, in Step2, when the received image data are normalized, either the min-max normalization equation or the z-score normalization equation may be used, with the z-score normalization equation preferred; the two equations are respectively:
min-max normalization equation:

$$y = \frac{x - x_{\min}}{x_{\max} - x_{\min}}$$

z-score normalization equation:

$$y = \frac{x - \bar{x}}{\sigma}$$

wherein: X is the data set to be normalized;
x_min is the minimum value in the set;
x_max is the maximum value in the set;
x is an original data value;
x̄ is the mean of the original data;
σ is the standard deviation of the original data;
y is the normalized value.
Still further, the Step3 augmentation of the image dataset includes: scaling, translation, horizontal flipping, vertical flipping, adding Gaussian blur noise, cropping, pixel padding, and increasing contrast, so that the number of images after augmentation is ten times the number of initial images.
Further, after the image data are distinguished in Step3, they are divided into a training set and a verification set, and after the division is completed the divided image data are fed back to Step4 respectively;
wherein the division ratio of the training set to the verification set is 80%/20%.
Further, the loss function of the U2-Net algorithm model built in Step4 is:

$$L = \sum_{m=1}^{M} w_{\mathrm{side}}^{(m)} \, \ell_{\mathrm{side}}^{(m)} + w_{\mathrm{fuse}} \, \ell_{\mathrm{fuse}}$$

wherein: M is the number of side-output feature maps that are fused;
$\ell_{\mathrm{side}}^{(m)}$ is the loss between the m-th side-output feature map and the ground truth;
$w_{\mathrm{side}}^{(m)}$ is the weight coefficient of the m-th side-output feature map;
$\ell_{\mathrm{fuse}}$ is the loss between the finally fused feature map and the ground truth;
$w_{\mathrm{fuse}}$ is the weight coefficient of that loss term.
Still further, the control system deployed in Step4 includes:
a control terminal (1), which is the main control end of the system and is used for issuing control commands;
a receiving module (2), which is used for receiving the image data, packaging the distinguished image data separately, and then sending it to the U2-Net algorithm model;
a monitoring module (3), which runs after the U2-Net algorithm model has received the packaged image data and is used for monitoring in real time whether the data output process of the U2-Net algorithm model has finished;
a storage module (4), which is used for acquiring the running state of the U2-Net algorithm model monitored by the monitoring module (3), and for receiving and storing the output data of the U2-Net algorithm model once its run has completed;
wherein the process data of any U2-Net algorithm model operation error detected by the detection unit (21) are synchronously sent to the storage module (4).
Still further, the receiving module (2) contains a sub-module, comprising:
a detection unit (21), which is used for detecting whether the image data received by the receiving module (2) meet the operating requirements of the U2-Net algorithm model;
wherein the operating requirements of the U2-Net algorithm model include that the image data have been distinguished by their attributes and by the number of groups produced by augmentation; if the result judged by the detection unit (21) is negative, the process jumps back to the receiving module (2) to run again.
Furthermore, the control terminal (1) is electrically connected with the receiving module (2) through a medium, the detecting unit (21) is electrically connected with the receiving module (2) through the medium, the receiving module (2) is electrically connected with the monitoring module (3) and the storage module (4) through the medium, and the storage module (4) is electrically connected with the detecting unit (21) through the medium.
Further, after Step6 feeds the data back to the image-data transmitting end corresponding to the image-data receiving end, the fed-back data content is deleted from the storage module (4).
Advantageous effects
Compared with the known prior art, the technical solution provided by the invention has the following beneficial effects:
1. The invention provides a prostate segmentation algorithm based on U2-Net. Through the execution of the steps of the algorithm, the prostate medical images to be processed can be effectively analyzed and processed; the data augmentation provides more, finer and reference-worthy data support for the algorithm at run time, and the augmented data are further divided, so that the various kinds of data supporting the algorithm's operation can be applied reasonably and the algorithm can perform fully in practical applications.
2. During the execution of the above steps, the algorithm also stores its operation data. The valid data among the stored data can serve as a medical reference for later reading by medical staff, while the erroneous-operation data can be provided to the algorithm's user side in real time, so that the user side can further debug the algorithm according to the source of the operating error and ensure that the operation of the algorithm gradually becomes stable and fast.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is evident that the drawings in the following description are only some embodiments of the present invention and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
FIG. 1 is a flow chart of a U2-Net based prostate segmentation algorithm;
FIG. 2 is a schematic diagram of a control system according to the present invention;
FIGS. 3-5 are schematic diagrams of U2-Net models constructed in the present invention;
reference numerals in the drawings represent respectively: 1. a control terminal; 2. a receiving module; 21. a detection unit; 3. a monitoring module; 4. and a storage module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more clear, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It will be apparent that the described embodiments are some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention is further described below with reference to examples.
Example 1
The prostate segmentation algorithm based on U2-Net of the present embodiment, as shown in FIG. 1, comprises the following steps:
Step1: receiving image data, analyzing the attributes of the image data, and judging the availability of the image data according to those attributes;
Step2: receiving the available image data and normalizing it so that the processed image data follow a normal distribution with mean 0 and variance 1;
Step3: receiving the normalized image data, performing data-set augmentation on it, analyzing the source image of each image newly generated by the augmentation, and distinguishing the newly generated images from their source images accordingly;
Step4: acquiring the image data processed in Step3, building a U2-Net algorithm model and deploying a control system, the control system controlling the U2-Net algorithm model to run periodically and to receive and process the image data;
Step5: reading the operation data of the U2-Net algorithm model stored in the control system in Step4, analyzing the erroneous-operation process data therein, and tracing it to capture the error process data;
Step6: confirming the image data according to the tracing result, packaging the confirmed image data, and feeding it back to the image-data transmitting end corresponding to the image-data receiving end.
Example 2
In terms of implementation, on the basis of Embodiment 1, this embodiment further describes the U2-Net based prostate segmentation algorithm of Embodiment 1 in detail with reference to fig. 1:
the image data attributes analyzed in Step1 include: image data format, image data image depth, image data pixels;
when Step1 receives image data, the number of image data received by the same image data receiving target is not less than eighty four groups at a time, the usability judgment of the image data in Step1 is judged according to the attribute of the image data, and the image data is in the format of bmp, 768×576 and 8 bits in image depth.
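The availability check of Step1 can be illustrated by a minimal sketch; the use of Pillow, the file layout, and the helper names below are assumptions for illustration rather than details prescribed by the patent.

```python
# Minimal sketch of the Step1 availability check (assumed implementation).
from pathlib import Path
from PIL import Image

REQUIRED_SIZE = (768, 576)   # width x height in pixels
REQUIRED_MODE = "L"          # 8-bit grayscale
MIN_GROUP_COUNT = 84         # no fewer than eighty-four groups per batch

def is_available(path: Path) -> bool:
    """Return True if one image satisfies the format/size/depth attributes."""
    if path.suffix.lower() != ".bmp":
        return False
    with Image.open(path) as img:
        return img.size == REQUIRED_SIZE and img.mode == REQUIRED_MODE

def check_batch(folder: Path) -> list[Path]:
    """Keep only usable images and enforce the minimum batch size."""
    usable = [p for p in sorted(folder.glob("*.bmp")) if is_available(p)]
    if len(usable) < MIN_GROUP_COUNT:
        raise ValueError(f"only {len(usable)} usable images, need >= {MIN_GROUP_COUNT}")
    return usable
```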
As shown in fig. 1, in Step2, when the received image data are normalized, either the min-max normalization equation or the z-score normalization equation may be used, with the z-score normalization equation preferred; the two equations are respectively:
min-max normalization equation:

$$y = \frac{x - x_{\min}}{x_{\max} - x_{\min}}$$

z-score normalization equation:

$$y = \frac{x - \bar{x}}{\sigma}$$

wherein: X is the data set to be normalized;
x_min is the minimum value in the set;
x_max is the maximum value in the set;
x is an original data value;
x̄ is the mean of the original data;
σ is the standard deviation of the original data;
y is the normalized value.
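The two normalization options of Step2 can be sketched as follows; using NumPy and operating on a whole image array are implementation assumptions.

```python
# Sketch of the Step2 normalization options (z-score preferred).
import numpy as np

def min_max_normalize(x: np.ndarray) -> np.ndarray:
    """y = (x - x_min) / (x_max - x_min), maps values into [0, 1]."""
    x = x.astype(np.float32)
    return (x - x.min()) / (x.max() - x.min())

def z_score_normalize(x: np.ndarray) -> np.ndarray:
    """y = (x - mean) / std, giving zero mean and unit variance."""
    x = x.astype(np.float32)
    return (x - x.mean()) / x.std()
```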
As shown in fig. 1, the augmentation of the image dataset in Step3 includes: scaling, translation, horizontal flipping, vertical flipping, adding Gaussian blur noise, cropping, pixel padding, and increasing contrast, so that the number of images after augmentation is ten times the number of initial images.
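A minimal sketch of such an augmentation pipeline is given below; the Pillow operations and the specific parameter values (shift of 20 pixels, blur radius of 2, contrast factor of 1.5, and so on) are illustrative assumptions.

```python
# Sketch of the Step3 augmentation pipeline (assumed implementation).
from PIL import Image, ImageEnhance, ImageFilter, ImageOps

def augment(img: Image.Image) -> list[Image.Image]:
    """Derive several new images from one source image (roughly 10x in total)."""
    w, h = img.size
    return [
        img.resize((int(w * 1.2), int(h * 1.2))).crop((0, 0, w, h)),  # scaling
        img.transform((w, h), Image.AFFINE, (1, 0, 20, 0, 1, 10)),    # translation
        ImageOps.mirror(img),                                         # horizontal flip
        ImageOps.flip(img),                                           # vertical flip
        img.filter(ImageFilter.GaussianBlur(radius=2)),               # Gaussian blur noise
        img.crop((32, 32, w - 32, h - 32)).resize((w, h)),            # cropping
        ImageOps.expand(img, border=16, fill=0).resize((w, h)),       # pixel padding
        ImageEnhance.Contrast(img).enhance(1.5),                      # contrast increase
    ]
```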
Example 3
In terms of implementation, on the basis of Embodiment 1, this embodiment further describes the U2-Net based prostate segmentation algorithm of Embodiment 1 in detail with reference to fig. 1:
after distinguishing the image data in Step3, further dividing the training set and the verification set of the image data, and after finishing the division, feeding back the divided image data to Step4 respectively:
wherein the dividing ratio of the training set and the verification set is 80%/20%.
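A minimal sketch of the 80%/20% split might look as follows; the fixed random seed and the list-of-samples interface are assumptions.

```python
# Sketch of the Step3 training/verification split (assumed interface).
import random

def split_dataset(samples: list, train_ratio: float = 0.8, seed: int = 42):
    """Shuffle once, then split into (training set, verification set)."""
    samples = samples.copy()
    random.Random(seed).shuffle(samples)
    cut = int(len(samples) * train_ratio)
    return samples[:cut], samples[cut:]
```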
As shown in fig. 1, the loss function of the U2-Net algorithm model built in Step4 is:

$$L = \sum_{m=1}^{M} w_{\mathrm{side}}^{(m)} \, \ell_{\mathrm{side}}^{(m)} + w_{\mathrm{fuse}} \, \ell_{\mathrm{fuse}}$$

wherein: M is the number of side-output feature maps that are fused;
$\ell_{\mathrm{side}}^{(m)}$ is the loss between the m-th side-output feature map and the ground truth;
$w_{\mathrm{side}}^{(m)}$ is the weight coefficient of the m-th side-output feature map;
$\ell_{\mathrm{fuse}}$ is the loss between the finally fused feature map and the ground truth;
$w_{\mathrm{fuse}}$ is the weight coefficient of that loss term.
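The deep-supervision loss above can be sketched in PyTorch as follows; the use of binary cross-entropy and equal weights of 1.0 follows the published U2-Net formulation and is an assumption rather than a detail stated in this patent.

```python
# Sketch of the Step4 deep-supervision loss (assumed BCE terms and unit weights).
import torch
import torch.nn.functional as F

def u2net_loss(side_outputs: list[torch.Tensor],
               fused_output: torch.Tensor,
               target: torch.Tensor,
               side_weights=None,
               fuse_weight: float = 1.0) -> torch.Tensor:
    """L = sum_m w_side^(m) * l_side^(m) + w_fuse * l_fuse."""
    if side_weights is None:
        side_weights = [1.0] * len(side_outputs)
    loss = fuse_weight * F.binary_cross_entropy_with_logits(fused_output, target)
    for w, side in zip(side_weights, side_outputs):
        loss = loss + w * F.binary_cross_entropy_with_logits(side, target)
    return loss
```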
Example 4
In terms of implementation, on the basis of Embodiment 1, this embodiment further describes the U2-Net based prostate segmentation algorithm of Embodiment 1 in detail with reference to fig. 2:
a control system deployed in Step4, comprising:
the control terminal (1) is a main control end of the system and is used for sending out control commands;
the receiving module (2) is used for receiving the image data, respectively packaging the differentiated image data and then sending the image data to the U2-Net algorithm model;
the monitoring module (3) is operated after the U2-Net algorithm model receives the packed image data and is used for monitoring whether the data output process of the U2-Net algorithm model is finished in real time;
the storage module (4) is used for acquiring the running state of the U2-Net algorithm model monitored by the monitoring module (3), receiving the output data of the U2-Net algorithm model after the U2-Net algorithm model runs and is received, and storing the output data;
the process synchronization of the U2-Net algorithm model operation errors detected by the detection unit (21) is sent to the storage module (4).
As shown in fig. 2, the receiving module (2) contains a sub-module, comprising:
a detection unit (21), which is used for detecting whether the image data received by the receiving module (2) meet the operating requirements of the U2-Net algorithm model;
wherein the operating requirements of the U2-Net algorithm model include that the image data have been distinguished by their attributes and by the number of groups produced by augmentation; if the result judged by the detection unit (21) is negative, the process jumps back to the receiving module (2) to run again.
As shown in fig. 2, the control terminal (1) is electrically connected with the receiving module (2) through a medium, the interior of the receiving module (2) is electrically connected with the detection unit (21) through the medium, the receiving module (2) is electrically connected with the monitoring module (3) and the storage module (4) through the medium, and the storage module (4) is electrically connected with the detection unit (21) through the medium.
As shown in fig. 1, after Step6 feeds the data back to the image-data transmitting end corresponding to the image-data receiving end, the fed-back data content is deleted from the storage module (4).
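The cooperation of the modules of the control system in fig. 2 can be illustrated by a highly simplified control-flow sketch; the detector/model/storage/sender objects and their methods are hypothetical stand-ins, not an interface prescribed by the patent.

```python
# Highly simplified control-flow sketch for one periodic run of the system in fig. 2.
def run_cycle(images, detector, model, storage, sender):
    if not detector.meets_requirements(images):    # detection unit (21): attributes / group count
        return "rejected"                          # jump back to the receiving module (2)
    try:
        outputs = model.predict(images)            # U2-Net algorithm model processes the batch
        storage.save(outputs)                      # storage module (4) keeps the results
    except Exception as err:                       # an operation error is captured
        storage.save_error(images, err)            # error process data stored for tracing
        sender.feed_back(images)                   # Step6: feed back to the transmitting end
        storage.delete(images)                     # fed-back content removed from storage
    return "done"
```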
As shown in fig. 3 to 5, the overall structure of U2-Net will be described:
the U2-Net structure is integrally composed of an Encoder and a Decode, and is similar to the U-shaped structure of a U-Net, as shown in the following figure I, wherein the Encoder and the Decode are respectively composed of sub-modules of En_1, en_2, en_3, en_4, en_5, en_6 and De_1, de_2, de_3, de_4 and De_5; en_1, en_2, en_3, en_4, de_1, de_2, de_3, de_4 are composed of ReSideal U-blocks (RSU for short), en_1, de_1 are composed of RSU-7, en_2, de_2 are composed of RSU-6, en_3, de_3 are composed of RSU-5, en_4, de_4 are composed of RSU-4, and depths are 6, 5, 4, respectively; RSU-7 is shown in the following figure, wherein 7 is the depth of the RSU, wherein the lowest layer is the expansion convolution, and the expansion coefficient is 2; en_5, en_6 and De_5 are composed of RSU-4F, since the whole U-shaped network has been downsampled four times by the time of En_5 layer, the resolution of the image is very low, so the downsampling layer is not set in RSU-4F, RSU-4F is shown in the following figure three, wherein the band d is expansion convolution, and the number behind d is expansion coefficient; then, the outputs of De_1, de_2, de_3, de_4, de_5 and En_6 are respectively convolved through a convolution layer with the convolution kernel size of 3*3 to obtain a feature map with the channel number of 1, then the 6 feature maps are scaled to the same size as the input picture size through a bilinear interpolation method to obtain Sup1, sup2, sup3, sup4, sup5 and Sup6 in a first picture, then the 6 scaled pictures are subjected to concat splicing, and finally a final prediction picture Sfuse is obtained through a convolution layer with the convolution kernel size of 1*1 and a sigmoid function;
the original U2-Net is subjected to the operation of pooling kernel size 2 downsampling after passing through each En module of the Encoder, so that on one hand, the translational invariance of the convolutional neural network characteristic is increased after pooling, and the pooled high-level characteristic has a larger receptive field; on the other hand, the feature map after pooling is also made small, and more detail information is lost. The purpose of compressing the feature map can be achieved for the convolution layer with the step length of 2, and as the parameters of the convolution layer are obtained through learning, more detail information of the image can be reserved, but the calculated amount is increased; in view of this, the MC layer replaces the conventional downsampling layer, and the MC layer is shown in the following figure four, and the input picture is calculated in parallel by the pooling layer and the convolution layer with the step distance of 2, and then the channel number is halved by the CBS (Conv, BN, slu) layer respectively, and then the results of the two are spliced and output by the concat, so that the high-level characteristics of the output image have larger receptive field and retain more detailed information;
the original prostate ultrasound image contains a large number of noise blocks, greatly influences the segmentation accuracy, and an attention mechanism module (CBAM) is led out to solve the problem, and the attention mechanism is added in the network to increase the representation capability of the network, namely important features are focused, and unnecessary features are restrained. The attention module is shown in the following fifth diagram, and is composed of two modules of a channel attention mechanism (CAM, channel Attention Module) and a spatial attention mechanism (SAM, spatial Attention Module) in series, wherein the channel attention mechanism CAM and the spatial attention mechanism SAM are shown in the following sixth diagram; in the channel attention module, firstly, carrying out maximum pooling downsampling (MaxPool) and average pooling downsampling (AvgPool) on an input feature map (H.W.C) in parallel to respectively obtain 1.1xC feature maps, processing the two feature maps through a shared full-connection layer, carrying out element-wise addition operation on the processed result, carrying out nonlinear activation on a sigmoid activation function on the result, and generating a channel attention feature map for output; then multiplying the output attention characteristic diagram with the original diagram (namely the symbol in the diagram), and taking the result as an input diagram of the spatial attention module; in the spatial attention module, firstly, carrying out maximum pooling downsampling and average pooling downsampling operations on channels by an input picture (with the size of H.times.W.times.C) to obtain two feature images with the size of H.times.W.times.1, then, carrying out splicing operations on the two feature images in the channel direction, carrying out convolution operations by a convolution kernel with the size of 7*7 and the number of channels of 1, and carrying out nonlinear activation by a sigmoid activation function to obtain a spatial attention feature image, wherein the feature image is the feature image output by the spatial attention module; multiplying the feature map with the input map of the spatial attention module to obtain a final output feature map of the CBAM module;
in the traditional U2-Net model, an En module is usually directly connected with a corresponding De module in a jumping way, only local characteristic information with fixed size can be obtained, but context information on a high scale is ignored, and the context semantic information is enriched by fusing a multi-scale characteristic diagram through an MFE module. The MFE module is used for respectively inputting the input pictures into expansion convolution modules of DConv1, DConv3 and DConv5 for convolution, wherein the expansion coefficients of DConv1, DConv3 and DConv5 are respectively 1, 3 and 5, respectively performing BN (BatchNormalization) operation on the three expansion convolved feature graphs, then performing concat splicing, and performing BN operation after splicing to obtain a final feature output graph.
In summary, the above embodiments can effectively analyze and process the prostate medical images to be handled; the data augmentation provides more, finer and reference-worthy data support for the algorithm at run time, and the augmented data are further divided so that the various kinds of data supporting the algorithm's operation can be applied reasonably and the algorithm can perform fully in practical applications. In addition, during the execution of the above steps the algorithm also stores its operation data: the valid data among the stored data can serve as a medical reference for later reading by medical staff, while the erroneous-operation data can be provided to the algorithm's user side in real time, so that the user side can further debug the algorithm according to the source of the operating error and ensure that the operation of the algorithm gradually becomes stable and fast.
The above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A U2-Net based prostate segmentation algorithm comprising the steps of:
Step1: receiving image data, analyzing the attributes of the image data, and judging the availability of the image data according to those attributes;
Step2: receiving the available image data and normalizing it so that the processed image data follow a normal distribution with mean 0 and variance 1;
Step3: receiving the normalized image data, performing data-set augmentation on it, analyzing the source image of each image newly generated by the augmentation, and distinguishing the newly generated images from their source images accordingly;
Step4: acquiring the image data processed in Step3, building a U2-Net algorithm model and deploying a control system, the control system controlling the U2-Net algorithm model to run periodically and to receive and process the image data;
Step5: reading the operation data of the U2-Net algorithm model stored in the control system in Step4, analyzing the erroneous-operation process data therein, and tracing it to capture the error process data;
Step6: confirming the image data according to the tracing result, packaging the confirmed image data, and feeding it back to the image-data transmitting end corresponding to the image-data receiving end.
2. The U2-Net based prostate segmentation algorithm according to claim 1, wherein the image data attributes analyzed in Step1 include: image format, image depth, and image pixel dimensions;
when image data are received in Step1, no fewer than eighty-four groups of images are received at a time for the same image-data receiving target; the availability of the image data in Step1 is judged from these attributes, an image being available when it is in bmp format, 768×576 pixels in size, and 8 bits in image depth.
3. The U2-Net based prostate segmentation algorithm according to claim 1, wherein in Step2 the received image data are normalized using either the min-max normalization equation or the z-score normalization equation, with the z-score normalization equation preferred, the equations being respectively:
min-max normalization equation:

$$y = \frac{x - x_{\min}}{x_{\max} - x_{\min}}$$

z-score normalization equation:

$$y = \frac{x - \bar{x}}{\sigma}$$

wherein: X is the data set to be normalized;
x_min is the minimum value in the set;
x_max is the maximum value in the set;
x is an original data value;
x̄ is the mean of the original data;
σ is the standard deviation of the original data;
y is the normalized value.
4. The U2-Net based prostate segmentation algorithm according to claim 1, wherein the Step3 augmentation of the image dataset includes: scaling, translation, horizontal flipping, vertical flipping, adding Gaussian blur noise, cropping, pixel padding, and increasing contrast, so that the number of images after augmentation is ten times the number of initial images.
5. The U2-Net based prostate segmentation algorithm according to claim 1, wherein, after the image data are distinguished in Step3, they are further divided into a training set and a verification set, and after the division is completed the divided image data are fed back to Step4 respectively;
wherein the division ratio of the training set to the verification set is 80%/20%.
6. The U2-Net based prostate segmentation algorithm according to claim 1, wherein the loss function of the U2-Net algorithm model built in Step4 is:

$$L = \sum_{m=1}^{M} w_{\mathrm{side}}^{(m)} \, \ell_{\mathrm{side}}^{(m)} + w_{\mathrm{fuse}} \, \ell_{\mathrm{fuse}}$$

wherein: M is the number of side-output feature maps that are fused;
$\ell_{\mathrm{side}}^{(m)}$ is the loss between the m-th side-output feature map and the ground truth;
$w_{\mathrm{side}}^{(m)}$ is the weight coefficient of the m-th side-output feature map;
$\ell_{\mathrm{fuse}}$ is the loss between the finally fused feature map and the ground truth;
$w_{\mathrm{fuse}}$ is the weight coefficient of that loss term.
7. The U2-Net based prostate segmentation algorithm according to claim 1, wherein the control system deployed in Step4 comprises:
a control terminal (1), which is the main control end of the system and is used for issuing control commands;
a receiving module (2), which is used for receiving the image data, packaging the distinguished image data separately, and then sending it to the U2-Net algorithm model;
a monitoring module (3), which runs after the U2-Net algorithm model has received the packaged image data and is used for monitoring in real time whether the data output process of the U2-Net algorithm model has finished;
a storage module (4), which is used for acquiring the running state of the U2-Net algorithm model monitored by the monitoring module (3), and for receiving and storing the output data of the U2-Net algorithm model once its run has completed;
wherein the process data of any U2-Net algorithm model operation error detected by the detection unit (21) are synchronously sent to the storage module (4).
8. The U2-Net based prostate segmentation algorithm according to claim 7, wherein the receiving module (2) contains a sub-module, comprising:
a detection unit (21), which is used for detecting whether the image data received by the receiving module (2) meet the operating requirements of the U2-Net algorithm model;
wherein the operating requirements of the U2-Net algorithm model include that the image data have been distinguished by their attributes and by the number of groups produced by augmentation; if the result judged by the detection unit (21) is negative, the process jumps back to the receiving module (2) to run again.
9. The prostate segmentation algorithm based on U2-Net according to claim 7, wherein the control terminal (1) is electrically connected with the receiving module (2) through a medium, the receiving module (2) is electrically connected with the detecting unit (21) through the medium, the receiving module (2) is electrically connected with the monitoring module (3) and the storage module (4) through the medium, and the storage module (4) is electrically connected with the detecting unit (21) through the medium.
10. The U2-Net based prostate segmentation algorithm according to claim 1 or 7, wherein, after Step6 feeds the data back to the image-data transmitting end corresponding to the image-data receiving end, the fed-back data content is deleted from the storage module (4).
CN202211468231.6A 2022-11-22 2022-11-22 Prostate segmentation algorithm based on U2-Net Pending CN116205924A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211468231.6A CN116205924A (en) 2022-11-22 2022-11-22 Prostate segmentation algorithm based on U2-Net

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211468231.6A CN116205924A (en) 2022-11-22 2022-11-22 Prostate segmentation algorithm based on U2-Net

Publications (1)

Publication Number Publication Date
CN116205924A true CN116205924A (en) 2023-06-02

Family

ID=86515257

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211468231.6A Pending CN116205924A (en) 2022-11-22 2022-11-22 Prostate segmentation algorithm based on U2-Net

Country Status (1)

Country Link
CN (1) CN116205924A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117011713A (en) * 2023-08-08 2023-11-07 中国水利水电科学研究院 Method for extracting field information based on convolutional neural network
CN117011713B (en) * 2023-08-08 2024-05-07 中国水利水电科学研究院 Method for extracting field information based on convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination