CN113240014B - Application method of class II segmentation loss function in achieving class II segmentation of intervertebral disc tissue image - Google Patents

Application method of class II segmentation loss function in achieving class II segmentation of intervertebral disc tissue image

Info

Publication number
CN113240014B
CN113240014B (application CN202110538676.6A)
Authority
CN
China
Prior art keywords
segmentation
pixel point
loss function
class
equal
Prior art date
Legal status
Active
Application number
CN202110538676.6A
Other languages
Chinese (zh)
Other versions
CN113240014A (en)
Inventor
李奇
何思源
宋雨
武岩
李修军
高宁
杨菁菁
Current Assignee
Changchun University of Science and Technology
Original Assignee
Changchun University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Changchun University of Science and Technology filed Critical Changchun University of Science and Technology
Priority to CN202110538676.6A
Publication of CN113240014A
Application granted
Publication of CN113240014B
Legal status: Active

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30008 Bone
    • G06T2207/30012 Spine; Backbone

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

A two-class segmentation loss function, its construction method, and its application relate to the fields of deep learning and medical image processing, and address the problems of background imbalance and poor segmentation precision in the prior art. The invention obtains the two-class segmentation loss function by setting a base loss function, setting a restriction term suited to that base loss function, setting a balance quantity, and combining the base loss function and the restriction term through the balance quantity. The method markedly suppresses the false-positive effects caused by segmentation-target imbalance in intervertebral disc tissue image segmentation, improves the segmentation performance of the fully convolutional neural network with the two-class segmentation loss function, improves the segmentation result, and has high application and popularization value in the technical fields of deep learning and medical image processing.

Description

Application method of class II segmentation loss function in achieving class II segmentation of intervertebral disc tissue image
Technical Field
The invention relates to the technical fields of deep learning and medical image processing, and in particular to a two-class segmentation loss function, its construction method, and its application.
Background
With the continuing development of artificial intelligence technology, computer-aided diagnosis research oriented to medical images is receiving increasing attention. For diseases such as spinal disorders in particular, accurate tissue-region segmentation is an important precondition for intelligent diagnosis. Current intelligent segmentation techniques are mainly based on deep learning algorithms: given a medical image and its corresponding label file, the algorithm extracts high-dimensional features of the target region, performs pixel-by-pixel classification to obtain a probability value, and thereby accomplishes the semantic segmentation task. The loss function is an important component of a deep learning algorithm. In general, the loss function computes the error between the label and the predicted output, enabling back-propagation to adjust the algorithm and improve its learning performance. A common loss function is the cross-entropy function. As research has progressed in recent years, other functions such as Dice Loss, Focal Loss, and Tversky Loss have been proposed and applied.
Owing to the characteristics of spinal medical images, segmentation research on spinal tissue still faces unsolved problems, such as background imbalance in two-class segmentation. The background-imbalance problem arises frequently in two-class segmentation of intervertebral disc tissue images: in spinal medical images, the number of pixels belonging to the intervertebral disc is small (about 3%) compared with the number of background pixels not belonging to the disc. Background imbalance leads to low training efficiency of the deep learning algorithm and to degradation of the cross-entropy function. Some loss functions, such as Dice Loss and Focal Loss, effectively alleviate the influence of this imbalance on the segmentation result by adjusting the classification weight of small-pixel-count targets. However, using these loss functions still leads to an unavoidable false-positive problem: confusable pixels in the background may be misclassified as target pixels.
Therefore, an improved scheme for existing loss functions is urgently needed, to improve the performance of deep learning algorithms in two-class segmentation of intervertebral disc tissue images and to realize its application.
Disclosure of Invention
The invention aims to provide a two-class segmentation loss function, its construction method, and its application, so as to solve the problems of background imbalance and poor segmentation precision in existing two-class segmentation of intervertebral disc tissue images.
The technical scheme adopted by the invention for solving the technical problem is as follows:
The invention relates to a two-class segmentation loss function, whose expression is:
L_n = ρl + (1-ρ)r
where l is the base loss function, taken as any existing two-class semantic segmentation loss function; r is the restriction term; and ρ is the contribution used to balance the base loss function l and the restriction term r, with 0 ≤ ρ ≤ 1.
Further, the expression of the base loss function l is:
l = f(p_ti, g_ti)
When t = 0, g_0i is the true value of whether pixel i in the label of the training image is a background pixel, and p_0i is the model's predicted probability that pixel i in the prediction result is a background pixel: if pixel i in the label is a background pixel, g_0i = 1, otherwise g_0i = 0; if pixel i in the prediction result is a background pixel, p_0i = 1, otherwise p_0i = 0.
When t = 1, g_1i is the true value of whether pixel i in the label of the training image is a segmentation-target pixel, and p_1i is the model's predicted probability that pixel i in the prediction result is a segmentation-target pixel: if pixel i in the label is a segmentation-target pixel, g_1i = 1, otherwise g_1i = 0; if pixel i in the prediction result is a segmentation-target pixel, p_1i = 1, otherwise p_1i = 0.
Further, the expression of the restriction term r is:
[Equation shown only as an image in the source (Figure BDA0003070750310000021); its exact form is not recoverable from the text.]
where N is the total number of pixels in a single training image; γ is a nonlinear amplification factor with 1 ≤ γ ≤ 5; and ε = 1.
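As an illustration of how these pieces fit together, the following is a minimal PyTorch sketch of L_n = ρl + (1-ρ)r. It is a sketch under stated assumptions, not the patent's implementation: `base_loss_fn` may be any existing two-class segmentation loss, and `restriction_fn` stands in for the restriction term r, whose exact formula is given only as an image in the source.

```python
import torch

def make_combined_loss(base_loss_fn, restriction_fn, rho=0.8):
    """Build L_n = rho * l + (1 - rho) * r as a callable.

    base_loss_fn, restriction_fn: each maps (pred, target) -> scalar tensor.
    rho: balance quantity in [0, 1]; the patent's preferred value is 0.8.
    """
    assert 0.0 <= rho <= 1.0
    def loss(pred, target):
        l = base_loss_fn(pred, target)    # base loss l
        r = restriction_fn(pred, target)  # restriction term r (pluggable here)
        return rho * l + (1.0 - rho) * r
    return loss
```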
The invention also relates to a method for constructing the two-class segmentation loss function, which mainly comprises the following steps:
Step 1: set a base loss function;
Step 2: according to the parameter characteristics of the base loss function, set a restriction term suited to it;
Step 3: set a balance quantity, and combine the base loss function and the restriction term through the balance quantity to obtain the two-class segmentation loss function.
Further, the specific process of Step 1 is as follows:
Set a base loss function l with the expression:
l = f(p_ti, g_ti)
where g_ti is the true value for pixel i in the label of the training image, p_ti is the model's predicted probability for pixel i in the prediction result, and t = 0 or 1.
When t = 0, g_0i is the true value of whether pixel i in the label of the training image is a background pixel, and p_0i is the model's predicted probability that pixel i in the prediction result is a background pixel: if pixel i in the label is a background pixel, g_0i = 1, otherwise g_0i = 0; if pixel i in the prediction result is a background pixel, p_0i = 1, otherwise p_0i = 0.
When t = 1, g_1i is the true value of whether pixel i in the label of the training image is a segmentation-target pixel, and p_1i is the model's predicted probability that pixel i in the prediction result is a segmentation-target pixel: if pixel i in the label is a segmentation-target pixel, g_1i = 1, otherwise g_1i = 0; if pixel i in the prediction result is a segmentation-target pixel, p_1i = 1, otherwise p_1i = 0.
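As a toy illustration of this notation (hypothetical values, not from the patent), the snippet below forms g_1i, g_0i, p_1i, and p_0i from a binary label mask and a Sigmoid network output:

```python
import torch

# Toy 2x2 example: 1 marks an intervertebral-disc (target) pixel in the label.
label = torch.tensor([[0., 1.],
                      [1., 0.]])
prob = torch.tensor([[0.10, 0.90],   # network's Sigmoid output per pixel
                     [0.70, 0.20]])

g1, g0 = label, 1.0 - label   # g_1i / g_0i: target / background truth values
p1, p0 = prob, 1.0 - prob     # p_1i / p_0i: predicted target / background probability
```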
Further, the specific process of Step 2 is as follows:
According to the parameter characteristics of the base loss function l, set a restriction term r suited to it, with the expression:
[Equation shown only as an image in the source (Figure BDA0003070750310000041); its exact form is not recoverable from the text.]
where N is the total number of pixels in a single training image; γ is a nonlinear amplification factor with 1 ≤ γ ≤ 5; and ε = 1.
Further, the specific process of Step 3 is as follows:
Combine the base loss function l and the restriction term r to obtain the two-class segmentation loss function L_n, whose expression is:
L_n = ρl + (1-ρ)r
where the base loss function l = f(p_ti, g_ti) may be any existing two-class semantic segmentation loss function, and ρ is the balance quantity, i.e. the contribution used to balance the base loss function l and the restriction term r, with 0 ≤ ρ ≤ 1.
Further, the base loss function l is Tversky Loss. The source gives its expression only as an image; in the standard Tversky formulation (with the false-negative weight taken as 1-α), it reads:
l = 1 - (Σ_i p_1i g_1i + ε) / (Σ_i p_1i g_1i + α Σ_i p_1i g_0i + (1-α) Σ_i p_0i g_1i + ε)
where α = 0.3 and the sums run over all N pixels of the image.
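A PyTorch sketch of this base loss follows, assuming the standard Tversky formulation above (false positives weighted by α, false negatives by 1-α, smoothing term ε); since the source shows the equation only as an image, details such as the placement of ε are assumptions.

```python
import torch

def tversky_loss(prob, label, alpha=0.3, eps=1.0):
    """Tversky loss for two-class segmentation (standard formulation).

    prob:  predicted foreground probabilities p_1i (any shape)
    label: binary foreground ground truth g_1i (same shape)
    """
    p1, g1 = prob.reshape(-1), label.reshape(-1)
    p0, g0 = 1.0 - p1, 1.0 - g1
    tp = (p1 * g1).sum()   # true-positive mass
    fp = (p1 * g0).sum()   # false-positive mass
    fn = (p0 * g1).sum()   # false-negative mass
    index = (tp + eps) / (tp + alpha * fp + (1.0 - alpha) * fn + eps)
    return 1.0 - index
```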
The invention further discloses the application of the two-class segmentation loss function in realizing two-class segmentation of intervertebral disc tissue images: the two-class segmentation loss function is embedded into a fully convolutional neural network, U-net is selected as the fully convolutional network, and its activation layer is set to Sigmoid.
The invention has the beneficial effects that:
the invention discloses a two-class segmentation loss function capable of improving the segmentation performance of an intervertebral disc tissue image, and a construction method and application thereof.
Compared with the prior art, the invention has the following advantages:
(1) the invention improves the correction rate of the limiting term to the error feedback by controlling the contribution degree of the basic loss function number through the balance quantity.
(2) The invention designs the limiting item for optimizing the loss function and obviously reduces the influence of the problems of unbalanced background and false positive on the two-class segmentation of the intervertebral disc tissue image.
(3) The invention can inhibit false positive pixels in the intervertebral disc tissue image segmentation result and simultaneously can not influence abnormal intervertebral disc pixels caused by pathological changes.
(4) The method obviously inhibits false positive influence caused by unbalance of the segmentation target during the intervertebral disc tissue image segmentation, improves the segmentation performance of the full convolution neural network and the class II segmentation loss function, improves the segmentation effect, and has extremely high application and popularization values in the technical fields of deep learning and medical image processing.
Drawings
FIG. 1 is a flow chart of the method for constructing the two-class segmentation loss function according to the present invention.
FIG. 2 compares the segmentation results of the two-class segmentation loss function of Embodiment 1 of the present invention with those of other common functions.
Detailed Description
The illustrated embodiments of the invention are described below with reference to the accompanying drawings. The drawings illustrate only one embodiment of the invention and are therefore not to be considered limiting of its scope. A person skilled in the art can derive other relevant figures from these drawings without inventive effort.
The invention relates to a two-class segmentation loss function, whose expression is:
L_n = ρl + (1-ρ)r
where l is the base loss function, taken as any existing two-class semantic segmentation loss function, with the expression:
l = f(p_ti, g_ti)
where g_ti is the true value for pixel i in the label of the training image, p_ti is the model's predicted probability for pixel i in the prediction result, and t = 0 or 1.
r is the restriction term, expressed as:
[Equation shown only as an image in the source (Figure BDA0003070750310000061); its exact form is not recoverable from the text.]
where N is the total number of pixels in a single training image; γ is a nonlinear amplification factor with range [1, 5], i.e. 1 ≤ γ ≤ 5; and ε = 1, preventing division by zero.
ρ is the contribution used to balance the base loss function l and the restriction term r, with range [0, 1], i.e. 0 ≤ ρ ≤ 1.
The method of the invention for constructing the two-class segmentation loss function, shown in FIG. 1, specifically comprises the following steps:
Step 1: set a base loss function l suited to two-class segmentation, with the expression:
l = f(p_ti, g_ti)
where g_ti is the true value for pixel i in the label of the training image, p_ti is the model's predicted probability for pixel i in the prediction result, and t = 0 or 1. Specifically:
When t = 0, g_0i is the true value of whether pixel i in the label of the training image is a background pixel, and p_0i is the model's predicted probability that pixel i in the prediction result is a background pixel: if pixel i in the label is a background pixel, g_0i = 1, otherwise g_0i = 0; if pixel i in the prediction result is a background pixel, p_0i = 1, otherwise p_0i = 0.
When t = 1, g_1i is the true value of whether pixel i in the label of the training image is a segmentation-target pixel, and p_1i is the model's predicted probability that pixel i in the prediction result is a segmentation-target pixel: if pixel i in the label is a segmentation-target pixel, g_1i = 1, otherwise g_1i = 0; if pixel i in the prediction result is a segmentation-target pixel, p_1i = 1, otherwise p_1i = 0.
Step 2: according to the parameter characteristics of the base loss function l, set a restriction term r suited to it, with the expression:
[Equation shown only as an image in the source (Figure BDA0003070750310000062); its exact form is not recoverable from the text.]
where N is the total number of pixels in a single training image; γ is a nonlinear amplification factor with range [1, 5], i.e. 1 ≤ γ ≤ 5; and ε = 1, preventing division by zero.
Step 3: set a balance quantity, and combine the base loss function l and the restriction term r through it to obtain the two-class segmentation loss function L_n, whose expression is:
L_n = ρl + (1-ρ)r
where the base loss function l = f(p_ti, g_ti) may be any existing two-class semantic segmentation loss function, and ρ is the balance quantity, i.e. the contribution used to balance the base loss function l and the restriction term r, with range [0, 1], i.e. 0 ≤ ρ ≤ 1.
In use, the two-class segmentation loss function is first embedded into a fully convolutional neural network to realize two-class segmentation of the intervertebral disc tissue image. U-net is preferred as the fully convolutional network and its activation layer is preferably set to Sigmoid; the preferred value of the nonlinear amplification factor γ in the two-class segmentation loss function L_n is 1.5, and the preferred value of the contribution ρ used to balance the base loss function l and the restriction term r is 0.8.
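The sketch below shows one way this wiring could look with the preferred settings γ = 1.5 and ρ = 0.8. It reuses `tversky_loss` from the earlier sketch; `restriction_term` is a clearly hypothetical stand-in for the patent's restriction term (whose formula is not recoverable from the text), written here as a simple false-positive penalty for illustration only.

```python
import torch

def restriction_term(prob, label, gamma=1.5, eps=1.0):
    # HYPOTHETICAL stand-in: penalizes false-positive mass, raised to the
    # nonlinear amplification factor gamma, with eps guarding the division.
    # The patent's actual restriction-term formula is shown only as an image.
    fp = (prob * (1.0 - label)).sum()
    return (fp / (label.numel() + eps)) ** gamma

def loss_fn(logits, label, rho=0.8, gamma=1.5):
    prob = torch.sigmoid(logits)               # Sigmoid activation layer
    l = tversky_loss(prob, label, alpha=0.3)   # base loss (sketch above)
    r = restriction_term(prob, label, gamma=gamma)
    return rho * l + (1.0 - rho) * r           # L_n = rho*l + (1-rho)*r
```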
Embodiment 1
The method of this embodiment for constructing the two-class segmentation loss function specifically comprises the following steps:
(1) Set the base loss function l suited to two-class segmentation to Tversky Loss, whose expression (shown only as an image in the source; the standard Tversky formulation is assumed here) is:
l = 1 - (Σ_i p_1i g_1i + ε) / (Σ_i p_1i g_1i + α Σ_i p_1i g_0i + (1-α) Σ_i p_0i g_1i + ε)
where the preferred value of α is 0.3; g_0i is the true value of whether pixel i in the label of the training image is a background pixel, and p_0i is the model's predicted probability that pixel i in the prediction result is a background pixel: if pixel i in the label is a background pixel, g_0i = 1, otherwise g_0i = 0; if pixel i in the prediction result is a background pixel, p_0i = 1, otherwise p_0i = 0;
g_1i is the true value of whether pixel i in the label of the training image is a segmentation-target pixel, and p_1i is the model's predicted probability that pixel i in the prediction result is a segmentation-target pixel: if pixel i in the label is a segmentation-target pixel, g_1i = 1, otherwise g_1i = 0; if pixel i in the prediction result is a segmentation-target pixel, p_1i = 1, otherwise p_1i = 0.
(2) According to the parameter characteristics of the base loss function l, set a restriction term r suited to it, with the expression:
[Equation shown only as an image in the source (Figure BDA0003070750310000081); its exact form is not recoverable from the text.]
where N is the total number of pixels in a single training image; γ is a nonlinear amplification factor with range [1, 5], i.e. 1 ≤ γ ≤ 5; and ε = 1, preventing division by zero.
(3) Combine the base loss function l and the restriction term r to obtain the two-class segmentation loss function L_n of this embodiment, whose expression is:
L_n = ρl + (1-ρ)r
In this embodiment, the nonlinear amplification factor γ is set to 1.5, and the contribution ρ used to balance the base loss function l and the restriction term r is set to 0.8.
(4) Comparative experiment
First, a standard framework of the common fully convolutional neural network U-net is constructed and its output activation function is set to Sigmoid; the cross-entropy loss function, Tversky Loss, and the two-class segmentation loss function L_n of this embodiment are each embedded in turn. All model parameters and hyper-parameter settings are identical across the three loss functions. The training data set consists of 759 human dorsal sagittal magnetic resonance (MR) images, each with corresponding labels for the 7 intervertebral discs of the lower spine (T11/T12 through L5/S1). 50% of the images were taken from patients with disc herniation and the remainder from normal subjects (patients without disc herniation). Training data account for 90% of the full data set and test data for the remaining 10%. The model's learning rate is 10^-6, and training runs for 10 epochs, each comprising 2000 iterations.
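Under this configuration, the training loop would look roughly as follows; the optimizer, data pipeline, and `UNet` constructor are not specified in the source, so Adam, a generic `train_loader`, and a stand-in `UNet` are assumptions, and `loss_fn` is the sketch given earlier.

```python
import torch

model = UNet(in_channels=1, out_channels=1)  # hypothetical U-net constructor
optimizer = torch.optim.Adam(model.parameters(), lr=1e-6)  # lr = 10^-6 per the text

for epoch in range(10):                      # 10 epochs ...
    for step, (images, labels) in zip(range(2000), train_loader):  # ... of 2000 iterations
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)  # L_n from the sketch above
        loss.backward()
        optimizer.step()
```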
FIG. 2 compares the segmentation results of the two-class segmentation loss function constructed in this embodiment with those of other common functions. FIG. 2(a) is a human dorsal sagittal magnetic resonance (MR) image; FIG. 2(b) is the manually labeled intervertebral disc region; FIG. 2(c) is the result of the common cross-entropy loss function, which fails under the background-imbalance problem; FIG. 2(d) is the result of Tversky Loss, which alleviates the background imbalance but exhibits false-positive pixels; FIG. 2(e) is the result of the two-class segmentation loss function L_n of this embodiment, which effectively reduces the influence of the background-imbalance problem and markedly suppresses false-positive pixels. The final segmentation accuracy reaches 99.21% and the AUC reaches 89.88%.
The method markedly suppresses the false-positive effects caused by segmentation-target imbalance during intervertebral disc tissue image segmentation, improves the segmentation performance of the fully convolutional neural network with the two-class segmentation loss function, improves the segmentation result, and has high application and popularization value in the technical fields of deep learning and medical image processing.
The foregoing is only a preferred embodiment of the present invention. It should be noted that a person skilled in the art may make several modifications and improvements without departing from the principle of the invention, and these shall also be regarded as falling within the protection scope of the invention.

Claims (1)

1. An application method of a two-class segmentation loss function in realizing two-class segmentation of an intervertebral disc tissue image, characterized in that the two-class segmentation loss function is embedded into a fully convolutional neural network to realize the two-class segmentation of the intervertebral disc tissue image, U-net being selected as the fully convolutional neural network and its activation layer being set to Sigmoid;
the expression of the two-class segmentation loss function is:
L_n = ρl + (1-ρ)r
where l is the base loss function; r is the restriction term; ρ is the contribution used to balance the base loss function l and the restriction term r, with 0 ≤ ρ ≤ 1;
the base loss function l is Tversky Loss, whose expression is:
[Equation shown only as an image in the source (Figure FDA0003609744320000011); cf. the standard Tversky formulation in the description.]
the expression of the restriction term r is:
[Equation shown only as an image in the source (Figure FDA0003609744320000012); its exact form is not recoverable from the text.]
where α = 0.3; N is the total number of pixels in a single training image; γ is a nonlinear amplification factor with 1 ≤ γ ≤ 5; ε = 1; g_0i is the true value of whether pixel i in the label of the training image is a background pixel, and p_0i is the model's predicted probability that pixel i in the prediction result is a background pixel: if pixel i in the label is a background pixel, g_0i = 1, otherwise g_0i = 0; if pixel i in the prediction result is a background pixel, p_0i = 1, otherwise p_0i = 0; g_1i is the true value of whether pixel i in the label of the training image is a segmentation-target pixel, and p_1i is the model's predicted probability that pixel i in the prediction result is a segmentation-target pixel: if pixel i in the label is a segmentation-target pixel, g_1i = 1, otherwise g_1i = 0; if pixel i in the prediction result is a segmentation-target pixel, p_1i = 1, otherwise p_1i = 0.
CN202110538676.6A 2021-05-18 2021-05-18 Application method of class II segmentation loss function in achieving class II segmentation of intervertebral disc tissue image Active CN113240014B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110538676.6A CN113240014B (en) 2021-05-18 2021-05-18 Application method of class II segmentation loss function in achieving class II segmentation of intervertebral disc tissue image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110538676.6A CN113240014B (en) 2021-05-18 2021-05-18 Application method of class II segmentation loss function in achieving class II segmentation of intervertebral disc tissue image

Publications (2)

Publication Number Publication Date
CN113240014A CN113240014A (en) 2021-08-10
CN113240014B true CN113240014B (en) 2022-05-31

Family

ID=77134886

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110538676.6A Active CN113240014B (en) 2021-05-18 2021-05-18 Application method of class II segmentation loss function in achieving class II segmentation of intervertebral disc tissue image

Country Status (1)

Country Link
CN (1) CN113240014B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115294160B (en) * 2022-10-08 2022-12-16 长春理工大学 Lightweight segmentation network for spine image and construction method and application thereof
CN116091870B (en) * 2023-03-01 2023-09-12 哈尔滨市科佳通用机电股份有限公司 Network training and detecting method, system and medium for identifying and detecting damage faults of slave plate seat
CN116823864B (en) * 2023-08-25 2024-01-05 锋睿领创(珠海)科技有限公司 Data processing method, device, equipment and medium based on balance loss function


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3432263B1 (en) * 2017-07-17 2020-09-16 Siemens Healthcare GmbH Semantic segmentation for cancer detection in digital breast tomosynthesis
US10825149B2 (en) * 2018-08-23 2020-11-03 Siemens Healthcare Gmbh Defective pixel correction using adversarial networks
CN111402268B (en) * 2020-03-16 2023-05-23 苏州科技大学 Liver in medical image and focus segmentation method thereof

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110363802A (en) * 2018-10-26 2019-10-22 西安电子科技大学 Prostate figure registration system and method based on automatic segmentation and pelvis alignment
CN111445478A (en) * 2020-03-18 2020-07-24 吉林大学 Intracranial aneurysm region automatic detection system and detection method for CTA image
CN112150410A (en) * 2020-08-24 2020-12-29 浙江工商大学 Automatic detection method and system for weld defects
CN112070772A (en) * 2020-08-27 2020-12-11 闽江学院 Blood leukocyte image segmentation method based on UNet + + and ResNet
CN112330682A (en) * 2020-11-09 2021-02-05 重庆邮电大学 Industrial CT image segmentation method based on deep convolutional neural network
CN112669282A (en) * 2020-12-29 2021-04-16 燕山大学 Spine positioning method based on deep neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Left ventricular image segmentation method based on fully convolutional neural networks; Xie Wenxin et al.; 《软件导刊》 (Software Guide); 2020-05-15 (No. 05); full text *

Also Published As

Publication number Publication date
CN113240014A (en) 2021-08-10

Similar Documents

Publication Publication Date Title
CN113240014B (en) Application method of class II segmentation loss function in achieving class II segmentation of intervertebral disc tissue image
CN110348428B (en) Fundus image classification method and device and computer-readable storage medium
CN109753978B (en) Image classification method, device and computer readable storage medium
CN109191476B (en) Novel biomedical image automatic segmentation method based on U-net network structure
WO2021253939A1 (en) Rough set-based neural network method for segmenting fundus retinal vascular image
CN108021916A (en) Deep learning diabetic retinopathy sorting technique based on notice mechanism
CN107730516B (en) Brain MR image segmentation method based on fuzzy clustering
CN109345538A (en) A kind of Segmentation Method of Retinal Blood Vessels based on convolutional neural networks
CN112055878B (en) Adjusting a machine learning model based on the second set of training data
CN110879982B (en) Crowd counting system and method
CN109272107A (en) A method of improving the number of parameters of deep layer convolutional neural networks
CN111127390A (en) X-ray image processing method and system based on transfer learning
CN107610675A (en) A kind of image processing method and device based on dynamic level
CN110895772A (en) Electricity sales amount prediction method based on combination of grey correlation analysis and SA-PSO-Elman algorithm
CN112764526A (en) Self-adaptive brain-computer interface decoding method based on multi-model dynamic integration
CN112001921A (en) New coronary pneumonia CT image focus segmentation image processing method based on focus weighting loss function
CN114333021A (en) Face recognition method and device, computer equipment and storage medium
CN111191685A (en) Method for dynamically weighting loss function
CN116433644A (en) Eye image dynamic diagnosis method based on recognition model
CN113240698B (en) Application method of multi-class segmentation loss function in implementation of multi-class segmentation of vertebral tissue image
Chaurasiya et al. BPSO-based feature selection for precise class labeling of Diabetic Retinopathy images
CN110968893A (en) Privacy protection method for associated classified data sequence based on Pufferfish framework
Wang et al. Using chaos world cup optimization algorithm for medical images contrast enhancement
CN110189227A (en) Insider trading discriminating conduct based on principal component analysis and reverse transmittance nerve network
CN109886386A (en) Wake up the determination method and device of model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant