CN113850826B - Image segmentation-based heart image processing method, device, equipment and medium - Google Patents
- Publication number
- CN113850826B (application CN202111138992.0A)
- Authority
- CN
- China
- Prior art keywords
- sample data
- model
- heart
- data set
- target
- Prior art date
- Legal status
- Active
Classifications
- G06T7/11 — Region-based segmentation
- G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/24133 — Distances to prototypes
- G06F18/24137 — Distances to cluster centroids
- G06F18/2414 — Smoothing the distance, e.g. radial basis function networks [RBFN]
- G06F18/2415 — Classification techniques based on parametric or probabilistic models
- G06N3/045 — Combinations of networks
- G06N3/047 — Probabilistic or stochastic networks
- G06N3/084 — Backpropagation, e.g. using gradient descent
- G06T2207/10081 — Computed x-ray tomography [CT]
- G06T2207/10088 — Magnetic resonance imaging [MRI]
- G06T2207/10132 — Ultrasound image
- G06T2207/20081 — Training; Learning
- G06T2207/30048 — Heart; Cardiac
Abstract
The application relates to the medical field and to image segmentation in artificial intelligence, and provides a heart image processing method comprising the following steps: acquiring a first sample data set and a second sample data set; expanding the first sample data set to obtain a third sample data set, and acquiring a heart segmentation model to be trained, wherein the heart segmentation model comprises a student network and a teacher network; performing iterative training on the heart segmentation model according to the second sample data set and the third sample data set to obtain a target heart segmentation model; and acquiring a target heart image to be segmented and inputting it into the target heart segmentation model for image segmentation to obtain a target heart segmentation image. The method reduces the data labeling cost of the heart segmentation model and improves its accuracy. The application also relates to blockchain technology: the target heart segmentation model can be stored in a blockchain.
Description
Technical Field
The present application relates to the field of image processing, and in particular, to a method, apparatus, device, and medium for processing cardiac images based on image segmentation.
Background
The heart is one of the most important organs in the human body. Heart disease can seriously affect a patient's daily life and may even be life-threatening. Segmentation of heart images plays a vital role in clinical research on cardiac pathological tissue: it can assist doctors in diagnosis, reduce human error, improve medical efficiency, and save precious time for doctors and patients. At present, heart segmentation models are mainly trained with deep learning algorithms. For a heart segmentation model to achieve a good segmentation effect, a large number of annotated heart images are needed; however, annotating heart images is difficult and consumes considerable manpower and material resources, resulting in a poor user experience.
Disclosure of Invention
The embodiment of the application provides a heart image processing method, device, equipment and medium based on image segmentation, which aim to reduce the data labeling cost of a heart segmentation model and improve the accuracy of the heart segmentation model.
In a first aspect, an embodiment of the present application provides a cardiac image processing method based on image segmentation, including:
acquiring a first sample data set and a second sample data set, wherein data in the first sample data set contains labels, and data in the second sample data set is unlabeled;
Expanding the first sample data set to obtain a third sample data set, and acquiring a heart segmentation model to be trained, wherein the heart segmentation model comprises a student network and a teacher network;
Performing iterative training on the heart segmentation model according to the second sample data set and the third sample data set to obtain a target heart segmentation model, wherein model parameters of the teacher network are updated based on an exponential weighted average algorithm and model parameters of the student network;
and acquiring a target heart image to be segmented, and inputting the target heart image into the target heart segmentation model for image segmentation to obtain a target heart segmentation image.
In a second aspect, an embodiment of the present application further provides a cardiac image processing apparatus, including:
The acquisition module is used for acquiring a first sample data set and a second sample data set, wherein data in the first sample data set contains labels, and data in the second sample data set is unlabeled;
the data expansion module is used for expanding the first sample data set to obtain a third sample data set;
The acquisition module is further used for acquiring a heart segmentation model to be trained, wherein the heart segmentation model comprises a student network and a teacher network;
The model training module is used for carrying out iterative training on the heart segmentation model according to the second sample data set and the third sample data set to obtain a target heart segmentation model, wherein model parameters of the teacher network are updated based on an exponential weighted average algorithm and model parameters of the student network;
The image segmentation module is used for acquiring a target heart image to be segmented, inputting the target heart image into the target heart segmentation model for image segmentation, and obtaining a target heart segmentation image.
In a third aspect, embodiments of the present application also provide a computer device comprising a processor, a memory, and a computer program stored on the memory and executable by the processor, wherein the computer program, when executed by the processor, implements the steps of the cardiac image processing method as described above.
In a fourth aspect, embodiments of the present application also provide a computer readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the cardiac image processing method as described above.
The embodiments of the present application provide a heart image processing method, apparatus, device, and medium based on image segmentation. A heart segmentation model comprising a student network and a teacher network is iteratively trained with a first sample data set containing labeled sample data and a second sample data set containing unlabeled sample data, so that the heart segmentation model can be trained with a small amount of labeled data assisted by a large amount of unlabeled data; since not all of the data needs to be labeled, the data labeling cost is reduced.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a method for processing cardiac images according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of the heart segmentation model to be trained in an embodiment of the present application;
FIG. 3 is a schematic structural diagram of the target heart segmentation model in an embodiment of the present application;
FIG. 4 is a schematic block diagram of a cardiac image processing apparatus provided by an embodiment of the present application;
fig. 5 is a schematic block diagram of a computer device according to an embodiment of the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The flow diagrams depicted in the figures are merely illustrative and not necessarily all of the elements and operations/steps are included or performed in the order described. For example, some operations/steps may be further divided, combined, or partially combined, so that the order of actual execution may be changed according to actual situations.
The embodiment of the application can acquire and process the related data based on artificial intelligence technology. Artificial intelligence (Artificial Intelligence, AI) is the theory, method, technology, and application system that uses a digital computer or a digital-computer-controlled machine to simulate, extend, and expand human intelligence, sense the environment, acquire knowledge, and use knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions.
The embodiment of the application provides a heart image processing method, apparatus, device, and medium based on image segmentation. The heart image processing method can be applied to a server, where the server can be an independent server or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), big data, and artificial intelligence platforms.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a flowchart of a cardiac image processing method based on image segmentation according to an embodiment of the present application.
As shown in fig. 1, the image segmentation-based heart image processing method includes steps S101 to S104.
Step S101, acquiring a first sample data set and a second sample data set.
The data in the first sample data set contains labels, while the data in the second sample data set does not. The first sample data set includes a plurality of pieces of first sample data, and the second sample data set includes a plurality of pieces of second sample data, where the number of pieces of first sample data is smaller than the number of pieces of second sample data. Each piece of first sample data includes a heart image and an annotated heart segmentation map, while each piece of second sample data includes only a heart image. The heart image may be a cardiac medical image, that is, an image of internal tissue acquired non-invasively for medical treatment or medical research, such as an image produced by medical equipment including CT (Computed Tomography), MRI (Magnetic Resonance Imaging), and US (ultrasound).
Step S102, expanding the first sample data set to obtain a third sample data set, and acquiring a heart segmentation model to be trained.
Illustratively, a first number of first sample data in the first sample data set and a second number of second sample data in the second sample data set are determined; and determining expansion multiplying power according to the second number and the first number, and expanding the first sample data set according to the expansion multiplying power to obtain a third sample data set. The difference between the number of samples in the second sample data set and the number of samples in the third sample data set is less than or equal to a preset threshold, which may be set based on practical situations, and the embodiment is not limited in particular, for example, the preset threshold is 15.
Illustratively, a ratio of the second number to the first number is determined as an expansion ratio. For example, if the first sample data set includes 50 pieces of first sample data and the second sample data set includes 1000 pieces of second sample data, the expansion magnification is 1000/50=20. Or determining the difference value between the second number and the first number to obtain a sample number difference value, and determining the ratio of the sample number difference value to the first number as an expansion multiplying power. For example, if the first sample data set includes 50 pieces of first sample data and the second sample data set includes 1000 pieces of second sample data, the sample number difference is 950, and the expansion ratio is 950/50=19.
For example, the manner of expanding the first sample data set according to the expansion magnification to obtain the third sample data set may be: copying the first sample data set according to the expansion magnification to obtain new sample data, and combining the copies to obtain the third sample data set. For example, if the expansion magnification is 20 and the first sample data set includes 50 pieces of first sample data, the 50 pieces of sample data are copied 20 times, and the sample data obtained from the 20 copies are combined, thereby obtaining a third sample data set including 1000 pieces of first sample data.
Alternatively, a preset proportion of first sample data may be randomly sampled from the first sample data set in each round until the number of sampling rounds reaches the expansion magnification, to obtain newly added sample data; the newly added sample data and the first sample data set are then combined to obtain the third sample data set. For example, if the expansion magnification is 19 and the first sample data set includes 50 pieces of first sample data, then 98% (49 pieces) of the first sample data are randomly sampled from the 50 pieces each time; sampling 19 times in total yields 931 pieces of newly added sample data, and combining these 931 pieces with the first sample data set gives a third sample data set including 981 pieces of first sample data.
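The two expansion strategies above can be sketched as follows. This is a minimal illustration, and the function names (`expansion_ratio`, `expand_by_copy`, `expand_by_sampling`) are illustrative, not identifiers from the application.

```python
import random

def expansion_ratio(n_labeled, n_unlabeled, use_difference=False):
    """Expansion magnification: the ratio of the second number (or of the
    sample-number difference) to the first number, as in the examples."""
    if use_difference:
        return (n_unlabeled - n_labeled) // n_labeled
    return n_unlabeled // n_labeled

def expand_by_copy(first_set, ratio):
    """Copy the first sample data set `ratio` times and merge the copies."""
    return [sample for _ in range(ratio) for sample in first_set]

def expand_by_sampling(first_set, ratio, proportion=0.98):
    """Randomly sample a preset proportion of the set in each round until
    the number of rounds reaches the expansion magnification, then merge
    the newly added samples with the original set."""
    k = round(len(first_set) * proportion)
    added = []
    for _ in range(ratio):
        added.extend(random.sample(first_set, k))
    return list(first_set) + added

labeled = list(range(50))
print(expansion_ratio(50, 1000))             # 1000 / 50 = 20
print(len(expand_by_copy(labeled, 20)))      # 1000
print(len(expand_by_sampling(labeled, 19)))  # 50 + 19 * 49 = 981
```

The sizes match the worked examples in the description: 20 copies of 50 samples give 1000, and 19 rounds of 49 samples added to the original 50 give 981.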
The heart segmentation model to be trained comprises a student network and a teacher network. As shown in fig. 2, the heart segmentation model includes a student network 10 and a teacher network 20, the student network 10 includes a first feature extraction network 11, a first attention mechanism network 12, a first convolution layer 13, a first Relu layer 14, and a first softmax layer 15, and the first feature extraction network 11 is connected to the first attention mechanism network 12, the first attention mechanism network 12 is connected to the first convolution layer 13 and the first softmax layer 15, respectively, the first convolution layer 13 is connected to the first Relu layer 14, the teacher network 20 includes a second feature extraction network 21, a second attention mechanism network 22, a second convolution layer 23, a second Relu layer 24, and a second softmax layer 25, and the second feature extraction network 21 is connected to the second attention mechanism network 22, the second attention mechanism network 22 is connected to the second convolution layer 23 and the second softmax layer 25, respectively, and the second convolution layer 23 is connected to the second Relu layer 24.
It will be appreciated that the first feature extraction network 11 may be the same as or different from the second feature extraction network 21, the first attention mechanism network 12 may be the same as or different from the second attention mechanism network 22, and the first convolution layer 13 may be the same as or different from the second convolution layer 23. For example, the first feature extraction network 11 is a residual network Resnet, the second feature extraction network 21 is a U-net network, the first Attention mechanism network 12 is Self-Attention, the second Attention mechanism network 22 is Soft-Attention, and the first convolution layer 13 and the second convolution layer 23 are convolution layers with a convolution kernel size of 1x1.
And step S103, performing iterative training on the heart segmentation model according to the second sample data set and the third sample data set to obtain a target heart segmentation model.
The model parameters of the teacher network are updated based on the exponential weighted average algorithm and the model parameters of the student network, and the target heart segmentation model may include only the teacher network, or may include the student network and the teacher network, which is not limited in this embodiment.
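The exponentially weighted average update of the teacher network's parameters from the student network's parameters can be sketched as follows. The decay value is an illustrative choice, not one stated in the application.

```python
def ema_update(teacher_params, student_params, decay=0.99):
    """Update each teacher parameter as an exponentially weighted average
    of its previous value and the current student parameter value."""
    return {
        name: decay * teacher_params[name] + (1.0 - decay) * student_params[name]
        for name in teacher_params
    }

# After each student update, blend the student weights into the teacher.
teacher = {"w": 1.0}
student = {"w": 0.0}
teacher = ema_update(teacher, student, decay=0.9)
print(teacher["w"])  # 0.9 * 1.0 + 0.1 * 0.0 = 0.9
```

Only the student network is updated by backpropagation; the teacher follows it smoothly through this averaging, which is what makes its predictions usable as stable targets.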
Illustratively, a piece of sample data is alternately selected from the third sample data set and the second sample data set as the target sample data; inputting heart images in target sample data into a student network for processing to obtain a first prediction attention map and a first prediction heart segmentation map; inputting the heart image in the target sample data into a teacher network for processing to obtain a second prediction attention map and a second prediction heart segmentation map; determining a model loss value based on the first predicted attention map, the first predicted heart segmentation map, the second predicted attention map, and the second predicted heart segmentation map; determining whether the heart segmentation model converges according to the model loss value; if the heart segmentation model does not converge, updating the model parameters of the heart segmentation model and returning to perform the step of alternately selecting one piece of sample data from the third sample data set and the second sample data set as target sample data until the heart segmentation model converges.
For example, the manner of determining whether the heart segmentation model converges based on the model loss value may be: determining whether the model loss value is less than or equal to a preset loss value; if the model loss value is smaller than or equal to a preset loss value, determining that the heart segmentation model converges; if the model loss value is larger than the preset loss value, determining that the heart segmentation model is not converged. The preset loss value may be set based on practical situations, which is not specifically limited in this embodiment.
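The loss-threshold convergence check and the surrounding training loop can be sketched as follows; `select_sample`, `step`, and the preset loss value 0.01 are placeholders, not values from the application.

```python
def converged(model_loss, preset_loss=0.01):
    """The model is considered converged once the model loss value is
    less than or equal to the preset loss value."""
    return model_loss <= preset_loss

def train(select_sample, step, preset_loss=0.01, max_iters=100_000):
    """Iterate: pick target sample data, compute the model loss for it
    (the `step` callback performs the forward pass, loss computation, and
    parameter update), and stop once the loss converges."""
    loss = float("inf")
    for _ in range(max_iters):
        sample = select_sample()
        loss = step(sample)
        if converged(loss, preset_loss):
            break
    return loss
```

The `max_iters` guard is an added safety bound; the description itself only specifies stopping on convergence.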
Illustratively, the manner in which the cardiac image is input into the student network for processing to obtain the first predicted attention map and the first predicted heart segmentation map may be: inputting the heart image into a first feature extraction network for feature extraction to obtain a first feature map; inputting the first feature map into a first attention mechanism network for processing to obtain a second feature map; inputting the second feature map into the first convolution layer for convolution to obtain a third feature map; inputting the third feature map into a first Relu layer for processing to obtain a first predictive attention map; and inputting the second characteristic map into the first softmax layer for processing to obtain a first predicted heart segmentation map.
Illustratively, the manner in which the heart image is input into the teacher network for processing to obtain the second predicted attention map and the second predicted heart segmentation map may be: inputting the heart image into a second feature extraction network for feature extraction to obtain a fourth feature map; inputting the fourth feature map into a second attention mechanism network for processing to obtain a fifth feature map; inputting the fifth characteristic diagram into a second convolution layer for convolution to obtain a sixth characteristic diagram; inputting the sixth feature map into a second Relu layer for processing to obtain a second predictive attention map; and inputting the fifth characteristic map into a second softmax layer for processing to obtain a second predicted heart segmentation map.
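The two output branches shared by the student and teacher networks (a 1x1 convolution followed by a ReLU layer for the predicted attention map, and a softmax layer for the predicted segmentation map) can be sketched in NumPy. The channel counts and random weights here are arbitrary; this only illustrates the wiring and output shapes, not the actual networks.

```python
import numpy as np

def heads(feature_map, conv1x1_weight):
    """feature_map: (C, H, W) output of the attention mechanism network;
    conv1x1_weight: (1, C). Returns (predicted attention map of shape
    (H, W), predicted segmentation map of shape (C, H, W))."""
    # A 1x1 convolution is a per-pixel linear map over channels.
    att = np.tensordot(conv1x1_weight, feature_map, axes=([1], [0]))[0]
    att = np.maximum(att, 0.0)  # ReLU layer
    # Softmax over the channel axis gives per-pixel class probabilities.
    e = np.exp(feature_map - feature_map.max(axis=0, keepdims=True))
    seg = e / e.sum(axis=0, keepdims=True)  # softmax layer
    return att, seg

rng = np.random.default_rng(0)
fmap = rng.normal(size=(4, 8, 8))  # 4 classes, 8x8 feature map
att, seg = heads(fmap, rng.normal(size=(1, 4)))
print(att.shape, seg.shape)  # (8, 8) (4, 8, 8)
```

Note that both branches read the same feature map: the attention map goes through the convolution and ReLU, while the segmentation map comes directly from the softmax, matching the connections described for FIG. 2.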
Illustratively, the manner of alternately selecting a piece of sample data from the third sample data set and the second sample data set as the target sample data may be: selecting a piece of sample data from the third sample data set as target sample data; if the number of samples selected from the third sample data set reaches the first preset number, selecting one piece of sample data from the second sample data set as target sample data; if the number of samples selected from the second sample data set reaches the second preset number, selecting one piece of sample data from the third sample data set as target sample data, and repeating the steps, so that one piece of sample data is alternately selected from the third sample data set and the second sample data set as target sample data.
The first preset number and the second preset number may be the same or different, and the first preset number and the second preset number may be set based on actual situations, which is not specifically limited in this embodiment. For example, the first preset number is 1, the second preset number is 1, and for example, the first preset number is 3, and the second preset number is 2.
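The alternating selection of target sample data can be sketched as a generator, where `first_preset` and `second_preset` correspond to the first and second preset numbers.

```python
import itertools

def alternating_samples(third_set, second_set, first_preset=1, second_preset=1):
    """Yield target sample data: first_preset items from the third
    (labeled, expanded) set, then second_preset items from the second
    (unlabeled) set, cycling through each set and repeating."""
    labeled = itertools.cycle(third_set)
    unlabeled = itertools.cycle(second_set)
    while True:
        for _ in range(first_preset):
            yield ("labeled", next(labeled))
        for _ in range(second_preset):
            yield ("unlabeled", next(unlabeled))

gen = alternating_samples(["a", "b"], ["x", "y"], first_preset=3, second_preset=2)
print([next(gen) for _ in range(5)])
# 3 labeled samples, then 2 unlabeled samples, then 3 labeled again, ...
```

With `first_preset=1` and `second_preset=1` this reduces to strict one-for-one alternation, the first example given in the text.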
Illustratively, the model loss value may be determined from the first predicted attention map, the first predicted cardiac segmentation map, the second predicted attention map, and the second predicted cardiac segmentation map by: if the target sample data is second sample data without labels, calculating an error between the first prediction heart segmentation map and the second prediction heart segmentation map based on the first loss function to obtain a first loss value; calculating an error between the first predicted attention map and the second predicted attention map based on the second loss function, resulting in a second loss value; and carrying out weighted summation on the first loss value and the second loss value to obtain a model loss value.
Illustratively, a first weighting coefficient and a second weighting coefficient are obtained; the first loss value is multiplied by the first weighting coefficient to obtain a first weighted loss value; the second loss value is multiplied by the second weighting coefficient to obtain a second weighted loss value; and the first weighted loss value and the second weighted loss value are accumulated to obtain the model loss value. The first weighting coefficient and the second weighting coefficient may be set based on practical situations, which is not specifically limited in this embodiment.
Wherein the first loss function may be a mean squared error, for example of the form:
L1 = (1/N) · Σ_{i=1}^{N} Σ_{j=1}^{n} ( p_j(x_i) − p_j′(x_i) )²
wherein N is the total number of pixels, n is the number of classification categories, p_j(x_i) is the predicted probability that the ith pixel in the first predicted heart segmentation map belongs to classification category j, and p_j′(x_i) is the predicted probability that the ith pixel in the second predicted heart segmentation map belongs to classification category j. The classification categories of the pixels may include the left ventricle, the right ventricle, the left ventricular outer wall, the heart outer wall, and the like, and of course may also include other classification categories, which is not specifically limited in this embodiment.
Wherein the second loss function may likewise be a mean squared error, for example of the form:
L2 = (1/N1) · Σ_{i=1}^{N1} ( f(x_i) − f(x_i′) )²
where N1 is the number of pixels in the first predicted attention map, N2 is the number of pixels in the second predicted attention map, f(x_i) is the pixel value of the ith pixel in the first predicted attention map, and f(x_i′) is the pixel value of the ith pixel in the second predicted attention map.
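Assuming both consistency errors are mean squared errors (the description names the symbols but the exact functional form is an assumption here), the two loss terms for unlabeled target sample data could be computed as:

```python
import numpy as np

def segmentation_consistency_loss(p_student, p_teacher):
    """First loss: mean squared error between the class-probability maps of
    the first and second predicted heart segmentation maps.
    p_student, p_teacher: arrays of shape (n_pixels, n_classes)."""
    return float(np.mean(np.sum((p_student - p_teacher) ** 2, axis=1)))

def attention_consistency_loss(att_student, att_teacher):
    """Second loss: mean squared error between the pixel values of the
    first and second predicted attention maps."""
    return float(np.mean((att_student - att_teacher) ** 2))

p = np.array([[0.7, 0.3], [0.2, 0.8]])
print(segmentation_consistency_loss(p, p))                  # identical maps -> 0.0
print(attention_consistency_loss(np.ones(4), np.zeros(4)))  # 1.0
```

Neither term uses a label, which is why these two losses alone suffice when the target sample data comes from the unlabeled second sample data set.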
Illustratively, the model loss value may be determined from the first predicted attention map, the first predicted cardiac segmentation map, the second predicted attention map, and the second predicted cardiac segmentation map by: if the target sample data is marked first sample data, calculating an error between the first prediction heart segmentation map and the second prediction heart segmentation map based on a first loss function to obtain a first loss value; calculating an error between the first predicted attention map and the second predicted attention map based on the second loss function, resulting in a second loss value; based on a third loss function, calculating cross entropy between the heart segmentation image marked in the first sample data and the first prediction heart segmentation image to obtain a third loss value; processing the heart segmentation image marked in the first sample data to obtain a target attention map, and calculating an error between the target attention map and the first prediction attention map based on a fourth loss function to obtain a fourth loss value; and carrying out weighted summation on the first loss value, the second loss value, the third loss value and the fourth loss value to obtain a model loss value.
Illustratively, a first weighting coefficient, a second weighting coefficient, a third weighting coefficient, and a fourth weighting coefficient are obtained; the first weighting coefficient is multiplied by the first loss value to obtain a first weighted loss value; the second weighting coefficient is multiplied by the second loss value to obtain a second weighted loss value; the third weighting coefficient is multiplied by the third loss value to obtain a third weighted loss value; the fourth weighting coefficient is multiplied by the fourth loss value to obtain a fourth weighted loss value; and the four weighted loss values are accumulated to obtain the model loss value. The first, second, third and fourth weighting coefficients may be set based on actual situations, which is not particularly limited in this embodiment.
The third loss function may be a cross entropy loss function, specifically:

$L_3 = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{n} y_{ic}\log p_c(x_i)$

wherein N is the total number of pixel points, n is the number of classification categories, p_c(x_i) is the prediction probability that the i-th pixel point in the first predicted heart segmentation map belongs to classification category c, and y_{ic} is the indicator function, equal to 1 when c is the classification category of the i-th pixel point in the labeled heart segmentation image and 0 otherwise.
Wherein the fourth loss function may be:

$L_4 = \frac{1}{N_1}\sum_{i=1}^{N_1}\left(f(x_i)-f(x_i'')\right)^2$

where N_1 is the number of pixel points in the first predicted attention map, N_3 is the number of pixel points in the target attention map (the two maps are of the same size, so N_1 = N_3), f(x_i) is the pixel value of the i-th pixel point in the first predicted attention map, and f(x_i'') is the pixel value of the i-th pixel point in the target attention map.
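For labeled target sample data, the cross-entropy third loss and the four-term weighted summation can be sketched as follows. This is a minimal NumPy illustration with hypothetical names; labels are assumed to be stored as one integer category index per pixel point.

```python
import numpy as np

def third_loss(p_student, labels):
    """Cross entropy between the labeled heart segmentation image and the
    first predicted heart segmentation map.
    p_student: (N, n) per-pixel class probabilities.
    labels: (N,) integer classification category of each pixel point."""
    n_pixels = p_student.shape[0]
    # Pick out the predicted probability of each pixel's labeled category
    # (the indicator y_ic selects exactly one term per pixel).
    return -np.mean(np.log(p_student[np.arange(n_pixels), labels]))

def labeled_model_loss(l1, l2, l3, l4, w=(1.0, 1.0, 1.0, 1.0)):
    """Weighted summation of the first, second, third and fourth loss values."""
    return w[0] * l1 + w[1] * l2 + w[2] * l3 + w[3] * l4
```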
By way of example, the way to update the model parameters of the heart segmentation model may be: updating model parameters of the student network based on a back propagation algorithm; based on the exponential weighted average algorithm and the model parameters of the student network, updating the model parameters of the teacher network. The model parameters of the teacher network are updated through an exponential weighted average algorithm and the model parameters of the student network, so that the teacher network can obtain better image segmentation performance.
For example, based on the exponential weighted average algorithm and the model parameters of the student network, the manner of updating the model parameters of the teacher network may be: acquiring a first model parameter of a teacher network under the current iteration round number and a second model parameter of a student network under the previous iteration round number; determining target model parameters of a teacher network according to the first model parameters, the second model parameters and preset adjustment coefficients; and updating the model parameters of the teacher network into target model parameters. The preset adjustment coefficient may be set based on practical situations, which is not specifically limited in this embodiment.
For example, according to the first model parameter, the second model parameter and the preset adjustment coefficient, the manner of determining the target model parameter of the teacher network may be: calculating the product of the second model parameter and the preset adjustment coefficient to obtain a model parameter gain value; and accumulating the first model parameter, weighted by one minus the preset adjustment coefficient, with the model parameter gain value to obtain the target model parameter of the teacher network. For example, if θ'_t represents the target model parameter, θ_t represents the first model parameter of the teacher network for the current iteration round, θ'_{t-1} represents the second model parameter of the student network for the previous iteration round, and α represents the preset adjustment coefficient, then the target model parameter is θ'_t = αθ'_{t-1} + (1-α)θ_t.
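The exponential weighted average update above can be sketched per parameter as follows; the function name and scalar parameters are illustrative (in practice the update would be applied elementwise to every tensor of the networks).

```python
def update_teacher_param(theta_t, theta_prev, alpha):
    """Exponential weighted average update of one teacher-network parameter.
    theta_t: first model parameter (current iteration round).
    theta_prev: second model parameter (previous iteration round).
    alpha: preset adjustment coefficient in [0, 1).
    Returns theta'_t = alpha * theta'_{t-1} + (1 - alpha) * theta_t."""
    return alpha * theta_prev + (1.0 - alpha) * theta_t
```

A larger α makes the teacher change more slowly, which is what gives it the smoother, better-performing parameters the text describes.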
Step S104, obtaining a target heart image to be segmented, and inputting the target heart image into a target heart segmentation model for image segmentation to obtain a target heart segmentation image.
The target heart segmentation model may include only a teacher network, or may include a student network and a teacher network, which is not specifically limited in this embodiment. Optionally, the target heart segmentation model only includes a teacher network, as shown in fig. 3, where the target heart segmentation model includes a third feature extraction network 31, a third attention mechanism network 32, and a third softmax layer 33, where the third feature extraction network 31 is connected to the third attention mechanism network 32, the third attention mechanism network 32 is connected to the third softmax layer 33, the third feature extraction network 31 is a network obtained by iteratively training the second feature extraction network 21 in fig. 2, the third attention mechanism network 32 is a network obtained by iteratively training the second attention mechanism network 22 in fig. 2, and the third softmax layer 33 is obtained by iteratively training the second softmax layer 25 in fig. 2. The target heart segmentation model may be stored in a blockchain to improve the security of the target heart segmentation model.
The method comprises the steps of inputting a target heart image into a third feature extraction network for feature extraction to obtain a first target feature map; inputting the first target feature map into a third attention mechanism network for processing to obtain a second target feature map; inputting the second target feature map into a third softmax layer for processing to obtain the prediction probability of each pixel point in the target heart image as each classification category; determining a target classification category of each pixel point in the target heart image according to the prediction probability that each pixel point in the target heart image is the classification category; and generating a target heart segmentation image according to the target classification category of each pixel point in the target heart image. The classification categories of the pixel points may include a left ventricle, a right ventricle, a left ventricle outer wall, a heart outer wall, and the like, and of course, may also include other classification categories, which are not particularly limited in this embodiment.
For example, according to the prediction probability that a pixel point in the target heart image belongs to each classification category, the manner of determining the target classification category of the pixel point may be: determining the classification category corresponding to the maximum prediction probability as the target classification category of the pixel point. For example, if the prediction probabilities of pixel point a in the target heart image for the left ventricle, the right ventricle, the left ventricular outer wall, and the cardiac outer wall are p_1, p_2, p_3 and p_4 respectively, and p_2 > p_1 > p_4 > p_3, then the target classification category of pixel point a in the target heart image is the right ventricle.
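The per-pixel argmax step can be sketched as follows; the category list is taken from the examples in the text and is illustrative only (the patent explicitly allows other categories).

```python
import numpy as np

# Illustrative category order; the patent lists these only as examples.
CLASSES = ["left ventricle", "right ventricle",
           "left ventricular outer wall", "cardiac outer wall"]

def target_category(probs):
    """probs: (..., n) prediction probabilities for each pixel point.
    Returns the index of the classification category with the largest
    prediction probability for each pixel point."""
    return np.argmax(probs, axis=-1)
```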
According to the above heart image processing method, a heart segmentation model comprising a student network and a teacher network is iteratively trained with a first sample data set of labeled sample data and a second sample data set of unlabeled sample data, so that the model can learn from a small amount of labeled data supplemented by a large amount of unlabeled data. Since a large amount of labeled data is not needed, the data labeling cost is reduced; meanwhile, because the model parameters of the teacher network are updated based on the exponential weighted average algorithm and the model parameters of the student network, the image segmentation performance of the teacher network, and hence the accuracy of the heart segmentation model, is improved.
Referring to fig. 4, fig. 4 is a schematic block diagram of a cardiac image processing apparatus according to an embodiment of the present application.
As shown in fig. 4, the cardiac image processing apparatus 200 includes:
An obtaining module 210, configured to obtain a first sample data set and a second sample data set, where data in the first sample data set includes labels, and data in the second sample data set has no labels;
A data expansion module 220, configured to expand the first sample data set to obtain a third sample data set;
the obtaining module 210 is further configured to obtain a heart segmentation model to be trained, where the heart segmentation model includes a student network and a teacher network;
the model training module 230 is configured to perform iterative training on the heart segmentation model according to the second sample data set and the third sample data set to obtain a target heart segmentation model, where model parameters of the teacher network are updated based on an exponential weighted average algorithm and model parameters of the student network;
the image segmentation module 240 is configured to obtain a target heart image to be segmented, and input the target heart image into the target heart segmentation model for image segmentation, so as to obtain a target heart segmentation image.
In an embodiment, the model training module 230 is further configured to:
alternately selecting a piece of sample data from the third sample data set and the second sample data set as target sample data;
inputting the heart image in the target sample data into the student network for processing to obtain a first prediction attention map and a first prediction heart segmentation map;
Inputting the heart image in the target sample data into the teacher network for processing to obtain a second prediction attention map and a second prediction heart segmentation map;
Determining a model loss value from the first predicted attention map, the first predicted cardiac segmentation map, the second predicted attention map, and the second predicted cardiac segmentation map;
determining whether the heart segmentation model converges according to the model loss value;
If the heart segmentation model is not converged, updating model parameters of the heart segmentation model;
returning to the step of alternately selecting a piece of sample data from the third sample data set and the second sample data set as target sample data until the heart segmentation model converges.
In an embodiment, the model training module 230 is further configured to:
If the target sample data is the second sample data, calculating an error between the first predicted heart segmentation map and the second predicted heart segmentation map based on a first loss function to obtain a first loss value;
Calculating an error between the first predicted attention profile and the second predicted attention profile based on a second loss function, resulting in a second loss value;
and carrying out weighted summation on the first loss value and the second loss value to obtain a model loss value.
In an embodiment, the model training module 230 is further configured to:
If the target sample data is the first sample data, calculating an error between the first predicted heart segmentation map and the second predicted heart segmentation map based on a first loss function to obtain a first loss value;
Calculating an error between the first predicted attention profile and the second predicted attention profile based on a second loss function, resulting in a second loss value;
Calculating cross entropy between the heart segmentation image marked in the first sample data and the first prediction heart segmentation image based on a third loss function to obtain a third loss value;
Processing the heart segmentation image marked in the first sample data to obtain a target attention map, and calculating an error between the target attention map and the first prediction attention map based on a fourth loss function to obtain a fourth loss value;
and carrying out weighted summation on the first loss value, the second loss value, the third loss value and the fourth loss value to obtain a model loss value.
In an embodiment, the model training module 230 is further configured to:
Updating model parameters of the student network based on a back propagation algorithm;
Acquiring a first model parameter of the teacher network under the current iteration round number and a second model parameter of the student network under the previous iteration round number;
Determining target model parameters of the teacher network according to the first model parameters, the second model parameters and preset adjustment coefficients;
and updating the model parameters of the teacher network into the target model parameters.
In an embodiment, the model training module 230 is further configured to:
Calculating the product of the second model parameter and the preset adjusting coefficient to obtain a model parameter gain value;
and accumulating the first model parameters and the model parameter gain values to obtain target model parameters of the teacher network.
In an embodiment, the data expansion module 220 is further configured to:
determining a first number of first sample data in the first sample data set and a second number of second sample data in the second sample data set;
And determining expansion multiplying power according to the second number and the first number, and expanding the first sample data set according to the expansion multiplying power to obtain a third sample data set.
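The expansion step can be sketched as follows. The text does not fix the exact rounding rule or duplication scheme, so this sketch assumes a ceiling ratio and simple repetition of the labeled set; both choices are hypothetical.

```python
import math

def expand_first_set(first_set, second_set):
    """Expand the labeled first sample data set into a third sample data set.
    The expansion ratio is derived from the second number divided by the
    first number (assumed here to be rounded up), so that the expanded set
    is comparable in size to the unlabeled second sample data set."""
    ratio = math.ceil(len(second_set) / len(first_set))
    return list(first_set) * ratio
```

With a ratio near 1:1, alternately selecting samples from the third and second sets then draws labeled and unlabeled data in roughly equal proportion during training.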
It should be noted that, for convenience and brevity of description, for the specific working processes of the above-described apparatus, modules and units, reference may be made to the corresponding processes in the foregoing embodiment of the image segmentation-based cardiac image processing method, which are not described herein again.
The apparatus provided by the above embodiments may be implemented in the form of a computer program which may be run on a computer device as shown in fig. 5.
Referring to fig. 5, fig. 5 is a schematic block diagram of a computer device according to an embodiment of the present application. The computer device may be a server.
As shown in fig. 5, the computer device includes a processor, a memory, and a network interface connected by a system bus, wherein the memory may include a storage medium and an internal memory.
The storage medium may store an operating system and a computer program. The computer program comprises program instructions which, when executed, cause the processor to perform any of the image segmentation-based cardiac image processing methods described herein.
The processor is used to provide computing and control capabilities to support the operation of the entire computer device.
The network interface is used for network communication such as transmitting assigned tasks and the like. It will be appreciated by those skilled in the art that the structure shown in FIG. 5 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
It should be appreciated that the processor may be a central processing unit (CPU), or another general purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general purpose processor may be a microprocessor, or the processor may be any conventional processor.
Wherein in an embodiment the processor is configured to run a computer program stored in the memory to implement the steps of:
acquiring a first sample data set and a second sample data set, wherein data in the first sample data set contains labels, and data in the second sample data set is unlabeled;
Expanding the first sample data set to obtain a third sample data set, and acquiring a heart segmentation model to be trained, wherein the heart segmentation model comprises a student network and a teacher network;
Performing iterative training on the heart segmentation model according to the second sample data set and the third sample data set to obtain a target heart segmentation model, wherein model parameters of the teacher network are updated based on an exponential weighted average algorithm and model parameters of the student network;
and acquiring a target heart image to be segmented, and inputting the target heart image into the target heart segmentation model for image segmentation to obtain a target heart segmentation image.
In an embodiment, the processor, when implementing iterative training of the heart segmentation model based on the second sample data set and the third sample data set, is configured to implement:
alternately selecting a piece of sample data from the third sample data set and the second sample data set as target sample data;
inputting the heart image in the target sample data into the student network for processing to obtain a first prediction attention map and a first prediction heart segmentation map;
Inputting the heart image in the target sample data into the teacher network for processing to obtain a second prediction attention map and a second prediction heart segmentation map;
Determining a model loss value from the first predicted attention map, the first predicted cardiac segmentation map, the second predicted attention map, and the second predicted cardiac segmentation map;
determining whether the heart segmentation model converges according to the model loss value;
If the heart segmentation model is not converged, updating model parameters of the heart segmentation model;
returning to the step of alternately selecting a piece of sample data from the third sample data set and the second sample data set as target sample data until the heart segmentation model converges.
In an embodiment, the processor, when implementing determining model loss values from the first predicted attention map, the first predicted cardiac segmentation map, the second predicted attention map, and the second predicted cardiac segmentation map, is configured to implement:
If the target sample data is the second sample data, calculating an error between the first predicted heart segmentation map and the second predicted heart segmentation map based on a first loss function to obtain a first loss value;
Calculating an error between the first predicted attention profile and the second predicted attention profile based on a second loss function, resulting in a second loss value;
and carrying out weighted summation on the first loss value and the second loss value to obtain a model loss value.
In an embodiment, the processor, when implementing determining model loss values from the first predicted attention map, the first predicted cardiac segmentation map, the second predicted attention map, and the second predicted cardiac segmentation map, is configured to implement:
If the target sample data is the first sample data, calculating an error between the first predicted heart segmentation map and the second predicted heart segmentation map based on a first loss function to obtain a first loss value;
Calculating an error between the first predicted attention profile and the second predicted attention profile based on a second loss function, resulting in a second loss value;
Calculating cross entropy between the heart segmentation image marked in the first sample data and the first prediction heart segmentation image based on a third loss function to obtain a third loss value;
Processing the heart segmentation image marked in the first sample data to obtain a target attention map, and calculating an error between the target attention map and the first prediction attention map based on a fourth loss function to obtain a fourth loss value;
and carrying out weighted summation on the first loss value, the second loss value, the third loss value and the fourth loss value to obtain a model loss value.
In an embodiment, the processor, when implementing updating model parameters of the heart segmentation model, is configured to implement:
Updating model parameters of the student network based on a back propagation algorithm;
Acquiring a first model parameter of the teacher network under the current iteration round number and a second model parameter of the student network under the previous iteration round number;
Determining target model parameters of the teacher network according to the first model parameters, the second model parameters and preset adjustment coefficients;
and updating the model parameters of the teacher network into the target model parameters.
In an embodiment, the processor is configured to, when determining the target model parameter of the teacher network according to the first model parameter, the second model parameter and the preset adjustment coefficient, implement:
Calculating the product of the second model parameter and the preset adjusting coefficient to obtain a model parameter gain value;
and accumulating the first model parameters and the model parameter gain values to obtain target model parameters of the teacher network.
In an embodiment, when implementing expansion of the first sample data set to obtain a third sample data set, the processor is configured to implement:
determining a first number of first sample data in the first sample data set and a second number of second sample data in the second sample data set;
And determining expansion multiplying power according to the second number and the first number, and expanding the first sample data set according to the expansion multiplying power to obtain a third sample data set.
It should be noted that, for convenience and brevity of description, specific working procedures of the above-described computer device may refer to corresponding procedures in the foregoing embodiment of the image segmentation-based cardiac image processing method, and will not be described in detail herein.
From the above description of embodiments, it will be apparent to those skilled in the art that the present application may be implemented in software plus a necessary general hardware platform. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the embodiments or some parts of the embodiments of the present application.
Embodiments of the present application also provide a computer readable storage medium having a computer program stored thereon, the computer program including program instructions that, when executed, implement a method for performing the method for image segmentation based cardiac image processing according to various embodiments of the present application.
Wherein the computer readable storage medium may be volatile or nonvolatile. The computer readable storage medium may be an internal storage unit of the computer device according to the foregoing embodiment, for example, a hard disk or a memory of the computer device. The computer readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a smart media card (SMC), a Secure Digital (SD) card, or a flash card provided on the computer device.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created from the use of blockchain nodes, and the like.
The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanism, encryption algorithm and the like. The blockchain (Blockchain), essentially a de-centralized database, is a string of data blocks that are generated in association using cryptographic methods, each of which contains information from a batch of network transactions for verifying the validity (anti-counterfeit) of its information and generating the next block. The blockchain may include a blockchain underlying platform, a platform product services layer, an application services layer, and the like.
It is to be understood that the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations. It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments. While the application has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and substitutions of equivalents may be made and equivalents will be apparent to those skilled in the art without departing from the scope of the application. Therefore, the protection scope of the application is subject to the protection scope of the claims.
Claims (7)
1. A cardiac image processing method based on image segmentation, comprising:
acquiring a first sample data set and a second sample data set, wherein data in the first sample data set contains labels, and data in the second sample data set is unlabeled;
Expanding the first sample data set to obtain a third sample data set, and acquiring a heart segmentation model to be trained, wherein the heart segmentation model comprises a student network and a teacher network;
Performing iterative training on the heart segmentation model according to the second sample data set and the third sample data set to obtain a target heart segmentation model, wherein model parameters of the teacher network are updated based on an exponential weighted average algorithm and model parameters of the student network;
Acquiring a target heart image to be segmented, and inputting the target heart image into the target heart segmentation model for image segmentation to obtain a target heart segmentation image;
wherein said iteratively training said heart segmentation model from said second sample dataset and said third sample dataset comprises:
alternately selecting a piece of sample data from the third sample data set and the second sample data set as target sample data;
inputting the heart image in the target sample data into the student network for processing to obtain a first prediction attention map and a first prediction heart segmentation map;
Inputting the heart image in the target sample data into the teacher network for processing to obtain a second prediction attention map and a second prediction heart segmentation map;
Calculating an error between the first predicted heart segmentation map and the second predicted heart segmentation map based on a first loss function to obtain a first loss value;
Calculating an error between the first predicted attention profile and the second predicted attention profile based on a second loss function, resulting in a second loss value;
If the target sample data is the second sample data, carrying out weighted summation on the first loss value and the second loss value to obtain a model loss value;
If the target sample data is the first sample data, calculating cross entropy between the heart segmentation image marked in the first sample data and the first prediction heart segmentation image based on a third loss function to obtain a third loss value; processing the heart segmentation image marked in the first sample data to obtain a target attention map, and calculating an error between the target attention map and the first prediction attention map based on a fourth loss function to obtain a fourth loss value; carrying out weighted summation on the first loss value, the second loss value, the third loss value and the fourth loss value to obtain a model loss value;
determining whether the heart segmentation model converges according to the model loss value;
If the heart segmentation model is not converged, updating model parameters of the heart segmentation model;
returning to the step of alternately selecting a piece of sample data from the third sample data set and the second sample data set as target sample data until the heart segmentation model converges.
2. The cardiac image processing method according to claim 1, wherein the updating of the model parameters of the cardiac segmentation model comprises:
updating model parameters of the student network based on a back-propagation algorithm;
acquiring first model parameters of the teacher network at the current iteration round and second model parameters of the student network at the previous iteration round;
determining target model parameters of the teacher network according to the first model parameters, the second model parameters, and a preset adjustment coefficient; and
updating the model parameters of the teacher network to the target model parameters.
3. The cardiac image processing method according to claim 2, wherein the determining of the target model parameters of the teacher network according to the first model parameters, the second model parameters, and the preset adjustment coefficient comprises:
calculating the product of the second model parameters and the preset adjustment coefficient to obtain a model parameter gain value; and
accumulating the first model parameters and the model parameter gain value to obtain the target model parameters of the teacher network.
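The product-and-accumulate update in claims 2 and 3 matches the exponentially weighted average used by mean-teacher training. A minimal sketch, under the assumption that the adjustment coefficient plays the role of 1 − α and the "first model parameters" already carry the decay factor α, as in the standard formulation:

```python
def update_teacher(teacher_params, student_params, alpha=0.99):
    """Exponentially-weighted-average (mean-teacher) update.

    One plausible reading of the claim: the "model parameter gain value"
    is (1 - alpha) * student, accumulated onto the decayed teacher
    parameter alpha * teacher. alpha is an assumed decay, not a value
    given in the patent.
    """
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_params, student_params)]
```

With `alpha` close to 1 the teacher changes slowly across iterations, which keeps the consistency targets for the student stable.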
4. The cardiac image processing method according to any one of claims 1-3, wherein the expanding of the first sample data set to obtain the third sample data set comprises:
determining a first number of pieces of first sample data in the first sample data set and a second number of pieces of second sample data in the second sample data set; and
determining an expansion ratio according to the second number and the first number, and expanding the first sample data set according to the expansion ratio to obtain the third sample data set.
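One plausible reading of the expansion step, assuming the ratio is the ceiling of the unlabeled count over the labeled count and that "expanding" means simple replication of the labeled set (the patent does not fix either detail):

```python
import math

def expand_labeled_set(first_set, second_set):
    """Replicate the labeled (first) set so its size roughly matches the
    unlabeled (second) set, balancing the alternating sampling.

    Returns the expanded (third) set and the expansion ratio used.
    """
    ratio = max(1, math.ceil(len(second_set) / len(first_set)))
    third_set = first_set * ratio  # replicate labeled samples ratio times
    return third_set, ratio
```

Balancing the two sets this way lets the training loop alternate one labeled and one unlabeled sample per step without exhausting the (usually much smaller) labeled set early.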
5. A cardiac image processing apparatus, characterized in that the cardiac image processing apparatus comprises:
an acquisition module, configured to acquire a first sample data set and a second sample data set, wherein data in the first sample data set is labeled and data in the second sample data set is unlabeled;
a data expansion module, configured to expand the first sample data set to obtain a third sample data set;
the acquisition module being further configured to acquire a heart segmentation model to be trained, wherein the heart segmentation model comprises a student network and a teacher network;
a model training module, configured to iteratively train the heart segmentation model according to the second sample data set and the third sample data set to obtain a target heart segmentation model, wherein model parameters of the teacher network are updated based on an exponentially weighted average algorithm and the model parameters of the student network; and
an image segmentation module, configured to acquire a target heart image to be segmented and input the target heart image into the target heart segmentation model for image segmentation to obtain a target heart segmentation image;
wherein the model training module is further configured to:
alternately select a piece of sample data from the third sample data set and the second sample data set as target sample data;
input the heart image in the target sample data into the student network for processing to obtain a first predicted attention map and a first predicted heart segmentation map;
input the heart image in the target sample data into the teacher network for processing to obtain a second predicted attention map and a second predicted heart segmentation map;
calculate an error between the first predicted heart segmentation map and the second predicted heart segmentation map based on a first loss function to obtain a first loss value;
calculate an error between the first predicted attention map and the second predicted attention map based on a second loss function to obtain a second loss value;
if the target sample data is the second sample data, perform a weighted summation of the first loss value and the second loss value to obtain a model loss value;
if the target sample data is the first sample data, calculate a cross entropy between the heart segmentation map labeled in the first sample data and the first predicted heart segmentation map based on a third loss function to obtain a third loss value, process the labeled heart segmentation map to obtain a target attention map, calculate an error between the target attention map and the first predicted attention map based on a fourth loss function to obtain a fourth loss value, and perform a weighted summation of the first, second, third, and fourth loss values to obtain a model loss value;
determine whether the heart segmentation model has converged according to the model loss value;
if the heart segmentation model has not converged, update the model parameters of the heart segmentation model; and
return to the step of alternately selecting a piece of sample data from the third sample data set and the second sample data set as the target sample data, until the heart segmentation model converges.
6. A computer device, characterized in that it comprises a processor, a memory, and a computer program stored in the memory and executable by the processor, wherein the computer program, when executed by the processor, implements the steps of the image-segmentation-based cardiac image processing method according to any one of claims 1-4.
7. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the image-segmentation-based cardiac image processing method according to any one of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111138992.0A | 2021-09-27 | 2021-09-27 | Image segmentation-based heart image processing method, device, equipment and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113850826A | 2021-12-28 |
CN113850826B | 2024-07-19 |
Family
ID=78980641
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111138992.0A | Image segmentation-based heart image processing method, device, equipment and medium | 2021-09-27 | 2021-09-27 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113850826B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021056765A1 (en) * | 2019-09-24 | 2021-04-01 | Beijing SenseTime Technology Development Co., Ltd. | Image processing method and related apparatus |
WO2021179205A1 (en) * | 2020-03-11 | 2021-09-16 | Shenzhen Institutes of Advanced Technology | Medical image segmentation method, medical image segmentation apparatus and terminal device |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112767320A (en) * | 2020-12-31 | 2021-05-07 | Ping An Technology (Shenzhen) Co., Ltd. | Image detection method, image detection device, electronic equipment and storage medium |
CN113160230A (en) * | 2021-03-26 | 2021-07-23 | Lenovo (Beijing) Co., Ltd. | Image processing method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Jiang et al. | A novel negative-transfer-resistant fuzzy clustering model with a shared cross-domain transfer latent space and its application to brain CT image segmentation | |
CN110348515B (en) | Image classification method, image classification model training method and device | |
Wang et al. | Automated segmentation of dental CBCT image with prior‐guided sequential random forests | |
CN112365980B (en) | Brain tumor multi-target auxiliary diagnosis and prospective treatment evolution visualization method and system | |
WO2021186592A1 (en) | Diagnosis assistance device and model generation device | |
Xu et al. | Class-incremental domain adaptation with smoothing and calibration for surgical report generation | |
US11430123B2 (en) | Sampling latent variables to generate multiple segmentations of an image | |
CN113688912B (en) | Method, device, equipment and medium for generating countermeasure sample based on artificial intelligence | |
CN111192660B (en) | Image report analysis method, device and computer storage medium | |
US20230107505A1 (en) | Classifying out-of-distribution data using a contrastive loss | |
CN113569891A (en) | Training data processing device, electronic equipment and storage medium of neural network model | |
CN113421228A (en) | Thyroid nodule identification model training method and system based on parameter migration | |
Khan et al. | SkinNet‐ENDO: Multiclass skin lesion recognition using deep neural network and Entropy‐Normal distribution optimization algorithm with ELM | |
Biswas et al. | Data augmentation for improved brain tumor segmentation | |
CN114743037A (en) | Deep medical image clustering method based on multi-scale structure learning | |
CN117058307A (en) | Method, system, equipment and storage medium for generating heart three-dimensional nuclear magnetic resonance image | |
CN116721289A (en) | Cervical OCT image classification method and system based on self-supervision cluster contrast learning | |
CN116884623A (en) | Medical rehabilitation prediction system based on laser scanning imaging | |
CN114169467A (en) | Image annotation method, electronic device and storage medium | |
CN113850826B (en) | Image segmentation-based heart image processing method, device, equipment and medium | |
CN115239740A (en) | GT-UNet-based full-center segmentation algorithm | |
CN116485853A (en) | Medical image registration method and device based on deep learning neural network | |
CN116823848A (en) | Multi-mode brain tumor segmentation method based on image fusion technology | |
CN115762721A (en) | Medical image quality control method and system based on computer vision technology | |
US11861846B2 (en) | Correcting segmentation of medical images using a statistical analysis of historic corrections |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
GR01 | Patent grant | | |