CN112116625A - Automatic heart CT image segmentation method, device and medium based on contradiction marking method - Google Patents


Info

Publication number
CN112116625A
CN112116625A
Authority
CN
China
Prior art keywords
contradiction
annotation
heart
images
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010862313.3A
Other languages
Chinese (zh)
Inventor
陈泓昊
周昌昊
杜文亮
田小林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Macau Univ of Science and Technology
Original Assignee
Macau Univ of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Macau Univ of Science and Technology filed Critical Macau Univ of Science and Technology
Priority to CN202010862313.3A
Publication of CN112116625A
Legal status: Pending

Classifications

    • G06T7/194: Image analysis; segmentation; edge detection involving foreground-background segmentation
    • G06N3/045: Neural networks; architecture; combinations of networks
    • G06N3/08: Neural networks; learning methods
    • G06T5/40: Image enhancement or restoration using histogram techniques
    • G06T7/136: Image analysis; segmentation; edge detection involving thresholding
    • G06T2207/10081: Image acquisition modality; computed x-ray tomography [CT]
    • G06T2207/20081: Special algorithmic details; training; learning
    • G06T2207/20104: Interactive definition of region of interest [ROI]
    • G06T2207/30048: Subject of image; heart; cardiac
    • G06T2207/30204: Subject of image; marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention provides a technical scheme for a method, device and medium for automatic segmentation of cardiac CT images based on a contradiction labeling method, comprising: performing low-precision contradiction annotation on a cardiac CT image to obtain a first contradiction annotation set; taking similar annotations contained in the contradiction annotation maps of different but adjacent frames of a plurality of cardiac CT images as a second contradiction annotation set; warming up a U-Net-based deep neural network on the accurately labeled portion of the data, and mixing the first and second contradiction annotation sets to obtain a mixed contradiction annotation set; training a fully convolutional neural network on the mixed contradiction annotation set, segmenting the foreground and background of the cardiac CT images until the network converges; and computing gray-level histograms of the foreground and background of the CT cardiac series images segmented by the U-Net network, taking the valley between the histogram peaks as the segmentation threshold, and using it to define and segment the region of interest of the cardiac CT images. Beneficial effects of the invention: the network training process is faster, and the time cost of preparing annotations is relatively low, so overall cost is reduced.

Description

Automatic heart CT image segmentation method, device and medium based on contradiction marking method
Technical Field
The invention relates to the field of computing, and in particular to a method, device and medium for automatic segmentation of cardiac CT images based on a contradiction labeling method.
Background
According to World Health Organization data, nearly 18 million people die each year from cardiovascular disease (heart disease), an estimated 31% of all deaths worldwide. Cardiac computed tomography (CT) images are widely used in radiation therapy planning, including for cardiac diseases, and automatic segmentation of cardiac CT images has become a popular research topic in recent years. Complete heart segmentation based on medical CT images refers to tissue segmentation of the whole sequence of CT cardiac images; the segmentation result is of great significance for assisting doctors in diagnosing cardiovascular diseases and guiding surgery.
Due to the complexity of cardiac samples and severe noise interference, the boundaries of cardiac tissue structures in CT images are often blurred. Two main techniques currently exist for automatic segmentation of cardiac CT medical images: active contour models, which are widely used for segmenting lung CT images, and machine learning methods such as convolutional neural networks (CNNs). Some studies report segmenting a single atrium or ventricle from a cardiac image, but segmentation results for whole cardiac CT sequence images are rarely reported. At present, most full-sequence CT heart segmentation is semi-automatic, completed through human-computer interaction under the guidance of a professional physician, and labeling a mature, complete and convincing public data set consumes a large amount of cost.
Because the diversity and complexity of cardiac samples make labeling the relevant medical images extremely difficult, annotating and accurately segmenting the tissue structure of CT cardiac images has become a major challenge.
Disclosure of Invention
The invention aims to solve at least one of the technical problems in the prior art by providing a method, device and medium for automatic segmentation of cardiac CT images based on a contradiction labeling method, which are simple to implement and reduce cost while ensuring labeling precision.
The technical scheme of the invention comprises an automatic cardiac CT image segmentation method based on a contradiction labeling method, characterized by: S100, performing low-precision contradiction annotation on a cardiac CT image to obtain two contradiction annotation maps with similar annotation information for the same cardiac CT image, and taking the contradiction annotations contained in the contradiction annotation maps of the same frame as a first contradiction annotation set; S200, taking similar annotations contained in the contradiction annotation maps of different but adjacent frames of the plurality of cardiac CT images as a second contradiction annotation set; S300, warming up a U-Net-based deep neural network on the accurately labeled portion of the data, and mixing the first and second contradiction annotation sets to obtain a mixed contradiction annotation set; S400, training a fully convolutional neural network on the mixed contradiction annotation set, segmenting the foreground and background of the cardiac CT images until the network converges; S500, computing gray-level histograms of the foreground and background of the CT cardiac series images segmented by the U-Net network, taking the lowest valley between the two histogram peaks as the segmentation threshold, and defining and segmenting the region of interest of the cardiac CT images.
According to the automatic cardiac CT image segmentation method based on the contradiction labeling method, the contradiction annotations in the first contradiction annotation set amount to 5% of all annotation points of the same frame of the plurality of cardiac CT images.
According to the automatic cardiac CT image segmentation method based on the contradiction labeling method, the similar annotation information of the second contradiction annotation set amounts to 30% of all annotation points of different but adjacent frames of the plurality of cardiac CT images.
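The 5% and 30% contradiction rates above can be made concrete with a small sketch; the binary-mask format and the helper `contradiction_rate` below are illustrative assumptions, not part of the patent:

```python
import numpy as np

def contradiction_rate(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Fraction of annotated points on which two label masks disagree."""
    if mask_a.shape != mask_b.shape:
        raise ValueError("masks must share the same shape")
    labeled = (mask_a > 0) | (mask_b > 0)        # union of annotated points
    disagreement = (mask_a > 0) != (mask_b > 0)  # points marked in only one set
    return float(disagreement.sum() / max(labeled.sum(), 1))

# Two toy 4x4 masks that disagree at one of four annotated points
a = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
b = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
rate = contradiction_rate(a, b)  # 1 disagreement / 4 annotated points = 0.25
```

Under this reading, a pair of same-frame annotations would be kept near a 5% rate, and adjacent-frame pairs near 30%.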
According to the automatic cardiac CT image segmentation method based on the contradiction labeling method, low-precision contradiction annotation is defined as: contradiction annotation using an annotation set with fewer annotation points than a standard annotation set, or using an annotation set that has not been fully verified by experts in the relevant field.
According to the automatic cardiac CT image segmentation method based on the contradiction labeling method, the pre-training comprises: performing high-generalization segmentation training using the mixed contradiction annotation set.
According to the automatic cardiac CT image segmentation method based on the contradiction labeling method, the fully convolutional neural network model is a U-Net fully convolutional neural network model.
According to the automatic cardiac CT image segmentation method based on the contradiction labeling method, S400 comprises: training the U-Net fully convolutional neural network model on the mixed contradiction annotation set with a decaying learning rate, each decay step reducing the learning rate to 10% of its previous value.
According to the automatic cardiac CT image segmentation method based on the contradiction labeling method, S500 comprises: computing the discrete foreground and background data of the CT cardiac series images segmented by the U-Net fully convolutional neural network model, taking the gray-level histogram (hist) of the model output as the statistic to obtain two corresponding peaks, and setting the threshold of interest at the valley between the two peaks (the valley being about 10% of the first peak) to complete automatic segmentation of the regions of interest of the cardiac CT images.
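A minimal sketch of the S500 thresholding step, assuming 8-bit gray levels; the peak-finding heuristic below is an assumption, since the text only specifies taking the lowest valley between the two histogram peaks:

```python
import numpy as np

def valley_threshold(pixels: np.ndarray, bins: int = 256) -> float:
    """Pick the lowest valley between the two gray-level histogram peaks
    as the segmentation threshold (sketch of S500)."""
    hist, edges = np.histogram(pixels.ravel(), bins=bins, range=(0, 256))
    p1 = int(np.argmax(hist))                        # dominant peak
    masked = hist.copy()
    masked[max(p1 - 10, 0):min(p1 + 10, bins)] = 0   # suppress its neighborhood
    p2 = int(np.argmax(masked))                      # second peak
    left, right = sorted((p1, p2))
    valley = left + int(np.argmin(hist[left:right + 1]))
    return float(edges[valley])

# Synthetic bimodal gray levels: dark background near 60, bright foreground near 190
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(60, 8, 5000),
                         rng.normal(190, 8, 5000)]).clip(0, 255)
t = valley_threshold(pixels)  # lands in the gap between the two modes
```

On real CT series the histogram would be taken over the network's segmented foreground and background, as the claim describes.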
The invention also includes an automatic cardiac CT image segmentation device based on the contradiction labeling method, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements any of the above method steps when executing the computer program.
The invention also includes a computer-readable storage medium storing a computer program which, when executed by a processor, implements any of the above method steps.
Beneficial effects of the invention: segmentation of the target region of interest over the full CT cardiac series meets the requirements, and training the relevant neural network with the contradiction labeling method makes the whole training process faster. This is because the contradiction labeling method needs neither a large annotation set nor very high annotation precision: the time cost of preparing the annotation data set is relatively low, that is, cost is reduced while segmentation is completed.
Drawings
The invention is further described below with reference to the accompanying drawings and examples.
FIG. 1 shows a general flow diagram according to an embodiment of the invention.
FIG. 2 is a diagram illustrating a contradiction annotation set according to an embodiment of the present invention.
Fig. 3 is a schematic diagram before and after contradiction labeling according to an embodiment of the present invention.
Fig. 4 is a view illustrating an example of a CT tomographic image according to an embodiment of the present invention.
Fig. 5 is a diagram illustrating a result of region of interest segmentation according to an embodiment of the present invention.
FIG. 6 shows a diagram of the media of an apparatus according to an embodiment of the invention.
Detailed Description
Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout.
In the description of the present invention, "several" means one or more and "a plurality of" means two or more; terms such as "greater than" and "less than" are understood as excluding the stated number, while terms such as "above" and "within" are understood as including it.
In the description of the present invention, the consecutive numbering of method steps is only for convenience of examination and understanding; in view of the overall technical solution and the logical relationships between the steps, the order of implementation may be adjusted without affecting the technical effect achieved.
FIG. 1 shows a general flow diagram according to an embodiment of the invention. The process comprises the following steps: S100, performing low-precision contradiction annotation on a cardiac CT image to obtain two contradiction annotation maps with similar annotation information for the same cardiac CT image, and taking the contradiction annotations contained in the contradiction annotation maps of the same frame as a first contradiction annotation set; S200, taking similar annotations contained in the contradiction annotation maps of different but adjacent frames of the plurality of cardiac CT images as a second contradiction annotation set; S300, warming up a U-Net-based deep neural network on the accurately labeled portion of the data, and mixing the first and second contradiction annotation sets to obtain a mixed contradiction annotation set; S400, training a fully convolutional neural network on the mixed contradiction annotation set, segmenting the foreground and background of the cardiac CT images until the network converges; S500, computing gray-level histograms of the foreground and background of the CT cardiac series images segmented by the U-Net network, taking the lowest valley between the two histogram peaks as the segmentation threshold, and defining and segmenting the region of interest of the cardiac CT images. The annotated cardiac CT images are generally part of a cardiac CT series.
For the technical scheme of FIG. 1, the invention also provides the following implementation details:
1. Prepare contradiction annotation maps with two sets of similar annotation information for the same image (a single slice) of the CT cardiac series, containing about 5% contradictory annotations, as contradiction annotation set 1.
2. Prepare similar annotation information from different images of the CT cardiac series, containing about 30% contradictory annotations (the annotation information of similar images differs considerably), as contradiction annotation set 2.
3. Mix the two previously prepared contradiction annotation sets. Warm up on the accurately labeled images with an ultra-small learning rate, then carry out complete training with the full data set containing contradictions.
4. Train the U-Net network with the prepared contradiction annotation data set, segmenting the foreground and background of the CT cardiac series images until the network converges (the loss is essentially unchanged). The whole U-Net training process runs about 600 epochs, with the learning rate decayed three times (to 0.1, 0.01 and 0.001) using an SGD optimizer with momentum. Once the foreground/background training segmentation accuracy of the CT cardiac series images exceeds 98%, training is terminated.
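The decay schedule in step 4 (0.1, 0.01, 0.001 over roughly 600 epochs) can be written as a simple step function; the milestone epochs below are an assumption, since the text gives only the three rates:

```python
def lr_at_epoch(epoch: int, base_lr: float = 0.1, total_epochs: int = 600) -> float:
    """Step decay: the learning rate drops to 10% of its previous value at
    each assumed milestone (thirds of training), giving 0.1 -> 0.01 -> 0.001."""
    milestones = (total_epochs // 3, 2 * total_epochs // 3)  # assumed: 200, 400
    drops = sum(epoch >= m for m in milestones)
    return base_lr * (0.1 ** drops)
```

In a framework such as PyTorch, the same effect would typically come from a milestone-based scheduler attached to the SGD-with-momentum optimizer the text mentions.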
5. Because the contradiction labeling method uses annotation data containing contradictions, it generalizes well, and setting a validation set during training is not necessary. A validation set does not conflict with the contradiction labeling method, but since it would contain more contradictory data, it would hinder evaluation of the true precision.
6. Count the distribution of the foreground and background of the CT cardiac series images segmented by the U-Net network. Experiments show that taking the histogram (hist) of the model output as the statistic yields two peaks, with the trough between them at about 10% of the first peak. Setting the threshold there completes automatic segmentation of the cardiac CT region of interest.
FIG. 2 is a diagram illustrating a contradiction annotation set according to an embodiment of the present invention. From left to right: 2a (cardiac CT image), 2b (annotation set one), and 2c (annotation set two).
The contradiction labeling method aims to overcome the difficulty of labeling large numbers of high-precision CT cardiac sequence images. Few public data sets for CT cardiac segmentation exist on the Internet, so there are not enough labeled data sets available for developing segmentation algorithms. Although high-precision annotation data can be produced with professional knowledge of CT cardiac images, the amount of data to be labeled is large, the samples are diverse and complex, and accurate labeling takes a long time, so the labeling work for CT cardiac images is very difficult. The contradiction labeling method is designed to tolerate low labeling precision (different labeling results need not be fully consistent) and to train the network on a small amount of data; this reduces labeling difficulty and cost, and can avoid the over-fitting problem that a small training data set would otherwise cause. That is, the design concept of the contradiction labeling method is to train the neural network on small data sets that contain contradictory labels. The contradiction data used by the design mainly comprise two types: contradiction data of the same image, and contradiction data of different images:
the same image uses two different sets of contradictory but similar annotations, as shown in FIG. 2. The images (2b) and (2c) in fig. 2 are both segmentation label sets of the same image (2a), which are similar to each other but slightly different. Careful observation shows that the two sets of annotation data displayed in images (2b) and (2c) are substantially identical but there is indeed a single or small difference, e.g. the lower left corner is highlighted, and the middle area is divided into shapes, etc. that both sets of annotation points are not exactly the same. These differences are the contradictions between the two label sets, and certainly these differences or contradictions should not be too large. The purpose of this labeling method is to force neural networks to pay more attention to commonality between images when learning. And through automatic learning of the neural network, the two labeling results are combined together to find better edge information (e.g., make the edge smoother) than the labeling results. The use of two different, contradictory but similar label sets also prevents over-training of the network, preventing problems such as over-fitting of the network caused by small data sets, insufficient labeling accuracy, etc.
Fig. 3 is a schematic diagram before and after contradiction labeling according to an embodiment of the present invention. Each column, from left to right, is 3a (cardiac CT image) and 3b (annotation image). Different, adjacent images carry different segmentation annotation sets. Adjacent layers of a CT cardiac sequence are highly similar, and similar slices in different cardiac sequences also show a certain similarity. Giving similar images different annotation sets better forces the neural network to learn the commonalities between images; segmentation errors are corrected through the network's automatic learning, preventing over-fitting and similar problems. This of course causes some loss of segmentation accuracy, but experimental results show that the loss is acceptable given that the accuracy of the annotation data set itself is not high. In some details, this loss of segmentation accuracy can even be used to correct the segmentation annotation set. Another benefit of choosing similar rather than identical images is that the usable data set is expanded; labeling only identical images differently could lead to over-fitting in actual training.
The above two types of contradictory annotation data reflect the nature of the contradiction labeling method. The design concept is that only a relatively small amount of low-precision annotation needs to be produced, and a model with good generalization capability can be trained quickly. To verify the effectiveness of the contradiction labeling method, the two types of contradiction annotation data were constructed and a model was trained experimentally to test the theory. For a simple verification, the training process included no other data augmentation and did not use early stopping or similar methods; only the original U-Net model was trained, until the model reached limit convergence (the loss is essentially unchanged). This verifies the generalization ability of models trained from the two types of contradiction data above. Experimental results show that a neural network trained by the contradiction labeling method can automatically generate the foreground part of a medical image (namely the main region of interest to a doctor) from only a small number of low-precision labeled images.
High and low precision of an annotation set are relative concepts; the definition of annotation-set precision can vary with the object being processed and the requirements of the segmentation purpose, and there is no uniform definition. Any data set that has not been fully approved by experts in the relevant field and has not been endorsed by a relevant organization or expert should be regarded as a low-precision annotation set. The low-precision annotation sets used in the present method for automatic region-of-interest segmentation of cardiac CT images have relatively few annotation points and have not been fully reviewed by experts in the relevant field: visual inspection can reveal differences between two manual labelings of the same image, or between the labelings of different images. These two types of low-precision annotation sets are the contradiction annotation data sets used by the contradiction labeling method to train the network.
A novel CT cardiac image segmentation method is designed on the basis of the contradiction labeling method. It separates a CT cardiac image well into foreground (ROI) and background. The new method builds a contradiction annotation set, with the contradiction labeling method at its core, to complete network training, and then completes region-of-interest segmentation through a threshold. With the segmentation threshold at 10% (the trough between the two histogram peaks), counting the foreground/background distribution basically meets the segmentation requirement.
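Applying the chosen threshold to produce the region-of-interest mask is then a one-liner; the output format below (a binary `uint8` mask) is an assumption for illustration:

```python
import numpy as np

def extract_roi(image: np.ndarray, threshold: float) -> np.ndarray:
    """Binary ROI mask: foreground where the gray level reaches the threshold."""
    return (image >= threshold).astype(np.uint8)

ct_slice = np.array([[10.0, 200.0], [30.0, 180.0]])
mask = extract_roi(ct_slice, 100.0)  # [[0, 1], [0, 1]]
```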
Analysis and statistics of the model output show that the trained CT cardiac segmentation model has the following characteristics. Because of the limited accuracy of the original annotations, training has some influence on the quality of detail segmentation; however, thanks to the enhanced generalization capability, the trained model segments image foreground and background well, so a complete foreground/region-of-interest segmentation result can be obtained. Another key point is that contradiction training can serve as a pre-training strategy, improving generalization over limit-fitting to accurately labeled data.
Fig. 4 shows examples of CT tomographic images according to an embodiment of the present invention. From left to right: 4a (tomogram at layer 50), 4b (tomogram at layer 140), and 4c (tomogram at layer 270).
Referring to fig. 4, some images of a CT cardiac data set were manually labeled and used as a training set to train the neural network, finally obtaining high segmentation accuracy.
To briefly introduce the data set and the specific annotation process used in the experiment (the principle for selecting contradictions): the CT cardiac data used in the experiment contain approximately 300 slices per heart, of which about 220 frames contain contrast agent. The images to be segmented are those containing contrast agent, as shown in fig. 4:
The white areas are contrast-agent areas. In this experiment, 420 valid CT cardiac annotation images were finally obtained. These 420 images contain contradictory annotations of two types: the first consists of two sets of similar annotation information from the same image, with about 5% contradictory annotations; the second consists of similar annotation information from different images, with about 30% contradictory annotations (similar images differ more in annotation information).
Results and analysis: FIG. 5 shows the region-of-interest segmentation results according to the embodiment of the present invention. The columns, from left to right, are 5a (cardiac CT image), 5b (contradiction labeling training result), and 5c (region-of-interest segmentation result).
The 420 images were trained with the original U-Net model, while data outside the annotation set were used as a test to verify the training results. Because the contradiction labeling method uses annotation data containing contradictions, no validation set was used during training, to avoid the validation set correcting the training result; in this way the generalization effect of the contradiction labeling method is exhibited as fully as possible.
The whole U-Net model was trained for 600 epochs in total, with the learning rate decayed to 10% of its previous value three times. The final training segmentation accuracy was 98.06%. Verification on the test set shows that region-of-interest extraction is complete and the segmentation effect is good. The extraction and segmentation results for the region of interest are shown in FIG. 5.
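Pixel accuracy of the kind reported here (98.06%) is commonly computed as the fraction of pixels whose prediction matches the label; this metric definition is an assumption, as the text does not define "segmentation accuracy":

```python
import numpy as np

def pixel_accuracy(pred: np.ndarray, label: np.ndarray) -> float:
    """Fraction of pixels on which prediction and label agree."""
    return float((pred == label).mean())

pred = np.array([[1, 1], [0, 1]])
label = np.array([[1, 1], [0, 0]])
acc = pixel_accuracy(pred, label)  # 3 of 4 pixels agree -> 0.75
```

Note that with contradictory labels such accuracy cannot reach 100%, consistent with the observation in the following paragraph.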
The training data set was labeled by the authors themselves. Because the data set is of low precision, a quantitative comparative analysis of the segmentation results cannot yet be completed, and contradiction labels cannot yet be applied to other existing segmentation methods. Since a data set labeled with the contradiction labeling method contains contradictory segmentation results, the upper limit of the final training accuracy is not 100%. This satisfies the law of contradiction: different labeling results for the same CT cardiac image cannot both be judged correct at the same time. That is, the contradictory attributes stimulate the unexplained automatic-learning features of the deep neural network, yielding good segmentation results.
FIG. 6 shows a diagram of the device and medium according to an embodiment of the invention. The device comprises a memory 100 and a processor 200; the memory 100 stores data and a computer program which, when executed by the processor 200, performs: performing low-precision contradiction annotation on a cardiac CT image, and taking the contradiction annotations contained in a contradiction annotation map as a first contradiction annotation set; taking similar annotations contained in the contradiction annotation maps as a second contradiction annotation set; mixing the first and second contradiction annotation sets to obtain a mixed contradiction annotation set, and pre-training on it; training a fully convolutional neural network on the mixed contradiction annotation set, segmenting the foreground and background of the cardiac CT images until the network converges; and counting the foreground/background distribution of the CT cardiac series images segmented by the U-Net network, and defining and segmenting the region of interest of the cardiac CT images.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the gist of the present invention.

Claims (10)

1. An automatic cardiac CT image segmentation method based on a contradiction labeling method, characterized by comprising:
s100, performing low-precision contradiction annotation on a cardiac CT image to obtain contradiction annotation maps carrying two similar sets of annotation information for the same cardiac CT image, and taking the contradiction annotations contained in the contradiction annotation maps for the same frame of image as a first contradiction annotation set;
s200, taking the similar annotations contained in the contradiction annotation maps of different but adjacent frames among the plurality of cardiac CT images as a second contradiction annotation set;
s300, pre-training the U-Net-based deep neural network with the accurately annotated data portion, and mixing the first contradiction annotation set and the second contradiction annotation set to obtain a mixed contradiction annotation set;
s400, training a fully convolutional neural network with the mixed contradiction annotation set to segment the foreground and background of the cardiac CT images until the fully convolutional neural network converges;
s500, computing gray-level histograms of the foreground and background of the CT cardiac series images segmented by the U-Net network, taking the lowest valley between the two peaks of the histogram as the segmentation threshold, and setting and segmenting the region of interest of the cardiac CT images.
2. The method of claim 1, wherein the contradiction annotations in the first contradiction annotation set amount to 5% of all annotation points in the same frame of the cardiac CT images.
3. The method of claim 1, wherein the similar annotation information in the second contradiction annotation set amounts to 30% of all annotation points in different but adjacent frames of the plurality of cardiac CT images.
4. The method of claim 1, wherein the low-precision contradiction annotation is defined as: performing contradiction annotation with an annotation set containing fewer annotation points than a standard annotation set, or with an annotation set that has not been fully verified by experts in the relevant field.
5. The method of claim 1, wherein the pre-training comprises: performing high-generalization segmentation training with the mixed contradiction annotation set.
6. The method of claim 1, wherein the fully convolutional neural network model is set as a U-Net fully convolutional neural network model.
7. The method for automatic cardiac CT image segmentation based on a contradiction labeling method according to claim 6, wherein S400 comprises:
training the U-Net fully convolutional neural network model with the mixed contradiction annotation set using decaying learning, wherein the learning-rate decay is 10%.
8. The method for automatic cardiac CT image segmentation based on a contradiction labeling method according to claim 6, wherein S500 comprises:
computing the discrete foreground and background data of the CT cardiac series images segmented by the U-Net fully convolutional neural network model, performing statistics on the gray-level histogram hist output by the model to obtain the two corresponding peaks, setting the interest threshold at the valley between the two peaks, wherein the valley is 10% of the first peak, and completing automatic segmentation of the region of interest of the cardiac CT images.
9. An apparatus for automatic segmentation of cardiac CT images based on contradictory labeling, the apparatus comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor when executing the computer program performs the method steps of any of claims 1-8.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method steps of any one of claims 1 to 8.
CN202010862313.3A 2020-08-25 2020-08-25 Automatic heart CT image segmentation method, device and medium based on contradiction marking method Pending CN112116625A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010862313.3A CN112116625A (en) 2020-08-25 2020-08-25 Automatic heart CT image segmentation method, device and medium based on contradiction marking method


Publications (1)

Publication Number Publication Date
CN112116625A true CN112116625A (en) 2020-12-22

Family

ID=73805243


Country Status (1)

Country Link
CN (1) CN112116625A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106599051A (en) * 2016-11-15 2017-04-26 北京航空航天大学 Method for automatically annotating image on the basis of generation of image annotation library
CN109993758A (en) * 2019-04-23 2019-07-09 北京华力兴科技发展有限责任公司 Dividing method, segmenting device, computer equipment and storage medium
CN110910404A (en) * 2019-11-18 2020-03-24 西南交通大学 Anti-noise data breast ultrasonic nodule segmentation method
CN111166362A (en) * 2019-12-31 2020-05-19 北京推想科技有限公司 Medical image display method and device, storage medium and electronic equipment
CN111242956A (en) * 2020-01-09 2020-06-05 西北工业大学 U-Net-based ultrasonic fetal heart and fetal lung deep learning joint segmentation method
CN111354002A (en) * 2020-02-07 2020-06-30 天津大学 Kidney and kidney tumor segmentation method based on deep neural network
CN111460766A (en) * 2020-03-31 2020-07-28 云知声智能科技股份有限公司 Method and device for identifying contradictory speech block boundaries
CN111539956A (en) * 2020-07-07 2020-08-14 南京安科医疗科技有限公司 Cerebral hemorrhage automatic detection method based on brain auxiliary image and electronic medium


Non-Patent Citations (2)

Title
LIU Junbo et al.: "Defect detection method for rail fasteners on multiple railway lines based on machine vision", China Railway Science, vol. 40, no. 4, 31 July 2019 (2019-07-31), pages 27-35 *
MENG Jian et al.: "Nameplate text detection for power plant electrical equipment based on an improved EAST algorithm", Modern Computer, 31 December 2019 (2019-12-31), pages 55-60 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination