CN108830155B - Heart coronary artery segmentation and identification method based on deep learning - Google Patents

Heart coronary artery segmentation and identification method based on deep learning

Info

Publication number
CN108830155B
CN108830155B (application CN201810441544.XA)
Authority
CN
China
Prior art keywords
segmentation
layer
neural network
identification
blood vessel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810441544.XA
Other languages
Chinese (zh)
Other versions
CN108830155A (en)
Inventor
徐波
梁枭
王筱斐
叶丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Hongyun Zhisheng Technology Co ltd
Fuwai Hospital of CAMS and PUMC
Original Assignee
Beijing Hongyun Zhisheng Technology Co ltd
Fuwai Hospital of CAMS and PUMC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Hongyun Zhisheng Technology Co ltd, Fuwai Hospital of CAMS and PUMC filed Critical Beijing Hongyun Zhisheng Technology Co ltd
Priority to CN201810441544.XA
Publication of CN108830155A
Application granted
Publication of CN108830155B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/49Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30048Heart; Cardiac
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a deep-learning-based method for segmenting and identifying the coronary arteries of the heart. Any frame of a segmented cardiac angiography Dicom video is selected as a training sample; a convolutional neural network module in a neural network segments and identifies the blood vessels in the training-sample picture by deep learning and outputs a cardiac vessel feature map for segmentation and identification to a pyramid module; the pyramid module applies a pyramid fusion method and outputs cardiac vessel feature maps of different scales to a deconvolution layer; the deconvolution layer obtains the cardiac coronary artery segmentation and vessel identification map by bilinear interpolation, so that every pixel in the picture can be labeled and the different vessel types in the picture identified. The scheme solves the class-imbalance problem caused by the large disparity between background pixels and vessel pixels, effectively avoids interference from vessel-like textures in the image background, and improves segmentation accuracy.

Description

Heart coronary artery segmentation and identification method based on deep learning
Technical Field
The invention relates to the technical field of the internet, and in particular to a method for cardiac coronary artery segmentation and identification based on deep learning.
Background
Segmentation of coronary angiography images is an important application of image segmentation technology in the medical field. Accurate extraction of the coronary vessels can assist doctors in diagnosing cardiovascular disease and determining an appropriate treatment plan; it is also an important basis for three-dimensional reconstruction of the vessels and plays an important role in clinical medicine.
The prior art generally designs a corresponding filter, based on the characteristics of the coronary vessels, to enhance the vessel features and suppress background noise. In general, however, the background of a contrast image is very close in intensity to the vessels, so such methods are not robust: stripe-like structures in the background are easily extracted as vessels, which greatly reduces segmentation accuracy.
Moreover, because the vessels in angiographic images are very similar in shape, it is difficult for the prior art to determine the specific type of each vessel.
Disclosure of Invention
The invention provides a deep-learning-based method for cardiac coronary artery segmentation and identification, which solves the problem of segmenting and identifying cardiac coronary angiography images. With this technical scheme, the cardiac coronary arteries in a contrast image can be segmented and identified with high accuracy, providing doctors with auxiliary material for analyzing lesions and serving as a basis for three-dimensional reconstruction of the vessels.
In order to achieve the above object, the present invention provides a method for segmenting and identifying cardiac coronary artery based on deep learning, comprising:
selecting any one frame of a segmented cardiac angiography Dicom video as a training sample, and inputting the training sample into a neural network; the neural network consists of a convolutional neural network module, a pyramid module and a deconvolution layer;
a convolutional neural network module in the neural network receives the training sample, performs segmentation and identification on blood vessels of the picture in the training sample by a deep learning method, and outputs a heart blood vessel characteristic diagram for segmentation and identification to a pyramid module;
the pyramid module receives the heart blood vessel characteristic graphs for segmentation and identification, and outputs the heart blood vessel characteristic graphs of different scales to a deconvolution layer by applying a pyramid fusion method;
the deconvolution layer receives the cardiovascular feature maps of different scales, and the heart coronary artery segmentation and blood vessel recognition maps are obtained by a bilinear interpolation method.
Further, the segmented cardiac angiography Dicom video is acquired as follows:
receiving a whole cardiac angiography Dicom video, corresponding to the lesion type information, stored in a comprehensive medical database;
based on the lesion type information, cooperatively analyzing the key feature information appearing in the whole cardiac angiography Dicom video using SSN;
segmenting the whole Dicom video based on the key feature information combined with the body-position information, and iterating this step until a segmented video that meets the requirements is found.
Furthermore, the convolutional neural network module is formed by stacking the same unit multiple times; from top to bottom, each unit comprises a convolutional layer, a batch normalization layer, a shortcut connection layer and an activation function layer.
Further, the convolutional layer receives the training sample, performs 2D convolution operation on each pixel block of a fixed size in the training sample data, extracts a feature map for segmentation and recognition contained in the training sample data, and outputs the feature map to the batch normalization layer.
Further, the batch normalization layer receives the feature map output by the convolutional layer, subtracts the mean from the feature map data and divides by the variance so that the data follow a consistent distribution, and outputs the batch-normalized feature map to the shortcut connection layer.
Further, the shortcut connection layer receives the output of the batch normalization layer, adds the input of the convolution layer and the output of the batch normalization layer according to the weight to obtain a feature map, and outputs the feature map to the activation function layer.
Further, the activation function layer receives the output of the shortcut connection layer and applies nonlinear processing to the received data, i.e. a ReLU operation on the feature maps; the processed data is input into the convolutional layer of the next unit; this continues until all units in the neural network structure have computed all feature extraction layers of the convolutional network, yielding a cardiovascular feature map for segmentation and identification, which is input into the pyramid module.
Further, the pyramid module receives the cardiovascular feature map for segmentation and identification, performs convolution operations on the feature map using a pyramid fusion method, and outputs cardiovascular feature maps of different scales, which are input into the deconvolution layer;
the deconvolution layer receives the cardiovascular feature maps of different scales, enlarges them to the same size by bilinear interpolation, and finally merges the images together along one dimension to obtain the cardiac coronary artery segmentation and identification vessel map.
Further, the method also comprises a step of updating the parameters, wherein the step comprises the following steps:
comparing the output cardiac coronary artery segmentation and vessel identification map with the physician's precisely labeled map to obtain a loss value, and updating the parameters of each layer of the neural network by gradient descent; all steps are run iteratively until the loss value between the vessel map segmented and identified by the neural network and the physician's precise labeling falls below a preset threshold.
Further, the method also comprises a testing step, wherein the testing step comprises the following steps:
the method comprises the following steps: reading a shot Dicom video file of the patient for the cardioangiography, extracting key frames and inputting the key frames into a neural network; and reading the model parameters corresponding to the body position.
Step two: initializing the neural network, establishing a multi-layer neural network structure, and reading the trained model parameters of the corresponding body position.
Step three: the neural network receives a Dicom video image of a patient during cardioangiography, performs segmentation and detection on blood vessels of an input picture through a deep learning method, and outputs blood vessel segmentation and identification pictures of key frames of different body positions;
step four: and repeating the first step to the third step for different body positions until the key frames of all body positions are processed.
The invention provides a deep-learning-based method for cardiac coronary artery segmentation and identification: any frame of a segmented cardiac angiography Dicom video is selected as a training sample and input into a neural network consisting of a convolutional neural network module, a pyramid module and a deconvolution layer; the convolutional neural network module receives the training sample, segments and identifies the vessels in the picture by deep learning, and outputs a cardiac vessel feature map for segmentation and identification to the pyramid module; the pyramid module receives this feature map and, applying a pyramid fusion method, outputs cardiac vessel feature maps of different scales to the deconvolution layer; the deconvolution layer receives these feature maps and obtains the cardiac coronary artery segmentation and vessel identification map by bilinear interpolation. This technical scheme applies deep-learning image segmentation to coronary segmentation: the segmentation and identification of cardiac angiography images is completed automatically, end to end; the coronary arteries in an angiogram are located by segmentation with high accuracy; and every pixel in the picture can be labeled, identifying the type of each vessel in the picture. The deep-learning method solves the class-imbalance problem caused by the large disparity between background pixels and vessel pixels, effectively avoids interference from vessel-like textures in the image background, and improves segmentation accuracy.
Drawings
Fig. 1 is a flow chart of a method for cardiac coronary artery segmentation and identification based on deep learning according to the present invention.
Fig. 2 is a graphical illustration of uniformly distributed features of a method of cardiac coronary segmentation and identification based on deep learning according to the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
Referring to fig. 1, the method for cardiac coronary artery segmentation and identification based on deep learning provided by the invention comprises steps S110 to S140:
in step S110, selecting any one frame of picture in the segmented cardioangiography Dicom video as a training sample, and inputting the training sample into a neural network;
the neural network is composed of a convolution neural network module, a pyramid model and a deconvolution layer.
In step S120, the convolutional neural network module in the neural network receives the training sample, performs segmentation and identification on the blood vessel of the picture in the training sample by a deep learning method, and outputs a feature map of the cardiac blood vessel for segmentation and identification to the pyramid module.
In step S130, the pyramid module in the neural network receives the cardiovascular feature maps for segmentation and identification, and outputs the cardiovascular feature maps of different scales to the deconvolution layer by applying the pyramid fusion method.
In step S140, the deconvolution layer in the neural network receives the feature maps of the cardiac vessels with different scales, and obtains the cardiac coronary artery segmentation and the identified vessel map by a bilinear interpolation method.
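For concreteness, the following is a minimal sketch, in PyTorch, of how the three modules of steps S110 to S140 could be composed. The layer counts, channel widths, pyramid bin sizes and class count are illustrative assumptions, not values fixed by the invention.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvUnit(nn.Module):
    """One stacked unit: convolution -> batch normalization -> shortcut add -> ReLU."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        return F.relu(self.bn(self.conv(x)) + x)  # shortcut connection keeps low-level features

class PyramidModule(nn.Module):
    """Pool at several scales, reduce channels with 1x1 convolutions, upsample, concatenate."""
    def __init__(self, channels, bins=(1, 2, 3, 6)):
        super().__init__()
        self.stages = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(b),
                          nn.Conv2d(channels, channels // len(bins), kernel_size=1))
            for b in bins)

    def forward(self, x):
        h, w = x.shape[2:]
        levels = [x] + [
            F.interpolate(stage(x), size=(h, w), mode='bilinear', align_corners=False)
            for stage in self.stages]
        return torch.cat(levels, dim=1)  # merge the scales along the channel dimension

class CoronarySegNet(nn.Module):
    def __init__(self, num_classes=7, width=64):  # 6 vessel types + background (assumed)
        super().__init__()
        self.stem = nn.Conv2d(1, width, kernel_size=3, padding=1)  # single-channel grayscale input
        self.backbone = nn.Sequential(*[ConvUnit(width) for _ in range(4)])
        self.pyramid = PyramidModule(width)
        self.head = nn.Conv2d(width * 2, num_classes, kernel_size=1)

    def forward(self, x):
        f = self.backbone(self.stem(x))
        f = self.pyramid(f)
        logits = self.head(f)
        # bilinear interpolation back to the input resolution
        return F.interpolate(logits, size=x.shape[2:], mode='bilinear', align_corners=False)

net = CoronarySegNet()
scores = net(torch.randn(1, 1, 256, 256))  # per-pixel class scores: (1, 7, 256, 256)
```

The final bilinear interpolation returns per-pixel class scores at the input resolution, which is what allows every pixel to be assigned a vessel type.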
The acquisition method of the segmented cardioangiography Dicom video comprises the following steps:
receiving a whole cardiac angiography Dicom video, corresponding to the lesion type information, stored in a comprehensive medical database; based on the lesion type information, cooperatively analyzing the key feature information appearing in the whole Dicom video using SSN; and segmenting the whole Dicom video based on the key feature information combined with the body-position information, iterating this step until a segmented video that meets the requirements is found. That is, a digital subtraction cardiac angiography image with a clear vessel outline is extracted from the Dicom file, processed into a single-channel grayscale image, and input into the convolutional neural network.
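As an illustration of this acquisition step only, the sketch below reads one frame of a multi-frame Dicom file with the pydicom library and converts it to a normalized single-channel grayscale image; the file name and frame index are hypothetical placeholders.

```python
import numpy as np
import pydicom

def read_frame(path, frame_idx=0):
    """Read one frame of a (possibly multi-frame) angiography Dicom file as grayscale."""
    ds = pydicom.dcmread(path)
    frames = ds.pixel_array                    # (num_frames, H, W) for multi-frame files
    frame = frames[frame_idx] if frames.ndim == 3 else frames
    frame = frame.astype(np.float32)
    # normalize to [0, 1] so the network sees a consistent single-channel intensity range
    return (frame - frame.min()) / max(np.ptp(frame), 1e-6)

img = read_frame("angio_sequence.dcm", frame_idx=10)  # hypothetical file and frame index
```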
The cardiac angiography Dicom video data set consists of Dicom (Digital Imaging and Communications in Medicine) coronary digital subtraction angiography files from about 100 coronary heart disease patients. Each patient has multiple Dicom files for different body positions; each Dicom file contains several frames of coronary angiography, and each frame contains vessels of different types, including the left main trunk, left circumflex, left anterior descending branch, side branches, left interventricular branch, right coronary artery and so on. These are the vessels that the invention segments and identifies. For each frame of the video, a physician makes fine pixel-level annotations of the vessels. These data are used to train the network model, which is then used to segment and identify the vessels.
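Purely as an example of how such pixel-level annotations could be encoded for training, each vessel type can be mapped to an integer class index; the numbering below is an assumption, not something prescribed by the invention.

```python
# Hypothetical class indices for the vessel types named above; 0 is background.
VESSEL_CLASSES = {
    "background": 0,
    "left_main_trunk": 1,
    "left_circumflex": 2,
    "left_anterior_descending": 3,
    "side_branch": 4,
    "left_interventricular_branch": 5,
    "right_coronary_artery": 6,
}
# A physician's annotation then becomes an integer mask of shape (H, W),
# where mask[i, j] holds the class index of pixel (i, j).
```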
The convolutional neural network module is formed by stacking the same unit multiple times; from top to bottom, each unit comprises a convolutional layer, a batch normalization layer, a shortcut connection layer and an activation function layer.
Further, the convolutional layer receives the training sample, performs 2D convolution operation on each pixel block of a fixed size in the training sample data, extracts a feature map for segmentation and recognition contained in the training sample data, and outputs the feature map to the batch normalization layer.
High-dimensional features are extracted from the picture by convolving the input image over and over; these features contain all the information used in the segmentation and identification processes.
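A small illustration of this operation (channel counts and image size assumed): a 2D convolution slides a fixed-size kernel over every pixel neighborhood and emits one feature map per output channel.

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=1, out_channels=32, kernel_size=3, padding=1)
x = torch.randn(1, 1, 512, 512)   # one single-channel angiography frame (size assumed)
feature_maps = conv(x)            # (1, 32, 512, 512): 32 feature maps, one per kernel
```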
Further, the batch normalization layer receives the feature map output by the convolutional layer, subtracts the mean from the feature map data and divides by the variance so that the data follow a consistent distribution, and outputs the batch-normalized feature map to the shortcut connection layer.
As shown in fig. 2, training a neural network usually takes three days to a week, so besides the experimental results, time cost is often an important consideration. The batch normalization layer is a method that accelerates model training and greatly reduces this time cost. Batch normalization rescales the features to a common, suitable distribution to speed up network convergence. The first step normalizes the input features, subtracting their mean and dividing by the standard deviation; the specific process can be expressed as:
x̂^(k) = (x^(k) − E[x^(k)]) / √(Var[x^(k)])

where x̂^(k) denotes the normalized feature, x^(k) the input feature, E[x^(k)] the mean of the input features, and Var[x^(k)] their variance.
This also reduces the time cost of extracting the cardiovascular features. Therefore, after the convolutional layer, batch normalization is applied to the cardiac angiography images output by the convolutional layer: the mean is subtracted from the features output by the convolutional layers and the variance divided out, and the mean and variance of each layer are stored at the same time so that they can be used directly during testing. The convolved cardiac imaging data thus has a uniform distribution, which accelerates the vessel feature extraction task.
The second step translates and scales the normalized features so that the network can learn an output suited to itself; the specific process can be expressed as:

y^(k) = γ^(k) x̂^(k) + β^(k)

where γ^(k) is a learnable scaling parameter and β^(k) is a learnable translation parameter.
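The two steps above can be checked numerically; this sketch applies them to one random feature channel, with the epsilon term and the usual initialization of the learnable parameters added as assumptions.

```python
import numpy as np

x = np.random.randn(256) * 5.0 + 3.0   # a feature channel with arbitrary mean and variance
eps = 1e-5                             # small stability constant (an assumed convention)

# step 1: normalize to zero mean and unit variance
x_hat = (x - x.mean()) / np.sqrt(x.var() + eps)

# step 2: learnable scale and shift (conventionally initialized to 1 and 0)
gamma, beta = 1.0, 0.0
y = gamma * x_hat + beta

print(round(y.mean(), 3), round(y.var(), 3))   # approximately 0.0 and 1.0
```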
Further, the shortcut connection layer receives the output of the batch normalization layer, adds the input of the convolutional layer and the output of the batch normalization layer by weight to obtain a feature map, and outputs the feature map to the activation function layer. The whole neural network is formed by connecting multiple such shortcut connection layers.
The deeper a neural network is, the higher-dimensional the features it can learn, so depth has a great influence on a neural network. However, as the network becomes deeper and deeper, a deep model can barely express low-dimensional features, leading to problems such as exploding and vanishing gradients. The shortcut connection unit is a method for solving this problem. Note that in the extreme case where F(x) learns nothing, that is, F(x) = 0, the unit output is H(x) = x. Shallow features can therefore be passed backwards, so the features learned by the whole network are never worse, and the shortcut connection unit is used to extract features of the cardiac coronary angiography image. The model itself determines how high-dimensional the extracted features should be, and useful low-dimensional cardiovascular features are preserved as far as possible, thereby alleviating the exploding- and vanishing-gradient problems.
The entire shortcut connection process can be expressed as:

y = F(x, {W_i}) + x

where y denotes the output features, x the input features, F(x, {W_i}) the residual mapping that needs to be trained, and W_i the weights of the layer.
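A minimal sketch of this shortcut connection, taking F(x, {W_i}) to be a convolution followed by batch normalization as in the unit described above (the channel count is assumed):

```python
import torch
import torch.nn as nn

class ShortcutUnit(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # F(x, {W_i}): the residual mapping to be trained
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        return self.bn(self.conv(x)) + x   # y = F(x, {W_i}) + x

x = torch.randn(1, 64, 128, 128)
y = ShortcutUnit()(x)                      # same shape as x
```

If the trained residual mapping collapses to zero, the output reduces to the identity y = x, which is exactly the extreme case discussed above.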
Further, the activation function layer receives the output of the shortcut connection layer and applies nonlinear processing to the received data, i.e. a ReLU operation on the feature maps; the processed data is input into the convolutional layer of the next unit; this continues until all units in the neural network structure have computed all feature extraction layers of the convolutional network, yielding a cardiovascular feature map for segmentation and identification, which is input into the pyramid module.
If these linear convolutional units were simply chained together, the overall effect would be no better than a single convolution unit. Therefore, in practice an activation function layer must be introduced; the specific process can be expressed as:

y = G(x)

where y is the output feature, x is the input feature, and G is the activation function.
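For instance, taking G to be the ReLU used here, the nonlinearity simply zeroes out negative responses:

```python
import torch
import torch.nn.functional as F

x = torch.tensor([-2.0, -0.5, 0.0, 1.5])
y = F.relu(x)   # tensor([0.0, 0.0, 0.0, 1.5]): negative responses are zeroed
```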
During testing, the convolved cardiac coronary vessel images likewise have the stored mean subtracted and the variance divided out, ensuring that the distribution of cardiac coronary vessel image features is consistent between testing and training.
Further, the pyramid module receives the cardiovascular feature map for segmentation and identification, performs convolution operations on the feature map using a pyramid fusion method, and outputs cardiovascular feature maps of different scales, which are input into the deconvolution layer.
The pyramid module fuses features of the extracted cardiac coronary image at 4 different scales, i.e. it fuses cardiac features of four different sizes: one level holds the coarsest cardiac coronary image features, while the other levels hold pooled features of the image at different scales. To preserve the weight of the global features, if the pyramid has N levels in total, a 1x1 convolution after each level reduces that level's channels to 1/N of the original. Bilinear interpolation then restores the pre-pooling size, and finally the levels are merged together along one dimension.
The deconvolution layer receives the cardiovascular feature maps of different scales, enlarges them to the same size by bilinear interpolation, and finally merges the images together along one dimension to obtain the cardiac coronary artery segmentation and identification vessel map.
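Spelling out the level arithmetic as a sketch (bin sizes assumed, in the style of pyramid scene pooling): each level is pooled, reduced to 1/N of the channels by a 1x1 convolution, restored to the pre-pooling size by bilinear interpolation, and merged along the channel dimension. In a real module the 1x1 convolutions would be created once and trained; here they are built inline only to show the shapes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pyramid_fuse(x, bins=(1, 2, 3, 6)):
    """Fuse one feature map at len(bins) scales, plus the original."""
    n, c, h, w = x.shape
    levels = [x]
    for b in bins:
        level = F.adaptive_avg_pool2d(x, b)              # pool this level to b x b
        # a 1x1 convolution reduces the level's channels to 1/N of the original
        reduce = nn.Conv2d(c, c // len(bins), kernel_size=1)
        level = reduce(level)
        # bilinear interpolation restores the pre-pooling size
        level = F.interpolate(level, size=(h, w), mode='bilinear', align_corners=False)
        levels.append(level)
    return torch.cat(levels, dim=1)                      # merge along one dimension (channels)

fused = pyramid_fuse(torch.randn(1, 64, 64, 64))         # (1, 128, 64, 64)
```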
Further, the method also comprises a step of updating the parameters, wherein the step comprises the following steps:
comparing the output cardiac coronary artery segmentation and vessel identification map with the physician's precisely labeled map to obtain a loss value, and updating the parameters of each layer of the neural network by gradient descent; all steps are run iteratively until the loss value between the vessel map segmented and identified by the neural network and the physician's precise labeling falls below a preset threshold.
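Assuming a per-pixel cross-entropy loss against the physician's labels and plain stochastic gradient descent (the text specifies only "a loss value" and "a gradient descent method"), the update loop could look like this sketch, with a stand-in model, placeholder data and an assumed threshold value:

```python
import torch
import torch.nn as nn

# stand-in for the full network: any module mapping (N, 1, H, W) to (N, 7, H, W) logits
model = nn.Conv2d(1, 7, kernel_size=3, padding=1)
criterion = nn.CrossEntropyLoss()                    # per-pixel loss against physician labels
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

images = torch.randn(2, 1, 128, 128)                 # training frames (placeholder data)
labels = torch.randint(0, 7, (2, 128, 128))          # physician pixel-level class labels

loss_threshold = 0.05                                # preset threshold (assumed value)
for step in range(1000):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()                                  # gradients for every layer's parameters
    optimizer.step()                                 # gradient-descent update
    if loss.item() < loss_threshold:                 # stop once the loss is below the threshold
        break
```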
Further, the method also comprises a testing step, wherein the testing step comprises the following steps:
the method comprises the following steps: reading a shot Dicom video file of the patient for the cardioangiography, extracting key frames and inputting the key frames into a neural network; and reading the model parameters corresponding to the body position.
Step two: initializing the neural network, establishing a multi-layer neural network structure, and reading the trained model parameters of the corresponding body position.
Step three: the neural network receives a Dicom video image of a patient during cardioangiography, performs segmentation and detection on blood vessels of an input picture through a deep learning method, and outputs blood vessel segmentation and identification pictures of key frames of different body positions;
step four: and repeating the first step to the third step for different body positions until the key frames of all body positions are processed.
A preferred embodiment. Laboratory hardware: an Intel Xeon E5-2630 v4 CPU and an NVIDIA GTX 1080 Ti GPU working in cooperation.
First, data reading
Step one: receiving the whole cardiac angiography Dicom video, corresponding to the lesion type information, stored in the comprehensive medical database.
Step two: based on the lesion type information, cooperatively analyzing the key feature information appearing in the whole Dicom video using SSN.
Step three: segmenting the whole Dicom video based on the key feature information combined with the body-position information, and iterating this step until a segmented video that meets the requirements is found.
Step four: selecting any frame of the segmented video as a training sample and inputting it into the neural network module.
Secondly, training the network to segment and detect the blood vessel
Step one: initializing the neural network and establishing the multilayer neural network structure by stacking the same unit multiple times; within a unit, from top to bottom, are a convolutional layer, a batch normalization layer, a shortcut connection layer and an activation function layer. The pre-trained model parameters are read at the same time.
Step two: the neural network receives the digital subtraction angiography image and segments and detects the vessels of the input image by deep learning.
Step three: the convolutional layer receives the digital subtraction angiography image, performs a 2D convolution on each fixed-size pixel block in the data, extracts the main information usable for segmentation and identification, and outputs it to the batch normalization layer.
Step four: the batch normalization layer receives the feature map output by the convolutional layer, subtracts the mean from the data and divides by the variance so that the data follow a uniform distribution, and outputs the processed feature map to the shortcut connection layer.
Step five: the shortcut connection layer receives the output of the batch normalization layer, adds the input of the convolutional layer and the output of the batch normalization layer by weight to obtain a feature map, and outputs the feature map to the activation function layer.
Step six: the activation function layer receives the output of the shortcut connection layer and applies nonlinear processing to the received data, i.e. a ReLU operation on the feature maps. The processed data is input into the convolutional layer of the next unit.
Step seven: repeating steps three to six until all feature extraction layers of the convolutional network have been computed, yielding the final feature map. This is all the main information needed for vessel segmentation and identification, and it is input into the pyramid module.
Step eight: the pyramid module receives the cardiac vessel feature maps for segmentation and identification, applies the pyramid fusion method, first performing convolution operations on the feature maps, and outputs four cardiac vessel feature maps of different scales. The four feature maps are input into the deconvolution layer.
Step nine: the deconvolution layer receives the four cardiac vessel feature maps of different scales, enlarges them to the same size by bilinear interpolation, and finally merges the images together along one dimension, yielding the final segmented and identified vessel map.
Step ten: comparing the final output segmentation-identification vessel map with the physician's precisely labeled picture to obtain a loss value, then updating the parameters of each layer of the neural network by gradient descent.
Step eleven: iterating steps two to ten until the loss value between the vessel map segmented and identified by the neural network and the physician's precise labeling falls below a preset threshold.
Step twelve: storing the trained model parameters and the neural network model structure for later use in testing.
Step thirteen: training and storing model parameters for the data of different body positions.
Third, testing the network to segment and detect blood vessels
Step one: reading the captured patient Dicom file, extracting the key frames and inputting them into the neural network; and reading the model parameters corresponding to the body position.
Step two: initializing the neural network, establishing the multilayer neural network structure, and reading the previously trained model parameters for the corresponding body position.
Step three: the neural network receives the digital subtraction angiography image, segments and detects the vessels of the input image by deep learning, and outputs the vessel segmentation and identification images for the key frames of the different body positions.
Step four: repeating steps one to three for the different body positions until the key frames of all body positions have been processed.
The invention provides a deep-learning-based method for cardiac coronary artery segmentation and identification: any frame of a segmented cardiac angiography Dicom video is selected as a training sample and input into a neural network consisting of a convolutional neural network module, a pyramid module and a deconvolution layer; the convolutional neural network module receives the training sample, segments and identifies the vessels in the picture by deep learning, and outputs a cardiac vessel feature map for segmentation and identification to the pyramid module; the pyramid module receives this feature map and, applying a pyramid fusion method, outputs cardiac vessel feature maps of different scales to the deconvolution layer; the deconvolution layer receives these feature maps and obtains the cardiac coronary artery segmentation and vessel identification map by bilinear interpolation. This technical scheme applies deep-learning image segmentation to coronary segmentation: the segmentation and identification of cardiac angiography images is completed automatically, end to end; the coronary arteries in an angiogram are located by segmentation with high accuracy; and every pixel in the picture can be labeled, identifying the type of each vessel in the picture. The deep-learning method solves the class-imbalance problem caused by the large disparity between background pixels and vessel pixels, effectively avoids interference from vessel-like textures in the image background, and improves segmentation accuracy.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (8)

1. A method for heart coronary artery segmentation and identification based on deep learning is characterized by comprising the following steps:
selecting any one frame of picture in a segmented cardioangiography Dicom video as a training sample, and inputting the training sample into a neural network;
a convolutional neural network module in the neural network receives the training sample, performs segmentation and identification on blood vessels of the picture in the training sample by a deep learning method, and outputs a heart blood vessel characteristic diagram for segmentation and identification to a pyramid module;
a pyramid module in the neural network receives the cardiovascular feature maps for segmentation and identification, and outputs the cardiovascular feature maps with different scales to a deconvolution layer by applying a pyramid fusion method;
a deconvolution layer in the neural network receives the cardiovascular feature maps of different scales, and a bilinear interpolation method is used for obtaining the heart coronary artery segmentation and blood vessel recognition map;
the convolutional neural network module is formed by stacking the same unit multiple times, each unit comprising, from top to bottom, a convolutional layer, a batch normalization layer, a shortcut connection layer and an activation function layer;
the acquisition method of the segmented cardioangiography Dicom video comprises the following steps:
receiving a whole cardiac angiography Dicom video, corresponding to the lesion type information, stored in a comprehensive medical database;
based on the lesion type information, using SSN to cooperatively analyze key characteristic information appearing in the whole segment of the Dicom video;
and segmenting the whole Dicom video based on the key characteristic information and combined with the body position information, and iterating the step of segmenting the whole Dicom video based on the key characteristic information and combined with the body position information until a segmented video meeting the setting is finally found.
2. The method of claim 1, wherein the convolutional layer receives training samples, performs a 2D convolution operation on each fixed-size pixel block in the training sample data, extracts a feature map for segmentation and recognition contained in the training sample data, and outputs the feature map to the batch normalization layer.
3. The method of claim 1, wherein the batch normalization layer receives a feature map output by the convolutional layer, performs a subtraction-mean-divided-by-variance operation on the feature map data to uniformly distribute the feature map data, and outputs the batch normalized feature map to the shortcut connection layer.
4. The method of claim 1, wherein the shortcut connection layer receives an output of a batch normalization layer, adds an input of a convolution layer and the output of the batch normalization layer by weight to obtain a feature map, and outputs the feature map to an activation function layer.
5. The method of claim 1, wherein the activation function receives the output of the shortcut connection layer and performs nonlinear processing on the received data, i.e. a ReLU operation on the feature maps; the processed data is input into the convolutional layer of the next unit; until all the units in the neural network structure have computed all the feature extraction layers of the convolutional network, a cardiovascular feature map for segmentation and identification is obtained and input into the pyramid module.
6. The method of claim 1, wherein the pyramid module receives the cardiovascular feature maps for segmentation and identification, and applies a pyramid fusion method to perform convolution operation on the feature maps to output the cardiovascular feature maps with different scales; inputting the cardiovascular feature maps of different scales into the deconvolution layer;
the deconvolution layer receives the cardiovascular feature maps of different scales, the cardiovascular feature maps of different scales are amplified to the same size by a bilinear interpolation method, and finally the images are combined together along one dimension to obtain the heart coronary artery segmentation and identification vessel map.
7. The method of claim 1, further comprising the step of updating the parameters, the step comprising:
comparing the output cardiac coronary artery segmentation and vessel identification map with the physician's precisely labeled cardiac coronary artery segmentation and vessel identification map to obtain a loss value, and updating the parameters of each layer of the neural network by a gradient descent method; and iteratively running all the steps until the loss value between the vessel map segmented and identified by the neural network and the physician's precise labeling is lower than a preset threshold value.
8. The method of claim 1, further comprising a testing step comprising:
the method comprises the following steps: reading a shot Dicom video file of the patient for the cardioangiography, extracting key frames and inputting the key frames into a neural network; reading the model parameters corresponding to the body position of the patient;
step two: initializing a neural network, establishing a multi-layer neural network structure, and reading the trained model parameters of the corresponding body position;
step three: the neural network receives a Dicom video image of a patient during cardioangiography, performs segmentation and detection on blood vessels of an input picture through a deep learning method, and outputs blood vessel segmentation and identification pictures of key frames of different body positions;
step four: and repeating the first step to the third step for different body positions until the key frames of all body positions are processed.
CN201810441544.XA 2018-05-10 2018-05-10 Heart coronary artery segmentation and identification method based on deep learning Active CN108830155B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810441544.XA CN108830155B (en) 2018-05-10 2018-05-10 Heart coronary artery segmentation and identification method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810441544.XA CN108830155B (en) 2018-05-10 2018-05-10 Heart coronary artery segmentation and identification method based on deep learning

Publications (2)

Publication Number Publication Date
CN108830155A CN108830155A (en) 2018-11-16
CN108830155B 2021-10-15

Family

ID=64147721

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810441544.XA Active CN108830155B (en) 2018-05-10 2018-05-10 Heart coronary artery segmentation and identification method based on deep learning

Country Status (1)

Country Link
CN (1) CN108830155B (en)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110009640B (en) * 2018-11-20 2023-09-26 腾讯科技(深圳)有限公司 Method, apparatus and readable medium for processing cardiac video
CN109558904A (en) * 2018-11-21 2019-04-02 咪咕文化科技有限公司 Image local feature classification method and device and storage medium
CN109394269B (en) * 2018-12-08 2021-12-10 沈阳鹏悦科技有限公司 Cardiac target highlighting platform
CN109741332B (en) * 2018-12-28 2021-06-04 天津大学 Man-machine cooperative image segmentation and annotation method
CN111507455B (en) * 2019-01-31 2021-07-13 数坤(北京)网络科技股份有限公司 Neural network system generation method and device, image processing method and electronic equipment
CN109903840B (en) * 2019-02-28 2021-05-11 数坤(北京)网络科技有限公司 Model integration method and device
CN109919931B (en) * 2019-03-08 2020-12-25 数坤(北京)网络科技有限公司 Coronary stenosis degree evaluation model training method and evaluation system
CN110009604B (en) * 2019-03-20 2021-05-14 北京理工大学 Method and device for extracting respiratory signal of contrast image sequence
CN110288611A (en) * 2019-06-12 2019-09-27 上海工程技术大学 Coronary vessel segmentation method based on attention mechanism and full convolutional neural networks
CN112150476B (en) * 2019-06-27 2023-10-27 上海交通大学 Coronary artery sequence blood vessel segmentation method based on space-time discriminant feature learning
TWI711051B (en) * 2019-07-11 2020-11-21 宏碁股份有限公司 Blood vessel status evaluation method and blood vessel status evaluation device
CN110517279B (en) * 2019-09-20 2022-04-05 北京深睿博联科技有限责任公司 Method and device for extracting central line of head and neck blood vessel
KR102375775B1 (en) * 2020-02-10 2022-03-21 주식회사 메디픽셀 Apparatus and method for extracting major vessel region based on blood vessel image
CN111311583B (en) * 2020-02-24 2021-03-12 广州柏视医疗科技有限公司 Method for naming pulmonary trachea and blood vessel by sections
CN111369528B (en) * 2020-03-03 2022-09-09 重庆理工大学 Coronary artery angiography image stenosis region marking method based on deep convolutional network
CN111353989B (en) * 2020-03-03 2022-07-01 重庆理工大学 Coronary artery vessel complete angiography image identification method
CN111445449B (en) * 2020-03-19 2024-03-01 上海联影智能医疗科技有限公司 Method, device, computer equipment and storage medium for classifying region of interest
CN113706568B (en) * 2020-05-20 2024-02-13 阿里巴巴集团控股有限公司 Image processing method and device
CN111657883B (en) * 2020-06-03 2021-05-04 北京理工大学 Coronary artery SYNTAX score automatic calculation method and system based on sequence radiography
CN111862046B (en) * 2020-07-21 2023-11-17 江苏省人民医院(南京医科大学第一附属医院) Catheter position discrimination system and method in heart coronary wave silhouette
CN113487628B (en) * 2021-07-07 2024-02-23 广州市大道医疗科技有限公司 Model training method, coronary vessel identification method, device, equipment and medium
CN113706559A (en) * 2021-09-13 2021-11-26 复旦大学附属中山医院 Blood vessel segmentation extraction method and device based on medical image

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107843861A (en) * 2013-03-15 2018-03-27 米利开尔文科技有限公司 Improved technology, system and machine readable program for magnetic resonance

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8560348B2 (en) * 2006-09-26 2013-10-15 Ralph A. Korpman Individual health record system and apparatus
JP4407714B2 (en) * 2007-04-06 2010-02-03 セイコーエプソン株式会社 Biometric authentication device and biometric authentication method
US9053551B2 (en) * 2012-05-23 2015-06-09 International Business Machines Corporation Vessel identification using shape and motion mapping for coronary angiogram sequences
US9767557B1 (en) * 2016-06-23 2017-09-19 Siemens Healthcare Gmbh Method and system for vascular disease detection using recurrent neural networks
CN106372390B (en) * 2016-08-25 2019-04-02 汤一平 A kind of self-service healthy cloud service system of prevention lung cancer based on depth convolutional neural networks
CN106920227B (en) * 2016-12-27 2019-06-07 北京工业大学 The Segmentation Method of Retinal Blood Vessels combined based on deep learning with conventional method
CN107730507A (en) * 2017-08-23 2018-02-23 成都信息工程大学 A kind of lesion region automatic division method based on deep learning

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107843861A (en) * 2013-03-15 2018-03-27 米利开尔文科技有限公司 Improved technology, system and machine readable program for magnetic resonance

Also Published As

Publication number Publication date
CN108830155A (en) 2018-11-16

Similar Documents

Publication Publication Date Title
CN108830155B (en) Heart coronary artery segmentation and identification method based on deep learning
CN109146872B (en) Heart coronary artery image segmentation and identification method based on deep learning and optical flow method
CN108052977B (en) Mammary gland molybdenum target image deep learning classification method based on lightweight neural network
CN111161275B (en) Method and device for segmenting target object in medical image and electronic equipment
CN111161290B (en) Image segmentation model construction method, image segmentation method and image segmentation system
CN106056595B (en) Based on the pernicious assistant diagnosis system of depth convolutional neural networks automatic identification Benign Thyroid Nodules
CN110796670B (en) Dissection method and device for dissecting interbed artery
CN108257135A (en) The assistant diagnosis system of medical image features is understood based on deep learning method
CN111462047B (en) Vascular parameter measurement method, vascular parameter measurement device, vascular parameter measurement computer device and vascular parameter measurement storage medium
CN110009656B (en) Target object determination method and device, storage medium and electronic device
CN102419864B (en) Method and device for extracting skeletons of brain CT (computerized tomography) image
CN110717518A (en) Persistent lung nodule identification method and device based on 3D convolutional neural network
de Albuquerque et al. Fast fully automatic heart fat segmentation in computed tomography datasets
CN114693671B (en) Lung nodule semi-automatic segmentation method, device, equipment and medium based on deep learning
CN111325754B (en) Automatic lumbar vertebra positioning method based on CT sequence image
CN113889238A (en) Image identification method and device, electronic equipment and storage medium
CN114037803B (en) Medical image three-dimensional reconstruction method and system
Mohammed et al. Digital medical image segmentation using fuzzy C-means clustering
Chen et al. Detection of various dental conditions on dental panoramic radiography using Faster R-CNN
CN116758087B (en) Lumbar vertebra CT bone window side recess gap detection method and device
CN109767468B (en) Visceral volume detection method and device
CN115661152A (en) Target development condition analysis method based on model prediction
CN113222985B (en) Image processing method, image processing device, computer equipment and medium
CN113689950B (en) Method, system and storage medium for identifying blood vessel distribution pattern of liver cancer IHC staining pattern
CN115880358A (en) Construction method of positioning model, positioning method of image mark points and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20210125

Address after: 100086 1704-1705, 17th floor, Qingyun contemporary building, building 9, Manting Fangyuan community, Qingyun Li, Haidian District, Beijing

Applicant after: BEIJING HONGYUN ZHISHENG TECHNOLOGY Co.,Ltd.

Applicant after: FUWAI HOSPITAL, CHINESE ACADEMY OF MEDICAL SCIENCES

Address before: 100086 1704-1705, 17th floor, Qingyun contemporary building, building 9, Manting Fangyuan community, Qingyun Li, Haidian District, Beijing

Applicant before: BEIJING HONGYUN ZHISHENG TECHNOLOGY Co.,Ltd.

GR01 Patent grant
GR01 Patent grant