CN113269788B - Guide wire segmentation method based on depth segmentation network and shortest path algorithm under X-ray perspective image

Guide wire segmentation method based on depth segmentation network and shortest path algorithm under X-ray perspective image

Info

Publication number
CN113269788B
Authority
CN
China
Prior art keywords
guide wire
shortest path
path algorithm
convolution
perspective image
Prior art date
Legal status
Active
Application number
CN202110559572.3A
Other languages
Chinese (zh)
Other versions
CN113269788A (en)
Inventor
陈阳
李浩凯
Current Assignee
Southeast University
Original Assignee
Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University
Priority to CN202110559572.3A
Publication of CN113269788A
Application granted
Publication of CN113269788B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06T 9/00 Image coding
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10116 X-ray image
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a guide wire segmentation method based on a depth segmentation network and a shortest path algorithm under an X-ray perspective image. First, an X-ray perspective image guide wire segmentation model is trained to obtain, for each pixel, the probability that it belongs to the guide wire; next, the weighted sum of each pixel's probability value and the gray value at that position is taken as the distance; finally, the guide wire segmentation result is refined with the Dijkstra shortest path algorithm. The invention can automatically and completely segment the guide wire structure in an X-ray perspective image.

Description

Guide wire segmentation method based on depth segmentation network and shortest path algorithm under X-ray perspective image
Technical Field
The invention belongs to the technical field of X-ray perspective image processing, and particularly relates to a guide wire segmentation method based on a depth segmentation network and a shortest path algorithm under an X-ray perspective image.
Background
Percutaneous coronary intervention is the primary treatment for coronary heart disease. During the operation the current positions of the catheter and the guide wire are observed in the X-ray perspective image, so segmenting the guide wire in the X-ray perspective image has important application value for subsequently improving the imaging quality of the guide wire region.
Recently, as deep learning has shown strong feature learning capability in many fields, guide wire segmentation methods for X-ray perspective images based on deep learning have adopted a U-Net structure as the backbone, as shown in fig. 5. The X-ray perspective image is taken as the input of the model, and the model outputs, for each position, the probability that it belongs to the guide wire.
A segmentation network based on deep learning judges the probability of each pixel belonging to the guide wire independently, so it easily misses guide wire sections with a low signal-to-noise ratio, and the final guide wire segmentation result is therefore discontinuous.
A shortest path algorithm operates on graph-structured data. The graph contains a start point and an end point, the nodes are connected by edges, and the distance between two nodes is the weight of the edge joining them. The goal of the algorithm is to find the path from the start point to the end point with the shortest total distance. Among the common shortest path algorithms, the Dijkstra algorithm follows a greedy strategy: starting from the start point, it repeatedly selects the point with the shortest current distance, and loops until the end point is added to the path.
However, the algorithm requires the start and end points to be set manually, and when it is applied to images with raw pixel values alone as the distance it is easily disturbed by noise, so the path can leak into non-guidewire regions.
Disclosure of Invention
The technical problems solved by the invention are as follows: first, the segmentation result of a depth-model-based guide wire segmentation network under an X-ray perspective image is discontinuous; second, the shortest path search requires manual setting of the start and end points; third, the choice of distance in the shortest path algorithm causes path leakage. A method combining the result of a depth-model-based guide wire segmentation network with a shortest path search algorithm is therefore proposed. The combination works as follows: after the guide wire segmentation network produces its result, the weighted sum of that result and the original pixel values is taken as the distance for the shortest path; the start point and end point of the shortest path algorithm are placed automatically at the two ends of a break in the guide wire segmentation network result. A complete guide wire segmentation result is finally obtained.
In order to achieve the above purpose, the invention adopts the following technical scheme:
a guide wire segmentation method based on a depth segmentation network and a shortest path algorithm under an X-ray perspective image comprises the following steps:
step 1: constructing and training OCR U-Net to obtain an X-ray perspective image guide wire segmentation model;
step 2: taking the weighted sum of the result of the guide wire segmentation model and the pixel value at each position as the distance for the shortest path algorithm;
step 3: constructing a Dijkstra shortest path algorithm to obtain a post-processing method for X-ray perspective image guide wire segmentation;
step 4: and performing guide wire segmentation on the X-ray perspective image containing the guide wire by adopting a trained X-ray perspective image guide wire segmentation model and a shortest path algorithm.
Further, the OCR U-Net described in step 1 comprises an encoder and a decoder;
the encoder consists of five convolution blocks and four pooling layers. Each convolution block consists of a convolution layer with a 3×3 filter, a batch normalization layer and a nonlinear activation layer, repeated twice in that order; the pooling layers use max pooling, and one pooling layer sits between every two convolution blocks;
the decoder consists of five convolution blocks and four OCR attention blocks, where each convolution block consists of a convolution layer with a 3×3 filter, a batch normalization layer and a nonlinear activation layer, repeated twice in that order. OCR stands for Object Contextual Representation. An OCR block is composed of an object region block, a convolution block, an object context aggregation block and an object attention block: the object region block consists of a 1×1 convolution layer, a batch normalization layer, a nonlinear activation layer and a 1×1 convolution layer; the convolution block consists of a 3×3 convolution layer, a batch normalization layer and a nonlinear activation layer; the object context aggregation block consists of matrix multiplications that aggregate the context features with the objects; and the object attention block is composed of several 1×1 convolution layers, batch normalization layers and nonlinear activation layers.
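To make the repeated 3×3 convolution blocks and the encoder layout above concrete, a minimal PyTorch sketch follows; the class names, channel widths and single-channel input are illustrative assumptions, and the OCR attention blocks and the decoder are omitted.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Two repetitions of 3x3 convolution -> batch normalization -> nonlinear
    activation, matching the convolution block described above."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)

class Encoder(nn.Module):
    """Five convolution blocks with a 2x2 max-pooling layer between every two blocks."""
    def __init__(self, in_ch=1, base=64):
        super().__init__()
        widths = [base * 2 ** i for i in range(5)]   # assumed channel widths
        self.blocks = nn.ModuleList(
            [ConvBlock(in_ch, widths[0])]
            + [ConvBlock(widths[i], widths[i + 1]) for i in range(4)]
        )
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        skips = []
        for i, block in enumerate(self.blocks):
            x = block(x)
            skips.append(x)                          # skip features for the decoder
            if i < len(self.blocks) - 1:             # pooling sits between blocks
                x = self.pool(x)
        return skips
```

In the full model each skip feature would pass through the corresponding OCR attention block before being combined with the decoder feature of the same size, and the last layer is followed by a sigmoid so the output lies in [0, 1].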
Further, the loss function of OCR U-Net described in step 1 is:
Loss = -Σ_i p(x_i)·log q(x_i) + λ × (1 - 2|X∩Y| / (|X| + |Y|))
where x_i is the pixel at the i-th position of the current X-ray perspective image; p(x_i) is the true probability distribution at that position; q(x_i) is the predicted probability distribution; λ is the weight balancing the two loss terms and is a manually set hyper-parameter; X is the set of prediction results; Y is the set of real results; and |·| is the sum of the values in the set.
Further, the distance in step 2 is calculated by:
distance_{i,j} = pixel_{i,j} + α × f(x_{i,j})
where pixel_{i,j} is the pixel value at coordinates (i, j); f(x_{i,j}) is the probability, output by the guide wire segmentation network, that the pixel at coordinates (i, j) belongs to the guide wire; and α is a manually set parameter that balances the pixel values and the probability values.
Further, the shortest path algorithm in step 3 adopts Dijkstra shortest path algorithm.
Further, the backbone of the X-ray perspective image guide wire segmentation model in step 4 is OCR U-Net, and the model parameters are updated iteratively by an optimizer. The post-processing part of the X-ray perspective image guide wire segmentation model in step 4 automatically finds the parts where the model prediction is broken, and then connects these broken parts using the shortest path algorithm.
In this scheme, the constructed shortest path algorithm takes the two break points of the guide wire in the segmentation model's output image as the start point and the end point. At each step the algorithm adds the point with the currently shortest distance to the path, until the end point has been added.
Compared with the prior art, the invention has the following beneficial effects:
1. The method first constructs an OCR U-Net guide wire segmentation network based on deep learning for X-ray perspective images, and obtains an inference model with strong generalization capability after training on a large amount of data. Even if a new image differs considerably from the existing data, a preliminary probability distribution map of the guide wire in the X-ray perspective image can be obtained without manually setting any parameters.
2. Feature maps of different sizes are encoded and decoded efficiently by the encoder-decoder structure. Compared with the traditional U-Net, an OCR attention block is added between each encoding layer and the corresponding decoding layer; it exploits deep feature information and combines it with the same-size encoding features, improving the accuracy of the guide wire segmentation result while preserving the guide wire semantic information, and providing a new network for deep-learning-based guide wire segmentation.
3. The result of the deep-learning guide wire segmentation algorithm and the original pixel values are weighted and summed to form the distance used in the shortest path algorithm. Combined with the strong feature extraction capability of the deep network, this makes shortest path search in the image domain more robust and reduces the path leakage problem.
4. The traditional shortest path algorithm is integrated into the deep-learning-based guide wire segmentation algorithm, making effective use of the probability distribution map and the original image information; it can connect the parts of the guide wire that the segmentation network predicts as broken, improving the accuracy of the overall guide wire segmentation method.
5. The invention automatically selects the break points in the probability distribution map produced by the guide wire segmentation network as the start point and end point of the shortest path algorithm, so the manual setting of these points required in the prior art is no longer needed, making this an end-to-end X-ray perspective image guide wire segmentation algorithm.
Drawings
FIG. 1 is an acquired fluoroscopic image containing a guidewire;
FIG. 2 is an effect image of a binary result obtained by the present invention superimposed on an original image;
FIG. 3 is a schematic diagram of the starting point and the end point of the shortest path algorithm found by searching, wherein a probability distribution diagram obtained by an OCR U-Net model is superimposed on an original image;
FIG. 4 is an effect image of the result of the shortest path algorithm search superimposed on the original image with the binary segmentation result consisting of the probability distribution map;
FIG. 5 is the OCR U-Net network structure; circles are OCR blocks;
FIG. 6 is the internal structure of an OCR block.
Detailed Description
Embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Example 1: referring to fig. 1 to 6, a guide wire segmentation method based on a depth segmentation network and a shortest path algorithm under an X-ray perspective image comprises the following steps:
step 1: acquiring an X-ray perspective image containing a guide wire;
step 2: the X-ray perspective image is passed through the trained OCR U-Net network to obtain a probability distribution map of the X-ray perspective image guide wire;
step 3: the guide wire probability distribution map inferred by the OCR U-Net network is weighted and summed with the pixels at the corresponding positions of the X-ray perspective image to obtain a distance matrix with the same size as the original image;
step 4: searching two disconnected points in the guide wire probability distribution diagram from top to bottom to serve as a starting point and an ending point of a Dijkstra shortest path algorithm;
step 5: taking the absolute value of the difference between each point of the distance matrix and the corresponding value of the starting point as the distance, and obtaining a path coordinate set between two disconnected points through Dijkstra shortest path algorithm;
step 6: and setting the position with the probability larger than 0.5 in the guide wire probability distribution diagram and the position of the path coordinate set obtained by the shortest path algorithm as 1, and setting the rest positions as 0 to obtain the binary segmentation result of the guide wire of the X-ray perspective image.
Specific examples: the invention discloses a guide wire segmentation method based on a depth segmentation network and a shortest path algorithm under an X-ray perspective image, which comprises steps 1 to 6, described in detail as follows:
Step 1: an X-ray fluoroscopic image containing the guide wire is acquired, as shown in fig. 1.
Step 2: the X-ray perspective image is passed through the trained OCR U-Net network to obtain a probability distribution map of the X-ray perspective image guide wire. The binarized probability distribution map superimposed on the original image is shown in fig. 3, and the OCR U-Net model structure is shown in fig. 5. The model comprises an encoder and a decoder. The encoder consists of five convolution blocks and four pooling layers. Each convolution block consists of a convolution layer with a 3×3 filter, a batch normalization layer and a nonlinear activation layer, repeated twice in that order. The pooling layers use max pooling, and one pooling layer sits between every two convolution blocks. The decoder consists of four convolution blocks and four OCR attention blocks. Each convolution block again consists of a 3×3 convolution layer, a batch normalization layer and a nonlinear activation layer, repeated twice in that order. OCR stands for Object Contextual Representation, and its structure is shown in fig. 6. An OCR block consists of an object region block, a convolution block, an object context aggregation block and an object attention block. The object region block consists of a 1×1 convolution layer, a batch normalization layer, a nonlinear activation layer and a 1×1 convolution layer. The convolution block consists of a 3×3 convolution layer, a batch normalization layer and a nonlinear activation layer. The object context aggregation block consists of matrix multiplications that aggregate the context features with the objects. The object attention block is composed of several 1×1 convolution layers, batch normalization layers and nonlinear activation layers. The last layer of the network is followed by a sigmoid activation layer that maps the output to the interval [0, 1].
The labels required for training the model in step 2 are shown in fig. 2. Paired X-ray fluoroscopic images and guide wire label data are fed to the OCR U-Net for training; the model outputs a probability map of the same size as the input, where the value at each pixel location is the probability that the pixel belongs to the guide wire. A probability value other than 0 is treated as guide wire. The network output and the label are passed to the loss function:
Loss = -Σ_i p(x_i)·log q(x_i) + λ × (1 - 2|X∩Y| / (|X| + |Y|))
where x_i is the pixel at the i-th position of the current X-ray perspective image; p(x_i) is the true probability distribution at that position; q(x_i) is the predicted probability distribution; λ is the weight balancing the two loss terms and is a manually set hyper-parameter; X is the set of prediction results; Y is the set of real results; and |·| is the sum of the values in the set.
The resulting loss values are back-propagated and the Adam optimizer updates every parameter of the network; traversing all of the training data once completes one round of training. The model repeats this training process for 200 rounds, yielding a converged guide wire segmentation model.
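As an illustration of this training procedure, the following PyTorch sketch implements a cross-entropy plus λ-weighted Dice loss consistent with the variable definitions given above, together with the Adam loop over 200 rounds; the exact loss form, the value of λ, the learning rate and the data loader are assumptions rather than values taken from the patent.

```python
import torch
import torch.nn.functional as F

def combined_loss(pred, target, lam=0.5, eps=1e-6):
    """Pixel-wise cross-entropy plus a lambda-weighted Dice term;
    pred and target are probability maps in [0, 1] of the same shape."""
    ce = F.binary_cross_entropy(pred, target)
    dice = 1.0 - 2.0 * (pred * target).sum() / (pred.sum() + target.sum() + eps)
    return ce + lam * dice

def train(model, loader, epochs=200, lr=1e-4, device="cuda"):
    """Adam updates every parameter of the network; one pass over all
    training data is one round, repeated for 200 rounds."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.to(device).train()
    for _ in range(epochs):
        for image, label in loader:              # paired image / guide wire label
            image, label = image.to(device), label.to(device)
            prob = model(image)                  # probability map, same size as input
            loss = combined_loss(prob, label.float())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```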
Step 3: the guide wire probability distribution map inferred by the OCR U-Net network is weighted and summed with the pixels at the corresponding positions of the X-ray perspective image to obtain a distance matrix with the same size as the original image.
The distance calculation method in step 3 is as follows:
distance_{i,j} = pixel_{i,j} + α × f(x_{i,j})
where pixel_{i,j} is the pixel value at coordinates (i, j); f(x_{i,j}) is the probability, output by the guide wire segmentation network, that the pixel at coordinates (i, j) belongs to the guide wire; and α is a manually set parameter that balances the pixel values and the probability values.
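A short NumPy sketch of this distance matrix follows; the array names and the float conversion are illustrative.

```python
import numpy as np

def distance_map(image, prob_map, alpha):
    """distance[i, j] = pixel[i, j] + alpha * f(x[i, j]);
    the result has the same size as the original image."""
    return image.astype(np.float32) + alpha * prob_map.astype(np.float32)
```

Converting to float32 avoids overflow when 8-bit pixel values are added to the weighted probabilities.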
Step 4: two break points in the guide wire probability distribution map are found by searching from top to bottom, and they serve as the start point and end point of the Dijkstra shortest path algorithm.
Specifically, the post-processing section of the X-ray fluoroscopic image guide wire segmentation model in step 4 automatically finds the place where the model prediction is broken by searching from top to bottom: a point whose probability value is greater than 0.5 and whose eight-neighborhood contains more than four positions with probability values less than 0.5 is taken as the start point, and the next such point with probability value greater than 0.5 is taken as the end point, as shown by the two points in fig. 3. The disconnected predicted portions are then connected using the Dijkstra shortest path algorithm.
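A NumPy sketch of this top-to-bottom search is given below; the function name, the handling of the one-pixel image border and the return convention are assumptions.

```python
import numpy as np

def find_break_points(prob_map, thr=0.5):
    """Scan the probability map from top to bottom and return the first two
    guide wire pixels (probability > thr) whose eight-neighbourhood contains
    more than four values below thr; they serve as the start and end point."""
    h, w = prob_map.shape
    ends = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if prob_map[i, j] <= thr:
                continue
            neigh = prob_map[i - 1:i + 2, j - 1:j + 2]
            # the centre pixel is above thr, so it never adds to the count
            if int((neigh < thr).sum()) > 4:
                ends.append((i, j))
                if len(ends) == 2:
                    return ends[0], ends[1]   # (start point, end point)
    return None
```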
Step 5: the absolute value of the difference between each point of the distance matrix and the value at the start point is taken as the distance, and the path coordinate set between the two break points is obtained by the Dijkstra shortest path algorithm.
Specifically, Dijkstra's shortest path algorithm first adds the start point to the path list, traverses the distances from every point in the list to its surrounding points (eight-connected neighborhood), picks the point with the smallest distance as the next point added to the path, and then repeats this traversal until the end point is added to the path. At that point the broken guide wire portion of the network result has been recovered, and the path is combined with the network output to obtain a complete guide wire, as shown in fig. 4.
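The following heap-based sketch follows this description: the cost of stepping onto a pixel is the absolute difference between its distance-matrix value and the value at the start point, neighbours are eight-connected, and points are added in order of shortest current distance until the end point is reached. The function and variable names are illustrative.

```python
import heapq
import numpy as np

def dijkstra_path(dist_matrix, start, end):
    """Return the list of (row, col) coordinates on the shortest path from
    start to end over the eight-connected pixel grid."""
    h, w = dist_matrix.shape
    cost = np.abs(dist_matrix - dist_matrix[start])   # distance relative to the start point
    best = np.full((h, w), np.inf)
    prev = {}
    best[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if (i, j) == end:
            break
        if d > best[i, j]:
            continue                                  # stale heap entry
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di == 0 and dj == 0:
                    continue
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    nd = d + cost[ni, nj]
                    if nd < best[ni, nj]:
                        best[ni, nj] = nd
                        prev[(ni, nj)] = (i, j)
                        heapq.heappush(heap, (nd, (ni, nj)))
    path, node = [end], end                           # backtrack to recover the path
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```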
Step 6: the positions with probability greater than 0.5 in the guide wire probability distribution map and the positions in the path coordinate set obtained by the shortest path algorithm are set to 1, and the remaining positions are set to 0, giving the binary segmentation result of the X-ray perspective image guide wire.
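Finally, a short sketch of the binarization in step 6; the names are illustrative.

```python
import numpy as np

def binarize(prob_map, path, thr=0.5):
    """Positions with probability > thr and positions on the shortest-path
    coordinate set become 1; everything else becomes 0."""
    mask = (prob_map > thr).astype(np.uint8)
    for i, j in path:
        mask[i, j] = 1
    return mask
```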
Effect evaluation:
The guide wire segmentation method based on a depth segmentation network and a shortest path algorithm under an X-ray perspective image provides an effective detection method that helps the percutaneous coronary intervention (PCI) physician observe the position of the guide wire on the imaging equipment more clearly.
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above examples, and all technical solutions belonging to the concept of the present invention belong to the protection scope of the present invention. It should be noted that modifications and adaptations to the invention without departing from the principles thereof are intended to be within the scope of the invention as set forth in the following claims.

Claims (7)

1. A guidewire segmentation method based on a depth segmentation network and a shortest path algorithm under an X-ray perspective image, characterized in that an OCR U-Net is used as a segmentation model, and a Dijkstra shortest path algorithm is used, and the method steps comprise:
step 1: acquiring an X-ray perspective image containing a guide wire;
step 2: the X-ray perspective image is passed through the trained OCR U-Net network to obtain a probability distribution map of the X-ray perspective image guide wire;
step 3: the guide wire probability distribution map inferred by the OCR U-Net network is weighted and summed with the pixels at the corresponding positions of the X-ray perspective image to obtain a distance matrix with the same size as the original image;
step 4: searching two disconnected points in the guide wire probability distribution diagram from top to bottom to serve as a starting point and an ending point of a Dijkstra shortest path algorithm;
step 5: taking the absolute value of the difference between each point of the distance matrix and the corresponding value of the starting point as the distance, and obtaining a path coordinate set between two disconnected points through Dijkstra shortest path algorithm;
step 6: and setting the position with the probability larger than 0.5 in the guide wire probability distribution diagram and the position of the path coordinate set obtained by the shortest path algorithm as 1, and setting the rest positions as 0 to obtain the binary segmentation result of the guide wire of the X-ray perspective image.
2. The method for guidewire segmentation based on a depth segmentation network and a shortest path algorithm under an X-ray fluoroscopic image according to claim 1, wherein the OCR U-Net in step 1 comprises an encoder and a decoder;
wherein the encoder consists of five convolution blocks and four pooling layers; each convolution block consists of a convolution layer with a 3×3 filter, a batch normalization layer and a nonlinear activation layer, repeated twice in that order; the pooling layers use max pooling, and one pooling layer sits between every two convolution blocks;
the decoder consists of five convolution blocks and four OCR attention blocks, where each convolution block consists of a convolution layer with a 3×3 filter, a batch normalization layer and a nonlinear activation layer, repeated twice in that order; OCR stands for Object Contextual Representation; an OCR block is composed of an object region block, a convolution block, an object context aggregation block and an object attention block, where the object region block consists of a 1×1 convolution layer, a batch normalization layer, a nonlinear activation layer and a 1×1 convolution layer, the convolution block consists of a 3×3 convolution layer, a batch normalization layer and a nonlinear activation layer, the object context aggregation block consists of matrix multiplications that aggregate the context features with the objects, and the object attention block is composed of several 1×1 convolution layers, batch normalization layers and nonlinear activation layers.
3. The method for segmenting a guide wire based on a depth segmentation network and a shortest path algorithm in an X-ray fluoroscopic image according to claim 1, wherein the loss function of OCR U-Net in step 1 is:
Loss = -Σ_i p(x_i)·log q(x_i) + λ × (1 - 2|X∩Y| / (|X| + |Y|))
where x_i is the pixel at the i-th position of the current X-ray perspective image; p(x_i) is the true probability distribution at that position; q(x_i) is the predicted probability distribution; λ is the weight balancing the two loss terms and is a manually set hyper-parameter; X is the set of prediction results; Y is the set of real results; and |·| is the sum of the values in the set.
4. The method for segmenting a guide wire based on a depth segmentation network and a shortest path algorithm under an X-ray perspective image according to claim 1, wherein the distance in the step 2 is calculated by the following manner:
distance_{i,j} = pixel_{i,j} + α × f(x_{i,j})
where pixel_{i,j} is the pixel value at coordinates (i, j); f(x_{i,j}) is the probability, output by the guide wire segmentation network, that the pixel at coordinates (i, j) belongs to the guide wire; and α is a manually set parameter that balances the pixel values and the probability values.
5. The method for segmenting a guide wire based on a depth segmentation network and a shortest path algorithm under an X-ray perspective image according to claim 1, wherein the shortest path algorithm in the step 3 adopts a Dijkstra shortest path algorithm.
6. The guide wire segmentation method based on a depth segmentation network and a shortest path algorithm under an X-ray perspective image according to claim 1, wherein the backbone of the X-ray perspective image guide wire segmentation model in step 4 is OCR U-Net, and the model parameters are updated iteratively by an optimizer.
7. The method for segmenting a guide wire based on a depth segmentation network and a shortest path algorithm under an X-ray fluoroscopic image according to claim 1, wherein the post-processing part of the guide wire segmentation model of the X-ray fluoroscopic image in step 4 can automatically find a part of the model predicted to be disconnected, and then connect the disconnected predicted parts using the shortest path algorithm.
CN202110559572.3A 2021-05-21 2021-05-21 Guide wire segmentation method based on depth segmentation network and shortest path algorithm under X-ray perspective image Active CN113269788B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110559572.3A CN113269788B (en) 2021-05-21 2021-05-21 Guide wire segmentation method based on depth segmentation network and shortest path algorithm under X-ray perspective image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110559572.3A CN113269788B (en) 2021-05-21 2021-05-21 Guide wire segmentation method based on depth segmentation network and shortest path algorithm under X-ray perspective image

Publications (2)

Publication Number Publication Date
CN113269788A CN113269788A (en) 2021-08-17
CN113269788B (en) 2024-03-29

Family

ID=77232443

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110559572.3A Active CN113269788B (en) 2021-05-21 2021-05-21 Guide wire segmentation method based on depth segmentation network and shortest path algorithm under X-ray perspective image

Country Status (1)

Country Link
CN (1) CN113269788B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114612987B (en) * 2022-03-17 2024-09-06 深圳须弥云图空间科技有限公司 Expression recognition method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10304193B1 (en) * 2018-08-17 2019-05-28 12 Sigma Technologies Image segmentation and object detection using fully convolutional neural network
CN111192266A (en) * 2019-12-27 2020-05-22 北京理工大学 2D guide wire tip segmentation method and device
CN111798451A (en) * 2020-06-16 2020-10-20 北京理工大学 3D guide wire tracking method and device based on blood vessel 3D/2D matching
CN112348821A (en) * 2020-11-24 2021-02-09 中国科学院自动化研究所 Guide wire segmentation and tip point positioning method, system and device based on X-ray image


Also Published As

Publication number Publication date
CN113269788A (en) 2021-08-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant