CN116342608B - Medical image-based stent adherence measurement method, device, equipment and medium - Google Patents

Medical image-based stent adherence measurement method, device, equipment and medium

Info

Publication number
CN116342608B
CN116342608B
Authority
CN
China
Prior art keywords
stent
blood vessel
slice
wall
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310624607.6A
Other languages
Chinese (zh)
Other versions
CN116342608A (en)
Inventor
马永杰
刘宇
郭远昊
韩立强
吉喆
杨万欣
张鸿褀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xuanwu Hospital
Original Assignee
Xuanwu Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xuanwu Hospital
Priority to CN202310624607.6A
Publication of CN116342608A
Application granted
Publication of CN116342608B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 7/0012: Biomedical image inspection
    • G06N 3/045: Combinations of networks
    • G06N 3/0455: Auto-encoder networks; encoder-decoder networks
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06N 3/08: Learning methods
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/11: Region-based segmentation
    • G06T 7/187: Segmentation involving region growing, region merging or connected component labelling
    • G06T 2207/10101: Optical tomography; optical coherence tomography [OCT]
    • G06T 2207/20081: Training; learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30101: Blood vessel; artery; vein; vascular

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a medical image-based stent adherence measurement method, device, equipment and medium, belongs to the technical field of vascular stent assessment, and solves the problem of measuring the adherence between a stent and the vessel wall. The technical scheme mainly comprises the following steps: acquiring an image set comprising a number of sequentially arranged axial slice images of a stented vessel lumen; inputting the image set into a multi-task segmentation model to obtain a vessel lumen slice set and a stent slice set; performing three-dimensional reconstruction on the vessel lumen slice set to obtain a vessel inner wall model, and performing three-dimensional reconstruction on the stent slice set to obtain a stent network model; extracting three-dimensional information of a plurality of independent stent wires from the stent network model; and calculating the average distance from each point on each independent stent wire to the vessel inner wall model according to the three-dimensional information of the independent stent wires and the vessel inner wall model, so as to obtain the wall adherence measurement distance.

Description

Medical image-based stent adherence measurement method, device, equipment and medium
Technical Field
The invention belongs to the technical field of vascular stent assessment, and particularly relates to a stent adherence measurement method, device, equipment and medium based on medical images.
Background
Worldwide, cardiovascular and cerebrovascular diseases have become one of the main threats to human health, and coronary atherosclerosis is their main cause. At present, coronary stent intervention has become the main treatment for coronary atherosclerosis because it is minimally invasive and effective. During treatment, stents are placed inside the coronary arteries to reduce the probability of restenosis and thrombosis.
After a cerebrovascular stent implantation procedure, the adherence of each stent wire to the vessel wall is an important reference index for the surgical outcome and the prognosis.
How to measure the adherence between a stent and the vessel wall is therefore a technical problem that needs to be solved by those skilled in the art.
Disclosure of Invention
In view of the above analysis, the embodiments of the present invention aim to provide a stent adherence measurement method, device, equipment and medium based on medical images, which are used for solving the adherence measurement problem of the existing stent and vessel wall.
An embodiment of a first aspect of the present invention provides a stent adherence measurement method based on medical images, including the steps of:
acquiring an image set comprising a number of sequentially arranged axial slice images of a vessel lumen having a stent;
Inputting the image set into a multi-task segmentation model to obtain a blood vessel inner cavity slice set and a stent slice set;
performing three-dimensional reconstruction according to the blood vessel inner cavity slice set to obtain a blood vessel inner wall model, and performing three-dimensional reconstruction according to the stent slice set to obtain a stent network model;
extracting three-dimensional information of a plurality of independent stent wires according to the stent network model;
and calculating the average distance from each point on each independent stent wire to the blood vessel inner wall model according to the three-dimensional information of the independent stent wires and the blood vessel inner wall model so as to obtain the wall adherence measurement distance.
In some embodiments, after the obtaining of the set of vessel lumen slices, the method further comprises post-processing the set of vessel lumen slices, comprising:
and extracting the maximum communication area of the blood vessel inner cavity segmentation result in the blood vessel inner cavity slice so as to filter out the blood vessel inner cavity over-segmentation result in the blood vessel inner cavity slice.
In some embodiments, after the obtaining of the vessel lumen slice set and the stent slice set, the method further comprises post-processing the stent slice set, comprising:
and constraining the stent segmentation result of the stent slice according to the vessel lumen segmentation result of the vessel lumen slice so as to filter out the stent over-segmentation result positioned outside the vessel lumen segmentation result.
In some embodiments, after the obtaining of the set of vessel lumen slices and the set of stent slices, the method further comprises post-processing the set of vessel lumen slices and the set of stent slices, comprising:
and filling the lost information between the sections by adopting a linear interpolation algorithm between the adjacent sections in the blood vessel inner cavity section set and the stent section set.
In some embodiments, the extracting a plurality of independent stent wire three-dimensional information according to the stent network model comprises:
extracting a skeleton network of the bracket network model through a network X function library;
and scanning the skeleton network point by point and dividing the skeleton network into a plurality of independent stent wire backbones at the crossing points of the skeleton network, to serve as the three-dimensional information of the independent stent wires, wherein each independent stent wire backbone comprises nodes and edges, the nodes comprise the scanning starting point, the scanning ending point and the crossing points of each independent stent wire backbone, and two adjacent nodes are connected by an edge.
In some embodiments, the calculating the attachment metric distance of the independent stent wire according to the three-dimensional information of the independent stent wire and the blood vessel inner wall model comprises:
Repeating steps 51 and 52 to traverse at least a portion of the stent points on the individual stent wires;
step 51, extracting support points on the independent support wires according to the three-dimensional information of the independent support wires and obtaining axial slices where the support points are located;
step 52, determining a blood vessel inner wall profile corresponding to the stent point according to the axial slice, determining a stent point profile according to the diameter of the independent stent wire, and calculating the maximum distance, the minimum distance and the average distance between the stent point profile and the blood vessel inner wall profile;
the wall-attaching measurement distance of the independent stent wire is calculated, wherein the wall-attaching measurement distance comprises an average maximum wall-attaching distance, an average minimum wall-attaching distance and an average wall-attaching distance, the average maximum wall-attaching distance is the average value of the maximum distance of each stent point, the average minimum wall-attaching distance is the average value of the minimum distance of each stent point, and the average wall-attaching distance is the average value of the average distance of each stent point.
In some embodiments, the inputting the image set into a multi-task segmentation model to obtain a vessel lumen slice set and a stent slice set comprises:
acquiring an image to be segmented, and carrying out feature extraction on the image to be segmented to acquire a first feature map;
Inputting the first feature map into a first segmentation network to generate a first segmentation result and a second feature map;
generating a query vector through the second feature map;
inputting the first feature map into a Transformer module for high-dimensional feature encoding to generate a third feature map, and simultaneously, acquiring the query vector by the Transformer module to guide the high-dimensional feature decoding;
the third feature map is input into a second segmentation network to generate a second segmentation result.
An embodiment of a second aspect of the present invention provides a stent adherence measurement apparatus based on medical images, including:
an acquisition module for acquiring an image set comprising a number of sequentially arranged axial slice images of a vessel lumen having a stent;
the segmentation module is used for inputting the image set into a multi-task segmentation model to obtain a vessel lumen slice set and a stent slice set;
the reconstruction module is used for carrying out three-dimensional reconstruction according to the blood vessel inner cavity slice set to obtain a blood vessel inner wall model, and carrying out three-dimensional reconstruction according to the stent slice set to obtain a stent network model;
the extraction module is used for extracting three-dimensional information of a plurality of independent stent wires according to the stent network model;
And the measurement module is used for calculating the average distance from each point on each independent stent wire to the blood vessel inner wall model according to the three-dimensional information of the independent stent wires and the blood vessel inner wall model so as to obtain the adherence measurement distance.
An embodiment of a third aspect of the present invention provides an electronic device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, implements a medical image based stent apposition measurement method according to any of the embodiments above.
An embodiment of a fourth aspect of the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements a stent apposition measurement method based on medical images as described in any of the embodiments above.
The embodiment of the invention performs multi-task segmentation on an image set formed by a plurality of sequentially arranged axial slice images of the vessel lumen to obtain a vessel lumen slice set and a stent slice set, reconstructs a three-dimensional vessel inner wall model from the vessel lumen slice set, and reconstructs a stent network model from the stent slice set. Three-dimensional information of the independent stent wires is then extracted from the stent network model, so that both a vessel inner wall model and a stent wire model are obtained, and the distance between them can be calculated as the wall adherence measure of the stent wires.
Drawings
In order to more clearly illustrate the embodiments of the present description or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some of the embodiments described herein, and that a person of ordinary skill in the art may obtain other drawings from these drawings.
FIG. 1 is a schematic flow chart of a stent adherence measurement method based on medical images provided by the invention;
FIG. 2 is a flowchart of an embodiment of a method for measuring stent attachment according to an embodiment of the first aspect of the present invention;
FIG. 3 is a flowchart illustrating a method for partitioning a multi-task partition model according to an embodiment of the first aspect of the present invention;
FIG. 4 is a schematic diagram of a multi-task segmentation model architecture according to an embodiment of the first aspect of the present invention;
fig. 5 is a schematic structural diagram of a CNN convolutional neural network according to a first embodiment of the present invention;
FIG. 6 is a schematic diagram of a split network according to an embodiment of the first aspect of the present invention;
FIG. 7 is a schematic architecture diagram of a query vector generation module according to an embodiment of the first aspect of the present invention;
FIG. 8 is a schematic diagram of a network architecture of a Transformer module according to an embodiment of the first aspect of the present invention;
FIG. 9 is a flowchart of a training method for a multi-task segmentation model according to an embodiment of the first aspect of the present invention;
FIG. 10 is a schematic illustration of a mask for labeling an image set according to an embodiment of the first aspect of the invention;
FIG. 11 is a schematic diagram showing the three-dimensional reconstruction result of the inner wall of a cerebral blood vessel according to the embodiment of the first aspect of the present invention;
FIG. 12 is a schematic diagram of three-dimensional reconstruction results of a stent network model according to an embodiment of the first aspect of the present invention;
FIG. 13 is a diagram of an exemplary local area stent wire node and edge extraction according to an embodiment of the first aspect of the present invention;
FIG. 14 is a schematic view of three-dimensional information extraction results of individual stent filaments according to an embodiment of the first aspect of the present invention;
FIG. 15 is a schematic view of a slice based on stent points according to an embodiment of the first aspect of the present invention;
FIG. 16 is a schematic view of a calculation of the measured distance of each individual stent wire according to the first embodiment of the present invention;
FIG. 17 is a schematic diagram showing the measurement results according to the embodiment of the first aspect of the present invention;
FIG. 18 is a schematic diagram of a stent apposition measurement device according to a second embodiment of the present invention;
fig. 19 is a schematic view of an electronic device architecture according to an embodiment of the third aspect of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. It should be noted that embodiments and features of embodiments in the present disclosure may be combined, separated, interchanged, and/or rearranged with one another without conflict. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
The embodiment of the invention is applicable to various vascular scenarios of interventional stent treatment and is used to intuitively determine a measure of the adherence of a stent to the vessel, which helps a doctor evaluate the stent placement effect. The application scenario of the embodiment of the invention is explained below by taking a cerebrovascular stent as an example.
Intravascular optical coherence tomography (optical coherence tomography, OCT) is a novel cardiovascular and cerebrovascular imaging technique. Taking a cerebral vessel as an example, the OCT imaging catheter moves forward along the vessel and finely images the vessel wall to a certain depth by near-infrared imaging, assisting the diagnosis and treatment of the cerebrovascular lumen (plaque, hemangioma, bleeding and the like). An intracranial OCT acquisition therefore consists of a series of sequential axial slices of the cerebrovascular lumen, whose central region is a near-circular vessel cavity. When OCT is used to image the cerebrovascular lumen after surgery and during healing, clear images of the stent regions distributed in the lumen can be obtained, so the distance and the spatial relationship between the stent and the vessel wall can be accurately observed with OCT. However, because an OCT image only shows two-dimensional information on an axial slice, parameters such as the spatial distance between a stent wire and the vessel wall cannot be calculated directly from it. To address this problem, the present embodiment proposes the following solution, which three-dimensionally reconstructs the vessel wall and the stent from OCT-like images to obtain an intuitive visualization and to facilitate statistical calculation of the wall adherence measure of the stent.
An embodiment of the first aspect of the present invention provides a method for measuring stent adherence based on medical images, as shown in fig. 1 and fig. 2; fig. 1 is a flowchart of the medical image-based stent adherence measurement method provided by the embodiment of the first aspect of the present invention, and fig. 2 is a system flowchart of a preferred embodiment of the stent adherence measurement method. The method specifically comprises the following steps:
step one, acquiring an image set, wherein the image set comprises a plurality of sequentially arranged axial slice images of a vessel lumen having a stent;
step two, inputting the image set into a multi-task segmentation model to obtain a vessel lumen slice set and a stent slice set;
step three, performing three-dimensional reconstruction on the vessel lumen slice set to obtain a vessel inner wall model, and performing three-dimensional reconstruction on the stent slice set to obtain a stent network model;
step four, extracting three-dimensional information of a plurality of independent stent wires from the stent network model;
and step five, calculating the average distance from each point on each independent stent wire to the vessel inner wall model according to the three-dimensional information of the independent stent wires and the vessel inner wall model, so as to obtain the wall adherence measurement distance.
It should be understood that, for convenience of description, the step contents are referred to below as step one, step two, and so on; these sequence numbers do not impose a strict limitation on the order of the steps.
In this way, the invention performs multi-task segmentation on an image set formed by a plurality of sequentially arranged axial slice images of the vessel lumen to obtain the vessel lumen slice set and the stent slice set, reconstructs a vessel inner wall model from the vessel lumen slice set and a stent network model from the stent slice set, and then extracts the three-dimensional information of the independent stent wires from the stent network model. A vessel inner wall model and a stent wire model are thus obtained, and the distance between them can be calculated as the wall adherence measure of the stent wires.
Regarding step one, in some embodiments the image set may be a set of OCT images. OCT images naturally satisfy the requirement of step one for the image set: for one stented vessel, a plurality of sequentially arranged axial slice images can be obtained by the OCT technique. Hereinafter an OCT image is referred to as an axial slice image, so this embodiment may also be called an OCT image-based stent adherence measurement method. The invention is not limited thereto, however, and does not exclude that the image set may be a sequentially arranged set of axial slices of the stented vessel obtained by other medical instruments.
Then, in the second step, since the stent is located in the quasi-circular area surrounded by the vessel wall, if the stent and the vessel wall are directly reconstructed in three dimensions, it will not be possible to distinguish which of the information in the OCT images belongs to the vessel wall and which belongs to the stent.
To solve this problem, the present embodiment inputs the OCT images into a multi-task segmentation model in their original order. The multi-task segmentation model separates each OCT image into a vessel wall image containing only vessel wall information and a stent image containing only stent information, and the output images preserve the order of the input images to form the vessel lumen slice set and the stent slice set. In other words, the images in the vessel lumen slice set and the stent slice set have the same arrangement order as the OCT images in the input image set, which avoids confusing the three-dimensional information of the stent and the vessel wall. The present embodiment constructs an asymmetric multi-task Transformer segmentation model to segment the stent and the vessel lumen of the image to be segmented, and the segmentation method of this embodiment is described in terms of this multi-task segmentation model; the concept of the invention is not limited thereto, however, and slice sets obtained by existing segmentation methods may also be used for the subsequent three-dimensional reconstruction.
Preferably, in some embodiments, as shown in fig. 3 and fig. 4, fig. 3 is a flow chart of a segmentation method of the multi-task segmentation model provided in the present embodiment, and fig. 4 is a schematic diagram of a multi-task segmentation model architecture provided in the present embodiment. The inputting the image set into a multi-task segmentation model to obtain a vessel lumen slice set and a stent slice set, comprising:
s21, obtaining an image to be segmented, and carrying out feature extraction on the image to be segmented to obtain a first feature map;
s22, inputting the first feature map into a first segmentation network to generate a first segmentation result and a second feature map;
s23, generating a query vector through the second feature map;
s24, inputting the first feature map into a transducer module for high-dimensional feature encoding to generate a third feature map, and simultaneously, acquiring the query vector by the transducer module to guide the high-dimensional feature decoding;
s25, inputting the third characteristic diagram into a second segmentation network to generate a second segmentation result.
And for the step S21, acquiring an image to be segmented, and carrying out feature extraction on the image to be segmented to acquire a first feature map.
It should be understood that this embodiment describes the segmentation task for OCT images of a stented cerebral vessel. The image to be segmented is one of a series of OCT images of the stented cerebral vessel; the first segmentation target is the vessel lumen or vessel wall in the image, and the second segmentation target is the stent in the image. A primary feature map of the image to be segmented is extracted first, where the feature extraction is generally performed with a convolutional neural network (CNN).
Preferably, in some embodiments, the acquiring an image to be segmented, and extracting features of the image to be segmented to obtain a first feature map, includes:
the image to be segmented is input into a CNN convolutional neural network, as shown in fig. 5, and fig. 5 is a schematic structural diagram of the CNN convolutional neural network according to the first embodiment of the present invention. The CNN convolutional neural network comprises an initialization module and M first residual modules, wherein the initialization module is used for extracting initial characteristics of an image to be segmented, the first residual modules comprise a pooling layer and a first convolution layer, the pooling layer is used for downsampling the output of the initialization module or the previous first residual modules, and the first convolution layer is used for carrying out convolution operation on the output of the pooling layer. In this embodiment, the value of M is 4.
Specifically, the convolutional neural network used by the invention serves as the basic network skeleton, i.e. the CNN backbone, to perform low-level feature extraction. Low-order visual features are first extracted by an initialization module: the original OCT image matrix is input into the initialization module, which comprises three convolution layers. Each convolution layer performs convolution and batch normalization (Batch Normalization, BN) on its input matrix, applies a linear rectification activation function (Rectified Linear Unit, ReLU), and passes the resulting feature matrix to the next convolution layer. After the three convolution layers, the extraction of the low-order visual features is complete and the initial features are obtained.
The initial features are then fed into the first residual modules that follow the initialization module for deeper feature extraction; each first residual module comprises a maximum pooling layer, two first convolution layers, a residual channel and a ReLU. After four downsampling and convolution stages, the high-dimensional features are output. The size of the input image matrix is 1x512x512 (CxHxW); after the 4 residual convolution blocks, the size of the output feature map is 512x32x32 (CxHxW), and this final image feature is provided to the subsequent modules for processing. It is noted that the output of each first residual module is, on the one hand, input into the subsequent first residual module (the fourth first residual module outputs the first feature map of size 512x32x32 (CxHxW)) and, on the other hand, also passed on via a skip connection.
In this embodiment, the extraction of the first feature map is implemented by using a CNN convolutional neural network.
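For illustration, the following is a minimal PyTorch sketch of the backbone described above (an initialization module of three convolution layers followed by four residual downsampling blocks). The channel widths, kernel sizes and class names are illustrative assumptions; the patent only fixes the input size of 1x512x512 and the output feature size of 512x32x32.

```python
import torch
import torch.nn as nn

class InitBlock(nn.Module):
    """Initialization module: three conv + BN + ReLU layers extracting low-order visual features."""
    def __init__(self, in_ch=1, out_ch=32):
        super().__init__()
        layers, ch = [], in_ch
        for _ in range(3):
            layers += [nn.Conv2d(ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True)]
            ch = out_ch
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)

class ResidualDown(nn.Module):
    """First residual module: max pooling, two conv layers, a residual (1x1) channel and ReLU."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.pool = nn.MaxPool2d(2)
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch))
        self.skip = nn.Conv2d(in_ch, out_ch, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.pool(x)
        return self.act(self.conv(x) + self.skip(x))

class Backbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.init = InitBlock(1, 32)
        chs = [32, 64, 128, 256, 512]
        self.stages = nn.ModuleList([ResidualDown(chs[i], chs[i + 1]) for i in range(4)])

    def forward(self, x):
        feats = [self.init(x)]
        for stage in self.stages:
            feats.append(stage(feats[-1]))
        return feats              # intermediate skip features plus the final 512x32x32 first feature map

print([f.shape for f in Backbone()(torch.randn(1, 1, 512, 512))][-1])  # torch.Size([1, 512, 32, 32])
```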
The first feature map is then input into the corresponding processing modules of S22 and S24 for processing, respectively. It should be understood that S22 and S24 are merely step differentiation and not sequencing.
S22, inputting the first feature map into a first segmentation network to generate a first segmentation result and a second feature map.
In this embodiment, the stent and the vessel lumen are both segmented. The segmentation task for the vessel lumen (vessel wall) is comparatively simple, so the vessel lumen is segmented directly by the first segmentation network from the first feature map, and the generated first segmentation result, i.e. the vessel lumen slice, is further processed in S23. The first segmentation network and the second segmentation network used subsequently for stent segmentation are both segmentation networks for a specific target.
Preferably, in some embodiments, the first segmentation network and the second segmentation network have the same segmentation network structure, which is shown in fig. 6; fig. 6 is a schematic diagram of the segmentation network structure according to the embodiment of the first aspect of the present invention. The segmentation network structure includes M second residual modules and an output module. Each second residual module comprises an upsampling layer, a fusion layer and a second convolution layer; the upsampling layer upsamples the acquired features, the fusion layer fuses the output of the upsampling layer with the output of the corresponding first residual module, and the second convolution layer performs a convolution operation on the output of the fusion layer. The output module includes a third convolution layer and a Sigmoid function.
In particular, the first segmentation network for the vessel lumen is also referred to as the first segmentation head shown in fig. 4, and the second segmentation network for the stent is also referred to as the second segmentation head shown in fig. 4. The actual function of the second residual module is to decode the first feature map, so it may also be called a decoding convolution block. The second residual module first upsamples by bilinear interpolation and fuses the result with the features from the first residual module, where the size of the feature map output by the CNN must match the size of the feature map output by the second residual module; for example, the first second residual module is fused, after upsampling, with the output of the third first residual module. The fused features are then processed by two second convolution layers, a residual connection is applied between the fused features and the output of the two second convolution layers, and the result is activated by a ReLU.
After decoding by the four second residual modules, the channel dimension of the feature map is reduced and its spatial size is restored to that of the input image. The feature map is then input to the third convolution layer of the output module, which comprises a convolution layer with a 3x3 kernel and a convolution layer with a 1x1 kernel; after the convolutions, the classification probability of each pixel is output by a Sigmoid function, and the segmentation result is obtained after thresholding.
In this embodiment, the precision of the segmentation result is further improved through multi-level decoding and encoding and feature fusion.
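A minimal sketch of one decoding (second residual) block and the output module described above, assuming the same PyTorch setting as the backbone sketch; the channel counts, the fusion-by-concatenation choice and the 0.5 threshold are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecodeBlock(nn.Module):
    """One second residual module: bilinear upsampling, fusion with the matching skip feature,
    two convolution layers and a residual connection."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.fuse = nn.Conv2d(in_ch + skip_ch, out_ch, 1)          # fusion layer
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x, skip):
        x = F.interpolate(x, size=skip.shape[-2:], mode="bilinear", align_corners=False)
        fused = self.fuse(torch.cat([x, skip], dim=1))
        return self.act(self.conv(fused) + fused)                  # residual connection then ReLU

class OutputHead(nn.Module):
    """Third convolution layer (3x3 then 1x1) followed by Sigmoid and thresholding."""
    def __init__(self, in_ch):
        super().__init__()
        self.conv3 = nn.Conv2d(in_ch, in_ch, 3, padding=1)
        self.conv1 = nn.Conv2d(in_ch, 1, 1)

    def forward(self, x, threshold=0.5):
        prob = torch.sigmoid(self.conv1(self.conv3(x)))            # per-pixel classification probability
        return (prob > threshold).float(), prob

out = DecodeBlock(512, 256, 256)(torch.randn(1, 512, 32, 32), torch.randn(1, 256, 64, 64))
print(out.shape)                                                   # torch.Size([1, 256, 64, 64])
```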
S23, generating a query vector through the second feature map.
In some embodiments, preferably, the generating a query vector by the second feature map includes:
feature mapping is carried out through a convolution kernel with the size of 1x1;
acquiring characteristics in the channel dimension through global average pooling;
performing dimension expansion according to the number of dimensions required by the transducer module;
random noise perturbations are added to each dimension separately to form the query vector.
Specifically, in order to make the model's segmentation of the stent more accurate, the query generated from the lumen segmentation is used as a query vector and input into the Transformer decoder to guide the model to locate and segment the stent region more accurately. The processing of the query vector generation module is shown in fig. 7, which is a schematic diagram of the architecture of the query vector generation module according to the embodiment of the first aspect of the present invention. The output of the lumen segmentation head is feature-mapped by a two-dimensional convolution with a kernel size of 1, the features in the channel dimension are obtained by global average pooling (Global Average Pooling, GAP), dimension expansion is performed according to the number of query vectors, noise disturbance is added, and the final query feature vector is input into the Transformer decoder to guide the stent segmentation.
In this embodiment, by generating a query vector for guiding stent segmentation according to the feature map of the vessel lumen segmentation result, the accuracy of stent segmentation and positioning can be improved.
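A minimal sketch of the query-vector generation described above (1x1 convolutional feature mapping, global average pooling, expansion to the number of decoder queries, additive noise); the embedding size, query count and noise scale are illustrative assumptions.

```python
import torch
import torch.nn as nn

class QueryGenerator(nn.Module):
    def __init__(self, in_ch=64, embed_dim=256, num_queries=16, noise_std=0.1):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, embed_dim, kernel_size=1)     # 1x1 feature mapping
        self.num_queries = num_queries
        self.noise_std = noise_std

    def forward(self, lumen_feat):                                 # (B, in_ch, H, W) second feature map
        x = self.proj(lumen_feat)
        x = x.mean(dim=(2, 3))                                     # global average pooling over H and W
        q = x.unsqueeze(1).expand(-1, self.num_queries, -1)        # expand to the number of queries
        return q + torch.randn_like(q) * self.noise_std            # per-query random noise perturbation

queries = QueryGenerator()(torch.randn(2, 64, 128, 128))
print(queries.shape)                                               # torch.Size([2, 16, 256])
```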
S24, inputting the first feature map into a Transformer module for high-dimensional feature encoding to generate a third feature map, while the Transformer module acquires the query vector to guide the high-dimensional feature decoding.
Preferably, in some embodiments, as shown in fig. 8, fig. 8 is a schematic diagram of the network architecture of the Transformer module according to the embodiment of the first aspect of the present invention. The Transformer module comprises a feature mapping unit, a Transformer encoder, a Transformer decoder and a multi-head attention module;
the inputting the first feature map into a Transformer module for high-dimensional feature encoding to generate a third feature map, and the Transformer module obtaining the query vector to guide the high-dimensional feature decoding, includes:
performing feature mapping on the first feature map to generate a feature vector, and performing position encoding on the feature vector;
the feature vector is acquired by the Transformer encoder to generate a first feature code;
the first feature code and the query vector are acquired by the Transformer decoder to generate a second feature code;
the first feature code and the second feature code are acquired by the multi-head attention module to generate a high-dimensional feature;
the third feature map is generated by de-mapping from the high-dimensional feature.
Specifically, the Transformer encoder-decoder module consists of 3 Transformer encoders and 2 Transformer decoders. During high-dimensional encoding, the feature map extracted by the CNN is first converted into image feature vectors by feature mapping; after the feature position encoding is added, the vectors are input into the Transformer encoder for higher-dimensional encoding, and the encoded result is input into the Transformer decoder for decoding. At the same time, the Transformer decoder receives the query vector generated from the lumen segmentation result as segmentation guidance, and finally the Transformer decoder outputs the high-dimensional feature code.
Preferably, in some embodiments, the generating the high-dimensional feature further includes:
and carrying out feature fusion on the high-dimensional features and the feature vectors to generate fusion features, and generating the third feature map through reflection according to the fusion features.
In this embodiment, the transform is used to perform high-dimensional feature encoding and decoding, and the segmentation effect is enhanced by the spatial correlation characteristic of the segmentation target itself. Meanwhile, the generated high-dimensional features and low-dimensional features are fused and then segmented, so that the segmentation accuracy can be further improved.
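A minimal sketch of the Transformer encoder-decoder stage described above, using the standard PyTorch Transformer layers (3 encoder layers, 2 decoder layers, a multi-head attention module, and de-mapping back to a feature map). The embedding size, head count, learned positional encoding and the way the multi-head attention combines the first and second feature codes are illustrative assumptions, and the optional fusion with the low-dimensional features is omitted.

```python
import torch
import torch.nn as nn

class TransformerStage(nn.Module):
    def __init__(self, in_ch=512, embed_dim=256, num_heads=8, hw=32 * 32):
        super().__init__()
        self.in_proj = nn.Linear(in_ch, embed_dim)                 # feature mapping to tokens
        self.pos = nn.Parameter(torch.zeros(1, hw, embed_dim))     # positional encoding
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(embed_dim, num_heads, batch_first=True), num_layers=3)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(embed_dim, num_heads, batch_first=True), num_layers=2)
        self.cross = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.out_proj = nn.Linear(embed_dim, in_ch)                # de-mapping back to a feature map

    def forward(self, feat, queries):                              # feat: (B, C, H, W) first feature map
        b, c, h, w = feat.shape
        tokens = self.in_proj(feat.flatten(2).transpose(1, 2)) + self.pos
        enc = self.encoder(tokens)                                 # first feature code
        dec = self.decoder(queries, enc)                           # second feature code, guided by the queries
        attended, _ = self.cross(enc, dec, dec)                    # multi-head attention over both codes
        return self.out_proj(attended).transpose(1, 2).reshape(b, c, h, w)  # third feature map

feat = torch.randn(1, 512, 32, 32)
queries = torch.randn(1, 16, 256)                                  # from the query generation module
print(TransformerStage()(feat, queries).shape)                     # torch.Size([1, 512, 32, 32])
```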
S25, inputting the third characteristic diagram into a second segmentation network to generate a second segmentation result.
The structure of the second split network is similar to that of the first split network, and will not be described here again.
The embodiment also provides a multi-task segmentation model training method, as shown in fig. 9, including:
constructing a training data set, wherein the training data set comprises a plurality of training images with a first segmentation target and a second segmentation target, the training images are provided with a first label and a second label, the first label is a labeling area of the first segmentation target, and the second label is a labeling area of the second segmentation target;
extracting features of the training image to obtain a first feature map;
inputting the first feature map into a first segmentation network to generate a first segmentation result and a second feature map;
generating a query vector through the second feature map;
inputting the first feature map into a Transformer module for high-dimensional feature encoding to generate a third feature map, and simultaneously, acquiring the query vector by the Transformer module to guide the high-dimensional feature decoding;
inputting the third feature map into a second segmentation network to generate a second segmentation result;
determining a first loss value according to the difference between the first segmentation result and the first label, determining a second loss value according to the difference between the second segmentation result and the second label, and performing joint training based on the first loss value and the second loss value to obtain the multi-task segmentation model, wherein the multi-task segmentation model is used for segmenting the first segmentation target and the second segmentation target in an image to be segmented.
Specifically, the training dataset, i.e. the OCT cerebrovascular dataset in this embodiment, is manually labeled; after the labeling is completed and verified, it is made into a dataset for the deep learning algorithm. The labeling software is the open-source medical labeling tool ITK-SNAP, and the cerebrovascular lumen and the stent are labeled with its painting tool (Paint Brush). The labeled samples total 15 cases, i.e. 15 original Digital Imaging and Communications in Medicine (DICOM) files are used in total. The labeled objects are the vessel lumen and the stent; in order to handle the overlap between the vessel lumen region and the stent region, the two targets are labeled independently. The manual labeling results of the cerebral vessel lumen and the stent are shown in fig. 10, where the areas indicated by arrows are the mask of the vessel lumen, i.e. the first label, and the mask of the stent, i.e. the second label.
In the present embodiment, when creating the image set, the original OCT images and the labels are organized into a dataset in units of DICOM samples. The dataset comprises 15 samples and a total of 2458 images. 11 samples (1855 images, 75%) are randomly selected for training; 2 samples (373 images, 15%) are randomly selected for validation during training; the remaining 2 samples (230 images, 10%) are used as an independent test set. During training, consecutive image slices from a single DICOM sample are fed into the network for training and testing.
For the structure of the model, see the relevant description of the embodiments of the first aspect, which is not repeated here.
Preferably, the loss of the network model, including the first loss value and the second loss value, is determined by a combination of the binary cross-entropy loss (Binary Cross Entropy Loss, BCELoss) and the Tversky loss, where BCELoss is used for the classification of pixels and the Tversky loss is used to balance the positive and negative samples.
In particular, the binary cross-entropy loss is
$L_{BCE} = -\frac{1}{N}\sum_{i=1}^{N}\left[\,y_i\log p_i + (1-y_i)\log(1-p_i)\,\right]$,
where $y_i$ is the label, $p_i$ is the predicted value of the model, and $N$ is the number of total pixels.
The Tversky loss is
$L_{Tversky} = 1 - \dfrac{\sum_{i=1}^{N} p_i y_i}{\sum_{i=1}^{N} p_i y_i + \alpha \sum_{i=1}^{N} p_i (1-y_i) + \beta \sum_{i=1}^{N} (1-p_i) y_i}$,
where $y_i$ is the label, $p_i$ is the predicted value of the model, $N$ is the number of total pixels, and $\alpha$ and $\beta$ are the corresponding weights.
The first loss value or the second loss value may then be expressed as
$L = L_{BCE} + \lambda\, L_{Tversky}$,
where $\lambda$ is a weight parameter.
In this embodiment, the first loss value, i.e. the loss of the vessel lumen segmentation, and the second loss value, i.e. the loss of the stent segmentation, are added and jointly optimized. The network parameters are optimized by stochastic gradient descent, and the Adam optimizer, which converges faster, is used for rapid optimization of the model parameters.
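A minimal sketch of the joint loss described above, assuming the standard definitions of binary cross-entropy and Tversky loss; the weights alpha, beta and lam are illustrative, and pred is expected to be per-pixel probabilities in [0, 1].

```python
import torch
import torch.nn.functional as F

def tversky_loss(pred, target, alpha=0.5, beta=0.5, eps=1e-6):
    tp = (pred * target).sum()
    fp = (pred * (1 - target)).sum()
    fn = ((1 - pred) * target).sum()
    return 1 - tp / (tp + alpha * fp + beta * fn + eps)

def branch_loss(pred, target, lam=1.0):
    # pred: per-pixel probabilities in [0, 1]; target: binary labels of the same shape
    return F.binary_cross_entropy(pred, target) + lam * tversky_loss(pred, target)

def total_loss(lumen_pred, lumen_gt, stent_pred, stent_gt):
    # first loss value (lumen branch) + second loss value (stent branch), optimized jointly
    return branch_loss(lumen_pred, lumen_gt) + branch_loss(stent_pred, stent_gt)
```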
Preferably, in some embodiments, after the obtaining of the set of vessel lumen slices, the method further comprises post-processing the set of vessel lumen slices, comprising:
and extracting the maximum communication area of the blood vessel inner cavity segmentation result in the blood vessel inner cavity slice so as to filter out the blood vessel inner cavity over-segmentation result in the blood vessel inner cavity slice.
Because the cerebral vessel lumen occupies a large area and some information may be lost (the inner wall may be partially missing due to the opacity of the catheter), the vessel lumen may be over-segmented. In this embodiment, the over-segmentation is removed by extracting the maximum connected region. According to observation, the over-segmentation is often caused by small cavities between the vessel lumen and the brain tissue that resemble the cerebral vessel lumen but do not form a near-circular region; the model tends to over-segment such regions, so the segmentation result of the cerebral vessel lumen is processed by extracting its maximum connected region.
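A minimal sketch of the maximum-connected-region post-processing described above; the use of scikit-image is an implementation choice not specified by the patent.

```python
import numpy as np
from skimage import measure

def keep_largest_region(mask: np.ndarray) -> np.ndarray:
    """Keep only the largest connected foreground component of a binary lumen mask."""
    labels = measure.label(mask > 0)
    if labels.max() == 0:                                   # empty mask: nothing to keep
        return np.zeros_like(mask)
    regions = measure.regionprops(labels)
    largest = max(regions, key=lambda r: r.area).label
    return (labels == largest).astype(mask.dtype)
```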
Preferably, in some embodiments, after the obtaining of the vessel lumen slice set and the stent slice set, the method further comprises post-processing the stent slice set, comprising:
and constraining the stent segmentation result of the stent slice according to the vessel lumen segmentation result of the vessel lumen slice so as to filter out the stent over-segmentation result positioned outside the vessel lumen segmentation result.
Due to the particular structure of the stent, it occupies only small areas within the lumen. Because of OCT imaging, small fragmented regions resembling stent points are also formed outside part of the vessel lumen, which causes stent over-segmentation outside the lumen. Based on the premise that the stent points lie within the vessel lumen, the stent segmentation result is constrained by the cerebral vessel lumen segmentation result, and only the stent segmentation results inside the vessel lumen are retained.
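A minimal sketch of the lumen constraint described above: stent pixels predicted outside the (post-processed) lumen mask are simply discarded.

```python
import numpy as np

def constrain_stent_to_lumen(stent_mask: np.ndarray, lumen_mask: np.ndarray) -> np.ndarray:
    """Discard stent segmentation results that fall outside the vessel lumen mask."""
    return stent_mask * (lumen_mask > 0)
```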
Preferably, in some embodiments, after the obtaining of the vessel lumen slice set and the stent slice set, the method further comprises post-processing the vessel lumen slice set and the stent slice set, comprising:
and filling the lost information between the sections by adopting a linear interpolation algorithm between the adjacent sections in the blood vessel inner cavity section set and the stent section set.
Because of the nature of tomographic imaging, the OCT technique loses part of the continuity information while the catheter is moving through the blood vessel during imaging. Given the continuity of the cerebral vessel and the stent in three-dimensional space, and in order to better reconstruct their three-dimensional models, a linear interpolation algorithm is applied along the Z axis (i.e. the axis of catheter motion, or the axis of the vessel lumen) to fill in the lost Z-axis information and increase the Z-axis resolution.
Furthermore, in order to give the reconstruction both continuity and accuracy, three-dimensional median filtering is applied to the final result on top of the Z-axis interpolation, so that the reconstruction result has better continuity.
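A minimal sketch of the Z-axis linear interpolation and three-dimensional median filtering described above, using SciPy; the zoom factor and filter size are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import zoom, median_filter

def densify_and_smooth(volume: np.ndarray, z_factor: int = 4, filter_size: int = 3) -> np.ndarray:
    """volume: (Z, H, W) stack of binary slices in acquisition order."""
    dense = zoom(volume.astype(float), (z_factor, 1, 1), order=1)  # linear interpolation along Z only
    smooth = median_filter(dense, size=filter_size)                # three-dimensional median filtering
    return (smooth > 0.5).astype(np.uint8)
```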
Regarding step three, three-dimensional reconstruction is performed on the vessel lumen slice set to obtain a vessel inner wall model, and on the stent slice set to obtain a stent network model.
The post-processed segmentation results are stacked in order into three-dimensional data, and three-dimensional reconstruction is then performed. To complete the stent wall adherence measurement, the inner wall of the cerebral vessel must be obtained; the outer contour extracted directly from the cerebral vessel lumen can serve as the edge of the cerebral vessel inner wall, so the invention uses the outer contour of the three-dimensional reconstruction of the cerebral vessel lumen as the cerebral vessel inner wall, in preparation for the stent wall adherence measurement in the next step. Fig. 11 shows the three-dimensional reconstruction result of the cerebral vessel inner wall according to the present embodiment, and fig. 12 shows the three-dimensional reconstruction result of the stent network model according to the present embodiment.
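As an illustration of how the stacked masks could be turned into surface models, the following sketch uses the marching-cubes implementation in scikit-image; the patent does not prescribe a particular surface extraction algorithm, so this is an assumed choice.

```python
import numpy as np
from skimage import measure

def reconstruct_surface(volume: np.ndarray, spacing=(1.0, 1.0, 1.0)):
    """volume: (Z, H, W) binary volume; returns the vertices and faces of its iso-surface."""
    verts, faces, normals, values = measure.marching_cubes(
        volume.astype(float), level=0.5, spacing=spacing)
    return verts, faces

# The outer surface of the stacked lumen masks plays the role of the vessel inner wall,
# and the same call on the stent volume gives the stent network surface.
```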
At this time, the three-dimensional reconstruction result of the stent is a continuous model constructed according to the slice information, so that only an integral stent network model is formed, instead of a plurality of independent stent wires, and the wall attachment measurement calculation of a single stent wire cannot be performed.
Preferably, in some embodiments, step four, the extracting a plurality of independent stent wire three-dimensional information according to the stent network model includes:
extracting a skeleton network of the stent network model through the NetworkX function library;
and scanning the skeleton network point by point and dividing the skeleton network into a plurality of independent stent wire backbones at the crossing points of the skeleton network, to serve as the three-dimensional information of the independent stent wires, wherein each independent stent wire backbone comprises nodes and edges, the nodes comprise the scanning starting point, the scanning ending point and the crossing points of each independent stent wire backbone, and two adjacent nodes are connected by an edge.
Because the three-dimensional stent has a mesh structure, in order to extract the stent structure and analyze its adherence, this embodiment extracts the network structure information of the reconstructed stent using the open-source NetworkX function library and processes the extracted stent wires as a network graph. First, the skeleton of the three-dimensional stent information is extracted. After the skeleton is extracted, the network skeleton is scanned point by point and a continuous foreground is defined as an edge; when a stent edge bifurcates, it is cut at the bifurcation point into two different independent stent wires, and the bifurcation point serves as a stent node, as shown in fig. 13. The stent starting point and ending point also serve as nodes of the stent.
After this scanning and graph modelling of the three-dimensional stent information, complete three-dimensional stent network model data are obtained, and finally the three-dimensional information of a plurality of independent stent wires is obtained, in preparation for the stent wall adherence measurement. The extracted three-dimensional information of the independent stent wires is shown in fig. 14.
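A minimal sketch of the stent-wire extraction described above: the reconstructed stent volume is skeletonized, skeleton voxels are connected into a NetworkX graph, and the graph is cut at intersection nodes (degree greater than 2) so that each remaining chain is one independent stent wire. The 26-neighbourhood graph construction is an illustrative reading of the description, and a scikit-image version whose skeletonize supports 3D input is assumed.

```python
import numpy as np
import networkx as nx
from skimage.morphology import skeletonize

def extract_stent_wires(stent_volume: np.ndarray):
    """stent_volume: (Z, H, W) binary stent reconstruction; returns one voxel list per wire."""
    skeleton = skeletonize(stent_volume > 0)                  # 3D skeleton of the stent network
    voxels = {tuple(c) for c in np.argwhere(skeleton)}

    graph = nx.Graph()
    graph.add_nodes_from(voxels)
    offsets = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dz, dy, dx) != (0, 0, 0)]
    for z, y, x in voxels:                                    # connect 26-neighbouring skeleton voxels
        for dz, dy, dx in offsets:
            if (z + dz, y + dy, x + dx) in voxels:
                graph.add_edge((z, y, x), (z + dz, y + dy, x + dx))

    crossings = [n for n, degree in graph.degree() if degree > 2]
    pruned = graph.copy()
    pruned.remove_nodes_from(crossings)                       # cut the skeleton at its crossing points
    # each connected component that remains is treated as one independent stent wire
    return [sorted(component) for component in nx.connected_components(pruned)]
```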
The calculation of the stent apposition metric may then be performed by step five, preferably, in some embodiments, the calculating the independent stent wire apposition metric distance from the independent stent wire three-dimensional information and the vessel inner wall model includes:
repeating steps 51 and 52 to traverse at least a portion of the stent points on the individual stent wires;
step 51, extracting support points on the independent support wires according to the three-dimensional information of the independent support wires and obtaining axial slices where the support points are located;
step 52, determining a blood vessel inner wall profile corresponding to the stent point according to the axial slice, determining a stent point profile according to the diameter of the independent stent wire, and calculating the maximum distance, the minimum distance and the average distance between the stent point profile and the blood vessel inner wall profile;
the wall-attaching measurement distance of the independent stent wire is calculated, wherein the wall-attaching measurement distance comprises an average maximum wall-attaching distance, an average minimum wall-attaching distance and an average wall-attaching distance, the average maximum wall-attaching distance is the average value of the maximum distance of each stent point, the average minimum wall-attaching distance is the average value of the minimum distance of each stent point, and the average wall-attaching distance is the average value of the average distance of each stent point.
From the three-dimensional reconstruction of the cerebral vessel lumen and the stent and the extraction of the stent wires, the structural information of the stent wires and of the cerebral vessel inner wall is obtained, so that the Euclidean distance from a point on a stent wire to the contour of the cerebral vessel lumen can be calculated. Since the stent wires have a certain diameter, three distances are chosen in one embodiment for each stent point, namely the maximum, minimum and average distance between the stent-point contour and the vessel inner wall; finally, these values are averaged over all points to obtain the average adherence distance, the maximum adherence distance and the minimum adherence distance of one stent wire.
In a preferred embodiment, fig. 15 is a schematic diagram of calculating the adherence distance of a point on the stent: the distances between the contour of the stent point area and the vessel inner wall are calculated first, then the maximum and minimum distances are taken, and finally the average distance is obtained. On the basis of the adherence distance of a single stent point, the adherence of one stent wire is calculated as shown in fig. 16: the adherence of each point on the stent wire is calculated, and the average of these per-point values gives the adherence of the single stent wire.
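A minimal sketch of the per-wire adherence metric described above: for each point of a wire, the stent-point contour and the vessel inner-wall contour of the same axial slice are extracted, the distance of every strut contour point to the wall is computed, the per-point maximum, minimum and mean are taken, and these are averaged over the wire. The contour extraction, the fixed wire_radius window and the nearest-point distance definition are illustrative assumptions.

```python
import numpy as np
from skimage import measure
from scipy.spatial.distance import cdist

def slice_contour(mask_2d: np.ndarray) -> np.ndarray:
    """Return (N, 2) coordinates of the largest contour in a binary slice."""
    contours = measure.find_contours(mask_2d.astype(float), 0.5)
    return max(contours, key=len)

def wire_apposition(wire_points, lumen_volume, stent_volume, wire_radius=1):
    """wire_points: list of (z, y, x) skeleton voxels of one independent stent wire."""
    per_point = []
    for z, y, x in wire_points:
        wall = slice_contour(lumen_volume[z])                    # vessel inner-wall contour
        ylo, yhi = max(y - wire_radius, 0), y + wire_radius + 1
        xlo, xhi = max(x - wire_radius, 0), x + wire_radius + 1
        patch = np.zeros_like(stent_volume[z])
        patch[ylo:yhi, xlo:xhi] = stent_volume[z, ylo:yhi, xlo:xhi]
        if patch.sum() == 0:                                     # no stent pixels at this point
            continue
        strut = slice_contour(patch)                             # stent-point contour
        nearest = cdist(strut, wall).min(axis=1)                 # distance of each strut point to the wall
        per_point.append((nearest.max(), nearest.min(), nearest.mean()))
    per_point = np.array(per_point)
    return {"average_max": per_point[:, 0].mean(),               # average maximum adherence distance
            "average_min": per_point[:, 1].mean(),               # average minimum adherence distance
            "average_mean": per_point[:, 2].mean()}              # average adherence distance
```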
In a preferred embodiment, the following steps are provided with respect to stent wire apposition measurement and display:
1. The cerebral vessel inner wall and the stent wires are obtained by three-dimensional reconstruction; taking a single stent wire as the unit, the points on the stent wire are extracted one by one and the position of the slice where each point lies, i.e. the index of that slice in the sequence, is obtained.
2. The apposition of the stent contour is calculated at that position: first the stent point region is obtained and its outer contour is extracted; then the distances from the stent contour points to the cerebral vessel inner wall are calculated, the maximum and minimum of all these distances are obtained, and finally the average distance is calculated.
3. The above three distances are averaged over all points on the stent wire as the apposition measure of that wire. A statistical analysis over all stent wires then gives the proportion of wires falling in different apposition intervals, which characterizes the overall apposition (a minimal sketch of this interval statistic follows this list).
4. Stent wires with different apposition are displayed according to the apposition interval in which each wire falls; the stent apposition measurement result is shown in fig. 17.
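As a companion to step 3, the sketch below shows how the proportion of stent wires per apposition interval could be computed; the interval edges are illustrative assumptions, not values taken from the embodiment.

```python
import numpy as np


def apposition_proportions(wire_distances, edges=(0.0, 0.2, 0.5, 1.0, np.inf)):
    """Fraction of stent wires falling into each apposition interval."""
    counts, _ = np.histogram(np.asarray(wire_distances), bins=np.asarray(edges))
    return counts / max(len(wire_distances), 1)


# Example: three well-apposed wires and one clearly malapposed wire.
print(apposition_proportions([0.10, 0.15, 0.30, 1.40]))  # -> [0.5 0.25 0. 0.25]
```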
The parameters and calculation methods of this embodiment for stent wire apposition measurement are given above, but the invention is not limited thereto; other apposition parameters may be calculated from the three-dimensional information of the stent wires and the cerebral vessel inner wall according to actual requirements.
An embodiment of the second aspect of the present invention provides a stent apposition measurement device based on medical images, as shown in fig. 18, including the following modules (a minimal composition sketch follows this list):
an acquisition module for acquiring an image set comprising a number of sequentially arranged axial slice images of a vessel lumen having a stent;
a segmentation module for inputting the image set into a multi-task segmentation model to obtain a blood vessel inner cavity slice set and a stent slice set;
the reconstruction module is used for carrying out three-dimensional reconstruction according to the blood vessel inner cavity slice set to obtain a blood vessel inner wall model, and carrying out three-dimensional reconstruction according to the stent slice set to obtain a stent network model;
the extraction module is used for extracting three-dimensional information of a plurality of independent stent wires according to the stent network model;
and the measurement module is used for calculating the average distance from each point on each independent stent wire to the blood vessel inner wall model according to the three-dimensional information of the independent stent wires and the blood vessel inner wall model so as to obtain the adherence measurement distance.
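For orientation only, the sketch below shows one way the five modules could be composed into a single device object; the module callables (`acquire`, `segment`, `reconstruct`, `extract_wires`, `measure`) are hypothetical placeholders for the implementations described in the method embodiments, not names disclosed in the patent.

```python
class StentAppositionDevice:
    """Composes the acquisition, segmentation, reconstruction, extraction and
    measurement modules into one pipeline object."""

    def __init__(self, acquire, segment, reconstruct, extract_wires, measure):
        self.acquire = acquire              # acquisition module
        self.segment = segment              # multi-task segmentation module
        self.reconstruct = reconstruct      # three-dimensional reconstruction module
        self.extract_wires = extract_wires  # independent stent-wire extraction module
        self.measure = measure              # apposition measurement module

    def run(self, image_source):
        images = self.acquire(image_source)
        lumen_slices, stent_slices = self.segment(images)
        wall_model, stent_model = self.reconstruct(lumen_slices, stent_slices)
        wires = self.extract_wires(stent_model)
        # One apposition measurement distance per independent stent wire.
        return [self.measure(wire, wall_model) for wire in wires]
```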
An embodiment of a third aspect of the present invention provides an electronic device, as shown in fig. 19, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, implements a stent apposition measurement method based on medical images according to any of the embodiments above.
An embodiment of a fourth aspect of the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements a stent apposition measurement method based on medical images as described in any of the embodiments above.
Computer-readable storage media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media) such as modulated data signals and carrier waves.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of function in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The foregoing description of the embodiments has been provided to illustrate the general principles of the invention and is not intended to limit the scope of the invention or to restrict the invention to the particular embodiments; any modifications, equivalents, improvements, etc. made within the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (9)

1. A stent apposition measurement method based on medical images, comprising the steps of:
acquiring an image set comprising a number of sequentially arranged axial slice images of a vessel lumen having a stent;
inputting the image set into a multi-task segmentation model to obtain a blood vessel inner cavity slice set and a stent slice set, wherein the respective slice arrangement sequences of the blood vessel inner cavity slice set and the stent slice set correspond to the image set;
carrying out three-dimensional reconstruction by stacking the blood vessel inner cavity slice set in the slice arrangement sequence to obtain a blood vessel inner wall model, and carrying out three-dimensional reconstruction by stacking the stent slice set in the slice arrangement sequence to obtain a stent network model;
extracting three-dimensional information of a plurality of independent stent wires according to the stent network model, wherein the extracting comprises:
extracting a skeleton network of the stent network model through a NetworkX function library;
carrying out point-by-point scanning on the skeleton network, and dividing the skeleton network into a plurality of independent stent wire backbones at the crossing points of the skeleton network to serve as the three-dimensional information of the independent stent wires, wherein each independent stent wire backbone comprises nodes and edges, the nodes comprise a scanning starting point, a scanning end point and the crossing points of the independent stent wire backbone, and two adjacent nodes are connected by an edge;
And calculating the average distance from each point on each independent stent wire to the blood vessel inner wall model according to the three-dimensional information of the independent stent wires and the blood vessel inner wall model so as to obtain the wall adherence measurement distance.
2. The medical image-based stent apposition measurement method according to claim 1, wherein: after the obtaining of the blood vessel inner cavity slice set, the method further comprises post-processing of the blood vessel inner cavity slice set, comprising:
extracting the largest connected region of the blood vessel inner cavity segmentation result in the blood vessel inner cavity slice so as to filter out the blood vessel inner cavity over-segmentation result in the blood vessel inner cavity slice.
3. The medical image-based stent apposition measurement method according to claim 1 or 2, wherein: after the obtaining of the blood vessel inner cavity slice set and the stent slice set, the method further comprises post-processing of the stent slice set, comprising:
constraining the stent segmentation result of the stent slice according to the blood vessel inner cavity segmentation result of the blood vessel inner cavity slice so as to filter out the stent over-segmentation result outside the blood vessel inner cavity segmentation result.
4. The medical image-based stent apposition measurement method according to claim 1, wherein: after the obtaining of the blood vessel inner cavity slice set and the stent slice set, the method further comprises post-processing of the blood vessel inner cavity slice set and the stent slice set, comprising:
filling in the missing information between slices by applying a linear interpolation algorithm between adjacent slices in the blood vessel inner cavity slice set and the stent slice set.
5. The medical image-based stent apposition measurement method according to claim 1, wherein: the calculating the wall-attaching measurement distance of the independent stent wire according to the three-dimensional information of the independent stent wire and the blood vessel inner wall model comprises the following steps:
repeating steps 51 and 52 to traverse at least a portion of the stent points on the individual stent wires;
step 51, extracting stent points on the independent stent wires according to the three-dimensional information of the independent stent wires and obtaining the axial slices where the stent points are located;
step 52, determining a blood vessel inner wall profile corresponding to the stent point according to the axial slice, determining a stent point profile according to the diameter of the independent stent wire, and calculating the maximum distance, the minimum distance and the average distance between the stent point profile and the blood vessel inner wall profile;
the wall-attaching measurement distance of the independent stent wire is calculated, wherein the wall-attaching measurement distance comprises an average maximum wall-attaching distance, an average minimum wall-attaching distance and an average wall-attaching distance, the average maximum wall-attaching distance is the average value of the maximum distance of each stent point, the average minimum wall-attaching distance is the average value of the minimum distance of each stent point, and the average wall-attaching distance is the average value of the average distance of each stent point.
6. The medical image-based stent apposition measurement method according to claim 1, wherein: the inputting the image set into a multi-task segmentation model to obtain a vessel lumen slice set and a stent slice set, comprising:
acquiring an image to be segmented, and carrying out feature extraction on the image to be segmented to acquire a first feature map;
inputting the first feature map into a first segmentation network to generate a first segmentation result and a second feature map;
generating a query vector through the second feature map;
inputting the first feature map into a Transformer module for high-dimensional feature encoding to generate a third feature map, and simultaneously acquiring, by the Transformer module, the query vector to guide the high-dimensional feature decoding;
the third feature map is input into a second segmentation network to generate a second segmentation result.
7. A stent apposition measurement device based on medical images, comprising:
an acquisition module for acquiring an image set comprising a number of sequentially arranged axial slice images of a vessel lumen having a stent;
a segmentation module for inputting the image set into a multi-task segmentation model to obtain a blood vessel inner cavity slice set and a stent slice set, wherein the respective slice arrangement sequences of the blood vessel inner cavity slice set and the stent slice set correspond to the image set;
a reconstruction module for carrying out three-dimensional reconstruction by stacking the blood vessel inner cavity slice set in the slice arrangement sequence to obtain a blood vessel inner wall model, and carrying out three-dimensional reconstruction by stacking the stent slice set in the slice arrangement sequence to obtain a stent network model;
the extraction module is used for extracting three-dimensional information of a plurality of independent stent wires according to the stent network model, and comprises the following steps:
extracting a skeleton network of the stent network model through a NetworkX function library;
carrying out point-by-point scanning on the skeleton network, and dividing the skeleton network into a plurality of independent stent wire backbones at the crossing points of the skeleton network to serve as the three-dimensional information of the independent stent wires, wherein each independent stent wire backbone comprises nodes and edges, the nodes comprise a scanning starting point, a scanning end point and the crossing points of the independent stent wire backbone, and two adjacent nodes are connected by an edge;
and the measurement module is used for calculating the average distance from each point on each independent stent wire to the blood vessel inner wall model according to the three-dimensional information of the independent stent wires and the blood vessel inner wall model so as to obtain the adherence measurement distance.
8. An electronic device comprising a memory and a processor, the memory storing a computer program that when executed by the processor implements the medical image-based stent apposition measurement method of any of claims 1-6.
9. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements the medical image-based stent apposition measurement method according to any one of claims 1-6.
CN202310624607.6A 2023-05-30 2023-05-30 Medical image-based stent adherence measurement method, device, equipment and medium Active CN116342608B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310624607.6A CN116342608B (en) 2023-05-30 2023-05-30 Medical image-based stent adherence measurement method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN116342608A (en) 2023-06-27
CN116342608B (en) 2023-08-15

Family

ID=86876343

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310624607.6A Active CN116342608B (en) 2023-05-30 2023-05-30 Medical image-based stent adherence measurement method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN116342608B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105559829A (en) * 2016-01-29 2016-05-11 任冰冰 Ultrasonic diagnosis and imaging method thereof
CN108280290A (en) * 2018-01-22 2018-07-13 青岛理工大学 A kind of aggregate numerical model method for reconstructing
CN110136157A (en) * 2019-04-09 2019-08-16 华中科技大学 A kind of three-dimensional carotid ultrasound image vascular wall dividing method based on deep learning
CN111476796A (en) * 2020-03-10 2020-07-31 西北大学 Semi-supervised coronary artery segmentation system and segmentation method combining multiple networks
CN113205538A (en) * 2021-05-17 2021-08-03 广州大学 Blood vessel image segmentation method and device based on CRDNet
CN113516309A (en) * 2021-07-12 2021-10-19 福州大学 OD flow direction clustering method based on multi-path graph cutting rule and ant colony optimization
CN115205298A (en) * 2022-09-19 2022-10-18 真健康(北京)医疗科技有限公司 Method and device for segmenting blood vessels of liver region
CN115908297A (en) * 2022-11-11 2023-04-04 大连理工大学 Topology knowledge-based blood vessel segmentation modeling method in medical image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11830227B2 (en) * 2020-05-12 2023-11-28 Lunit Inc. Learning apparatus and learning method for three-dimensional image

Also Published As

Publication number Publication date
CN116342608A (en) 2023-06-27

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant