WO2021213124A1 - Blood flow feature prediction method, apparatus, computer device and storage medium - Google Patents

Blood flow feature prediction method, apparatus, computer device and storage medium

Info

Publication number
WO2021213124A1
WO2021213124A1 PCT/CN2021/082629 CN2021082629W
Authority
WO
WIPO (PCT)
Prior art keywords
blood vessel
center point
image
center
sample
Prior art date
Application number
PCT/CN2021/082629
Other languages
English (en)
French (fr)
Inventor
李璟
马骏
兰宏志
郑凌霄
Original Assignee
深圳睿心智能医疗科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳睿心智能医疗科技有限公司
Publication of WO2021213124A1 publication Critical patent/WO2021213124A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/20 Design optimisation, verification or simulation
    • G06F30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/02007 Evaluating blood vessel condition, e.g. elasticity, compliance
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/026 Measuring blood flow
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30101 Blood vessel; Artery; Vein; Vascular
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30101 Blood vessel; Artery; Vein; Vascular
    • G06T2207/30104 Vascular flow; Blood flow; Perfusion
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Definitions

  • This application relates to a blood flow feature prediction method, device, computer equipment and storage medium.
  • Blood flow characteristics are important physiological indicators for doctors to assess the health of blood vessels. Blood flow characteristics include fractional flow reserve (FFR), pressure, and shear force. Predicting blood flow characteristics can assist in the diagnosis and treatment of vascular diseases.
  • FFR: Fractional Flow Reserve
  • CFD: Computational Fluid Dynamics
  • a blood flow feature prediction method including:
  • the centerline extraction model Based on the centerline extraction model, extract the centerline of the blood vessel in the three-dimensional blood vessel image to generate a blood vessel tree;
  • the centerline extraction model is a deep learning model trained in advance based on the Transformer network structure;
  • the blood flow feature prediction model processes the feature sequences corresponding to the center points in the input center point sequence in parallel, and uses a self-attention mechanism to predict the blood flow features.
  • a blood flow feature prediction device which includes:
  • the blood vessel tree generation module is used to extract the center line of the blood vessel in the three-dimensional blood vessel image based on the center line extraction model to generate the blood vessel tree;
  • the center line extraction model is a deep learning model trained in advance based on the Transformer network structure;
  • the blood flow feature prediction module is used to select a center point sequence from the centerlines in the blood vessel tree, starting from the blood vessel entrance and proceeding from the proximal end to the distal end, to input the feature sequence corresponding to each center point in the center point sequence into a blood flow feature prediction model trained on the Transformer network structure, and to predict the blood flow features along each center point;
  • the blood flow feature prediction model processes the feature sequences corresponding to the center points in the input center point sequence in parallel, and uses a self-attention mechanism to predict the blood flow features.
  • a computer device includes a memory and a processor, and a computer program is stored in the memory; when the processor executes the computer program, it performs the steps of the blood flow feature prediction method of each embodiment of the present application.
  • a computer-readable storage medium in which a computer program is stored; when a processor executes the computer program, it performs the steps of the blood flow feature prediction method of each embodiment of the present application.
  • FIG. 1 is a schematic flowchart of a method for predicting blood flow characteristics in an embodiment
  • Figure 2 is a schematic structural diagram of a Transformer network in an embodiment
  • Figure 3 is a schematic diagram of the structure of an encoder and a decoder in an embodiment
  • FIG. 4 is a schematic diagram of the flow of vascular tree generation in an embodiment
  • Fig. 5 is a schematic diagram of the center point at the entrance of the blood vessel in an embodiment
  • Figure 6 is a schematic diagram of centerline extraction in an embodiment
  • Fig. 7 is a training schematic diagram of a centerline extraction model in an embodiment
  • Figure 8 is a schematic diagram of centerline correction in an embodiment
  • Fig. 9 is a training schematic diagram of a centerline correction model in an embodiment
  • Figure 10 is a flow chart of centerline extraction and vascular tree generation in an embodiment
  • Figure 11 is a schematic diagram of blood flow feature prediction in an embodiment
  • FIG. 12 is a schematic diagram of training of a blood flow feature prediction model in an embodiment
  • Figure 13 is a flowchart of blood flow feature prediction in an embodiment
  • Figure 14 is a schematic diagram of blood vessel straightening in an embodiment
  • Figure 15 is a schematic diagram of blood vessel contour extraction in an embodiment
  • FIG. 16 is a schematic diagram of training of a blood vessel contour extraction model in an embodiment
  • Figure 17 is a flow chart of blood vessel contour extraction in an embodiment
  • Figure 18 is an overall flowchart of a blood flow feature prediction method in an embodiment
  • FIG. 19 is a structural block diagram of a blood flow feature prediction device in an embodiment
  • FIG. 20 is a structural block diagram of an apparatus for predicting blood flow characteristics in an embodiment
  • Fig. 21 is a diagram of the internal structure of a computer device in an embodiment.
  • a method for predicting blood flow characteristics is provided.
  • the method is described as applied to a server for illustration. It is understandable that the method can also be applied to a terminal, or to a system including a terminal and a server, where it is realized through interaction between the terminal and the server.
  • the method includes the following steps:
  • step S102 the centerline of the blood vessel in the three-dimensional blood vessel image is extracted based on the centerline extraction model to generate a blood vessel tree;
  • the centerline extraction model is a deep learning model trained in advance based on the Transformer network structure.
  • the centerline extraction model is a model for extracting the centerline of the blood vessel.
  • the three-dimensional blood vessel image is a three-dimensional image of the blood vessel.
  • the vascular tree is a tree-shaped structure composed of the center lines of the blood vessels in the three-dimensional blood vessel image.
  • Transformer network is an artificial neural network for sequence feature analysis.
  • the Transformer network is formed by connecting multiple encoders and decoders.
  • sequence features are input to the first encoder, and the last decoder outputs the prediction result.
  • the structure of the encoder and decoder is shown in Figure 3.
  • the encoder contains a self-attention module and a feed-forward module, and the decoder contains two self-attention modules and a feed-forward module; the inputs and outputs of both the encoder and the decoder are sequence features.
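  • As an illustrative sketch (not the patent's implementation, which uses learned Q/K/V projections and multiple heads), the scaled dot-product self-attention at the core of each encoder block can be written in plain Python:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(seq):
    """Scaled dot-product self-attention over a sequence of feature
    vectors. Here queries, keys and values are all the raw inputs;
    a real Transformer learns separate Q/K/V projection matrices."""
    d = len(seq[0])
    out = []
    for q in seq:
        # Similarity of this query against every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in seq]
        weights = softmax(scores)
        # Output = attention-weighted sum of the value vectors.
        out.append([sum(w * v[i] for w, v in zip(weights, seq))
                    for i in range(d)])
    return out
```

Because every output position attends over the whole sequence at once, all positions can be computed in parallel, which is the property the description above relies on.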
  • the three-dimensional blood vessel image may be a CT (Computed Tomography) image or an MRI (Magnetic Resonance Imaging) image.
  • the three-dimensional image of blood vessels may be a three-dimensional image of blood vessels in the whole body.
  • the three-dimensional blood vessel image may also be a three-dimensional image taken of at least one of cardiovascular, cerebrovascular, or peripheral blood vessels.
  • the server extracts the center line of the three-dimensional blood vessel image based on the center line extraction model to obtain the center line of the blood vessel in the three-dimensional blood vessel image, and then generates a blood vessel tree based on all the center lines.
  • the server may extract the center point in the three-dimensional blood vessel image based on the center line extraction model, and then generate the center line of the blood vessel in the three-dimensional blood vessel image based on all the center points.
  • the center point is a point on the center line.
  • the server may also correct the extracted center line to obtain a more accurate center line.
  • Step S104: starting from the entrance of the blood vessel in the blood vessel tree, select the center point sequence from the centerline in the blood vessel tree in order from the proximal end to the distal end, input the feature sequence corresponding to each center point in the center point sequence into the blood flow feature prediction model trained on the Transformer network structure, and predict the blood flow features along each center point.
  • the center point sequence contains multiple center points on the center line.
  • the feature sequence contains multiple features needed to predict blood flow features.
  • Blood flow characteristics are important physiological indicators for doctors to assess the health of blood vessels.
  • the blood flow characteristics include at least one of Fractional Flow Reserve (FFR), flow rate, pressure, and shear force.
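  • For context, FFR is conventionally defined as the ratio of distal coronary pressure to aortic pressure under hyperemia. A trivial helper (illustrative only, not part of the patent's claims) is:

```python
def fractional_flow_reserve(p_distal, p_aortic):
    """FFR = Pd / Pa. Values below roughly 0.80 are commonly read as
    hemodynamically significant stenosis (a clinical convention, not
    something stated in this patent)."""
    if p_aortic <= 0:
        raise ValueError("aortic pressure must be positive")
    return p_distal / p_aortic
```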
  • the blood flow feature prediction model processes the feature sequences corresponding to the center points in the input center point sequence in parallel, and uses a self-attention mechanism to predict the blood flow features.
  • specifically, the center point sequence is selected from the centerline in the blood vessel tree, the feature sequence corresponding to each center point in the center point sequence is obtained, and the feature sequences are input into the blood flow feature prediction model trained on the Transformer network structure to predict the blood flow feature along each center point.
  • the center point sequence may be selected from centerline segments of a preset length in the blood vessel tree, and the feature sequence corresponding to the center point sequence on each preset-length centerline segment may then be input into the blood flow feature prediction model.
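  • Splitting an ordered centerline into fixed-length center point sequences can be sketched as follows (the sequence length and overlap are illustrative knobs; the patent does not specify values):

```python
def split_into_sequences(centerline, seq_len, overlap=0):
    """Cut an ordered (proximal-to-distal) list of center points into
    fixed-length sequences for the prediction model. The final
    sequence may be shorter when the centerline length is not a
    multiple of the step size."""
    step = seq_len - overlap
    seqs = []
    for start in range(0, len(centerline), step):
        seq = centerline[start:start + seq_len]
        if seq:
            seqs.append(seq)
        if start + seq_len >= len(centerline):
            break
    return seqs
```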
  • the center line of the blood vessel in the three-dimensional blood vessel image is first extracted based on the center line extraction model to generate a blood vessel tree.
  • the centerline extraction model is a deep learning model trained in advance on the Transformer network structure. Then, starting from the blood vessel entrance and proceeding from the proximal end to the distal end, a center point sequence is selected from the centerline in the blood vessel tree, the feature sequence corresponding to each center point is input into the blood flow feature prediction model trained on the Transformer network structure, and the blood flow feature along each center point is predicted.
  • the blood flow feature prediction model processes the feature sequences corresponding to the center points in the input center point sequence in parallel and uses a self-attention mechanism to predict the blood flow features.
  • the centerline extraction model trained on the Transformer network structure is used to extract the centerline, and the blood flow feature prediction model trained on the Transformer network structure is used to predict the blood flow features. It is therefore not necessary to simulate the flow of blood in the vessel in order to calculate the blood flow features, which reduces the amount of computation and improves the efficiency of blood flow feature prediction.
  • the blood flow feature prediction model trained on the Transformer network structure can process the feature sequences corresponding to the center points in the center point sequence in parallel (unlike a recurrent neural network, whose hidden state depends on the previous hidden state and the input data, so that serial processing is required). This further improves the efficiency of blood flow feature prediction, and avoids the vanishing-gradient problem.
  • in addition, the blood flow feature prediction model trained on the Transformer network structure uses a self-attention mechanism to predict blood flow features and can attend to the correlations between the feature sequences of the center points. Compared with treating the entire sequence uniformly without any focus (for example, when processing with a recurrent neural network), this improves the accuracy of blood flow feature prediction.
  • the solution of the present application considers the spatial sequence relationship, and can therefore predict blood flow characteristics more accurately than processing that separates each point on the blood vessel path and extracts only local features for calculation. Furthermore, starting from the entrance of the blood vessel in the vascular tree, the center point sequence is selected from the centerline in order from the proximal end to the distal end. This selection order is closer to the flow pattern of blood and takes the entire blood vessel into consideration: compared with processing that only considers the spatial sequence relationship of a single section of the vessel path, the spatial sequence relationship of the whole tree allows blood flow characteristics to be predicted more accurately. Moreover, referring to the longer spatial sequence of the entire vascular tree does not lead to a large increase in computational complexity, and thus does not affect the prediction efficiency.
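  • The proximal-to-distal ordering over the whole vascular tree can be sketched as a depth-first walk from the vessel entrance (the adjacency-map data layout here is my assumption, not the patent's):

```python
def proximal_to_distal(tree, root):
    """Order center points from the vessel entrance outward: a
    depth-first walk of the vessel tree, where `tree` maps each
    center point id to the list of its child points. Every parent is
    visited before any of its descendants."""
    order, stack = [], [root]
    while stack:
        node = stack.pop()
        order.append(node)
        # Reverse so the first listed child is visited first.
        stack.extend(reversed(tree.get(node, [])))
    return order
```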
  • the centerline extraction model includes a first network and a second network.
  • the second network is a deep learning network based on the Transformer structure.
  • Step S102 specifically includes the following steps:
  • Step S402: obtain the position feature parameters of the center point at the entrance of the blood vessel in the three-dimensional blood vessel image. The position feature parameters include position coordinates, a direction vector, and a point classification label; the point classification label indicates whether the center point is a branch point, an end point, or neither.
  • the center point is the point on the center line of the blood vessel.
  • the point classification label can be 0, 1, or 2.
  • 0 can indicate that the center point is a non-branch point and not an end point
  • 1 can indicate that the center point is a branch point
  • 2 can indicate that the center point is an end point.
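  • The center point representation described above (position coordinates, direction vector, and the 0/1/2 classification label) can be captured with a small data structure; the names below are illustrative, not from the patent:

```python
from dataclasses import dataclass

@dataclass
class CenterPoint:
    position: tuple   # (x, y, z) position coordinates
    direction: tuple  # direction vector along the centerline
    label: int        # 0 = ordinary point, 1 = branch point, 2 = end point

    def is_branch(self):
        return self.label == 1

    def is_end(self):
        # Centerline tracking along one path stops at an end point.
        return self.label == 2
```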
  • in Fig. 5, the marked point is the center point at the entrance of the blood vessel, and the arrow indicates the direction represented by its direction vector.
  • the position feature parameter of the center point of the entrance of the blood vessel in the three-dimensional blood vessel image can be automatically obtained by a machine learning method.
  • a model for extracting the position feature parameters of the entrance center point can be trained as follows: obtain sample three-dimensional blood vessel images and label data of the position feature parameters of the center point at the entrance of each sample image; the server inputs a sample three-dimensional blood vessel image into the model to be trained (which can be a convolutional neural network, such as ResNet), predicts the position feature parameters of the center point at the blood vessel entrance, and iteratively adjusts the parameters of the model according to the difference between the predicted position feature parameters and the label data, so as to reduce the difference until the model converges, obtaining the final model for extracting the position feature parameters of the entrance center point. The three-dimensional blood vessel image is then input into this model to predict the position feature parameters of the center point at the entrance.
  • Step S404: take the center point at the entrance as the current center point, extract the neighborhood image block of the current center point, input the neighborhood image block into the first network for convolution processing, and output low-dimensional abstract features.
  • the neighborhood image block is a three-dimensional image block of a preset volume extracted from the three-dimensional blood vessel image with the current center point as the center.
  • the preset volume of the neighborhood image block may be 25*25*25 pixels.
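  • Extracting a fixed-size neighborhood block around a center point can be sketched in plain Python (zero-padding at the volume border is my assumption; a real pipeline would slice a NumPy array):

```python
def extract_patch(volume, center, size=25):
    """Extract a size x size x size neighborhood image block centered
    on `center` (z, y, x) from a 3D volume given as nested lists.
    Voxels falling outside the volume are zero-padded."""
    half = size // 2
    cz, cy, cx = center
    dz, dy, dx = len(volume), len(volume[0]), len(volume[0][0])
    patch = []
    for z in range(cz - half, cz + half + 1):
        plane = []
        for y in range(cy - half, cy + half + 1):
            row = []
            for x in range(cx - half, cx + half + 1):
                inside = 0 <= z < dz and 0 <= y < dy and 0 <= x < dx
                row.append(volume[z][y][x] if inside else 0)
            plane.append(row)
        patch.append(plane)
    return patch
```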
  • low-dimensional abstract features are the compact feature representations extracted from the neighborhood image block by the first network.
  • the neighborhood image block may be an image block extracted from the original image of the three-dimensional blood vessel image. In an embodiment, the neighborhood image block may also be an image block after normalization processing.
  • the first network may be composed of multiple convolutional neural networks (CNN), and the weights of each convolutional neural network are shared.
  • the server inputs the neighborhood image block of the current center point into the convolutional neural network corresponding to the current center point, and outputs the low-dimensional abstract features corresponding to the current center point.
  • Step S406: input the low-dimensional abstract features and position feature parameters of the current center point into the second network, predict the position feature parameters of the next center point, take the next center point as the current center point, and return to the step of extracting the neighborhood image block of the current center point to continue execution, until the next center point is an end point, thereby obtaining the position feature parameters of all the center points of the blood vessel in the three-dimensional blood vessel image.
  • all center points are a series of center points on the blood vessel in the three-dimensional blood vessel image, and the intervals between adjacent center points are equal. It can be understood that the server iteratively predicts the position feature parameters of the next center point of the current center point through the centerline extraction model, until the next center point is an end point, finally obtaining the position feature parameters of all the center points of the blood vessel in the three-dimensional blood vessel image.
  • predicting the position coordinates and direction vector in the position feature parameters is a regression task.
  • predicting the point classification label in the position feature parameters is a classification task.
  • the server inputs the low-dimensional abstract features and position feature parameters corresponding to the current center point into the second network, predicts the position feature parameters of the next center point, takes the next center point as the current center point, and returns to the step of extracting the neighborhood image block of the current center point to continue execution, until the next center point is an end point, obtaining the position feature parameters of all the center points of the blood vessel in the three-dimensional blood vessel image.
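  • The control flow of this iterative tracking can be sketched as follows, with `predict_next` standing in for the CNN + Transformer pair (the dict layout and label convention of 2 = end point follow the description above; everything else is illustrative):

```python
def track_centerline(entry_point, predict_next, max_points=10000):
    """Iteratively follow one vessel path: starting from the entrance
    center point, repeatedly ask `predict_next(point)` for the next
    center point until an end point (label 2) is predicted."""
    points = [entry_point]
    current = entry_point
    for _ in range(max_points):   # hard cap as a safety net
        nxt = predict_next(current)
        points.append(nxt)
        if nxt["label"] == 2:     # end point reached, stop tracking
            break
        current = nxt
    return points
```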
  • Fig. 6 is a schematic diagram of extracting the center line of a blood vessel in a three-dimensional blood vessel image based on a center line extraction model.
  • in Fig. 6, the leftmost point on the blood vessel is the center point at the entrance of the blood vessel, and the position feature parameters of that center point (position coordinates, direction vector, and point classification label) are obtained.
  • the neighborhood image block of the entrance center point is extracted and input into the first network (CNN) for convolution processing, which outputs low-dimensional abstract features; the low-dimensional abstract features and position feature parameters are then input into the second network (Transformer) to predict the position feature parameters of the next center point. The neighborhood image block of the next center point is then extracted and input into the first network (CNN) for convolution processing to output low-dimensional abstract features, which are input together with the position feature parameters into the second network (Transformer) to predict the position feature parameters of the following center point, and so on, until the next center point is an end point, yielding the position feature parameters of all the center points of the blood vessel in the three-dimensional blood vessel image.
  • when the current center point is at a blood vessel branch, the region where the current center point is located can be grown to obtain multiple connected regions, and the centroid of each connected region is used as a next center point (that is, the position coordinates of the next center points are obtained); the direction represented by the direction vector of each next center point is the direction from the current center point to that next center point. It can be understood that if the current center point corresponds to several branch vessels, several connected regions are obtained.
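  • The two geometric steps described above, taking each connected region's centroid as a next center point and deriving its direction vector, can be sketched as (voxel-list input is my assumption):

```python
import math

def region_centroid(voxels):
    """Centroid of one connected region, given its voxel coordinates
    as (x, y, z) tuples; at a bifurcation each region's centroid
    becomes one of the next center points."""
    n = len(voxels)
    return tuple(sum(v[i] for v in voxels) / n for i in range(3))

def step_direction(current, nxt):
    """Unit direction vector from the current center point toward the
    next center point."""
    d = [b - a for a, b in zip(current, nxt)]
    norm = math.sqrt(sum(x * x for x in d)) or 1.0
    return tuple(x / norm for x in d)
```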
  • the position coordinates and direction vector of the next center point can also be predicted by a deep learning method.
  • alternatively, a branch point processing model can be trained: image blocks at blood vessel branches are input into the branch point processing model to be trained (which can be a convolutional neural network, such as ResNet) to obtain the predicted direction vectors of multiple branches, and the parameters of the model are iteratively adjusted according to the difference between the predicted direction vectors and the direction vector label data, so as to reduce the difference until the model converges, obtaining the branch point processing model. The neighborhood image block corresponding to the current center point is then input into the branch point processing model to obtain the direction vectors of the multiple branches (the direction vectors of the next center points), and the current center point is moved a certain distance along the direction vector of each branch to obtain multiple next center points after the branch (that is, the position coordinates of the next center points are obtained).
  • step S408 the center line of the blood vessel in the three-dimensional blood vessel image is generated according to all the center points represented by all the position feature parameters, and the blood vessel tree is obtained.
  • the server may perform spline interpolation processing (for example, cubic spline interpolation) on all center points represented by all position feature parameters to generate the center line of the blood vessel in the three-dimensional blood vessel image.
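  • As an illustrative stand-in for the cubic spline interpolation mentioned above (a production pipeline would more likely use e.g. scipy.interpolate.CubicSpline), a Catmull-Rom segment passes smoothly through consecutive center points:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate one Catmull-Rom cubic spline segment between control
    points p1 and p2 at parameter t in [0, 1]; p0 and p3 are the
    neighboring center points that shape the tangents. Points are
    tuples of coordinates."""
    return tuple(
        0.5 * ((2 * b) + (-a + c) * t
               + (2 * a - 5 * b + 4 * c - d) * t * t
               + (-a + 3 * b - 3 * c + d) * t ** 3)
        for a, b, c, d in zip(p0, p1, p2, p3))
```

Evaluating each segment at several values of t between consecutive center points yields a smooth, densely sampled centerline.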
  • in the above embodiment, the position feature parameters of the center point at the entrance of the blood vessel in the three-dimensional blood vessel image are acquired, the center point at the entrance is taken as the current center point, the neighborhood image block of the current center point is extracted, and the neighborhood image block and position feature parameters of the current center point are input into the centerline extraction model to predict the position feature parameters of the next center point; taking the next center point as the current center point, the position feature parameters of all the center points of the blood vessel in the three-dimensional blood vessel image are predicted iteratively, the centerline of the blood vessel in the three-dimensional blood vessel image is generated, and the blood vessel tree is obtained.
  • the centerline extraction model based on Transformer network structure training is used to extract the centerline, which reduces the amount of calculation and improves the efficiency of centerline extraction.
  • the training step of the centerline extraction model specifically includes the following steps: obtain sample data, where the sample data includes a sample image block sequence composed of the sample neighborhood image blocks of consecutive sample center points, together with the real position feature parameters corresponding to the sample center points; the consecutive sample center points are consecutive center points selected from the blood vessel centerline in a sample three-dimensional blood vessel image. In each round of training, iteratively input the sample neighborhood image block of the current sample center point in the sample image block sequence into the first network of this round to extract low-dimensional abstract features, input the extracted low-dimensional abstract features and the predicted position feature parameters of the current sample center point into the second network of this round, and output the predicted position feature parameters of the next sample center point; update the model parameters of the first network and the second network of the current round according to the difference between the predicted position feature parameters corresponding to each sample center point and the real position feature parameters, until the training stop condition is reached, to obtain the centerline extraction model. The centerline extraction model includes the first network and the second network at the time training is stopped.
  • the sample center point is the point on the center line of the blood vessel in the sample three-dimensional blood vessel image as the training sample.
  • the consecutive sample center points are consecutive center points selected from the blood vessel centerline in the sample three-dimensional blood vessel image, and the intervals between adjacent center points are equal.
  • the sample neighborhood image block is a three-dimensional image block of a preset volume extracted from the sample three-dimensional blood vessel image as a training sample with the sample center point as the center.
  • the preset volume of the sample neighborhood image block can be 25*25*25 pixels.
  • a sample image block sequence is a series of sample neighborhood image blocks, that is, it contains multiple sample neighborhood image blocks.
  • the true location feature parameter is the location feature parameter corresponding to the center point of each sample in the sample data, which is known and used as a reference in the training process of the centerline extraction model.
  • the server iteratively inputs the sample neighborhood image block at the center point of the current sample in the sample image block sequence into the first network of this round to extract low-dimensional abstract features, and then extracts the extracted low-dimensional abstract features
  • the predicted location feature parameters of the center point of the current sample are input into the second network of this round, and the predicted location feature parameters of the center point of the next sample are output, so that the predicted location feature parameters corresponding to the center point of each sample in this round of training can be finally obtained
• the model parameters of the first network and the second network of this round are updated to reduce the difference, until the training stop condition is reached and the centerline extraction model is obtained.
• the predicted location feature parameters of the current sample center point are obtained by inputting the low-dimensional abstract features of the previous sample center point and the predicted location feature parameters of the previous sample center point into the second network of this round of training.
• when the current sample center point is the sample center point at the blood vessel entrance (i.e., the first sample center point), the true position feature parameter of the sample center point at the blood vessel entrance is used as input.
• FIG. 7 is a schematic diagram of the training of the centerline extraction model.
• the neighborhood image block of the first sample center point is input into the first network (a convolutional neural network) to extract low-dimensional abstract features, and the extracted low-dimensional abstract features together with the real location feature parameters of the first sample center point are input into the second network (a Transformer network) of this round, which outputs the predicted location feature parameters of the second sample center point; then the neighborhood image block of the second sample center point is input into the first network, and the extracted low-dimensional abstract features together with the predicted location feature parameters of the second sample center point are input into the second network, which outputs the predicted location feature parameters of the third sample center point, and so on.
• according to the difference between the predicted location feature parameter and the real location feature parameter corresponding to each sample center point, the model parameters of the first network and the second network of this round are updated to reduce the difference, and the next round of training starts, until the training stop condition is reached and the centerline extraction model is obtained.
  • the training stop condition may be model convergence.
  • the center line extraction model is obtained through iterative training of sample data, which can extract the center line more accurately.
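The iterative prediction described above — the first network extracts low-dimensional features from the current neighborhood block, the second network combines them with the current location parameters to predict the next center point — can be sketched as the following rollout loop. Both networks are replaced by trivial stubs here (the patent's trained CNN and Transformer weights are not given), so this only illustrates the control flow, not a working tracker:

```python
import numpy as np

# Stand-in networks: in the patent these are a trained CNN (first network)
# and a Transformer (second network); the stubs below exist only to show
# how predicted center-point locations are rolled out one by one.
def first_network(block):                       # CNN stub: low-dim feature
    return np.array([block.mean()])

def second_network(feature, current_location):  # Transformer stub: next point
    step = np.array([1.0, 0.0, 0.0])            # pretend predicted direction
    return current_location + step

def trace_centerline(volume, entry_point, n_points, block_fn):
    """Roll out predicted center points starting from the vessel entrance,
    whose true location is given (the first sample center point)."""
    points = [np.asarray(entry_point, dtype=float)]
    for _ in range(n_points - 1):
        feat = first_network(block_fn(volume, points[-1]))
        points.append(second_network(feat, points[-1]))
    return np.stack(points)

volume = np.zeros((64, 64, 64))
block_fn = lambda vol, p: vol[:5, :5, :5]       # toy neighborhood extractor
pts = trace_centerline(volume, (10.0, 32.0, 32.0), n_points=6, block_fn=block_fn)
```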
• step S102 further includes the following steps: starting from the blood vessel entrance in the blood vessel tree, in order from the proximal end to the distal end, sequentially selecting the center points to be corrected on the center line of the blood vessel tree; for each center point to be corrected, acquiring a preset number of images centered on the center point to be corrected and perpendicular to the center line to obtain a first image sequence; inputting the first image sequence into the centerline correction model to obtain the position offset corresponding to the center point to be corrected, the centerline correction model being a deep learning model trained in advance based on the Transformer network structure; correcting the position of the center point to be corrected according to the position offset to obtain the corrected center point; and generating a corrected center line from all the corrected center points to obtain the final vascular tree.
  • the centerline correction model is used to correct the position of the centerline to make the position of the centerline more accurate.
  • the position offset is the position offset between the center point to be corrected and the center of the blood vessel on the cross section of the blood vessel.
  • Position correction is to adjust the position of the center point to be corrected.
• the center points located on the center line of the blood vessel tree are sequentially selected as the center points to be corrected.
  • a preset number of images centered on the center point to be corrected and perpendicular to the center line are obtained to obtain a first image sequence. It can be understood that the first image sequence contains multiple images.
  • the server inputs the first image sequence into the centerline correction model to obtain the position offset corresponding to the center point to be corrected.
  • the server adjusts the position of each center point to be corrected according to the position offset of each center point to be corrected to obtain the corrected center point. Based on all the corrected center points, a corrected center line is generated to obtain the final vascular tree.
  • the position offset may include offsets in two directions (ie, the x direction and the y direction).
  • the centerline correction model is composed of multiple convolutional neural networks and one Transformer network, and the weights of each convolutional neural network are shared.
• FIG. 8 is a schematic diagram of centerline correction through the centerline correction model.
  • a preset number of images centered on the center point to be corrected and perpendicular to the centerline are acquired to obtain the first image sequence.
• each image in the first image sequence is input into a convolutional neural network to extract low-dimensional abstract features, then all the low-dimensional abstract features are input into the Transformer network, which predicts the position offset of the center point to be corrected.
  • the position offset includes the offsets dx and dy in two directions.
  • the server may perform spline interpolation processing (for example, cubic spline interpolation) on all the corrected center points to obtain the corrected center line.
• centerline correction is performed based on the centerline correction model trained in advance on the Transformer network structure to obtain the final blood vessel tree, which yields a more accurate blood vessel tree.
  • the self-attention mechanism of the Transformer network structure can learn the correlation in the image sequence, so as to obtain a more accurate position offset, and then a more accurate blood vessel tree.
  • the use of a centerline correction model trained based on the Transformer network structure can perform parallel processing on image sequences and improve processing efficiency.
• the step of inputting the first image sequence into the centerline correction model to obtain the position offset corresponding to the center point to be corrected includes the following steps: inputting the first image sequence into the centerline correction model, and predicting the position offset corresponding to the image at the center position in the first image sequence as the position offset corresponding to the center point to be corrected.
  • the server inputs the first image sequence corresponding to the center point to be corrected into the centerline correction model, predicts the position offset corresponding to the image in the center position in the first image sequence, and uses the position offset as the The position offset corresponding to the center point to be corrected.
• for example, assuming that N images centered on the center point to be corrected and perpendicular to the centerline are taken as the first image sequence, the centerline correction model predicts the position offset of the N/2-th image and uses that position offset as the position offset corresponding to the center point to be corrected.
  • N can be an odd number or an even number. When N is an odd number, the value of N/2 is rounded.
  • the position offset corresponding to the image in the center position in the first image sequence is predicted as the position offset corresponding to the center point to be corrected .
  • the first image sequence can be processed in parallel through the centerline correction model based on the Transformer network structure, and the position offset of the center point can be quickly obtained, which improves the efficiency of centerline correction.
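The correction step can be sketched as follows, assuming for simplicity that the predicted offset (dx, dy) lies in the x-y plane rather than in the true cross-sectional plane; `central_image_index` mirrors the N/2 (rounded) rule for picking the image whose offset is predicted. Function names and the plane simplification are illustrative assumptions:

```python
import numpy as np

def central_image_index(n):
    """Index of the image at the center of a sequence of n cross-sections
    (the model predicts the offset for the N/2-th image; N/2 is rounded
    when N is odd)."""
    return n // 2

def correct_center_points(points, offsets):
    """Shift each center point by its predicted in-plane offset (dx, dy).
    Here the cross-section plane is taken as the x-y plane for simplicity;
    in practice the offset is applied within the plane perpendicular to
    the centerline at that point."""
    corrected = np.asarray(points, dtype=float).copy()
    corrected[:, :2] += np.asarray(offsets, dtype=float)
    return corrected

points = np.array([[10.0, 10.0, 0.0], [10.0, 10.0, 1.0], [10.0, 10.0, 2.0]])
offsets = np.array([[0.5, -0.2], [0.0, 0.1], [-0.3, 0.0]])  # (dx, dy) per point
corrected = correct_center_points(points, offsets)
```

The corrected points would then be passed through spline interpolation (e.g., a cubic spline) to produce the corrected center line.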
• the training step of the centerline correction model specifically includes the following steps: acquiring the first sample image sequence corresponding to each sample center point on the blood vessel centerline in the sample three-dimensional blood vessel image, and the real position offset corresponding to each sample center point; in each round of iterative training, inputting the first sample image sequence into the centerline correction model to be trained to obtain the predicted position offset, the centerline correction model to be trained including the Transformer network structure; and, according to the difference between the predicted position offset and the real position offset, iteratively adjusting the centerline correction model to be trained until the model converges, obtaining the final centerline correction model.
• the first sample image sequence is a preset number of images extracted from the sample three-dimensional blood vessel image, centered on the sample center point and perpendicular to the blood vessel center line in the sample three-dimensional blood vessel image. It can be understood that the first sample image sequence contains multiple two-dimensional images.
  • the actual position offset is the position offset used as a reference during the training process of the centerline correction model.
  • the centerline correction model to be trained is composed of multiple convolutional neural networks and one Transformer network, and the weights of each convolutional neural network are shared.
• Figure 9 is a schematic diagram of the training of the centerline correction model.
• the server inputs each image in the first sample image sequence into each convolutional neural network to extract low-dimensional abstract features, then inputs all the low-dimensional abstract features corresponding to each image into the Transformer network to predict the position offsets dx and dy of the image at the center position of the first sample image sequence. According to the difference between the predicted position offset and the real position offset, the parameters of the convolutional neural networks and the Transformer network are adjusted iteratively until the model converges, and the final centerline correction model is obtained.
• the centerline correction model includes the convolutional neural networks and the Transformer network at the time the model converges.
  • Fig. 10 is a flowchart of centerline extraction and vascular tree generation in an embodiment.
  • the centerline correction model is obtained through iterative training, so that a more accurate centerline can be obtained.
• step S104 specifically includes the following steps: starting from the blood vessel entrance of the blood vessel tree, in order from the proximal end to the distal end, sequentially selecting points on center-line segments of a preset length from the blood vessel tree to obtain center point sequences; obtaining the feature sequence corresponding to each center point sequence; and inputting the feature sequence corresponding to each center point sequence into the blood flow feature prediction model trained based on the Transformer network structure to predict the blood flow feature along the blood vessel at each center point in the center point sequence.
• selecting points on center-line segments of the preset length from the vascular tree in turn to obtain the center point sequences divides the vascular tree into multiple segments of the preset length, and a center point sequence is selected within each segment.
• the feature sequence includes at least one known feature.
• the blood flow feature includes at least one blood flow feature.
• depending on the blood flow features to be predicted, the corresponding feature sequence composed of the required known features is input into the blood flow feature prediction model.
  • the blood flow feature prediction model is composed of multiple convolutional neural networks (CNN) or fully connected networks (FC) and a Transformer network, and each convolutional neural network or each fully connected network shares weights.
• FIG. 11 is a schematic diagram of blood flow feature prediction.
  • the server inputs the feature sequence corresponding to each central point in the central point sequence into a convolutional neural network (CNN) or a fully connected network (FC).
• the convolutional neural network or fully connected network extracts low-dimensional abstract features, and then all the low-dimensional abstract features corresponding to each center point are input into the Transformer network, which predicts the blood flow characteristics along the blood vessel at each center point in the center point sequence.
• the blood flow feature prediction model trained based on the Transformer network structure is used to predict the blood flow features, without requiring a large amount of computing power for simulation, which reduces the amount of calculation and thereby improves the efficiency of blood flow feature prediction.
  • the blood flow feature prediction model trained based on the Transformer network structure can perform parallel processing on the feature sequence corresponding to each center point in the center point sequence, which also improves the efficiency of blood flow feature prediction.
• the blood flow feature prediction model trained based on the Transformer network structure uses a self-attention mechanism to predict blood flow features, and can capture the correlation between the feature sequences of the center points, so that blood flow features can be predicted more accurately. Starting from the vascular entrance in the vascular tree, the center point sequence is selected from the center line in the vascular tree in the order from the proximal end to the distal end; this selection method follows the flow of blood, allowing blood flow features to be predicted more accurately.
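The core of the prediction step — every center point's feature vector attending to all others through self-attention — can be illustrated with a single-head numpy sketch. The random weights stand in for a trained model, so the output values are meaningless; the point is the sequence-to-sequence shape and the row-normalized attention:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention: every center point attends to every other
    point in the sequence, which is what lets the model use correlations
    between the feature sequences of the center points."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

n_points, feat_dim, d = 8, 16, 4
X = rng.normal(size=(n_points, feat_dim))       # one feature vector per center point
Wq, Wk, Wv = (rng.normal(size=(feat_dim, d)) for _ in range(3))
Wout = rng.normal(size=(d, 1))                  # regression head: one value per point
flow = self_attention(X, Wq, Wk, Wv) @ Wout     # predicted blood flow value per point
```

Because the attention matrix is computed for all points at once, the whole sequence is processed in parallel rather than point by point.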
  • the step of training the blood flow feature prediction model specifically includes the following steps: acquiring the sample feature sequence of each center point on the center line of the vascular tree in the sample three-dimensional vascular image and the sample blood flow characteristics corresponding to each center point;
• in each round of iterative training, the sample feature sequence of each center point on a center line of the preset length is input into the blood flow feature prediction model to be trained to obtain the predicted blood flow feature of each center point along the blood vessel on that center line; according to the difference between the sample blood flow feature and the predicted blood flow feature, the blood flow feature prediction model to be trained is iteratively adjusted until the model converges, and the final blood flow feature prediction model is obtained.
  • the blood flow characteristic of the sample is a known blood flow characteristic that is used as a reference in the training process of the blood flow characteristic prediction model.
  • the blood flow feature prediction model to be trained is composed of multiple convolutional neural networks (CNN) or fully connected networks (FC) and one Transformer network, and the convolutional neural networks or fully connected networks share weights.
• FIG. 12 is a schematic diagram of the training of the blood flow feature prediction model.
• the sample feature sequence of each center point on the center line of the preset length is input into the convolutional neural network (CNN) or the fully connected network (FC) to extract low-dimensional abstract features, and then all the low-dimensional abstract features corresponding to each center point are input into the Transformer network to predict the blood flow characteristics of each center point along the blood vessel on the center line of the preset length.
• the final blood flow feature prediction model includes the multiple convolutional neural networks or fully connected networks and the Transformer network at the time the model converges.
• FIG. 13 is a flowchart of blood flow feature prediction in an embodiment.
• the feature sequence includes at least one of: the original 3D image block corresponding to the center point in the center point sequence, the 3D segmented image block, the diameter of the blood vessel, the distance to the blood vessel entrance, the distance to the nearest upstream bifurcation, and the total length of upstream plaque.
• the blood flow feature includes at least one of blood flow reserve fraction, flow rate, pressure, and shear force.
  • the original 3D image block is the 3D image block corresponding to the center point in the 3D blood vessel image.
  • the three-dimensional segmented image block is a three-dimensional image block corresponding to the center point in the three-dimensional segmented image of the three-dimensional blood vessel image.
  • Upstream is the upstream of the center point.
  • Downstream is the downstream of the center point.
• the blood flow reserve fraction is, in the presence of coronary artery stenosis, the ratio of the maximum blood flow obtainable by the myocardial area supplied by the coronary artery to the maximum blood flow the same myocardial area could obtain under normal conditions.
• the original 3D image block, the 3D segmented image block, the diameter of the blood vessel, the distance to the blood vessel entrance, the distance to the nearest upstream bifurcation, the total length of upstream plaque, the average length of upstream plaque, the average area of the upstream path, the maximum area of the upstream path, the minimum area of the upstream path, the total length of downstream plaque, the average length of downstream plaque, the average area of the downstream path, the maximum area of the downstream path and the minimum area of the downstream path are all features corresponding to each individual center point.
• atrial volume, myocardial volume, omics characteristics and blood vessel area are global features; their values are the same for every center point.
  • the input feature sequence is determined according to which blood flow features need to be predicted.
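Assembling the per-point features with the global features (which are identical for every center point) might look like the following sketch; the concrete feature values, names, and column ordering are illustrative assumptions only:

```python
import numpy as np

def build_feature_sequence(per_point, global_feats):
    """Concatenate each center point's own features (diameter, distance to
    entrance, plaque lengths, ...) with the global features (atrial volume,
    myocardial volume, ...), which are broadcast identically to every point."""
    per_point = np.asarray(per_point, dtype=float)
    g = np.tile(np.asarray(global_feats, dtype=float), (per_point.shape[0], 1))
    return np.concatenate([per_point, g], axis=1)

per_point = np.array([[2.8, 14.0, 0.0],    # diameter, distance-to-entrance, upstream plaque length
                      [2.5, 19.0, 3.0],
                      [2.1, 24.0, 5.5]])
global_feats = np.array([130.0, 95.0])     # e.g. atrial volume, myocardial volume (illustrative)
features = build_feature_sequence(per_point, global_feats)
```

Each row of `features` is the feature vector for one center point, ready to be fed to the prediction model.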
  • the blood flow reserve score in the sample blood flow feature may be obtained through CFD (Computational Fluid Dynamics) simulation, or it may be clinical data .
• the server can extract the three-dimensional blood vessel contour in the three-dimensional blood vessel image containing the blood vessel tree through the deep learning model. It can be understood that the three-dimensional blood vessel contour is equivalent to a three-dimensional segmented image of the blood vessel, from which the three-dimensional segmented image block corresponding to each center point can be obtained.
  • the blood flow feature prediction model can predict a variety of blood flow features, which can provide help for the diagnosis and treatment of vascular diseases.
  • the three-dimensional segmented image block is an image block in the pixel space corresponding to the center point in the center point sequence in the three-dimensional blood vessel contour.
• before the step of selecting the center point sequence from the center line in the blood vessel tree, starting from the blood vessel entrance in the blood vessel tree in the order from the proximal end to the distal end, the method further includes the following steps: performing blood vessel contour extraction on the three-dimensional blood vessel image containing the blood vessel tree to obtain the blood vessel contour, and performing lofting and interpolation on the blood vessel contour to generate a three-dimensional blood vessel model; the three-dimensional blood vessel model is a three-dimensional segmented image.
  • the server may perform blood vessel contour extraction processing on the three-dimensional blood vessel image containing the blood vessel tree through the deep learning model to obtain the blood vessel contour, and perform lofting and interpolation on the blood vessel contour to generate a three-dimensional blood vessel model, which is a three-dimensional segmented image.
  • the image block corresponding to each center point is extracted from the three-dimensional blood vessel model as a three-dimensional segmented image block.
• the three-dimensional blood vessel model is in a continuous space, so the three-dimensional blood vessel model needs to be converted to the pixel space first, and the image blocks corresponding to each center point are then extracted from the three-dimensional blood vessel model in the pixel space as the three-dimensional segmented image blocks.
  • the three-dimensional segmented image block is an image block corresponding to the center point in the center point sequence in the three-dimensional blood vessel model.
  • a three-dimensional blood vessel model can be used to simulate the flow of blood in the blood vessel, so as to calculate the blood flow reserve fraction.
  • the blood flow reserve score obtained in this embodiment can be used as part or all of the sample blood flow features in the sample blood flow feature sequence.
  • the width, length, plaque and other information of the blood vessel can be known through the three-dimensional blood vessel model. Therefore, some or all of the sample features in the sample feature sequence can be calculated according to the three-dimensional blood vessel model. Specifically, the total length of the upstream plaque, the average length of the upstream plaque, the average area of the upstream path, the maximum area of the upstream path, the minimum area of the upstream path, the total length of the downstream plaque, the average length of the downstream plaque, and the downstream path can be calculated according to the three-dimensional blood vessel model. Average area, maximum area of downstream path and minimum area of downstream path. In the training of the blood flow feature prediction model, the data obtained in this embodiment can be used as the sample feature in the sample feature sequence.
  • the three-dimensional blood vessel model can be rendered visually, so that the doctor can view the condition of the blood vessel, such as the stenosis of the blood vessel.
• the step of extracting the blood vessel contour from the three-dimensional blood vessel image including the blood vessel tree specifically includes the following steps: straightening the blood vessels in the three-dimensional blood vessel image containing the blood vessel tree along the center line of the blood vessel tree to obtain a blood vessel straightened image; extracting, from the blood vessel straightened image, a preset number of blood vessel cross-sectional images adjacent to each center point on the center line to obtain a second image sequence corresponding to each center point; inputting the second image sequence corresponding to each center point into the blood vessel contour extraction model to obtain the contour distance corresponding to each center point, the contour distance indicating the distance from the blood vessel contour corresponding to the center point to the center point; and generating the blood vessel contour according to the contour distance corresponding to each center point.
  • the blood vessel cross-sectional image is a two-dimensional image perpendicular to the center line. It can be understood that extracting a preset number of blood vessel cross-sectional images adjacent to each center point on the center line is to extract a preset number of two-dimensional images centered on each center point on the center line and perpendicular to the center line.
  • the second image sequence corresponding to each center point contains multiple blood vessel cross-sectional images.
  • the contour distance is used to indicate the distance from the blood vessel contour corresponding to the center point to the center point. It can be understood that the contour distance includes the distance from each point on the blood vessel contour to the center point.
• FIG. 14 is a schematic diagram of blood vessel straightening.
  • the left image is the blood vessel before straightening.
  • the entire blood vessel in the three-dimensional blood vessel image including the blood vessel tree may also be straightened to obtain an entire blood vessel straightened image.
  • the blood vessel cross-sectional image may be an original cross-sectional image extracted from a three-dimensional blood vessel image.
  • the blood vessel cross-sectional image may also be a blood vessel contour map model of the original cross-sectional image, and the blood vessel contour map model is composed of nodes and edges.
• the blood vessel cross-sectional image may also be a polar coordinate transformed image of the original cross-sectional image; that is, for the original cross-sectional image, line segments of the same length are radiated from the center point in a clockwise direction toward each angle, and all the line segments are rearranged in order to generate a two-dimensional image.
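The polar transform described here can be sketched as follows: rays are cast from the center point at evenly spaced angles and the sampled intensity profiles are stacked row by row. Nearest-neighbor sampling and the ray length are assumptions, since the text does not fix them:

```python
import numpy as np

def polar_transform(image, center, n_angles=64, n_radii=32):
    """Resample a cross-sectional image into polar form: cast rays from the
    center point at evenly spaced angles and stack the sampled intensity
    profiles as the rows of a 2D image (nearest-neighbor sampling)."""
    h, w = image.shape
    out = np.zeros((n_angles, n_radii), dtype=image.dtype)
    for i, theta in enumerate(np.linspace(0, 2 * np.pi, n_angles, endpoint=False)):
        for j, r in enumerate(np.linspace(0, min(h, w) / 2 - 1, n_radii)):
            y = int(round(center[0] + r * np.sin(theta)))
            x = int(round(center[1] + r * np.cos(theta)))
            if 0 <= y < h and 0 <= x < w:
                out[i, j] = image[y, x]
    return out

# Toy cross-section: bright disc of radius 8 around the image center.
yy, xx = np.mgrid[:32, :32]
image = ((yy - 16) ** 2 + (xx - 16) ** 2 <= 64).astype(float)
polar = polar_transform(image, center=(16, 16))
```

In the polar image, a circular vessel wall becomes a roughly straight vertical edge, which simplifies contour-distance regression.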
  • the blood vessel contour extraction model includes multiple convolutional neural networks and one Transformer network, and weights are shared among the multiple convolutional neural networks.
• Figure 15 is a schematic diagram of blood vessel contour extraction.
• the server inputs each image in the second image sequence corresponding to the center point into each convolutional neural network (CNN) to extract low-dimensional abstract features, then inputs all the low-dimensional abstract features into the Transformer network, which predicts the contour distance y corresponding to the center point; the blood vessel contour is generated according to the contour distance corresponding to each center point.
• the blood vessel contour extraction model predicts the contour distance for the N/2-th image, where N is an integer; when N is odd, the value of N/2 is rounded.
  • the contour of the blood vessel corresponding to each center point can be obtained according to the contour distance corresponding to each center point.
• the blood vessel contour extraction process is performed on the three-dimensional blood vessel image including the blood vessel tree through the blood vessel contour extraction model to obtain the blood vessel contour, from which the three-dimensional segmented image block corresponding to each center point can be obtained, so that the three-dimensional segmented image block can be input into the blood flow feature prediction model as part of the feature sequence to predict blood flow features.
• the method further includes the following steps: determining the blood vessel contour corresponding to each center point, taking the centroid of the corresponding blood vessel contour as the new center point corresponding to each center point, and generating a new vascular tree from the new center points.
• in step S104, the step of selecting the center point sequence from the center line in the vascular tree, starting from the blood vessel entrance in the vascular tree in the order from the proximal end to the distal end, includes the following step: starting from the blood vessel entrance in the new vascular tree, selecting the center point sequence from the center line in the new vascular tree in the order from the proximal end to the distal end.
  • the server can determine the blood vessel contour corresponding to each center point according to the contour distance corresponding to each center point, and take the centroid of the corresponding blood vessel contour as the new center point corresponding to each center point.
  • the new center point is subjected to spline interpolation processing to obtain a new center line, thereby generating a new blood vessel tree.
  • the centroid of the blood vessel contour corresponding to each center point is used as the new center point to generate a new blood vessel tree.
• in this way, the blood vessel tree can be corrected to obtain a more accurate blood vessel tree, and blood flow feature prediction is then performed on the more accurate blood vessel tree, improving the accuracy of blood flow feature prediction.
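Taking the centroid of the extracted contour as the new center point can be sketched as below; the mean of evenly sampled contour points is used as a simple centroid estimate (a polygon-area centroid would be an alternative), which is an implementation assumption:

```python
import numpy as np

def contour_centroid(contour_points):
    """New center point = centroid of the blood vessel contour; for a densely
    and evenly sampled contour, the mean of the contour points is a simple
    centroid estimate."""
    return np.asarray(contour_points, dtype=float).mean(axis=0)

# Contour sampled around an off-center vessel: circle of radius 3 around (5, 2).
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
contour = np.stack([5 + 3 * np.cos(theta), 2 + 3 * np.sin(theta)], axis=1)
new_center = contour_centroid(contour)
```

For the circular contour above, the centroid recovers the true vessel center (5, 2) regardless of where the original center point sat.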
• the training step of the blood vessel contour extraction model specifically includes the following steps: performing blood vessel straightening processing on the sample three-dimensional blood vessel image to obtain a sample blood vessel straightened image; for each sample center point on the center line of the sample blood vessel straightened image, taking multiple sample blood vessel cross-sectional images adjacent to the sample center point to obtain multiple second sample image sequences; obtaining the true contour distance annotated for each sample blood vessel cross-sectional image, the true contour distance representing the distance from each point on the blood vessel contour in the sample blood vessel cross-sectional image to the center point corresponding to that image; inputting the second sample image sequence into the blood vessel contour extraction model to be trained to obtain the predicted contour distance corresponding to the image of the second sample image sequence; and, according to the difference between the predicted contour distance and the true contour distance, iteratively adjusting the blood vessel contour extraction model to be trained until the model converges, obtaining the blood vessel contour extraction model.
  • the true contour distance is a known contour distance used as a reference in the process of training the blood vessel contour extraction model.
  • the blood vessel contour extraction model to be trained includes multiple convolutional neural networks and one Transformer network, and weights are shared among the multiple convolutional neural networks.
• Figure 16 is a schematic diagram of the training of the blood vessel contour extraction model.
• the second sample image sequence corresponding to the sample center point in the sample three-dimensional blood vessel image is input into each convolutional neural network (CNN) to extract low-dimensional abstract features, and then all the low-dimensional abstract features are input into the Transformer network to obtain the predicted contour distance y corresponding to the image of the second sample image sequence.
  • the contour distance is not a single value, but an array including multiple values.
• the contour distance can be an array of 64 values; that is, the 360° contour is divided into 64 equal angular intervals, and each yi in the array is the distance from the center point to the contour at the corresponding angle.
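Reconstructing contour points from such a distance array is straightforward: each of the 64 values is paired with its angle and converted to a point around the center. A sketch, assuming angles measured counterclockwise from the x-axis (the text does not fix the angular convention):

```python
import numpy as np

def contour_from_distances(center, distances):
    """Rebuild contour points from a contour-distance array: the 360° circle
    is split into len(distances) equal angular intervals, and each value is
    the distance from the center point to the contour at that angle."""
    center = np.asarray(center, dtype=float)
    d = np.asarray(distances, dtype=float)
    angles = np.linspace(0, 2 * np.pi, len(d), endpoint=False)
    return center + np.stack([d * np.cos(angles), d * np.sin(angles)], axis=1)

distances = np.full(64, 2.5)            # a perfectly circular vessel of radius 2.5
contour = contour_from_distances((10.0, 10.0), distances)
```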
  • FIG. 17 is a flowchart of blood vessel contour extraction in an embodiment.
  • the blood vessels in the three-dimensional blood vessel image containing the blood vessel tree are straightened, the blood vessel contour extraction model based on the Transformer network structure is trained, the blood vessel contour corresponding to each center point is generated based on the blood vessel contour extraction model, and the new center point is then obtained according to the blood vessel contour.
  • in this way, the vessel contour can be extracted more accurately.
  • Fig. 18 is an overall flowchart of a blood flow feature prediction method in an embodiment.
  • the centerline of the three-dimensional blood vessel image is extracted, and the blood vessel tree is generated.
  • based on the vessel contour extraction model, the vessel contour corresponding to each center point is obtained, so as to obtain the three-dimensional segmented image block corresponding to each center point.
  • the blood flow feature prediction model predicts the blood flow features along each center point.
  • the three-dimensional segmented image block obtained based on the contour of the blood vessel can be used as the input feature sequence of the blood flow feature prediction model.
  • a blood flow feature prediction device 1900 which includes: a blood vessel tree generation module 1902 and a blood flow feature prediction module 1904, wherein:
  • the blood vessel tree generation module 1902 is used to extract the center line of the blood vessel in the three-dimensional blood vessel image based on the center line extraction model to generate a blood vessel tree; the center line extraction model is a deep learning model trained in advance based on the Transformer network structure.
  • the blood flow feature prediction module 1904 is used to select the center point sequence from the centerlines in the blood vessel tree, starting from the blood vessel entrance in the blood vessel tree and proceeding from the proximal end to the distal end, and to input the feature sequence corresponding to each center point in the center point sequence into the blood flow feature prediction model trained based on the Transformer network structure, predicting the blood flow features along each center point.
  • the blood flow feature prediction model is used for parallel processing of the feature sequence corresponding to each center point in the input center point sequence, and the self-attention mechanism is used to predict the blood flow feature.
  • the centerline extraction model includes a first network and a second network.
  • the second network is a deep learning network based on the Transformer structure.
  • the blood vessel tree generation module 1902 is also used to obtain the position feature parameters of the center point at the entrance of the blood vessel in the three-dimensional blood vessel image; the position feature parameters include position coordinates, a direction vector, and a point classification label, where the point classification label is used to indicate whether the center point is a branch point, an end point, or neither; take the center point at the entrance as the current center point, extract the neighborhood image block of the current center point, input the neighborhood image block into the first network for convolution processing, and output low-dimensional abstract features; input the low-dimensional abstract features and position feature parameters of the current center point into the second network to predict the position feature parameters of the next center point; and take the next center point as the current center point and return to extracting the neighborhood image block of the current center point to continue, until the next center point is an end point, obtaining the position feature parameters of all the center points of the blood vessels in the three-dimensional blood vessel image.
  • the blood flow feature prediction device 1900 further includes:
  • the model training module 1906 is used to obtain sample data; the sample data includes a sample image block sequence composed of the sample neighborhood image blocks of consecutive sample center points, and the true position feature parameters corresponding to the sample center points; the consecutive sample center points are consecutive center points selected from the vessel centerline in the sample three-dimensional blood vessel image; in each round of training, iteratively input the sample neighborhood image block of the current sample center point in the sample image block sequence into the first network of the current round to extract low-dimensional abstract features, input the extracted low-dimensional abstract features and the predicted position feature parameters of the current sample center point into the second network of the round, and output the predicted position feature parameters of the next sample center point; and, according to the difference between the predicted and true position feature parameters corresponding to each sample center point, update the model parameters of the round's first network and second network until the training stop condition is reached, obtaining the centerline extraction model; the centerline extraction model includes the first network and the second network at the time training stops.
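The alternation between the two networks during tracking can be sketched as a simple loop. This is a hedged illustration, not the patent's implementation: `first_net` and `second_net` stand in for the trained CNN and Transformer, and the demo below replaces them with trivial stubs that walk a straight line and flag the point at x = 5 as an end point.

```python
END_POINT = 2  # point classification label for an end point, per the 0/1/2 scheme

def extract_neighborhood(image, coords, size=25):
    """Stub: in practice this crops a size^3 block around `coords`."""
    return (image, coords, size)

def extract_centerline(image, entrance_params, first_net, second_net, max_points=1000):
    """Iteratively predict center points: the first network turns the current
    point's neighborhood image block into low-dimensional abstract features,
    and the second network predicts the next point's position feature
    parameters (coordinates, direction vector, classification label)."""
    params = entrance_params
    centerline = [params]
    for _ in range(max_points):
        block = extract_neighborhood(image, params["coords"])
        features = first_net(block)
        params = second_net(features, params)
        centerline.append(params)
        if params["label"] == END_POINT:
            break
    return centerline

# Trivial stand-ins: walk along +x and stop once x reaches 5.
def first_net(block):
    return block  # a real CNN would output low-dimensional abstract features

def second_net(features, params):
    x, y, z = params["coords"]
    nxt = (x + 1.0, y, z)
    return {"coords": nxt, "direction": (1.0, 0.0, 0.0),
            "label": END_POINT if nxt[0] >= 5.0 else 0}

entrance = {"coords": (0.0, 0.0, 0.0), "direction": (1.0, 0.0, 0.0), "label": 0}
line = extract_centerline(None, entrance, first_net, second_net)
```

The `max_points` cap is a safety bound added here; the description above stops tracking only when the predicted point classification label marks an end point.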
  • the vascular tree generation module 1902 is also used to start from the vascular entrance in the vascular tree and sequentially select the center points to be corrected on the centerlines of the vascular tree, in order from the proximal end to the distal end; acquire, for each center point to be corrected, a preset number of images centered on the center point to be corrected and perpendicular to the centerline to obtain the first image sequence; input the first image sequence into the centerline correction model to obtain the position offset corresponding to the center point to be corrected, where the centerline correction model is a deep learning model trained in advance based on the Transformer network structure; correct the position of the center point to be corrected according to the position offset, obtaining the corrected center point; and generate a corrected centerline based on all the corrected center points, obtaining the final vascular tree.
  • the vascular tree generation module 1902 is also used to input the first image sequence into the centerline correction model, and predict the position offset corresponding to the image in the center position in the first image sequence as the center to be corrected The position offset corresponding to the point.
  • the model training module 1906 is also used to obtain the first sample image sequence corresponding to each sample center point on the vessel centerline in the sample three-dimensional blood vessel image, and the true position offset corresponding to each sample center point; input the first sample image sequence into the centerline correction model to be trained, which includes the Transformer network structure, to obtain the predicted position offset; and, according to the difference between the predicted position offset and the true position offset, iteratively adjust the centerline correction model to be trained until the model converges, obtaining the final centerline correction model.
  • the blood flow feature prediction module 1904 is also used to start from the blood vessel entrance of the blood vessel tree and sequentially select points on the center line of a preset length from the blood vessel tree in order from the proximal end to the distal end to obtain the center Point sequence; Obtain the feature sequence corresponding to each center point sequence; Input the feature sequence corresponding to each center point sequence into the blood flow feature prediction model trained based on the Transformer network structure, and predict the blood flow along each center point in the center point sequence. Flow characteristics.
  • the model training module 1906 is also used to obtain the sample feature sequence of each center point on the centerline of the vascular tree in the sample three-dimensional vascular image and the sample blood flow feature corresponding to each center point; input the sample feature sequences of the center points on a centerline segment of preset length into the blood flow feature prediction model to be trained, obtaining the predicted blood flow feature of each center point along the vessel on that segment; and, according to the difference between the sample and predicted blood flow features, iteratively adjust the blood flow feature prediction model to be trained until the model converges, obtaining the final blood flow feature prediction model.
  • the feature sequence includes the original 3D image block corresponding to the center point in the center point sequence, the 3D segmented image block, the diameter of the blood vessel, the distance to the blood vessel entrance, the distance to the nearest bifurcation upstream, and the total length of the upstream plaque.
  • the blood flow feature includes at least one of blood flow reserve, flow rate, pressure, and shear force.
  • the three-dimensional segmented image block is an image block in the pixel space corresponding to the center point in the center point sequence in the three-dimensional blood vessel model.
  • the blood flow feature prediction device 1900 further includes a blood vessel contour extraction module 1908, which is used to perform a blood vessel contour extraction process on a three-dimensional blood vessel image containing a blood vessel tree to obtain a blood vessel contour, and perform lofting and interpolation on the blood vessel contour to generate Three-dimensional blood vessel model; the three-dimensional blood vessel model is a three-dimensional segmented image.
  • the blood vessel contour extraction module 1908 is also used to straighten the blood vessels in the three-dimensional blood vessel image containing the blood vessel tree along the centerlines in the blood vessel tree to obtain a straightened vessel image; extract, from the straightened vessel image, a preset number of vessel cross-sectional images adjacent to each center point on the centerline to obtain a second image sequence corresponding to each center point; input the second image sequence corresponding to each center point into the blood vessel contour extraction model to obtain the contour distance corresponding to each center point, where the contour distance is used to indicate the distance from the vessel contour corresponding to the center point to the center point; and generate the vessel contour according to the contour distance corresponding to each center point.
  • the blood vessel contour extraction module 1908 is also used to determine the blood vessel contour corresponding to each center point, take the centroid of the corresponding blood vessel contour as the new center point corresponding to each center point, and generate a new vascular tree based on all the new center points.
  • the blood flow feature prediction module 1904 is also used to select the center point sequence from the center line in the blood vessel tree from the blood vessel entrance in the new blood vessel tree in the order from the proximal end to the distal end.
  • the model training module 1906 is also used to perform blood vessel straightening processing on the sample three-dimensional blood vessel image to obtain a sample straightened vessel image; for each sample center point on the centerline in the sample straightened vessel image, take multiple sample vessel cross-sectional images near that sample center point to obtain multiple second sample image sequences; obtain the true contour distance annotated for each sample vessel cross-sectional image, where the true contour distance is used to indicate the distance between each point on the vessel contour in the sample vessel cross-sectional image and the center point corresponding to that image; input the second sample image sequences into the blood vessel contour extraction model to be trained to obtain the predicted contour distance corresponding to the images of each second sample image sequence; and, according to the difference between the predicted and true contour distances, iteratively adjust the blood vessel contour extraction model to be trained until the model converges, obtaining the blood vessel contour extraction model.
  • the centerline of the blood vessels in the three-dimensional blood vessel image is first extracted based on the centerline extraction model to generate a blood vessel tree.
  • the centerline extraction model is a deep learning model trained in advance based on the Transformer network structure; then, starting from the blood vessel entrance, in order from the proximal end to the distal end, a center point sequence is selected from the centerlines in the blood vessel tree, and the feature sequence corresponding to each center point in the center point sequence is input into the blood flow feature prediction model trained based on the Transformer network structure.
  • the blood flow feature along each center point is thereby predicted.
  • the blood flow feature prediction model is used to process the feature sequences corresponding to the center points of the input center point sequence in parallel and to predict blood flow features with a self-attention mechanism.
  • the centerline extraction model trained based on the Transformer network structure is used to extract the centerline.
  • the blood flow feature prediction model trained based on the Transformer network structure is used to predict the blood flow features.
  • the blood flow feature prediction model trained based on the Transformer network structure can perform parallel processing on the feature sequence corresponding to each center point in the center point sequence, which also improves the efficiency of blood flow feature prediction.
  • the blood flow feature prediction model trained based on the Transformer network structure uses a self-attention mechanism to predict blood flow features, and can notice the correlation between the feature sequences of each center point, so that blood flow features can be predicted more accurately.
  • the center point sequence is selected from the centerlines in the vascular tree, starting from the vascular entrance and proceeding in order from the proximal end to the distal end; this selection method can simulate the flow of blood, so that blood flow features can be predicted more accurately.
  • Each module in the above blood flow characteristic prediction device can be implemented in whole or in part by software, hardware, and a combination thereof.
  • the above-mentioned modules may be embedded in the form of hardware or independent of the processor in the computer equipment, or may be stored in the memory of the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the above-mentioned modules.
  • a computer device is provided.
  • the computer device may be a server, and its internal structure diagram may be as shown in FIG. 21.
  • the computer device includes a processor, a memory, and a network interface connected through a system bus, where the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, a computer program, and a database.
  • the internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium.
  • the database of the computer equipment is used to store blood flow feature prediction data.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection.
  • the computer program is executed by the processor to realize a blood flow feature prediction method.
  • FIG. 21 is only a block diagram of part of the structure related to the solution of the present application, and does not constitute a limitation on the computer device to which the solution of the present application is applied.
  • a specific computer device may include more or fewer components than shown in the figure, combine some components, or have a different arrangement of components.
  • a computer device is provided, including a memory and a processor; the memory stores a computer program, and the processor implements the steps in the foregoing method embodiments when executing the computer program.
  • a computer-readable storage medium on which a computer program is stored, and the computer program is executed by a processor to implement the steps in the foregoing method embodiments.
  • Non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory, or optical storage.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM may be in various forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), etc.


Abstract

Provided are a blood flow feature prediction method and apparatus, a computer device, and a storage medium. The method includes the following steps: extracting the centerlines of the blood vessels in a three-dimensional vascular image based on a centerline extraction model to generate a vessel tree, where the centerline extraction model is a deep learning model trained in advance based on the Transformer network structure; and, starting from the vessel inlet of the vessel tree, in order from the proximal end to the distal end, selecting a center point sequence from the centerlines of the vessel tree, and inputting the feature sequence corresponding to each center point in the center point sequence into a blood flow feature prediction model trained based on the Transformer network structure to predict the blood flow features along each center point, where the blood flow feature prediction model is used to process the feature sequences corresponding to the center points of the input center point sequence in parallel and to predict blood flow features with a self-attention mechanism.

Description

Blood flow feature prediction method and apparatus, computer device, and storage medium
Cross-reference to related applications
This application claims priority to Chinese application No. 2020103147766, filed on April 21, 2020 and entitled "Blood flow feature prediction method and apparatus, computer device, and storage medium", the disclosure of which is incorporated herein by reference in its entirety.
Technical field
This application relates to a blood flow feature prediction method and apparatus, a computer device, and a storage medium.
Background
Vascular disease is a serious threat to human health. Blood flow features are important physiological indicators by which physicians assess vascular health; they include fractional flow reserve (FFR), pressure, and shear force, among others. Predicting blood flow features can assist in the diagnosis and treatment of vascular disease.
In conventional techniques, blood flow in the vessels is generally simulated by CFD (Computational Fluid Dynamics) to compute the blood flow features along each center point of a vessel.
Summary
According to various embodiments, a blood flow feature prediction method is provided. The method includes:
extracting the centerlines of the blood vessels in a three-dimensional vascular image based on a centerline extraction model to generate a vessel tree, the centerline extraction model being a deep learning model trained in advance based on the Transformer network structure; and
starting from the vessel inlet of the vessel tree, in order from the proximal end to the distal end, selecting a center point sequence from the centerlines of the vessel tree, and inputting the feature sequence corresponding to each center point in the center point sequence into a blood flow feature prediction model trained based on the Transformer network structure to predict the blood flow features along each center point;
where the blood flow feature prediction model is used to process the feature sequences corresponding to the center points of the input center point sequence in parallel and to predict blood flow features with a self-attention mechanism.
A blood flow feature prediction apparatus, the apparatus including:
a vessel tree generation module, configured to extract the centerlines of the blood vessels in a three-dimensional vascular image based on a centerline extraction model to generate a vessel tree, the centerline extraction model being a deep learning model trained in advance based on the Transformer network structure; and
a blood flow feature prediction module, configured to start from the vessel inlet of the vessel tree, select, in order from the proximal end to the distal end, a center point sequence from the centerlines of the vessel tree, and input the feature sequence corresponding to each center point in the center point sequence into a blood flow feature prediction model trained based on the Transformer network structure to predict the blood flow features along each center point;
where the blood flow feature prediction model is used to process the feature sequences corresponding to the center points of the input center point sequence in parallel and to predict blood flow features with a self-attention mechanism.
A computer device, including a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the blood flow feature prediction method in the embodiments of this application.
A computer-readable storage medium storing a computer program that, when executed by a processor, causes the processor to perform the steps of the blood flow feature prediction method in the embodiments of this application.
The details of one or more embodiments of the invention are set forth in the drawings and the description below. Other features, objects, and advantages of the invention will become apparent from the description, the drawings, and the claims.
Brief description of the drawings
To describe the technical solutions in the embodiments of this application more clearly, the drawings required in the embodiments are briefly introduced below. Evidently, the drawings described below are only some embodiments of this application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a blood flow feature prediction method in an embodiment;
Fig. 2 is a schematic structural diagram of a Transformer network in an embodiment;
Fig. 3 is a schematic structural diagram of an encoder and a decoder in an embodiment;
Fig. 4 is a schematic flowchart of vessel tree generation in an embodiment;
Fig. 5 is a schematic diagram of the center point at a vessel inlet in an embodiment;
Fig. 6 is a schematic diagram of centerline extraction in an embodiment;
Fig. 7 is a schematic diagram of the training of a centerline extraction model in an embodiment;
Fig. 8 is a schematic diagram of centerline correction in an embodiment;
Fig. 9 is a schematic diagram of the training of a centerline correction model in an embodiment;
Fig. 10 is a flowchart of centerline extraction and vessel tree generation in an embodiment;
Fig. 11 is a schematic diagram of blood flow feature prediction in an embodiment;
Fig. 12 is a schematic diagram of the training of a blood flow feature prediction model in an embodiment;
Fig. 13 is a flowchart of blood flow feature prediction in an embodiment;
Fig. 14 is a schematic diagram of vessel straightening in an embodiment;
Fig. 15 is a schematic diagram of vessel contour extraction in an embodiment;
Fig. 16 is a schematic diagram of the training of a vessel contour extraction model in an embodiment;
Fig. 17 is a flowchart of vessel contour extraction in an embodiment;
Fig. 18 is an overall flowchart of a blood flow feature prediction method in an embodiment;
Fig. 19 is a structural block diagram of a blood flow feature prediction apparatus in an embodiment;
Fig. 20 is a structural block diagram of a blood flow feature prediction apparatus in an embodiment;
Fig. 21 is an internal structure diagram of a computer device in an embodiment.
Detailed description
To make the objectives, technical solutions, and advantages of this application clearer, this application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain this application and are not intended to limit it.
Simulating blood flow in the vessels by CFD to compute the blood flow features along each vessel center point usually requires extensive computation and demands high computing power, which lowers the efficiency of blood flow feature prediction.
In an embodiment, as shown in Fig. 1, a blood flow feature prediction method is provided. This embodiment illustrates the method as applied to a server; it can be understood that the method can also be applied to a terminal, or to a system including a terminal and a server and implemented through their interaction. In this embodiment, the method includes the following steps:
Step S102: extract the centerlines of the blood vessels in the three-dimensional vascular image based on a centerline extraction model to generate a vessel tree; the centerline extraction model is a deep learning model trained in advance based on the Transformer network structure.
The centerline extraction model is a model for extracting vessel centerlines. The three-dimensional vascular image is a three-dimensional image captured of blood vessels. The vessel tree is a tree structure composed of the vessel centerlines in the three-dimensional vascular image.
A Transformer network is an artificial neural network for sequence feature analysis. As shown in Fig. 2, a Transformer network is built from multiple connected encoders and decoders: sequence features are input to the first encoder, and the last decoder outputs the prediction. The structures of the encoder and decoder are shown in Fig. 3: the encoder contains a self-attention module and a feed-forward module, the decoder contains two self-attention modules and a feed-forward module, and both the inputs and the outputs of the encoders and decoders are sequence features.
In an embodiment, the three-dimensional vascular image may be a CT (Computed Tomography) image or an MRI (Magnetic Resonance Imaging) image.
In an embodiment, the three-dimensional vascular image may be a three-dimensional image of the blood vessels of the whole body. In an embodiment, it may also be a three-dimensional image of at least one of cardiovascular, cerebrovascular, or peripheral vessels.
Specifically, the server performs centerline extraction on the three-dimensional vascular image based on the centerline extraction model to obtain the vessel centerlines in the image, and then generates the vessel tree from all the centerlines.
In an embodiment, the server may extract the center points in the three-dimensional vascular image based on the centerline extraction model and then generate the vessel centerlines from all the center points. A center point is a point on a centerline.
In an embodiment, the server may also correct the extracted centerlines to obtain more accurate centerlines.
Step S104: starting from the vessel inlet of the vessel tree, in order from the proximal end to the distal end, select a center point sequence from the centerlines of the vessel tree, and input the feature sequence corresponding to each center point in the center point sequence into the blood flow feature prediction model trained based on the Transformer network structure to predict the blood flow features along each center point.
A center point sequence contains multiple center points on a centerline. A feature sequence contains the features needed to predict blood flow features. Blood flow features are important physiological indicators by which physicians assess vascular health; they include at least one of fractional flow reserve (FFR), flow rate, pressure, and shear force. The blood flow feature prediction model is used to process the feature sequences corresponding to the center points of the input center point sequence in parallel and to predict blood flow features with a self-attention mechanism.
Specifically, starting from the vessel inlet of the vessel tree, in order from the proximal end to the distal end, a center point sequence is selected from the centerlines of the vessel tree; the feature sequence corresponding to each center point in the sequence is then obtained and input into the blood flow feature prediction model trained based on the Transformer network structure to predict the blood flow features along each center point.
In an embodiment, center point sequences may be selected separately from centerline segments of a preset length in the vessel tree, and the feature sequences corresponding to the center point sequence of each segment input into the blood flow feature prediction model separately.
In the above blood flow feature prediction method, the vessel centerlines in the three-dimensional vascular image are first extracted based on the centerline extraction model to generate a vessel tree; the centerline extraction model is a deep learning model trained in advance based on the Transformer network structure. Then, starting from the vessel inlet of the vessel tree, in order from the proximal end to the distal end, a center point sequence is selected from the centerlines, and the feature sequence corresponding to each center point is input into a blood flow feature prediction model trained based on the Transformer network structure to predict the blood flow features along each center point; the prediction model processes the feature sequences of the center points in parallel and predicts blood flow features with a self-attention mechanism. Using Transformer-based models for centerline extraction and blood flow feature prediction removes the need to simulate blood flow in the vessels to compute blood flow features, reducing computation and thus improving prediction efficiency. Moreover, the Transformer-based prediction model can process the feature sequences of the center points in parallel; compared with serial processing (for example, with a recurrent neural network, each hidden state depends on the previous hidden state and the input, so computation must be serial), this further improves prediction efficiency, and the vanishing gradient problem does not arise.
Second, the Transformer-based prediction model predicts blood flow features with self-attention and can notice the correlations between the feature sequences of the center points; compared with treating the whole sequence uniformly without emphasis (for example, processing with a recurrent neural network has no emphasis), this accounts for the emphasis and correlations within the sequence and improves prediction accuracy.
In addition, the solution of this application considers the spatial sequence relationship; compared with treating each point on a vessel path independently and computing from local features alone, it predicts blood flow features more accurately. Furthermore, selecting the center point sequence from the vessel inlet in proximal-to-distal order closely approximates the flow of blood and considers the spatial sequence relationship of the whole vessel tree; compared with considering only the spatial sequence relationship of a single vessel segment, it predicts blood flow features more accurately. Moreover, referencing this longer spatial sequence of the whole vessel tree does not greatly increase computational complexity and therefore does not affect prediction efficiency.
In an embodiment, the centerline extraction model includes a first network and a second network. The second network is a deep learning network based on the Transformer structure. Step S102 specifically includes the following steps:
Step S402: obtain the position feature parameters of the center point at the vessel inlet in the three-dimensional vascular image; the position feature parameters include position coordinates, a direction vector, and a point classification label; the point classification label is used to indicate whether the center point is a branch point, an end point, or neither.
A center point is a point on a vessel centerline.
In an embodiment, the point classification label may be 0, 1, or 2. For example, 0 may indicate that the center point is neither a branch point nor an end point, 1 that it is a branch point, and 2 that it is an end point.
As shown in Fig. 5, the point shown is the center point at the vessel inlet, and the arrow indicates the direction represented by that center point's direction vector.
In an embodiment, the position feature parameters of the center point at the vessel inlet can be obtained automatically by machine learning. First, a model for extracting the inlet center point's position feature parameters is trained as follows: obtain sample three-dimensional vascular images and, for each, annotated position feature parameters of the center point at the vessel inlet; the server inputs a sample three-dimensional vascular image into the model to be trained (which may be a convolutional neural network such as ResNet) to predict the position feature parameters of the inlet center point, and iteratively adjusts the model parameters according to the difference between the predicted parameters and the annotations so as to reduce the difference, until the model converges, yielding the final model for extracting the inlet center point's position feature parameters. Then, inputting the three-dimensional vascular image into this model predicts the position feature parameters of the inlet center point.
Step S404: take the inlet center point as the current center point, extract the neighborhood image block of the current center point, input the neighborhood image block into the first network for convolution, and output low-dimensional abstract features.
A neighborhood image block is a three-dimensional image block of preset volume extracted from the three-dimensional vascular image centered on the current center point; for example, the preset volume may be 25*25*25 pixels. The low-dimensional abstract features are those extracted by the first network.
In an embodiment, the neighborhood image block may be an image block extracted from the original three-dimensional vascular image. In an embodiment, it may also be a normalized image block.
In an embodiment, the first network may consist of multiple convolutional neural networks (CNNs) with shared weights. The server inputs the current center point's neighborhood image block into the CNN corresponding to that center point and outputs the corresponding low-dimensional abstract features.
Step S406: input the current center point's low-dimensional abstract features and position feature parameters into the second network, predict the position feature parameters of the next center point, take the next center point as the current center point, and return to extracting the neighborhood image block of the current center point to continue, until the next center point is an end point, obtaining the position feature parameters of all vessel center points in the three-dimensional vascular image.
All the center points form a series of points along the vessels in the three-dimensional vascular image, with equal spacing between adjacent points. It can be understood that the server uses the centerline extraction model to iteratively predict the position feature parameters of the next center point from the current one until the next center point is an end point, finally obtaining the position feature parameters of all vessel center points. Predicting the position coordinates and direction vector among the position feature parameters is a regression task; predicting the point classification label is a classification task.
Specifically, the server inputs the low-dimensional abstract features and position feature parameters of the current center point into the second network, predicts the position feature parameters of the next center point, takes the next center point as the current center point, and returns to extracting the neighborhood image block of the current center point to continue, until the next center point is an end point, obtaining the position feature parameters of all vessel center points in the three-dimensional vascular image.
In an embodiment, when the point classification label in the next center point's position feature parameters indicates that it is an end point, the next center point is determined to be an end point.
Fig. 6 is a schematic diagram of extracting the vessel centerlines in the three-dimensional vascular image based on the centerline extraction model. In Fig. 6, the leftmost point on the vessel is the center point at the vessel inlet; its position feature parameters (position coordinates, direction vector, and point classification label) are obtained. The neighborhood image block of the inlet center point is extracted and input into the first network (CNN) for convolution, outputting low-dimensional abstract features; the low-dimensional abstract features and position feature parameters are then input into the second network (Transformer) to predict the position feature parameters of the next center point. The neighborhood image block of the next center point is then extracted and input into the CNN for convolution, its low-dimensional abstract features and position feature parameters are input into the Transformer to predict the position feature parameters of the next center point, and so on, until the next center point is an end point, obtaining the position feature parameters of all vessel center points in the three-dimensional vascular image.
In an embodiment, when the current center point is a branch point (its point classification label indicates a branch point), region growing can be performed on the region where it lies to obtain multiple connected regions, and the centroid of each connected region is taken as a next center point (giving the next center point's position coordinates); the direction represented by the next center point's direction vector points from the current center point to the next center point. It can be understood that the number of connected regions obtained equals the number of branch vessels at the current center point.
In an embodiment, when the current center point is a branch point (its point classification label indicates a branch point), the position coordinates and direction vectors of the next center points can also be predicted by deep learning. A branch point processing model is first trained as follows: take multiple image blocks at vessel branches as sample branch image blocks, and obtain the annotated direction vectors of the multiple branches of the center point at the branch in each sample branch image block; input the branch image blocks into the branch point processing model to be trained (which may be a convolutional neural network such as ResNet) to obtain the predicted direction vectors of the branches, and iteratively adjust the model parameters according to the difference between the predicted and annotated direction vectors so as to reduce the difference, until the model converges, yielding the branch point processing model. Then, the neighborhood image block of the current center point is input into the branch point processing model to obtain the direction vectors of the branches (the direction vectors of the next center points), and the current center point is moved a fixed distance along each branch's direction vector to obtain the multiple next center points after the branch (i.e., their position coordinates).
Step S408: generate the vessel centerlines in the three-dimensional vascular image from all the center points represented by all the position feature parameters, obtaining the vessel tree.
In an embodiment, the server may apply spline interpolation (for example, cubic spline interpolation) to all the center points represented by all the position feature parameters to generate the vessel centerlines in the three-dimensional vascular image.
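The cubic-spline step can be sketched with an off-the-shelf routine. This is an illustrative assumption rather than the patent's code: `scipy.interpolate.CubicSpline` is used here to densify a sparse ordered sequence of predicted center points into a smooth centerline, with the spline parameterized by point index.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def densify_centerline(points, samples_per_segment=10):
    """Fit a cubic spline through ordered 3D center points and resample it
    densely; the spline is parameterized by the point index 0, 1, ..., n-1."""
    points = np.asarray(points, dtype=float)      # shape (n, 3)
    t = np.arange(len(points))                    # parameter values
    spline = CubicSpline(t, points, axis=0)
    t_dense = np.linspace(0, len(points) - 1,
                          (len(points) - 1) * samples_per_segment + 1)
    return spline(t_dense)                        # shape (m, 3)

sparse = [(0, 0, 0), (1, 1, 0), (2, 0, 0), (3, 1, 0)]
dense = densify_centerline(sparse)
```

An arc-length parameterization instead of the point index would give more uniform spacing when adjacent center points are unevenly spaced; the description above assumes equal spacing, so index parameterization suffices for the sketch.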
In this embodiment, the position feature parameters of the center point at the vessel inlet in the three-dimensional vascular image are obtained; the inlet center point is taken as the current center point and its neighborhood image block is extracted; the current center point's neighborhood image block and position feature parameters are input into the centerline extraction model to predict the next center point's position feature parameters; the next center point is taken as the current center point; and the position feature parameters of all vessel center points are predicted iteratively. The vessel centerlines are then generated from all the center points represented by all the position feature parameters, obtaining the vessel tree. Using a centerline extraction model trained based on the Transformer network structure reduces computation and thus improves the efficiency of centerline extraction.
In an embodiment, the training of the centerline extraction model specifically includes the following steps: obtain sample data, which includes sample image block sequences composed of the sample neighborhood image blocks of consecutive sample center points, and the true position feature parameters corresponding to the sample center points; the consecutive sample center points are consecutive center points selected from the vessel centerlines in the sample three-dimensional vascular image. In each training round, iteratively input the sample neighborhood image block of the current sample center point in the sample image block sequence into the round's first network to extract low-dimensional abstract features; input the extracted low-dimensional abstract features and the predicted position feature parameters of the current sample center point into the round's second network, outputting the predicted position feature parameters of the next sample center point; and, according to the differences between each sample center point's predicted and true position feature parameters, update the model parameters of the round's first and second networks until the training stop condition is reached, obtaining the centerline extraction model, which includes the first and second networks at training stop.
A sample center point is a point on the vessel centerlines of a sample three-dimensional vascular image used as a training sample. Consecutive sample center points are consecutive, equally spaced center points selected from the vessel centerlines in the sample three-dimensional vascular image. A sample neighborhood image block is a three-dimensional image block of preset volume extracted from the sample three-dimensional vascular image centered on a sample center point; for example, the preset volume may be 25*25*25 pixels. A sample image block sequence is a series of sample neighborhood image blocks, i.e., it contains multiple sample neighborhood image blocks. The true position feature parameters are the position feature parameters corresponding to each sample center point in the sample data; they are known and serve as the reference during training of the centerline extraction model.
Specifically, in each training round, the server iteratively inputs the current sample center point's sample neighborhood image block into the round's first network to extract low-dimensional abstract features, then inputs those features together with the current sample center point's predicted position feature parameters into the round's second network, outputting the next sample center point's predicted position feature parameters. This finally yields the predicted position feature parameters for every sample center point in the round. The model parameters of the round's first and second networks are updated according to the differences between each sample center point's predicted and true position feature parameters so as to reduce them, until the training stop condition is reached, obtaining the centerline extraction model.
It can be understood that the current sample center point's predicted position feature parameters are obtained in the same round by inputting the previous sample center point's low-dimensional abstract features and predicted position feature parameters into the round's second network. When the current sample center point is the sample center point at the vessel inlet (i.e., the first sample center point), there are no corresponding predicted position feature parameters, so the true position feature parameters of the inlet sample center point are used as input.
Fig. 7 is a schematic diagram of the training of the centerline extraction model. In each round, the neighborhood image block of the first sample center point is first input into the first network (a convolutional neural network) to extract low-dimensional abstract features, which are input together with the first sample center point's true position feature parameters into the round's second network (a Transformer network), outputting the second sample center point's predicted position feature parameters. The low-dimensional abstract features that the first network extracts from the second sample center point's neighborhood image block are then input together with the second sample center point's predicted position feature parameters into the second network, outputting the third sample center point's predicted position feature parameters, and so on, giving the predicted position feature parameters of every sample center point in the round. The model parameters of the round's first and second networks are updated according to the differences between each sample center point's predicted and true position feature parameters so as to reduce them, and the next round begins, until the training stop condition is reached, obtaining the centerline extraction model.
In an embodiment, the training stop condition may be model convergence.
In this embodiment, iterative training on sample data yields a centerline extraction model that can extract centerlines more accurately.
In an embodiment, step S102 further includes the following steps: starting from the vessel inlet of the vessel tree, in order from the proximal end to the distal end, sequentially select center points to be corrected on the centerlines of the vessel tree; for each center point to be corrected, acquire a preset number of images centered on it and perpendicular to the centerline, obtaining a first image sequence; input the first image sequence into the centerline correction model to obtain the position offset corresponding to the center point to be corrected, where the centerline correction model is a deep learning model trained in advance based on the Transformer network structure; correct the position of the center point to be corrected by the position offset, obtaining the corrected center point; and generate the corrected centerlines from all the corrected center points, obtaining the final vessel tree.
The centerline correction model is used to correct the positions of the centerlines so that they become more accurate. The position offset is the offset between the center point to be corrected and the true vessel center in the vessel cross-section. Position correction means adjusting the position of the center point to be corrected.
Specifically, in the three-dimensional vascular image containing the generated vessel tree, starting from the vessel inlet, in proximal-to-distal order, center points on the centerlines of the vessel tree are sequentially selected as the center points to be corrected. For each center point to be corrected, a preset number of images centered on it and perpendicular to the centerline are acquired to obtain the first image sequence; it can be understood that the first image sequence contains multiple two-dimensional images. The server inputs the first image sequence into the centerline correction model to obtain the position offset corresponding to the center point to be corrected, adjusts the position of each center point to be corrected according to its position offset to obtain the corrected center points, and generates the corrected centerlines from all the corrected center points, obtaining the final vessel tree.
In an embodiment, the position offset may include offsets in two directions (i.e., the x and y directions).
In an embodiment, the centerline correction model consists of multiple convolutional neural networks with shared weights and one Transformer network. Fig. 8 is a schematic diagram of centerline correction with the centerline correction model: a preset number of images centered on the center point to be corrected and perpendicular to the centerline are acquired to obtain the first image sequence; each image of the first image sequence is input into its own convolutional neural network to extract low-dimensional abstract features; and all the low-dimensional abstract features are then input into the Transformer network to predict the position offset of the center point to be corrected, which includes the offsets dx and dy in two directions.
In an embodiment, the server may apply spline interpolation (for example, cubic spline interpolation) to all the corrected center points to obtain the corrected centerlines.
In this embodiment, centerline correction with a centerline correction model trained in advance based on the Transformer network structure yields the final vessel tree and makes the vessel tree more accurate. Moreover, the self-attention mechanism of the Transformer structure can learn the correlations within the image sequence, giving more accurate position offsets and hence a more accurate vessel tree. In addition, a Transformer-based centerline correction model can process the image sequence in parallel, improving processing efficiency.
In an embodiment, the step of inputting the first image sequence into the centerline correction model to obtain the position offset corresponding to the center point to be corrected specifically includes: inputting the first image sequence into the centerline correction model and predicting the position offset corresponding to the image at the center of the first image sequence as the position offset corresponding to the center point to be corrected.
Specifically, the server inputs the first image sequence corresponding to the center point to be corrected into the centerline correction model, predicts the position offset corresponding to the image at the center of the sequence, and takes that offset as the position offset of the center point to be corrected. For example, suppose N images centered on the center point to be corrected and perpendicular to the centerline form the first image sequence; the centerline correction model then predicts the position offset of the N/2-th image and takes it as the position offset of that center point. N may be odd or even; when N is odd, the value of N/2 is rounded.
In this embodiment, by inputting the first image sequence into the centerline correction model and predicting the position offset of the center image of the sequence as that of the center point to be corrected, the Transformer-based centerline correction model can process the first image sequence in parallel and quickly obtain the center point's position offset, improving the efficiency of centerline correction.
In an embodiment, the training of the centerline correction model specifically includes the following steps: obtain the first sample image sequence corresponding to each sample center point on the vessel centerlines in the sample three-dimensional vascular image, and the true position offset corresponding to each sample center point; in each training iteration, input the first sample image sequence into the centerline correction model to be trained, which includes the Transformer network structure, to obtain the predicted position offset; and, according to the difference between the predicted and true position offsets, iteratively adjust the centerline correction model to be trained until the model converges, obtaining the final centerline correction model.
The first sample image sequence is a preset number of images extracted from the sample three-dimensional vascular image, centered on a sample center point and perpendicular to the vessel centerline in that image; it can be understood that the first sample image sequence contains multiple two-dimensional images. The true position offset is the position offset used as the reference during training of the centerline correction model.
In an embodiment, the centerline correction model to be trained consists of multiple convolutional neural networks with shared weights and one Transformer network. Fig. 9 is a schematic diagram of the training of the centerline correction model. The server inputs each image of the first sample image sequence into its own convolutional neural network to extract low-dimensional abstract features, then inputs all the images' low-dimensional abstract features into the Transformer network to predict the position offsets dx and dy of the image at the center of the first sample image sequence. The parameters of the convolutional neural networks and the Transformer network are iteratively adjusted according to the difference between the predicted and true position offsets until the model converges, obtaining the final centerline correction model, which includes the convolutional neural networks and Transformer network at convergence.
Fig. 10 is a flowchart of centerline extraction and vessel tree generation in an embodiment. First, the three-dimensional vascular image (i.e., the original medical image) is acquired and the Transformer-based centerline extraction model is trained; the vessel centerlines in the image are then extracted based on the centerline extraction model to generate the vessel tree. Next, the Transformer-based centerline correction model is trained and used to correct the vessel centerlines, obtaining the final vessel tree.
In this embodiment, the centerline correction model is obtained by iterative training, so more accurate centerlines can be obtained.
In an embodiment, step S104 specifically includes the following steps: starting from the vessel inlet of the vessel tree, in order from the proximal end to the distal end, sequentially select points on centerline segments of a preset length from the vessel tree to obtain center point sequences; obtain the feature sequence corresponding to each center point sequence; and input the feature sequences corresponding to the center point sequences into the blood flow feature prediction model trained based on the Transformer network structure to predict the blood flow features along each center point of each center point sequence.
It can be understood that selecting points on centerline segments of preset length from the vessel tree, in proximal-to-distal order starting from the vessel inlet, divides the vessel tree into multiple segments of preset length, with a center point sequence selected in each segment.
In an embodiment, the feature sequence contains no fewer than one known feature, and the blood flow features contain no fewer than one blood flow feature. Depending on which blood flow features are to be predicted, a feature sequence composed of the corresponding required known features is input into the blood flow feature prediction model.
In an embodiment, the blood flow feature prediction model consists of multiple convolutional neural networks (CNNs) or fully connected networks (FCs) with shared weights and one Transformer network. Fig. 11 is a schematic diagram of blood flow feature prediction: for a center point sequence, the server inputs the feature sequence corresponding to each center point into a CNN or FC to extract low-dimensional abstract features, then inputs all the center points' low-dimensional abstract features into the Transformer network to predict the blood flow features along each center point of the sequence.
In this embodiment, predicting blood flow features with a blood flow feature prediction model trained based on the Transformer network structure avoids compute-intensive simulation, reducing computation and thus improving prediction efficiency. Moreover, the model can process the feature sequences of the center points in parallel, which also improves prediction efficiency. In addition, its self-attention mechanism can notice the correlations between the feature sequences of the center points, enabling more accurate prediction. Selecting the center point sequence from the centerlines of the vessel tree starting from the vessel inlet, in proximal-to-distal order, simulates the flow of blood, further improving prediction accuracy.
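The self-attention operation at the heart of the Transformer stage can be illustrated in a few lines of NumPy. This is a generic sketch of scaled dot-product self-attention over per-center-point feature vectors, not the patent's trained model; the projection matrices here are random stand-ins for learned weights.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention: every center point's output is a
    weighted mix of all points' value vectors, so the whole sequence is
    processed in parallel and correlations between points are captured."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])         # (n, n) pairwise logits
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over the sequence
    return weights @ V, weights

rng = np.random.default_rng(0)
n_points, d = 8, 16   # 8 center points, 16-dimensional abstract features
X = rng.normal(size=(n_points, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out, att = self_attention(X, Wq, Wk, Wv)
```

Each row of the attention matrix sums to 1 and weights every other center point, which is the "parallel processing with attention to correlations between feature sequences" described above; a recurrent network, by contrast, would have to consume the points one at a time.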
In an embodiment, the training of the blood flow feature prediction model specifically includes the following steps: obtain the sample feature sequence of each center point on the centerlines of the vessel tree in the sample three-dimensional vascular image and the sample blood flow feature corresponding to each center point; input the sample feature sequences of the center points on a centerline segment of preset length into the blood flow feature prediction model to be trained, obtaining the predicted blood flow features along each vessel center point of that segment; and, according to the differences between the sample and predicted blood flow features, iteratively adjust the blood flow feature prediction model to be trained until the model converges, obtaining the final blood flow feature prediction model.
The sample blood flow features are known blood flow features used as the reference during training of the blood flow feature prediction model.
In an embodiment, the blood flow feature prediction model to be trained consists of multiple convolutional neural networks (CNNs) or fully connected networks (FCs) with shared weights and one Transformer network. Fig. 12 is a schematic diagram of the training of the blood flow feature prediction model: the sample feature sequences of the center points on a preset-length centerline segment are input into CNNs or FCs to extract low-dimensional abstract features, then all the center points' low-dimensional abstract features are input into the Transformer network to predict the blood flow features along each vessel center point of the segment. The parameters of the CNNs or FCs and the Transformer network are iteratively adjusted according to the differences between the sample and predicted blood flow features until the model converges, obtaining the final blood flow feature prediction model, which includes the multiple CNNs or FCs and the Transformer network at convergence.
Fig. 13 is a flowchart of blood flow feature prediction in an embodiment. First, the feature sequence of each center point along the centerlines of the three-dimensional vascular image containing the vessel tree is obtained; the Transformer-based blood flow feature prediction model is then trained, and the feature sequence of each center point is input into it to predict the blood flow features along each center point.
In this embodiment, by iteratively training the Transformer-based blood flow feature prediction model, blood flow features are obtained without compute-intensive simulation, reducing computation and thus improving prediction efficiency. Moreover, the model can process the feature sequences of the center points in parallel, which also improves efficiency; its self-attention mechanism can notice the correlations between the feature sequences of the center points, enabling more accurate prediction; and selecting the center point sequence from the centerlines of the vessel tree starting from the vessel inlet, in proximal-to-distal order, simulates the flow of blood, further improving prediction accuracy.
In one embodiment, the feature sequence includes, for a center point in the center point sequence, the original three-dimensional image block, the three-dimensional segmentation image block, the vessel diameter, the distance to the vessel inlet, the distance to the nearest upstream bifurcation, the total upstream plaque length, the average upstream plaque length, the average upstream path area, the maximum upstream path area, the minimum upstream path area, the total downstream plaque length, the average downstream plaque length, the average downstream path area, the maximum downstream path area and the minimum downstream path area, as well as at least one of atrial volume, myocardial volume, radiomics features and vessel area; the blood flow feature includes at least one of fractional flow reserve, flow velocity, pressure and shear stress.
The original three-dimensional image block is the three-dimensional image block corresponding to the center point in the three-dimensional vascular image. The three-dimensional segmentation image block is the three-dimensional image block corresponding to the center point in the three-dimensional segmentation image of the three-dimensional vascular image. Upstream and downstream refer to upstream and downstream of the center point. Fractional flow reserve is, in the presence of a stenotic lesion in a coronary artery, the ratio of the maximum blood flow attainable by the myocardial territory supplied by that coronary artery to the maximum blood flow that territory could attain under normal conditions, i.e., the ratio of the mean coronary pressure distal to the stenosis (Pd) to the mean aortic pressure at the coronary ostium (Pa) under maximal myocardial hyperemia.
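As a small worked example of the Pd/Pa definition above (the pressure values are hypothetical, not taken from the disclosure):

```python
def fractional_flow_reserve(pd_mmhg, pa_mmhg):
    """FFR = mean distal coronary pressure (Pd) divided by mean aortic
    pressure at the coronary ostium (Pa), under maximal hyperemia."""
    return pd_mmhg / pa_mmhg

# Hypothetical measurements: Pd = 64 mmHg, Pa = 80 mmHg
ffr = fractional_flow_reserve(64.0, 80.0)
print(ffr)  # 0.8; values at or below about 0.8 are commonly read as significant
```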
It can be understood that the original three-dimensional image block, the three-dimensional segmentation image block, the vessel diameter, the distance to the vessel inlet, the distance to the nearest upstream bifurcation, the total upstream plaque length, the average upstream plaque length, the average upstream path area, the maximum upstream path area, the minimum upstream path area, the total downstream plaque length, the average downstream plaque length, the average downstream path area, the maximum downstream path area and the minimum downstream path area are features corresponding to each individual center point, whereas atrial volume, myocardial volume, radiomics features and vessel area are global features whose values are identical across all center points.
It can be understood that the input feature sequence is determined by which blood flow features need to be predicted.
In one embodiment, during training of the blood flow feature prediction model, the fractional flow reserve among the sample blood flow features may be obtained through CFD (Computational Fluid Dynamics) simulation, or may come from clinical data.
In one embodiment, the server may extract the three-dimensional vessel contour from the three-dimensional vascular image containing the vessel tree by means of a deep learning model. It can be understood that the three-dimensional vessel contour is equivalent to a three-dimensional segmentation image of the vessel, from which the three-dimensional segmentation image block corresponding to a center point can be obtained.
In this embodiment, the blood flow feature prediction model can predict multiple kinds of blood flow features, which can assist in the diagnosis and treatment of vascular diseases.
In one embodiment, the three-dimensional segmentation image block is the image block, in pixel space, corresponding to a center point of the center point sequence in the three-dimensional vessel contour. Before the step of selecting center point sequences along the centerline of the vessel tree, starting from the vessel inlet and proceeding from the proximal end to the distal end, the method further includes the following steps: performing vessel contour extraction on the three-dimensional vascular image containing the vessel tree to obtain vessel contours, and performing lofting interpolation on the vessel contours to generate a three-dimensional vessel model, the three-dimensional vessel model being the three-dimensional segmentation image.
Specifically, the server may perform vessel contour extraction on the three-dimensional vascular image containing the vessel tree by means of a deep learning model to obtain vessel contours, and perform lofting interpolation on the vessel contours to generate a three-dimensional vessel model, the three-dimensional vessel model being the three-dimensional segmentation image. The image block corresponding to each center point is extracted from the three-dimensional vessel model as the three-dimensional segmentation image block.
In one embodiment, the three-dimensional vessel model is defined in continuous space and needs to be converted to pixel space first; the image block corresponding to each center point is then extracted from the pixel-space three-dimensional vessel model as the three-dimensional segmentation image block.
In this embodiment, the three-dimensional segmentation image block is the image block corresponding to a center point of the center point sequence in the three-dimensional vessel model. By performing vessel contour extraction on the three-dimensional vascular image containing the vessel tree, vessel contours are obtained and, in turn, the three-dimensional segmentation image block corresponding to each center point, so that the three-dimensional segmentation image blocks can be input into the blood flow feature prediction model as part of the feature sequence to predict blood flow features.
In one embodiment, the three-dimensional vessel model may be used to simulate blood flowing through the vessel, so as to compute the fractional flow reserve. During training of the blood flow feature prediction model, the fractional flow reserve obtained in this embodiment may serve as some or all of the sample blood flow features in the sample blood flow feature sequence.
In one embodiment, it can be understood that information such as vessel width, length and plaques can be derived from the three-dimensional vessel model, so some or all of the sample features in the sample feature sequence may be computed from it. Specifically, the total upstream plaque length, average upstream plaque length, average upstream path area, maximum upstream path area, minimum upstream path area, total downstream plaque length, average downstream plaque length, average downstream path area, maximum downstream path area and minimum downstream path area may be computed from the three-dimensional vessel model. During training of the blood flow feature prediction model, the data obtained in this embodiment may serve as sample features in the sample feature sequence.
In one embodiment, the three-dimensional vessel model may be rendered for visualization, making it convenient for physicians to inspect the condition of the vessel, for example its degree of stenosis.
In one embodiment, the step of performing vessel contour extraction on the three-dimensional vascular image containing the vessel tree to obtain vessel contours specifically includes the following steps: straightening the vessels in the three-dimensional vascular image containing the vessel tree along the centerline of the vessel tree, to obtain a straightened vessel image; extracting, from the straightened vessel image, a preset number of vessel cross-sectional images adjacent to each center point on the centerline, to obtain a second image sequence corresponding to each center point; inputting the second image sequence corresponding to each center point into a vessel contour extraction model, to obtain the contour distance corresponding to each center point, the contour distance representing the distance from the vessel contour corresponding to the center point to the center point; and generating vessel contours according to the contour distance corresponding to each center point.
A vessel cross-sectional image is a two-dimensional image perpendicular to the centerline. It can be understood that extracting a preset number of vessel cross-sectional images adjacent to each center point on the centerline means extracting a preset number of two-dimensional images centered on each center point and perpendicular to the centerline. The second image sequence corresponding to each center point contains multiple vessel cross-sectional images. The contour distance represents the distance from the vessel contour corresponding to the center point to the center point; it can be understood that the contour distance comprises the distances from each point on the vessel contour to the center point.
It can be understood that vessels are curved, and straightening a vessel along its centerline turns the curved vessel into a straight one. FIG. 14 is a schematic diagram of vessel straightening: the left image shows the vessel before straightening; the cross-sections corresponding to the center points along the centerline are stacked together to obtain the straightened vessel (right image).
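The stacking step above can be sketched as follows; the resampling of each perpendicular cross-section from the volume is omitted, and `straighten` is an assumed helper name rather than part of the disclosure:

```python
import numpy as np

def straighten(cross_sections):
    """Stack the 2-D cross-sections sampled at successive centerline points
    into a straightened 3-D volume, one slice per center point."""
    return np.stack(cross_sections, axis=0)

# Hypothetical 16x16 cross-sections sampled at 5 centerline points
sections = [np.zeros((16, 16)) for _ in range(5)]
volume = straighten(sections)
print(volume.shape)  # (5, 16, 16)
```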
In one embodiment, center points on the centerline may be selected one by one, starting from the vessel inlet and proceeding from the proximal end to the distal end, and the vessel segment centered on each center point straightened to obtain the straightened vessel image corresponding to that center point.
In one embodiment, the vessels in the three-dimensional vascular image containing the vessel tree may instead be straightened as a whole, to obtain an overall straightened vessel image.
In one embodiment, the vessel cross-sectional image may be the original cross-sectional image extracted from the three-dimensional vascular image. In one embodiment, the vessel cross-sectional image may instead be a vessel contour graph model of the original cross-sectional image, the graph model being composed of nodes and edges. In one embodiment, the vessel cross-sectional image may also be the polar-coordinate-transformed image of the original cross-sectional image; that is, for the original cross-sectional image, line segments of equal length are cast from the center point clockwise toward each angle, and all the segments are rearranged in order to generate a two-dimensional image.
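A minimal nearest-neighbor version of such a polar-coordinate transform might look like the following; the function name, the sampling density and the one-pixel radial step are assumptions, and the angular direction convention here is arbitrary:

```python
import numpy as np

def polar_transform(img, center, n_angles=64, n_radii=32):
    """Resample a cross-sectional image along rays cast from `center`:
    one column per angle, one row per radial step (nearest-neighbor)."""
    h, w = img.shape
    out = np.zeros((n_radii, n_angles))
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    for j, a in enumerate(angles):
        for i in range(n_radii):
            y = int(round(center[0] + i * np.sin(a)))  # one-pixel radial step
            x = int(round(center[1] + i * np.cos(a)))
            if 0 <= y < h and 0 <= x < w:
                out[i, j] = img[y, x]
    return out

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
polar = polar_transform(img, center=(32, 32))
print(polar.shape)  # (32, 64)
```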
In one embodiment, the vessel contour extraction model contains multiple convolutional neural networks and one Transformer network, with weights shared among the convolutional neural networks. FIG. 15 is a schematic diagram of vessel contour extraction. For one center point, the server inputs each image in the second image sequence corresponding to that center point into a respective convolutional neural network (CNN) to extract low-dimensional abstract features, then feeds all of the low-dimensional abstract features into the Transformer network to predict the contour distance y corresponding to that center point, and generates vessel contours according to the contour distance corresponding to each center point.
In one embodiment, assuming there are N images in the second image sequence corresponding to a center point, the vessel contour extraction model predicts the distances from each point on the vessel contour in the N/2-th image (N being an integer; when N is odd, N/2 is rounded to the nearest integer) to the center point, and takes these distances as the contour distance of the center point corresponding to that second image sequence.
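The index selection described above can be written as a one-liner; `middle_index` is an assumed name, and the 1-based convention follows the "N/2-th image" wording:

```python
def middle_index(n):
    """1-based index of the image taken as the sequence's center:
    n/2, rounded to the nearest integer when n is odd."""
    return int(n / 2 + 0.5)

print(middle_index(8))  # 4
print(middle_index(5))  # 3 (5/2 = 2.5 rounds up)
```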
In one embodiment, the vessel contour corresponding to each center point may be obtained from the contour distance corresponding to that center point.
In this embodiment, vessel contour extraction is performed on the three-dimensional vascular image containing the vessel tree by means of the vessel contour extraction model, so that vessel contours and, in turn, the three-dimensional segmentation image block corresponding to each center point can be obtained, allowing the three-dimensional segmentation image blocks to be input into the blood flow feature prediction model as part of the feature sequence to predict blood flow features.
In one embodiment, the method further includes the following steps: determining the vessel contour corresponding to each center point, and taking the centroid of that vessel contour as a new center point corresponding to each center point; and generating a new vessel tree from the new center points. The step in step S104 of selecting center point sequences along the centerline of the vessel tree, starting from the vessel inlet and proceeding from the proximal end to the distal end, then specifically includes: selecting center point sequences along the centerline of the vessel tree, starting from the vessel inlet of the new vessel tree and proceeding from the proximal end to the distal end.
Specifically, the server may determine the vessel contour corresponding to each center point from its contour distance, take the centroid of that vessel contour as the new center point corresponding to each center point, and perform spline interpolation on all the new center points to obtain a new centerline, thereby generating a new vessel tree.
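The centroid step can be sketched as follows; the spline interpolation of the new centerline is omitted, and the names and sample coordinates are illustrative only:

```python
import numpy as np

def contour_centroid(contour_points):
    """Centroid (mean position) of the sampled contour points; used here
    as the corrected center point for one cross-section."""
    return np.asarray(contour_points, dtype=float).mean(axis=0)

# Hypothetical contour sampled at four points around (10, 20)
contour = [(12, 20), (8, 20), (10, 22), (10, 18)]
print(contour_centroid(contour))  # [10. 20.]
```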
In this embodiment, taking the centroid of the vessel contour corresponding to each center point as a new center point and generating a new vessel tree corrects the vessel tree and yields a more accurate one, so that blood flow feature prediction performed on the more accurate vessel tree is itself more accurate.
In one embodiment, training the vessel contour extraction model specifically includes the following steps: straightening the vessels in a sample three-dimensional vascular image to obtain a straightened sample vessel image; for each sample center point on the centerline of the straightened sample vessel image, taking multiple sample vessel cross-sectional images adjacent to the sample center point, to obtain multiple second sample image sequences; acquiring the ground-truth contour distance annotated for each sample vessel cross-sectional image, the ground-truth contour distance representing the distances from each point on the vessel contour in the sample vessel cross-sectional image to the center point corresponding to that image; inputting the second sample image sequences into the vessel contour extraction model to be trained, to obtain the predicted contour distances corresponding to the images of the second sample image sequences; and iteratively adjusting the model to be trained according to the difference between the predicted and ground-truth contour distances, until the model converges, to obtain the vessel contour extraction model.
The ground-truth contour distance is a known contour distance that serves as a reference during training of the vessel contour extraction model.
In one embodiment, the vessel contour extraction model to be trained contains multiple convolutional neural networks and one Transformer network, with weights shared among the convolutional neural networks. FIG. 16 is a schematic diagram of training the vessel contour extraction model. The second sample image sequence corresponding to a sample center point in the sample three-dimensional vascular image is input into the convolutional neural networks (CNN) to extract low-dimensional abstract features, which are then all fed into the Transformer network to obtain the predicted contour distance y corresponding to the images of the second sample image sequence. Based on the difference between the predicted and ground-truth contour distances, the parameters of the convolutional neural networks and the Transformer network are iteratively adjusted so as to reduce the difference, until the model converges, yielding the vessel contour extraction model, which includes the multiple convolutional neural networks and the Transformer network as they are at convergence. It should be noted that the contour distance is not a single value but an array of multiple values. For example, the contour distance may be an array of 64 values, i.e., the 360° contour is divided into 64 equal angular intervals, and each yi in the array is the distance from the center point to the contour at the corresponding angle.
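For instance, a 64-value contour-distance array can be mapped back to contour coordinates as follows; this is a minimal sketch, and the angular origin and direction convention are assumptions:

```python
import numpy as np

def contour_from_distances(center, distances):
    """Convert a contour-distance array (one distance per equal angular
    interval over 360 degrees) into 2-D contour points around `center`."""
    n = len(distances)
    angles = np.linspace(0, 2 * np.pi, n, endpoint=False)
    cx, cy = center
    xs = cx + distances * np.cos(angles)
    ys = cy + distances * np.sin(angles)
    return np.stack([xs, ys], axis=1)

# A perfectly circular contour of radius 3 sampled at 64 angles
d = np.full(64, 3.0)
pts = contour_from_distances((0.0, 0.0), d)
print(pts.shape)  # (64, 2)
```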
FIG. 17 is a flowchart of vessel contour extraction in one embodiment. First, the vessels in the three-dimensional vascular image containing the vessel tree are straightened; a vessel contour extraction model based on the Transformer network structure is then trained and used to generate the vessel contour corresponding to each center point, from which new center points are obtained.
In this embodiment, iteratively training the Transformer-based vessel contour extraction model enables more accurate extraction of vessel contours.
FIG. 18 is an overall flowchart of the blood flow feature prediction method in one embodiment. First, the centerline of the three-dimensional vascular image is extracted with the centerline extraction model and a vessel tree is generated. Then, the vessel contour corresponding to each center point is obtained with the vessel contour extraction model, from which the three-dimensional segmentation image block corresponding to each center point is derived. Finally, the blood flow feature at each center point is predicted with the blood flow feature prediction model. The three-dimensional segmentation image blocks derived from the vessel contours may serve as part of the input feature sequence of the blood flow feature prediction model.
It should be understood that although the steps in the flowcharts of FIG. 1 and FIG. 4 are displayed in the order indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in FIG. 1 and FIG. 4 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments; their execution order is likewise not necessarily sequential, and they may be performed in turn or alternately with other steps, or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in FIG. 19, a blood flow feature prediction apparatus 1900 is provided, including a vessel tree generation module 1902 and a blood flow feature prediction module 1904, wherein:
the vessel tree generation module 1902 is configured to extract, based on a centerline extraction model, the centerline of the blood vessels in a three-dimensional vascular image and generate a vessel tree, the centerline extraction model being a deep learning model trained in advance on the Transformer network structure; and
the blood flow feature prediction module 1904 is configured to select a center point sequence along the centerline of the vessel tree, starting from the vessel inlet of the vessel tree and proceeding from the proximal end to the distal end, and to input the feature sequence corresponding to each center point of the center point sequence into a blood flow feature prediction model trained on the Transformer network structure, to predict the blood flow feature at each center point.
The blood flow feature prediction model is configured to process, in parallel, the feature sequences corresponding to the center points of the input center point sequence, and to predict blood flow features using a self-attention mechanism.
In one embodiment, the centerline extraction model includes a first network and a second network, the second network being a Transformer-based deep learning network. The vessel tree generation module 1902 is further configured to: acquire the position feature parameters of the center point at the vessel inlet in the three-dimensional vascular image, the position feature parameters including position coordinates, a direction vector and a point classification label, the point classification label indicating whether a center point is a branch point, an end point, or neither; take the center point at the inlet as the current center point, extract the neighborhood image block of the current center point, and input the neighborhood image block into the first network for convolution processing to output low-dimensional abstract features; input the low-dimensional abstract features and the position feature parameters of the current center point into the second network to predict the position feature parameters of the next center point, take the next center point as the current center point, and return to extracting the neighborhood image block of the current center point to continue, until the next center point is an end point, obtaining the position feature parameters of all the center points of the blood vessels in the three-dimensional vascular image; and generate the centerline of the blood vessels in the three-dimensional vascular image from all the center points characterized by all the position feature parameters, to obtain the vessel tree.
In one embodiment, the blood flow feature prediction apparatus 1900 further includes:
a model training module 1906, configured to acquire sample data, the sample data including a sample image block sequence composed of sample neighborhood image blocks of consecutive sample center points and the ground-truth position feature parameters corresponding to the sample center points, the consecutive sample center points being consecutive center points selected on the vessel centerline in a sample three-dimensional vascular image; and, in each training round, to iteratively input the sample neighborhood image block of the current sample center point in the sample image block sequence into the current round's first network to extract low-dimensional abstract features, input the extracted low-dimensional abstract features and the predicted position feature parameters of the current sample center point into the current round's second network to output the predicted position feature parameters of the next sample center point, and update the model parameters of the current round's first and second networks according to the difference between the predicted position parameters and the ground-truth position feature parameters corresponding to each sample center point, until the training stop condition is met, obtaining the centerline extraction model, the centerline extraction model including the first and second networks as of when training stops.
In one embodiment, the vessel tree generation module 1902 is further configured to: select, one by one, center points to be corrected on the centerline of the vessel tree, starting from the vessel inlet and proceeding from the proximal end to the distal end; for each center point to be corrected, acquire a preset number of images centered on that point and perpendicular to the centerline, to obtain a first image sequence; input the first image sequence into a centerline correction model to obtain the position offset corresponding to the center point to be corrected, the centerline correction model being a deep learning model trained in advance on the Transformer network structure; correct the position of the center point to be corrected by the position offset, to obtain a corrected center point; and generate a corrected centerline from all the corrected center points, to obtain the final vessel tree.
In one embodiment, the vessel tree generation module 1902 is further configured to input the first image sequence into the centerline correction model and predict the position offset corresponding to the image located at the center of the first image sequence, as the position offset corresponding to the center point to be corrected.
In one embodiment, the model training module 1906 is further configured to: acquire the first sample image sequence corresponding to each sample center point on the vessel centerline in a sample three-dimensional vascular image, together with the ground-truth position offset corresponding to each sample center point; in each round of iterative training, input the first sample image sequence into the centerline correction model to be trained, which includes a Transformer network structure, to obtain a predicted position offset; and iteratively adjust the model to be trained according to the difference between the predicted and ground-truth position offsets, until the model converges, to obtain the final centerline correction model.
In one embodiment, the blood flow feature prediction module 1904 is further configured to: select, starting from the vessel inlet of the vessel tree and proceeding from the proximal end to the distal end, points on centerline segments of a preset length from the vessel tree, to obtain center point sequences; acquire the feature sequence corresponding to each center point sequence; and input the feature sequence corresponding to each center point sequence into a blood flow feature prediction model trained on the Transformer network structure, to predict the blood flow feature at each center point in each center point sequence.
In one embodiment, the model training module 1906 is further configured to: acquire, for each center point on the centerline of the vessel tree in a sample three-dimensional vascular image, a sample feature sequence and a corresponding sample blood flow feature; input the sample feature sequences of the center points on a centerline segment of a preset length into the blood flow feature prediction model to be trained, to obtain the predicted blood flow feature at each center point along the vessel on that segment; and iteratively adjust the model to be trained according to the difference between the sample and predicted blood flow features, until the model converges, to obtain the final blood flow feature prediction model.
In one embodiment, the feature sequence includes, for a center point in the center point sequence, the original three-dimensional image block, the three-dimensional segmentation image block, the vessel diameter, the distance to the vessel inlet, the distance to the nearest upstream bifurcation, the total upstream plaque length, the average upstream plaque length, the average upstream path area, the maximum upstream path area, the minimum upstream path area, the total downstream plaque length, the average downstream plaque length, the average downstream path area, the maximum downstream path area and the minimum downstream path area, as well as at least one of atrial volume, myocardial volume, radiomics features and vessel area; the blood flow feature includes at least one of fractional flow reserve, flow velocity, pressure and shear stress.
In one embodiment, the three-dimensional segmentation image block is the image block, in pixel space, corresponding to a center point of the center point sequence in the three-dimensional vessel model.
As shown in FIG. 20, the blood flow feature prediction apparatus 1900 further includes a vessel contour extraction module 1908, configured to perform vessel contour extraction on the three-dimensional vascular image containing the vessel tree to obtain vessel contours, and to perform lofting interpolation on the vessel contours to generate a three-dimensional vessel model, the three-dimensional vessel model being the three-dimensional segmentation image.
In one embodiment, the vessel contour extraction module 1908 is further configured to: straighten the vessels in the three-dimensional vascular image containing the vessel tree along the centerline of the vessel tree, to obtain a straightened vessel image; extract, from the straightened vessel image, a preset number of vessel cross-sectional images adjacent to each center point on the centerline, to obtain a second image sequence corresponding to each center point; input the second image sequence corresponding to each center point into a vessel contour extraction model, to obtain the contour distance corresponding to each center point, the contour distance representing the distance from the vessel contour corresponding to the center point to the center point; and generate vessel contours according to the contour distance corresponding to each center point.
In one embodiment, the vessel contour extraction module 1908 is further configured to determine the vessel contour corresponding to each center point, take the centroid of that vessel contour as a new center point corresponding to each center point, and generate a new vessel tree from the new center points. The blood flow feature prediction module 1904 is further configured to select center point sequences along the centerline of the vessel tree, starting from the vessel inlet of the new vessel tree and proceeding from the proximal end to the distal end.
In one embodiment, the model training module 1906 is further configured to: straighten the vessels in a sample three-dimensional vascular image to obtain a straightened sample vessel image; for each sample center point on the centerline of the straightened sample vessel image, take multiple sample vessel cross-sectional images adjacent to the sample center point, to obtain multiple second sample image sequences; acquire the ground-truth contour distance annotated for each sample vessel cross-sectional image, the ground-truth contour distance representing the distances from each point on the vessel contour in the sample vessel cross-sectional image to the center point corresponding to that image; input the second sample image sequences into the vessel contour extraction model to be trained, to obtain the predicted contour distances corresponding to the images of the second sample image sequences; and iteratively adjust the model to be trained according to the difference between the predicted and ground-truth contour distances, until the model converges, to obtain the vessel contour extraction model.
In the above blood flow feature prediction apparatus, the centerline of the blood vessels in the three-dimensional vascular image is first extracted based on the centerline extraction model to generate the vessel tree, the centerline extraction model being a deep learning model trained in advance on the Transformer network structure; then, starting from the vessel inlet of the vessel tree and proceeding from the proximal end to the distal end, a center point sequence is selected along the centerline of the vessel tree, and the feature sequence corresponding to each center point of the sequence is input into a blood flow feature prediction model trained on the Transformer network structure to predict the blood flow feature at each center point, the blood flow feature prediction model being configured to process the feature sequences corresponding to the center points of the input center point sequence in parallel and to predict blood flow features using a self-attention mechanism. Extracting the centerline with a centerline extraction model trained on the Transformer network structure and predicting blood flow features with a blood flow feature prediction model trained on the Transformer network structure removes the need for compute-intensive simulation to obtain blood flow features, which reduces the amount of computation and thus improves the efficiency of blood flow feature prediction. Moreover, the parallel processing of the feature sequences corresponding to the center points likewise improves the efficiency of blood flow feature prediction.
In addition, the blood flow feature prediction model trained on the Transformer network structure predicts blood flow features using a self-attention mechanism, which can attend to the correlations among the feature sequences of the center points and thus predict blood flow features more accurately. Selecting center point sequences along the centerline of the vessel tree, starting from the vessel inlet and proceeding from the proximal end to the distal end, mimics the flow of blood and thus also enables more accurate prediction of blood flow features.
For specific limitations on the blood flow feature prediction apparatus, reference may be made to the limitations on the blood flow feature prediction method above, which are not repeated here. The modules of the above blood flow feature prediction apparatus may be implemented wholly or partly in software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor of a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided; the computer device may be a server, and its internal structure may be as shown in FIG. 21. The computer device includes a processor, a memory and a network interface connected by a system bus. The processor of the computer device provides computing and control capability. The memory of the computer device includes a non-volatile storage medium and internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store blood flow feature prediction data. The network interface of the computer device is used to communicate with external terminals over a network. The computer program, when executed by the processor, implements a blood flow feature prediction method.
Those skilled in the art will appreciate that the structure shown in FIG. 21 is merely a block diagram of part of the structure relevant to the solution of the present application and does not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is also provided, including a memory and a processor, the memory storing a computer program which, when executed by the processor, implements the steps of the method embodiments above.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the method embodiments above.
Those of ordinary skill in the art will understand that all or part of the processes of the method embodiments above may be accomplished by a computer program instructing related hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the method embodiments above. Any reference to memory, storage, database or other media used in the embodiments provided in this application may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, etc. Volatile memory may include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM may take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features of the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within its scope of protection. Therefore, the scope of protection of this patent application shall be determined by the appended claims.

Claims (16)

  1. A blood flow feature prediction method, comprising:
    extracting, based on a centerline extraction model, a centerline of blood vessels in a three-dimensional vascular image to generate a vessel tree, wherein the centerline extraction model is a deep learning model trained in advance on a Transformer network structure; and
    selecting, starting from a vessel inlet of the vessel tree and proceeding from a proximal end to a distal end, a center point sequence along the centerline of the vessel tree, and inputting a feature sequence corresponding to each center point of the center point sequence into a blood flow feature prediction model trained on the Transformer network structure, to predict a blood flow feature at each of the center points;
    wherein the blood flow feature prediction model is configured to process, in parallel, the feature sequences corresponding to the center points of the input center point sequence, and to predict blood flow features using a self-attention mechanism.
  2. The method according to claim 1, wherein the centerline extraction model includes a first network and a second network; the second network is a Transformer-based deep learning network; and the extracting, based on the centerline extraction model, of the centerline of the blood vessels in the three-dimensional vascular image to generate the vessel tree comprises:
    acquiring position feature parameters of a center point at a vessel inlet in the three-dimensional vascular image, the position feature parameters including position coordinates, a direction vector and a point classification label, the point classification label indicating that a center point is a branch point, an end point, or neither a branch point nor an end point;
    taking the center point at the inlet as a current center point, extracting a neighborhood image block of the current center point, and inputting the neighborhood image block into the first network for convolution processing to output low-dimensional abstract features;
    inputting the low-dimensional abstract features and the position feature parameters of the current center point into the second network to predict position feature parameters of a next center point, taking the next center point as the current center point, and returning to the extracting of the neighborhood image block of the current center point to continue execution, until the next center point is an end point, to obtain position feature parameters of all center points of the blood vessels in the three-dimensional vascular image; and
    generating, from all the center points characterized by all the position feature parameters, the centerline of the blood vessels in the three-dimensional vascular image, to obtain the vessel tree.
  3. The method according to claim 2, wherein training the centerline extraction model comprises:
    acquiring sample data, the sample data including a sample image block sequence composed of sample neighborhood image blocks of consecutive sample center points and ground-truth position feature parameters corresponding to the sample center points, wherein the consecutive sample center points are consecutive center points selected on a vessel centerline in a sample three-dimensional vascular image; and
    in each training round, iteratively inputting the sample neighborhood image block of a current sample center point in the sample image block sequence into the current round's first network to extract low-dimensional abstract features, inputting the extracted low-dimensional abstract features and predicted position feature parameters of the current sample center point into the current round's second network to output predicted position feature parameters of a next sample center point, and updating model parameters of the current round's first network and second network according to a difference between the predicted position parameters and the ground-truth position feature parameters corresponding to each sample center point, until a training stop condition is met, to obtain the centerline extraction model; wherein the centerline extraction model includes the first network and the second network as of when training stops.
  4. The method according to claim 2, wherein the extracting, based on the centerline extraction model, of the centerline of the blood vessels in the three-dimensional vascular image to generate the vessel tree further comprises:
    selecting, one by one, center points to be corrected on the centerline of the vessel tree, starting from the vessel inlet of the vessel tree and proceeding from the proximal end to the distal end;
    acquiring, for each center point to be corrected, a preset number of images centered on the center point to be corrected and perpendicular to the centerline, to obtain a first image sequence;
    inputting the first image sequence into a centerline correction model to obtain a position offset corresponding to the center point to be corrected, wherein the centerline correction model is a deep learning model trained in advance on the Transformer network structure;
    correcting the position of the center point to be corrected by the position offset, to obtain a corrected center point; and
    generating a corrected centerline from all the corrected center points, to obtain a final vessel tree.
  5. The method according to claim 4, wherein the inputting of the first image sequence into the centerline correction model to obtain the position offset corresponding to the center point to be corrected comprises:
    inputting the first image sequence into the centerline correction model, and predicting a position offset corresponding to an image located at the center of the first image sequence, as the position offset corresponding to the center point to be corrected.
  6. The method according to claim 4, wherein training the centerline correction model comprises:
    acquiring a first sample image sequence corresponding to each sample center point on a vessel centerline in a sample three-dimensional vascular image, and a ground-truth position offset corresponding to each of the sample center points;
    in each round of iterative training, inputting the first sample image sequence into the centerline correction model to be trained, to obtain a predicted position offset, wherein the centerline correction model to be trained includes a Transformer network structure; and
    iteratively adjusting the centerline correction model to be trained according to a difference between the predicted position offset and the ground-truth position offset, until the model converges, to obtain a final centerline correction model.
  7. The method according to claim 1, wherein the selecting, starting from the vessel inlet of the vessel tree and proceeding from the proximal end to the distal end, of the center point sequence along the centerline of the vessel tree, and the inputting of the feature sequence corresponding to each center point of the center point sequence into the blood flow feature prediction model trained on the Transformer network structure, to predict the blood flow feature at each of the center points, comprises:
    selecting, starting from the vessel inlet of the vessel tree and proceeding from the proximal end to the distal end, points on centerline segments of a preset length from the vessel tree in turn, to obtain center point sequences;
    acquiring a feature sequence corresponding to each of the center point sequences; and
    inputting the feature sequence corresponding to each of the center point sequences into the blood flow feature prediction model trained on the Transformer network structure, to predict a blood flow feature at each center point in each of the center point sequences.
  8. The method according to claim 7, wherein training the blood flow feature prediction model comprises:
    acquiring a sample feature sequence of each center point on a centerline of a vessel tree in a sample three-dimensional vascular image, and a sample blood flow feature corresponding to each of the center points;
    inputting the sample feature sequences of the center points on a centerline segment of a preset length into the blood flow feature prediction model to be trained, to obtain a predicted blood flow feature at each center point along the vessel on the centerline segment of the preset length; and
    iteratively adjusting the blood flow feature prediction model to be trained according to a difference between the sample blood flow features and the predicted blood flow features, until the model converges, to obtain a final blood flow feature prediction model.
  9. The method according to any one of claims 1 to 8, wherein the feature sequence includes an original three-dimensional image block, a three-dimensional segmentation image block, a vessel diameter, a distance to a vessel inlet, a distance to a nearest upstream bifurcation, a total upstream plaque length, an average upstream plaque length, an average upstream path area, a maximum upstream path area, a minimum upstream path area, a total downstream plaque length, an average downstream plaque length, an average downstream path area, a maximum downstream path area and a minimum downstream path area corresponding to a center point in the center point sequence, as well as at least one of an atrial volume, a myocardial volume, radiomics features and a vessel area; and the blood flow feature includes at least one of a fractional flow reserve, a flow velocity, a pressure and a shear stress.
  10. The method according to claim 9, wherein the three-dimensional segmentation image block is an image block, in pixel space, corresponding to a center point of the center point sequence in a three-dimensional vessel model;
    before the selecting, starting from the vessel inlet of the vessel tree and proceeding from the proximal end to the distal end, of the center point sequence along the centerline of the vessel tree, the method further comprises:
    performing vessel contour extraction processing on the three-dimensional vascular image containing the vessel tree to obtain vessel contours, and performing lofting interpolation on the vessel contours to generate the three-dimensional vessel model, wherein the three-dimensional vessel model is a three-dimensional segmentation image.
  11. The method according to claim 10, wherein the performing of the vessel contour extraction processing on the three-dimensional vascular image containing the vessel tree to obtain the vessel contours comprises:
    straightening the blood vessels in the three-dimensional vascular image containing the vessel tree along the centerline of the vessel tree, to obtain a straightened vessel image;
    extracting, from the straightened vessel image, a preset number of vessel cross-sectional images adjacent to each center point on the centerline, to obtain a second image sequence corresponding to each center point;
    inputting the second image sequence corresponding to each center point into a vessel contour extraction model, to obtain a contour distance corresponding to each of the center points, wherein the contour distance represents a distance from the vessel contour corresponding to a center point to that center point; and
    generating the vessel contours according to the contour distance corresponding to each of the center points.
  12. The method according to claim 11, further comprising:
    determining the vessel contour corresponding to each of the center points, and taking a centroid of the corresponding vessel contour as a new center point corresponding to each of the center points; and
    generating a new vessel tree from each of the new center points; wherein
    the selecting, starting from the vessel inlet of the vessel tree and proceeding from the proximal end to the distal end, of the center point sequence along the centerline of the vessel tree comprises:
    selecting, starting from a vessel inlet of the new vessel tree and proceeding from the proximal end to the distal end, the center point sequence along the centerline of the vessel tree.
  13. The method according to claim 11, wherein training the vessel contour extraction model comprises:
    straightening blood vessels in a sample three-dimensional vascular image, to obtain a straightened sample vessel image;
    taking, for each sample center point on a centerline of the straightened sample vessel image, multiple sample vessel cross-sectional images adjacent to each of the sample center points, to obtain multiple second sample image sequences;
    acquiring a ground-truth contour distance annotated for each of the sample vessel cross-sectional images, the ground-truth contour distance representing distances from points on a vessel contour in the sample vessel cross-sectional image to a center point corresponding to the sample vessel cross-sectional image;
    inputting the second sample image sequences into a vessel contour extraction model to be trained, to obtain predicted contour distances corresponding to images of the second sample image sequences; and
    iteratively adjusting the vessel contour extraction model to be trained according to a difference between the predicted contour distances and the ground-truth contour distances, until the model converges, to obtain the vessel contour extraction model.
  14. A blood flow feature prediction apparatus, comprising:
    a vessel tree generation module, configured to extract, based on a centerline extraction model, a centerline of blood vessels in a three-dimensional vascular image to generate a vessel tree, the centerline extraction model being a deep learning model trained in advance on a Transformer network structure; and
    a blood flow feature prediction module, configured to select, starting from a vessel inlet of the vessel tree and proceeding from a proximal end to a distal end, a center point sequence along the centerline of the vessel tree, and to input a feature sequence corresponding to each center point of the center point sequence into a blood flow feature prediction model trained on the Transformer network structure, to predict a blood flow feature at each of the center points;
    wherein the blood flow feature prediction model is configured to process, in parallel, the feature sequences corresponding to the center points of the input center point sequence, and to predict blood flow features using a self-attention mechanism.
  15. A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 13.
  16. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 13.
PCT/CN2021/082629 2020-04-21 2021-03-24 Blood flow feature prediction method and apparatus, computer device and storage medium WO2021213124A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010314776.6 2020-04-21
CN202010314776.6A CN111680447B (zh) 2020-04-21 Blood flow feature prediction method and apparatus, computer device and storage medium

Publications (1)

Publication Number Publication Date
WO2021213124A1 true WO2021213124A1 (zh) 2021-10-28

Family

ID=72451774

Country Status (2)

Country Link
CN (1) CN111680447B (zh)
WO (1) WO2021213124A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114052762A (zh) * 2021-11-30 2022-02-18 燕山大学 Method for predicting stenotic vessel size and instrument size based on Swin-T
CN114366295A (zh) * 2021-12-31 2022-04-19 杭州脉流科技有限公司 Microcatheter path generation method, shaping method for a shaping needle, computer device, readable storage medium and program product
CN114757942A (zh) * 2022-05-27 2022-07-15 南通大学 Deep-learning-based method for detecting active pulmonary tuberculosis on multi-slice spiral CT
CN114972220A (zh) * 2022-05-13 2022-08-30 北京医准智能科技有限公司 Image processing method and apparatus, electronic device and readable storage medium
CN117490908A (zh) * 2023-12-31 2024-02-02 武汉华康世纪医疗股份有限公司 Negative pressure detection method and system for a negative-pressure ward

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111680447B (zh) * 2020-04-21 2023-11-17 深圳睿心智能医疗科技有限公司 Blood flow feature prediction method and apparatus, computer device and storage medium
CN112419462A (zh) * 2020-11-25 2021-02-26 苏州润迈德医疗科技有限公司 Rendering and synthesis method and system for three-dimensional blood vessels, and storage medium
CN112446866B (zh) * 2020-11-25 2023-05-26 上海联影医疗科技股份有限公司 Blood flow parameter calculation method, apparatus, device and storage medium
CN112614217A (zh) * 2020-12-17 2021-04-06 深圳睿心智能医疗科技有限公司 Method and apparatus for straightening a three-dimensional model of a tubular object, and electronic device
CN112785591B (zh) * 2021-03-05 2023-06-13 杭州健培科技有限公司 Method and apparatus for detecting and segmenting rib fractures in CT images
CN113012146B (zh) * 2021-04-12 2023-10-24 东北大学 Vessel information acquisition method and apparatus, electronic device and storage medium
CN113205488B (zh) * 2021-04-19 2023-12-29 深圳睿心智能医疗科技有限公司 Blood flow characteristic prediction method and apparatus, electronic device and storage medium
CN113192031B (zh) * 2021-04-29 2023-05-30 上海联影医疗科技股份有限公司 Vessel analysis method and apparatus, computer device and storage medium
CN113034683B (zh) * 2021-04-30 2022-12-02 北京阅影科技有限公司 Method and apparatus for determining the authenticity of a vessel centerline and its truncation position
EP4131154A1 (en) * 2021-08-05 2023-02-08 Robovision Coronary artery narrowing detection based on patient imaging and 3d deep learning
CN113838572B (zh) * 2021-09-10 2024-03-01 深圳睿心智能医疗科技有限公司 Method and apparatus for acquiring vascular physiological parameters, electronic device and storage medium
CN115880381A (zh) * 2021-09-28 2023-03-31 深圳市中兴微电子技术有限公司 Image processing method, image processing apparatus and model training method
CN113888690B (zh) * 2021-10-19 2022-08-12 柏意慧心(杭州)网络科技有限公司 Method, device and medium for determining a target segment in a blood vessel
CN114972242B (zh) * 2022-05-23 2023-04-07 北京医准智能科技有限公司 Training method and apparatus for a myocardial bridge detection model, and electronic device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106980899A (zh) * 2017-04-01 2017-07-25 北京昆仑医云科技有限公司 Deep learning model and system for predicting blood flow features on a vessel path of a vascular tree
CN110517279A (zh) * 2019-09-20 2019-11-29 北京深睿博联科技有限责任公司 Head and neck vessel centerline extraction method and apparatus
CN110853029A (zh) * 2017-11-15 2020-02-28 深圳科亚医疗科技有限公司 Method, system and medium for automatically predicting blood flow features based on medical images
CN111680447A (zh) * 2020-04-21 2020-09-18 深圳睿心智能医疗科技有限公司 Blood flow feature prediction method and apparatus, computer device and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9814433B2 (en) * 2012-10-24 2017-11-14 Cathworks Ltd. Creating a vascular tree model
WO2018001099A1 (zh) * 2016-06-30 2018-01-04 上海联影医疗科技有限公司 Blood vessel extraction method and system
CN108830848B (zh) * 2018-05-25 2022-07-05 深圳科亚医疗科技有限公司 Apparatus and system for determining, by computer, a sequence of vessel condition parameters along a blood vessel
CN110599444B (zh) * 2018-08-23 2022-04-19 深圳科亚医疗科技有限公司 Device, system and non-transitory readable storage medium for predicting fractional flow reserve of a vessel tree
CN109559326B (zh) * 2018-11-05 2020-11-13 深圳睿心智能医疗科技有限公司 Hemodynamic parameter calculation method and system, and electronic device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ASHISH VASWANI; NOAM SHAZEER; NIKI PARMAR; JAKOB USZKOREIT; LLION JONES; AIDAN N GOMEZ; LUKASZ KAISER; ILLIA POLOSUKHIN: "Attention Is All You Need", ARXIV.ORG, 6 December 2017 (2017-12-06), pages 1 - 15, XP080973732 *


Also Published As

Publication number Publication date
CN111680447B (zh) 2023-11-17
CN111680447A (zh) 2020-09-18


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21791887

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16/03/2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21791887

Country of ref document: EP

Kind code of ref document: A1