CN117952939A - Real-time measurement method for heart outflow tract wall shearing force based on neural network - Google Patents

Real-time measurement method for heart outflow tract wall shearing force based on neural network

Info

Publication number
CN117952939A
CN117952939A (application CN202410133372.5A)
Authority
CN
China
Prior art keywords
layer
decoder
outflow tract
feature
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410133372.5A
Other languages
Chinese (zh)
Inventor
马振鹤
宋佰航
赵玉倩
王毅
刘健
栾景民
杨艳秋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeastern University Qinhuangdao Branch
Original Assignee
Northeastern University Qinhuangdao Branch
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeastern University Qinhuangdao Branch filed Critical Northeastern University Qinhuangdao Branch
Priority to CN202410133372.5A priority Critical patent/CN117952939A/en
Publication of CN117952939A publication Critical patent/CN117952939A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of biomedicine and computer vision, and discloses a real-time measurement method for the cardiac outflow tract wall shear force based on a neural network. An SD-OCT imaging system is used to acquire images, yielding a structure map and a flow velocity map. The Doppler angle is recorded by changing the incidence angle of the probe beam through rotation of a graduated index head. The ALSegNet network is used to extract the flow region of the cardiac outflow tract in the structure map, and the cardiac outflow tract wall shear force is calculated by using the laminar-flow-model flow velocity gradient and a three-dimensional rotation correction algorithm in combination with the extracted flow region. The ALSegNet network designed in the method has low complexity, can be deployed on mobile devices, and has a high inference speed; its parallel subnetworks and PDAM ensure, to a certain extent, that the OFT blood-flow-region target is not lost, i.e. the method has high inference accuracy. The method can acquire the cardiac outflow tract wall shear force in real time and at high speed.

Description

Real-time measurement method for heart outflow tract wall shearing force based on neural network
Technical Field
The invention relates to the technical field of biomedicine and computer vision, in particular to a method for measuring the shear force of the wall of a heart outflow tract in real time based on a neural network.
Background
Wall shear stress is the shear stress generated at the interface between a flowing fluid and a wall. In early embryonic development, cardiac growth and morphological changes are closely related to the biomechanical environment. In blood vessels, flowing blood exerts mechanical stresses on the vessel wall, in particular wall shear stress (WSS), i.e. the frictional force acting on the surface of endothelial cells; the magnitude and direction of this force are related to changes in blood flow velocity as blood flows through the cardiovascular system. WSS can induce changes in the morphology and function of endothelial cells, thereby regulating cardiac development. Because the cardiac outflow tract (OFT) connects the ventricle and the arterial system early in heart development, many congenital heart diseases are associated with abnormal changes in the OFT during heart development. These abnormalities can lead to developmental defects of heart valves and major blood vessels, such as aortic malpositioning. Measuring WSS in the OFT is therefore of great importance for the study of congenital heart disease.
To study embryonic development, animal models are often used, as their early stages of embryonic heart development resemble those of humans. Measuring WSS in the OFT in real time provides real-time feedback during the beating of the animal embryo heart, which is an important requirement for researchers who need to accurately grasp changes in the hemodynamic environment of the embryonic heart and carry out experiments effectively. However, because early embryonic hearts are small and beat rapidly, it is difficult for the prior art to measure WSS in the OFT accurately and in real time. Optical coherence tomography (OCT) has ultra-high spatial and temporal resolution and is a mature, non-invasive, real-time imaging technique. Spectral-domain OCT (SD-OCT) enables high-speed, high-signal-to-noise-ratio imaging and real-time monitoring of sample structure and flow velocity information, and is well suited to animal embryo imaging. Calculating WSS with OCT requires the absolute blood flow velocity in the OFT, but current WSS measurement relies on post-processing and real-time measurement has not yet been achieved. In addition, WSS measurement requires computation based on the OFT structure, and traditional image-processing-based OFT structure segmentation suffers from problems such as the inability to localize automatically, low segmentation accuracy and long computation time, which hinder real-time measurement.
With the wide application of deep learning in medical imaging, segmentation methods based on convolutional neural networks (CNNs) can automatically extract arbitrary target regions within the imaging range. These deep learning techniques make real-time measurement of WSS in the cardiac OFT possible, particularly for the key step of automatically extracting the blood flow region. However, because the structure around the OFT is highly complex and varies markedly between individuals, directly transferring conventional deep-learning segmentation networks may lead to problems such as target loss.
Based on the above difficulties, further research is needed on how to measure wall shear force of the cardiac outflow tract in real time using a deep learning method.
Disclosure of Invention
In order to solve at least one of the technical problems existing in the prior art to a certain extent, the invention aims to provide a real-time measurement method for the wall shear force of a heart outflow tract based on a neural network.
The technical scheme adopted by the invention is as follows: a real-time measurement method of heart outflow tract wall shearing force based on neural network comprises the following steps:
Acquiring images with a built SD-OCT imaging system to obtain a structure map and a flow velocity map;
In the SD-OCT imaging system, a probe arm is mounted on a graduated index head, and the Doppler angle is recorded by changing the incidence angle of the probe beam through rotation of the index head;
Using the ALSegNet network to extract the flow region of the cardiac outflow tract in the structure map; and calculating the cardiac outflow tract wall shear force by using the laminar-flow-model flow velocity gradient and a three-dimensional rotation correction algorithm in combination with the extracted flow region.
The ALSegNet network comprises a CNN backbone network, an atrous spatial pyramid pooling-squeeze-and-excitation module (ASPP-SE), a decoder, a classification and box regression subnetwork, a non-maximum suppression module NMS and a probability-dependent attention module PDAM;
The input to the ALSegNet network is a structure map. The CNN backbone network serves as the encoder to extract features; the resulting feature map is input to the ASPP-SE module to obtain a multi-scale feature map, which is input to the decoder. The feature layers of the decoder are divided into three parts: the feature maps obtained from the first part of the feature layers are input into the classification and box regression subnetwork, and the resulting bounding boxes are input into the non-maximum suppression module to obtain the cardiac outflow tract bounding box; the feature maps obtained from the second part of the decoder feature layers are upsampled and input, together with the cardiac outflow tract bounding box, into the probability-dependent attention module; and the feature map obtained from the probability-dependent attention module is input into the third part of the decoder feature layers to output the extracted flow region.
The CNN backbone network comprises five encoding layers; the decoder comprises three upsampling residual blocks URB1-URB3, a deep feature layer, a 1×1 convolution layer and a sigmoid activation function; the deepest features obtained by the encoder are input into the ASPP-SE module, and the feature map output by the ASPP-SE module is concatenated in the channel dimension with the output L4 of the encoder's 4th feature layer to serve as the decoder fifth feature layer; the decoder fifth feature layer generates the decoder deep feature layer using a 3×3 convolution with stride 2; the decoder fifth feature layer is input to the first upsampling residual block URB1, the output features of URB1 are concatenated in the channel dimension with the output L3 of the encoder's 3rd feature layer, and the result is recorded as the decoder fourth feature layer; the decoder fourth feature layer is input to the second upsampling residual block URB2, the output features of URB2 are concatenated in the channel dimension with the output L2 of the encoder's 2nd feature layer, and the result is recorded as the decoder third feature layer; the decoder third feature layer is input to the third upsampling residual block URB3, the output features of URB3 are concatenated in the channel dimension with the output L1 of the encoder's 1st feature layer, and the result is recorded as the decoder second feature layer;
The decoder deep feature layer, the decoder fifth feature layer and the decoder fourth feature layer are input into the classification and box regression subnetwork; the outputs of the subnetwork are merged and input together into the non-maximum suppression module to generate the cardiac outflow tract bounding box;
The cardiac outflow tract bounding box serves as the first input branch of the probability-dependent attention module; the decoder third feature layer and the decoder second feature layer, after upsampling, serve as the second input branch. The two input branches are input simultaneously into the probability-dependent attention module, and the output features are recorded as the decoder first feature layer. The decoder first feature layer finally yields the extracted flow region after a 1×1 convolution and sigmoid activation.
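To make the above data flow easier to follow, a minimal sketch is given below. All module names, interfaces and return types here are assumptions for illustration only, not the implementation disclosed by the invention.

```python
def alsegnet_forward(structure_map, encoder, aspp_se, decoder, cls_box_subnet, nms, pdam, seg_head):
    skips = encoder(structure_map)                  # encoder feature layers L1..L4 + deepest features
    deep = aspp_se(skips[-1])                       # multi-scale deep features
    d_feats = decoder(deep, skips)                  # decoder feature layers, deep -> shallow
    boxes, scores = cls_box_subnet(d_feats["localization"])  # first part of decoder layers
    oft_box = nms(boxes, scores)                    # unique cardiac outflow tract bounding box
    attended = pdam(d_feats["shallow"], oft_box)    # second part (upsampled) + bounding box
    flow_region = seg_head(attended)                # 1x1 conv + sigmoid -> extracted flow region
    return flow_region, oft_box
```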
The first input branch of the probability-dependent attention module superimposes Gaussian functions to obtain a two-dimensional OFT probability distribution function; the distribution is based on the distance from a point to the center of the cardiac outflow tract bounding box, as follows:
F = Normalize(F_target + F_backward)
where ‖·‖ denotes the Euclidean distance; P_1 denotes a point inside the cardiac outflow tract bounding box; P_2 denotes a point in the background region outside the cardiac outflow tract bounding box; D denotes a vertex of the cardiac outflow tract bounding box and C its center point; α and β are constants that adjust the degree of attention; the total probability distribution function F normalizes F_target and F_backward, with α < β;
The second input branch of the probability-dependent attention module is split into two convolution branches: the first convolution branch uses a 1×1 convolution layer and is multiplied point by point with the two-dimensional OFT probability distribution, and the second convolution branch uses several 3×3 convolution layers to extract features. The feature maps obtained from the two convolution branches are concatenated in the channel dimension to form the output of the probability-dependent attention module.
The three upsampling residual blocks URB1-URB3 share the same structure: first, the input is upsampled by a factor of two and passed through two rounds of 3×3 convolution, batch normalization and ReLU activation to obtain the main feature layer; second, the upsampled input is passed through a 1×1 convolution layer serving as the residual connection and added point by point to the main feature layer to obtain the output.
The atrous spatial pyramid pooling-squeeze-and-excitation module ASPP-SE applies n parallel atrous convolution layers with dilation rate 6n to the input feature map; each atrous convolution layer includes batch normalization and ReLU activation. Features are extracted from the input feature map in parallel, the features extracted by each layer are concatenated and then integrated with a 1×1 convolution having the same number of channels, and the resulting feature map is recorded as the intermediate feature map. The intermediate feature map is globally average pooled and fed into two fully connected layers, the first using a ReLU activation and the second a sigmoid activation; the feature vector obtained after the two fully connected layers is combined with the intermediate feature map to re-assign channel weights.
The ALSegNet network performs cardiac outflow tract localization and pixel-level semantic segmentation; ALSegNet network training uses three different types of loss functions: the segmentation loss L_seg, the regression loss L_box and the classification loss L_cls;
The segmentation loss is a combination of the Dice loss L_Dice and the weighted binary cross entropy loss L_WBCE, and acts on the segmentation output of the network;
The classification loss uses the focal loss to judge the class of the generated OFT bounding box; the regression loss uses the smooth L1 loss to assess localization accuracy; the classification loss and regression loss act on the cardiac outflow tract bounding box obtained by non-maximum suppression;
L_loss = ζ_1·L_cls + ζ_2·L_box + ζ_3·L_seg
L_seg = L_Dice + L_WBCE
The adjustable coefficients ζ_1, ζ_2 and ζ_3 are used to control the degree of attention the ALSegNet network pays to the different tasks.
The cardiac outflow tract wall shear force is calculated as follows:
Projecting the flow velocity distribution in the extracted flow region onto a plane perpendicular to the flow direction to obtain a projection image, the rotation angle being the Doppler angle θ recorded with the graduated index head when the flow velocity map is acquired;
Performing the OFT WSS calculation on the projection image: according to the absolute flow velocity gradient of the edge region, the geometric center of the flow region is taken as the maximum-flow-velocity point; an annular region extending from the edge a certain distance toward the maximum-flow-velocity point is intercepted; the flow velocity points within the annular region are discretized and traversed toward the maximum-flow-velocity point to calculate the velocity gradient Δv/Δr, where Δr, the width of the intercepted annular region, is obtained along the line connecting the edge point to the maximum-flow-velocity point;
finally, the calculated WSS is projected back into the original OCT scan plane.
The beneficial effects of the invention are as follows:
1. The method for measuring the wall shear force of the cardiac outflow tract in real time provided by the invention can obtain the cardiac outflow tract wall shear force in real time and at high speed by combining the structure map and the flow velocity map.
2. The ALSegNet network designed by the invention has low complexity, can be deployed on mobile devices, and has a high inference speed; its parallel subnetworks and probability-dependent attention module ensure, to a certain extent, that the OFT blood-flow-region target is not lost, i.e. the network has high inference accuracy.
Drawings
FIG. 1 is a flow chart of a real-time measurement of wall shear force of a cardiac outflow tract based on a neural network in an example of the invention.
FIG. 2 is a schematic diagram of the structure of SD-OCT in the example of the present invention.
FIG. 3 (a) is a schematic diagram of a ALSegNet network structure in an example of the present invention;
FIG. 3 (b) is a schematic diagram of the upsampling residual block (URB);
FIG. 3 (c) is a schematic diagram of the atrous spatial pyramid pooling-squeeze-and-excitation module (ASPP-SE).
FIG. 4 is a schematic diagram of the probability-dependent attention module (PDAM) according to an embodiment of the present invention.
Fig. 5 is a schematic diagram illustrating a ALSegNet network training process in an example of the present invention.
FIG. 6 is a schematic diagram of a shear force calculation process in an example of the invention.
Fig. 7 (a) -7 (h) are 2D views of WSS test results at different phases of the beating cycle using chick embryo hearts in examples of the present invention.
Figure 8 shows 3D views of WSS test results at different phases of the beating cycle using chick embryo hearts in an example of the present invention, where fig. 8 (a) is a structure map of an M-scan section, fig. 8 (b) is the absolute flow velocity map corresponding to fig. 8 (a), fig. 8 (c) is a 3D view of the OFT blood flow region over time, and fig. 8 (d) is a 3D view of the calculated WSS over time.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention. The step numbers in the following embodiments are set for convenience of illustration only, and the order between the steps is not limited in any way, and the execution order of the steps in the embodiments may be adaptively adjusted according to the understanding of those skilled in the art.
In the description of the present invention, it should be understood that references to orientation descriptions such as upper, lower, front, rear, left, right, etc. are based on the orientation or positional relationship shown in the drawings, are merely for convenience of description of the present invention and to simplify the description, and do not indicate or imply that the apparatus or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus should not be construed as limiting the present invention.
In the description of the present invention, "several" means one or more, "a plurality of" means two or more, and greater than, less than, exceeding, etc. are understood to exclude the stated number, while above, below, within, etc. are understood to include the stated number. The terms "first" and "second" are only used to distinguish technical features and should not be construed as indicating or implying relative importance, implicitly indicating the number of the indicated technical features, or implicitly indicating the precedence of the indicated technical features.
In the description of the present invention, unless explicitly defined otherwise, terms such as arrangement, installation, connection, etc. should be construed broadly and the specific meaning of the terms in the present invention can be reasonably determined by a person skilled in the art in combination with the specific contents of the technical scheme.
Congenital heart disease is the most common birth defect in newborns and children, underscoring the importance of heart development. In the early stages of development, the biomechanical environment, in particular WSS, plays a crucial role in changes of heart morphology. The outflow tract (OFT) is an important part of the embryonic heart, and a significant portion of congenital heart defects originate from the OFT. However, real-time measurement of WSS in the OFT of animal models remains challenging.
The method acquires images using SD-OCT, extracts the blood flow region using ALSegNet, and finally measures the OFT wall shear force in real time by combining the flow velocity map. The method comprises the following steps:
Step 1: building an SD-OCT imaging system: the broadband light source emits broadband light which is coupled into the optical fiber-based michelson interferometer through the optical fiber circulator. An optical interference signal between the backscattering of the light from the probe and the reference arm is transmitted to the spectrometer. Interference spectra were imaged into a line scan camera (CCD) by a spectrometer and the doppler angle θ was quantified using a graduated divider head. The process eventually obtains a map of the heart OFT and a map of flow rates at several locations.
The broadband light source is a broadband superluminescent diode (SLD) with a center wavelength of 1310 nm and a bandwidth of 52 nm, providing an axial resolution of about 14 μm for a sample in air. The probe comprises a collimator, an X-Y 2D galvanometer scanning system and an objective lens (focal length 50 mm, lateral resolution 16 μm); the spectrometer consists of a collimating lens (f = 5 mm), a 1145 line/mm transmission grating and a Fourier lens (f = 100 mm); the line-scan camera is a 1024-pixel infrared InGaAs camera.
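As a quick consistency check (not part of the original disclosure), the stated axial resolution of about 14 μm follows from the standard free-space coherence-length estimate for the given SLD parameters; a minimal sketch:

```python
# Sanity check of the ~14 um axial resolution from the SLD parameters, assuming the
# standard FD-OCT coherence-length estimate dz = (2*ln2/pi) * lambda0^2 / d_lambda (in air).
import math

lambda0 = 1310e-9   # center wavelength [m]
d_lambda = 52e-9    # bandwidth [m]

dz = (2 * math.log(2) / math.pi) * lambda0**2 / d_lambda
print(f"axial resolution = {dz * 1e6:.1f} um")  # ~14.5 um, consistent with the stated ~14 um
```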
Step 2: a ALSegNet network is constructed, the ALSegNet network including a CNN backbone network, decoders, a hole space pyramid pooling-extrusion and excitation module (ASPP-SE), parallel subnetworks, non-maximum suppression (NMS), and Probability Dependent Attention Module (PDAM).
The decoder in turn comprises three identical upsampling residual blocks (URBs), a deep feature layer, a 1×1 convolution layer and a sigmoid activation function.
The URB includes an upsampling step, a convolution layer and a residual connection layer, where the convolution layer includes a 3×3 convolution, a ReLU activation function and batch normalization, and the residual connection layer includes a 1×1 convolution, a ReLU activation function and batch normalization.
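A minimal PyTorch sketch of a URB as described above is given below; the two convolution rounds in the main path follow the later description, while the channel sizes and other defaults are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class URB(nn.Module):
    """Upsampling residual block sketch: bilinear 2x upsampling, a 3x3 conv main path
    and a 1x1 conv residual connection added point-wise (not the patented code)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.main = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )
        self.residual = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
        return self.main(x) + self.residual(x)
```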
The ASPP-SE comprises: n parallel atrous convolution layers with dilation rate 6n applied to the input; features are extracted from the input feature map in parallel, and the features extracted by each layer are concatenated and then integrated with a 1×1 convolution having the same number of channels; the integrated feature map is globally average pooled and passed through two fully connected layers (FC1 and FC2), with ReLU as the activation of FC1 and sigmoid as the activation of FC2, and the result is combined with the integrated feature map to re-assign channel weights. Each atrous convolution layer includes batch normalization and ReLU activation.
The parallel subnetwork (classification and box regression subnetwork) may use, but is not limited to, several 3×3 convolution layers and 1×1 convolution layers.
The PDAM includes: the input 1 is an OFT bounding box after NMS, and a two-dimensional probability distribution function is defined by using the input 1 and is based on mixed two-dimensional Gaussian distribution; input 2 is the result of bilinear interpolation up-sampling of the last two shallow decoder layers, input 2 is split into two branches, for branch 1, a1×1 convolution layer is used and is multiplied by a defined two-dimensional probability distribution point by point, for branch 2, a plurality of 3×3 convolution layers are used to extract features, and feature graphs of the two branches are output after passing through Concatenate; wherein the 3 x 3 convolution layer includes 3 x 3 convolutions, batch normalization, and Relu function activations, and the 1 x 1 convolution layer includes 1 x 1 convolutions, batch normalization, and Relu function activations.
The CNN backbone network in this example may use, but is not limited to ResNet.
Step 3: a loss function for network training is set, and the loss function has three parts: segmentation loss, regression loss, and classification loss. Wherein the partition loss uses the sum of the Dice loss and the weighted two-class cross entropy loss, the classification loss uses the focal loss, and the regression loss uses the smooth L1 loss.
Step 4: preprocessing the structure diagram acquired in the step 1, wherein the size of the structure diagram is uniform, and the structure diagram can be obtained by using, but not limited to, images with the size of 512 multiplied by 300, manually marking the structure diagram at the pixel level by using marking tools such as labelme, wherein the marked images are used as segmentation labels and are binarized images; and drawing a marking frame by taking the upper, lower, left and right endpoints of the marking area of the segmentation label, wherein the coordinates of a pair of diagonal points are used as position labels.
Step 5: pairing and packaging the original image, the segmentation labels and the position labels into a dataset, and performing iterative training until the network converges to obtain a trained ALSegNet network.
Step 6: loading trained ALSegNet network weights, inputting a heart OFT structure diagram, and extracting a blood flow region in real time.
Step 7: and (6) combining the blood flow area extracted in the step (6) with a heart OFT flow velocity map, calculating the shear force of the wall of a heart outflow tract by using a laminar flow model flow velocity gradient and a three-dimensional rotation correction algorithm, namely rotating a sample on an X-Y plane until the X scanning direction of OCT is perpendicular to the flow direction, wherein the rotation angle is the Doppler angle theta recorded when the flow velocity map is obtained in the step (1).
As shown in fig. 1, the embodiment provides a method for measuring the shear force of the wall of a cardiac outflow tract in real time based on ALSegNet, which specifically includes the following steps:
S1, build the SD-OCT imaging system, whose structure is shown in FIG. 2: the broadband light source emits broadband light which is coupled into the fiber-based Michelson interferometer through an optical fiber circulator. The optical interference signal between the light backscattered from the sample (probe arm) and the reference arm is transmitted to the spectrometer. The interference spectrum is imaged onto a line-scan camera (CCD) by the spectrometer.
The broadband light source is a broadband superluminescent diode (SLD) with a center wavelength of 1310 nm and a bandwidth of 52 nm, providing an axial resolution of about 14 μm in air; the probe comprises a collimator, an X-Y 2D galvanometer scanning system and an objective lens (focal length 50 mm, lateral resolution 16 μm); the spectrometer consists of a collimating lens (f = 5 mm), a 1145 line/mm transmission grating and a Fourier lens (f = 100 mm); the line-scan camera is a 1024-pixel infrared InGaAs camera.
In SD-OCT, an entire axial line is acquired in parallel: the interference spectrum encodes the backscattering information of the sample, with different frequencies corresponding to different depths. Thus, by Fourier transforming the interference spectrum in wavenumber space, the backscattering distribution at different depths of the sample can be obtained:
I(z) = A ⊗ Γ(z) ⊗ [a(z) + a*(−z)] + autocorrelation terms
where A is the Fourier transform of the spectral light source S(k) and ⊗ denotes convolution; Γ(z) is the coherence-function curve, whose half-width is the coherence length and which is located near z = 0; a(z) is the backscattering amplitude distribution at different depths of the sample, with a(z) and a*(−z) symmetric about z = 0; and the autocorrelation terms are the Fourier transforms of the mutual coherence between different depths of the sample. The fast Fourier transform of the interference spectrum is complex-valued and contains phase information. When erythrocytes in a blood vessel flow, a phase change is introduced. By conjugate multiplication between two sequential A-lines at each pixel, the phase change Δφ can be extracted:
Δφ = 4π·n·v·τ·cos θ / λ
where λ is the center wavelength of the light source; n is the refractive index of the tissue; τ is the time interval between two consecutive A-scans; θ is the Doppler angle, i.e. the angle between the probe beam and the flow direction; and v is the absolute velocity. Thus, the absolute velocity can be obtained as:
v = λ·Δφ / (4π·n·τ·cos θ)
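For illustration, a small sketch of recovering the absolute velocity from the measured phase change using the relation above; the line rate, refractive index and phase value are assumed numbers, not values from the disclosure.

```python
import numpy as np

lambda0 = 1310e-9        # center wavelength [m]
n = 1.35                 # assumed tissue refractive index
tau = 1.0 / 20e3         # assumed time between successive A-lines (20 kHz line rate) [s]
theta = np.deg2rad(80.0) # example Doppler angle recorded with the graduated index head
delta_phi = 0.8          # example phase change between two A-lines [rad]

# v = lambda * delta_phi / (4 * pi * n * tau * cos(theta))
v = lambda0 * delta_phi / (4 * np.pi * n * tau * np.cos(theta))
print(f"absolute velocity = {v * 1e3:.2f} mm/s")
```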
For accurate measurement of the Doppler angle θ, the present example mounts the probe arm on a graduated index head. Thus, the angle of incidence of the probe beam can be changed by index head rotation.
This process finally yields structure maps of the heart OFT and flow velocity maps at several positions.
S2, build the ALSegNet network. As shown in FIG. 3, the ALSegNet network includes a CNN backbone network serving as the encoder, a decoder, an atrous spatial pyramid pooling-squeeze-and-excitation module (ASPP-SE), multiple parallel subnetworks (classification and box regression) for OFT localization, non-maximum suppression (NMS) and a probability-dependent attention module (PDAM).
S2.1 To capture features at different depths, this example may use, but is not limited to, an 18-layer ResNet as the CNN backbone network; the decoder consists of three upsampling residual blocks (URBs). Each URB includes an upsampling step, a convolution layer and a residual connection layer, where the convolution layer includes a 3×3 convolution, a ReLU activation function and batch normalization, and the residual connection layer includes a 1×1 convolution, a ReLU activation function and batch normalization.
In addition, skip connections are added from the 1st to 4th feature layers (L1-L4) of the CNN backbone network to the decoder, making full use of the semantic features extracted by the backbone. The structure of each decoding unit is shown in fig. 3 (b). The module upsamples the input feature map by bilinear interpolation and extracts features using two 3×3 convolutions, with one 1×1 convolution serving as the residual connection. The decoder of the network generates several feature layers at different depths.
S2.2, Localizing the OFT in the OCT image requires global feature information of the target. Because the first two layers of the decoder both contain global feature information, this example uses multiple parallel subnetworks (classification and box regression subnetworks) for OFT localization. The parallel subnetworks may use, but are not limited to, several 3×3 convolution layers and 1×1 convolution layers. To generate more global information, this example downsamples the deep layer of the decoder to generate an additional deep feature layer, which together serve as inputs to the parallel subnetworks. In each layer of the parallel subnetworks, this example sets the aspect ratios of the prior boxes to [(1:0.5), (1:1), (1:2)], using the three ratios (at three scales each) to generate 9 bounding boxes for each feature point. Since the parallel subnetworks output multiple bounding boxes, non-maximum suppression (NMS) is further performed on their outputs to produce a unique bounding box, thereby achieving OFT localization, as sketched below.
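A minimal sketch of this last localization step, reducing the scored anchor boxes from the parallel subnetworks to a single OFT bounding box; torchvision's NMS is used here as a stand-in, and the shapes and threshold are assumptions.

```python
import torch
from torchvision.ops import nms

def select_oft_box(boxes: torch.Tensor, scores: torch.Tensor, iou_thr: float = 0.5) -> torch.Tensor:
    """boxes: (N, 4) as (x1, y1, x2, y2); scores: (N,) OFT-class scores."""
    keep = nms(boxes, scores, iou_thr)     # indices of boxes surviving NMS
    best = keep[scores[keep].argmax()]     # keep only the highest-scoring survivor
    return boxes[best]                     # the unique OFT bounding box

# usage sketch
boxes = torch.tensor([[100., 80., 220., 180.], [105., 82., 225., 178.], [300., 40., 350., 90.]])
scores = torch.tensor([0.91, 0.88, 0.12])
print(select_oft_box(boxes, scores))
```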
S2.3, Because the parallel subnetworks only use global features, their output is an approximate OFT localization. To improve the extraction accuracy of the OFT bounding box, this example designs an atrous spatial pyramid pooling-squeeze-and-excitation module (ASPP-SE) at the junction of the encoder and the decoder. The detailed architecture of the ASPP-SE is shown in fig. 3 (c). The module extracts and integrates features of different scales from the deep feature map generated by the CNN backbone network, and introduces an attention mechanism in the channel dimension. The ASPP-SE comprises: n parallel atrous convolution layers with dilation rate 6n applied to the input; features are extracted from the input feature map in parallel, and the features extracted by each layer are concatenated and then integrated with a 1×1 convolution having the same number of channels; the integrated feature map is globally average pooled to generate global information; two fully connected layers (FC1 and FC2) follow, with ReLU as the activation of FC1 and sigmoid as the activation of FC2, which helps the network learn nonlinear interactions between channels and capture their correlations; the weights are then re-assigned in combination with the integrated feature map. Each atrous convolution layer includes batch normalization and ReLU activation. The integration of the ASPP-SE module not only reduces the localization deviation of the parallel subnetworks but also greatly improves the overall information-learning efficiency of the network.
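A minimal PyTorch sketch of an ASPP-SE module following the description above (n parallel atrous convolutions with dilation 6, 12, ..., 6n, concatenation and 1×1 integration, then squeeze-and-excitation channel reweighting); the channel counts and reduction ratio are assumptions.

```python
import torch
import torch.nn as nn

class ASPP_SE(nn.Module):
    def __init__(self, in_ch, out_ch, n_branches=3, se_reduction=8):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=6 * (i + 1), dilation=6 * (i + 1)),
                nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
            for i in range(n_branches)
        ])
        self.fuse = nn.Conv2d(out_ch * n_branches, out_ch, 1)   # 1x1 feature integration
        self.se = nn.Sequential(                                 # squeeze-and-excitation
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(out_ch, out_ch // se_reduction), nn.ReLU(inplace=True),
            nn.Linear(out_ch // se_reduction, out_ch), nn.Sigmoid(),
        )

    def forward(self, x):
        feats = self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
        w = self.se(feats).unsqueeze(-1).unsqueeze(-1)           # per-channel weights
        return feats * w                                         # re-assign channel weights
```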
S2.4, Following the above flow, the network approximately localizes the OFT in the OCT image, but there is no guarantee that the entire blood flow region lies inside the obtained OFT bounding box. Since performing pixel-level segmentation directly within the OFT bounding box results in relatively low accuracy, this example designs a probability-dependent attention module (PDAM) to guide high-accuracy segmentation (FIG. 4).
The PDAM includes: the first input branch is the OFT bounding box output by the parallel subnetworks after NMS; a two-dimensional OFT probability distribution function is constructed from this branch to constrain the segmentation of the flow region. The distribution principle is as follows:
F = Normalize(F_target + F_backward)
where ‖·‖ denotes the Euclidean distance; P_1 is a point inside the OFT bounding box; P_2 is a point in the background region (outside the bounding box); D is a vertex of the OFT bounding box and C its center point; α and β are constants that adjust the degree of attention. The total probability distribution function F normalizes F_target and F_backward, which means that points outside the bounding box can still be identified. Typically, background points receive less attention than points inside the bounding box (α < β). This process relaxes the strong constraint of the OFT bounding box on pixel-level semantic segmentation; in other words, it effectively expands the bounding box to a certain extent.
The second input branch is the result of bilinear-interpolation upsampling of the last two shallow decoder layers. It is split into two branches: the first branch uses a 1×1 convolution layer and is multiplied point by point with the defined two-dimensional probability distribution, and the second branch uses several 3×3 convolution layers to extract features; the feature maps of the two branches are concatenated and output. Each 3×3 convolution layer includes a 3×3 convolution, batch normalization and ReLU activation, and each 1×1 convolution layer includes a 1×1 convolution, batch normalization and ReLU activation. Two branches are used as a compensation mechanism to avoid over-constraint by the OFT bounding box: the second branch uses features from the shallow decoder layers to compensate for OFT localization bias in the first branch. The output of the PDAM is passed through a 1×1 convolution and a sigmoid.
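A minimal PyTorch sketch of the PDAM following the description above. The exact functional forms of F_target and F_backward are not reproduced in the text, so the Gaussian-like probability map below (and the α, β values and channel sizes) is an illustrative assumption.

```python
import torch
import torch.nn as nn

def oft_probability_map(box, h, w, alpha=1.0, beta=4.0):
    """box = (x1, y1, x2, y2). Gaussian-like attention centred on the box centre C and scaled
    by the distance to the vertex D; points outside the box get less attention (alpha < beta)."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    dx, dy = x2 - cx, y2 - cy
    yy, xx = torch.meshgrid(torch.arange(h).float(), torch.arange(w).float(), indexing="ij")
    d2 = ((xx - cx) ** 2 + (yy - cy) ** 2) / (dx ** 2 + dy ** 2 + 1e-6)
    inside = (xx >= x1) & (xx <= x2) & (yy >= y1) & (yy <= y2)
    f = torch.where(inside, torch.exp(-alpha * d2), torch.exp(-beta * d2))
    return f / f.max()                                  # Normalize(F_target + F_backward)

class PDAM(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branch1 = nn.Sequential(nn.Conv2d(in_ch, out_ch, 1),
                                     nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        self.branch2 = nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                                     nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
                                     nn.Conv2d(out_ch, out_ch, 3, padding=1),
                                     nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, feat, prob_map):
        b1 = self.branch1(feat) * prob_map[None, None]  # point-wise probability modulation
        b2 = self.branch2(feat)                         # compensation branch (shallow features)
        return torch.cat([b1, b2], dim=1)               # channel-wise concatenation
```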
S2.5, Set the loss function for network training. Since the ALSegNet network performs both OFT localization and pixel-level semantic segmentation, the network loss function L_loss consists of three parts (segmentation loss, regression loss and classification loss):
L_loss = ζ_1·L_cls + ζ_2·L_box + ζ_3·L_seg
L_seg = L_Dice + L_WBCE
where the classification loss (L_cls) uses the focal loss to determine the class (true or false target) of the generated OFT bounding box, and the regression loss (L_box) uses the smooth L1 loss to assess localization accuracy. The segmentation loss (L_seg) is a combination of the Dice loss (L_Dice) and the weighted binary cross entropy loss (L_WBCE). ζ_1, ζ_2 and ζ_3 are adjustable loss coefficients. t_i denotes the pixel value of the ground-truth image, y_i denotes the pixel value of the predicted image, and σ is a weight parameter that adjusts the attention paid to the OFT region.
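A minimal PyTorch sketch of the combined objective described above; the focal, smooth-L1, Dice and weighted-BCE terms are standard forms, and the weighting constants (ζ, σ, focal γ) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def dice_loss(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def weighted_bce(pred, target, sigma=3.0, eps=1e-6):
    # sigma up-weights the OFT (foreground) pixels
    return -(sigma * target * torch.log(pred + eps)
             + (1 - target) * torch.log(1 - pred + eps)).mean()

def alsegnet_loss(cls_logits, cls_targets, box_pred, box_targets, seg_pred, seg_targets,
                  zeta=(1.0, 1.0, 1.0), gamma=2.0):
    p = torch.sigmoid(cls_logits)
    pt = torch.where(cls_targets == 1, p, 1 - p)
    l_cls = (-(1 - pt) ** gamma * torch.log(pt + 1e-6)).mean()   # focal loss
    l_box = F.smooth_l1_loss(box_pred, box_targets)              # smooth L1 loss
    l_seg = dice_loss(seg_pred, seg_targets) + weighted_bce(seg_pred, seg_targets)
    return zeta[0] * l_cls + zeta[1] * l_box + zeta[2] * l_seg   # L_loss
```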
S3, Preprocess the structure maps acquired in S1: the structure maps are unified in size, for example, but not limited to, 512×320 images; the structure maps are manually annotated at the pixel level using annotation tools such as labelme, and the annotated, binarized images serve as segmentation labels; a bounding box is drawn from the top, bottom, left and right endpoints of the annotated region of each segmentation label, and the coordinates of a pair of diagonal corners serve as the position label. The original images, segmentation labels and position labels are paired and packaged into a dataset, and iterative training is performed until the network converges to obtain the trained ALSegNet network; the training flow is shown in fig. 5. The trained ALSegNet network weights are loaded, the heart OFT structure map is input, and the blood flow region is extracted in real time.
S4, To achieve quantitative measurement of the wall shear stress of the chick embryo heart OFT, blood is generally considered a Newtonian fluid with a small Reynolds number, so the blood flow in the OFT can be treated as laminar. The viscous blood flow suspends erythrocytes, leukocytes, platelets and other particles (37 °C, blood viscosity coefficient μ = 2.5-4×10^-3 Pa·s) and contacts the endocardium of the cardiovascular wall, creating a frictional force, i.e. the wall shear stress. To better understand the effect of embryo morphology on hemodynamics, this example quantifies the WSS, whose definition follows the general law of fluids:
WSS = μ·(dv/dr)
where dv/dr is the absolute flow velocity gradient and μ is the blood viscosity. Theoretically, the WSS calculation is performed on a cross section perpendicular to the flow direction (the red plane with coordinates (x, y, z) in fig. 6 (a)). However, Doppler OCT only provides a flow velocity image in a plane that is not perpendicular to the blood flow (Doppler angle not equal to π/2, the blue plane with coordinates (x', y', z') in fig. 6 (a)). Since OCT measures the flow velocity component parallel to the probe beam, calculating the WSS requires projecting the flow velocity distribution in the extracted flow region (blue plane) onto the plane perpendicular to the flow direction (red plane). However, the coordinate systems of the two planes are independent, and the projection is difficult to realize directly. To simplify the calculation, this example rotates the sample in the X-Y plane until the X scanning direction of the OCT is perpendicular to the flow direction, i.e. the x-axis of the red plane and the x'-axis of the blue plane point in the same direction. The projection principle uses the recorded Doppler angle, as follows:
where θ is the Doppler angle recorded when the flow velocity map was acquired in S1. Fig. 6 (d) shows the result after projection. Further, the Doppler angle is used to correct the flow velocity (velocity component) acquired by OCT to the absolute velocity, and the OFT WSS calculation is performed on the projection image (fig. 6 (e)). Finally, the calculated WSS is projected back into the original OCT scan plane (fig. 6 (f)).
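For illustration, a minimal NumPy sketch of evaluating WSS = μ·dv/dr on the projected absolute-velocity map along lines from the wall (edge of the segmented flow region) toward the maximum-velocity point; the ring width, pixel size and viscosity value are assumed numbers.

```python
import numpy as np

def wall_shear_stress(v_abs, mask, mu=3.0e-3, ring_px=5, px_size=8e-6):
    """v_abs: absolute velocity map [m/s]; mask: binary OFT flow region;
    mu: blood viscosity [Pa*s]; ring_px: ring width [pixels]; px_size: pixel size [m]."""
    mask = mask.astype(bool)
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                    # geometric centre ~ maximum-velocity point
    pad = np.pad(mask, 1)                            # edge pixels: mask pixels with a background neighbour
    neigh = pad[:-2, 1:-1] & pad[2:, 1:-1] & pad[1:-1, :-2] & pad[1:-1, 2:]
    edge = mask & ~neigh
    wss = np.zeros_like(v_abs, dtype=float)
    for ey, ex in zip(*np.nonzero(edge)):
        d = np.hypot(cy - ey, cx - ex)
        if d < 1:
            continue
        uy, ux = (cy - ey) / d, (cx - ex) / d        # unit step toward the centre
        iy, ix = int(round(ey + uy * ring_px)), int(round(ex + ux * ring_px))
        dv = v_abs[iy, ix] - v_abs[ey, ex]           # velocity change across the ring
        wss[ey, ex] = mu * dv / (ring_px * px_size)  # WSS = mu * dv/dr
    return wss
```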
The embodiment of the neural network-based real-time measurement method for the wall shear force of the cardiac outflow tract can be applied to any device with data processing capability, and the device with data processing capability can be a device or an apparatus such as a computer. The apparatus embodiments may be implemented by software, or may be implemented by hardware or a combination of hardware and software. For the device embodiments, reference is made to the description of the method embodiments for the relevant points, since they essentially correspond to the method embodiments. The apparatus embodiments described above are merely illustrative, wherein the elements illustrated as separate elements may or may not be physically separate, and the elements shown as elements may or may not be physical elements, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purposes of the present invention. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
Supplementary experiment: Since chick embryos at Hamburger-Hamilton (HH) stage 18 are relatively easy to obtain and develop quickly compared with most other animal models, this example used HH stage-18 chick embryo hearts for additional experiments with the method. Preparation of the chick embryos followed standard procedures: fertilized White Leghorn eggs were incubated in a rotating incubator at 38 degrees Celsius and 85% humidity for about 3 days. After the eggshell was opened at the blunt (air-cell) end, a small section of the inner shell membrane was carefully removed. The eggs were then placed in a custom acrylic box. During image acquisition, the temperature in the acrylic box was maintained at 37.5 degrees Celsius by a heating blanket. Figure 7 shows 2D views of the WSS test results at different phases of the beating cycle using chick embryo hearts in an example of the present invention. Figure 8 shows 3D views of the WSS test results at different phases of the beating cycle using chick embryo hearts in an example of the present invention, where (a) is a structure map of an M-scan section, (b) is the absolute flow velocity map corresponding to (a), (c) is a 3D view of the OFT blood flow region over time, and (d) is a 3D view of the WSS over time.

Claims (8)

1. The real-time measurement method of the heart outflow tract wall shearing force based on the neural network is characterized by comprising the following steps of:
Acquiring images with a built SD-OCT imaging system to obtain a structure map and a flow velocity map;
In the SD-OCT imaging system, a probe arm is mounted on a graduated index head, and the Doppler angle is recorded by changing the incidence angle of the probe beam through rotation of the index head;
Using the ALSegNet network to extract the flow region of the cardiac outflow tract in the structure map; and calculating the cardiac outflow tract wall shear force by using the laminar-flow-model flow velocity gradient and a three-dimensional rotation correction algorithm in combination with the extracted flow region.
2. The method for real-time measurement of cardiac outflow tract wall shear force based on a neural network according to claim 1, wherein the ALSegNet network comprises a CNN backbone network, an atrous spatial pyramid pooling-squeeze-and-excitation module (ASPP-SE), a decoder, a classification and box regression subnetwork, a non-maximum suppression module NMS and a probability-dependent attention module PDAM;
The input to the ALSegNet network is a structure map. The CNN backbone network serves as the encoder to extract features; the resulting feature map is input to the ASPP-SE module to obtain a multi-scale feature map, which is input to the decoder. The feature layers of the decoder are divided into three parts: the feature maps obtained from the first part of the feature layers are input into the classification and box regression subnetwork, and the resulting bounding boxes are input into the non-maximum suppression module to obtain the cardiac outflow tract bounding box; the feature maps obtained from the second part of the decoder feature layers are upsampled and input, together with the cardiac outflow tract bounding box, into the probability-dependent attention module; and the feature map obtained from the probability-dependent attention module is input into the third part of the decoder feature layers to output the extracted flow region.
3. The method for real-time measurement of cardiac outflow tract wall shear force based on a neural network according to claim 2, wherein the CNN backbone network comprises five encoding layers; the decoder comprises three upsampling residual blocks URB1-URB3, a deep feature layer, a 1×1 convolution layer and a sigmoid activation function; the deepest features obtained by the encoder are input into the ASPP-SE module, and the feature map output by the ASPP-SE module is concatenated in the channel dimension with the output L4 of the encoder's 4th feature layer to serve as the decoder fifth feature layer; the decoder fifth feature layer generates the decoder deep feature layer using a 3×3 convolution with stride 2; the decoder fifth feature layer is input to the first upsampling residual block URB1, the output features of URB1 are concatenated in the channel dimension with the output L3 of the encoder's 3rd feature layer, and the result is recorded as the decoder fourth feature layer; the decoder fourth feature layer is input to the second upsampling residual block URB2, the output features of URB2 are concatenated in the channel dimension with the output L2 of the encoder's 2nd feature layer, and the result is recorded as the decoder third feature layer; the decoder third feature layer is input to the third upsampling residual block URB3, the output features of URB3 are concatenated in the channel dimension with the output L1 of the encoder's 1st feature layer, and the result is recorded as the decoder second feature layer;
The decoder deep feature layer, the decoder fifth feature layer and the decoder fourth feature layer are input into the classification and box regression subnetwork; the outputs of the subnetwork are merged and input together into the non-maximum suppression module to generate the cardiac outflow tract bounding box;
The cardiac outflow tract bounding box serves as the first input branch of the probability-dependent attention module; the decoder third feature layer and the decoder second feature layer, after upsampling, serve as the second input branch. The two input branches are input simultaneously into the probability-dependent attention module, and the output features are recorded as the decoder first feature layer. The decoder first feature layer finally yields the extracted flow region after a 1×1 convolution and sigmoid activation.
4. The method for real-time measurement of cardiac outflow tract wall shear force based on a neural network according to claim 3, wherein the first input branch of the probability-dependent attention module superimposes Gaussian functions to obtain a two-dimensional OFT probability distribution function; the distribution is based on the distance from a point to the center of the cardiac outflow tract bounding box, as follows:
F = Normalize(F_target + F_backward)
where ‖·‖ denotes the Euclidean distance; P_1 denotes a point inside the cardiac outflow tract bounding box; P_2 denotes a point in the background region outside the cardiac outflow tract bounding box; D denotes a vertex of the cardiac outflow tract bounding box and C its center point; α and β are constants that adjust the degree of attention; the total probability distribution function F normalizes F_target and F_backward, with α < β;
The second input branch of the probability-dependent attention module is split into two convolution branches: the first convolution branch uses a 1×1 convolution layer and is multiplied point by point with the two-dimensional OFT probability distribution, and the second convolution branch uses several 3×3 convolution layers to extract features. The feature maps obtained from the two convolution branches are concatenated in the channel dimension to form the output of the probability-dependent attention module.
5. The method for real-time measurement of cardiac outflow tract wall shear force based on a neural network according to claim 4, wherein the three upsampling residual blocks URB1-URB3 share the same structure: first, the input is upsampled by a factor of two and passed through two rounds of 3×3 convolution, batch normalization and ReLU activation to obtain the main feature layer; second, the upsampled input is passed through a 1×1 convolution layer serving as the residual connection and added point by point to the main feature layer to obtain the output.
6. The method for real-time measurement of cardiac outflow tract wall shear force based on a neural network according to claim 5, wherein the atrous spatial pyramid pooling-squeeze-and-excitation module ASPP-SE applies n parallel atrous convolution layers with dilation rate 6n to the input feature map; each atrous convolution layer includes batch normalization and ReLU activation; features are extracted from the input feature map in parallel, the features extracted by each layer are concatenated and then integrated with a 1×1 convolution having the same number of channels, and the resulting feature map is recorded as the intermediate feature map; the intermediate feature map is globally average pooled and fed into two fully connected layers, the first using a ReLU activation and the second a sigmoid activation; the feature vector obtained after the two fully connected layers is combined with the intermediate feature map to re-assign channel weights.
7. The method of claim 6, wherein the ALSegNet network performs cardiac outflow tract localization and pixel-level semantic segmentation, and ALSegNet network training uses three different types of loss functions: the segmentation loss L_seg, the regression loss L_box and the classification loss L_cls;
The segmentation loss is a combination of the Dice loss L_Dice and the weighted binary cross entropy loss L_WBCE, and acts on the segmentation output of the network;
The classification loss uses the focal loss to judge the class of the generated OFT bounding box; the regression loss uses the smooth L1 loss to assess localization accuracy; the classification loss and regression loss act on the cardiac outflow tract bounding box obtained by non-maximum suppression;
L_loss = ζ_1·L_cls + ζ_2·L_box + ζ_3·L_seg
L_seg = L_Dice + L_WBCE
The adjustable coefficients ζ_1, ζ_2 and ζ_3 are used to control the degree of attention the ALSegNet network pays to the different tasks.
8. The method for real-time measurement of cardiac outflow tract wall shear force based on a neural network according to any one of claims 1-7, wherein the cardiac outflow tract wall shear force is calculated as follows:
Projecting the flow velocity distribution in the extracted flow region onto a plane perpendicular to the flow direction to obtain a projection image, the rotation angle being the Doppler angle θ recorded with the graduated index head when the flow velocity map is acquired;
Performing the OFT WSS calculation on the projection image: according to the absolute flow velocity gradient of the edge region, the geometric center of the flow region is taken as the maximum-flow-velocity point; an annular region extending from the edge a certain distance toward the maximum-flow-velocity point is intercepted; the flow velocity points within the annular region are discretized and traversed toward the maximum-flow-velocity point to calculate the velocity gradient Δv/Δr, where Δr, the width of the intercepted annular region, is obtained along the line connecting the edge point to the maximum-flow-velocity point;
finally, the calculated WSS is projected back into the original OCT scan plane.
CN202410133372.5A 2024-01-31 2024-01-31 Real-time measurement method for heart outflow tract wall shearing force based on neural network Pending CN117952939A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410133372.5A CN117952939A (en) 2024-01-31 2024-01-31 Real-time measurement method for heart outflow tract wall shearing force based on neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410133372.5A CN117952939A (en) 2024-01-31 2024-01-31 Real-time measurement method for heart outflow tract wall shearing force based on neural network

Publications (1)

Publication Number Publication Date
CN117952939A true CN117952939A (en) 2024-04-30

Family

ID=90801054

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410133372.5A Pending CN117952939A (en) 2024-01-31 2024-01-31 Real-time measurement method for heart outflow tract wall shearing force based on neural network

Country Status (1)

Country Link
CN (1) CN117952939A (en)

Similar Documents

Publication Publication Date Title
US7869663B2 (en) Methods, systems and computer program products for analyzing three dimensional data sets obtained from a sample
Golabbakhsh et al. Vessel‐based registration of fundus and optical coherence tomography projection images of retina using a quadratic registration model
US20200286208A1 (en) Neural network based enhancement of intensity images
WO2017133083A1 (en) Angiography method and system based on splitting full space of modulation spectrum and angle-based combination
CN110448319B (en) Blood flow velocity calculation method based on contrast image and coronary artery
JP6436442B2 (en) Photoacoustic apparatus and image processing method
Duncan et al. Absolute blood velocity measured with a modified fundus camera
CN104361554B (en) A kind of externa automatic testing method based on ivus image
CN112057049B (en) Optical coherent blood flow radiography method and system based on multi-dimensional feature space
JP2024515635A (en) System and method for reconstructing 3D images from ultrasound and camera images - Patents.com
CN113192069A (en) Semantic segmentation method and device for tree structure in three-dimensional tomography image
WO2023039353A2 (en) Real-time super-resolution ultrasound microvessel imaging and velocimetry
Parameswari et al. RETRACTED ARTICLE: Prediction of atherosclerosis pathology in retinal fundal images with machine learning approaches
US10229494B2 (en) Automated analysis of intravascular OCT image volumes
Zhang et al. TranSegNet: hybrid CNN-vision transformers encoder for retina segmentation of optical coherence tomography
Yan et al. A novel segmentation approach for intravascular ultrasound images
WO2021100694A1 (en) Image processing device, image processing method, and program
CN108846896A (en) A kind of automatic molecule protein molecule body diagnostic system
CN117952939A (en) Real-time measurement method for heart outflow tract wall shearing force based on neural network
CN112085830A (en) Optical coherent angiography imaging method based on machine learning
Lee et al. Automated drosophila heartbeat counting based on image segmentation technique on optical coherence tomography
CN113706567A (en) Blood flow imaging quantitative processing method and device combining blood vessel morphological characteristics
Guan et al. Full‐field optical multi‐functional angiography based on endogenous hemodynamic characteristics
JP7446277B2 (en) Colocalization detection of retinal perfusion and optic disc deformation
Xu et al. Improving the resolution of retinal OCT with deep learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination