CN117911318A - Method, device, electronic equipment and storage medium for determining image parameters - Google Patents


Info

Publication number
CN117911318A
Authority
CN
China
Prior art keywords
image
ccta
model
training
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311540909.1A
Other languages
Chinese (zh)
Inventor
徐利建
倪紫煜
夏清
李鸿升
张少霆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bozhi Perceptual Interaction Research Center Co ltd
Sensetime Group Ltd
Original Assignee
Bozhi Perceptual Interaction Research Center Co ltd
Sensetime Group Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bozhi Perceptual Interaction Research Center Co ltd, Sensetime Group Ltd filed Critical Bozhi Perceptual Interaction Research Center Co ltd
Priority to CN202311540909.1A
Publication of CN117911318A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The application discloses a method, an apparatus, an electronic device and a storage medium for determining image parameters. The method comprises: acquiring a CCTA image and physiological characteristic information of a target object; and determining hemodynamic parameters of the target object based on the CCTA image and the physiological characteristic information by means of a neural network. With this scheme, the neural network can directly output the hemodynamic parameters of the target object from its CCTA image and physiological characteristic information, which speeds up acquisition of the hemodynamic parameters; fusing in the physiological characteristic information of the target object can also improve the accuracy of the hemodynamic parameters.

Description

Method, device, electronic equipment and storage medium for determining image parameters
Technical Field
The present application relates to the field of image processing technology for deep learning, and in particular, to a method, an apparatus, an electronic device, and a computer readable storage medium for determining image parameters.
Background
Coronary artery disease (Coronary Artery Disease, CAD) is one of the most common cardiovascular diseases in the world, mainly caused by plaque build-up on the arterial wall. Although in clinical practice the degree of stenosis is often used as the criterion for interventional surgery, biomechanical and hemodynamic changes are also believed to play a critical role in the pathogenesis and treatment of CAD. For example, fractional flow reserve (Fractional Flow Reserve, FFR) has become the gold standard for diagnosing chronic CAD target stenosis, and wall shear stress (Wall Shear Stress, WSS) can be used to detect the negative effects of abnormal WSS on endothelial function. Quantitative assessment of these indicators aids early diagnosis of coronary artery disease. Therefore, how to quickly obtain parameters such as fractional flow reserve and wall shear stress is a problem to be solved.
Disclosure of Invention
The application provides at least a method, a device, an electronic device and a computer readable storage medium for determining image parameters.
The first aspect of the present application provides a method for determining image parameters, comprising: acquiring a coronary computed tomography angiography (CCTA) image and physiological characteristic information of a target object; and determining hemodynamic parameters of the target object based on the CCTA image and the physiological characteristic information by means of a neural network.
Therefore, the hemodynamic parameters of the target object can be directly output based on the CCTA image and the physiological characteristic information of the target object through the neural network, the acquisition speed of the hemodynamic parameters is improved, and the accuracy of the hemodynamic parameters can be improved by fusing the physiological characteristic information of the target object.
In some embodiments, the neural network includes a segmentation network model and a prediction network model; determining hemodynamic parameters of a target object based on the CCTA image and the physiological characteristic information using a neural network, comprising: dividing the CCTA image to obtain a plurality of image blocks corresponding to the CCTA image; respectively extracting features of each image block based on the segmentation network model to obtain a three-dimensional model corresponding to each image block; the three-dimensional model comprises a plurality of nodes and connecting edges among the nodes; determining a point cloud data set of the CCTA image based on the position information of each image block in the CCTA image and the node corresponding to each image block; and inputting the point cloud data set and the physiological characteristic information of the CCTA image into a prediction network model to obtain the hemodynamic parameters of the target object.
Therefore, dividing the CCTA image avoids the slowdown that the large data size of a full CCTA image would otherwise cause; extracting features from each image block through the segmentation network model yields a three-dimensional model for each image block, enriching the semantic information of the coronary vessels; and inputting the point cloud data set of the CCTA image together with the physiological characteristic information into the prediction network model improves the accuracy of the hemodynamic parameters of the target object.
In some embodiments, the segmentation network model includes a three-dimensional segmentation model and a mesh deformation model, the mesh deformation model being connected to the prediction network model. Extracting features from each image block based on the segmentation network model to obtain a three-dimensional model corresponding to each image block comprises: performing image segmentation on the image block based on the three-dimensional segmentation model to obtain a coronary vessel mask map of the image block; mapping the coronary vessel mask map of the image block onto an initial spherical mesh to obtain a coronary vessel spherical mesh of the image block, the initial spherical mesh comprising a plurality of original nodes and the connecting lines between them; and performing upsampling and feature extraction on the coronary vessel spherical mesh based on the mesh deformation model to obtain the three-dimensional model corresponding to each image block.
Therefore, the image block is subjected to image segmentation through the three-dimensional segmentation model so as to extract coronary artery blood vessels, the grid deformation model is used for up-sampling and feature extraction of the spherical grid of the coronary artery blood vessels so as to amplify the extracted coronary artery blood vessels, and the resolution of the coronary artery blood vessels is improved.
In some embodiments, a method of training a segmented network model includes: acquiring a CCTA training image, wherein the CCTA training image is provided with a plurality of corresponding sub-images, and each sub-image is associated with a marking mask image and a marking three-dimensional model; inputting the sub-image into the three-dimensional segmentation model to obtain a prediction mask image of the sub-image; mapping the prediction mask image to an initial spherical grid to obtain a spherical grid of a sub-image; inputting the spherical grid of the sub-image into a grid deformation model to obtain a prediction three-dimensional model of the sub-image; and carrying out joint iterative training on the three-dimensional segmentation model and the grid deformation model based on the weighted sum of the error value between the prediction mask image and the annotation mask image corresponding to the sub-image and the error value between the annotation three-dimensional model and the prediction three-dimensional model.
Therefore, by carrying out joint training on the three-dimensional segmentation model and the grid deformation model, the detection accuracy of the segmentation network model can be improved in an end-to-end mode.
In some embodiments, inputting the point cloud data set of the CCTA image and the physiological characteristic information into the prediction network model to obtain the hemodynamic parameters of the target object comprises: mapping the coronary vessel mask map of each image block onto the point cloud data set of the CCTA image to obtain an updated point cloud data set; and performing feature fusion in the prediction network model based on the updated point cloud data set and the physiological characteristic information to obtain the hemodynamic parameters of the target object.
Therefore, the coronary artery vascular mask map of each image block is mapped to the point cloud data set in the CCTA image, and the detection accuracy of hemodynamic parameters can be improved by combining more semantic information.
In some embodiments, a training method of a predictive network model includes: acquiring a plurality of training data sets, wherein the training data sets comprise CCTA training images and physiological training data, and the CCTA training images are associated with sample point cloud data; marking parameters are associated with each training data set; the sample point cloud data is acquired by the trained segmentation network model based on a CCTA training image; inputting sample point cloud data and physiological training data associated with the CCTA training image into a prediction network model to obtain prediction parameters; and carrying out iterative training on the trained segmentation network model and the trained prediction network model based on the error value between the prediction parameter and the labeling parameter corresponding to the training data set.
Therefore, the CCTA training image is processed by the trained segmentation network model to obtain its corresponding sample point cloud data, and the prediction network model is then trained on that data. This end-to-end approach improves the prediction accuracy of the prediction network model; moreover, the segmentation network model and the prediction network model together can directly derive the corresponding parameters from a CCTA training image and physiological training data, improving detection speed.
In some embodiments, the method of determining image parameters further comprises: the CCTA image is preprocessed, wherein the preprocessing includes normalizing CT values and normalizing CT sizes.
Therefore, preprocessing the CCTA images makes it easy to adjust CCTA images of different sizes and types to preset requirements, improving the generalization performance of the method for determining hemodynamic parameters.
The second aspect of the present application provides an apparatus for determining an image parameter, the apparatus for determining an image parameter comprising: the acquisition module is used for acquiring the CCTA image and the physiological characteristic information of the target object; and the determining module is used for determining the hemodynamic parameters of the target object based on the CCTA image and the physiological characteristic information by adopting the neural network.
A third aspect of the present application provides an electronic device comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement a method of determining an image parameter as in the first aspect described above.
A fourth aspect of the application provides a computer readable storage medium having stored thereon program instructions which, when executed by a processor, implement a method of determining an image parameter as in the first aspect described above.
It will be appreciated that the advantages of the second to fourth aspects may be found in the relevant description of the first aspect and are not repeated here.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 is a flowchart illustrating an embodiment of a method for determining image parameters according to the present application;
FIG. 2 is a schematic diagram of an embodiment of a method for determining image parameters according to the present application;
FIG. 3 is a flowchart illustrating a training method of a segmentation network model according to an embodiment of the present application;
FIG. 4 is a flowchart of an embodiment of a training method of a predictive network model according to the present application;
FIG. 5 is a flowchart illustrating a step S12 of the method for determining image parameters in FIG. 1 according to an embodiment;
FIG. 6 is a flowchart illustrating an embodiment of step S122 in the method for determining image parameters provided in FIG. 5;
FIG. 7 is a schematic block diagram illustrating one embodiment of an apparatus for determining image parameters in accordance with the present application;
FIG. 8 is a schematic block diagram of one embodiment of a neural network provided by the present application;
FIG. 9 is a schematic block diagram of one embodiment of a determination module provided by the present application;
FIG. 10 is a schematic block diagram illustrating one embodiment of an apparatus for determining image parameters in accordance with the present application;
FIG. 11 is a schematic diagram of a frame of an embodiment of an electronic device provided by the present application;
fig. 12 is a schematic diagram of a computer readable storage medium according to an embodiment of the present application.
Detailed Description
The following describes embodiments of the present application in detail with reference to the drawings.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, interfaces, techniques, etc., in order to provide a thorough understanding of the present application.
The term "and/or" is herein merely an association relationship describing an associated object, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the front and rear associated objects are an "or" relationship. Further, "a plurality" herein means two or more than two. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, may mean including any one or more elements selected from the group consisting of A, B and C.
The main execution body of the method for determining image parameters may be an image processing apparatus, which may be any terminal device, server, or other processing device capable of executing the method of the embodiments of the present application. The terminal device may be user equipment (User Equipment, UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (Personal Digital Assistant, PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the image processing method may be implemented by a processor invoking computer readable instructions stored in a memory.
Referring to fig. 1 and 2, fig. 1 is a flowchart illustrating an embodiment of a method for determining image parameters according to the present application; fig. 2 is a schematic diagram of a method for determining image parameters according to an embodiment of the present application.
Specifically, the method may include the steps of:
S11: and acquiring CCTA images and physiological characteristic information of the target object.
Specifically, coronary computed tomography angiography (Coronary Computed Tomography Angiography, CCTA) is a common means of clinical diagnosis of cardiovascular disease, and is also a gold standard for such diagnosis.
And carrying out coronary computed tomography angiography on the target object to obtain a CCTA image of the target object. CCTA images are three-dimensional images of coronary vessels.
And detecting the physiological information of the target object to obtain the physiological characteristic information of the target object. The physiological characteristic information comprises heart rate, blood pressure, blood fat and the like. The physiological characteristic information of each target object is different.
In one embodiment, the target object may be a human or an animal. In this embodiment, the target object is a person, and may specifically be a patient.
S12: and determining the hemodynamic parameters of the target object based on the CCTA image and the physiological characteristic information by adopting a neural network.
In one embodiment, the neural network includes a segmentation network model and a prediction network model. In this embodiment, the segmentation network model includes a three-dimensional segmentation model and a mesh deformation model that are sequentially cascaded, and the mesh deformation model is connected with the prediction network model.
In one embodiment, a method for training a segmented network model includes the following steps.
Referring to fig. 3, fig. 3 is a flowchart illustrating an embodiment of a training method of a split network model according to the present application.
S101: and acquiring a CCTA training image, wherein the CCTA training image is provided with a plurality of corresponding sub-images, and each sub-image is associated with an annotation mask image and an annotation three-dimensional model.
Specifically, CCTA training images of a patient are acquired. Each CCTA training image is divided into a plurality of sub-images. The size of a sub-image may be a preset size, set according to the actual situation, for example 36×36×36.
Each acquired sub-image has a corresponding annotation mask image and an annotation three-dimensional model. The annotation mask image is a coronary artery blood vessel image truly contained in the sub-image. The three-dimensional model is marked as a real three-dimensional model constructed after the resolution of the coronary artery blood vessel is improved.
S102: and inputting the sub-image into the three-dimensional segmentation model to obtain a prediction mask image of the sub-image.
Specifically, each sub-image is respectively input into a three-dimensional segmentation model, and the three-dimensional segmentation model performs image segmentation on the sub-images to obtain a prediction mask image of the sub-images. In this embodiment, the three-dimensional segmentation model may employ a 3D-UNet network or a V-Net network.
S103: and mapping the prediction mask image to the initial spherical grid to obtain the spherical grid of the sub-image.
Specifically, the prediction mask image of the sub-image is mapped onto the initial spherical mesh to obtain the spherical mesh of the sub-image. The spherical mesh includes a plurality of original nodes and the connecting lines between them.
S104: and inputting the spherical grid of the sub-image into the grid deformation model to obtain a prediction three-dimensional model of the sub-image.
Specifically, up-sampling and feature extraction are carried out on the spherical grids of the sub-images based on the grid deformation model, so as to obtain a prediction three-dimensional model of the sub-images. In this embodiment, the grid deformation model may employ a GCN network.
S105: and carrying out joint iterative training on the three-dimensional segmentation model and the grid deformation model based on the weighted sum of the error value between the prediction mask image and the annotation mask image corresponding to the sub-image and the error value between the annotation three-dimensional model and the prediction three-dimensional model.
Specifically, an error value between a prediction mask image and a annotation mask image corresponding to the sub-image is determined using a cross entropy loss function and a DICE loss function.
The error value between the annotated three-dimensional model and the predicted three-dimensional model corresponding to the sub-image is determined using the Chamfer loss function, the normal loss function, the Laplacian regularization loss function, and the edge length regularization loss function.
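Of the mesh losses listed above, the Chamfer loss is the main term matching predicted mesh nodes to annotated ones. A minimal sketch, as a plain symmetric squared Chamfer distance in pure Python (a brute-force O(n·m) version for illustration; the patent does not specify an implementation):

```python
def chamfer_distance(pred, gt):
    """Symmetric (squared) Chamfer distance between two 3-D point sets.

    pred, gt: lists of (x, y, z) tuples. For each point in one set, take the
    squared distance to its nearest neighbour in the other set, average over
    the set, and sum both directions.
    """
    def sq(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    def one_way(src, dst):
        return sum(min(sq(p, q) for q in dst) for p in src) / len(src)

    return one_way(pred, gt) + one_way(gt, pred)
```

Identical point sets give a distance of zero; the loss grows as the predicted mesh nodes drift from the annotation.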
And carrying out weighted summation on the error value between the prediction mask image and the annotation mask image corresponding to the sub-image and the error value between the annotation three-dimensional model and the prediction three-dimensional model to obtain the loss value corresponding to the sub-image. And carrying out joint iterative training on the three-dimensional segmentation model and the grid deformation model through the loss value. I.e. the parameters in the three-dimensional segmentation model and the parameters in the grid deformation model are synchronously updated through the loss values.
When the weighted sum of the error value between the prediction mask image and the annotation mask image corresponding to the sub-image and the error value between the annotation three-dimensional model and the prediction three-dimensional model is smaller than a threshold value, training of the three-dimensional segmentation model and the grid deformation model can be stopped. The threshold value can be 5% or 1%, and is specifically set according to practical situations.
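The weighted-sum training objective and stopping rule described above can be sketched as follows; the weights and the 5% threshold are illustrative hyper-parameters, not values fixed by the patent:

```python
def joint_loss(seg_losses, mesh_losses, seg_weights, mesh_weights):
    """Weighted sum of segmentation-branch losses (e.g. cross-entropy, Dice)
    and mesh-branch losses (e.g. Chamfer, normal, Laplacian, edge length).
    One scalar drives joint updates of both models."""
    total = sum(w * l for w, l in zip(seg_weights, seg_losses))
    total += sum(w * l for w, l in zip(mesh_weights, mesh_losses))
    return total

def should_stop(loss, threshold=0.05):
    """Stop joint training once the weighted loss drops below the threshold
    (the text suggests e.g. 5% or 1%, set per the actual situation)."""
    return loss < threshold
```

In a training loop, `joint_loss` would be backpropagated through both the mesh deformation model and the three-dimensional segmentation model so their parameters update synchronously.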
In one embodiment, the training method of the predictive network model includes the following steps.
Referring to fig. 4, fig. 4 is a flowchart illustrating an embodiment of a training method of a predictive network model according to the present application.
S201: acquiring a plurality of training data sets, wherein the training data sets comprise CCTA training images and physiological training data, and the CCTA training images are associated with sample point cloud data; marking parameters are associated with each training data set; the sample point cloud data is acquired based on the CCTA training image by the trained segmentation network model.
Specifically, a plurality of training data sets including CCTA training images and physiological training data are acquired. And dividing the CCTA training image to obtain a plurality of sub-images.
In this embodiment, each sub-image corresponding to the CCTA training image is input to the split network model obtained by training in steps S101 to S105, and the split network model outputs the three-dimensional model corresponding to each sub-image. The three-dimensional model includes a plurality of nodes and connecting edges between the nodes. And extracting all nodes in the three-dimensional model, and converting the three-dimensional model into point clouds corresponding to the sub-images. And according to the position information of each sub-image in the CCTA training image, forming sample point cloud data corresponding to the CCTA training image based on the point cloud corresponding to each sub-image.
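The assembly of per-sub-image nodes into one sample point cloud can be sketched by offsetting each patch-local node by the patch's position in the full volume. Names and the (z, y, x) convention are illustrative, not from the patent:

```python
def assemble_point_cloud(patches):
    """Merge per-patch mesh nodes into one point cloud in CCTA-image coords.

    patches: list of (origin, nodes), where origin is the patch's (z, y, x)
    offset inside the full CCTA volume and nodes are the patch-local
    (z, y, x) node coordinates extracted from the patch's 3-D mesh.
    """
    cloud = []
    for origin, nodes in patches:
        for node in nodes:
            # Shift each local node into the global coordinate frame.
            cloud.append(tuple(o + c for o, c in zip(origin, node)))
    return cloud
```
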
S202: and inputting the sample point cloud data and the physiological training data associated with the CCTA training image into a prediction network model to obtain prediction parameters.
Specifically, sample point cloud data and physiological training data corresponding to CCTA training images in the training data set are input into a prediction network model, and prediction parameters corresponding to the training data set are obtained. In this embodiment, the predicted parameter is a hemodynamic parameter. In this embodiment, the predicted network model may employ PointNet ++ network.
S203: and carrying out iterative training on the trained segmentation network model and the trained prediction network model based on the error value between the prediction parameter and the labeling parameter corresponding to the training data set.
Specifically, an error value between a predicted parameter and a labeling parameter corresponding to the training data set is determined through an L1 or L2 loss function. And back-propagating the error value to the trained segmentation network model and the prediction network model so as to update network parameters in the trained segmentation network model and the prediction network model.
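The L1 and L2 error values between predicted and annotated parameters reduce to mean absolute and mean squared error; a minimal sketch over flat parameter lists:

```python
def l1_error(pred, target):
    """Mean absolute error between predicted and annotated parameters."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def l2_error(pred, target):
    """Mean squared error between predicted and annotated parameters."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
```

Either scalar can be backpropagated through both the prediction network model and the trained segmentation network model, as the text describes.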
And when the error value between the prediction parameter and the labeling parameter corresponding to the training data set is smaller than the threshold value, the training of the segmentation network model and the prediction network model can be stopped, and the trained neural network is obtained. The threshold value can be 5% or 1%, and is specifically set according to practical situations.
The trained neural network can directly determine the hemodynamic parameters of the target object based on the CCTA image and the physiological characteristic information of the target object, and the method is simple and reliable, has high determination speed and is convenient for clinical application.
In one embodiment, the CCTA image is preprocessed, where the preprocessing includes normalizing the CT values and normalizing the CT sizes. For example, the spatial spacing and the window width/level of the CCTA image are adjusted to corresponding preset requirements: the spatial spacing is set to (1, 1), and the window of the CCTA image is adjusted to [-200, 400]. In other embodiments, the preprocessing further includes denoising.
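The CT-value normalization step can be sketched as clipping Hounsfield units to the stated [-200, 400] window and rescaling to [0, 1] (a common convention; the exact target range is an assumption, not specified in the text):

```python
def normalize_ct_window(hu_values, low=-200.0, high=400.0):
    """Clip CT values (in HU) to the window [low, high] and rescale
    linearly to [0, 1], matching the [-200, 400] window in the text."""
    out = []
    for v in hu_values:
        v = max(low, min(high, v))          # clip to the window
        out.append((v - low) / (high - low))  # rescale to [0, 1]
    return out
```
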
In a specific embodiment, determining the hemodynamic parameters of the target object based on the CCTA image and the physiological characteristic information using the neural network in step S12 proceeds as follows.
Referring to fig. 5, fig. 5 is a flowchart of an embodiment of step S12 in the method for determining image parameters provided in fig. 1.
Specifically, the neural network includes a cascade of a segmentation network model and a prediction network model in sequence.
S121: and dividing the CCTA image to obtain a plurality of image blocks corresponding to the CCTA image.
Specifically, to facilitate subsequent data processing, the CCTA image is divided into a plurality of smaller image blocks. The size of an image block may be a preset size, for example 36×36×36.
S122: respectively extracting features of each image block based on the segmentation network model to obtain a three-dimensional model corresponding to each image block; the three-dimensional model includes a plurality of nodes and connecting edges between the nodes.
In this embodiment, the segmentation network model includes a three-dimensional segmentation model and a mesh deformation model that are sequentially cascaded, and the mesh deformation model is connected with the prediction network model.
In particular, the acquisition of a three-dimensional model of an image block comprises in particular the following steps.
Referring to fig. 6, fig. 6 is a flowchart illustrating an embodiment of step S122 in the method for determining image parameters provided in fig. 5.
S1221: and performing image segmentation on the image block based on the three-dimensional segmentation model to obtain a coronary artery vessel mask image of the image block.
Specifically, the image block is processed by the three-dimensional segmentation model with three downsampling-and-feature-extraction stages followed by three upsampling stages, so that the resolution of the resulting feature map is restored to that of the image block; a coronary vessel mask map corresponding to the image block is then determined from this feature map using a Sigmoid function.
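The final Sigmoid step turns per-voxel logits into a binary vessel mask; a minimal sketch over a flat list of logits (the 0.5 threshold is a common assumption, not stated in the text):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def mask_from_logits(logits, threshold=0.5):
    """Turn the segmentation head's per-voxel logits into a binary
    coronary vessel mask via Sigmoid, as in step S1221."""
    return [1 if sigmoid(v) >= threshold else 0 for v in logits]
```
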
S1222: mapping the coronary artery vascular mask map of the image block to an initial spherical grid to obtain a coronary artery vascular spherical grid of the image block; the initial spherical mesh includes a plurality of original nodes and connecting lines between the original nodes.
Specifically, an initial spherical mesh is obtained, comprising 162 original nodes and the connecting edges between them. The small number of original nodes helps the model quickly learn the basic characteristics of the mesh. The coronary vessel mask map of the image block is mapped onto the initial spherical mesh so that each original node carries a feature vector, yielding the coronary vessel spherical mesh corresponding to each image block.
S1223: and (3) up-sampling and feature extraction are carried out on the coronary artery vessel spherical grid based on the grid deformation model, so that a three-dimensional model corresponding to each image block is obtained.
Specifically, the coronary artery vessel sphere grid is input into a grid deformation model, the grid deformation model firstly extracts features of the coronary artery vessel sphere grid, then carries out up-sampling treatment so as to increase the peak of the coronary artery vessel sphere grid to 642 nodes, and then carries out up-sampling treatment so as to increase the peak of the coronary artery vessel sphere grid to 2562 nodes. Upsampling and feature extraction may continue in the manner described above. The up-sampling process is specifically performed by interpolation.
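The node counts 162, 642 and 2562 are consistent with an icosphere refined by repeated 1-to-4 triangle subdivision, each step adding one new vertex per edge; this is an observation about the numbers, not a construction stated in the patent. A short check of the counts:

```python
def icosphere_counts(subdivisions):
    """Vertex counts of an icosphere under repeated 1-to-4 subdivision,
    starting from an icosahedron (V=12, E=30, F=20). Each step adds one
    vertex per edge: V' = V + E, E' = 2E + 3F, F' = 4F. This reproduces
    the 162 -> 642 -> 2562 node counts mentioned in the text."""
    v, e, f = 12, 30, 20
    counts = [v]
    for _ in range(subdivisions):
        v, e, f = v + e, 2 * e + 3 * f, 4 * f
        counts.append(v)
    return counts
```
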
The grid deformation model can acquire more detailed feature vectors based on more coronary artery blood vessel spherical grids of the nodes, and further a more accurate three-dimensional model of the coronary artery blood vessel corresponding to each image block is obtained.
S123: and determining a point cloud data set of the CCTA image based on the position information of each image block in the CCTA image and the node corresponding to each image block.
In an embodiment, after step S121, each image block of the CCTA image may first be processed with the untrained segmentation network model to obtain a three-dimensional model, and thus an original point cloud, for each image block. An original point cloud data set for the CCTA image is determined from the position information of the image blocks in the CCTA image and their original point clouds. The trained segmentation network model is then used to obtain the three-dimensional model, and thus the nodes, of each image block. According to the position information of the image blocks in the CCTA image, the original point clouds of the image blocks are updated with their corresponding nodes; once updating is complete, the point cloud data set corresponding to the CCTA image is obtained.
In another embodiment, with the three-dimensional model of each image block obtained in step S1223, all nodes in the three-dimensional model are extracted to obtain all nodes corresponding to the image block. A point cloud data set corresponding to the CCTA image is then generated from the position information of each image block in the CCTA image and all nodes corresponding to each image block. The point cloud data set corresponding to the CCTA image is the point cloud data corresponding to the complete or partial coronary artery vessel of the target object.
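The assembly step above can be sketched minimally: per-block node coordinates are shifted by each block's position in the CCTA volume and concatenated into one cloud. The data layout and names below are assumptions for illustration, not the patented implementation:

```python
import numpy as np

def assemble_point_cloud(blocks):
    """Combine per-block node coordinates into one point cloud for the whole
    CCTA volume.  `blocks` is a list of (origin, nodes) pairs, where `origin`
    is the block's corner position in the CCTA image and `nodes` is an (N, 3)
    array of node coordinates local to that block."""
    clouds = [np.asarray(nodes, dtype=float) + np.asarray(origin, dtype=float)
              for origin, nodes in blocks]
    cloud = np.concatenate(clouds, axis=0)
    return np.unique(cloud, axis=0)   # drop duplicate nodes on block borders
```

De-duplication is one plausible way to handle nodes that fall on the boundary between overlapping blocks; the patent does not specify how such overlaps are resolved.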
S124: and inputting the point cloud data set and the physiological characteristic information of the CCTA image into a prediction network model to obtain the hemodynamic parameters of the target object.
Specifically, the point cloud data set of the CCTA image and the physiological characteristic information are input into the prediction network model. The prediction network model extracts each node in the point cloud data set in a layered manner, performs feature encoding, and obtains the corresponding point cloud features through gradual up-sampling. Combining the coronary vessel mask map of each image block into the up-sampled point cloud features incorporates more feature information and yields the fused point cloud features. The fused point cloud features are then decoded to restore them to the size of the point cloud data set of the CCTA image, and the hemodynamic parameters of the target object are obtained from the decoded point cloud features and the physiological characteristic information of the target object using a Sigmoid function or a Tanh function, so that the resulting hemodynamic parameters fit the target object more closely and their accuracy is improved. The hemodynamic parameters include at least fractional flow reserve and wall shear stress.
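The final fusion and squashing step can be illustrated with a toy head: decoded per-point features are concatenated with the patient's physiological feature vector (broadcast to every point) and passed through a Sigmoid, which is natural for outputs such as fractional flow reserve that lie in [0, 1]. All weight shapes and names are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_hemodynamics(point_feats, physio, w_point, w_physio, b):
    """Toy per-point prediction head: fuse decoded point cloud features with
    the patient-level physiological vector, then squash with a Sigmoid.
    point_feats: (N, F) decoded features; physio: (P,) physiological vector;
    w_point: (F,), w_physio: (P,), b: scalar."""
    n = point_feats.shape[0]
    physio_tiled = np.tile(physio, (n, 1))            # (N, P) patient context
    fused = np.concatenate([point_feats, physio_tiled], axis=1)
    return sigmoid(fused @ np.concatenate([w_point, w_physio]) + b)  # (N,)
```

A Tanh head, as the text also allows, would simply replace the Sigmoid for parameters whose natural range is [-1, 1] after scaling.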
The application provides an end-to-end neural network that can achieve accurate prediction of hemodynamic parameters from CCTA images in only about 10 seconds, which can effectively promote the clinical application of hemodynamics.
In this embodiment, the hemodynamic parameters of the target object can be directly output based on the CCTA image and the physiological characteristic information of the target object through the neural network, so that the acquisition speed of the hemodynamic parameters is improved, and the accuracy of the hemodynamic parameters can be improved by fusing the physiological characteristic information of the target object.
It will be appreciated by those skilled in the art that, in the methods of the specific embodiments described above, the written order of the steps does not imply a strict order of execution; the specific execution order of the steps should be determined by their functions and possible internal logic.
Referring to fig. 7 and 8, fig. 7 is a schematic block diagram illustrating an embodiment of an apparatus for determining image parameters according to the present application; fig. 8 is a schematic block diagram of one embodiment of a neural network provided by the present application.
In this embodiment, an apparatus 60 for determining an image parameter is provided, where the apparatus 60 for determining an image parameter includes an acquisition module 61 and a determination module 62.
The acquisition module 61 is used for acquiring CCTA images and physiological characteristic information of the target object;
the determining module 62 is configured to determine hemodynamic parameters of the target object based on the CCTA image and the physiological characteristic information using the neural network 70.
In one embodiment, the neural network 70 includes a cascade of a segmentation network model 71 and a prediction network model 72 in sequence.
Referring to fig. 9, fig. 9 is a schematic block diagram of a determining module according to an embodiment of the present application.
The determination module 62 includes a segmentation unit 621, a feature extraction unit 622, a combination unit 623, and a prediction unit 624, which are sequentially cascaded.
The segmentation unit 621 is configured to segment the CCTA image to obtain a plurality of image blocks corresponding to the CCTA image.
The feature extraction unit 622 is configured to perform feature extraction on each image block based on the segmentation network model 71, so as to obtain a three-dimensional model corresponding to each image block; the three-dimensional model includes a plurality of nodes and connecting edges between the nodes.
The combining unit 623 is configured to determine a point cloud data set of the CCTA image based on the position information of each image block in the CCTA image and the node corresponding to each image block.
The prediction unit 624 is configured to input the point cloud data set and the physiological characteristic information of the CCTA image into the prediction network model 72 to obtain the hemodynamic parameters of the target object. Specifically, the prediction unit 624 is further configured to map the coronary artery vascular mask map of each image block to the point cloud data set of the CCTA image to obtain an updated point cloud data set; the prediction network model 72 performs feature fusion based on the updated point cloud data set and the physiological characteristic information to obtain the hemodynamic parameters of the target object.
In one embodiment, the segmentation network model 71 includes a three-dimensional segmentation model 711 and a mesh deformation model 712 that are cascaded in sequence, the mesh deformation model 712 being connected to the prediction network model 72. The feature extraction unit 622 includes a target extraction subunit 6221 and an increased resolution subunit 6222.
The target extraction subunit 6221 is configured to perform image segmentation on the image block based on the three-dimensional segmentation model 711 to obtain a coronary vessel mask map of the image block; it is further configured to map the coronary vessel mask map of the image block to the initial spherical grid to obtain the coronary artery vessel spherical grid of the image block. The initial spherical grid includes a plurality of original nodes and connecting lines between the original nodes.
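One simple way to picture the mapping from a binary coronary mask onto an initial spherical grid is a nearest-neighbour snap: center the sphere on the mask and move each sphere vertex to the closest foreground voxel. This is a hand-written stand-in for the mapping step only; the names are assumptions and the actual model may realize this mapping differently:

```python
import numpy as np

def map_mask_to_sphere(mask, sphere_verts):
    """Project an initial spherical grid onto a binary coronary mask by
    snapping each sphere vertex to the nearest foreground voxel.
    mask: 3D binary array; sphere_verts: (V, 3) vertices of a unit sphere."""
    fg = np.argwhere(mask > 0).astype(float)           # (M, 3) voxel coords
    center = fg.mean(axis=0)
    verts = np.asarray(sphere_verts, dtype=float) + center  # place sphere at mask center
    # nearest-neighbour snap of every vertex to a mask voxel
    d = np.linalg.norm(verts[:, None, :] - fg[None, :, :], axis=2)
    return fg[np.argmin(d, axis=1)]
```

For large masks, the brute-force distance matrix would be replaced by a spatial index (e.g. a k-d tree), and a learned mapping could produce smoother vertex placements.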
The resolution-increasing subunit 6222 is configured to upsample and extract features from the coronary artery vessel spherical mesh based on the mesh deformation model 712, to obtain a three-dimensional model corresponding to each image block.
Referring to fig. 10, fig. 10 is a schematic block diagram of an apparatus for determining image parameters according to an embodiment of the present application.
In an embodiment, the apparatus 60 for determining image parameters further comprises a first training module 63 and a second training module 64.
The first training module 63 is configured to: acquire a CCTA training image, where the CCTA training image has a plurality of corresponding sub-images and each sub-image is associated with an annotation mask image and an annotation three-dimensional model; input each sub-image into the three-dimensional segmentation model 711 to obtain a prediction mask image of the sub-image; map the prediction mask image to the initial spherical grid to obtain a spherical grid of the sub-image; input the spherical grid of the sub-image into the grid deformation model 712 to obtain a predicted three-dimensional model of the sub-image; and jointly and iteratively train the three-dimensional segmentation model 711 and the grid deformation model 712 based on a weighted sum of the error values between the prediction mask image and the annotation mask image corresponding to each sub-image and the error values between the annotation three-dimensional model and the predicted three-dimensional model.
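The weighted-sum objective used for this joint training can be sketched as follows. The specific error terms (Dice loss for the mask, mean vertex distance for the mesh) and the weights are illustrative assumptions; the patent only states that a weighted sum of the two errors is used:

```python
import numpy as np

def joint_loss(pred_mask, gt_mask, pred_verts, gt_verts, w_seg=1.0, w_mesh=1.0):
    """Weighted sum of a segmentation error and a mesh error for jointly
    training the 3D segmentation model and the grid deformation model.
    pred_mask/gt_mask: binary or soft masks; pred_verts/gt_verts: (N, 3)."""
    inter = 2.0 * np.sum(pred_mask * gt_mask)
    dice = 1.0 - inter / (np.sum(pred_mask) + np.sum(gt_mask) + 1e-8)
    mesh = np.mean(np.linalg.norm(pred_verts - gt_verts, axis=1))
    return w_seg * dice + w_mesh * mesh
```

Because both terms are differentiable, gradients from the mesh error can flow back through the grid deformation model into the segmentation model, which is what makes the joint iterative training possible.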
In one embodiment, the second training module 64 is configured to: acquire a plurality of training data sets, where each training data set includes a CCTA training image and physiological training data, the CCTA training image is associated with sample point cloud data, and each training data set is associated with a labeling parameter; input the sample point cloud data and the physiological training data into the prediction network model 72 to obtain prediction parameters; and iteratively train the segmentation network model 71 and the prediction network model 72 based on the error values between the prediction parameters and the labeling parameters corresponding to each training data set.
The application can directly output the hemodynamic parameters of the target object based on the CCTA image and the physiological characteristic information of the target object through the neural network, improves the acquisition speed of the hemodynamic parameters, and can improve the accuracy of the hemodynamic parameters by fusing the physiological characteristic information of the target object.
Referring to fig. 11, fig. 11 is a schematic diagram of a framework of an embodiment of an electronic device according to the present application. The electronic device 80 comprises a memory 81 and a processor 82 coupled to each other, the processor 82 being configured to execute program instructions stored in the memory 81 to implement the steps of any of the embodiments of the method of determining image parameters described above. In one specific implementation scenario, the electronic device 80 may include, but is not limited to, a microcomputer and a server; the electronic device 80 may also be a mobile device such as a notebook computer or a tablet computer, which is not limited herein.
Specifically, the processor 82 is configured to control itself and the memory 81 to implement the steps of any of the embodiments of the method of determining image parameters described above. The processor 82 may also be referred to as a CPU (Central Processing Unit). The processor 82 may be an integrated circuit chip having signal processing capabilities. The processor 82 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 82 may be implemented jointly by multiple integrated circuit chips.
In the above aspect, a method for determining an image parameter includes: acquiring CCTA images and physiological characteristic information of a target object; and determining the hemodynamic parameters of the target object based on the CCTA image and the physiological characteristic information by adopting a neural network.
Referring to fig. 12, fig. 12 is a schematic diagram of a framework of an embodiment of a computer-readable storage medium according to the present application. The computer-readable storage medium 90 stores program instructions 901 executable by a processor, the program instructions 901 being used to implement the steps of any of the embodiments of the method of determining image parameters described above.
In the above aspect, a method for determining an image parameter includes: acquiring CCTA images and physiological characteristic information of a target object; and determining the hemodynamic parameters of the target object based on the CCTA image and the physiological characteristic information by adopting a neural network.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementations thereof may refer to descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
The foregoing description of the various embodiments is intended to highlight the differences between the embodiments; for their identical or similar parts, the embodiments may refer to one another, and these parts are not repeated herein for brevity.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is merely a logical functional division, and other divisions are possible in actual implementation; for example, units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical, or in other forms.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Claims (10)

1. A method of determining image parameters, comprising:
acquiring a coronary computed tomography angiography (CCTA) image and physiological characteristic information of a target object;
And determining the hemodynamic parameters of the target object based on the CCTA image and the physiological characteristic information by using a neural network.
2. The method of determining image parameters of claim 1, wherein the neural network comprises a segmentation network model and a prediction network model;
The determining, with a neural network, a hemodynamic parameter of the target object based on the CCTA image and the physiological characteristic information, includes:
Dividing the CCTA image to obtain a plurality of image blocks corresponding to the CCTA image;
respectively extracting features of the image blocks based on the segmentation network model to obtain a three-dimensional model corresponding to each image block; the three-dimensional model comprises a plurality of nodes and connecting edges among the nodes;
Determining a point cloud data set of the CCTA image based on the position information of each image block in the CCTA image and the node corresponding to each image block;
And inputting the point cloud data set of the CCTA image and the physiological characteristic information into the prediction network model to obtain the hemodynamic parameters of the target object.
3. The method of determining image parameters of claim 2, wherein the segmentation network model comprises a three-dimensional segmentation model and a mesh deformation model, the mesh deformation model being connected to the prediction network model;
The feature extraction is performed on each image block based on the segmentation network model to obtain a three-dimensional model corresponding to each image block, including:
Image segmentation is carried out on the image block based on the three-dimensional segmentation model, and a coronary artery vascular mask image of the image block is obtained;
Mapping the coronary artery vascular mask map of the image block to an initial spherical grid to obtain a coronary artery vascular spherical grid of the image block; the initial spherical mesh comprises a plurality of original nodes and connecting lines between the original nodes;
And up-sampling and feature extraction are carried out on the coronary artery vessel spherical grid based on the grid deformation model, so that the three-dimensional model corresponding to each image block is obtained.
4. A method of determining an image parameter according to claim 3,
The training method of the segmentation network model comprises the following steps:
Acquiring a CCTA training image, wherein the CCTA training image is provided with a plurality of corresponding sub-images, and each sub-image is associated with an annotation mask image and an annotation three-dimensional model;
inputting the sub-image into the three-dimensional segmentation model to obtain a prediction mask image of the sub-image;
mapping the prediction mask image to the initial spherical grid to obtain a spherical grid of the sub-image;
inputting the spherical grid of the sub-image into the grid deformation model to obtain a predicted three-dimensional model of the sub-image;
And carrying out joint iterative training on the three-dimensional segmentation model and the grid deformation model based on a weighted sum of error values between the prediction mask image and the annotation mask image corresponding to the sub-image and error values between the annotation three-dimensional model and the prediction three-dimensional model.
5. The method of determining image parameters according to claim 2, wherein,
Inputting the point cloud data set of the CCTA image and the physiological characteristic information into the prediction network model to obtain the hemodynamic parameters of the target object, wherein the method comprises the following steps:
Mapping the coronary artery vascular mask map of each image block to a point cloud data set in the CCTA image to obtain an updated point cloud data set;
and the prediction network model performs feature fusion based on the updated point cloud data set and the physiological characteristic information to obtain the hemodynamic parameters of the target object.
6. The method of determining image parameters according to claim 5, wherein,
The training method of the prediction network model comprises the following steps:
Acquiring a plurality of training data sets, wherein the training data sets comprise CCTA training images and physiological training data, and the CCTA training images are associated with sample point cloud data; each training data set is associated with a labeling parameter; the sample point cloud data is obtained by the trained segmentation network model based on the CCTA training image;
Inputting the sample point cloud data and the physiological training data associated with the CCTA training image into the prediction network model to obtain prediction parameters;
And carrying out iterative training on the trained segmentation network model and the trained prediction network model based on the error value between the prediction parameter and the labeling parameter corresponding to the training data set.
7. The method of determining image parameters according to claim 1, further comprising:
And preprocessing the CCTA image, wherein the preprocessing comprises CT value normalization and CT size normalization.
8. An apparatus for determining image parameters, comprising:
The acquisition module is used for acquiring the CCTA image and the physiological characteristic information of the target object;
and the determining module is used for determining the hemodynamic parameters of the target object based on the CCTA image and the physiological characteristic information by adopting a neural network.
9. An electronic device comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the steps in the method of determining image parameters of any one of claims 1 to 7.
10. A computer readable storage medium having stored thereon program instructions, which when executed by a processor, perform the steps of the method of determining image parameters of any of claims 1 to 7.
CN202311540909.1A 2023-11-17 2023-11-17 Method, device, electronic equipment and storage medium for determining image parameters Pending CN117911318A (en)


Publications (1)

Publication Number: CN117911318A, Publication Date: 2024-04-19



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination