CN111341420B - Cardiovascular image recognition system and method based on whole-heart seven-dimensional model - Google Patents


Info

Publication number
CN111341420B
CN111341420B (application CN202010106254.7A)
Authority
CN
China
Prior art keywords
image
cardiovascular
neural network
dimensional model
heart
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN202010106254.7A
Other languages
Chinese (zh)
Other versions
CN111341420A (en)
Inventor
刘琦
李登
周翔鸿
葛玲玲
姚怡君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University filed Critical Sichuan University
Priority to CN202010106254.7A
Publication of CN111341420A
Application granted
Publication of CN111341420B
Legal status: Expired - Fee Related
Anticipated expiration

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Abstract

The invention discloses a cardiovascular image recognition system and method based on a whole-heart seven-dimensional model. The system comprises an image data processing subsystem, a cloud database and a whole-heart seven-dimensional model construction subsystem connected in sequence. The image data processing subsystem processes the input cardiovascular image to be identified, labels it accurately and uploads it to the cloud database; the cloud database stores all accurately labeled cardiovascular images; the whole-heart seven-dimensional model construction subsystem builds the corresponding whole-heart seven-dimensional model from the required cardiovascular images in the cloud database and performs cardiovascular image recognition with the constructed model. The invention accurately identifies cardiovascular ultrasound images and guarantees recognition quality; its highly compatible modeling system suits a variety of imaging examination modes; and it reduces the time imaging physicians spend on reading and modeling, greatly lightening their workload.

Description

Cardiovascular image recognition system and method based on whole-heart seven-dimensional model
Technical Field
The invention belongs to the technical field of ultrasonic image identification, and particularly relates to a cardiovascular image identification system and method based on a whole-heart seven-dimensional model.
Background
In the field of cardiovascular image acquisition, the most commonly used imaging techniques at present are cardiovascular ultrasound and computed tomography (CT), which complement each other functionally. Cardiovascular ultrasound observes tissues and organs using the reflection of ultrasonic waves from the human cardiovascular system; cardiac color Doppler ultrasound is the only examination mode that can dynamically display the structures inside the heart chambers together with cardiac motion and hemodynamics, and common cardiovascular ultrasound methods include B-mode ultrasound, M-mode ultrasound and Doppler ultrasound. Computed tomography (CT) obtains cross-sectional information about an object by making ray projection measurements at different angles; the core of CT is the theory of image reconstruction from projections, which essentially recovers the attenuation coefficient of each point of the imaged object from the projection data obtained by scanning.
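As a rough illustration of this projection-reconstruction principle (not taken from the patent), the sketch below simulates projection measurements of a toy attenuation map and recovers the map by filtered back projection, assuming scikit-image's radon/iradon transforms are available.

```python
import numpy as np
from skimage.transform import radon, iradon

# Toy attenuation map standing in for one cross-section of the imaged object.
phantom = np.zeros((128, 128), dtype=np.float64)
phantom[40:90, 50:80] = 1.0

# Projection measurements taken at 180 angles, then reconstruction of the
# attenuation coefficient of each point from those projections.
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(phantom, theta=angles, circle=False)
recovered = iradon(sinogram, theta=angles, circle=False)
```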
In addition to these two imaging techniques, fractional flow reserve (FFR), maximum intensity projection (MIP), multiplanar reconstruction (MPR) and the volume rendering technique (VRT) are also widely used in cardiovascular image acquisition. Fractional flow reserve (FFR) is the ratio of the maximum blood flow that the myocardial territory supplied by a stenotic coronary artery can receive to the maximum blood flow the same territory would receive under theoretically normal conditions, i.e. the ratio of the mean pressure distal to the stenosis (Pd) to the mean pressure at the coronary ostium (Pa) under maximal myocardial hyperemia. Maximum intensity projection (MIP) obtains a two-dimensional image by perspective projection, i.e. by computing the maximum density among the voxels encountered along each ray cast through the scanned object: as the ray bundle passes through the original images of a section of tissue, the pixel of maximum density along each ray is retained and projected onto a two-dimensional plane, forming the MIP reconstruction. Multiplanar reconstruction (MPR) stacks all axial images within the scan range and then reconstructs, for the tissue designated by a marking line, images in the coronal, sagittal or arbitrarily angled oblique planes. The volume rendering technique (VRT) is the most powerful three-dimensional reconstruction technique currently in use; with its strong modeling capability and lifelike shape and color, it can reconstruct structural three-dimensional models of the arteries, veins and heart.
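The two simplest quantities above can be sketched directly; in this minimal example (an illustration assumed for this text, not part of the patent) a CT-like volume is held as a NumPy array and FFR is the pressure ratio just described.

```python
import numpy as np

def fractional_flow_reserve(pd_mean: float, pa_mean: float) -> float:
    # FFR as defined above: mean pressure distal to the stenosis (Pd)
    # over mean pressure at the coronary ostium (Pa) at maximal hyperemia.
    return pd_mean / pa_mean

def max_intensity_projection(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    # For every ray cast along `axis`, keep only the voxel of maximum density,
    # collapsing the 3-D volume onto a 2-D plane (the MIP image).
    return volume.max(axis=axis)

# Example with a synthetic 64x64x64 volume projected along the z axis.
volume = np.random.rand(64, 64, 64).astype(np.float32)
mip_image = max_intensity_projection(volume, axis=0)        # shape (64, 64)
print(fractional_flow_reserve(pd_mean=72.0, pa_mean=96.0))  # 0.75
```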
Although the above techniques are widely used and well established, many problems remain when they are applied to the diagnosis of cardiovascular diseases:
(1) the growth in demand for cardiovascular imaging has outpaced the growth in the number of imaging physicians, leaving imaging physicians overloaded;
(2) interpretation of cardiovascular images is highly subjective, with high rates of missed diagnosis and misdiagnosis: interpreting cardiovascular images requires imaging physicians to have strong three-dimensional reasoning, judging structural and functional abnormalities in three-dimensional space from two-dimensional image data; the images involve numerous parameters and complex analysis, and a single physician can hardly master every key point. Interpretation therefore depends heavily on the physician's professional level and clinical experience. This is especially pronounced in cardiovascular ultrasound, where non-standardized image acquisition, a large volume of information and strong subjectivity in interpretation lead to very high misdiagnosis and missed-diagnosis rates;
(3) existing CT-based three-dimensional cardiovascular reconstruction provides only a single kind of information, and ultrasound-based three-dimensional reconstruction is immature: the cardiovascular system is special compared with other systems; whereas interpreting imaging data outside the cardiovascular system mostly requires only structural information, the cardiovascular system requires structural, hemodynamic and phase information simultaneously. Cardiovascular models or images currently obtained with CT-based maximum intensity projection (MIP), multiplanar reconstruction (MPR) and similar techniques provide only three-dimensional structural information, with no hemodynamic or phase information. Cardiovascular ultrasound can acquire structural, hemodynamic and phase information more accurately, but three-dimensional modeling for cardiovascular ultrasound is not yet mature, and no three-dimensional modeling technique that integrates all of this information has been reported;
(4) cardiovascular diagnosis and treatment data are under-utilized, and their scientific research value is wasted: medical big data in China is still a new concept with technical difficulties and obstacles; the data of individual hospitals remain information "islands", data integration is poor and utilization is low, and these are core problems for big-data applications.
Disclosure of Invention
Aiming at the defects in the prior art, the cardiovascular image recognition system and method based on the whole-heart seven-dimensional model provided by the invention solve the problems that the existing cardiovascular image interpretation is greatly influenced by subjectivity and the interpretation result is not accurate enough.
In order to achieve the purpose of the invention, the invention adopts the technical scheme that: the cardiovascular image recognition system based on the whole-heart seven-dimensional model comprises an image data processing subsystem, a cloud database and a whole-heart seven-dimensional model construction subsystem which are sequentially connected;
the image data processing subsystem is used for processing the input cardiovascular image to be identified, accurately marking the image and uploading the image to the cloud database;
the cloud database is used for storing all accurately marked cardiovascular images;
the whole-heart seven-dimensional model building subsystem is used for building a corresponding whole-heart seven-dimensional model according to the needed cardiovascular image in the cloud database and realizing the identification of the cardiovascular image according to the built whole-heart seven-dimensional model.
Furthermore, the image data processing subsystem is a convolutional neural network and a support vector machine used for ensemble learning;
the convolutional neural network comprises a first convolutional layer, a second convolutional layer, a first pooling layer, a third convolutional layer, a fourth convolutional layer, a second pooling layer, a fifth convolutional layer, a sixth convolutional layer, a third pooling layer and a full-connection layer which are connected in sequence;
ReLU activation functions are arranged between the first convolution layer and the second convolution layer, between the second convolution layer and the first pooling layer, between the third convolution layer and the fourth convolution layer, between the fourth convolution layer and the second pooling layer, between the fifth convolution layer and the sixth convolution layer, and between the sixth convolution layer and the third pooling layer.
Further, the cloud database is a DynamoDB database in NoSQL form.
Furthermore, the whole-heart seven-dimensional model building subsystem is a CNN neural network, a GAN neural network and a seven-dimensional model synthesis unit for mutual reinforcement learning;
the output end of the CNN convolutional neural network is respectively connected with the GAN neural network and the seven-dimensional model synthesis unit, and the output end of the GAN neural network is connected with the seven-dimensional model synthesis unit.
The cardiovascular image identification method based on the whole-heart seven-dimensional model comprises the following steps:
s1, constructing an image data processing subsystem and training the image data processing subsystem;
s2, inputting the cardiovascular image to be identified into a trained image data processing subsystem to obtain a correctly labeled cardiovascular image, and inputting the correctly labeled cardiovascular image into a cloud database;
and S3, inputting the cardiovascular image correctly labeled in the cloud database into the trained whole-heart seven-dimensional model construction subsystem, and constructing a corresponding whole-heart seven-dimensional model for cardiovascular image recognition.
Further, in step S1, the method for training the image data processing subsystem specifically includes:
inputting the cardiovascular image which is correctly labeled manually into a convolutional neural network and a support vector machine for ensemble learning, rechecking data output after the ensemble learning, and further adjusting parameters of the convolutional neural network and the support vector machine until the labeling accuracy of the cardiovascular image by a data processing subsystem reaches 99%, and obtaining a trained image data processing subsystem.
Further, in step S3, the method for training the full-heart seven-dimensional model building subsystem specifically includes:
a1, inputting the current correctly labeled cardiovascular image and the unlabeled cardiovascular image stored in the cloud database into a CNN neural network, and inputting the image output by the CNN neural network into a GAN neural network; simultaneously, inputting the correctly marked cardiovascular image stored in the cloud database into the GAN neural network;
a2, processing the input image through a GAN neural network, and outputting the corresponding image;
a3, calculating an optimized loss function value according to the image output by the CNN neural network and the image output by the GAN neural network;
a4, judging whether the optimization Loss function value Loss is smaller than a set threshold value;
if yes, go to step A5;
if not, go to step A6;
a5, taking the parameter values of the current CNN neural network and the current GAN neural network as the parameter values of the whole-heart seven-dimensional model building subsystem to obtain the trained whole-heart seven-dimensional model building subsystem;
and A6, adjusting the parameter values of the CNN neural network and the GAN neural network according to the optimized loss function value, and returning to the step A1.
Further, the optimized loss function value in step A3 is calculated by the formula given as an image in the source (Figure BDA0002388538040000051); in the formula:
the symbol shown in Figure BDA0002388538040000052 is the loss weight;
L_i is the loss function value;
the symbol shown in Figure BDA0002388538040000053 is the weight of the j-th case in the T-th test set;
x_i is the i-th test example;
the symbol shown in Figure BDA0002388538040000054 is the generated weight value for the i-th case of the T-th test set;
y_i is the i-th prediction result;
Δ is the error of each iteration.
Further, the step S3 is specifically:
s31, inputting the marked cardiovascular image to be identified into a trained CNN neural network to obtain a processed image and hemodynamic parameters corresponding to the cardiovascular image, and inputting the processed image into the trained GAN neural network;
s32, processing the input processed images through a GAN neural network to obtain a plurality of corresponding three-dimensional images;
s33, synthesizing a plurality of three-dimensional images according to the three-dimensional structural characteristics of the real heart 3D model to obtain corresponding three-dimensional structural characteristics of the images;
and S34, synthesizing the hemodynamic parameters output by the CNN neural network and the three-dimensional characteristic structure of the image obtained by the GAN neural network through a seven-dimensional model synthesis unit, and constructing a corresponding full-heart seven-dimensional model.
Further, the hemodynamic parameters in step S31 include blood flow, flow rate, pressure, and flow state.
The invention has the beneficial effects that:
(1) accurate identification of cardiovascular ultrasound images and guaranteed recognition quality: the recognition system provided by the invention achieves automatic and accurate identification of the standard acoustic window, ensures the accuracy of acoustic-window acquisition, and makes cardiovascular ultrasound images acquired by different hospitals, or within the same hospital, well comparable;
(2) a highly compatible modeling system suitable for a variety of imaging examination modes: the whole-heart seven-dimensional model established by the method is built simultaneously from CT, MR and cardiovascular ultrasound images, breaking the traditional restriction of one examination modality per modeling system, and offering strong compatibility and universality;
(3) improved work efficiency and a lighter burden for imaging physicians: the method, based on AI algorithms, achieves automatic recognition and three-dimensional modeling for cardiovascular ultrasound, improves the image acquisition efficiency and accuracy of cardiovascular sonographers, reduces the time imaging physicians spend on reading and modeling, and greatly lightens their workload.
Drawings
Fig. 1 is a structural diagram of a cardiovascular image recognition system based on a full-heart seven-dimensional model according to the present invention.
Fig. 2 is a flowchart of a cardiovascular image recognition method based on a full-heart seven-dimensional model according to the present invention.
FIG. 3 is a flowchart of the training method for the full-heart seven-dimensional model building subsystem provided by the present invention.
Detailed Description
The following description of the embodiments of the present invention is provided to help those skilled in the art understand the invention, but it should be understood that the invention is not limited to the scope of these embodiments; to those skilled in the art, various changes are possible without departing from the spirit and scope of the invention as defined in the appended claims, and everything produced using the inventive concept is protected.
Example 1:
as shown in fig. 1, the cardiovascular image recognition system based on the full-heart seven-dimensional model comprises an image data processing subsystem, a cloud database and a full-heart seven-dimensional model construction subsystem which are connected in sequence;
the image data processing subsystem is used for processing the input cardiovascular image to be identified, accurately marking the image and uploading the image to the cloud database;
the cloud database is used for storing all accurately marked cardiovascular images;
the whole-heart seven-dimensional model building subsystem is used for building a corresponding whole-heart seven-dimensional model according to the needed cardiovascular image in the cloud database and realizing the identification of the cardiovascular image according to the built whole-heart seven-dimensional model.
The image data processing subsystem in the embodiment of the invention is a convolutional neural network and a support vector machine used for ensemble learning. The convolutional neural network comprises a first convolutional layer, a second convolutional layer, a first pooling layer, a third convolutional layer, a fourth convolutional layer, a second pooling layer, a fifth convolutional layer, a sixth convolutional layer, a third pooling layer and a fully connected layer connected in sequence; a ReLU activation function is arranged between the first and second convolutional layers, between the second convolutional layer and the first pooling layer, between the third and fourth convolutional layers, between the fourth convolutional layer and the second pooling layer, between the fifth and sixth convolutional layers, and between the sixth convolutional layer and the third pooling layer. Each convolutional layer consists of several convolution units whose parameters are optimized by the back-propagation algorithm; the purpose of the convolution operation is to extract different input features. The first convolutional layer can only extract low-level features such as edges, lines and corners, while deeper layers iteratively extract more complex features from these low-level features. Because the convolutional layers usually produce large feature maps, a pooling layer is arranged after the convolutional layers to cut the features into several regions and take their maximum or average value, yielding new features of smaller dimension. The fully connected layer combines all local features into a global feature, which is used to compute a score for each class.
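A minimal PyTorch sketch of the layer ordering just described is given below; the channel widths, kernel sizes, input resolution and number of output classes are illustrative assumptions, since the text specifies only the sequence of layers and the placement of the ReLU activations.

```python
import torch
import torch.nn as nn

class CardiovascularCNN(nn.Module):
    """Six convolutional layers, three pooling layers and one fully connected
    layer, with ReLU activations placed as described in the text."""

    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),   # first convolutional layer
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),  # second convolutional layer
            nn.MaxPool2d(2),                             # first pooling layer
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),  # third convolutional layer
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),  # fourth convolutional layer
            nn.MaxPool2d(2),                             # second pooling layer
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),  # fifth convolutional layer
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),  # sixth convolutional layer
            nn.MaxPool2d(2),                             # third pooling layer
        )
        # For an assumed 224x224 single-channel input the pooled feature map
        # is 64 x 28 x 28, flattened into the fully connected layer.
        self.classifier = nn.Linear(64 * 28 * 28, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))
```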
The cloud database in the embodiment of the invention is a DynamoDB database in NoSQL form. Data stored in the database is first processed with dynamic data desensitization, so that noise is added to the parts of the data containing sensitive user information, protecting the privacy of hospitals and patients and strengthening the security of the system. DynamoDB is a distributed database designed to address the core problems of database management, performance, scalability and reliability; a developer can create a table that stores and retrieves any amount of data. DynamoDB improves on the consistent hashing algorithm by adopting a virtual-node mechanism: with Q virtual nodes and S physical nodes, Q/S virtual nodes are assigned to each physical node, where Q > S. Virtual nodes have the advantage that they can be distributed unevenly and that cache redistribution is minimized when servers are added or removed; however, because classic virtual nodes are not fixed and their positions are random, adding a new node requires scanning all data objects on all nodes to judge whether they need to be migrated, and this global scan causes a large overhead. DynamoDB fixes the virtual nodes and only changes the correspondence between virtual nodes and physical nodes.
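The virtual-node mechanism can be illustrated with the classic consistent-hashing sketch below (plain Python, no DynamoDB API involved); the DynamoDB refinement described above goes one step further by fixing the virtual-node positions and only remapping their owners when physical nodes change.

```python
import bisect
import hashlib

class VirtualNodeRing:
    """Classic consistent hashing with virtual nodes: each physical node owns
    many positions on the hash ring, so adding or removing a server only
    redistributes a small share of the keys."""

    def __init__(self, physical_nodes, virtual_per_node=128):
        self.ring = []    # sorted hash positions of all virtual nodes
        self.owner = {}   # hash position -> physical node
        for node in physical_nodes:
            for i in range(virtual_per_node):
                h = self._hash(f"{node}#vn{i}")
                bisect.insort(self.ring, h)
                self.owner[h] = node

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def locate(self, key: str) -> str:
        # Walk clockwise to the first virtual node, then return its owner.
        h = self._hash(key)
        idx = bisect.bisect(self.ring, h) % len(self.ring)
        return self.owner[self.ring[idx]]

ring = VirtualNodeRing(["storage-1", "storage-2", "storage-3"])
print(ring.locate("patient-0042/echo-frame-17.png"))  # hypothetical object key
```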
The whole-heart seven-dimensional model building subsystem in the embodiment of the invention is a CNN neural network, a GAN neural network and a seven-dimensional model synthesis unit engaged in mutually reinforcing learning; the output of the CNN convolutional neural network is connected to the GAN neural network and to the seven-dimensional model synthesis unit, and the output of the GAN neural network is connected to the seven-dimensional model synthesis unit. The CNN mainly comprises three layers, namely a convolutional layer, a convolutional layer and a pooling layer, with 7, 10 and 4 neurons respectively; the input indices of the first layer are 3 different gray-scale features and one appearance feature of the image, and the output indices of the last layer are the four image feature indices required by the GAN. The GAN is derived from the idea of the zero-sum game in game theory, applied to the deep learning neural network: a generator network G and a discriminator network D play against each other continuously, so that G learns the distribution of the data; when used for image generation, a trained G can produce a realistic image from a string of random numbers. The main functions of G and D are as follows: G is a generative network that receives a random noise z (random numbers) and generates an image from this noise; D is a discriminative network that judges whether a picture is "real": its input is a picture x, and its output D(x) is the probability that x is a real picture, a value of 1 meaning the picture is certainly real. The seven-dimensional model synthesis unit combines the three-dimensional part of the cardiovascular image generated by the GAN with the specific detail parts of the cardiovascular image generated by the CNN, yielding a highly reusable whole-heart seven-dimensional model construction subsystem.
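A minimal generator/discriminator sketch of the G and D roles described above, in PyTorch; the layer sizes, noise dimension and image resolution are illustrative assumptions, not taken from the patent.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """G: maps a random noise vector z to an image (illustrative sizes)."""
    def __init__(self, z_dim: int = 100, img_pixels: int = 64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, img_pixels), nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """D: maps an image x to D(x), the probability that x is a real picture."""
    def __init__(self, img_pixels: int = 64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_pixels, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

z = torch.randn(8, 100)
fake = Generator()(z)          # 8 generated images
score = Discriminator()(fake)  # probability that each generated image is "real"
```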
Example 2:
as shown in fig. 2, corresponding to the above embodiment 1, the present invention further provides a cardiovascular image recognition method based on a full-heart seven-dimensional model, including the following steps:
s1, constructing an image data processing subsystem and training the image data processing subsystem;
s2, inputting the cardiovascular image to be recognized into a trained image data processing subsystem to obtain a correctly labeled cardiovascular image, and inputting the correctly labeled cardiovascular image into a cloud database;
and S3, inputting the cardiovascular image correctly labeled in the cloud database into the trained full-heart seven-dimensional model construction subsystem, and constructing a corresponding full-heart seven-dimensional model for cardiovascular image identification.
In step S1, the image data processing subsystem in the embodiment of the invention is trained as follows: manually and correctly labeled cardiovascular images are fed into the convolutional neural network and the support vector machine for ensemble learning, the data output after ensemble learning is rechecked, and the parameters of the convolutional neural network and support vector machine are adjusted accordingly, until the labeling accuracy of the data processing subsystem on cardiovascular images reaches 99%, at which point the trained image data processing subsystem is obtained. In general, every piece of cardiovascular image data is checked at least twice, and disputed data may be rechecked three or more times to guarantee the final labeling quality. Manual labeling and intelligent labeling proceed in parallel, with intelligent labeling as the primary method and manual labeling as the auxiliary one. The workflow has three steps: first, data is labeled manually to produce a small number of accurate samples; these samples are then fed into the specified neural network model for training; finally, the results predicted by the neural network are rechecked manually, driving iterative optimization of the algorithm, and intelligent labeling is considered achieved once the network's labeling accuracy reaches 99%. After the intelligent labeling model is obtained, it is applied to labeling large numbers of images, turning raw image data into a high-performance, usable, high-quality image data set, which, after desensitization, upload to the cloud and related steps, is stored in a DynamoDB database in NoSQL form.
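One labeling pass of the CNN-plus-SVM ensemble described above might look like the sketch below (scikit-learn assumed); the extract_features helper and the data interfaces are hypothetical, and the manual recheck of disputed cases happens outside the code.

```python
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def label_with_ensemble(cnn_model, images, expert_labels, target_accuracy=0.99):
    # The convolutional network is assumed to expose an extract_features
    # method (hypothetical helper) that turns images into feature vectors.
    features = cnn_model.extract_features(images)

    # Fit the support vector machine on those features and measure agreement
    # with the manually rechecked expert labels against the 99% target.
    svm = SVC(kernel="rbf")
    svm.fit(features, expert_labels)
    accuracy = accuracy_score(expert_labels, svm.predict(features))

    # In the full workflow this pass is repeated, with manual rechecking of
    # disputed cases and parameter adjustment, until the target is reached.
    return svm, accuracy, accuracy >= target_accuracy
```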
As shown in fig. 3, in step S3 of the embodiment of the present invention, the method for training the full-heart seven-dimensional model building subsystem specifically includes:
a1, inputting the current correctly labeled cardiovascular image and the unlabeled cardiovascular image stored in the cloud database into a CNN neural network, and inputting the image output by the CNN neural network into a GAN neural network; simultaneously, inputting the correctly labeled cardiovascular image stored in the cloud database into the GAN neural network;
a2, processing the input image through a GAN neural network, and outputting the corresponding image;
a3, calculating an optimized loss function value according to the image output by the CNN neural network and the image output by the GAN neural network;
a4, judging whether the optimization Loss function value Loss is smaller than a set threshold value;
if yes, go to step A5;
if not, go to step A6;
a5, taking the parameter values of the current CNN neural network and the current GAN neural network as the parameter values of the whole-heart seven-dimensional model building subsystem to obtain the trained whole-heart seven-dimensional model building subsystem;
and A6, adjusting the parameter values of the CNN neural network and the GAN neural network according to the optimized loss function value, and returning to the step A1.
Wherein, the optimized loss function value in step A3 is calculated by the formula given as an image in the source (Figure BDA0002388538040000111); in the formula:
the symbol shown in Figure BDA0002388538040000112 is the loss weight;
L_i is the loss function value;
the symbol shown in Figure BDA0002388538040000113 is the weight of the j-th case in the T-th test set;
x_i is the i-th test example;
the symbol shown in Figure BDA0002388538040000114 is the generated weight value for the i-th case of the T-th test set;
y_i is the i-th prediction result;
Δ is the error of each iteration.
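Putting steps A1 to A6 together, a training loop might look like the sketch below; the loss function, threshold, optimizer settings and the two-argument GAN interface are all assumptions, since the patent gives the loss only as an image-rendered formula.

```python
import torch

def train_subsystem(cnn, gan, loader, loss_fn, threshold=0.05, lr=1e-4, max_rounds=1000):
    optimizer = torch.optim.Adam(list(cnn.parameters()) + list(gan.parameters()), lr=lr)
    for _ in range(max_rounds):
        for labeled, unlabeled in loader:
            cnn_out = cnn(torch.cat([labeled, unlabeled]))  # A1: both image sets into the CNN
            gan_out = gan(cnn_out, labeled)                 # A1/A2: CNN output plus labeled images into the GAN
            loss = loss_fn(cnn_out, gan_out)                # A3: optimization loss from the two outputs
            if loss.item() < threshold:                     # A4/A5: below threshold, keep current parameters
                return cnn, gan
            optimizer.zero_grad()                           # A6: adjust parameters and return to A1
            loss.backward()
            optimizer.step()
    return cnn, gan
```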
In the embodiment of the present invention, step S3 specifically includes:
s31, inputting the marked cardiovascular image to be identified into a trained CNN neural network to obtain a processed image and hemodynamic parameters corresponding to the cardiovascular image, and inputting the processed image into the trained GAN neural network;
among the hemodynamic parameters are blood flow, flow rate, pressure, and flow regime.
S32, processing the input processed images through a GAN neural network to obtain a plurality of corresponding three-dimensional images;
s33, synthesizing a plurality of three-dimensional images according to the three-dimensional structural characteristics of the real heart 3D model to obtain corresponding three-dimensional structural characteristics of the images;
and S34, synthesizing the hemodynamic parameters output by the CNN neural network and the three-dimensional characteristic structure of the image obtained by the GAN neural network through a seven-dimensional model synthesis unit, and constructing a corresponding whole-heart seven-dimensional model.
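Steps S31 to S34 can be summarized by the sketch below; every interface shown (the CNN returning an image/parameter pair, the GAN mapping a processed image to candidate volumes, and the synthesis unit's fuse_volumes and combine methods) is hypothetical, not defined by the patent.

```python
def build_seven_dimensional_model(cnn, gan, synthesizer, labeled_image, heart_3d_template):
    # S31: the CNN returns a processed image plus hemodynamic parameters
    # (blood flow, flow rate, pressure, flow state).
    processed, hemodynamics = cnn(labeled_image)

    # S32: the GAN turns the processed image into several candidate 3-D images.
    candidate_volumes = gan(processed)

    # S33: fuse the candidates against the structure of the real-heart 3D model.
    structure = synthesizer.fuse_volumes(candidate_volumes, template=heart_3d_template)

    # S34: combine structure and hemodynamics into the whole-heart seven-dimensional model.
    return synthesizer.combine(structure, hemodynamics)
```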
It should be noted that, when cardiovascular images are modeled with the method of the present invention, the images are not limited to cardiovascular ultrasound: the corresponding whole-heart seven-dimensional model is also constructed from images such as CT and MR, so the method has high compatibility and can be applied to a variety of imaging examination modes.
The invention has the beneficial effects that:
(1) accurate identification of cardiovascular ultrasound images and guaranteed recognition quality: the recognition system provided by the invention achieves automatic and accurate recognition of the standard acoustic window, ensures the accuracy of acoustic-window acquisition, and makes cardiovascular ultrasound images acquired by different hospitals, or within the same hospital, well comparable;
(2) a highly compatible modeling system suitable for a variety of imaging examination modes: the whole-heart seven-dimensional model established by the method is built simultaneously from CT, MR and cardiovascular ultrasound images, breaking the traditional restriction of one examination modality per modeling system, and offering strong compatibility and universality;
(3) improved work efficiency and a lighter burden for imaging physicians: the method, based on AI algorithms, achieves automatic recognition and three-dimensional modeling for cardiovascular ultrasound, improves the image acquisition efficiency and accuracy of cardiovascular sonographers, reduces the time imaging physicians spend on reading and modeling, and greatly lightens their workload.

Claims (7)

1. The cardiovascular image recognition system based on the full-heart seven-dimensional model is characterized by comprising an image data processing subsystem, a cloud database and a full-heart seven-dimensional model construction subsystem which are sequentially connected;
the image data processing subsystem is used for processing the input cardiovascular image to be identified, accurately marking the image and uploading the image to the cloud database;
the cloud database is used for storing all accurately marked cardiovascular images;
the whole-heart seven-dimensional model building subsystem is used for building a corresponding whole-heart seven-dimensional model according to the needed cardiovascular image in the cloud database and realizing the identification of the cardiovascular image according to the built whole-heart seven-dimensional model;
the image data processing subsystem is a convolutional neural network and a support vector machine for ensemble learning;
the whole-heart seven-dimensional model building subsystem is a CNN neural network, a GAN neural network and a seven-dimensional model synthesis unit for mutual reinforcement learning;
the output end of the CNN neural network is respectively connected with the GAN neural network and the seven-dimensional model synthesis unit, and the output end of the GAN neural network is connected with the seven-dimensional model synthesis unit.
2. The cardiovascular image recognition system based on the full-heart seven-dimensional model of claim 1,
the convolutional neural network comprises a first convolutional layer, a second convolutional layer, a first pooling layer, a third convolutional layer, a fourth convolutional layer, a second pooling layer, a fifth convolutional layer, a sixth convolutional layer, a third pooling layer and a full-connection layer which are connected in sequence;
ReLU activation functions are arranged between the first convolution layer and the second convolution layer, between the second convolution layer and the first pooling layer, between the third convolution layer and the fourth convolution layer, between the fourth convolution layer and the second pooling layer, between the fifth convolution layer and the sixth convolution layer, and between the sixth convolution layer and the third pooling layer.
3. The cardiovascular image recognition system based on the full-heart seven-dimensional model of claim 1, wherein the cloud database is a DynamoDB database in NoSQL form.
4. The cardiovascular image identification method based on the whole-heart seven-dimensional model is characterized by comprising the following steps of:
s1, constructing an image data processing subsystem and training the image data processing subsystem;
s2, inputting the cardiovascular image to be recognized into a trained image data processing subsystem to obtain a correctly labeled cardiovascular image, and inputting the correctly labeled cardiovascular image into a cloud database;
s3, inputting the cardiovascular image correctly labeled in the cloud database into a trained full-heart seven-dimensional model construction subsystem, and constructing a corresponding full-heart seven-dimensional model for cardiovascular image recognition;
in step S1, the method for training the image data processing subsystem specifically includes:
inputting the cardiovascular image which is correctly labeled manually into a convolutional neural network and a support vector machine for ensemble learning, and rechecking data output after the ensemble learning, so as to adjust parameters of the convolutional neural network and the support vector machine until the labeling accuracy of the cardiovascular image by a data processing subsystem reaches 99%, and obtaining a trained image data processing subsystem;
in step S3, the method for training the full-heart seven-dimensional model building subsystem specifically includes:
a1, inputting the current correctly labeled cardiovascular image and the unlabeled cardiovascular image stored in the cloud database into a CNN neural network, and inputting the image output by the CNN neural network into a GAN neural network; simultaneously, inputting the correctly marked cardiovascular image stored in the cloud database into the GAN neural network;
a2, processing the input image through a GAN neural network, and outputting the corresponding image;
a3, calculating an optimized loss function value according to the image output by the CNN neural network and the image output by the GAN neural network;
a4, judging whether the optimization Loss function value Loss is less than a set threshold value;
if yes, go to step A5;
if not, go to step A6;
a5, taking the parameter values of the current CNN neural network and the current GAN neural network as the parameter values of the whole-heart seven-dimensional model building subsystem to obtain the trained whole-heart seven-dimensional model building subsystem;
and A6, adjusting the parameter values of the CNN neural network and the GAN neural network according to the optimized loss function value, and returning to the step A1.
5. The method for recognizing cardiovascular images based on the full-heart seven-dimensional model of claim 4, wherein the optimized loss function value in step A3 is calculated by the formula given as an image in the source (Figure FDA0003723816500000031); in the formula:
the symbol shown in Figure FDA0003723816500000032 is the loss weight;
L_i is the loss function value;
the symbol shown in Figure FDA0003723816500000033 is the weight of the j-th case in the T-th test set;
x_i is the i-th test example;
the symbol shown in Figure FDA0003723816500000034 is the generated weight value for the i-th case of the T-th test set;
y_i is the i-th prediction result;
Δ is the error of each iteration.
6. The method for recognizing cardiovascular images based on the full-heart seven-dimensional model according to claim 4, wherein the step S3 specifically comprises:
s31, inputting the marked cardiovascular image to be identified into a trained CNN neural network to obtain a processed image and hemodynamic parameters corresponding to the cardiovascular image, and inputting the processed image into the trained GAN neural network;
s32, processing the input processed images through a GAN neural network to obtain a plurality of corresponding three-dimensional images;
s33, synthesizing a plurality of three-dimensional images according to the three-dimensional structural characteristics of the real heart 3D model to obtain corresponding three-dimensional structural characteristics of the images;
and S34, synthesizing the hemodynamic parameters output by the CNN neural network and the three-dimensional characteristic structure of the image obtained by the GAN neural network through a seven-dimensional model synthesis unit, and constructing a corresponding full-heart seven-dimensional model.
7. The cardiovascular image recognition method based on the full-heart seven-dimensional model according to claim 6, wherein the hemodynamic parameters in step S31 include blood flow, flow rate, pressure and flow state.
CN202010106254.7A 2020-02-21 2020-02-21 Cardiovascular image recognition system and method based on whole-heart seven-dimensional model Expired - Fee Related CN111341420B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010106254.7A CN111341420B (en) 2020-02-21 2020-02-21 Cardiovascular image recognition system and method based on whole-heart seven-dimensional model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010106254.7A CN111341420B (en) 2020-02-21 2020-02-21 Cardiovascular image recognition system and method based on whole-heart seven-dimensional model

Publications (2)

Publication Number Publication Date
CN111341420A CN111341420A (en) 2020-06-26
CN111341420B true CN111341420B (en) 2022-08-30

Family

ID=71184121

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010106254.7A Expired - Fee Related CN111341420B (en) 2020-02-21 2020-02-21 Cardiovascular image recognition system and method based on whole-heart seven-dimensional model

Country Status (1)

Country Link
CN (1) CN111341420B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112734906B (en) * 2020-12-30 2022-08-19 华东师范大学 Three-dimensional reconstruction method of ultrasonic or CT medical image based on knowledge distillation
CN115294284B (en) * 2022-10-09 2022-12-20 南京纯白矩阵科技有限公司 High-resolution three-dimensional model generation method for guaranteeing uniqueness of generated model

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106980899A (en) * 2017-04-01 2017-07-25 北京昆仑医云科技有限公司 The deep learning model and system of flow characteristic on prediction vascular tree blood flow paths
CN107610195A (en) * 2017-07-28 2018-01-19 上海联影医疗科技有限公司 The system and method for image conversion
CN108961229A (en) * 2018-06-27 2018-12-07 东北大学 Cardiovascular OCT image based on deep learning easily loses plaque detection method and system
CN109726753A (en) * 2018-12-25 2019-05-07 脑玺(上海)智能科技有限公司 The dividing method and system of perfusion dynamic image based on time signal curve
CN110494889A (en) * 2017-03-30 2019-11-22 皇家飞利浦有限公司 Opacifying injection imaging

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10074038B2 (en) * 2016-11-23 2018-09-11 General Electric Company Deep learning medical systems and methods for image reconstruction and quality evaluation

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110494889A (en) * 2017-03-30 2019-11-22 皇家飞利浦有限公司 Opacifying injection imaging
CN106980899A (en) * 2017-04-01 2017-07-25 北京昆仑医云科技有限公司 The deep learning model and system of flow characteristic on prediction vascular tree blood flow paths
CN107610195A (en) * 2017-07-28 2018-01-19 上海联影医疗科技有限公司 The system and method for image conversion
CN108961229A (en) * 2018-06-27 2018-12-07 东北大学 Cardiovascular OCT image based on deep learning easily loses plaque detection method and system
CN109726753A (en) * 2018-12-25 2019-05-07 脑玺(上海)智能科技有限公司 The dividing method and system of perfusion dynamic image based on time signal curve

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Three-dimensional whole-heart vs. two-dimensional high-resolution perfusion-CMR: a pilot study comparing myocardial ischaemic burden";Adam K. McDiarmid等;《European Heart Journal - Cardiovascular Imaging》;20160831;第900-908页 *
State-of-the-Art Deep Learning in Cardiovascular Image Analysis;GeertLitjensPhD等;《https://doi.org/10.1016/j.jcmg.2019.06.009》;20190831;第1549-1565页 *
深度学习技术在搭建医学影像标准化平台过程中的应用价值研究;陈佳庚;《中国博士学位论文全文数据库 (医药卫生科技辑)》;20190115;第E076-7页 *

Also Published As

Publication number Publication date
CN111341420A (en) 2020-06-26

Similar Documents

Publication Publication Date Title
US11501485B2 (en) System and method for image-based object modeling using multiple image acquisitions or reconstructions
US11847781B2 (en) Systems and methods for medical acquisition processing and machine learning for anatomical assessment
US20240023927A1 (en) Three-Dimensional Segmentation from Two-Dimensional Intracardiac Echocardiography Imaging
US5889524A (en) Reconstruction of three-dimensional objects using labeled piecewise smooth subdivision surfaces
CN110807829B (en) Method for constructing three-dimensional heart model based on ultrasonic imaging
CN107392994B (en) Three-dimensional rebuilding method, device, equipment and the storage medium of coronary artery blood vessel
US11690551B2 (en) Left atrium shape reconstruction from sparse location measurements using neural networks
CN110517238B (en) AI three-dimensional reconstruction and human-computer interaction visualization network system for CT medical image
US9514530B2 (en) Systems and methods for image-based object modeling using multiple image acquisitions or reconstructions
JP2022505587A (en) CT image generation method and its equipment, computer equipment and computer programs
RU2595757C2 (en) Device to superimpose images
CN109035284A (en) Cardiac CT image dividing method, device, equipment and medium based on deep learning
CN101849813A (en) Three-dimensional cardiac ultrasonic virtual endoscope system
CN111341420B (en) Cardiovascular image recognition system and method based on whole-heart seven-dimensional model
KR20150045885A (en) Systems and methods for registration of ultrasound and ct images
CN110070612B (en) CT image interlayer interpolation method based on generation countermeasure network
JP2017500102A (en) Model-based segmentation of anatomical structures
Banerjee et al. Automated 3D whole-heart mesh reconstruction from 2D cine MR slices using statistical shape model
Sakly et al. Moving towards a 5D cardiac model
Rezaei Generative adversarial network for cardiovascular imaging
CN114864095A (en) Analysis method for blood circulation change of narrow coronary artery under combination of multiple exercise strengths
US11786212B1 (en) Echocardiogram classification with machine learning
Du et al. Morphology reconstruction of obstructed coronary artery in angiographic images
da Silva Corado Echocardiography Automatic Image Quality Enhancement Using Generative Adversarial Networks
Ford et al. Heartpad: real-time visual guidance for cardiac ultrasound

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220830