CN116130090A - Ejection fraction measuring method and device, electronic device, and storage medium
- Publication number
- CN116130090A (application number CN202310124229.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- queue
- left ventricle
- sequence
- connecting line
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/06—Measuring blood flow
- A61B8/065—Measuring blood flow to determine blood output from the heart
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/52—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/5215—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
- A61B8/5223—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for extracting a diagnostic or physiological parameter from medical diagnostic data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/778—Active pattern-learning, e.g. online learning of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10132—Ultrasound image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30048—Heart; Cardiac
Abstract
The application discloses an ejection fraction measuring method and device, an electronic device, and a storage medium. The method comprises: acquiring an ultrasonic heart map sequence frame; inputting each frame image into a pre-trained deep neural network model to obtain a left ventricle image sequence frame; calculating the pixel area of each frame image in the left ventricle image sequence frame to obtain a left ventricle pixel area change sequence; determining an end diastole image and an end systole image based on the left ventricle pixel area change sequence; and calculating an ejection fraction based on the end diastole image and the end systole image. By analyzing and processing the ultrasonic heart map sequence frames, the method measures the ejection fraction automatically and avoids the time-consuming, labor-intensive manual measurement of the traditional approach. It also improves the accuracy with which left ventricular end diastole and end systole are defined, avoiding the poor accuracy of the traditional practice of defining them from the QRS wave of an electrocardiogram.
Description
Technical Field
The application belongs to the technical field of medical image processing, and particularly relates to a method and a device for measuring ejection fraction, electronic equipment and a storage medium.
Background
Heart disease is one of the major diseases that seriously threaten human health and life; heart failure is the final stage of heart disease development. Echocardiographic ejection fraction measurement is a noninvasive method for diagnosing heart failure and assessing cardiac function, and is widely used in heart disease patients to monitor the progression of failure and to evaluate cardiac function.
Echocardiography measures the ejection fraction (EF) from the left ventricle (LV) volume: the left ventricular volume is measured at end diastole (EDV) and at end systole (ESV), and the ejection fraction is calculated by the formula EF = (EDV - ESV) / EDV × 100%. In the prior art, a two-dimensional echocardiogram, typically an apical four-chamber or apical two-chamber view, is used to measure the ejection fraction: an end diastole frame of the patient is taken, the left ventricle boundary is traced by a dotting method, EDV is calculated by the disk superposition method, and the end systole volume ESV is then calculated in the same way. This method is time consuming and laborious; moreover, end diastole and end systole of the left ventricle are defined based on the QRS wave of the electrocardiogram, which cannot necessarily locate the true end diastole or end systole, so the accuracy of the image-based ejection fraction measurement is not necessarily high.
Disclosure of Invention
The invention aims to provide an ejection fraction measuring method and device, an electronic device, and a storage medium, to solve the technical problems that the ejection fraction measurement schemes in the prior art are time-consuming and labor-consuming, cannot reliably locate the end diastole or end systole of the left ventricle, and therefore cannot guarantee the accuracy of image-based ejection fraction measurement.
In order to achieve the above purpose, a technical scheme adopted in the application is as follows:
there is provided a method of ejection fraction measurement, comprising:
acquiring an ultrasonic heart map sequence frame;
inputting each frame image in the ultrasonic heart map sequence frame into a pre-trained deep neural network model to obtain a left ventricle image sequence frame;
calculating the pixel area of each frame image in the left ventricle image sequence frame to obtain a left ventricle pixel area change sequence;
determining end diastole and end systole images from the left ventricular image sequence frames based on the left ventricular pixel area variation sequence;
based on the end diastole and end systole images, an ejection fraction is calculated.
In one or more embodiments, the step of inputting each frame image in the ultrasound cardiac image sequence frame into a pre-trained deep neural network model to obtain a left ventricle image sequence frame includes:
carrying out homogenization treatment on all images in the ultrasonic heart map sequence frame;
inputting each frame image in the ultrasonic heart map sequence frame into a pre-trained first deep neural network model to obtain a left ventricle rectangular image sequence frame;
and inputting each frame image in the left ventricle rectangular image sequence frame into a pre-trained second deep neural network model to obtain the left ventricle image sequence frame.
In one or more embodiments, the training method of the first deep neural network model includes:
acquiring a first sample target set and dividing the first sample target set into a first sample training set and a first sample verification set, wherein the first sample target set comprises a plurality of first ultrasonic heart maps in which the left ventricle has been marked with a rectangle whose boundary circumscribes the left ventricle boundary;
training the first deep neural network model with the first sample training set;
and verifying the effectiveness of the first deep neural network model by using the first sample verification set so as to obtain optimal model parameters.
In one or more embodiments, the training the first deep neural network model with the first sample training set includes:
carrying out homogenization treatment on all images in the first sample training set;
performing image space transformation on each image in the first sample training set, and collecting the images obtained after transformation into the first sample training set to obtain a first amplified sample training set;
training the first deep neural network model with the first amplified sample training set.
In one or more embodiments, the training method of the second deep neural network model includes:
acquiring a second sample target set and dividing the second sample target set into a second sample training set and a second sample verification set, wherein the second sample target set comprises a plurality of second ultrasonic heart maps, each being an image in the left ventricle rectangular image sequence frame with the left ventricle boundary marked;
training the second deep neural network model with the second sample training set;
and verifying the effectiveness of the second deep neural network model by using the second sample verification set so as to obtain optimal model parameters.
In one or more embodiments, the training the second deep neural network model with the second sample training set includes:
carrying out homogenization treatment on all images in the second sample training set;
performing image space transformation on each image in the second sample training set, and collecting the images obtained after transformation into the second sample training set to obtain a second amplified sample training set;
training the second deep neural network model with the second amplified sample training set.
In one or more embodiments, the step of determining end diastole and end systole images from the left ventricular image sequence frame based on the left ventricular pixel area variation sequence comprises:
traversing the left ventricle pixel area change sequence to remove jump values;
selecting a sequence segment positioned in the same heartbeat period from the left ventricle pixel area change sequence, and sorting according to the size to obtain an area maximum value and an area minimum value;
and selecting an image corresponding to the area maximum value from the left ventricle image sequence frame as an end diastole image, and selecting an image corresponding to the area minimum value from the left ventricle image sequence frame as an end systole image.
In one or more embodiments, the step of calculating an ejection fraction based on the end diastole and end systole images comprises:
obtaining boundary point coordinates of the end diastole image to obtain a first point set, and obtaining boundary point coordinates of the end systole image to obtain a second point set;
acquiring a first queue and a second queue based on the first point set and the second point set, wherein the first queue comprises the connection line of any two boundary points in the first point set, and the second queue comprises the connection line of any two boundary points in the second point set;
traversing the length of each connecting line in the first queue to obtain a first longest connecting line, and collecting each pixel point through which the first longest connecting line passes to obtain a third point set;
traversing the length of each connecting line in the second queue to obtain a second longest connecting line, and collecting each pixel point through which the second longest connecting line passes to obtain a fourth point set;
based on the third point set and the first queue, a third queue is obtained, wherein the third queue comprises a connecting line which is intersected with any pixel point in the third point set in the first queue and is perpendicular to the first longest connecting line;
a fourth queue is obtained based on the fourth point set and the second queue, and the fourth queue comprises a connecting line which is intersected with any pixel point in the fourth point set in the second queue and is perpendicular to the second longest connecting line;
based on the third queue and the fourth queue, an ejection fraction is calculated.
In one or more embodiments, the calculating the ejection fraction based on the third queue and the fourth queue includes:
traversing the pixel length of each connecting line in the third queue, and substituting the pixel lengths into a formula to calculate an end diastole volume, wherein a_i (i = 1…N) is the pixel length of the i-th connecting line in the third queue and LV_s is the end diastole volume;
traversing the pixel length of each connecting line in the fourth queue, and substituting the pixel lengths into a formula to calculate an end systole volume, wherein b_j (j = 1…M) is the pixel length of the j-th connecting line in the fourth queue and LV_o is the end systole volume;
based on the end diastole volume and the end systole volume, an ejection fraction is calculated.
In one or more embodiments, the step of calculating the ejection fraction based on the third queue and the fourth queue further comprises:
acquiring a plurality of standard left ventricular ultrasound images with known left ventricular volumes;
obtaining boundary point coordinates of the standard left ventricle ultrasonic image to obtain a fifth point set;
acquiring a fifth queue based on the fifth point set, wherein the fifth queue comprises connecting lines of any two boundary points in the fifth point set;
traversing the length of each connecting line in the fifth queue to obtain a third longest connecting line, and collecting each pixel point through which the third longest connecting line passes to obtain a sixth point set;
a sixth queue is obtained based on the sixth point set and the fifth queue, wherein the sixth queue comprises a connecting line which is intersected with any pixel point in the sixth point set and is perpendicular to the third longest connecting line in the fifth queue;
traversing the pixel length of each connecting line in the sixth queue, and substituting the pixel lengths into the machine learning model constructed from the volume formula; taking the left ventricle volume of the standard left ventricle ultrasonic image as the expected output, the machine learning model is trained to obtain optimal model parameters, wherein c_x (x = 1…M) is the pixel length of the x-th connecting line in the sixth queue.
In order to achieve the above purpose, another technical scheme adopted in the application is as follows:
there is provided an ejection fraction measurement device including:
the acquisition module is used for acquiring an ultrasonic heart map sequence frame;
the segmentation module is used for inputting each frame of image in the ultrasonic heart map sequence frame into a pre-trained deep neural network model to obtain a left ventricle image sequence frame;
the first calculation module is used for calculating the pixel area of each frame of image in the left ventricle image sequence frame to obtain a left ventricle pixel area change sequence;
a determining module for determining end diastole and end systole images from the left ventricular image sequence frame based on the left ventricular pixel area variation sequence;
and a second calculation module for calculating ejection fraction based on the end diastole image and the end systole image.
In order to achieve the above object, another technical solution adopted in the present application is:
there is provided an electronic device including:
at least one processor; and
a memory storing instructions that, when executed by the at least one processor, cause the at least one processor to perform the ejection fraction measurement method of any one of the embodiments described above.
In order to achieve the above object, another technical solution adopted in the present application is:
there is provided a machine readable storage medium storing executable instructions that when executed cause the machine to perform a method of ejection fraction measurement as described in any one of the embodiments above.
Compared with the prior art, the beneficial effects of the application are as follows:
according to the method, the ultrasonic heart map sequence frames are analyzed and processed, so that the measurement of the ejection fraction is realized, and the problem that the traditional manual measurement method is time-consuming and labor-consuming is effectively avoided;
according to the application, each image in the ultrasonic heart map sequence frame is segmented to obtain a left ventricle image sequence frame, and the pixel area of each image is calculated to obtain a left ventricle pixel area change sequence. From this sequence, the maximum and minimum of the left ventricle pixel area can be obtained, and the end diastole volume EDV and the end systole volume ESV are calculated with the maximum and minimum as references, respectively. This effectively improves the accuracy with which left ventricular end diastole and end systole are defined, and avoids the poor accuracy of the traditional approach of defining them from the QRS wave of the electrocardiogram;
the method first segments a rectangular image containing the left ventricle from each image in the ultrasonic heart map sequence frame, and then segments the left ventricle from the rectangular image; this dual-model cascaded segmentation effectively reduces the amount of computation and improves recognition efficiency.
Drawings
FIG. 1 is a schematic illustration of one scenario in which the methods and apparatus for measuring ejection fraction according to embodiments of the present application are applied;
FIG. 2 is a flow chart of an embodiment of a method for measuring ejection fraction according to the present application;
FIG. 3 is an image of a sequence frame of an ultrasound cardiac map in an embodiment of the present application;
FIG. 4 is a flowchart of an embodiment corresponding to the step S200 in FIG. 2;
FIG. 5 is a schematic view of segmentation of images in a sequence of frames of an ultrasound cardiac map in an embodiment of the present application;
FIG. 6 is a flowchart of an embodiment corresponding to the step S400 in FIG. 2;
FIG. 7 is a flowchart of the step S500 in FIG. 2;
FIG. 8 is a flowchart of the step S507 in FIG. 7;
FIG. 9 is a schematic diagram of an embodiment of an ejection fraction measurement device of the present application;
fig. 10 is a schematic structural diagram of an embodiment of the electronic device of the present application.
Detailed Description
The present application will be described in detail with reference to the embodiments shown in the drawings. These embodiments are not intended to be limiting; structural, methodological, or functional changes made by those of ordinary skill in the art in light of the embodiments are included within the scope of the present application.
As described in the background art, the current method for measuring ejection fraction is to manually trace the boundary of the left ventricle from the end diastole cardiogram and the end systole cardiogram of the patient by a dotting method, then calculate the end diastole volume EDV and the end systole volume ESV by a disk superposition method, and then calculate the ejection fraction based on the formulas.
The measurement method is time consuming and laborious, and end diastole and end systole are defined based on the QRS wave of the electrocardiogram: the R wave peak is taken as end diastole of the left ventricle and the T wave end point as end systole of the left ventricle. This does not necessarily locate the true end diastole or end systole of the left ventricle accurately.
In order to solve the problems, the applicant develops a method for measuring the ejection fraction based on an ultrasonic heart map, which is mainly applied to an artificial intelligence medical auxiliary scene, and is particularly used for analyzing an input ultrasonic image video of a patient and outputting the ejection fraction of the patient, so that the efficiency of measuring the ejection fraction is effectively improved, and the accuracy of measurement is ensured.
Specifically, referring to fig. 1, fig. 1 is a schematic view of a scenario in which the methods and apparatuses for measuring ejection fraction according to embodiments of the present application are applied. As shown in FIG. 1, a large amount of cardiac image videos can be acquired by a cardiac ultrasonic detection device or a palm-top cardiac ultrasonic device, and in the embodiment of the application, the cardiac ultrasonic detection device or the palm-top cardiac ultrasonic device can adopt ultrasonic echo Doppler imaging and mainly utilize continuously acquired ultrasonic cardiac image sequence frames to achieve the purpose of measuring the ejection fraction.
In this scenario, the cardiac ultrasound detection device or the palm-top cardiac ultrasound device may send the ultrasound cardiac map sequence frames to a server, analyze the ultrasound cardiac map sequence frames via a machine learning model pre-trained in the server, and send the measurement of the ejection fraction back to the cardiac ultrasound detection device, the palm-top cardiac ultrasound device, and/or other illustrated terminal devices in the scenario.
It can be understood that the cardiac ultrasonic detection device, the palm-top cardiac ultrasonic device, the server, and the terminal device included in the scene can be independent devices, or can be integrated into the same system; this is not limited herein.
Referring to fig. 2, fig. 2 is a flow chart illustrating an embodiment of a method for measuring ejection fraction according to the present application. The measuring method comprises the following steps:
S100, acquiring an ultrasonic heart map sequence frame.
In particular, the ultrasound cardiac map sequence frame is a temporally ordered sequence of real-time images of the heart, which may be, for example, a video file of an ultrasound examination of the heart.
In this embodiment, the ultrasound cardiac map sequence frame may include two-dimensional ultrasound cardiac maps, in particular apical four-chamber or apical two-chamber views. Referring to fig. 3, fig. 3 is an image of an ultrasound cardiac image sequence frame in an embodiment of the present application.
In other embodiments, the ultrasound cardiac map sequence frame may also include other types of cardiac imaging files such as magnetic resonance imaging, which can achieve the effects of this embodiment.
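By way of a non-limiting illustration, such a sequence frame could be loaded from a video file with OpenCV as sketched below; the file name and the grayscale conversion are assumptions of this sketch, not requirements of the application:

```python
import cv2

def load_sequence(video_path: str) -> list:
    """Read an ultrasound clip into a list of grayscale frames
    (the ultrasonic heart map sequence frame of this embodiment)."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Single-channel intensity images are sufficient for the method.
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    cap.release()
    return frames

# Hypothetical input file; any apical four- or two-chamber clip would do.
sequence = load_sequence("apical_four_chamber.avi")
```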
S200, inputting each frame image in the ultrasonic heart map sequence frame into a pre-trained deep neural network model to obtain a left ventricle image sequence frame.
Each image in the sequence of frames of the ultrasound cardiac map may be segmented using a deep neural network model, and an image of the left ventricular region of interest is extracted from the image, thereby obtaining a sequence of frames of left ventricular images.
In the present embodiment, the left ventricle is used as a reference to measure the ejection fraction, and in other embodiments, other ventricles in which the ejection fraction can be measured may be selected, thereby achieving the effects of the present embodiment.
In one embodiment, referring to fig. 4, fig. 4 is a flow chart of an embodiment corresponding to step S200 in fig. 2.
The method for acquiring the left ventricle image sequence frame can comprise the following steps:
S201, carrying out homogenization processing on all images in the ultrasonic heart map sequence frame.
To prevent differences in brightness, contrast, and the like, caused by different image acquisition environments, from affecting the final segmentation, all images in the sequence frames are homogenized before segmentation, so that the gray scale of every image is adjusted into the range 0-255.
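A minimal sketch of this homogenization, assuming a simple linear min-max rescaling to the 0-255 range (the application does not fix the exact normalization):

```python
import numpy as np

def homogenize(image: np.ndarray) -> np.ndarray:
    """Linearly rescale intensities so every frame spans 0-255."""
    lo, hi = float(image.min()), float(image.max())
    if hi == lo:
        # Constant frame: nothing to rescale.
        return np.zeros_like(image, dtype=np.uint8)
    scaled = (image.astype(np.float32) - lo) * (255.0 / (hi - lo))
    return scaled.astype(np.uint8)
```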
S202, inputting each frame image in the ultrasonic heart map sequence frame into a pre-trained first deep neural network model to obtain a left ventricle rectangular image sequence frame.
S203, inputting each frame image in the left ventricle rectangular image sequence frame into a pre-trained second deep neural network model to obtain the left ventricle image sequence frame.
Specifically, a rectangular region image including a left ventricle can be firstly segmented from each frame image in the ultrasonic heart map sequence frame, and a left ventricle rectangular image sequence frame can be obtained; and then, segmenting the left ventricle image from each frame image in the left ventricle rectangular image sequence frame, thereby obtaining the left ventricle image sequence frame.
Referring to fig. 5, fig. 5 is a schematic view illustrating image segmentation in a sequence of frames of an ultrasound cardiac map according to an embodiment of the present application.
As shown, a rectangular image including the left ventricle is segmented from images in the ultrasound cardiac image sequence frame, and then the left ventricle is segmented from the rectangular image.
By arranging double-model superposition segmentation, the calculated amount can be effectively reduced, and the recognition efficiency is improved.
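The two-stage cascade can be sketched as follows; box_model and mask_model are hypothetical stand-ins for the trained first and second networks, whose concrete architectures the application leaves open:

```python
def cascade_segment(frames, box_model, mask_model):
    """Two-stage segmentation: locate the left-ventricle bounding
    rectangle with the first model, then segment only inside the crop
    with the second model (hypothetical model callables)."""
    masks = []
    for frame in frames:
        x, y, w, h = box_model(frame)     # stage 1: LV bounding rectangle
        crop = frame[y:y + h, x:x + w]    # far smaller than the full frame
        masks.append(mask_model(crop))    # stage 2: binary LV mask
    return masks
```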
Specifically, the first deep neural network model and the second deep neural network model may be built on recursive algorithms, convolutional algorithms, decision trees, or other common machine learning approaches.
In one embodiment, a method of training a first deep neural network model may include:
acquiring a first sample target set and dividing the first sample target set into a first sample training set and a first sample verification set, wherein the first sample target set comprises a plurality of first ultrasonic heart maps in which the left ventricle has been marked with a rectangle whose boundary circumscribes the left ventricle boundary;
training a first deep neural network model with a first sample training set;
and verifying the effectiveness of the first deep neural network model by using the first sample verification set so as to obtain optimal model parameters.
It can be appreciated that by marking the left ventricle on the ultrasound heart map with a rectangle circumscribing the left ventricle boundary, enough samples can be obtained to obtain a first sample target set, and the first sample training set and the first sample verification set are obtained by dividing the first sample target set to respectively perform training and verification of the model, so that optimal model parameters can be obtained.
The data volume ratio of the first sample training set to the first sample verification set may be 8:2.
To further enhance the segmentation effect of the model, the step of training the first deep neural network model with the first sample training set may include:
carrying out homogenization treatment on all images in the first sample training set;
performing image space transformation on each image in the first sample training set, and collecting the images obtained after transformation into the first sample training set to obtain a first amplified sample training set;
training the first deep neural network model with a first amplified sample training set.
First, all images in the first sample training set are homogenized, so that differences in brightness and contrast caused by different image acquisition environments do not affect the training of the model.
Each image is then subjected to spatial transformations such as translation, transposition, mirroring, rotation, and scaling, and the transformed images are added to the first sample training set, yielding more training data.
Training the first deep neural network model on the amplified first sample training set effectively improves the segmentation effect of the model on images at different angles.
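The spatial transformations named above might be generated as in the following sketch (OpenCV-based, single-channel images assumed; the shift offset, rotation angle, and scale factor are illustrative values only):

```python
import cv2
import numpy as np

def augment(image: np.ndarray) -> list:
    """Return spatially transformed copies of a grayscale image:
    translation, transposition, mirroring, rotation, and scaling."""
    h, w = image.shape[:2]
    shift = cv2.warpAffine(image, np.float32([[1, 0, 10], [0, 1, 10]]), (w, h))
    rot = cv2.warpAffine(image, cv2.getRotationMatrix2D((w / 2, h / 2), 15, 1.0), (w, h))
    scale = cv2.resize(image, None, fx=1.1, fy=1.1)
    return [image.T, cv2.flip(image, 1), shift, rot, scale]
```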
In one embodiment, the training method of the second deep neural network model may include:
acquiring a second sample target set and dividing the second sample target set into a second sample training set and a second sample verification set, wherein the second sample target set comprises a plurality of second ultrasonic heart maps, and the second ultrasonic heart maps comprise images in left ventricle rectangular image sequence frames marked with left ventricle boundaries;
training the second deep neural network model by using the second sample training set;
and verifying the effectiveness of the second deep neural network model by using the second sample verification set so as to obtain optimal model parameters.
It can be appreciated that after the left ventricle rectangular image sequence frames are obtained based on the first deep neural network model, the left ventricle boundary of each image in the left ventricle rectangular image sequence frame can be marked, so that enough samples are obtained for the second sample target set.
Dividing the second sample target set yields a second sample training set and a second sample verification set for training and verifying the model, respectively, so that optimal model parameters can be obtained.
The data volume ratio of the second sample training set to the second sample verification set may be 8:2.
To further enhance the segmentation effect of the model, the step of training the second deep neural network model with the second sample training set may include:
carrying out homogenization treatment on all images in the second sample training set;
performing image space transformation on each image in the second sample training set, and collecting the images obtained after transformation into the second sample training set to obtain a second amplified sample training set;
training the second deep neural network model with the second amplified sample training set.
First, all images in the second sample training set are homogenized, so that differences in brightness, contrast, and the like caused by different image acquisition environments do not affect the training of the model.
And then performing image space transformation such as translation, transposition, mirroring, rotation, scaling and the like on the image, and collecting the transformed image into a second sample training set so as to obtain more training data.
Training the second deep neural network model on the amplified second sample training set can effectively improve the segmentation effect of the model on images at different angles.
S300, calculating the pixel area of each frame image in the left ventricle image sequence frame to obtain a left ventricle pixel area change sequence.
After the left ventricle image sequence frame is obtained, the pixel area of each frame image can be calculated, yielding a left ventricle pixel area change sequence ordered in time.
In one embodiment, the pixel area of each image may be calculated pixelwise, i.e., by traversing the image: each left ventricle pixel is marked as 1 during the traversal, and adding all the marks after the traversal is completed gives the pixel area of the image. Repeating this for every frame yields the left ventricle pixel area change sequence.
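A sketch of this area computation on a binary left-ventricle mask (counting foreground pixels is equivalent to marking each as 1 and summing):

```python
import numpy as np

def pixel_area(mask: np.ndarray) -> int:
    """Number of left-ventricle pixels in one binary mask."""
    return int(np.count_nonzero(mask))

# The area change sequence over the whole left ventricle image sequence frame:
# area_sequence = [pixel_area(m) for m in masks]
```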
In other embodiments, the pixel area of each image may be calculated in other ways, and the effects of this embodiment can be achieved.
S400, determining end diastole images and end systole images from the left ventricle image sequence frames based on the left ventricle pixel area change sequence.
After the left ventricle pixel area change sequence is obtained, a sequence segment positioned in the same heartbeat period can be selected from the sequence, and a maximum value and a minimum value, namely an area maximum value and an area minimum value, are selected from the sequence.
The image in the sequence frame corresponding to the area maximum value may be considered as an end diastole image and the image in the sequence frame corresponding to the area minimum value may be considered as an end systole image.
It can be appreciated that the end diastole volume EDV and the end systole volume ESV can be obtained by directly processing and analyzing the acquired cardiac images, so that the end diastole and the end systole of the left ventricle can be effectively positioned, the accuracy of data is ensured, and the accuracy of measuring the ejection fraction is ensured.
Specifically, in one embodiment, referring to fig. 6, fig. 6 is a flow chart of an embodiment corresponding to step S400 in fig. 2.
The method of determining end diastole and end systole images may comprise:
S401, traversing the left ventricle pixel area change sequence to eliminate jump values.
Jump values in the left ventricle pixel area change sequence, i.e., isolated values abruptly smaller or larger than the values on both sides, can be eliminated so that they do not affect the result.
S402, selecting a sequence segment positioned in the same heartbeat period from the left ventricle pixel area change sequence, and sorting according to the size to obtain an area maximum value and an area minimum value.
Since the left ventricle pixel area change sequence is periodic, a segment covering one heartbeat period can be selected from it. The values in the segment are then sorted by size to obtain the area maximum and the area minimum.
S403, selecting an image corresponding to the area maximum value from the left ventricle image sequence frame as an end diastole image, and selecting an image corresponding to the area minimum value from the left ventricle image sequence frame as an end systole image.
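Steps S401 to S403 might be sketched as follows; the 20% spike threshold is an assumption of this illustration, since the application does not fix a criterion for jump values:

```python
import numpy as np

def remove_jumps(areas, tol=0.2):
    """Replace isolated spikes (values far above or below both
    neighbours) with the mean of their neighbours (S401)."""
    a = np.asarray(areas, dtype=np.float64)
    for i in range(1, len(a) - 1):
        local = 0.5 * (a[i - 1] + a[i + 1])
        if local > 0 and abs(a[i] - local) > tol * local:
            a[i] = local
    return a

def ed_es_indices(areas, start, stop):
    """Within one heartbeat period [start, stop), the largest area marks
    end diastole and the smallest marks end systole (S402-S403)."""
    segment = areas[start:stop]
    return start + int(segment.argmax()), start + int(segment.argmin())
```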
S500, calculating ejection fraction based on the end diastole image and the end systole image.
After the end diastole and end systole images are determined, the end diastole and end systole volumes can be calculated, respectively, to calculate the ejection fraction.
Specifically, referring to fig. 7, fig. 7 is a flow chart of an embodiment corresponding to step S500 in fig. 2.
The step of calculating the ejection fraction includes:
S501, obtaining boundary point coordinates of an end diastole image to obtain a first point set, and obtaining boundary point coordinates of an end systole image to obtain a second point set.
First, boundary points of the image may be acquired based on the end diastole image and the end systole image, respectively, thereby forming a first set of points and a second set of points.
The boundary points can be selected based on a preset number, for example, 500 boundary points on the boundary of the image; alternatively, they may be selected on a pixel basis, for example, taking each pixel point through which the boundary line of the image passes as one boundary point. Either choice achieves the effect of this embodiment.
S502, acquiring a first queue and a second queue based on the first point set and the second point set.
Specifically, the first queue includes a connection line of any two boundary points in the first point set, and the second queue includes a connection line of any two boundary points in the second point set.
After the first point set is obtained, the boundary points in the first point set can be connected in pairs, so that the combination of the boundary point connection lines, namely the first queue, is obtained.
Correspondingly, after the second point set is acquired, the boundary points in the second point set can be connected in pairs, so that the combination of the boundary point connection lines, namely a second queue, is obtained.
S503, traversing the length of each connecting line in the first queue to obtain a first longest connecting line, and collecting each pixel point through which the first longest connecting line passes to obtain a third point set.
Specifically, the length of each link in the first queue may be calculated based on the coordinates of the boundary points, thereby obtaining the first longest link.
The first longest connecting line can be used as a central line of the left ventricle image, and the coordinates of each pixel point through which the first longest connecting line passes can be obtained based on the coordinates of the first longest connecting line, so that a third point set is obtained.
S504, traversing the length of each connecting line in the second queue to obtain a second longest connecting line, and collecting each pixel point through which the second longest connecting line passes to obtain a fourth point set.
Specifically, the fourth point set may be obtained in the same manner as the third point set, and based on the length of each line in the second queue, the second longest line is obtained, so as to obtain the fourth point set including coordinates of each pixel point through which the second longest line passes.
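Steps S503 and S504 reduce to finding the longest boundary-to-boundary connecting line and collecting the pixels it passes through; one way this could look, assuming the boundary points are given as a (K, 2) integer array:

```python
import numpy as np

def longest_chord(points: np.ndarray):
    """Endpoints of the longest connecting line between boundary points.
    points: (K, 2) array of (x, y) coordinates."""
    diffs = points[:, None, :] - points[None, :, :]
    sq = np.einsum('ijk,ijk->ij', diffs, diffs)      # K x K squared lengths
    i, j = np.unravel_index(sq.argmax(), sq.shape)
    return points[i], points[j]

def pixels_on_line(p0, p1) -> np.ndarray:
    """Every pixel the segment from p0 to p1 passes through
    (simple dense-sampling rasterization)."""
    n = int(max(abs(p1[0] - p0[0]), abs(p1[1] - p0[1]))) + 1
    xs = np.rint(np.linspace(p0[0], p1[0], n)).astype(int)
    ys = np.rint(np.linspace(p0[1], p1[1], n)).astype(int)
    return np.unique(np.stack([xs, ys], axis=1), axis=0)
```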
S505, obtaining a third queue based on the third point set and the first queue.
The third queue includes a line intersecting any pixel point in the third set of points in the first queue and perpendicular to the first longest line.
The coordinates of the pixel points in the third point set are known, the coordinates of each connecting line in the first queue are known, and the coordinates of the first longest connecting line are known, so that the connecting line perpendicular to the first longest connecting line can be found in the first queue.
Meanwhile, each pixel point corresponds to a connecting line perpendicular to the first longest connecting line in the first queue, so that a third queue is obtained.
It can be appreciated that by making a perpendicular to each pixel point on the first longest line, the set of all perpendicular lines can cover the entire left ventricle, ensuring accuracy in the subsequent volume calculation.
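Building the third queue (and, identically, the fourth) could then be sketched as below, reusing pixels_on_line from the previous sketch; the perpendicularity tolerance is an assumption required on a discrete pixel grid, where exactly perpendicular chords through every axis pixel rarely exist:

```python
import numpy as np

def perpendicular_chords(axis_pixels, chords, axis_p0, axis_p1, tol=0.05):
    """Keep the connecting lines that pass through a pixel of the long
    axis and are (near-)perpendicular to it.
    chords: iterable of (q0, q1) endpoint pairs from the queue."""
    axis_dir = np.asarray(axis_p1, float) - np.asarray(axis_p0, float)
    axis_dir /= np.linalg.norm(axis_dir)
    pixel_set = {(int(x), int(y)) for x, y in axis_pixels}
    kept = []
    for q0, q1 in chords:
        d = np.asarray(q1, float) - np.asarray(q0, float)
        d /= np.linalg.norm(d)
        if abs(np.dot(d, axis_dir)) > tol:     # not perpendicular enough
            continue
        if any((int(x), int(y)) in pixel_set
               for x, y in pixels_on_line(q0, q1)):
            kept.append((q0, q1))
    return kept
```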
S506, obtaining a fourth queue based on the fourth point set and the second queue.
The fourth queue comprises a connecting line which is intersected with any pixel point in the fourth point set in the second queue and is perpendicular to the second longest connecting line.
S507, calculating to obtain ejection fraction based on the third queue and the fourth queue.
After the third and fourth queues are obtained, the end diastole and end systole volumes may be calculated based on the third and fourth queues, respectively, to calculate the ejection fraction.
Specifically, referring to fig. 8, fig. 8 is a flow chart of an embodiment corresponding to step S507 in fig. 7.
Based on the third queue and the fourth queue, the step of calculating the ejection fraction includes:
S5071, traversing the pixel length of each connecting line in the third queue, and substituting the pixel lengths into the end diastole volume formula, wherein a_i (i = 1…N) is the pixel length of the i-th connecting line in the third queue and LV_s is the end diastole volume.
In one application scenario, the exponent n in the formula may be equal to 1; in other scenarios with higher precision requirements, n can take any value larger than 1, and the effects of this embodiment can still be achieved.
S5072, traversing the pixel length of each connecting line in the fourth queue, and substituting the pixel lengths into the end systole volume formula, wherein b_j (j = 1…M) is the pixel length of the j-th connecting line in the fourth queue and LV_o is the end systole volume.
Here too, n may be equal to 1 in one application scenario; in scenarios with higher precision requirements, n can take any value larger than 1, and the effects of this embodiment can still be achieved.
First, the circular cross-sectional area corresponding to each connecting line is calculated from its pixel length, and all these areas are added to obtain the volume of a three-dimensional structure whose cross sections are circles. The shape of this structure approximates the left ventricle, but with a considerable error. The volume is therefore corrected with terms formed from powers of the connecting line pixel lengths multiplied by parameters, making it closer to the real left ventricle volume and ensuring the accuracy of the calculated end diastole volume and end systole volume.
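The application gives the exact volume formula only as an image that is not reproduced in this text; the sketch below therefore assumes the plain method-of-disks reading of the description above, with a single multiplicative correction parameter k:

```python
import numpy as np

def disk_volume(chord_lengths, k=1.0):
    """Method-of-disks estimate: each perpendicular chord of pixel
    length a_i is treated as the diameter of a circular cross-section
    of unit pixel thickness; k is the correction parameter."""
    a = np.asarray(chord_lengths, dtype=np.float64)
    return float(k * np.sum(np.pi * (a / 2.0) ** 2))
```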
The parameter k may be a default value, or may be adjusted according to users of different ages, sexes, and weights. In particular, in order to ensure the accuracy of the parameters, the optimal parameters can also be obtained by a machine learning method.
Specifically, the step of obtaining the optimal parameters may include:
acquiring a plurality of standard left ventricular ultrasound images with known left ventricular volumes;
obtaining boundary point coordinates of a standard left ventricle ultrasonic image to obtain a fifth point set;
based on the fifth point set, a fifth queue is obtained, wherein the fifth queue comprises connecting lines of any two boundary points in the fifth point set;
traversing the length of each connecting line in the fifth queue to obtain a third longest connecting line, and collecting each pixel point through which the third longest connecting line passes to obtain a sixth point set;
obtaining a sixth queue based on the sixth point set and the fifth queue, wherein the sixth queue comprises a connecting line which is intersected with any pixel point in the sixth point set in the fifth queue and is perpendicular to a third longest connecting line;
traversing the pixel length of each connecting line in the sixth queue, and substituting the pixel lengths into the machine learning model constructed from the volume formula; taking the left ventricle volume of a standard left ventricle ultrasonic image as the expected output, the machine learning model is trained to obtain optimal model parameters, wherein c_x (x = 1…M) is the pixel length of the x-th connecting line in the sixth queue.
It will be appreciated that by substituting standard left ventricular ultrasound images of known left ventricular volume into the formula and training the machine learning model with that volume as the desired output, the optimal model parameters can be obtained, improving the accuracy of the end diastole and end systole volume calculations.
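As one concrete (assumed) instance of such training, a single multiplicative k can be fitted in closed form by least squares against the known volumes, using disk_volume from the sketch above:

```python
import numpy as np

def fit_k(chord_length_sets, known_volumes):
    """Least-squares fit of k so that k * raw_volume best matches the
    known left-ventricle volumes of the standard images."""
    raw = np.array([disk_volume(c, k=1.0) for c in chord_length_sets])
    vols = np.asarray(known_volumes, dtype=np.float64)
    return float(np.dot(raw, vols) / np.dot(raw, raw))
```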
S5073, calculating to obtain ejection fraction based on the end diastole volume and the end systole volume.
Specifically, the end diastole volume and the end systole volume may be substituted into the formula EF = (LV_s - LV_o) / LV_s × 100% to obtain the ejection fraction.
In one application scenario, the ejection fraction may also be obtained directly from the pixel lengths of the connecting lines in the third queue and the fourth queue: substituting those pixel lengths into the combined formula yields the ejection fraction in a single step, and the effect of this embodiment can likewise be realized.
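Combining the pieces, the final computation reduces to the EF formula quoted above; a sketch:

```python
def ejection_fraction(edv: float, esv: float) -> float:
    """EF = (EDV - ESV) / EDV × 100%."""
    return (edv - esv) / edv * 100.0

# e.g., with the disk-volume sketches above:
# ef = ejection_fraction(disk_volume(a_lengths, k), disk_volume(b_lengths, k))
```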
The present application further provides an ejection fraction measurement device, referring to fig. 9, fig. 9 is a schematic structural diagram of an embodiment of the ejection fraction measurement device of the present application.
The device comprises an acquisition module 21, a segmentation module 22, a first calculation module 23, a determination module 24 and a second calculation module 25.
Wherein, the acquisition module 21 is used for acquiring an ultrasonic heart map sequence frame; the segmentation module 22 is configured to input each frame of image in the sequence of frames of the ultrasound cardiac map into a pre-trained deep neural network model to obtain a sequence of frames of images of the left ventricle; the first calculating module 23 is configured to calculate a pixel area of each frame image in the left ventricle image sequence frame, so as to obtain a left ventricle pixel area variation sequence; the determining module 24 is configured to determine end diastole and end systole images from the left ventricular image sequence frames based on the left ventricular pixel area variation sequence; the second calculation module 25 is configured to calculate an ejection fraction based on the end diastole image and the end systole image.
In one embodiment, the ejection fraction measurement device further includes a first deep neural network model training module 26, configured to acquire a first sample target set and divide it into a first sample training set and a first sample verification set, the first sample target set including a plurality of first ultrasound cardiac images in which the left ventricle has been marked with a rectangle whose boundary circumscribes the left ventricle boundary; to train the first deep neural network model with the first sample training set; and to verify the effectiveness of the first deep neural network model with the first sample verification set to obtain optimal model parameters.
In one embodiment, the ejection fraction measurement device further includes a second deep neural network model training module 27, configured to acquire a second sample target set and divide it into a second sample training set and a second sample verification set, the second sample target set including a plurality of second ultrasound cardiac images, each an image in a left ventricle rectangular image sequence frame with the left ventricle boundary marked; to train the second deep neural network model with the second sample training set; and to verify the effectiveness of the second deep neural network model with the second sample verification set to obtain optimal model parameters.
The ejection fraction measurement method according to the embodiment of the present specification is described above with reference to fig. 1 to 8. The details mentioned in the description of the method embodiment above apply equally to the ejection fraction measuring device of the embodiments of the present specification. The above ejection fraction measuring device may be implemented in hardware, or may be implemented in software or a combination of hardware and software.
Referring to fig. 10, fig. 10 is a schematic structural diagram of an embodiment of an electronic device according to the present application. As shown in fig. 10, the electronic device 30 may include at least one processor 31, a memory 32 (e.g., a non-volatile memory), a memory 33, and a communication interface 34, and the at least one processor 31, the memory 32, the memory 33, and the communication interface 34 are connected together via a bus 35. The at least one processor 31 executes at least one computer readable instruction stored or encoded in the memory 32.
It should be appreciated that the computer-executable instructions stored in the memory 32, when executed, cause the at least one processor 31 to perform the various operations and functions described above in connection with figs. 1 to 8 in various embodiments of the present description.
In embodiments of the present description, electronic device 30 may include, but is not limited to: personal computers, server computers, workstations, desktop computers, laptop computers, notebook computers, mobile electronic devices, smart phones, tablet computers, cellular phones, personal Digital Assistants (PDAs), handsets, messaging devices, wearable electronic devices, consumer electronic devices, and the like.
According to one embodiment, a program product, such as a machine-readable medium, is provided. The machine-readable medium may have instructions (i.e., elements described above implemented in software) that, when executed by a machine, cause the machine to perform the various operations and functions described above in connection with figs. 1 to 8 in various embodiments of the specification. In particular, a system or apparatus provided with a readable storage medium having stored thereon software program code implementing the functions of any of the above embodiments may be provided, and a computer or processor of the system or apparatus may be caused to read out and execute instructions stored in the readable storage medium.
In this case, the program code itself read from the readable medium may implement the functions of any of the above embodiments, and thus the machine-readable code and the readable storage medium storing the machine-readable code form part of the present specification.
Examples of readable storage media include floppy disks, hard disks, magneto-optical disks, optical disks (e.g., CD-ROMs, CD-R, CD-RWs, DVD-ROMs, DVD-RAMs, DVD-RWs), magnetic tapes, nonvolatile memory cards, and ROMs. Alternatively, the program code may be downloaded from a server computer or cloud by a communications network.
It will be appreciated by those skilled in the art that various changes and modifications can be made to the embodiments disclosed above without departing from the spirit of the invention. Accordingly, the scope of protection of this specification should be limited by the attached claims.
It should be noted that not all the steps and units in the above flowcharts and the system configuration diagrams are necessary, and some steps or units may be omitted according to actual needs. The order of execution of the steps is not fixed and may be determined as desired. The apparatus structures described in the above embodiments may be physical structures or logical structures; that is, some units may be implemented by the same physical entity, some units may be implemented by multiple physical entities, and some units may be implemented jointly by components in multiple independent devices.
In the above embodiments, the hardware units or modules may be implemented mechanically or electrically. For example, a hardware unit, module or processor may include permanently dedicated circuitry or logic (e.g., a dedicated processor, FPGA or ASIC) to perform the corresponding operations. The hardware unit or processor may also include programmable logic or circuitry (e.g., a general purpose processor or other programmable processor) that may be temporarily configured by software to perform the corresponding operations. The particular implementation (mechanical, or dedicated permanent, or temporarily set) may be determined based on cost and time considerations.
The detailed description set forth above in connection with the appended drawings describes exemplary embodiments but does not represent all embodiments that may be implemented or that fall within the scope of the claims. The term "exemplary" as used throughout this specification means "serving as an example, instance, or illustration," and does not mean "preferred" or "advantageous over other embodiments." The detailed description includes specific details for the purpose of providing an understanding of the described technology; however, the techniques may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described embodiments.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
1. A method of measuring ejection fraction, comprising:
acquiring echocardiogram sequence frames;
inputting each frame image of the echocardiogram sequence frames into a pre-trained deep neural network model to obtain left ventricle image sequence frames;
calculating the pixel area of each frame image in the left ventricle image sequence frames to obtain a left ventricle pixel area variation sequence;
determining an end-diastole image and an end-systole image from the left ventricle image sequence frames based on the left ventricle pixel area variation sequence; and
calculating an ejection fraction based on the end-diastole image and the end-systole image.
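For orientation, the following is a minimal Python sketch of the pipeline in claim 1, assuming a hypothetical `segment_left_ventricle` function in place of the pre-trained deep neural network; the final area-based ratio is a simplification of the chord-based volume calculation that claims 5 and 6 spell out.

```python
import numpy as np

def segment_left_ventricle(frame: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the pre-trained network;
    expected to return a binary left-ventricle mask for one frame."""
    raise NotImplementedError

def ejection_fraction_pipeline(frames: list) -> float:
    masks = [segment_left_ventricle(f) for f in frames]       # left ventricle image sequence
    areas = np.array([m.sum() for m in masks], dtype=float)   # pixel area variation sequence
    ed, es = int(areas.argmax()), int(areas.argmin())         # end-diastole / end-systole frames
    edv, esv = areas[ed], areas[es]                           # areas as crude volume proxies
    return (edv - esv) / edv * 100.0                          # ejection fraction, percent
```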
2. The method of claim 1, wherein the step of inputting each frame image of the echocardiogram sequence frames into a pre-trained deep neural network model to obtain left ventricle image sequence frames comprises:
normalizing all images in the echocardiogram sequence frames;
inputting each frame image of the echocardiogram sequence frames into a pre-trained first deep neural network model to obtain left ventricle rectangular image sequence frames; and
inputting each frame image of the left ventricle rectangular image sequence frames into a pre-trained second deep neural network model to obtain the left ventricle image sequence frames.
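A sketch of the two-stage inference in claim 2, reading the "homogenization" step as per-frame intensity normalization (an assumption); `detect_lv_box` and `segment_lv` are hypothetical stand-ins for the first and second pre-trained networks.

```python
import numpy as np

def detect_lv_box(frame: np.ndarray) -> tuple:
    """Hypothetical first network: returns (y0, x0, y1, x1) of the LV rectangle."""
    raise NotImplementedError

def segment_lv(crop: np.ndarray) -> np.ndarray:
    """Hypothetical second network: returns a binary LV mask for the crop."""
    raise NotImplementedError

def normalize(frame: np.ndarray) -> np.ndarray:
    # "homogenization" read as zero-mean, unit-variance intensity scaling
    return (frame - frame.mean()) / (frame.std() + 1e-8)

def two_stage_segmentation(frames):
    masks = []
    for frame in frames:
        x = normalize(frame.astype(float))
        y0, x0, y1, x1 = detect_lv_box(x)        # left ventricle rectangular image
        masks.append(segment_lv(x[y0:y1, x0:x1]))
    return masks                                 # left ventricle image sequence frames
```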
3. The ejection fraction measurement method of claim 2, wherein the training method of the first deep neural network model comprises:
acquiring a first sample target set and dividing it into a first sample training set and a first sample verification set, wherein the first sample target set comprises a plurality of first echocardiograms, each annotated with a left ventricle rectangle whose boundary circumscribes the left ventricle boundary in that first echocardiogram;
training the first deep neural network model with the first sample training set; and
verifying the effectiveness of the first deep neural network model with the first sample verification set to obtain optimal model parameters;
wherein training the first deep neural network model with the first sample training set comprises:
normalizing all images in the first sample training set;
performing image-space transformations on each image in the first sample training set and adding the transformed images to the first sample training set to obtain a first amplified sample training set; and
training the first deep neural network model with the first amplified sample training set.
The training method of the second deep neural network model comprises:
acquiring a second sample target set and dividing it into a second sample training set and a second sample verification set, wherein the second sample target set comprises a plurality of second echocardiograms, the second echocardiograms being images from the left ventricle rectangular image sequence frames annotated with the left ventricle boundary;
training the second deep neural network model with the second sample training set; and
verifying the effectiveness of the second deep neural network model with the second sample verification set to obtain optimal model parameters;
wherein training the second deep neural network model with the second sample training set comprises:
normalizing all images in the second sample training set;
performing image-space transformations on each image in the second sample training set and adding the transformed images to the second sample training set to obtain a second amplified sample training set; and
training the second deep neural network model with the second amplified sample training set.
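The claim leaves the image-space transformations open; the sketch below, a non-authoritative illustration, uses flips and 90-degree rotations as representative transforms and a simple random train/validation split.

```python
import numpy as np

def amplify(images, masks):
    """Image-space transforms; flips and 90-degree rotations as examples."""
    aug_imgs, aug_masks = list(images), list(masks)
    for img, msk in zip(images, masks):
        for t in (np.fliplr, np.flipud, np.rot90):
            aug_imgs.append(t(img))
            aug_masks.append(t(msk))
    return aug_imgs, aug_masks          # amplified sample training set

def train_val_split(n_samples, val_fraction=0.2, seed=0):
    idx = np.random.default_rng(seed).permutation(n_samples)
    n_val = int(n_samples * val_fraction)
    return idx[n_val:], idx[:n_val]     # training indices, validation indices
```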
4. The ejection fraction measurement method of claim 1, wherein the step of determining an end-diastole image and an end-systole image from the left ventricle image sequence frames based on the left ventricle pixel area variation sequence comprises:
traversing the left ventricle pixel area variation sequence to remove jump values;
selecting, from the left ventricle pixel area variation sequence, a sequence segment within the same heartbeat period and sorting it by magnitude to obtain an area maximum and an area minimum; and
selecting the image corresponding to the area maximum from the left ventricle image sequence frames as the end-diastole image, and the image corresponding to the area minimum as the end-systole image.
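A compact sketch of claim 4 follows; the jump-value test (deviation from a local median beyond a relative threshold) is an assumed criterion, since the claim does not fix one.

```python
import numpy as np

def remove_jump_values(areas: np.ndarray, rel_tol: float = 0.3) -> np.ndarray:
    """Replace samples deviating from a local median by more than rel_tol."""
    med = np.array([np.median(areas[max(0, i - 2):i + 3]) for i in range(len(areas))])
    return np.where(np.abs(areas - med) > rel_tol * med, med, areas)

def ed_es_indices(areas, cycle_start: int, cycle_len: int):
    clean = remove_jump_values(np.asarray(areas, dtype=float))
    seg = clean[cycle_start:cycle_start + cycle_len]   # one heartbeat period
    order = np.argsort(seg)                            # sort by magnitude
    return cycle_start + int(order[-1]), cycle_start + int(order[0])  # (ED, ES)
```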
5. The method of claim 1, wherein the step of calculating an ejection fraction based on the end-diastole image and the end-systole image comprises:
obtaining the boundary point coordinates of the end-diastole image as a first point set, and the boundary point coordinates of the end-systole image as a second point set;
acquiring a first queue and a second queue based on the first point set and the second point set, wherein the first queue comprises the connecting lines between any two boundary points in the first point set, and the second queue comprises the connecting lines between any two boundary points in the second point set;
traversing the length of each connecting line in the first queue to obtain a first longest connecting line, and collecting each pixel point through which the first longest connecting line passes to obtain a third point set;
traversing the length of each connecting line in the second queue to obtain a second longest connecting line, and collecting each pixel point through which the second longest connecting line passes to obtain a fourth point set;
acquiring a third queue based on the third point set and the first queue, wherein the third queue comprises the connecting lines in the first queue that intersect any pixel point in the third point set and are perpendicular to the first longest connecting line;
acquiring a fourth queue based on the fourth point set and the second queue, wherein the fourth queue comprises the connecting lines in the second queue that intersect any pixel point in the fourth point set and are perpendicular to the second longest connecting line; and
calculating the ejection fraction based on the third queue and the fourth queue.
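The chord construction in claim 5 can be sketched as below. Two simplifications are assumed: exact perpendicularity is relaxed to a small angular tolerance, and the check that each chord intersects a pixel of the longest connecting line is omitted for brevity.

```python
import numpy as np
from itertools import combinations

def longest_connecting_line(points):
    """Longest line between any two boundary points (the long axis)."""
    return max(combinations(points, 2),
               key=lambda pq: np.hypot(*(np.asarray(pq[0]) - np.asarray(pq[1]))))

def perpendicular_chord_lengths(points, tol_deg: float = 2.0):
    p, q = map(np.asarray, longest_connecting_line(points))
    axis = (q - p) / np.linalg.norm(q - p)
    lengths = []
    for a, b in combinations(points, 2):                # queue of all connecting lines
        v = np.asarray(b, dtype=float) - np.asarray(a, dtype=float)
        n = np.linalg.norm(v)
        if n > 0 and abs(np.dot(v / n, axis)) < np.sin(np.radians(tol_deg)):
            lengths.append(n)                           # near-perpendicular chord
    return np.array(lengths)
```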
6. The method of claim 5, wherein the step of calculating the ejection fraction based on the third queue and the fourth queue comprises:
traversing the pixel length of each connecting line in the third queue and substituting the pixel lengths into a formula to calculate an end-diastole volume, wherein a_i, i = 1…N, is the pixel length of the i-th connecting line in the third queue and LV_s is the end-diastole volume;
traversing the pixel length of each connecting line in the fourth queue and substituting the pixel lengths into a formula to calculate an end-systole volume, wherein b_j, j = 1…M, is the pixel length of the j-th connecting line in the fourth queue and LV_o is the end-systole volume; and
calculating the ejection fraction based on the end-diastole volume and the end-systole volume.
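The formulas referenced in claim 6 appear as images in the original and are not reproduced in the text; the sketch below assumes a single-plane method-of-discs reading, V = (pi/4) * sum(d_i^2) * (L/N), with the chord lengths as disc diameters and L the long-axis pixel length. This reading is an assumption, not the patent's verbatim formula.

```python
import numpy as np

def simpson_volume(chords, axis_len: float) -> float:
    """Single-plane method of discs: chords as disc diameters along the long axis."""
    d = np.asarray(chords, dtype=float)
    return float(np.pi / 4.0 * np.sum(d ** 2) * (axis_len / len(d)))

def ejection_fraction(ed_chords, ed_axis_len, es_chords, es_axis_len) -> float:
    lv_s = simpson_volume(ed_chords, ed_axis_len)   # end-diastole volume LV_s
    lv_o = simpson_volume(es_chords, es_axis_len)   # end-systole volume LV_o
    return (lv_s - lv_o) / lv_s * 100.0             # ejection fraction, percent
```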
7. The method of claim 6, wherein the step of calculating the ejection fraction based on the third queue and the fourth queue further comprises:
acquiring a plurality of standard left ventricle ultrasound images with known left ventricle volumes;
obtaining the boundary point coordinates of each standard left ventricle ultrasound image to obtain a fifth point set;
acquiring a fifth queue based on the fifth point set, wherein the fifth queue comprises the connecting lines between any two boundary points in the fifth point set;
traversing the length of each connecting line in the fifth queue to obtain a third longest connecting line, and collecting each pixel point through which the third longest connecting line passes to obtain a sixth point set;
acquiring a sixth queue based on the sixth point set and the fifth queue, wherein the sixth queue comprises the connecting lines in the fifth queue that intersect any pixel point in the sixth point set and are perpendicular to the third longest connecting line; and
traversing the pixel length of each connecting line in the sixth queue, substituting the pixel lengths into a machine learning model constructed based on a formula, taking the known left ventricle volume of the standard left ventricle ultrasound image as the expected output, and training the machine learning model to obtain optimal model parameters, wherein c_x, x = 1…M, is the pixel length of the x-th connecting line in the sixth queue.
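Claim 7's formula-based machine learning model is likewise not reproduced in the text. As one hedged reading, the sketch below fits a single proportionality coefficient between the sum of squared chord lengths c_x and the known volumes by least squares; both the feature and the model form are assumptions.

```python
import numpy as np

def fit_volume_coefficient(chord_sets, known_volumes) -> float:
    """Least-squares fit of k in: volume ~ k * sum(c_x ** 2)."""
    feats = np.array([np.sum(np.asarray(c, dtype=float) ** 2) for c in chord_sets])
    vols = np.asarray(known_volumes, dtype=float)
    return float(feats @ vols / (feats @ feats))    # optimal model parameter

# usage sketch:
# k = fit_volume_coefficient([chords_img1, chords_img2], [vol1, vol2])
# predicted_volume = k * np.sum(np.asarray(new_chords, dtype=float) ** 2)
```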
8. An ejection fraction measurement device, comprising:
an acquisition module for acquiring echocardiogram sequence frames;
a segmentation module for inputting each frame image of the echocardiogram sequence frames into a pre-trained deep neural network model to obtain left ventricle image sequence frames;
a first calculation module for calculating the pixel area of each frame image in the left ventricle image sequence frames to obtain a left ventricle pixel area variation sequence;
a determination module for determining an end-diastole image and an end-systole image from the left ventricle image sequence frames based on the left ventricle pixel area variation sequence; and
a second calculation module for calculating an ejection fraction based on the end-diastole image and the end-systole image.
9. An electronic device, comprising:
at least one processor; and
a memory storing instructions that, when executed by the at least one processor, cause the at least one processor to perform the ejection fraction measurement method of any one of claims 1 to 7.
10. A machine readable storage medium having stored thereon executable instructions which when executed cause the machine to perform the ejection fraction measurement method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310124229.5A CN116130090A (en) | 2023-02-16 | 2023-02-16 | Ejection fraction measuring method and device, electronic device, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310124229.5A CN116130090A (en) | 2023-02-16 | 2023-02-16 | Ejection fraction measuring method and device, electronic device, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116130090A (en) | 2023-05-16
Family
ID=86304530
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310124229.5A Pending CN116130090A (en) | 2023-02-16 | 2023-02-16 | Ejection fraction measuring method and device, electronic device, and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116130090A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117745726A (en) * | 2024-02-21 | 2024-03-22 | 中国医学科学院北京协和医院 | Left ventricular ejection fraction calculating method and device based on transesophageal echocardiography |
CN117745726B (en) * | 2024-02-21 | 2024-06-07 | 中国医学科学院北京协和医院 | Left ventricular ejection fraction calculating method and device based on transesophageal echocardiography |
CN117918889A (en) * | 2024-03-20 | 2024-04-26 | 中国医学科学院北京协和医院 | Automatic calculation method and device for left ventricular cardiac output of transesophageal echocardiography four-chamber cardiac tangential plane |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9968257B1 (en) | Volumetric quantification of cardiovascular structures from medical imaging | |
CN116130090A (en) | Ejection fraction measuring method and device, electronic device, and storage medium | |
CN111862044B (en) | Ultrasonic image processing method, ultrasonic image processing device, computer equipment and storage medium | |
CN111105424A (en) | Lymph node automatic delineation method and device | |
EP2365471B1 (en) | Diagnosis assisting apparatus, coronary artery analyzing method and recording medium having a coronary artery analzying program stored therein | |
WO2021128825A1 (en) | Three-dimensional target detection method, method and device for training three-dimensional target detection model, apparatus, and storage medium | |
CN111127527B (en) | Method and device for realizing lung nodule self-adaptive matching based on CT image bone registration | |
WO2022213654A1 (en) | Ultrasonic image segmentation method and apparatus, terminal device, and storage medium | |
CN111340756B (en) | Medical image lesion detection merging method, system, terminal and storage medium | |
CN113012173A (en) | Heart segmentation model and pathology classification model training, heart segmentation and pathology classification method and device based on cardiac MRI | |
CN111429457B (en) | Intelligent evaluation method, device, equipment and medium for brightness of local area of image | |
Veronesi et al. | Tracking of left ventricular long axis from real-time three-dimensional echocardiography using optical flow techniques | |
CN111724371B (en) | Data processing method and device and electronic equipment | |
JP2022111357A (en) | Method for determining mid-sagittal plane from magnetic resonance images, image processing device, and storage medium | |
EP2498222B1 (en) | Method and system for regression-based 4D mitral valve segmentation from 2D+T magnetic resonance imaging slices | |
CN114332132A (en) | Image segmentation method and device and computer equipment | |
US20090161926A1 (en) | Semi-automatic Segmentation of Cardiac Ultrasound Images using a Dynamic Model of the Left Ventricle | |
CN115861172A (en) | Wall motion estimation method and device based on self-adaptive regularized optical flow model | |
CN109102509B (en) | Segmentation model training method and device and computer readable storage medium | |
CN111723836A (en) | Image similarity calculation method and device, electronic equipment and storage medium | |
CN110739050B (en) | Left ventricle full-parameter and confidence coefficient quantification method | |
CN112258476A (en) | Echocardiography myocardial abnormal motion mode analysis method, system and storage medium | |
CN109767468B (en) | Visceral volume detection method and device | |
Liu et al. | Automated binocular vision measurement of food dimensions and volume for dietary evaluation | |
CN114419375B (en) | Image classification method, training device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||