WO2017154008A1 - Method and system for automatic analysis of Doppler echocardiography images - Google Patents

Method and system for automatic analysis of Doppler echocardiography images

Info

Publication number
WO2017154008A1
Authority
WO
WIPO (PCT)
Prior art keywords
polygon
image
candidate
annotated
vertices
Prior art date
Application number
PCT/IL2017/050308
Other languages
English (en)
Inventor
Joseph KESHET
Amir GOTTLIEB
Original Assignee
Bar-Ilan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bar-Ilan University filed Critical Bar-Ilan University
Publication of WO2017154008A1


Classifications

    • A - HUMAN NECESSITIES
      • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
        • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
          • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
            • A61B5/02 - Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
              • A61B5/02028 - Determining haemodynamic parameters not otherwise provided for, e.g. cardiac contractility or left ventricular ejection fraction
              • A61B5/026 - Measuring blood flow
                • A61B5/029 - Measuring or recording blood output from the heart, e.g. minute volume
          • A61B8/00 - Diagnosis using ultrasonic, sonic or infrasonic waves
            • A61B8/06 - Measuring blood flow
              • A61B8/065 - Measuring blood flow to determine blood output from the heart
            • A61B8/46 - Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient
              • A61B8/467 - Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient characterised by special input means
                • A61B8/469 - Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient characterised by special input means for selection of a region of interest
            • A61B8/48 - Diagnostic techniques
              • A61B8/488 - Diagnostic techniques involving Doppler signals
            • A61B8/52 - Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
              • A61B8/5207 - Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of raw data to produce diagnostic data, e.g. for generating an image
              • A61B8/5215 - Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
                • A61B8/5223 - Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for extracting a diagnostic or physiological parameter from medical diagnostic data
    • G - PHYSICS
      • G06 - COMPUTING; CALCULATING OR COUNTING
        • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T7/00 - Image analysis
            • G06T7/10 - Segmentation; Edge detection
              • G06T7/12 - Edge-based segmentation
          • G06T2207/00 - Indexing scheme for image analysis or image enhancement
            • G06T2207/10 - Image acquisition modality
              • G06T2207/10132 - Ultrasound image
            • G06T2207/20 - Special algorithmic details
              • G06T2207/20081 - Training; Learning
            • G06T2207/30 - Subject of image; Context of image processing
              • G06T2207/30004 - Biomedical image processing
                • G06T2207/30048 - Heart; Cardiac
      • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
        • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
          • G16H50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
            • G16H50/30 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining for calculating health indices; for individual health risk assessment

Definitions

  • the invention relates to the field of image analysis, and, more particularly, to analysis of Doppler echocardiography (DE) images.
  • Doppler echocardiography is widely used as a part of clinical practices in order to assess cardiovascular functionalities (e.g., valvular regurgitation and stenosis, etc.). By employing the Doppler Effect, Doppler echocardiography determines whether the blood is moving towards or away from the ultrasound probe, and its relative velocity. The acquired Doppler echocardiogram is a velocity-time image.
  • a computerized method of automatically analyzing a Doppler echocardiography (DE) image comprising: obtaining one or more trained weight vectors each corresponding to a polygon type with a fixed number of vertices; obtaining a plurality of candidate polygons belonging to at least one polygon type; extracting a plurality of feature functions from the DE image for each candidate polygon, including vertex related feature functions and edge related feature functions; calculating a score for each candidate polygon using the extracted feature functions and a trained weight vector corresponding to the polygon type with the same number of vertices as the candidate polygon; and determining a candidate polygon with a highest calculated score to be a predicted polygon for the DE image, whereby the predicted polygon represents a pattern enclosed in the DE image.
  • a computerized system of automatically analyzing a Doppler echocardiography (DE) image comprising a processor operatively coupled with a memory and configured to: obtain one or more trained weight vectors each corresponding to a polygon type with a fixed number of vertices; obtain a plurality of candidate polygons belonging to at least one polygon type; extract a plurality of feature functions from the DE image for each candidate polygon, including vertex related feature functions and edge related feature functions; calculate a score for each candidate polygon using the extracted feature functions and a trained weight vector corresponding to the polygon type with the same number of vertices as the candidate polygon; and determine a candidate polygon with a highest calculated score to be a predicted polygon for the DE image, whereby the predicted polygon represents a pattern enclosed in the DE image.
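The inference steps recited above (extract feature functions per candidate, score each candidate with the weight vector matching its vertex count, keep the argmax) can be sketched in Python. The function names and the layout of the feature vector are illustrative, not taken from the patent:

```python
import numpy as np

def score_polygon(w, phi):
    """Linear score of one candidate: dot product of the trained
    weight vector w and the feature vector phi extracted for it."""
    return float(np.dot(w, phi))

def predict_polygon(candidates, extract_features, weights_by_type):
    """Pick the highest-scoring candidate polygon.

    candidates       -- list of polygons, each a list of (x, y) vertices
    extract_features -- callable mapping a polygon to its feature vector
    weights_by_type  -- dict: number of vertices -> trained weight vector
    """
    best_poly, best_score = None, float("-inf")
    for poly in candidates:
        w = weights_by_type[len(poly)]   # weight vector for this polygon type
        s = score_polygon(w, extract_features(poly))
        if s > best_score:
            best_poly, best_score = poly, s
    return best_poly, best_score
```

The predicted polygon is simply the candidate whose linear score is maximal; ties are broken by candidate order in this sketch.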
  • a processor operatively coupled with a memory and configured to: obtain one or more trained weight vectors each corresponding to a polygon type with a fixed number of vertices; obtain a plurality of candidate polygons belonging to at least one polygon type; extract a plurality of feature functions from the DE image for each candidate polygon, including vertex related feature functions and edge related feature functions; calculate a score for each candidate polygon using the extracted feature functions and a trained weight vector corresponding to the polygon type with the same number of vertices as the candidate polygon; and determine a candidate polygon with a highest calculated score to be a predicted polygon for the DE image.
  • a computerized training system for automatically analyzing a Doppler echocardiography (DE) image
  • the system comprising a first processor operatively coupled with a first memory, the first processor being associated with a second processor operatively coupled with a second memory, and configured to: obtain a training set of annotated DE images each associated with an annotated polygon; set one or more weight vectors each corresponding to a polygon type with a fixed number of vertices; and update the weight vectors iteratively by processing each annotated DE image within the training set and the annotated polygon associated therewith, giving rise to the trained weight vectors; whereby the first processor operatively coupled with the first memory is further configured to send the trained weight vectors to the second processor operatively coupled with the second memory, and the second processor operatively coupled with the second memory, being responsive to receiving the trained weight vectors, is configured to: obtain a plurality of candidate polygons belonging to at least one polygon type; extract a plurality of feature functions from the DE image for each candidate polygon, including vertex related feature functions and edge related feature functions; calculate a score for each candidate polygon using the extracted feature functions and a trained weight vector corresponding to the polygon type with the same number of vertices as the candidate polygon; and determine a candidate polygon with a highest calculated score to be a predicted polygon for the DE image.
  • a computerized searching system for automatically analyzing a Doppler echocardiography (DE) image, the system comprising a first processor operatively coupled with a first memory, the first processor being associated with a second processor operatively coupled with a second memory, and configured to search for a plurality of candidate polygons from all possible polygon configurations within the DE image, the searching comprising: for each polygon type with a fixed number of vertices, i) for each vertex, calculating a score map including a plurality of fitness scores each for a possible location for the vertex in the DE image, the fitness score being calculated based on a weight vector corresponding to the polygon type and a feature function related to the vertex; ii) selecting a set of possible locations with the highest fitness scores to be a set of candidate locations for each vertex; iii) selecting a member from each set of candidate locations for a vertex to form a candidate polygon, and repeating the selecting a member for each candidate location in each set, giving rise to a set of candidate polygons for the polygon type.
  • the method or the system can comprise one or more of the features listed below, in any desired combination or permutation which is technically possible:
  • Clinical parameters of the DE image can be extracted based on the predicted polygon for assisting diagnosis of heart conditions.
  • the DE image can be selected from a group comprising: Mitral Valve (MV) inflow, Aortic Regurgitation (AR) and Left Ventricular Outflow Tract (LVOT).
  • the DE image can be MV inflow.
  • the extracted clinical parameters can include at least one of the following: E/A ratio and deceleration time.
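As a purely illustrative sketch of how such parameters might be read off a predicted MV polygon, assuming a hypothetical five-vertex ordering (start, E-peak, inter-wave trough, A-peak, end), vertices given as (x, y) pixel pairs with y measuring velocity above the baseline, and the 5 ms / 0.5 cm/s per-pixel calibration mentioned later in the text:

```python
def mv_clinical_parameters(polygon, ms_per_px=5.0, cmps_per_px=0.5):
    """Illustrative extraction of MV-inflow parameters from a predicted
    five-vertex polygon. The vertex ordering and the choice of the
    inter-wave trough as the end of the E-wave downslope are assumptions,
    not specifications from the patent."""
    start, e_peak, trough, a_peak, end = polygon
    e_vel = e_peak[1] * cmps_per_px      # E-wave peak velocity [cm/s]
    a_vel = a_peak[1] * cmps_per_px      # A-wave peak velocity [cm/s]
    ea_ratio = e_vel / a_vel
    # deceleration time: time from the E peak to the end of the
    # E-wave downslope, in milliseconds
    decel_time = (trough[0] - e_peak[0]) * ms_per_px
    return ea_ratio, decel_time
```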
  • the one or more trained weight vectors can be obtained from a training process based on a plurality of annotated DE images.
  • the training process can be performed, including: obtaining a training set of annotated DE images each associated with an annotated polygon; setting values of one or more weight vectors each corresponding to a polygon type with a fixed number of vertices; and updating the values of the one or more weight vectors iteratively by processing each annotated DE image within the training set and the annotated polygon associated therewith, giving rise to the trained weight vectors.
  • the updating can comprise: for each annotated DE image, i) obtaining a plurality of training candidate polygons belonging to at least one polygon type; ii) extracting a plurality of feature functions from the annotated DE image for each polygon type, including vertex related feature functions and edge related feature functions; iii) for each training candidate polygon, calculating a loss value representing a level of dissimilarity between the training candidate polygon and the annotated polygon associated with the annotated DE image; iv) calculating a score for each training candidate polygon using the extracted feature functions, a weight vector corresponding to a polygon type with the same number of vertices as the training candidate polygon, and the loss value; v) determining a training candidate polygon with a highest calculated score to be a predicted polygon for the annotated DE image; and vi) updating at least a weight vector corresponding to the polygon type with the same number of vertices as the annotated polygon, using the predicted polygon, the annotated polygon and the loss value.
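Steps i) through vi) resemble loss-augmented structured prediction. Below is a structured-perceptron-style sketch; the patent does not specify the exact update rule, so the additive update with learning rate `eta`, and all function names, are assumptions:

```python
import numpy as np

def train_weights(annotated_set, extract_features, candidates_for, loss,
                  n_vertices_types=(3, 4, 5, 6), dim=10, epochs=5, eta=0.1):
    """Iteratively update one weight vector per polygon type.

    annotated_set    -- iterable of (image, annotated_polygon) pairs
    extract_features -- (image, polygon) -> feature vector of length dim
    candidates_for   -- image -> list of training candidate polygons
    loss             -- (candidate, annotated) -> dissimilarity >= 0
    """
    w = {k: np.zeros(dim) for k in n_vertices_types}
    for _ in range(epochs):
        for image, y in annotated_set:
            # v) loss-augmented prediction: score plus loss, take the argmax
            y_hat = max(
                candidates_for(image),
                key=lambda p: np.dot(w[len(p)], extract_features(image, p))
                              + loss(p, y),
            )
            if loss(y_hat, y) > 0:
                # vi) move weights toward the annotation, away from the mistake
                w[len(y)] += eta * extract_features(image, y)
                w[len(y_hat)] -= eta * extract_features(image, y_hat)
    return w
```

When the loss of the predicted polygon is zero, the loss-augmented argmax reduces to the plain prediction rule and no update is made.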
  • the method or the system can comprise one or more of the features listed below, in any desired combination or permutation which is technically possible:
  • the loss value can be calculated based on a cost function indicative of a distance between the training candidate polygon and the annotated polygon associated with the annotated DE image.
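One plausible such cost function, assuming both polygons have the same number of vertices listed in a consistent order, is the mean Euclidean distance between corresponding vertices (the patent leaves the exact distance unspecified, so this is only an example):

```python
import numpy as np

def polygon_loss(candidate, annotated):
    """Mean Euclidean distance between corresponding vertices of two
    polygons with equal vertex counts and matching vertex order."""
    c = np.asarray(candidate, dtype=float)
    a = np.asarray(annotated, dtype=float)
    return float(np.mean(np.linalg.norm(c - a, axis=1)))
```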
  • the updating at least a weight vector can include: if the predicted polygon has the same number of vertices as the annotated polygon, updating the weight vector corresponding to the polygon type with the same number of vertices as the annotated polygon, using the predicted polygon, the annotated polygon and the loss value; otherwise updating both the weight vector corresponding to the polygon type with the same number of vertices as the annotated polygon, and the weight vector corresponding to the polygon type with the same number of vertices as the predicted polygon, using the predicted polygon, the annotated polygon, and the loss value.
  • the obtaining a plurality of candidate polygons can comprise searching for the plurality of candidate polygons from all possible polygon configurations within the DE image.
  • the searching can include: for each polygon type with a fixed number of vertices, i) for each vertex, calculating a score map including a plurality of fitness scores each for a possible location for the vertex in the DE image, the fitness score being calculated based on a weight vector corresponding to the polygon type and a feature function related to the vertex; ii) selecting a set of possible locations with the highest fitness scores to be a set of candidate locations for each vertex; and iii) selecting a member from each set of candidate locations for a vertex to form a candidate polygon, and repeating the selecting a member for each candidate location in each set, giving rise to a set of candidate polygons for the polygon type; wherein the plurality of candidate polygons comprises one or more of the sets of candidate polygons, each for a respective polygon type.
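The three-step search can be sketched as follows; the `vertex_feature(image, i, (row, col))` signature is hypothetical, and step iii) is realized as a Cartesian product over the per-vertex candidate locations:

```python
import itertools
import numpy as np

def search_candidates(image, vertex_feature, w, n_vertices, top_k=3):
    """Sketch of search steps i)-iii).

    i)   For each vertex, score every pixel location with the weight
         vector and the vertex-related feature function (a score map).
    ii)  Keep the top_k locations per vertex as candidate locations.
    iii) Form candidate polygons as every combination of one candidate
         location per vertex.
    """
    rows, cols = image.shape
    candidate_locs = []
    for i in range(n_vertices):
        score_map = np.array(
            [[np.dot(w, vertex_feature(image, i, (r, c))) for c in range(cols)]
             for r in range(rows)]
        )
        # indices of the top_k fitness scores in the flattened score map
        flat = np.argsort(score_map.ravel())[::-1][:top_k]
        candidate_locs.append([tuple(np.unravel_index(f, score_map.shape))
                               for f in flat])
    return list(itertools.product(*candidate_locs))
```

The resulting set grows as top_k to the power of the vertex count, which is why, as noted below, inherent constraints of each image type can be used to prune it.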
  • the method or the system can comprise one or more of the features listed below, in any desired combination or permutation which is technically possible:
  • the set of candidate polygons can be reduced by applying inherent constraints related to each image type.
  • the vertex related feature function can be a feature vector including pixel properties representing a window of pixels around a vertex.
  • the pixel properties can be selected from a group comprising: pixel intensities and a binary pixel property indicating whether a pixel is a part of a Connected Component of the DE image.
  • the edge related feature function can be based on a relation of pixel intensities between lines that are parallel to a slope between two vertices.
  • the DE image can be AR, and the extracted clinical parameters can include at least one of the following: a second vertex of the predicted polygon and a slope between the second vertex and a third vertex.
  • the DE image can be LVOT, and the extracted clinical parameters can include at least a Time Velocity Integral (TVI).
  • the obtained trained weight vectors and candidate polygons can correspond to at least two polygon types, and the DE image can be analyzed at least partially simultaneously for the at least two polygon types.
  • the at least two polygon types can include two or more polygon types each with a fixed number of vertices selected from the following: three vertices, four vertices, five vertices and six vertices.
  • Fig. 1 schematically illustrates a functional block diagram of a system for automatically analyzing a Doppler echocardiography (DE) image in accordance with certain embodiments of the presently disclosed subject matter;
  • Fig. 2 is a generalized flowchart of automatically analyzing a Doppler echocardiography (DE) image in accordance with certain embodiments of the presently disclosed subject matter;
  • Fig. 3 is a generalized flowchart of performing a training process to generate trained weight vectors in accordance with certain embodiments of the presently disclosed subject matter;
  • Fig. 4 is a generalized flowchart of an iterative process of updating values of the weight vectors in accordance with certain embodiments of the presently disclosed subject matter;
  • Fig. 5 is a generalized flowchart of a searching process for the plurality of candidate polygons from all possible polygon configurations within a DE image in accordance with certain embodiments of the presently disclosed subject matter;
  • Figs. 6A-6C exemplify three different types of DE images including MV, AR and LVOT images as described above, in accordance with certain embodiments of the presently disclosed subject matter;
  • Fig. 7 exemplifies two types of feature functions extracted from a given MV image, in accordance with certain embodiments of the presently disclosed subject matter;
  • Figs. 8A-8C illustrate an example of a shift of 1 pixel in a predicted polygon vertex relative to an annotated polygon vertex, given one possible pattern, in accordance with certain embodiments of the presently disclosed subject matter;
  • Fig. 9 shows a general flowchart of a method for training a system to predict polygon vertices, in accordance with certain embodiments of the presently disclosed subject matter;
  • Fig. 10 shows an MV image displaying specific locations having clinical importance, in accordance with certain embodiments of the presently disclosed subject matter; and
  • Figs. 11A-11B show the MV loss function for two given annotations, in accordance with certain embodiments of the presently disclosed subject matter.
  • Fig. 12 shows an example of an operation to correct a DE image, in accordance with certain embodiments of the presently disclosed subject matter.
  • Disclosed herein are a computerized method, a computer program product, and a system for automatically analyzing a Doppler echocardiography (DE) image.
  • Fig. 1 schematically illustrates a functional block diagram of a system for automatically analyzing a Doppler echocardiography (DE) image in accordance with certain embodiments of the presently disclosed subject matter.
  • there is shown a Doppler echocardiography analyzing system 100.
  • system 100 is configured to automatically trace a pattern (also referred to as envelopes) enclosed in the Doppler spectra of the DE image.
  • DE images may be analyzed by system 100, including, by way of non-limiting examples, any of the following: Mitral Valve Inflow (MV) images, Aortic Regurgitation (AR) images, Left Ventricular Outflow Tract (LVOT) images, and Tricuspid Regurgitation (TR) images.
  • each DE image type has typical patterns and the clinical information or clinical parameters for each can be extracted differently. Moreover, different clinical cases in each type manifest themselves by different patterns. These patterns are referred to as sub-types.
  • the annotation of those DE images may be an envelope of the main pattern, and it may be sufficient to represent the envelopes using, for example, a polygon.
  • each sub-type of a certain DE image type may be represented using a polygon of a fixed number of vertices (also referred to as polygon type), which consequently may be used to compute the relevant measurements.
  • the DE image can be represented as an image of size d_Cols × d_Rows, where d_Cols and d_Rows are the horizontal and vertical dimensions of the image, respectively.
  • each DE image may have its own dimensions; thus d_Cols and d_Rows are not fixed and differ for each DE image.
  • Referring to Figs. 6A-6C, there are exemplified three different types of DE images including an AR image, an LVOT image, and an MV image as described above, in accordance with certain embodiments of the presently disclosed subject matter.
  • Each DE image of the three may have a different size, from which the desired information is to be extracted via the analysis described herein.
  • an MV pattern may be composed of one or two waves, each wave appearing as the shape of a triangle. For example, in a single wave MV sub-type, the enclosing polygon has three vertices.
  • the enclosing polygon may have, e.g., five or six vertices.
  • an AR pattern can typically be described as a quadrilateral.
  • the polygon enclosed therein is a four-vertex polygon, as illustrated.
  • in the LVOT image as shown in Fig. 6B, the LVOT pattern is enclosed by a curve that starts and ends below the x-axis.
  • the LVOT pattern can be represented as a polygon of n-vertices.
  • system 100 may be configured to predict polygons that correspond to the specific patterns that are typical to each image type and sub-type, and optionally extract the clinical information from these types of images.
  • the system 100 may comprise a processing unit 102, which may be configured to automatically analyze a given DE image and which is operatively coupled to an I/O interface 114 and a storage module 116.
  • the processing unit 102 may comprise a feature function extractor 108, a polygon score calculator 110, and a predicted polygon selector 112.
  • the processing unit 102 may comprise one or more of the following functional modules: an image scaling module 104, a searching module 106, a loss value calculator 120, a weight vector updating module 122, and a clinical parameters extracting module 118.
  • the processing unit 102 may be implemented by a processor, such as, e.g., a central processing unit (CPU), a digital signal processor (DSP), a microcontroller, a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), or the like, and configured to execute functionalities of the functional modules 108, 110 and 112, and possibly also one or more of the functional modules 104, 106, 120, 122 and 118, in accordance with computer-readable instructions implemented on a non-transitory computer readable storage medium.
  • the non-transitory computer readable storage medium may be included in the storage module 116.
  • the above mentioned functional modules are referred to herein as comprised in the processor.
  • the processing unit 102 can also be implemented by one or more processors operatively connected with each other and execute the functionality of different functional modules in cooperation.
  • the system 100 may obtain, e.g., through the I/O interface 114, a given DE image.
  • the DE image may be any type of DE image such as, e.g., any of the types aforementioned.
  • the DE image may be obtained from the storage module, or alternatively it can be received from an external source, such as, e.g., from a user, a third party provider or any clinical device or equipment that obtains or generates such image during clinical practices.
  • an image scaling module 104 in the processing unit 102 may be configured to scale the given DE image such that the size of each pixel within the image can adapt to the measurement and analysis of system 100, as will be described below in detail.
  • the DE image is not scaled and the system 100 may predict the size of the polygon.
  • the system 100 may obtain, e.g., through the I/O interface 114, one or more trained weight vectors each corresponding to a polygon type with a fixed number of vertices. Polygons may be categorized into different types according to the number of vertices the polygons have. A polygon type may correspond to a sub-type of a certain DE image type.
  • the trained weight vectors may be obtained from the storage module 116, or alternatively they may be received from an external source, such as, e.g., from a user, a third-party provider or any other system that has derived such weight vectors, e.g., from a training process based on a plurality of annotated DE images.
  • the processing unit 102, in particular some relevant functional modules comprised therein, such as the loss value calculator 120 and the weight vector updating module 122, may be configured to perform the training process as described above and generate the trained weight vectors, as will be described in further detail below with reference to Figs. 3 and 4.
  • the system 100 may further obtain, through the I/O interface 114, a plurality of candidate polygons belonging to at least one polygon type.
  • the candidate polygons may be obtained from the storage module 116, or alternatively may be received from an external source, such as, e.g., from a user, a third party provider or any other system that has obtained such candidate polygons, e.g., by searching through all possible polygon configurations for the given DE image.
  • the processing unit 102, in particular the searching module 106 comprised therein, may be further configured to perform a searching process for the plurality of candidate polygons, as will be described in further detail below with reference to Fig. 5.
  • the feature function extractor 108 may be configured to extract a plurality of feature functions from the DE image for each candidate polygon.
  • the term feature function used in this patent specification should be expansively construed to cover any kind of formula or function representation or transformation that can represent certain visual elements or visual features of the DE image, or relations between these elements or features.
  • the feature functions can include vertex related feature functions and edge related feature functions.
  • Referring to Fig. 7, there are exemplified two types of feature functions extracted from a given MV image, in accordance with certain embodiments of the presently disclosed subject matter.
  • a vertex related feature function may be a feature vector including pixel properties representing a window of pixels around a vertex.
  • the pixel properties referred to herein may be selected from a group comprising: pixel intensities and a binary pixel property indicating whether or not a pixel is part of a Connected Component of the DE image.
  • the DE image is pre-processed by setting a threshold on its pixel intensities, to generate a binary image.
  • Each pixel in this binary image may be marked, for example, by 1 if it is part of a Connected Component of a size larger than n, where n may be for example 1000, or may be marked by 0 otherwise.
  • Connected Component here may be understood in the same sense as in graph theory, where each pixel may be a vertex and pixels are connected by an 'edge' if they are adjacent. Determining whether a pixel is connected to the component may be based on its intensity and/or the intensities of its 4 or 8 neighbor pixels. For example, a pixel value of 1 may indicate that the pixel is connected to the component, and a pixel value of 0 may indicate that the pixel is not connected to the component. An exemplified vertex related feature function is illustrated on the right side of Fig. 7.
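A minimal sketch of this binary Connected-Component feature, using a breadth-first flood fill with 8-connectivity; the intensity threshold and minimum component size are illustrative parameters (the text only gives n = 1000 as an example minimum size):

```python
from collections import deque
import numpy as np

def connected_component_feature(image, threshold=50, min_size=1000):
    """Per-pixel binary feature: 1 if the pixel belongs to a connected
    component (8-connectivity) of above-threshold pixels whose size
    exceeds min_size, else 0."""
    binary = image > threshold
    rows, cols = binary.shape
    labels = -np.ones(binary.shape, dtype=int)   # -1 means unvisited
    out = np.zeros(binary.shape, dtype=np.uint8)
    comp = 0
    for sr in range(rows):
        for sc in range(cols):
            if binary[sr, sc] and labels[sr, sc] < 0:
                # BFS flood fill of one component
                queue, members = deque([(sr, sc)]), [(sr, sc)]
                labels[sr, sc] = comp
                while queue:
                    r, c = queue.popleft()
                    for dr in (-1, 0, 1):
                        for dc in (-1, 0, 1):
                            nr, nc = r + dr, c + dc
                            if (0 <= nr < rows and 0 <= nc < cols
                                    and binary[nr, nc] and labels[nr, nc] < 0):
                                labels[nr, nc] = comp
                                queue.append((nr, nc))
                                members.append((nr, nc))
                if len(members) > min_size:
                    for r, c in members:
                        out[r, c] = 1
                comp += 1
    return out
```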
  • the extracted features are the pixels' intensities.
  • Such extraction may help the system 100 to learn an environment of landmarks such as, e.g., peaks or troughs, and to evaluate image locations as possible vertices of the polygon.
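The window-based vertex feature can be sketched as follows; the window size is illustrative (the patent does not fix one), and out-of-image pixels are zero-padded:

```python
import numpy as np

def vertex_window_feature(image, vertex, half=5):
    """Vertex-related feature: intensities of a (2*half+1) x (2*half+1)
    window of pixels centred on the vertex, flattened into a vector."""
    r, c = vertex
    padded = np.pad(image, half, mode="constant")
    # vertex (r, c) maps to (r+half, c+half) in the padded image
    window = padded[r:r + 2 * half + 1, c:c + 2 * half + 1]
    return window.ravel().astype(float)
```

Evaluated over many locations, such windows let the scoring function recognize the local environment of landmarks such as peaks or troughs.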
  • all input images can be already scaled (e.g., by the image scaling module 104) so that the size of each pixel can be equivalent to 5 [millisec] in x and 0.5 [cm/sec] in y.
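Such scaling might be implemented as a nearest-neighbour resize, assuming the source image's per-pixel calibration (milliseconds and cm/s per pixel) is known from the acquisition metadata; the function below is a sketch, not the patent's implementation:

```python
import numpy as np

def scale_de_image(image, ms_px, cmps_px, target_ms=5.0, target_cmps=0.5):
    """Nearest-neighbour rescale so each output pixel spans target_ms
    milliseconds horizontally and target_cmps cm/s vertically.
    ms_px / cmps_px are the source image's per-pixel calibration."""
    rows, cols = image.shape
    new_rows = max(1, int(round(rows * cmps_px / target_cmps)))
    new_cols = max(1, int(round(cols * ms_px / target_ms)))
    r_idx = (np.arange(new_rows) * rows / new_rows).astype(int)
    c_idx = (np.arange(new_cols) * cols / new_cols).astype(int)
    return image[np.ix_(r_idx, c_idx)]
```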
  • an edge related feature function is based on a relation of pixel intensities between lines that are parallel to a slope between two vertices.
  • One exemplified edge related feature function is illustrated on the left side of Fig. 7, represented by a ratio of two sums of intensities over two areas.
  • This type of feature function may be used to learn the polygon's edges, e.g., the slope between two vertices.
  • the intensities along lines that are parallel to the slope can be summed, e.g., the line below the slope and the one above it, and the former can be divided by the latter, giving rise to the ratio.
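This ratio feature can be sketched by sampling one line parallel to the edge below it and one above it; the two-pixel offset is illustrative, and the per-column nearest-row sampling is one simple way to trace a line between two vertices:

```python
def edge_ratio_feature(image, v1, v2, offset=2):
    """Edge-related feature: sum of intensities along the line `offset`
    rows below the edge v1 -> v2, divided by the sum along the line
    `offset` rows above it. Vertices are (row, col) pairs; 'below'
    means a larger row index in image coordinates."""
    (r1, c1), (r2, c2) = v1, v2
    if c1 == c2:
        raise ValueError("vertical edge: per-column sampling undefined")
    below, above = 0.0, 0.0
    step = 1 if c2 > c1 else -1
    for c in range(c1, c2 + step, step):
        # row of the edge at this column, by linear interpolation
        r = r1 + (r2 - r1) * (c - c1) / (c2 - c1)
        rb = min(image.shape[0] - 1, int(round(r)) + offset)
        ra = max(0, int(round(r)) - offset)
        below += image[rb, c]
        above += image[ra, c]
    return below / above if above else float("inf")
```

A ratio well above or below one indicates a sharp intensity transition across the candidate edge, which is what makes this feature useful for learning the slope between two vertices.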
  • the polygon score calculator 110 can be configured to calculate a score for each candidate polygon using the extracted feature functions and a trained weight vector corresponding to the polygon type with the same number of vertices as the candidate polygon.
  • the predicted polygon selector 112 can be configured to determine a candidate polygon with a highest calculated score to be a predicted polygon for the DE image, whereby the predicted polygon represents a pattern enclosed in the DE image, as will be described in further details below with reference to Fig. 2.
  • a clinical parameter extracting module 118 as may be comprised in the processing unit 102 may be configured to extract clinical parameters of the DE image based on the predicted polygon for assisting diagnosis of heart conditions.
  • the system 100 may comprise the I/O interface 114 and the storage module 116, both operatively coupled to the other functional components described above.
  • the I/O interface 114 may be configured to obtain certain inputs for the execution of the functional components, such as, e.g., a given DE image, and/or one or more trained weight vectors, and/or a plurality of candidate polygons.
  • the I/O interface 114 may be configured to provide the predicted polygon, and/or extracted clinical parameters of the DE image as output to the user.
  • the I/O interface 120 may be configured to provide any of the intermediate calculation results to the user.
  • the inputs or outputs can be provided to the user through a display unit 124, such as, e.g., a display screen.
  • the storage module 116 may comprise a non-transitory computer readable storage medium.
  • the storage module 116 may comprise a buffer that holds relevant inputs, outputs as well as intermediate results as above mentioned to be used for the execution of the system 100.
  • the storage module 116 may comprise computer-readable instructions embodied therein to be executed by the processing unit 102 for implementing the process of automatic analysis of the DE image as will be described below with reference to Fig. 2.
  • system 100 may correspond to some or all of the stages of the methods described with respect to Figs. 2- 5.
  • the methods described with respect to Figs. 2-5 and their possible implementations may be implemented by system 100. It is therefore noted that embodiments discussed in relation to the methods described with respect to Figs. 2-5 may also be implemented, mutatis mutandis as various embodiments of the system 100, and vice versa.
  • Fig. 2 there is shown a generalized flowchart of automatically analyzing a Doppler echocardiography (DE) image, in accordance with certain embodiments of the presently disclosed subject matter.
  • the analysis process as shown in Fig. 2 may also be referred to as an inference or prediction process.
  • a DE image containing a pattern is considered.
  • the pattern may be described as a set of vertices connected by edges to form a polygon.
  • Inference refers to the process of predicting the best enclosing polygon for a pattern enclosed in the DE image.
  • a given DE image can be obtained (e.g., by the I/O interface 114) as an input to the analysis process.
  • the DE image may be of any type that can be used during clinical practices.
  • the DE image can be selected from a group comprising: AR image, LVOT image, and MV image.
  • the DE image may be obtained locally from a storage module 116 or alternatively received from an external source, such as, e.g., a user, a third-party provider or any clinical device or equipment that obtains or generates such an image during clinical practice.
  • the DE image may be of an unknown size.
  • a set of DE images each containing a pattern.
  • Each DE image may have its own dimensions, thus dCols and dRows are not fixed and differ for each DE image.
  • the system 100 attempts to predict the structure of the polygon vertices.
  • the obtained DE image may be scaled (e.g., by the image scaling module 104) prior to the analysis process due to the fact that the original x-axis scale (e.g., SCALE_x, the interval of time represented by a single pixel in the input DE image) and y-axis scale (e.g., SCALE_y, the interval of velocity represented by a single pixel in the input DE image) may differ from image to image.
  • SCALE_x and SCALE_y may be set as fixed target values, and the original image may be re-sampled according to the set scales.
  • the result of the image scaling is a scaled DE image with constantly scaled x- axis and y-axis that may be used as input to the analysis process as will be described below. It is to be noted that the above scaling example is for illustration purposes only and should not be construed to limit the scope of the present disclosure in any way. Other suitable ways of image scaling and other x-axis and y-axis scales can be used in lieu of the above.
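A minimal sketch of such rescaling, using the 5 [millisec] and 0.5 [cm/sec] per-pixel targets quoted earlier; the nearest-neighbour resampling and all names here are assumptions standing in for any proper interpolation scheme:

```python
def rescale_de_image(image, scale_x, scale_y, target_x=5.0, target_y=0.5):
    """Resample so each output pixel spans `target_x` ms in x and
    `target_y` cm/s in y.

    `scale_x`/`scale_y` are the original per-pixel intervals; simple
    nearest-neighbour sampling is used for illustration only.
    """
    rows, cols = len(image), len(image[0])
    new_cols = max(1, round(cols * scale_x / target_x))
    new_rows = max(1, round(rows * scale_y / target_y))
    return [[image[min(rows - 1, int(r * rows / new_rows))]
                  [min(cols - 1, int(c * cols / new_cols))]
             for c in range(new_cols)]
            for r in range(new_rows)]
```

After this step every input image shares the same per-pixel time and velocity scales, so feature functions can be compared across images.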
  • One or more trained weight vectors may be obtained (202) (e.g., via the I/O interface 114), each trained weight vector corresponding to a polygon type with a fixed number of vertices.
  • the trained weight vectors may be obtained from a training process based on a plurality of annotated DE images (e.g., previous clinical DE images annotated manually).
  • the trained weight vectors may be obtained locally from a storage module or alternatively be provided or received from a user, a third party provider or any other system that has derived such weight vectors, e.g., from the training process.
  • the method as described with reference to Fig. 2 herein may comprise performing the training process and generating the trained weight vectors, e.g., by the processing unit 102.
  • a set of annotated images including each of the types and sub-types may be provided.
  • the annotation of each sub-type of an image type may be represented by a polygon of a fixed number of vertices (i.e. a polygon type).
  • the training process may generate a classifier that given an input image finds a number of vertices of the polygon, e.g. the sub-type, as well as the actual location of the vertices in the image.
  • Such structural output of the classifier may be obtained using the structured learning framework, which minimizes the errors in predicting the output structure.
  • This approach may be considered as though a separate structured prediction model is maintained for each sub-type, but the sub-types' models may be trained together, e.g. in a coupled way.
  • Image types that have a single sub-type are then just a special, simplified example of this general approach.
  • an annotation which may be in the form of a polygon that encloses the pattern in the image.
  • the annotation is the correct solution to the analysis process. Training is then the process that given a set of images and annotations, may learn to predict the enclosed polygon of a new, unseen image. The training process may learn the statistical connection between the pattern in the image and its given annotation, so that the analysis and prediction process as will be described below may apply it to unseen images.
  • each image is taken from the set of all images x ∈ X.
  • a function f_w: X → Y may be defined for each DE type, which learns the statistical connection between the image x ∈ X and its annotation y ∈ Y.
  • Weight vector w ∈ ℝ^N may be a vector of the parameters of the function. Practically a trained weight vector w is the output of the training process. It is learnt using an iterative structured prediction algorithm as will be explained below.
  • weight vector refers to the weighting to be applied to a specific polygon type as compared to other polygon types when predicting which candidate polygon may be a best matching polygon to represent the pattern enclosed in the DE image.
  • the weight vector may be implemented as a vector or a list of parameters that correspond to different feature functions (e.g., vertex related feature functions and/or edge related feature functions) of a polygon.
  • FIG. 3 there is illustrated a generalized flowchart of performing a training process to generate trained weight vectors in accordance with certain embodiments of the presently disclosed subject matter.
  • a training set of annotated DE images may be obtained (302) (e.g., by the I/O interface 114). Each of the annotated DE images is associated with an annotated polygon (e.g., previously annotated manually).
  • the training set of annotated DE images may be pre-stored in the storage module 116. They may also be received from an external source, such as, e.g., from a user, or a third-party provider or any other system or device operatively connected to the system 100.
  • Values of the one or more weight vectors each corresponding to a polygon type with a fixed number of vertices can be initially set or determined (304) (e.g., by the processing unit 102), e.g., as default values or based on previous knowledge or experience.
  • the values of the one or more weight vectors may be updated (306) (e.g., by the processing unit 102) iteratively by processing each annotated DE image within the training set and the annotated polygon associated therewith, giving rise to the trained weight vectors. The updating of the values of the one or more weight vectors will be further elaborated below with respect to Fig. 4.
  • a plurality of candidate polygons belonging to at least one polygon type may be obtained (204) (e.g., by the I/O interface 114).
  • the at least one polygon type may include some or all polygon types that correspond to a certain type of DE image. For instance, if there are three polygon types (e.g., polygon types with three, five and six vertices) corresponding to a certain type of DE image (e.g., MV image), a set of candidate polygons belonging to each of the three polygon types may be obtained.
  • the obtained trained weight vectors and candidate polygons can correspond to at least two polygon types, and the DE image can be analyzed at least partially simultaneously for the at least two polygon types.
  • the at least two polygon types can include two or more polygon types each with a fixed number of vertices selected from the following: three vertices, four vertices, five vertices and six vertices.
  • the candidate polygons may be received from an external source, such as, e.g., from a user, a third-party provider or any other system that has obtained such candidate polygons, e.g., by searching through all possible polygon configurations for the given DE image.
  • the search process may be performed, e.g., by the processing unit 102, in particular the searching module 106 as comprised therein, for searching the plurality of candidate polygons, as will be described in further detail below with reference to Fig. 5.
  • a plurality of feature functions may be extracted (206) (e.g., by the feature function extractor 108) from the given DE image for each candidate polygon.
  • the feature functions may include vertex related feature functions and edge related feature functions.
  • For a candidate c-vertex polygon y, its feature vector Φ(x, y) can be computed from the image and the polygon.
  • Φ = {φ_n}, n = 1..N, is a set of N feature functions, each of the form φ_n: X × Y_c → ℝ.
  • each DE image is taken from the set of all images x ∈ X.
  • |C| may also be the number of sub-types.
  • each sub-type of an image type may be represented by a certain polygon type with a fixed number of vertices.
  • c ∈ C = {3, 5, 6}.
  • the set of all polygons with c vertices can be denoted by Y_c.
  • a score may be calculated (208) (e.g., by the polygon score calculator 110) for each candidate polygon using the extracted feature functions and a trained weight vector corresponding to the polygon type with the same number of vertices as the candidate polygon.
  • a candidate polygon with a highest calculated score can be determined (210) (e.g., by the predicted polygon selector 112) to be a predicted polygon for the DE image.
  • the score may be calculated as, e.g., dot product of the extracted feature functions and the corresponding trained weight vector w, which is the output of the training process as described with respect to Figs. 3 and 4.
  • the dot product w · Φ(x, y) is the score of the specific candidate polygon y.
  • a score for each candidate polygon may be derived and the candidate polygon with a highest or maximal score is the best polygon for this sub-type c.
  • the calculation for all sub-types c ∈ C can be repeated, and a candidate polygon with a highest score can be the best matching polygon ŷ across all possible polygons in Y and C, which is the predicted polygon representing the pattern enclosed in the DE image.
  • the sequence of the operations of calculating and determining can be slightly different. For instance, the score for each candidate polygon in all sub-types c ∈ C may be calculated first, and then the candidate polygon with a highest score among all the calculated scores may be determined as the predicted polygon.
  • the prediction process of ŷ can be implemented according to the following equation: ŷ = argmax_{c ∈ C} max_{y ∈ Y_c} w_c · Φ(x, y).
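This scored argmax over all candidates of all sub-types can be sketched as follows; feature extraction is abstracted as a callable, and all names here are illustrative assumptions:

```python
def predict_polygon(candidates_by_subtype, weights_by_subtype, features):
    """Return the candidate polygon with the highest score w_c . Phi(x, y)
    across all sub-types c.

    candidates_by_subtype: {c: [polygon, ...]}
    weights_by_subtype:    {c: weight vector for c-vertex polygons}
    features(polygon):     feature vector Phi, same length as w_c
    """
    best, best_score = None, float('-inf')
    for c, candidates in candidates_by_subtype.items():
        w = weights_by_subtype[c]
        for y in candidates:
            score = sum(wi * fi for wi, fi in zip(w, features(y)))
            if score > best_score:
                best, best_score = y, score
    return best, best_score
```

The outer loop over sub-types is what couples the models: the winner is the highest-scoring polygon across all vertex counts, not just within one sub-type.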
  • clinical parameters of the DE image can be extracted (212) (e.g., by the clinical parameter extracting module 118) based on the predicted polygon for assisting diagnosis of heart conditions.
  • the MV image 1000 may include an e-wave peak 1010 and an a-wave peak 1020.
  • the order of the peaks may constrain an MV polygon to an exemplary characteristic shape of two triangles.
  • the waves may be separated by a gap or trough 1030, which may represent the end point of an e-wave.
  • the MV polygon may be required to designate specific landmarks and should be able to describe a sequence of two triangles.
  • Figs. 6A-6C which exemplify three different types of DE images
  • different clinical parameters may be extracted for different types of images.
  • the first wave is the E-wave and the second is the A-wave. The peaks of these waves, the E peak and the A peak, and the trough between them can be extracted.
  • E/A Ratio: the ratio between the E peak and the A peak velocities, v_Epeak / v_Apeak.
  • this parameter isn't calculated.
  • Deceleration Time: the time duration from the time designated by the E peak to the time the E wave has ended.
  • x_DT is defined and computed as the x-axis intercept of the line that passes through the E peak and the trough vertex to its right.
  • DT = x_DT − x_E can also be computed.
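A sketch of these two MV parameters from predicted vertices; the vertex layout (E peak, trough to its right, A peak, each as a (time, velocity) pair) and all names are assumptions for illustration:

```python
def mv_parameters(e_peak, trough, a_peak):
    """Compute the E/A ratio and deceleration time (DT).

    DT is measured from the E peak to x_DT, the x-axis intercept of the
    line through the E peak and the trough vertex to its right.
    """
    (x_e, v_e), (x_t, v_t), (x_a, v_a) = e_peak, trough, a_peak
    ea_ratio = v_e / v_a
    slope = (v_t - v_e) / (x_t - x_e)   # velocity drop per unit time
    x_dt = x_e - v_e / slope            # where the line crosses v = 0
    return ea_ratio, x_dt - x_e         # (E/A ratio, DT)
```

For an E peak of 80 cm/s at 100 ms and a trough of 40 cm/s at 200 ms, the line reaches zero velocity at 300 ms, giving a DT of 200 ms.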
  • the AR pattern can typically be described as quadrilateral.
  • the predicted polygon ŷ is a 4-vertex polygon.
  • AR clinical parameters that can be extracted can include the following: the polygon's first peak, i.e., the polygon's second vertex, and/or its slope to the second peak (the third vertex).
  • Fig. 6B: The LVOT pattern is enclosed by a curve that starts and ends below the x-axis.
  • the predicted polygon y of the LVOT pattern is a polygon of n-vertices, where n is a predefined constant.
  • the extracted clinical parameters can include the Time Velocity Integral (TVI) below the curve that can be computed as the integral of the area enclosed by the predicted polygon.
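The area enclosed by the predicted polygon can be computed with the shoelace formula; a minimal sketch (vertices given as (time, velocity) pairs, axis-scale conversion omitted, names assumed):

```python
def time_velocity_integral(polygon):
    """Area enclosed by the predicted polygon via the shoelace formula,
    standing in for the integral under the LVOT curve (TVI)."""
    n = len(polygon)
    twice_area = sum(polygon[i][0] * polygon[(i + 1) % n][1]
                     - polygon[(i + 1) % n][0] * polygon[i][1]
                     for i in range(n))
    return abs(twice_area) / 2.0
```

Because the polygon is a piecewise-linear approximation of the curve, the shoelace area is exactly the trapezoidal approximation of the integral it replaces.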
  • FIG. 4 shows a drill down process of the updating operation in block 306 as described with respect to Fig. 3.
  • the updating is an iterative two stage process.
  • a single DE image together with its annotation may be processed to get an update of weight vector w.
  • the first stage is performed in a similar manner as the inference process as described above with respect to Fig. 2, in order to predict the best matching polygon for a given annotated image.
  • a plurality of training candidate polygons belonging to at least one polygon type may be obtained (402) (e.g., by the I/O interface 114).
  • the obtaining operation may be performed in a similar way as described in block 204.
  • the training candidate polygons may also be generated using the search process as will be described below with respect to Fig. 5.
  • a plurality of feature functions can be extracted (404) (e.g., by the feature function extractor 108) from the annotated DE image for each training candidate polygon, including vertex related feature functions and edge related feature functions.
  • the extraction operation may be performed in a similar way as described in block 206.
  • a loss value may be calculated (406) representing a level of dissimilarity between the training candidate polygon and the annotated polygon associated with the annotated DE image.
  • the loss value may be calculated based on a cost function indicative of a distance between the training candidate polygon and the annotated polygon associated with the annotated DE image.
  • the loss function γ may be a function of the sum of the Euclidean distances between all matching pairs of vertices of (ŷ, y). If the predicted polygon is ŷ, and the correct polygon is y, the cost function can be defined as γ: Y_c × Y_c → ℝ. (3)
  • For an MV image, the Euclidean distance between each predicted vertex and its matching annotated vertex may be determined via:
  • ε is a user-defined parameter that allows small annotation inaccuracies and inconsistencies
  • γ_max is a maximum loss that is used during training. If the sub-type predicted during the update stage differs from the annotated sub-type, the constant γ_max is used instead of the computed γ.
  • the cost is defined to be the Euclidean distance between the predicted vertices and the manually annotated vertices.
  • the threshold is a user-defined parameter that allows small annotation inaccuracies and inconsistencies: for example, differences between the manually annotated pattern y and the predicted pattern ŷ which are less than 5 pixels are not counted in the cost function.
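A sketch of this thresholded loss; the 5-pixel tolerance follows the example above, while the γ_max value and vertex pairing by order are assumptions:

```python
import math

def polygon_loss(predicted, annotated, eps=5.0, gamma_max=100.0):
    """Sum of per-vertex Euclidean distances between matching vertices,
    ignoring distances below `eps`; returns the constant `gamma_max`
    when the predicted sub-type (vertex count) differs from the
    annotated one, as described above."""
    if len(predicted) != len(annotated):
        return gamma_max
    loss = 0.0
    for (px, py), (ax, ay) in zip(predicted, annotated):
        d = math.hypot(px - ax, py - ay)
        if d >= eps:       # small inaccuracies are not counted
            loss += d
    return loss
```

Zero loss therefore means every predicted vertex landed within the tolerance of its annotated counterpart, not necessarily exactly on it.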
  • a score may be calculated (408) (e.g., by polygon score calculator 110) for each training candidate polygon using the extracted feature functions, a weight vector corresponding to a polygon type with the same number of vertices as the training candidate polygon, and the loss value.
  • the loss value may be added to the dot product Φ(x, y) · w to get the score of each training candidate polygon y.
  • a training candidate polygon with a highest calculated score can be determined (410) (e.g., by the predicted polygon selector 112) to be a predicted polygon for the annotated DE image.
  • Blocks 408 and 410 can be performed in a similar manner as described in blocks 208 and 210.
  • the prediction process of the best polygon ŷ for the annotated DE image can be implemented according to the following equation: ŷ = argmax_{c ∈ C} max_{y ∈ Y_c} [w_c · Φ(x, y) + γ(y, ȳ)], where ȳ is the annotated polygon.
  • the second stage uses the best polygon to train, i.e. update the weight vector w_t to the new w_(t+1).
  • at least a weight vector corresponding to the polygon type with the same number of vertices as the annotated polygon can be updated (412) (e.g., by weight vector updating module 122), using the predicted polygon, the annotated polygon and the loss value.
  • the weight vector corresponding to the polygon type with the same number of vertices as the annotated polygon may be updated, using the predicted polygon, the annotated polygon and the loss value. Otherwise both the weight vector corresponding to the polygon type with the same number of vertices as the annotated polygon, and the weight vector corresponding to the polygon type with the same number of vertices as the predicted polygon may be updated, using the predicted polygon, the annotated polygon, and the loss value.
  • weight vector as described with respect to block 412 in Fig. 4. Let w_t^c be the weight vector for the sub-type that has c vertices, after the t-th iteration. Note there are generally several such vectors according to the number of sub-types c in the image type being trained.
  • w^ĉ can be designated as the vector that matches the predicted polygon and w^c̄ can be designated as the one that matches the annotation.
  • the update rule increases the weight vector w^c̄, since this example's labeled polygon type was c̄, and decreases the weight vector w^ĉ, which was too high relative to w^c̄ for this example.
  • FIGs. 8A-8C illustrate an example of a shift of 1 pixel in a predicted polygon vertex relative to an annotated polygon vertex, given one possible pattern in accordance with certain embodiments of the presently disclosed subject matter.
  • Fig. 8A shows the predicted polygon vertex
  • Fig. 8B shows the annotated polygon vertex
  • Fig. 8C shows the difference therebetween.
  • FIG. 5 there is illustrated a generalized flowchart of a searching process for the plurality of candidate polygons from all possible polygon configurations within a DE image in accordance with certain embodiments of the presently disclosed subject matter.
  • the search process may output a list of polygons which includes the candidate polygons to become the best polygon ŷ.
  • the operations related to blocks 502-506 are performed. Specifically, for each vertex of a given polygon type, a score map can be calculated (502) (e.g., by the searching module 106).
  • the score map includes a plurality of fitness scores each for a possible location of the vertex in the DE image.
  • the fitness score is an indication for evaluating the fitness of each polygon to become the best. According to certain embodiments, it can be calculated based on a weight vector corresponding to the polygon type and a feature function related to the vertex.
  • a fitness score may be calculated as the sum over the score of each vertex j of the polygon. This score, independently computed per vertex, is w_j · φ_j, where w_j is the part of the weight vector that relates to the specific vertex j and φ_j is accordingly the part of the feature vector that relates to vertex j.
  • φ_j may be a vector of intensities representing a rectangle of pixels around the vertex.
  • a set of possible locations with the highest fitness scores can be selected (504) (e.g., by the searching module 106) as a set of candidate locations for each vertex.
  • the n_j locations with the highest scores can be selected as candidate locations for this vertex. Repeating this for all vertices gives c sets of candidates of varying size n_j.
  • a member from each set of candidate locations for a vertex (i.e., a permutation that takes a member from each set) can be selected (506) (e.g., by the searching module 106) to form a candidate polygon.
  • the selection of a member can be repeated for each candidate location in each set, giving rise to a set of candidate polygons for each polygon type.
  • the plurality of candidate polygons may be generated including one or more sets of candidate polygons each for a respective polygon type.
  • the score of a vertex is an invariant property of the specific vertex location in the image given the value of w in a specific iteration and is independent of the other vertices of the polygon. This means it is not needed to re-compute the score of the candidate polygons, just sum over the vertices' scores and pick the highest-scoring ones.
  • while all the permutations do form polygons, not all are valid.
  • the list of candidate polygons can be further reduced by applying certain inherent constraints related to each image type. For example, in the MV image type the vertex representing the E peak must be of a bigger velocity than the trough vertex between itself and the A peak. Similarly, the velocity of the A peak must be bigger than that of the trough vertex. In this manner, the list of candidate polygons can be constrained and be left only with valid polygons.
  • the number of possible polygon configurations has been reduced to a small number of candidate polygons that are the best candidates according to the current weight vector.
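The per-vertex search can be sketched as follows; the score maps are abstracted as dictionaries, a fixed top-n per vertex is assumed, and the image-type constraints are left as a caller-supplied predicate:

```python
from itertools import product

def search_candidates(score_maps, n_per_vertex=3, is_valid=lambda poly: True):
    """Keep the top-n locations per vertex by fitness score, then form
    candidate polygons as all cross-set combinations (one location per
    vertex), filtered by the image-type validity constraints.

    score_maps: list (one entry per vertex) of {location: fitness score}.
    """
    top = [sorted(sm, key=sm.get, reverse=True)[:n_per_vertex]
           for sm in score_maps]
    return [list(combo) for combo in product(*top) if is_valid(combo)]
```

The pruning happens in two places: the top-n cut per vertex bounds the number of permutations, and the validity predicate removes combinations that break constraints such as the E-peak/trough velocity ordering.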
  • the output list of the candidate polygons can be used in both the inference process and the training process to compute a full score for each candidate polygon.
  • the computation of a full score can be done by re-using the already computed sum of w_j · φ_j over all j's (note that in the search process only the vertex related feature functions are used) and adding certain more global features that relate to more than a single vertex, such as, e.g., edge related feature functions.
  • the predicted polygon y is the polygon with the highest score.
  • the training process as described in Figs. 3 and 4 and the searching process as described in Fig. 5 can be either implemented respectively in a standalone computer or processor that is operatively connected with system 100, or alternatively their functionality, or at least part thereof, can be included as part of the analysis process (i.e. the inference) as described with reference to Fig. 2 and performed by system 100.
  • Fig. 9 shows a general flowchart of a method for training a system to predict polygon vertices, in accordance with certain embodiments of the presently disclosed subject matter.
  • Step 900 discloses processing the DE image.
  • Step 910 discloses annotation adjusting.
  • Step 920 discloses extracting features from the DE image.
  • Step 930 discloses searching candidates.
  • Step 940 discloses predicting the polygon.
  • Step 950 discloses updating the weighted vectors. In some cases, the steps 930, 940 and 950 are continuously performed to achieve a most accurate prediction.
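Steps 930 to 950 can be sketched as a structured-perceptron loop over toy examples; everything here (feature representation, learning rate, mistake-driven update) is an illustrative assumption, not the disclosed implementation:

```python
def train_weights(examples, subtypes, dim, epochs=5, lr=0.1):
    """Toy loop over (candidates, true_index) examples, where each
    candidate is a (subtype, feature_vector) pair. On a mistaken
    prediction, the annotated sub-type's weight vector is pulled toward
    its features and the predicted one pushed away (steps 930-950)."""
    weights = {c: [0.0] * dim for c in subtypes}          # initialise
    for _ in range(epochs):
        for candidates, true_idx in examples:
            # step 940: predict the highest-scoring candidate
            scores = [sum(w * f for w, f in zip(weights[c], phi))
                      for c, phi in candidates]
            pred_idx = max(range(len(candidates)), key=scores.__getitem__)
            if pred_idx != true_idx:                      # step 950: update
                c_t, phi_t = candidates[true_idx]
                c_p, phi_p = candidates[pred_idx]
                weights[c_t] = [w + lr * f
                                for w, f in zip(weights[c_t], phi_t)]
                weights[c_p] = [w - lr * f
                                for w, f in zip(weights[c_p], phi_p)]
    return weights
```

When the predicted and annotated sub-types coincide, the two updates collapse into the usual single-vector perceptron step, matching the coupled-training description above.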
  • Figs. 11A-11B show the MV loss function for two given annotations, in accordance with certain embodiments of the presently disclosed subject matter. Fig. 11A shows annotations that may have a same number of vertices. Fig. 11B shows that the number of vertices may be different for each annotation.
  • Fig. 12 shows an example of an operation to correct a DE image, in accordance with certain embodiments of the presently disclosed subject matter.
  • the first DE image 1200 shows an exemplary basic DE image.
  • the second DE image 1250 shows a filtered DE image with the vertices constructing the polygon.
  • the present invention may be a system, a method, and/or a computer program product.
  • the computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
  • the computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device.
  • the computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • a non- exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device having instructions recorded thereon, and any suitable combination of the foregoing.
  • a computer readable storage medium is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Rather, the computer readable storage medium is a non-transitory (i.e., non-volatile) medium.
  • Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
  • the network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
  • Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
  • the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
  • These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures.
  • two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.


Abstract

A computer-implemented method and system for automatic analysis of a Doppler echocardiography (DE) image, the method comprising: obtaining one or more trained weight vectors, each corresponding to a polygon type with a fixed number of vertices; obtaining a plurality of candidate polygons belonging to at least one polygon type; extracting a plurality of feature functions from the DE image for each candidate polygon, including vertex-related feature functions and edge-related feature functions; computing a score for each candidate polygon using the extracted feature functions and a trained weight vector corresponding to the polygon type with the same number of vertices as the candidate polygon; and determining a candidate polygon having a highest computed score to be a predicted polygon for the DE image, the predicted polygon representing a pattern contained in the DE image.
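The scoring loop described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the feature functions `vertex_features` and `edge_features` are hypothetical stand-ins (simple intensity statistics), and `weights_by_vertex_count` assumes one trained weight vector per polygon type, keyed by its number of vertices.

```python
import numpy as np

def vertex_features(image, vertex):
    # Hypothetical vertex-related feature function: intensity statistics
    # over a 3x3 patch centered on the vertex (x, y).
    x, y = vertex
    patch = image[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
    return np.array([patch.mean(), patch.max()])

def edge_features(image, v0, v1):
    # Hypothetical edge-related feature function: intensity statistics
    # sampled along the straight segment between two vertices.
    xs = np.linspace(v0[0], v1[0], 10).astype(int)
    ys = np.linspace(v0[1], v1[1], 10).astype(int)
    samples = image[ys, xs]
    return np.array([samples.mean(), samples.std()])

def polygon_features(image, polygon):
    # Concatenate vertex- and edge-related features for one candidate polygon.
    parts = []
    n = len(polygon)
    for i, v in enumerate(polygon):
        parts.append(vertex_features(image, v))
        parts.append(edge_features(image, v, polygon[(i + 1) % n]))
    return np.concatenate(parts)

def predict_polygon(image, candidates, weights_by_vertex_count):
    # Score each candidate with the trained weight vector matching its
    # vertex count; the highest-scoring candidate is the predicted polygon.
    best, best_score = None, float("-inf")
    for poly in candidates:
        w = weights_by_vertex_count[len(poly)]
        score = float(w @ polygon_features(image, poly))
        if score > best_score:
            best, best_score = poly, score
    return best
```

Because each weight vector is tied to a fixed vertex count, candidates of different polygon types (e.g., triangles and quadrilaterals) can compete in the same argmax without padding their feature vectors to a common length.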
PCT/IL2017/050308 2016-03-10 2017-03-09 Method and system for automatic analysis of Doppler echocardiography images WO2017154008A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662306108P 2016-03-10 2016-03-10
US62/306,108 2016-03-10

Publications (1)

Publication Number Publication Date
WO2017154008A1 true WO2017154008A1 (fr) 2017-09-14

Family

ID=59790113

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2017/050308 WO2017154008A1 (fr) 2016-03-10 2017-03-09 Method and system for automatic analysis of Doppler echocardiography images

Country Status (1)

Country Link
WO (1) WO2017154008A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114255226A (zh) * 2021-12-22 2022-03-29 Beijing Ande Yizhi Technology Co., Ltd. Doppler whole-process quantitative analysis method and apparatus, electronic device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004084735A1 (fr) * 2003-03-25 2004-10-07 Uscom Limited Determination of parameter information from a blood flow signal
US20100137717A1 (en) * 2005-03-15 2010-06-03 Robert Strand Automatic Flow Tracking System and Method
WO2013002480A1 (fr) * 2011-06-28 2013-01-03 Alpinion Medical Systems Co., Ltd. Vector interpolation device and method for an ultrasound image

Similar Documents

Publication Publication Date Title
US10769487B2 (en) Method and device for extracting information from pie chart
US10769484B2 (en) Character detection method and apparatus
US9613298B2 (en) Tracking using sensor data
JP5957629B1 Method and apparatus for automatically displaying structural shapes of an image to guide a care plan
CN109214436A Prediction model training method and apparatus for a target scenario
US11688077B2 (en) Adaptive object tracking policy
CN111488873A Character-level scene text detection method and apparatus based on weakly supervised learning
CN115082920A Deep learning model training method, image processing method and apparatus
CN113642431A Target detection model training method and apparatus, electronic device and storage medium
CN110909868A Node representation method and apparatus based on a graph neural network model
US20180204084A1 (en) Ensemble based labeling
CN112101355A Method, apparatus, electronic device and computer medium for detecting text in an image
CN115147680A Pre-training method, apparatus and device for a target detection model
CN115359308A Model training and hard-example recognition method, apparatus, device, storage medium and program
KR20220049573A Distance-based learning confidence model
CN114511743A Detection model training and target detection method, apparatus, device, medium and product
US20220253426A1 (en) Explaining outliers in time series and evaluating anomaly detection methods
WO2017154008A1 (fr) Method and system for automatic analysis of Doppler echocardiography images
KR20150131537A Object tracking apparatus and object tracking method thereof
CN116433722A Target tracking method, electronic device, storage medium and program product
US11158059B1 (en) Image reconstruction based on edge loss
CN113657482A Model training method, target detection method, apparatus, device and storage medium
CN115471714A Data processing method, apparatus, computing device and computer-readable storage medium
CN112541915A Efficient fabric defect detection method, system and device for high-resolution images
CN113971183A Entity labeling model training method, apparatus and electronic device

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17762652

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17762652

Country of ref document: EP

Kind code of ref document: A1