US20230104045A1 - System and method for ultrasound analysis

Info

Publication number
US20230104045A1
Authority
US
United States
Prior art keywords
exemplary
procedure
heart
patient
imaging information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US18/048,873
Inventor
Itay KEZURER
Achiau Ludomirsky
Yaron Lipman
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yeda Research and Development Co Ltd
New York University NYU
Original Assignee
Yeda Research and Development Co Ltd
New York University NYU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yeda Research and Development Co Ltd and New York University
Priority to US18/048,873
Assigned to NEW YORK UNIVERSITY. Assignment of assignors interest (see document for details). Assignors: LUDOMIRSKY, Achiau
Assigned to YEDA RESEARCH AND DEVELOPMENT CO. LTD. Assignment of assignors interest (see document for details). Assignors: KEZURER, Itay; LIPMAN, Yaron
Publication of US20230104045A1

Classifications

    • A61B 8/5215: Diagnosis using ultrasonic, sonic or infrasonic waves; devices using data or image processing involving processing of medical diagnostic data
    • A61B 6/503: Apparatus or devices for radiation diagnosis specially adapted for specific body parts or clinical applications; for diagnosis of the heart
    • A61B 8/0883: Detecting organic movements or changes, e.g. tumours, cysts, swellings; for diagnosis of the heart
    • A61B 8/488: Diagnostic techniques involving Doppler signals
    • A61B 8/5207: Devices using data or image processing involving processing of raw data to produce diagnostic data, e.g. for generating an image
    • A61B 8/5223: Devices using data or image processing for extracting a diagnostic or physiological parameter from medical diagnostic data
    • G06N 20/00: Machine learning
    • G06N 3/02, G06N 3/08: Computing arrangements based on biological models; neural networks; learning methods
    • G06T 17/20: Three-dimensional [3D] modelling; finite element generation, e.g. wire-frame surface description, tesselation
    • G06T 7/0012: Image analysis; inspection of images; biomedical image inspection
    • G06T 7/11: Segmentation; edge detection; region-based segmentation
    • G06T 7/194: Segmentation involving foreground-background segmentation
    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06V 10/82: Image or video recognition or understanding using pattern recognition or machine learning; using neural networks
    • G16H 30/40: ICT specially adapted for processing medical images, e.g. editing
    • G16H 50/20: ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
    • G16H 50/30: ICT for calculating health indices; for individual health risk assessment
    • G06T 2207/10132: Image acquisition modality: ultrasound image
    • G06T 2207/20081: Special algorithmic details: training; learning
    • G06T 2207/20084: Special algorithmic details: artificial neural networks [ANN]
    • G06V 2201/03: Recognition of patterns in medical or anatomical images

Definitions

  • the present disclosure relates generally to use of an ultrasound apparatus, and more specifically, to exemplary embodiments of an exemplary system, method and computer-accessible medium for ultrasound analysis.
  • An echocardiogram, or ultrasound of the heart, is a common clinical tool for assessing a heart condition, and for identifying and diagnosing certain heart diseases.
  • a significant amount of the analysis and processing of the input ultrasound movie clips is performed by a technician or a physician (“user”).
  • Such manual analysis has several drawbacks, for example: (i) it increases the chance of error, (ii) it needs a skilled user, (iii) it limits the throughput by the analyzing speed and skill of the user and (iv) due to time complexity, only several frames from the clips are fully analyzed while information in other frames is left unused.
  • Cardiac ultrasound can be the preferred modality for the assessment of cardiac anatomy, function and structural anomalies.
  • routine cardiac ultrasound examination lasts between about 30 and about 40 minutes, and can include: (i) acquisition of the data by ultrasound and Doppler procedures, (ii) analysis of ventricular function and multiple measurements of the different parts of the cardiac structure, and (iii) a report that can be incorporated directly in the electronic medical record.
  • An exemplary system, method and computer-accessible medium for detecting an anomaly(ies) in an anatomical structure(s) of a patient(s) can be provided, which can include, for example, receiving imaging information related to the anatomical structure(s) of the patient(s), classifying a feature(s) of the anatomical structure(s) based on the imaging information using a neural network(s), and detecting the anomaly(ies) based on data generated using the classification procedure.
  • the imaging information can include at least three images of the anatomical structure(s).
  • the imaging information can include ultrasound imaging information.
  • the ultrasound imaging information can be generated using, e.g., an ultrasound arrangement.
  • the anatomical structure(s) can be a heart.
  • the state(s) of the anatomical structure(s) can include (i) a systole state of a heart of the patient(s), (ii) a diastole state of the heart of the patient(s), (iii) an inflation state of the heart of the patient(s) or (iv) a deflation state of the heart of the patient(s).
  • the feature(s) can be classified using a view detection procedure, which can include detecting a view of a particular imaging frame in the imaging information.
  • the anatomical structure(s) can be segmented, e.g., using a part segmentation procedure and a localization procedure before the detection of the anomaly(ies).
  • the part segmentation procedure can be utilized to segment a left ventricle of the heart of the patient(s) from a background.
  • the localization procedure can be a valve localization procedure, which can include marking a single pixel per frame in the imaging information to place a Doppler measuring point(s).
  • the imaging information can include a plurality of images, and the neural network(s) can include a plurality of neural networks, each of which can be associated with one of the images. Each neural network can be used to classify the feature(s) in its associated image. An output produced by each of the neural networks can be concatenated (e.g., in depth). The imaging information can be upsampled.
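  • For illustration only, the following is a minimal PyTorch sketch of such an arrangement (module names and sizes are assumptions, not recited in the disclosure): one small network per image in the sequence, with the per-image outputs concatenated along the depth (channel) axis.

```python
# Hedged sketch: one network per associated image in the input sequence,
# with outputs concatenated "in depth". The disclosure elsewhere couples
# the weights of these realizations, which corresponds to reusing one module.
import torch
import torch.nn as nn

class TripletFeatures(nn.Module):
    def __init__(self, n_images=3, feat_channels=64):
        super().__init__()
        self.nets = nn.ModuleList(
            nn.Sequential(nn.Conv2d(1, feat_channels, 3, padding=1), nn.ReLU())
            for _ in range(n_images)
        )

    def forward(self, images):                    # images: (B, n_images, H, W)
        feats = [net(images[:, i:i + 1]) for i, net in enumerate(self.nets)]
        return torch.cat(feats, dim=1)            # depth-wise concatenation
```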
  • FIG. 1 is an exemplary diagram of an exemplary ultrasound analysis system according to an exemplary embodiment of the present disclosure
  • FIG. 2 A is an exemplary image of the acquisition of labels for segmentation according to an exemplary embodiment of the present disclosure
  • FIG. 2 B is an exemplary image of labels for valve localization according to an exemplary embodiment of the present disclosure
  • FIG. 3 A is an exemplary image of data for part segmentation according to an exemplary embodiment of the present disclosure
  • FIG. 3 B is an exemplary image of data for valve localization according to an exemplary embodiment of the present disclosure
  • FIG. 4 A is an exemplary diagram of network architecture used by the exemplary system according to an exemplary embodiment of the present disclosure
  • FIG. 4 B is an exemplary diagram of a Core Neural Network according to an exemplary embodiment of the present disclosure
  • FIG. 5 is a set of exemplary images of different ultrasound views of the heart and their corresponding labels according to an exemplary embodiment of the present disclosure
  • FIGS. 6 A and 6 B are exemplary diagrams of the exemplary system, method and computer-accessible medium according to an exemplary embodiment of the present disclosure
  • FIGS. 7 A and 7 B are exemplary images of the per-view probability generated by the exemplary system, method, and computer-accessible medium for two input triplet images according to an exemplary embodiment of the present disclosure
  • FIGS. 7 C and 7 D are exemplary histogram diagrams corresponding to FIGS. 7 A and 7 B , respectively, according to an exemplary embodiment of the present disclosure
  • FIG. 8 is an exemplary graph of the detection of the cardiac cycle stage according to an exemplary embodiment of the present disclosure.
  • FIGS. 9 A- 9 F are exemplary images of the per-frame stages in the cardiac cycle according to an exemplary embodiment of the present disclosure.
  • FIG. 10 is a set of images of the segmentation of the left ventricle produced by the exemplary system, method, and computer-accessible medium on four selected views according to an exemplary embodiment of the present disclosure
  • FIGS. 11 A and 11 C are exemplary images of valve localization performed using the exemplary system according to an exemplary embodiment of the present disclosure
  • FIGS. 11 B and 11 D are exemplary magnified images of the valve localization images of FIGS. 11 A and 11 C , respectively, according to an exemplary embodiment of the present disclosure
  • FIG. 12 is a set of exemplary charts of the intersection over union and relative area difference comparing the left ventricle segmentation produced by the exemplary system, method, and computer-accessible medium vs. the ground truth and a user vs. the ground truth according to an exemplary embodiment of the present disclosure;
  • FIG. 13 is an exemplary histogram diagram of the distances between the exemplary system, method, and computer-accessible medium valve localization prediction and ground truth according to an exemplary embodiment of the present disclosure
  • FIG. 14 A is an exemplary graph of a learning graph for the Diastole/Systole classification test initialized with imagenet-type network for the Core Neural Network according to an exemplary embodiment of the present disclosure
  • FIG. 14 B is an exemplary graph of a learning graph for the Diastole/Systole classification test initialized with the Core Neural Network after view detection training according to an exemplary embodiment of the present disclosure
  • FIG. 14 C is an exemplary graph of a learning graph for the Diastole/Systole classification test initialized with the Core Neural Network after being trained on a part segmentation procedure according to an exemplary embodiment of the present disclosure
  • FIG. 15 is an exemplary flow diagram illustrating an adult echocardiographic examination performed using an exemplary procedure according to an exemplary embodiment of the present disclosure
  • FIG. 16 is an exemplary diagram of an exemplary configuration of the exemplary system, method and computer-accessible medium, for use in cardiac ultrasound according to an exemplary embodiment of the present disclosure
  • FIG. 17 is an exemplary diagram illustrating the neural network core of the exemplary system, method and computer-accessible medium according to an exemplary embodiment of the present disclosure
  • FIG. 18 is an exemplary diagram illustrating potential markets in which the exemplary system, method and computer-accessible medium, can be used according to an exemplary embodiment of the present disclosure
  • FIG. 19 A is an exemplary image generated based on a user interface left drive mode according to an exemplary embodiment of the present disclosure
  • FIG. 19 B is an exemplary image generated based on a user interface right halt mode according to an exemplary embodiment of the present disclosure
  • FIGS. 20 A- 20 G are exemplary images of a 3D reconstruction of a full cardiac cycle according to an exemplary embodiment of the present disclosure
  • FIG. 21 A is a set of exemplary images of cross-sections of systole LV and diastole LV according to an exemplary embodiment of the present disclosure
  • FIG. 21 B is a set of exemplary images of the reconstructions of the 3D surface in systole and diastole according to an exemplary embodiment of the present disclosure
  • FIG. 22 is an exemplary histogram generated using the exemplary system, method and computer-accessible medium according to an exemplary embodiment of the present disclosure
  • FIG. 23 A is an exemplary histogram of EF errors generated based on an exemplary expert analysis of a cardiac cycle according to an exemplary embodiment of the present disclosure
  • FIG. 23 B is an exemplary histogram of errors generated using the exemplary system, method and computer-accessible medium according to an exemplary embodiment of the present disclosure
  • FIG. 24 is an exemplary flow diagram of an exemplary method for detecting an anomaly in an anatomical structure of a patient according to an exemplary embodiment of the present disclosure.
  • FIG. 25 is an illustration of an exemplary block diagram of an exemplary system in accordance with certain exemplary embodiments of the present disclosure.
  • exemplary embodiments of the present disclosure may be further understood with reference to the following description and the related appended drawings.
  • the exemplary embodiments are described with reference to cardiovascular imaging (e.g., using ultrasound).
  • the exemplary embodiments of the present disclosure may be implemented for imaging other tissues or organs (e.g., other than the heart) and can be used in other imaging modalities (e.g., other than ultrasound, including but not limited to MRI, CT, OCT, OFDR, etc.).
  • the exemplary system, method and computer-accessible medium can include a neural network core that can aid a healthcare provider in making diagnoses and clinical decisions that are more accurate, of better quality, and safer.
  • the exemplary neural network core can receive images from multiple imaging modalities including ultrasound, magnetic resonance imaging, positron emission scanners, computer tomography and nuclear scanners.
  • the exemplary neural network core can be used for the examination of multiple organs, and is not limited to a specific organ system.
  • the exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure can be incorporated into, or connected to, online and/or offline medical diagnostic devices, thus facilitating the operator to become more accurate and efficient.
  • the exemplary system, method and computer-accessible medium can include a Core Neural-Network (“NN”) 115 .
  • the Core NN 115 can take as input a sequence of ultrasound frames 105 , and produce high-level semantic features for the middle frame 110 in the sequence. These exemplary semantic features can be used to solve a series of recognition and analysis procedures (e.g., element 120 ).
  • a dedicated NN-based component can be built on top of the Core NN 115 , and can be tailored specifically for that procedure.
  • the exemplary procedures can be broken down into, for example, five exemplary groups: (i) View detection 125 , (ii) Systole/diastole detection 130 , (iii) Part segmentation 135 , (iv) Valve localization 140 and (v) Anomaly detection 145 .
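  • Purely as an illustrative sketch (the head shapes and names below are assumptions, not the disclosed architecture), a shared Core NN with one dedicated component per procedure can be organized as follows:

```python
# Hedged sketch of a shared Core NN (element 115) with per-procedure heads
# (elements 125-145). The Core NN is assumed to emit a feature map of shape
# (B, feat_dim, h, w); classification heads pool it to a vector first.
import torch.nn as nn

class UltrasoundMultiTask(nn.Module):
    def __init__(self, core: nn.Module, feat_dim: int):
        super().__init__()
        self.core = core                            # shared Core NN, element 115
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.view = nn.Linear(feat_dim, 6)          # view detection, element 125
        self.sys_dia = nn.Linear(feat_dim, 4)       # systole/diastole, element 130
        self.segment = nn.Conv2d(feat_dim, 2, 1)    # part segmentation, element 135
        self.valve = nn.Conv2d(feat_dim, 4, 1)      # valve localization, element 140
        self.anomaly = nn.Linear(feat_dim, 2)       # anomaly detection, element 145

    def forward(self, frames):
        f = self.core(frames)                       # (B, feat_dim, h, w)
        v = self.pool(f).flatten(1)                 # pooled vector for classifiers
        return {"view": self.view(v), "sys_dia": self.sys_dia(v),
                "segment": self.segment(f), "valve": self.valve(f),
                "anomaly": self.anomaly(v)}
```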
  • the procedures can be sorted according to their difficulty level (e.g., from easy to hard).
  • the first exemplary procedure (e.g., View detection 125 ) can be performed to detect the view of a given frame out of several potential views.
  • the views currently handled can be ones used in a standard adult echocardiogram examination: (i) apical 2-chamber view, (ii) apical 3-chamber view, (iii) apical 4-chamber view, (iv) apical 5-chamber view, (v) parasternal long axis view and (vi) parasternal short axis view.
  • the second exemplary procedure (e.g., Systole/diastole 130 ) can be performed to identify the systole/diastole in the cardiac cycle.
  • each frame can be labeled using one of the four temporal states of the left ventricle: (i) diastole, (ii) systole, (iii) inflating and (iv) deflating.
  • the third exemplary procedure (e.g., Part segmentation 135 ) can be performed to segment parts of the heart, such as the left ventricle, from the background.
  • the fourth exemplary procedure (e.g., Valve localization 140 ) can be performed to mark a single pixel per frame at which a Doppler measuring point can be placed.
  • the fifth exemplary procedure can be performed to detect and locate heart anomalies, such as pericardial effusion.
  • An exemplary observation can be that the Core NN that extracts high-level semantic features from a sequence of ultrasound images can be the same for all the procedures, and the exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can be trained by optimizing (e.g., simultaneously) all procedures and the Core NN.
  • This exemplary approach can provide the following benefits. First, building an ultrasound feature system that can be trained and used for different complementary procedures can provide versatile features that can corroborate and improve individual procedure performance. The intuition can be similar to human learning; for example, to better detect the view of an ultrasound image, it can be beneficial to detect and segment the different visible parts of the heart, and vice-versa.
  • Second, an ultrasound analysis procedure can be added with rather low amounts of data. This can be because the main part of the exemplary system, the Core NN, can already be trained to produce generic features, and all that can be left to train can be the procedure-specific part. This can facilitate adaptation of the exemplary system, method, and computer-accessible medium to new or different procedures with rather low computational complexity and time. Third, since it can usually be more difficult to obtain and/or produce data for the more difficult procedures, starting by training the system, method, and computer-accessible medium on the easier procedures provides a good starting point for more elaborate procedures.
  • the echocardiogram can often be used to extract high level quantitative information regarding the heart condition and function.
  • An exemplary archetypical example is the Ejection Fraction (“EF”).
  • the EF can be computed from various exemplary measurements including and combining view detection, systole/diastole detection, and part segmentation.
  • the exemplary system, method and computer-accessible medium can utilize an automatic pipeline for determining the EF, including producing a 3D surface reconstruction of the LV, or other parts, using the UT data (e.g., only the UT data).
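  • For reference, the quantity being automated is the standard clinical formula below, where EDV and ESV denote the end-diastolic and end-systolic LV volumes (e.g., as obtained from the 3D reconstruction); the formula itself is well known and is given here only to make the pipeline concrete:

```latex
\mathrm{EF} = \frac{\mathrm{EDV} - \mathrm{ESV}}{\mathrm{EDV}} \times 100\%
```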
  • the exemplary system, method and computer-accessible medium can include the creation of data.
  • An exemplary database consisting of 3000 ultrasound clips (e.g., approximately 40 frames each) was used.
  • Each ultrasound frame x i was an n × n gray-scale image.
  • the database was divided into three disjoint sets: (i) training, (ii) validation and (iii) test data.
  • the prediction performed by the exemplary system, method and computer-accessible medium can be done with respect to the middle frame 110 (e.g., x i ).
  • the input sequence of images can provide the system with a temporal context together with a spatial context.
  • the exemplary system, method and computer-accessible medium can support two types of procedures: (i) classification and (ii) segmentation.
  • a ground-truth label y i was generated for a subset of the ultrasound frames x i in the exemplary database;
  • y i can either be a single label (e.g., for classification procedures) or can consist of a label for each pixel in x i (e.g., for segmentation procedures).
  • the first two exemplary procedures can be classification procedures.
  • For the View detection procedure, y i ∈ {SA, LA, 5C, 4C, 3C, 2C}, which can correspond to Short-Axis, Long-Axis, 5-Chamber, 4-Chamber, 3-Chamber, and 2-Chamber views.
  • For the Diastole/systole detection procedure, y i ∈ {DI, SY, IN, DE}, which can correspond to DIastole, SYstole, INflating, and DEflating of the left ventricle.
  • the next three exemplary procedures can be segmentation procedures.
  • the exemplary goal can be to decide, for each pixel in the middle frame x i , if it can be part of the left-ventricle or not.
  • Manually labeling each pixel in x i can be tedious and impractical. Therefore, an exemplary software tool that can facilitate the generation of labels y i for a collection of frames x i was produced and used.
  • An exemplary screenshot from a labeling session is shown in the image in FIG. 2 A .
  • the user uses an interactive tool that can facilitate him/her marking a sparse set of control points 205 , from which a smooth closed curve 210 (e.g., a cubic spline) can be computed by interpolating these points.
  • the user can add, remove or edit these control points 205 .
  • the user can mark consecutive frames, and can use a previous frame's curve as an initialization. Given a closed curve drawn on a frame x i , all the pixels within the region bounded by this curve can be marked with the same label. For example, as shown in the image in FIG. 3 A , the label y i corresponding to the frame shown in FIG. 2 A is illustrated, where white area 305 corresponds to the LV label and black area 310 to the BA label.
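  • As a hedged sketch of the mask-generation step just described (the function and library choices are assumptions, not part of the disclosure), a closed cubic spline can be interpolated through the control points and the bounded region rasterized as the label:

```python
# Hypothetical sketch: interpolate a periodic cubic spline through sparse
# control points (element 205), then fill the enclosed region bounded by the
# curve (element 210) to obtain a binary LV/BA label mask.
import numpy as np
from scipy.interpolate import splprep, splev
from skimage.draw import polygon

def control_points_to_mask(points, shape):
    """points: (k, 2) user-marked (row, col) control points; shape: (H, W)."""
    pts = np.vstack([points, points[:1]])             # close the curve
    tck, _ = splprep([pts[:, 0], pts[:, 1]], s=0, per=True)
    rows, cols = splev(np.linspace(0, 1, 400), tck)   # dense closed spline
    mask = np.zeros(shape, dtype=np.uint8)
    rr, cc = polygon(rows, cols, shape)               # pixels inside the curve
    mask[rr, cc] = 1                                  # 1 = LV label, 0 = BA
    return mask
```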
  • the exemplary Valve Localization procedure can utilize marking a single pixel per frame to place the Doppler measuring point.
  • For the Valve localization procedure, the label set can be, for example, {MI, TR, AO, BA}, corresponding to the MItral, TRicuspidal and AOrtic valves, and the BAckground.
  • an exemplary software tool, similar to the one used for part segmentation, was produced, which can facilitate the user selecting a pixel in each image to indicate the location of the relevant valve. (See, e.g., FIG. 2 B , element 215 ).
  • statistics were obtained by asking the user to mark the same set of 8 clips (e.g., 4 clips for 4C and 4 clips for 5C) 10 times.
  • This data was used to calculate statistics of the valve sampling, and the training data was created by splatting an ellipse centered at the user-prescribed pixel, with variances taken from the above statistics. This was performed separately and independently for each valve.
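  • A minimal sketch of such a splat, assuming a Gaussian falloff with per-axis variances estimated from the repeated markings (the exact splat profile is not specified in the text):

```python
# Hypothetical ellipse "splat": turn a single user-marked valve pixel into
# a soft per-pixel training label using the repeated-marking statistics.
import numpy as np

def splat_ellipse(center, sigmas, shape):
    """center: (row, col); sigmas: (sr, sc) std-devs; shape: (H, W)."""
    rr, cc = np.mgrid[:shape[0], :shape[1]]
    d2 = ((rr - center[0]) / sigmas[0]) ** 2 + ((cc - center[1]) / sigmas[1]) ** 2
    return np.exp(-0.5 * d2)      # soft label; threshold for a hard ellipse
```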
  • the image shown in FIG. 3 B illustrates the label y j created for the frame shown in FIG. 2 B , where white pixels 305 represent the label AO, and black pixels 310 represent the label BA.
  • Pericardial effusion can also be handled, e.g., by detecting fluid accumulation in the pericardial cavity.
  • FIG. 4 A shows an exemplary diagram/architecture of the exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure.
  • each block 405 can represent a convolutional block that can consist of a series of convolution and ReLU (e.g., activation) layers; the number of channels (e.g., depth) can differ between blocks.
  • Consecutive convolutional blocks 405 with different resolutions (e.g., width and height of blocks) can be connected by pooling layers.
  • Blocks 410 can represent the output of the exemplary Core NN.
  • the exemplary Core NN block 415 is shown in FIG. 4 B .
  • the exemplary system, method and computer-accessible medium can use the concatenated features to perform the procedures described above.
  • the exemplary procedures can be divided into two groups: (i) classification procedures, and (ii) segmentation procedures.
  • the exemplary classification procedure can include View detection and Systole/diastole detection.
  • the exemplary segmentation procedure can include part segmentation, valve localization and anomaly detection. An anomaly detection can also have instantiation as a classification procedure.
  • Each procedure can have its own network with suitable architecture based on its type (e.g., classification or segmentation).
  • the two groups of the exemplary procedures can use the same concatenated features produced by the exemplary Core NN.
  • the exemplary classification procedures can use an exemplary classification framework (e.g., element 420 shown in FIG. 4 A ) (see, e.g., Reference 3), which can utilize fully connected layers to reduce the output to a prediction vector in ℝ^k (e.g., where k can be the number of classes), from which a prediction z i can be made.
  • an exemplary upsampling procedure can be used, which can be different from existing semantic segmentation architectures (see, e.g., Reference 2) that learn a deconvolution operator and inject previous layers; instead, the low resolution can be upsampled to full resolution, and the output can be concatenated with the input sequence X i , which can pass through a convolutional block to produce the final segmentation.
  • the upsampled segmentation information can provide a smooth rough approximation of the part to be segmented, and the final convolutional block can use local features to refine its boundary.
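  • A hedged PyTorch sketch of this refinement step (layer sizes are assumptions) is given below: the coarse segmentation is bilinearly upsampled to full resolution, concatenated with the input sequence X i , and passed through a final convolutional block.

```python
# Hedged sketch of the described upsample-concatenate-refine step.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpsampleRefine(nn.Module):
    def __init__(self, n_frames=3, n_labels=2):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(n_frames + n_labels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_labels, 3, padding=1),
        )

    def forward(self, coarse_seg, frames):
        # coarse_seg: (B, n_labels, h, w) low-resolution segmentation
        # frames:     (B, n_frames, H, W) input ultrasound sequence X_i
        up = F.interpolate(coarse_seg, size=frames.shape[-2:],
                           mode="bilinear", align_corners=False)
        return self.refine(torch.cat([up, frames], dim=1))
```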
  • the exemplary segmentation of a particular part/section of the heart can include a multi-scale task. For example, a rough estimate of the part location in the image can be provided, and the exemplary prediction can be gradually refined, or modified, based on local features of the image.
  • the exemplary system, method and computer-accessible medium can include two exemplary architectures for segmenting anatomical parts from medical images. (See e.g., diagrams shown in FIGS. 6 A and 6 B ).
  • FIG. 6 A illustrates a first exemplary network in a serial form.
  • This exemplary network can produce a bottom-up segmentation using consecutive upsampling blocks (element 605 shown in the diagram of FIG. 6 A ).
  • Each block can include a bilinear or deconvolution up-sampling, and various convolutional-relu layers.
  • Each block 605 can receive as an input the previous low-resolution segmentation, add to it a downsampled version of the original image data (e.g., lines 610 ), and can produce the segmentation in a higher resolution.
  • One of the exemplary layers can produce the result in the resolution of the original image.
  • FIG. 6 B shows a diagram of an exemplary network in a parallel form.
  • Truncated copies of the Core NN can be utilized, with an up-sampling block attached to each (e.g., similar to the serial design).
  • Each truncated copy of the Core NN can be used to reduce the original images to different resolutions with corresponding receptive fields. For example, the lowest resolution can be used to recognize where the LV is but may not precisely determine the borders, while the highest resolution (e.g., the one with the smallest receptive field) can attempt to influence the borders of the segmentation based on local information in the image. The results can then be aggregated to achieve the final segmentation. It can also be possible to train the truncated Core NN for every resolution.
  • the exemplary system, method and computer-accessible medium can also consider the addition of a total variation regularizer directly to the network loss function in order to encourage segmentation with a shorter boundary curve.
  • the total variation energy can be defined directly on the output of the segmentation network as $\lambda \sum_{p} \left\| [\nabla F(X_i)]_p \right\|_2$, where $F(X_i)$ can be the difference of the network response for a certain label (e.g., LV) and the background label BA when applied to the input sequence $X_i$, the sum can be taken over all pixels $p$, and $\lambda$ can be the amount of regularization.
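  • A short sketch of this regularizer, under the assumption that the gradient is realized with forward finite differences:

```python
# Hedged sketch of lambda * sum_p ||[grad F(X_i)]_p||_2, with f the LV-minus-BA
# response map of the segmentation network, discretized by finite differences.
import torch

def tv_regularizer(f, lam=0.1, eps=1e-8):
    """f: (B, H, W) response for a label (e.g., LV) minus the BA response."""
    dy = f[:, 1:, :-1] - f[:, :-1, :-1]    # vertical differences
    dx = f[:, :-1, 1:] - f[:, :-1, :-1]    # horizontal differences
    return lam * torch.sqrt(dx ** 2 + dy ** 2 + eps).sum()
```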
  • the exemplary Core NN 415 illustrated in FIG. 4 A can include an imagenet architecture (see, e.g., Reference 3), which can reduce an input image to a set of feature vectors defined on a very coarse resolution.
  • the same Core NN can be used for transforming each of the input ultrasound frames x i ⁇ 1 , x i , x i+1 to its feature vectors.
  • the weights defining the exemplary Core NN can be coupled between its three realizations in the system. (See, e.g., the multiple blocks 415 of the exemplary Core NN.)
  • the exemplary system and method can be utilized to improve the segmentation and can involve L 1 cost functions and Generative Adversarial Networks.
  • L1 regularization for segmentation can be used as a loss function, and can provide more accurate boundary detection than a standard cross-entropy loss.
  • an exemplary loss function can be trained using, for example, GANs.
  • a discriminator network can be trained to distinguish real segmentations from segmentations created by the segmentation network. This discriminator can then be used, in combination with some other loss or on its own, to further train the segmentation network.
  • the input to the discriminator network can include (X i , Z i ) for real examples, and (X i , f(X i )), where f(X i ) can be the output of the segmentation network.
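  • For illustration (the loss choice and discriminator interface are assumptions, not the disclosed design), the adversarial pairing just described can be sketched as:

```python
# Hedged sketch: D scores (X_i, Z_i) as real and (X_i, f(X_i)) as fake;
# its response can be used to further train the segmentation network f.
import torch
import torch.nn.functional as F

def gan_losses(D, f, X, Z):
    real = D(torch.cat([X, Z], dim=1))             # real pair (X_i, Z_i)
    fake_seg = f(X)
    fake = D(torch.cat([X, fake_seg], dim=1))      # fake pair (X_i, f(X_i))
    ones, zeros = torch.ones_like(real), torch.zeros_like(real)
    d_loss = (F.binary_cross_entropy_with_logits(real, ones)
              + F.binary_cross_entropy_with_logits(fake.detach(), zeros))
    g_loss = F.binary_cross_entropy_with_logits(fake, torch.ones_like(fake))
    return d_loss, g_loss
```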
  • the first exemplary procedure (e.g., view detection) can be trained first. The second NN procedure (e.g., Diastole/systole) can then be trained while fixing the Core NN, followed by training both the Core NN and the Diastole/Systole-NN. This can be repeated/continued until a convergence is achieved. Since preliminary procedures can be easier to generate data for, training the Core NN on these applications already provides a good starting point for more challenging procedures.
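  • An illustrative training-schedule sketch (the optimizer and step counts are assumptions): freeze the shared Core NN while a new task head warms up, then unfreeze and train both jointly, repeating until convergence.

```python
# Hedged sketch of the staged schedule: head-only first, then joint training.
import torch

def train_stage(core, head, loader, loss_fn, train_core, epochs=1, lr=1e-4):
    for p in core.parameters():
        p.requires_grad = train_core               # fix or release the Core NN
    params = list(head.parameters())
    if train_core:
        params += list(core.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for X, y in loader:
            loss = loss_fn(head(core(X)), y)
            opt.zero_grad(); loss.backward(); opt.step()

# e.g., alternate until convergence:
# train_stage(core, sys_dia_head, data, criterion, train_core=False)
# train_stage(core, sys_dia_head, data, criterion, train_core=True)
```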
  • the exemplary system, method and computer-accessible medium can be utilized to train all exemplary tasks simultaneously.
  • the exemplary system, method and computer-accessible medium was evaluated on test data that was not used in the training or validation phases; the test data consisted of 322 clips (e.g., 10,000 frames) for classification procedures, 42 clips (e.g., 530 frames) for part segmentation procedures and 108 clips (e.g., 2,250 frames) for the Valve localization procedures.
  • FIG. 5 shows an exemplary image from each view in the exemplary dataset.
  • the label produced by the system was compared to the ground-truth label produced by the user.
  • the system generated probabilities on the set of labels, and the view which received the highest probability was chosen as the system output.
  • FIGS. 7 A and 7 B show images of the probabilities produced by the exemplary system, method and computer-accessible medium for two exemplary images; note the high confidence in the correct view label.
  • the corresponding exemplary histograms are shown in FIGS. 7 C and 7 D , respectively.
  • the output of the exemplary system, method and computer-accessible medium was also compared to the ground-truth labeled views on the entire test data (e.g., 10 k frames), and produced correct view identification in 98.5% of the frames.
  • the exemplary system, method and computer-accessible medium can generate probabilities, for example, only with respect to the non-instantaneous states, IN, DE.
  • FIG. 8 shows an exemplary graph of the detection of the cardiac cycle according to an exemplary embodiment of the present disclosure.
  • line 805 illustrates the difference in probabilities, DE minus IN, producing values ranging from −1 to 1 for a video clip of test data.
  • Curve 810 was fit with constant deflation and inflation time that best approximates the line 805 .
  • the zero crossing of curve 810 can be the detected peak Systole (“SY”) and peak Diastole (“DI”); the intermediate stages can be the deflating (“DE”) and inflating (“IN”) stages. Additionally, as shown in the graph of FIG. 8 , points 815 and 820 represent the chosen triplets and the signal points, respectively, while line 825 represents the received signal.
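  • A small sketch of reading the stages off the fitted curve (the sign convention is an assumption consistent with the description above):

```python
# Hedged sketch: find zero crossings of the fitted DE-minus-IN curve; a
# falling crossing (end of deflation) is taken as peak systole, a rising
# crossing (end of inflation) as peak diastole.
import numpy as np

def cycle_peaks(fit):
    """fit: 1D array of the fitted curve 810, values in [-1, 1]."""
    s = np.sign(fit)
    idx = np.where(np.diff(s) != 0)[0]     # frames where the curve crosses 0
    peak_sy = idx[fit[idx] > 0]            # falling crossings (DE -> IN)
    peak_di = idx[fit[idx] < 0]            # rising crossings (IN -> DE)
    return peak_sy, peak_di
```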
  • FIGS. 9 A- 9 F show a set of image frames from a cardiac cycle; in the top-left of each frame, the system draws the identified cardiac stage, visualized as a dashed circle 905 between two circles indicating peak systole (e.g., small circle 910 ) and peak diastole (e.g., large circle 915 ); the radius of the dashed circle is taken directly from the fitted curve 810 of FIG. 8 .
  • the exemplary part segmentation procedure can utilize labeling each pixel in an input image x i according to the parts labels. For example, only left ventricle (“LV”) segmentation can be implemented.
  • FIG. 10 shows a set of images of the segmentation of the LV as produced by the exemplary system, which is illustrated by element 1005 .
  • the ground-truth (“GT”) user segmentation is illustrated by element 1010 , and the overlay of the two is illustrated by element 1015 . Note that the segmentation produced by the exemplary system and the user GT can be visually very close.
  • Two exemplary error measures were used: (i) Intersection over Union (“IOU”), a ratio in the range of 0 to 1, where closer to 1 can be better, and where 1 means the exemplary segmentation and the GT can be identical at the pixel level; and (ii) Relative Area Difference (“RAD”), which measures the difference in area between the segmentation produced by the exemplary system and the GT, divided by the GT segmentation area.
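  • Both measures are straightforward to compute on binary masks; a minimal sketch:

```python
# Hedged sketch of the two reported error measures on binary masks.
import numpy as np

def iou(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    return (pred & gt).sum() / max((pred | gt).sum(), 1)   # in [0, 1]

def rad(pred, gt):
    gt_area = max(int(gt.sum()), 1)
    return abs(int(pred.sum()) - int(gt.sum())) / gt_area  # relative area diff
```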
  • FIG. 12 shows four histograms for these two error measures, where, for each error measure, one set of histograms is shown for the exemplary method compared to GT (e.g., set of histograms 1210 ) and one set of histograms for the user compared to the GT (e.g., set of histograms 1205 ).
  • the results indicate that the exemplary system produces segmentations slightly less consistent with the GT than the user, although comparable; these results can be considered remarkable given that the same user marked the two LV segmentations, and larger variability in the user vs. GT experiment can be expected when repeating this experiment with a different user.
  • FIGS. 11 A- 11 D show exemplary images of the valve localization (e.g., elements 1105 ) produced by the exemplary system, method and computer-accessible medium, for two views (e.g., FIGS. 11 A and 11 C ) and magnified views (e.g., FIGS. 11 B and 11 D , respectively).
  • Elements 1110 shown in FIGS. 11 A- 11 D show a disk centered at the user marked valve localization for GT, where the radius can be computed to represent uncertainty in the user localization.
  • the uncertainty can be computed based on data where the user marked valve localization repeatedly 10 times per-frame, and the maximal deviation can be computed from the average prediction; this value can be called an uncertainty radius, and can be labeled by r.
  • FIG. 13 shows an exemplary histogram 1305 of distances between the predicted valve localization of the exemplary system and the ground-truth localization by the user. Markers 1310 and 1315 indicate the uncertainty radius r and two times the uncertainty radius 2r, respectively.
  • FIGS. 14 A- 14 C are exemplary learning graphs for the exemplary Diastole/Systole classification test.
  • these learning graphs depict the train (e.g., element 1405 ) and validation (e.g., element 1410 ) error during training of the Diastole/systole procedure when its Core NN can be initialized in three different ways: (i) with the Core NN taken from an imagenet-type NN trained on natural images (see, e.g., FIG. 14 A ); (ii) with the Core NN after being trained on the view detection procedure (see, e.g., FIG. 14 B ); and (iii) with the Core NN after being trained on a part segmentation procedure (see, e.g., FIG. 14 C ).
  • the exemplary system, method and computer-accessible medium can be used as an add-on to any clinical commercial imaging device (e.g., ultrasound).
  • the exemplary system, method and computer-accessible medium can be used for Echocardiography (e.g., ultrasound of the heart) in both the Cardiology Department and the Emergency Department (“ED”).
  • the exemplary system, method and computer-accessible medium can be used to accurately identify the left ventricle (e.g., out of the 4 chambers), and automatically apply and provide a complete cardiac function analysis that can be incorporated directly into the final study report.
  • the technician has to identify two or three points, or trace the left ventricle in different views (e.g., out of the six views acquired), and then activate the calculation packages available on an exemplary machine.
  • the exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure can automatically detect the different viewing windows and segments, and can identify the various parts of the heart, such as the left ventricle.
  • the exemplary system, method and computer-accessible medium can use segmentation, peak systolic and/or peak diastolic frames, which can now be determined automatically. In the past, this has been performed manually by the technician by carefully scanning frame by frame, and identifying the peak systolic and peak diastolic frame.
  • the exemplary system, method and computer-accessible medium can utilize a neural core to determine these two events during the cardiac cycles, and can then perform an assessment of the left ventricular function.
  • the manual labor can be eliminated completely, and all other measurements, including dimension of the left ventricle in systole and diastole, Right ventricular assessment, LA size, measurement of the aortic valve annulus, the aortic sinuses, the ascending aorta, the pulmonary valve, the mitral valve annulus and the tricuspid valve annulus, can be automatically measured.
  • the exemplary system, method and computer-accessible medium can be used to identify the exact area where Doppler samples can be located.
  • a Doppler sample location can be identified, and the exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can automatically activate the Doppler modality, thus achieving appropriate Doppler tracing across the cardiac valves.
  • once Doppler tracings are achieved and displayed, calculation packages can be applied, and the mitral and tricuspid valve inflow, as well as the aorta and pulmonary outflow tract Doppler tracings, can be calculated automatically.
  • About 10%-15% of the total duration of the examination can be dedicated to incorporating all measurements in the final report before it can be sent to the specialist, for example, the cardiologist, who can finalize the report and send it to the electronic medical record.
  • This can be a significantly manual process that can be avoided by automatically measuring, using the exemplary system, method and computer-accessible medium, all the different variables needed to assess cardiac function as well as valve abnormalities. (See e.g., diagram shown in FIG. 15 ).
  • the exemplary system, method and computer-accessible medium can save up to 40% of the time as compared to a manual exam (e.g., the type of examinations currently being performed). Additionally, the exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can improve efficiency as well as quality as compared to currently-performed manual exams.
  • the exemplary system, method, and computer-accessible medium can, after the appropriate image is acquired by the technician/physician/nurses, automatically display cardiac function as normal, mild, moderate or severe left ventricular dysfunction. If needed, the exact number of Left Ventricular Ejection Fraction can also be displayed.
  • Pericardial effusion represents one of the most dangerous cardiac abnormalities that can lead to death if not diagnosed in a timely fashion.
  • the exemplary system, method and computer-accessible medium can be used to automatically detect the existence of pericardial effusion by ultrasound examination.
  • the exemplary system, method and computer-accessible medium can detect (e.g., immediately) any pericardial effusion, and alert the operator to the existence or nonexistence of pericardial effusion. If needed, the severity of the accumulated pericardial effusion can be displayed.
  • Cardiac segmental abnormalities can be used as a screening tool for the diagnosis of ischemia of the cardiac muscle. This can be a very subjective task, and can be operator-dependent even in the hands of an expert cardiologist.
  • the exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can automatically notify the user that there may be segmental abnormalities, for example, hypokinesis, dyskinesia or paradoxical motion of any part of the left ventricular wall and septum.
  • the heart is divided into 17 segments and the exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can detect very subtle wall motion abnormalities across all segments.
  • the exemplary system, method and computer-accessible medium can use the exemplary neural core to identify and separate the left and the right ventricles.
  • the exemplary neural core can assess the relative and absolute areas and volumes of these ventricles, and can quickly calculate the ratio between them (e.g., in a fraction of a second). This can be beneficial in order to raise the suspicion level of a critical condition called a pulmonary embolism. In this condition there can be a major strain on the right ventricle, and as a result, the ventricle tends to enlarge, and the ratio between right ventricle and left ventricular area can be dramatically altered.
  • This can be used as an exemplary screening tool to notify the clinician that there can be a possible pulmonary embolism based on the RV-to-LV ratio. (See e.g., diagram shown in FIG. 16 ).
  • a cardiac ultrasound platform 1605 can be used in either an online mode 1610 or an offline mode 1640 .
  • the cardiac ultrasound platform can be used in a cardiac setting 1615 or an ED cardiac setting 1620 .
  • Various functions 1625 can be performed in the cardiac setting 1615 , which can be based on a Doppler analysis 1635 (e.g., (i) LV EF, (ii) LV Volume, (iii) RV/LV Ratio, (iv) AO, MV, PV, TV, (v) Pericardial Effusion, (vi) Segmental Abn., (vii) Aortic Measurements, and (viii) IVC size).
  • Further functions 1630 can be performed in the ED cardiac setting 1620 (e.g., (i) LV Function, (ii) Segmental Abn., (iii) LV Volume, (iv) Pericardial Effusion, (v) RV/LV Ratio, and (vi) IVC size).
  • various offline functions 1650 can be performed (e.g., (i) LV EF, (ii) LV Volume, (iii) RV/LV Ratio, (iv) AO, MV, PV, TV, (v) Pericardial Effusion, (vi) Segmental Abn., (vii) Aortic Measurements, and (viii) IVC size).
  • the exemplary system, method and computer-accessible medium can be used to compute or otherwise determine the EF from the raw UT input.
  • the user can be provided with, for example, the indication of one of two possibilities: (i) a drive and (ii) a halt.
  • when a drive sign is shown (see, e.g., FIG. 19 A ), the user can be instructed to navigate the UT transducer to produce the desired view (e.g., 4C, 2C, SA, etc.).
  • when a halt sign is shown (see, e.g., FIG. 19 B ), the user can be requested to hold still while the exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, gathers information.
  • the exemplary system, method and computer-accessible medium can automatically produce the EF prediction along with the 3D reconstruction of the LV.
  • the exemplary determination of the EF can include, for example, the exemplary procedures described herein, such as view detection, systole/diastole detection, part segmentation of the LV, and 3D surface reconstruction.
  • the exemplary system, method and computer-accessible medium can utilize the above exemplary procedures to produce a 3D reconstruction of the entire cardiac cycle. (See, e.g., FIGS. 20 A- 20 G ).
  • FIGS. 23 A and 23 B show the exemplary histogram of the errors of the experts (e.g., see FIG. 23 A ) and the errors of the exemplary system, method and computer-accessible medium (e.g., see FIG. 23 B ).
  • the use of ultrasound in the emergency department is commonplace.
  • the emergency department of a hospital or a medical center can be one of the busiest, most stressful and most intimidating places in the healthcare system.
  • the need for a fast and reliable diagnosis can be crucial in order to improve patient outcome.
  • Ultrasound currently has an important role in the following exemplary areas: Pulmonary, cardiac, abdominal scanning and OB-GYN. Additionally, acute scanning of orthopedic abnormalities, including fractures, has been introduced and incorporated into the ultrasound examination in the emergency department.
  • the evolution of handheld devices facilitates the clinician scanning without searching for equipment in the emergency department, which can speed critical decisions while also providing procedural guidance with high-quality ultrasound imaging.
  • the exemplary system, method and computer-accessible medium can assist the physician in the emergency room in quickly identifying abnormalities in different body systems.
  • Emergent cardiac ultrasound can be used to assess for pericardial effusion and tamponade, cardiac activity, infarction, a global assessment of contractility, and the detection of central venous volume status, as well as a suspected pulmonary embolism. Ultrasound also has been incorporated into resuscitation of the critically ill and the at-risk patients. In the assessment of a patient with undifferentiated hypotension, emergent cardiac ultrasound can also be expanded for the use in heart failure and dyspnea.
  • the exemplary system, method and computer-accessible medium can perform various exemplary functions in this setting.
  • the exemplary system, method and computer-accessible medium can also alert the operator if there can be abnormal motion including akinesia, hypokinesia, dyskinesia and paradoxical movement of any part of the ventricular wall and septum, indicating potential ventricular ischemia or infarction.
  • Left Ventricular Volume can be displayed in situations of hypovolemia. (See e.g., diagram shown in FIG. 16 ).
  • the exemplary system, method and computer-accessible medium can be used in an emergency department for a variety of exemplary examinations.
  • the exemplary uses of ultrasound in the emergency department have the potential to use deep learning for a faster and more acute diagnosis.
  • ED ultrasound use can often reduce the need for more expensive studies, such as CT or MRI, and can reduce unnecessary admissions for more comprehensive diagnostic workups. Additionally, moving the patient from one lab to another requires manpower and complex queue scheduling and monitoring.
  • the exemplary system, method and computer-accessible medium can provide a new way of using ultrasound in the emergency department. (See e.g., diagram shown in FIG. 17 ).
  • neural network core medical imaging 1705 can be performed, e.g., using the exemplary system, method and/or computer-accessible medium according to exemplary embodiments of the present disclosure.
  • This can be organ specific, based on various organs, organ types, or medical specialties 1715 (e.g., (i) cardiac, (ii) abdominal, (iii) chest, (iv) OBGYN, (v) oncology, (vi) urology, and (vii) orthopedics).
  • Neural network core medical imaging 1705 can be used in various settings 1720 (e.g., (i) patient care, (ii) education, (iii) training, (iv) quality assurance (“QA”), and (v) outcomes).
  • Various imaging modalities 1725 can be used (e.g., (i) ultrasound, (ii) magnetic resonance imaging (“MRI”), (iii) computed tomography (“CT”), (iv) positron emission tomography CT (“PETCT”), and (v) nuclear).
  • An exemplary assessment of the examination can result in an improved point of care. Using established analytic tools can facilitate a much faster and more reliable diagnosis, without major requirements for special training.
  • the exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can be used in other fields, including the military arena, telemedicine applications, urgent care facilities and in-home healthcare. (See e.g., diagram shown in FIG. 18 ).
  • Exemplary markets 1805 for the use of the exemplary system, method, and computer-accessible medium can include hospitals 1810 (e.g., in (i) cardiology 1815 , (ii) ED 1820 , (iii) radiology 1825 , and (iv) intensive care 1830 ), urgent care 1835 , homecare 1840 , and the military 1845 .
  • FIG. 24 illustrates an exemplary flow diagram of an exemplary method 2400 for detecting an anomaly in an anatomical structure of a patient according to an exemplary embodiment of the present disclosure.
  • imaging information related to an anatomical structure of a patient can be generated or received.
  • the imaging information can be optionally upsampled.
  • the anatomical structure can be segmented using a part segmentation procedure and a localization procedure before or independently of the detection of the anomaly.
  • a feature of the anatomical structure can be classified based on the imaging information using a neural network.
  • an output produced by each of the neural networks can be concatenated.
  • the anomaly can be detected based on data generated using the classification and segmentation procedures, as sketched below.
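For illustration only, the following minimal sketch traces the flow of the exemplary method 2400 in Python/PyTorch; the module names (core_nn, seg_head, cls_head), the working resolution, and the thresholded decision rule are hypothetical assumptions, not the disclosed implementation.

```python
# Hypothetical sketch of the FIG. 24 pipeline (method 2400); the triplet
# handling, network modules, and decision rule below are assumptions.
import torch
import torch.nn.functional as F

def detect_anomaly(frames, core_nn, seg_head, cls_head, threshold=0.5):
    """frames: tensor of shape (3, 1, H, W), a triplet of gray-scale frames."""
    # Optionally upsample the imaging information to a working resolution.
    frames = F.interpolate(frames, size=(256, 256), mode="bilinear",
                           align_corners=False)
    # Pass each frame through the shared core network and concatenate the
    # per-frame feature maps along the channel (depth) dimension.
    feats = torch.cat([core_nn(f.unsqueeze(0)) for f in frames], dim=1)
    seg = seg_head(feats)      # part segmentation / localization map
    cls = cls_head(feats)      # classification logits for the feature(s)
    # Toy decision rule: flag an anomaly if any class probability is high.
    anomaly = torch.sigmoid(cls).max() > threshold
    return anomaly, seg
```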
  • FIG. 25 shows a block diagram of an exemplary embodiment of a system according to the present disclosure.
  • exemplary procedures in accordance with the present disclosure described herein can be performed by a processing arrangement and/or a computing arrangement 2505 .
  • Such processing/computing arrangement 2505 can be, for example entirely or a part of, or include, but not limited to, a computer/processor 2510 that can include, for example one or more microprocessors, and use instructions stored on a computer-accessible medium (e.g., RAM, ROM, hard drive, or other storage device).
  • a computer-accessible medium 2515 (e.g., as described herein above, a storage device such as a hard disk, floppy disk, memory stick, CD-ROM, RAM, ROM, etc., or a collection thereof) can be provided.
  • the computer-accessible medium 2515 can contain executable instructions 2520 thereon.
  • a storage arrangement 2525 can be provided separately from the computer-accessible medium 2515 , which can provide the instructions to the processing arrangement 2505 so as to configure the processing arrangement to execute certain exemplary procedures, processes and methods, as described herein above, for example.
  • the exemplary processing arrangement 2505 can be provided with or include an input/output arrangement 2535 , which can include, for example a wired network, a wireless network, the internet, an intranet, a data collection probe, a sensor, etc.
  • the exemplary processing arrangement 2505 can be in communication with an exemplary display arrangement 2530 , which, according to certain exemplary embodiments of the present disclosure, can be a touch-screen configured for inputting information to the processing arrangement in addition to outputting information from the processing arrangement, for example.
  • the exemplary display 2530 and/or a storage arrangement 2525 can be used to display and/or store data in a user-accessible format and/or user-readable format.


Abstract

An exemplary system, method and computer-accessible medium for detecting an anomaly(ies) in an anatomical structure(s) of a patient(s) includes receiving imaging information related to the anatomical structure(s) of the patient(s), classifying a feature(s) of the anatomical structure(s) based on the imaging information using a neural network(s), and detecting the anomaly(ies) based on data generated using the classification procedure.

Description

    CROSS-REFERENCE TO RELATED APPLICATION(S)
  • This application is a continuation application of U.S. patent application Ser. No. 16/478,507, filed on Jul. 17, 2019, which is a national phase patent application of International patent application No. PCT/US2018/014536, filed Jan. 19, 2018, which relates to and claims priority from U.S. patent application Ser. No. 62/448,061, filed on Jan. 19, 2017, the entire disclosures of which are incorporated herein by reference.
  • FIELD OF THE DISCLOSURE
  • The present disclosure relates generally to use of an ultrasound apparatus, and more specifically, to exemplary embodiments of an exemplary system, method and computer-accessible medium for ultrasound analysis.
  • BACKGROUND INFORMATION
  • Echocardiogram, or ultrasound of the heart, is a common clinical tool for assessing a heart condition, and for identifying and diagnosing certain heart diseases. Currently, a significant amount of the analysis and processing of the input ultrasound movie clips is performed by a technician or a physician (“user”). Such manual analysis has several drawbacks, for example: (i) it increases the chance of error, (ii) it needs a skilled user, (iii) it limits the throughput by the analyzing speed and skill of the user and (iv) due to time complexity, only several frames from the clips are fully analyzed while information in other frames is left unused.
  • Cardiac ultrasound can be the preferred modality for the assessment of cardiac anatomy, function and structural anomalies. Currently, routine cardiac ultrasound examination lasts between about 30 and about 40 minutes, and can include: (i) acquisition of the data by ultrasound and Doppler procedures, (ii) analysis of ventricular function and multiple measurements of the different parts of the cardiac structure, and (iii) a report that can be incorporated directly in the electronic medical record.
  • Thus, it may be beneficial to provide an exemplary system, method and computer-accessible medium for ultrasound analysis, and which can overcome at least some of the deficiencies described herein above.
  • SUMMARY OF EXEMPLARY EMBODIMENTS
  • An exemplary system, method and computer-accessible medium for detecting an anomaly(ies) in an anatomical structure(s) of a patient(s) can be provided, which can include, for example, receiving imaging information related to the anatomical structure(s) of the patient(s), classifying a feature(s) of the anatomical structure(s) based on the imaging information using a neural network(s), and detecting the anomaly(ies) based on data generated using the classification procedure. The imaging information can include at least three images of the anatomical structure(s).
  • In some exemplary embodiments of the present disclosure, the imaging information can include ultrasound imaging information. The ultrasound imaging information can be generated using, e.g., an ultrasound arrangement. The anatomical structure(s) can be a heart. In certain exemplary embodiments of the present disclosure, the state(s) of the anatomical structure(s) can include (i) a systole state of a heart of the patient(s), (ii) a diastole state of the heart of the patient(s), (iii) an inflation state of the heart of the patient(s) or (iv) a deflation state of the heart of the patient(s).
  • In some exemplary embodiments of the present disclosure, the feature(s) can be classified using a view detection procedure, which can include detecting a view of a particular imaging frame in the imaging information. The anatomical structure(s) can be segmented, e.g., using a part segmentation procedure and a localization procedure before the detection of the anomaly(ies). The part segmentation procedure can be utilized to segment a left ventricle of the heart of the patient(s) from a background. The localization procedure can be a valve localization procedure, which can include marking a single pixel per frame in the imaging information to place a Doppler measuring point(s).
  • In certain exemplary embodiments of the present disclosure, the imaging information can include a plurality of images, and the neural network(s) can include a plurality of neural networks, each one of which can be associated with one of the images. Each neural network can be used to classify the feature(s) in its associated one of the images. An output produced by each of the neural networks can be concatenated (e.g., in depth). The imaging information can be upsampled.
  • These and other objects, features and advantages of the exemplary embodiments of the present disclosure will become apparent upon reading the following detailed description of the exemplary embodiments of the present disclosure, when taken in conjunction with the appended claims.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Further objects, features and advantages of the present disclosure will become apparent from the following detailed description taken in conjunction with the accompanying figures showing illustrative embodiments of the present disclosure, in which:
  • FIG. 1 is an exemplary diagram of an exemplary ultrasound analysis system according to an exemplary embodiment of the present disclosure;
  • FIG. 2A is an exemplary image of the acquisition of labels for segmentation according to an exemplary embodiment of the present disclosure;
  • FIG. 2B is an exemplary image of labels for valve localization according to an exemplary embodiment of the present disclosure;
  • FIG. 3A is an exemplary image of data for part segmentation according to an exemplary embodiment of the present disclosure;
  • FIG. 3B is an exemplary image of data for valve localization according to an exemplary embodiment of the present disclosure;
  • FIG. 4A is an exemplary diagram of network architecture used by the exemplary system according to an exemplary embodiment of the present disclosure;
  • FIG. 4B is an exemplary diagram of a Core Neural Network according to an exemplary embodiment of the present disclosure;
  • FIG. 5 is a set of exemplary images of different ultrasound views of the heart and their corresponding labels according to an exemplary embodiment of the present disclosure;
  • FIGS. 6A and 6B are exemplary diagrams of the exemplary system, method and computer-accessible medium according to an exemplary embodiment of the present disclosure;
  • FIGS. 7A and 7B are exemplary images of the per-view probability generated by the exemplary system, method, and computer-accessible medium for two input triplet images according to an exemplary embodiment of the present disclosure;
  • FIGS. 7C and 7D are exemplary histogram diagrams corresponding to FIGS. 7A and 7B, respectively, according to an exemplary embodiment of the present disclosure;
  • FIG. 8 is an exemplary graph of the detection of the cardiac cycle stage according to an exemplary embodiment of the present disclosure;
  • FIGS. 9A-9F are exemplary images of the per-frame stages in the cardiac cycle according to an exemplary embodiment of the present disclosure;
  • FIG. 10 is a set of images of the segmentation of the left ventricle produced by the exemplary system, method, and computer-accessible medium on four selected views according to an exemplary embodiment of the present disclosure;
  • FIGS. 11A and 11C are exemplary images of valve localization performed using the exemplary system according to an exemplary embodiment of the present disclosure;
  • FIGS. 11B and 11D are exemplary magnified images of the valve localization images of FIGS. 11A and 11C, respectively, according to an exemplary embodiment of the present disclosure;
  • FIG. 12 is a set of exemplary charts of the intersection over union and relative area difference comparing the left ventricle segmentation produced by the exemplary system, method, and computer-accessible medium vs. the ground truth and a user vs. the ground truth according to an exemplary embodiment of the present disclosure;
  • FIG. 13 is an exemplary histogram diagram of the distances between the exemplary system, method, and computer-accessible medium valve localization prediction and ground truth according to an exemplary embodiment of the present disclosure;
  • FIG. 14A is an exemplary graph of a learning graph for the Diastole/Systole classification test initialized with imagenet-type network for the Core Neural Network according to an exemplary embodiment of the present disclosure;
  • FIG. 14B is an exemplary graph of a learning graph for the Diastole/Systole classification test initialized with the Core Neural Network after view detection training according to an exemplary embodiment of the present disclosure;
  • FIG. 14C is an exemplary graph of a learning graph for the Diastole/Systole classification test initialized with the Core Neural Network after being trained on a part segmentation procedure according to an exemplary embodiment of the present disclosure;
  • FIG. 15 is an exemplary flow diagram illustrating an adult echocardiographic examination performed using an exemplary procedure according to an exemplary embodiment of the present disclosure;
  • FIG. 16 is an exemplary diagram of an exemplary configuration of the exemplary system, method and computer-accessible medium, for use in cardiac ultrasound according to an exemplary embodiment of the present disclosure;
  • FIG. 17 is an exemplary diagram illustrating the neural network core of the exemplary system, method and computer-accessible medium according to an exemplary embodiment of the present disclosure;
  • FIG. 18 is an exemplary diagram illustrating potential markets in which the exemplary system, method and computer-accessible medium, can be used according to an exemplary embodiment of the present disclosure;
  • FIG. 19A is an exemplary image generated based on a user interface left drive mode according to an exemplary embodiment of the present disclosure;
  • FIG. 19B is an exemplary image generated based on a user interface right halt mode according to an exemplary embodiment of the present disclosure;
  • FIGS. 20A-20G are exemplary images of a 3D reconstruction of a full cardiac cycle according to an exemplary embodiment of the present disclosure;
  • FIG. 21A is a set of exemplary images of cross-sections of systole LV and diastole LV according to an exemplary embodiment of the present disclosure;
  • FIG. 21B is a set of exemplary images of the reconstructions of the 3D surface in systole and diastole according to an exemplary embodiment of the present disclosure;
  • FIG. 22 is an exemplary histogram generated using the exemplary system, method and computer-accessible medium according to an exemplary embodiment of the present disclosure;
  • FIG. 23A is an exemplary histogram of EF errors generated based on an exemplary expert analysis of a cardiac cycle according to an exemplary embodiment of the present disclosure;
  • FIG. 23B is an exemplary histogram of errors generated using the exemplary system, method and computer-accessible medium according to an exemplary embodiment of the present disclosure;
  • FIG. 24 is an exemplary flow diagram of an exemplary method for detecting an anomaly in an anatomical structure of a patient according to an exemplary embodiment of the present disclosure; and
  • FIG. 25 is an illustration of an exemplary block diagram of an exemplary system in accordance with certain exemplary embodiments of the present disclosure.
  • Throughout the drawings, the same reference numerals and characters, unless otherwise stated, are used to denote like features, elements, components or portions of the illustrated embodiments. Moreover, while the present disclosure will now be described in detail with reference to the figures, it is done so in connection with the illustrative embodiments and is not limited by the particular embodiments illustrated in the figures and the appended claims.
  • DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
  • The exemplary embodiments of the present disclosure may be further understood with reference to the following description and the related appended drawings. The exemplary embodiments are described with reference to cardiovascular imaging (e.g., using ultrasound). However, those having ordinary skill in the art will understand that the exemplary embodiments of the present disclosure may be implemented for imaging other tissues or organs (e.g., other than the heart) and can be used in other imaging modalities (e.g., other than ultrasound, including but not limited to MRI, CT, OCT, OFDR, etc.).
  • The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can include a neural network core that can aid a healthcare provider in making diagnoses and clinical decisions that are more accurate, of better quality, and safer. For example, the exemplary neural network core can receive images from multiple imaging modalities including ultrasound, magnetic resonance imaging, positron emission scanners, computer tomography and nuclear scanners. The exemplary neural network core can be used for the examination of multiple organs, and is not limited to a specific organ system. The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can be incorporated into, or connected to, online and/or offline medical diagnostic devices, thus facilitating the operator to become more accurate and efficient.
  • As shown in the diagram of FIG. 1 , the exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can include a Core Neural-Network (“NN”) 115. The Core NN 115 can take as input a sequence of ultrasound frames 105, and produce high-level semantic features for the middle frame 110 in the sequence. These exemplary semantic features can be used to solve a series of recognition and analysis procedures (e.g., element 120). For each exemplary procedure, a dedicated NN-based component can be built on top of the Core NN 115, and can be tailored specifically for that procedure. The exemplary procedures can be broken down into, for example, five exemplary groups: (i) View detection 125, (ii) Systole/diastole detection 130, (iii) Part segmentation 135, (iv) Valve localization 140 and (v) Anomaly detection 145. The procedures can be sorted according to their difficulty level (e.g., from easy to hard).
  • The first exemplary procedure (e.g., View detection 125) can be performed to detect the view of a given frame out of several potential views. The views currently handled can be ones used in a standard adult echocardiogram examination: (i) apical 2-chamber view, (ii) apical 3-chamber view, (iii) apical 4-chamber view, (iv) apical 5-chamber view, (v) parasternal long axis view and (vi) parasternal short axis view. The second exemplary procedure (e.g., Systole/diastole 130) can be performed to identify the systole/diastole in the cardiac cycle. For example, each frame can be labeled using one of the four temporal states of the left ventricle: (i) diastole, (ii) systole, (iii) inflating and (iv) deflating. The third exemplary procedure (e.g., Part segmentation 135) can be performed to segment regions in ultrasound images such as the four chambers of the heart, the heart valves and walls, and the pericardium. The fourth exemplary procedure (e.g., Valve localization 140) can be performed to identify the locations of valves for Doppler analysis of in/out flow through these valves. The fifth exemplary procedure (e.g., Anomaly detection 145) can be performed to detect and locate heart anomalies, such as pericardial effusion.
  • An exemplary observation can be that the Core NN that extracts high-level semantic features from a sequence of ultrasound images can be the same for all the procedures, and the exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can be trained by optimizing (e.g., simultaneously) all procedures and the Core NN. This exemplary approach can provide the following benefits. First, building an ultrasound feature system that can be trained and used for different complementary procedures can provide versatile features that can corroborate and improve individual procedure performance. The intuition can be similar to human learning; for example, to better detect the view of an ultrasound image, it can be beneficial to detect and segment the different visible parts of the heart, and vice-versa. Second, having a versatile feature system captured by the Core NN, an ultrasound analysis procedure can be added with rather low amounts of data. This can be because the main part of the exemplary system, the Core NN, can already be trained to produce generic features, and all that can be left to train can be the procedure-specific part. This can facilitate adaptation of the exemplary system, method, and computer-accessible medium to new or different procedures with rather low computational complexity and time. Third, since it can usually be more difficult to acquire and/or produce data for the more difficult procedures, starting by training the system, method, and computer-accessible medium on the easier procedures provides a good starting point for more elaborate procedures.
  • The echocardiogram can often be used to extract high-level quantitative information regarding the heart condition and function. An archetypical example is the Ejection Fraction (“EF”). The EF can be computed from various exemplary measurements, including and combining view detection, systole/diastole detection, and part segmentation. The exemplary system, method and computer-accessible medium can utilize an automatic pipeline for determining the EF, including producing a 3D surface reconstruction of the LV or other parts using the UT data (e.g., only the UT data).
  • Exemplary Data Generation
  • The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can include the creation of data. An exemplary database consisting of 3000 ultrasound clips (e.g., approximately 40 frames each) was used. Each ultrasound frame Xi was an n×n gray-scale image. The database was divided into three disjoint sets: (i) training, (ii) validation and (iii) test data.
  • An exemplary input to the exemplary system can include sequences (e.g., triplets) of consecutive ultrasound frames (e.g., l=3). Additionally, Xi=(xi−d, xi, xi+d) (e.g., element 105 from FIG. 1 ), where d can be a parameter which can be set to d=1. The prediction performed by the exemplary system, method and computer-accessible medium, can be done with respect to the middle frame 110 (e.g., xi). The input sequence of images can provide the system with a temporal context together with a spatial context.
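As an illustration of the triplet construction described above, a minimal sketch follows; the array shapes and the helper name make_triplets are assumptions made for the example only.

```python
# Sketch of forming the input triplets X_i = (x_{i-d}, x_i, x_{i+d})
# from an ultrasound clip, with d = 1 as in the text.
import numpy as np

def make_triplets(clip, d=1):
    """clip: array of shape (num_frames, n, n) of gray-scale frames."""
    triplets = []
    for i in range(d, len(clip) - d):
        # The prediction is made with respect to the middle frame x_i.
        triplets.append(np.stack([clip[i - d], clip[i], clip[i + d]]))
    return np.asarray(triplets)   # shape: (num_frames - 2*d, 3, n, n)
```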
  • The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can support two types of procedures: (i) classification and (ii) segmentation. For each exemplary procedure, a ground-truth label yi was generated for a subset of the ultrasound frames xi in the exemplary database; yi can either be a single label (e.g., for classification procedures) or can consist of a label for each pixel in xi (e.g., for segmentation procedures). Thus, either yi∈ℒ={ℓ1, ℓ2, . . . , ℓk} can be a single label, or yi∈ℒ^(n×n) can be a per-pixel label (e.g., corresponding to each pixel of xi).
  • The first two exemplary procedures can be classification procedures. For the first exemplary procedure of View detection, yi∈ℒ, where ℒ={SA, LA, 5C, 4C, 3C, 2C} can correspond to Short-Axis, Long-Axis, 5-Chamber, 4-Chamber, 3-Chamber and 2-Chamber views. For the procedure of Diastole/systole detection, ℒ={DI, SY, IN, DE} can correspond to DIastole, SYstole, INflating, and DEflating of the left ventricle. Since DI and SY can be instantaneous configurations, only two labels ℒ={IN, DE} were utilized, and these labels were assigned to all frames between peaks (e.g., strictly between diastole and systole, and between systole and diastole).
  • The next three exemplary procedures can be segmentation procedures. The exemplary part segmentation procedure implemented can use labels yi∈ℒ^(n×n), where ℒ={LV, BA}; LV can stand for Left-Ventricle and BA for Background. Given an input sequence of frames Xi, the exemplary goal can be to decide, for each pixel in the middle frame xi, if it can be part of the left-ventricle or not. Manually labeling each pixel in xi can be tedious and impractical. Therefore, an exemplary software tool was produced that can facilitate the generation of labels yi for a collection of frames xi. An exemplary screenshot from a labeling session is shown in the image in FIG. 2A. The user uses an interactive tool that can facilitate him/her to mark a sparse set of control points 205, and a smooth closed curve 210 (e.g., cubic spline) can be computed interpolating these points. The user can add, remove or edit these control points 205. The user can mark consecutive frames, and can use a previous frame's curve as an initialization. Given a closed curve drawn on a frame xi, all the pixels within the region bounded by this curve can be marked with the same label ℓi∈ℒ. For example, as shown in the image in FIG. 3A, the label yi corresponding to the frame xi in the image in FIG. 2A is shown, where white area 305 corresponds to the LV label and black area 310 to the BA label.
  • The exemplary Valve Localization procedure can utilize marking a single pixel per frame to place the Doppler measuring point. Three valves were implemented; ℒ={MI, TR, AO, BA}, corresponding to the Mitral (“MI”), Tricuspid (“TR”) and Aortic (“AO”) valves and the background (“BA”). To generate data, an exemplary software tool, similar to the one used for part segmentation, was produced, which can facilitate the user to select a pixel in each image to indicate the location of the relevant valve. (See, e.g., FIG. 2B, element 215). To create meaningful training data, statistics were obtained by asking the user to mark the same set of 8 clips (e.g., 4 clips for 4C and 4 clips for 5C) 10 times. This data was used to calculate statistics of the valve sampling, and the training data was created by splatting an ellipse centered at the user-prescribed pixel with variances taken from the above statistics. This was performed separately and independently for each valve. For example, the image shown in FIG. 3B illustrates the label yi created for the frame shown in the image shown in FIG. 2B, where white pixels 305 represent the label AO, and black pixels 310 represent label BA.
  • The exemplary anomaly detection procedure can address pericardial effusion, detecting fluid accumulation in the pericardial cavity. In this exemplary segmentation procedure, the labels can discriminate between pericardial fluid and background, ℒ={PE, BA}. Software similar to that used for the previous two segmentation procedures was built, with which the user can annotate the fluid areas.
  • Exemplary Network Architecture
  • FIG. 4A shows an exemplary diagram/architecture of the exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure. For example, as shown in FIG. 4A, each block 405 can represent a convolutional block that can consist of a series of convolutions and ReLU (e.g., activation) layers. For each convolutional block 405, the number of channels (e.g., depth) is indicated. Consecutive convolutional blocks 405, with different resolutions (e.g., width and height of blocks), can be connected by pooling layers. Blocks 410 can represent the output of the exemplary Core NN. The architecture of the exemplary Core NN block 415 is shown in FIG. 4B. The input to the exemplary system can be a sequence of ultrasound frames Xi=(xi−1, xi, xi+1); the system can feed each frame xi−1, xi, xi+1 independently through the Core NN to obtain the high-level ultrasound features, and can concatenate the output. The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can use the concatenated features to perform the procedures described above, as illustrated in the sketch below.
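The following is a minimal sketch of the weight coupling implied by this description: one Core NN module is applied to each of the three frames, so its three realizations share parameters by construction. The stand-in core layers are assumptions; the actual Core NN 415 follows FIG. 4B.

```python
# Sketch of weight coupling: the same Core NN instance processes each
# frame, so the three "realizations" share one set of weights.
import torch
import torch.nn as nn

class TripletFeatures(nn.Module):
    def __init__(self, feat_channels=64):
        super().__init__()
        self.core = nn.Sequential(           # stand-in for the Core NN 415
            nn.Conv2d(1, feat_channels, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(feat_channels, feat_channels, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x_prev, x_mid, x_next):
        # One set of weights, three realizations; outputs concatenated in depth.
        return torch.cat([self.core(x_prev), self.core(x_mid),
                          self.core(x_next)], dim=1)
```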
  • The exemplary procedures can be divided into two groups: (i) classification procedures, and (ii) segmentation procedures. The exemplary classification procedure can request to assign a label zi∈ℒ, from a set of possible labels ℒ={ℓ1, . . . , ℓk}, to an input sequence Xi. The exemplary classification procedures can include View detection and Systole/diastole detection. The exemplary segmentation procedure can ask, for a given input sequence Xi, to generate a label per pixel for the middle frame xi, that is, zi∈ℒ^(n×n), where xi can be an n×n image and, as before, ℒ={ℓ1, . . . , ℓk}. The exemplary segmentation procedures can include part segmentation, valve localization and anomaly detection. Anomaly detection can also have an instantiation as a classification procedure. Each procedure can have its own network with a suitable architecture based on its type (e.g., classification or segmentation).
  • The two groups of the exemplary procedures can use the same concatenated features produced by the exemplary Core NN. The exemplary classification procedures can use an exemplary classification framework (e.g., element 420 shown in FIG. 4A) (see, e.g., Reference 3), which can utilize fully connected layers to reduce the output to a prediction vector in ℝ^k (e.g., k can be the number of classes in ℒ), from which a prediction zi can be made.
  • During the exemplary segmentation (see, e.g., element 425 shown in FIG. 4A), an exemplary upsampling procedure can be used, which can be different from existing semantic segmentation architectures (see, e.g., Reference 2) that learn a deconvolution operator and inject previous layers; the low-resolution output can be upsampled to full resolution, and the output can be concatenated with the input sequence Xi, which can pass through a convolutional block to produce the final segmentation. The upsampled segmentation information can provide a smooth rough approximation of the part to be segmented, and the final convolutional block can use local features to refine its boundary. A sketch of this exemplary head is provided below.
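A minimal sketch of such an upsampling head follows, assuming PyTorch-style modules; the channel counts and the refinement block layout are illustrative assumptions, not the disclosed architecture.

```python
# Sketch of the upsample-concatenate-refine segmentation head: the coarse
# response is upsampled to full resolution, concatenated with the input
# triplet, and refined by a final convolutional block.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpsampleSegHead(nn.Module):
    def __init__(self, feat_channels, num_labels, in_frames=3):
        super().__init__()
        self.coarse = nn.Conv2d(feat_channels, num_labels, 1)
        self.refine = nn.Sequential(
            nn.Conv2d(num_labels + in_frames, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_labels, 3, padding=1),
        )

    def forward(self, feats, x_seq):
        # x_seq: (B, 3, n, n) input triplet; feats: coarse core features.
        coarse = F.interpolate(self.coarse(feats), size=x_seq.shape[-2:],
                               mode="bilinear", align_corners=False)
        # The smooth rough approximation is refined using local image features.
        return self.refine(torch.cat([coarse, x_seq], dim=1))
```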
  • The exemplary segmentation of a particular part/section of the heart, for example, the left-ventricle, can include a multi-scale task. For example, a rough estimate of the part location in the image can be provided, and the exemplary prediction can be gradually refined, or modified, based on local features of the image. The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can include two exemplary architectures for segmenting anatomical parts from medical images. (See e.g., diagrams shown in FIGS. 6A and 6B).
  • For example, the diagram shown in FIG. 6A illustrates a first exemplary network in a serial form. This exemplary network can produce a bottom-up segmentation using consecutive upsampling blocks (element 605 shown in the diagram of FIG. 6A). Each block can include a bilinear or deconvolution up-sampling, and various convolutional-relu layers. Each block 605 can receive as an input the previous low-resolution segmentation, add to it a downsampled version of the original image data (e.g., lines 610), and can produce the segmentation in a higher resolution. One of the exemplary layers can produce the result in the resolution of the original image. FIG. 6B shows a diagram of an exemplary network in a parallel form. Truncated copies of the Core NN can be utilized, and attached to each an up-sampling block (e.g., similar to the serial design). Each truncated copy of the Core NN can be used to reduce the original images to different resolutions with corresponding receptive fields. For example, the lowest resolution can be used to recognize where the LV is but may not precisely determine the borders, while the highest resolution (e.g., the one with the smallest receptive field) can attempt to influence the borders of the segmentation based on local information in the image. The results can then be aggregated to achieve the final segmentation. It can also be possible to train the truncated Core NN for every resolution.
  • The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can also consider the addition of a total variation regularizer directly to the network loss function in order to encourage segmentation with a shorter boundary curve. The total variation energy can be defined directly on the output of the segmentation network as λΣp∥[∇F(Xi)]p∥2, where F(Xi) can be the difference of the network response for a certain label (e.g., LV) and the background label BA when applied to the input sequence Xi, the sum can be over all pixels p, and λ can be the amount of regularization.
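For concreteness, a minimal sketch of this total variation term follows, using forward differences to approximate the spatial gradient; the shape convention and the small epsilon added for numerical stability are assumptions.

```python
# Sketch of the total variation term: lam * sum_p ||[grad F(X_i)]_p||_2,
# where f holds the per-pixel response difference F(X_i) (e.g., LV minus BA).
import torch

def tv_regularizer(f, lam=0.1):
    """f: tensor of shape (B, H, W) holding F(X_i) per pixel."""
    dy = f[:, 1:, :-1] - f[:, :-1, :-1]        # vertical differences
    dx = f[:, :-1, 1:] - f[:, :-1, :-1]        # horizontal differences
    grad_norm = torch.sqrt(dx ** 2 + dy ** 2 + 1e-8)  # ||grad F||_2 per pixel
    return lam * grad_norm.sum()
```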
  • The exemplary Core NN 415 illustrated in FIG. 4A can include an imagenet architecture (see, e.g., Reference 3), which can reduce an input image to a set of feature vectors defined on a very coarse resolution. In the exemplary system, the same Core NN can be used for transforming each of the input ultrasound frames xi−1, xi, xi+1 to its feature vectors. The weights defining the exemplary Core NN can be coupled between its three realizations in the system. (See, e.g., the multiple blocks 415 of the exemplary Core NN).
  • The exemplary system and method, according to an exemplary embodiment of the present disclosure, can be utilized to improve the segmentation, and can involve L1 cost functions and Generative Adversarial Networks (“GANs”). For example, L1 regularization for segmentation can be used as a loss function, and can provide a more accurate boundary detection than a standard cross-entropy loss. Alternatively or in addition, an exemplary loss function can be trained using, for example, GANs. As an example, given some segmentation network, a discriminator network can be trained to distinguish real segmentations from segmentations created by the segmentation network. This discriminator can be used, in combination with some other loss or on its own, to further train the segmentation network. The input to the discriminator network can include (Xi, Zi) for real examples, and (Xi, f(Xi)) for generated examples, where f(Xi) can be the output of the segmentation network.
  • Exemplary Training
  • For training the exemplary system, an interleaving approach can be employed, as sketched below. For example, the first exemplary procedure (e.g., view detection) can train its NN (see, e.g., element 420 shown in FIG. 4A) while the Core NN is fixed, after which both the Core NN and the view-procedure NN can be trained. The second procedure's NN (e.g., Diastole/systole) can then be trained while fixing the Core NN, followed by training both the Core NN and the Diastole/Systole-NN. This can be repeated/continued until a convergence is achieved. Since preliminary procedures can be easier to generate data for, training the Core NN on these applications already provides a good starting point for more challenging procedures. Alternatively or in addition, the exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can be utilized to train all exemplary tasks simultaneously.
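A schematic sketch of this interleaving schedule follows; the helper train_epochs and the dictionary layout are hypothetical placeholders for whatever per-procedure training routine is used.

```python
# Sketch of interleaved training: each procedure head is trained with the
# Core NN frozen, then jointly with it, cycling until convergence.
def set_trainable(module, flag):
    for p in module.parameters():
        p.requires_grad = flag

def interleaved_training(core_nn, heads, loaders, train_epochs, cycles=5):
    """heads/loaders: dicts keyed by procedure name, e.g. 'view', 'dia_sys'."""
    for _ in range(cycles):
        for name, head in heads.items():
            set_trainable(core_nn, False)         # fix the Core NN
            train_epochs(core_nn, head, loaders[name])
            set_trainable(core_nn, True)          # then train both together
            train_epochs(core_nn, head, loaders[name])
```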
  • Exemplary Evaluation
  • The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, was evaluated on test data that was not used in the training or validation phases; the test data consisted of 322 clips (e.g., 10000 frames) for classification procedures, 42 clips (e.g., 530 frames) for part segmentation procedures and 108 clips (e.g., 2250 frames) for the valve localization procedure.
  • Exemplary Performance
  • Exemplary View Detection
  • The view of a given frame xi can be used for the exemplary classification. For example, FIG. 5 shows an exemplary image from each view in the exemplary dataset. In this experiment, the label produced by the system was compared to the ground-truth label produced by the user. The system generated probabilities on the set of labels, and the view which received the highest probability was chosen as the system output. FIGS. 7A and 7B show images of the probabilities produced by the exemplary system, method and computer-accessible medium for two exemplary images; note the high confidence in the correct view label. The corresponding exemplary histograms are shown in FIGS. 7C and 7D, respectively. The output of the exemplary system, method and computer-accessible medium was also compared to the ground-truth labeled views on the entire test data (e.g., 10k frames), and produced correct view identification in 98.5% of the frames.
  • Exemplary Diastole/Systole Detection
  • The exemplary diastole/systole detection procedure can utilize identifying the cardiac cycle stage of a given input frame xi, where the labels can be ℒ={DI, SY, IN, DE}. The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can generate probabilities, for example, only with respect to the non-instantaneous states, IN and DE. FIG. 8 shows an exemplary graph of the detection of the cardiac cycle according to an exemplary embodiment of the present disclosure. For example, line 805 illustrates the difference in probabilities, DE minus IN, producing values ranging from −1 to 1 for a video clip of test data. Curve 810 was fit with constant deflation and inflation time to best approximate line 805. The zero crossings of curve 810 can be the detected peak Systole (“SY”) and peak Diastole (“DI”); the intermediate stages can be the deflating (“DE”) and inflating (“IN”) stages. Additionally, as shown in the graph of FIG. 8 , points 815 and 820 represent the chosen triplets and the signal points, respectively, while line 825 represents the received signal. A simplified sketch of this stage detection is provided below.
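The following simplified sketch illustrates the stage extraction around FIG. 8; it replaces the constant deflation/inflation-time fit of curve 810 with a plain moving average, which is an assumption made only to keep the example short.

```python
# Sketch of cycle-stage detection: smooth the per-frame probability
# difference p(DE) - p(IN) and read zero crossings as SY/DI candidates.
import numpy as np

def cardiac_peaks(p_de, p_in, window=5):
    signal = np.asarray(p_de) - np.asarray(p_in)        # values in [-1, 1]
    kernel = np.ones(window) / window
    smooth = np.convolve(signal, kernel, mode="same")   # crude stand-in fit
    # Zero crossings: sign changes mark detected peak systole / diastole.
    crossings = np.where(np.diff(np.sign(smooth)) != 0)[0]
    return crossings
```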
  • FIGS. 9A-9F show a set of image frames from a cardiac cycle; in the top-left of each frame, the system draws the identified cardiac stage visualized as a dashed circle 905 between two circles indicating peak systole (e.g., small circle 910) and peak diastole (e.g., large circle 915); the radius of the dashed circle is taken directly from the fitted curve 810 of FIG. 8.
  • Exemplary Part Segmentation
  • The exemplary part segmentation procedure can utilize labeling each pixel in an input image xi according to the part labels. For example, only left ventricle (“LV”) segmentation can be implemented. FIG. 10 shows a set of images of the segmentation of the LV as produced by the exemplary system, which is illustrated by element 1005. The ground-truth (“GT”) user segmentation is illustrated by element 1010, and the overlay of the two is illustrated by element 1015. Note that the segmentation produced by the exemplary system and the user GT can be visually very close. To produce a quantitative analysis of this exemplary procedure, two error measures were computed (see the sketch below): (i) Intersection over Union (“IoU”), which measures the area of the intersection of the exemplary system's segmentation and the GT segmentation divided by the area of the union of both, that is, the ratio of the common segmented area (e.g., element 1015 shown in FIG. 10 ) to the common area plus the differing areas (e.g., elements 1005 and 1010 shown in FIG. 10 ); this ratio can be in the range of 0 to 1, where closer to 1 can be better, and 1 means the exemplary segmentation and the GT can be identical at the pixel level, and (ii) Relative Area Difference (“RAD”), which measures the difference in area between the segmentation produced by the exemplary system and the GT, divided by the GT segmentation area.
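Both error measures can be sketched directly from their definitions, assuming binary masks:

```python
# IoU: intersection area over union area; RAD: relative area difference.
import numpy as np

def iou(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

def relative_area_difference(pred, gt):
    gt_area = gt.astype(bool).sum()
    return (pred.astype(bool).sum() - gt_area) / gt_area
```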
  • To produce a baseline for the results, the exemplary user was asked to repeat the LV annotation of the test set after a period of several weeks, and these new segmentations were measured against the original GT segmentations. FIG. 12 shows four histograms for these two error measures: for each error measure, one set of histograms is shown for the exemplary method compared to the GT (e.g., set of histograms 1210) and one set for the user compared to the GT (e.g., set of histograms 1205). The results indicate the exemplary system produces segmentations slightly less consistent with the GT than the user, though comparable; these results can be considered remarkable when taking into account that the same user marked the two LV segmentations; larger variability in the user vs. GT experiment can be expected when repeating this experiment with a different user.
  • Exemplary Valve Localization
  • This exemplary procedure can utilize placing a point at a certain location for flow calculation during Doppler analysis. FIGS. 11A-11D show exemplary images of the valve localization (e.g., elements 1105) produced by the exemplary system, method and computer-accessible medium, for two views (e.g., FIGS. 11A and 11C) and magnified views (e.g., FIGS. 11B and 11D, respectively). Elements 1110 shown in FIGS. 11A-11D show a disk centered at the user-marked valve localization for the GT, where the radius can be computed to represent uncertainty in the user localization. The uncertainty can be computed based on data where the user marked the valve localization repeatedly 10 times per frame, and the maximal deviation can be computed from the average prediction; this value can be called an uncertainty radius, and can be labeled by r (see the sketch below). FIG. 13 shows an exemplary histogram 1305 of distances between the predicted valve localization of the exemplary system and the ground-truth localization by the user. Markers 1310 and 1315 indicate the uncertainty radius r and two times the uncertainty radius, 2r, respectively.
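A minimal sketch of the uncertainty radius computation follows, assuming the repeated markings are given as (x, y) pixel coordinates:

```python
# Uncertainty radius r: the maximal deviation of repeated user markings
# of the same valve location from their average.
import numpy as np

def uncertainty_radius(points):
    """points: array of shape (num_repeats, 2) of (x, y) markings."""
    mean = points.mean(axis=0)
    return float(np.linalg.norm(points - mean, axis=1).max())
```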
  • Exemplary Versatility
  • The exemplary Core NN was tested to determine how it can adapt to new procedures after different levels of training and using a “warm start” initialization. FIGS. 14A-14C are exemplary learning graphs for the exemplary Diastole/Systole classification test. For example, these learning graphs depict the train (e.g., element 1405) and validation (e.g., element 1410) error during training of the Diastole/systole procedure when its Core NN can be initialized in three different ways: (i) with the Core NN taken from an imagenet-type NN trained on natural images (see, e.g., FIG. 14A); (ii) with the Core NN after being trained on the view detection procedure (see, e.g., FIG. 14B) and (iii) with the Core NN after being trained on the part segmentation procedure (see, e.g., FIG. 14C). Initializing the system with a Core NN that already learned some ultrasound analysis procedure (e.g., view detection) produced much more accurate results in a shorter training time (e.g., compare FIGS. 14A and 14B). After learning the segmentation and repeating the DS training, the results improve even further (e.g., compare FIGS. 14B and 14C). This exemplary experiment supports that: (i) the different procedures corroborate and facilitate higher accuracy and better performance and (ii) initializing a procedure with the Core NN from a different ultrasound procedure can facilitate learning, and can achieve higher performance.
  • The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can be used as an add-on to any clinical commercial imaging device, (e.g., ultrasound). The exemplary system, method and computer-accessible medium can be used for both the Echocardiography (e.g., Ultrasound of the Heart) in the Cardiology Department and the Emergency Department (“ED”).
  • Exemplary Cardiac Function And Measurements
  • Multiple technicians were observed while performing a routine and complete Echocardiographic examination. The total duration of the examination and the distribution of time for each of the three tasks were recorded. (See, e.g., diagram of FIG. 15 ). About 50% of the total time was dedicated to image acquisition, about 30% to about 35% was used for analysis, including tracing online measurement and calculation and about 10% to about 15% was used to generate a report. (See e.g., diagram shown in FIG. 15 ).
  • The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can be used to accurately identify the left ventricle (e.g., out of the 4 chambers), and automatically apply and provide a complete cardiac function analysis that can be incorporated directly into the final study report. Currently, the technician has to identify two or three points, or trace the left ventricle in different views (e.g., out of the six views acquired), and then activate the calculation packages available on an exemplary machine. The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can automatically detect the different viewing windows and segments, and can identify the various parts of the heart, such as the left ventricle.
  • The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can use segmentation, peak systolic and/or peak diastolic frames, which can now be determined automatically. In the past, this has been performed manually by the technician by carefully scanning frame by frame, and identifying the peak systolic and peak diastolic frame. The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can utilize a neural core to determine these two events during the cardiac cycles, and can then perform an assessment of the left ventricular function. Thus, the manual labor can be eliminated completely, and all other measurements, including dimension of the left ventricle in systole and diastole, Right ventricular assessment, LA size, measurement of the aortic valve annulus, the aortic sinuses, the ascending aorta, the pulmonary valve, the mitral valve annulus and the tricuspid valve annulus, can be automatically measured.
  • An important part of the cardiac examination by a skilled echocardiographer can be to perform a thorough Doppler examination. In the past, the technician identified the ideal location and angle at which the sample volume of the Doppler can be located to receive the best signal-to-noise ratio. This can also be a manually laborious task that requires expertise in order to assess all cardiac valves, and such a task is time-consuming. The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can be used to identify the exact area where Doppler samples can be located. After the appropriate image is achieved, a Doppler sample location can be identified, and the exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can automatically activate the Doppler modality, thus achieving appropriate Doppler tracing across the cardiac valves. When Doppler tracing is achieved and displayed, calculation packages can be applied, and the mitral and tricuspid valve inflow, as well as the aorta and pulmonary outflow tract Doppler tracings, can be calculated automatically.
  • About 10%-15% of the total duration of the examination can be dedicated to incorporating all measurements in the final report before it can be sent to the specialist, for example, the cardiologist, who can finalize the report and send it to the electronic medical record. This can be a largely manual process that can be avoided by automatically measuring, using the exemplary system, method and computer-accessible medium, all the different variables needed to assess cardiac function as well as valve abnormalities. (See e.g., diagram shown in FIG. 15 ).
  • It is estimated that the exemplary system, method and computer-accessible medium, can save up to 40% of the time as compared to a manual exam (e.g., the type of examinations currently being performed). Additionally, the exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can improve efficiency as well as quality as compared to currently-performed manual exams.
  • The exemplary system, method, and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can, after the appropriate image is acquired by the technician/physician/nurses, automatically display cardiac function as normal, mild, moderate or severe left ventricular dysfunction. If needed, the exact number of Left Ventricular Ejection Fraction can also be displayed.
  • Exemplary Pericardial Effusion
  • Pericardial effusion represents one of the most dangerous cardiac abnormalities, and can lead to death if not diagnosed in a timely fashion. The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can be used to automatically detect the existence of pericardial effusion by ultrasound examination. For example, the exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can detect (e.g., immediately) any pericardial effusion, and alert the operator to the existence or nonexistence of pericardial effusion. If needed, the severity of the accumulated pericardial effusion can be displayed.
  • Exemplary Segmental Abnormality
  • Cardiac segmental abnormalities can be used as a screening tool for the diagnosis of ischemia of the cardiac muscle. This can be a very subjective task, and can be operator-dependent even in the hands of an expert cardiologist. The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can automatically notify the user that there may be segmental abnormalities, for example, hypokinesis, dyskinesia or paradoxical motion of any part of the left ventricular wall and septum. Currently, the heart is divided into 17 segments, and the exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can detect very subtle wall motion abnormalities across all segments.
  • Exemplary RV-To-LV Ratio
  • The exemplary system, method and computer-accessible medium can use the exemplary neural core to identify and separate the left and the right ventricles. The exemplary neural core can assess the relative and absolute areas and volumes of these ventricles, and can quickly calculate the ratio between them (e.g., in a fraction of a second). This can be beneficial in order to raise the suspicion level of a critical condition called a pulmonary embolism. In this condition there can be a major strain on the right ventricle, and as a result, the ventricle tends to enlarge, and the ratio between right ventricle and left ventricular area can be dramatically altered. This can be used as an exemplary screening tool to notify the clinician that there can be a possible pulmonary embolism based on the RV-to-LV ratio. (See e.g., diagram shown in FIG. 16 ).
  • For example, as illustrated in an exemplary diagram of FIG. 16 , a cardiac ultrasound platform 1605 can be used in either an online mode 1610 or an offline mode 1640. In the online mode 1610, the cardiac ultrasound platform can be used in a cardiac setting 1615 or an ED cardiac setting 1620. Various functions 1625 can be performed in the cardiac setting 1615, which can be based on a Doppler analysis 1635 (e.g., (i) LV EF, (ii) LV Volume, (iii) RV/LV Ratio, (iv) AO, MV, PV, TV, (v) Pericardial Effusion, (vi) Segmental Abn., (vii) Aortic Measurements, and (viii) IVC size). Further functions 1630 can be performed in the ED cardiac setting 1620 (e.g., (i) LV Function, (ii) Segmental Abn., (iii) LV Volume, (iv) Pericardial Effusion, (v) RV/LV Ratio, and (vi) IVC size). In the offline mode 1640, in a cardiac setting 1645, various offline functions 1650 can be performed (e.g., (i) LV EF, (ii) LV Volume, (iii) RV/LV Ratio, (iv) AO, MV, PV, TV, (v) Pericardial Effusion, (vi) Segmental Abn., (vii) Aortic Measurements, and (viii) IVC size).
  • Exemplary Ejection Fraction
  • The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can be used to compute or otherwise determine the EF from the raw UT input. The user can be provided with, for example, the indication of one of two possibilities: (i) a drive and (ii) a halt. In drive (see, e.g., FIG. 19A) the user can be instructed to navigate the UT transducer to produce the desired view (e.g., 4C, 2C, SA, etc.). When a halt sign is shown (see, e.g., FIG. 19B) the user can be requested to hold still while the exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, gathers information. After traversing the exemplary views, the exemplary system, method and computer-accessible medium, can automatically produce the EF prediction along with the 3D reconstruction of the LV.
  • The exemplary determination of the EF can include the following exemplary procedures:
      • (i) Each input triple Xi can be passed through the exemplary network to produce a View detection and LV segmentation.
      • (ii) A time versus area curve can be computed for each view.
      • (iii) Persistence diagrams can be used to find stable maximum and minimum points of these curves.
      • (iv) From each view, all maximum and minimum cross sections can be extracted.
      • (v) The 3D volumes of the systole and diastole of the LV can be reconstructed by considering all combinations of minimal cross sections and maximal cross sections.
      • (vi) For each collection of cross sections (e.g., all of the same type: either minimal or maximal), the cross section can be aligned in 3D space. (See e.g., FIG. 21A which illustrates one collection of minimum cross sections in image 2105 and one collection of maximum cross sections in image 2110).
      • (vii) A 3D mesh M=(V, E, F) can be fitted, with vertices V, edges E, and faces F, to the cross sections by constraining the boundary of the cross sections to be on the mesh surface while minimizing the Willmore energy, ∫M H² dA, of the surface, where H can be the mean curvature and dA can be the area element on the surface. The Willmore energy can be used to minimize distortion, and can be optimal for round spheres. The exemplary optimization can be performed by a gradient descent together with a line search strategy. FIG. 21B shows the reconstruction generated using the cross sections in FIG. 21A.
      • (viii) From each pair of systole and diastole reconstructions, a candidate EF can be measured. The final robust estimation of the EF can be generated as the median of the EF histogram (see the sketch after this list). FIG. 22 shows an exemplary histogram (EF Histogram 2202 and EF Prediction 2210) and the extracted value.
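A minimal sketch of step (viii) follows; the percent scaling and the helper names are assumptions made for the example:

```python
# Each (diastole, systole) volume pair yields a candidate EF; the reported
# EF is the median of the resulting candidate histogram.
import numpy as np

def ejection_fraction(edv, esv):
    """edv/esv: end-diastolic and end-systolic LV volumes (e.g., in mL)."""
    return 100.0 * (edv - esv) / edv

def robust_ef(diastole_volumes, systole_volumes):
    candidates = [ejection_fraction(edv, esv)
                  for edv in diastole_volumes
                  for esv in systole_volumes]
    return float(np.median(candidates))   # median of the EF histogram
```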
  • The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can utilize the above exemplary procedures to produce a 3D reconstruction of the entire cardiac cycle. (See, e.g., FIGS. 20A-20G).
  • To validate the automatic EF procedure described above, a test on 114 anonymized cases was performed, in which 4 expert cardiologists and 2 expert technicians each assessed/computed the EF. The ground truth for each case was defined as the median of these assessments. The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, was compared to the experts in terms of the mean deviation from the ground truth and the standard deviation of that deviation (a short sketch of this scoring procedure follows Table 1 below). As can be seen in Table 1 below, the exemplary system, method and computer-accessible medium produced results comparable to the top experts, and compared favorably to most experts. FIGS. 23A and 23B show the exemplary histogram of the errors of the experts (e.g., see FIG. 23A) and the errors of the exemplary system, method and computer-accessible medium (e.g., see FIG. 23B).
  • TABLE 1
    Comparison of mean and standard deviation of EF estimation errors on 114 subjects, based on a comparison of the exemplary system, method and computer-accessible medium to various experts with respect to ground-truth EF.

    Method       Mean EF Error    Std
    Ours             0.2554862    7.388649
    Expert1         −0.2661642    8.699921
    3D-GT            0.5572847    4.356815
    Expert2         −0.9854624    6.826413
    Report-GT        2.146116     5.323971
    Expert4          3.189976     7.996613
    Sim-GT          −4.851627     5.246874
    Bul-GT           5.107598     8.6520291
    Expert3         −8.397743     8.327786
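  • The scoring used in Table 1 can be outlined in a few lines of Python: the ground-truth EF per case is the median of the expert assessments, and each method is summarized by the mean and standard deviation of its signed deviation from that ground truth. The sketch below uses fabricated toy numbers purely for illustration; none of the values are from the study.

        import numpy as np

        def score(method_ef, expert_efs):
            """Mean and std of signed EF error against the per-case expert median."""
            ground_truth = np.median(expert_efs, axis=1)  # one median per case
            errors = np.asarray(method_ef) - ground_truth
            return float(np.mean(errors)), float(np.std(errors))

        # Toy example: 3 cases, 6 expert assessments each (illustrative only).
        rng = np.random.default_rng(0)
        experts = 55 + 5 * rng.standard_normal((3, 6))
        ours = 55 + 5 * rng.standard_normal(3)
        print(score(ours, experts))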
  • Exemplary Use in the Emergency Department
  • The use of ultrasound in the emergency department is commonplace. The emergency department of a hospital or a medical center can be one of the busiest and most stressful places in the healthcare system. A fast and reliable diagnosis can be crucial to improving patient outcomes. Ultrasound currently has an important role in the following exemplary areas: pulmonary, cardiac, and abdominal scanning, and OB-GYN. Additionally, acute scanning of orthopedic abnormalities, including fractures, has been introduced and incorporated into the ultrasound examination in the emergency department. The evolution of handheld devices that let the clinician scan without searching for equipment in the emergency department facilitates critical decisions while also providing procedural guidance with high-quality ultrasound imaging.
  • The attraction of immediate bedside sonographic examination in the evaluation of specific emergent complaints can make it an ideal tool for the emergency physician. The increasing pressure to triage, diagnose, and rapidly treat the patient has fueled ultrasound use as the primary screening tool in the emergency department. The major areas of current ED use are abdominal, pelvic, cardiac, and trauma.
  • Currently, for example, a minimum of 12 months of specific training is needed to train an emergency department physician to become an expert in ultrasound; there is an official ultrasound fellowship for emergency department physicians that takes a full 12 months. Not every emergency department in the country is currently staffed by an expert ultrasonographer. Thus, the exemplary system, method and computer-accessible medium can assist the physician in the emergency room in quickly identifying abnormalities in different body systems.
  • Exemplary Cardiac Ultrasound in the Emergency Department
  • Emergent cardiac ultrasound can be used to assess for pericardial effusion and tamponade, cardiac activity, infarction, global contractility, and central venous volume status, as well as a suspected pulmonary embolism. Ultrasound has also been incorporated into the resuscitation of critically ill and at-risk patients, and into the assessment of a patient with undifferentiated hypotension; emergent cardiac ultrasound can further be expanded for use in heart failure and dyspnea. The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can perform the following exemplary functions:
  • Exemplary Abnormal Segmental Movement. The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can also alert the operator if there is abnormal motion, including akinesia, hypokinesia, dyskinesia and paradoxical movement of any part of the ventricular wall and septum, indicating potential ventricular ischemia or infarction. Left ventricular volume can be displayed in situations of hypovolemia. (See, e.g., diagram shown in FIG. 16).
  • The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can be used in an emergency department for the following exemplary examinations:
      • 1) Focused assessment with sonography in trauma (FAST examination).
      • 2) Pregnancy.
      • 3) Abdominal aortic aneurysm.
      • 4) Hepatobiliary system.
      • 5) Urinary tract.
      • 6) Deep vein thrombosis.
      • 7) Soft tissue musculoskeletal.
      • 8) Thoracic airway.
      • 9) Ocular.
      • 10) Bowel.
  • The exemplary uses of ultrasound in the emergency department have the potential to leverage deep learning for a faster and more accurate diagnosis.
  • Emergency physicians' use of ultrasound can provide a timely and cost-effective means to accurately diagnose emergency conditions during illness and injury in order to provide higher-quality, lower-cost care. ED ultrasound use can often reduce the need for more expensive studies such as CT or MRI, and can reduce unnecessary admissions for a more comprehensive diagnostic workup. Additionally, moving the patient from one lab to another requires manpower and complex queue scheduling and monitoring. Thus, the exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can provide a new way of using ultrasound in the emergency department. (See, e.g., diagram shown in FIG. 17).
  • For example, as shown in an exemplary diagram of FIG. 17, neural network core medical imaging 1705 can be performed, e.g., using the exemplary system, method and/or computer-accessible medium according to exemplary embodiments of the present disclosure. This can be organ specific, based on various organs, organ types, or medical specialties 1715 (e.g., (i) cardiac, (ii) abdominal, (iii) chest, (iv) OBGYN, (v) oncology, (vi) urology, and (vii) orthopedics). Neural network core medical imaging 1705 can be used in various settings 1720 (e.g., (i) patient care, (ii) education, (iii) training, (iv) quality assurance ("QA"), and (v) outcomes). Various imaging modalities 1725 can be used (e.g., (i) ultrasound, (ii) magnetic resonance imaging ("MRI"), (iii) computed tomography ("CT"), (iv) positron emission tomography CT ("PETCT"), and (v) nuclear).
  • An exemplary assessment of the examination can result in improved point of care. Using established analytic tools can facilitate a much faster and more reliable diagnosis, without major requirements for special training. The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can be used in other fields including the military arena, telemedicine applications, urgent care facilities and in-home healthcare. (See, e.g., diagram shown in FIG. 18). Exemplary markets 1805 for the use of the exemplary system, method, and computer-accessible medium can include hospitals 1810 (e.g., in (i) cardiology 1815, (ii) ED 1820, (iii) radiology 1825, and (iv) intensive care 1830), urgent care 1835, homecare 1840, and the military 1845.
  • FIG. 24 illustrates an exemplary flow diagram of an exemplary method 2400 for detecting an anomaly in an anatomical structure of a patient according to an exemplary embodiment of the present disclosure. For example, at procedure 2405, imaging information related to an anatomical structure of a patient can be generated or received. At procedure 2410, the imaging information can optionally be upsampled. At procedure 2415, the anatomical structure can be segmented using a part segmentation procedure and a localization procedure, before or independently of the detection of the anomaly. At procedure 2420, a feature of the anatomical structure can be classified based on the imaging information using a neural network. At procedure 2425, an output produced by each of the neural networks can be concatenated. At procedure 2430, the anomaly can be detected based on data generated using the classification and segmentation procedures.
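  • The following is a hedged PyTorch sketch of the flow of FIG. 24: one small convolutional branch per input resolution, with the branch outputs concatenated (as in procedure 2425) and passed to a classification head whose output would feed the downstream anomaly detection. The layer sizes, branch count, and class count are illustrative assumptions, not the disclosed architecture.

        import torch
        import torch.nn as nn

        class Branch(nn.Module):
            """One per-resolution feature extractor (procedure 2420)."""
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(4), nn.Flatten(),  # 16*4*4 = 256
                )
            def forward(self, x):
                return self.features(x)

        class MultiResolutionClassifier(nn.Module):
            def __init__(self, num_resolutions=3, num_classes=2):
                super().__init__()
                self.branches = nn.ModuleList(Branch() for _ in range(num_resolutions))
                self.head = nn.Linear(256 * num_resolutions, num_classes)
            def forward(self, images):
                # Concatenate the per-network outputs (procedure 2425).
                feats = [b(x) for b, x in zip(self.branches, images)]
                return self.head(torch.cat(feats, dim=1))

        model = MultiResolutionClassifier()
        images = [torch.randn(1, 1, s, s) for s in (64, 128, 256)]
        logits = model(images)   # feeds the anomaly detection (procedure 2430)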
  • FIG. 25 shows a block diagram of an exemplary embodiment of a system according to the present disclosure. For example, exemplary procedures in accordance with the present disclosure described herein can be performed by a processing arrangement and/or a computing arrangement 2505. Such processing/computing arrangement 2505 can be, for example, entirely or a part of, or include, but not be limited to, a computer/processor 2510 that can include, for example, one or more microprocessors, and use instructions stored on a computer-accessible medium (e.g., RAM, ROM, hard drive, or other storage device).
  • As shown in FIG. 25 , for example a computer-accessible medium 2515 (e.g., as described herein above, a storage device such as a hard disk, floppy disk, memory stick, CD-ROM, RAM, ROM, etc., or a collection thereof) can be provided (e.g., in communication with the processing arrangement 2505). The computer-accessible medium 2515 can contain executable instructions 2520 thereon. In addition or alternatively, a storage arrangement 2525 can be provided separately from the computer-accessible medium 2515, which can provide the instructions to the processing arrangement 2505 so as to configure the processing arrangement to execute certain exemplary procedures, processes and methods, as described herein above, for example.
  • Further, the exemplary processing arrangement 2505 can be provided with or include an input/output arrangement 2535, which can include, for example a wired network, a wireless network, the internet, an intranet, a data collection probe, a sensor, etc. As shown in FIG. 25 , the exemplary processing arrangement 2505 can be in communication with an exemplary display arrangement 2530, which, according to certain exemplary embodiments of the present disclosure, can be a touch-screen configured for inputting information to the processing arrangement in addition to outputting information from the processing arrangement, for example. Further, the exemplary display 2530 and/or a storage arrangement 2525 can be used to display and/or store data in a user-accessible format and/or user-readable format.
  • The foregoing merely illustrates the principles of the disclosure. Various modifications and alterations to the described embodiments will be apparent to those skilled in the art in view of the teachings herein. It will thus be appreciated that those skilled in the art will be able to devise numerous systems, arrangements, and procedures which, although not explicitly shown or described herein, embody the principles of the disclosure and can be thus within the spirit and scope of the disclosure. Various different exemplary embodiments can be used together with one another, as well as interchangeably therewith, as should be understood by those having ordinary skill in the art. In addition, certain terms used in the present disclosure, including the specification, drawings and claims thereof, can be used synonymously in certain instances, including, but not limited to, for example, data and information. It should be understood that, while these words, and/or other words that can be synonymous to one another, can be used synonymously herein, that there can be instances when such words can be intended to not be used synonymously. Further, to the extent that the prior art knowledge has not been explicitly incorporated by reference herein above, it is explicitly incorporated herein in its entirety. All publications referenced are incorporated herein by reference in their entireties.
  • EXEMPLARY REFERENCES
  • The following references are hereby incorporated by reference in their entireties:
  • [1] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio, Generative adversarial nets, Advances in neural information processing systems, 2014, pp. 2672-2680.
  • [2] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros, Image-to-image translation with conditional adversarial networks, arXiv preprint arXiv:1611.07004 (2016).
  • [3] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton, Imagenet classification with deep convolutional neural networks, Advances in neural information processing systems, 2012, pp. 1097-1105.

Claims (22)

What is claimed is:
1. A system for analyzing at least one anatomical structure of at least one patient, comprising:
at least one neural network trained on multiple recognition and/or analysis procedures, trained in order of their difficulty level; and
a specifically configured computer hardware arrangement configured to:
receive imaging information related to the at least one anatomical structure of the at least one patient;
using said at least one neural network, produce at least one non-procedure specific feature of the at least one anatomical structure based on the imaging information; and
perform one of said recognition and/or analysis procedures using said non-procedure-specific features to determine at least one anomaly or as input to a further procedure.
2. The system of claim 1, wherein the imaging information includes at least three images of the at least one anatomical structure.
3. The system of claim 1, wherein the imaging information includes ultrasound imaging information.
4. The system of claim 1, wherein the at least one anatomical structure is a heart and wherein said multiple recognition and/or analysis procedures comprise:
a view detection procedure to detect a view of a particular imaging frame in the imaging information;
a systole/diastole detection procedure;
a part segmentation procedure to segment parts of the heart of the at least one patient from a background;
a valve localization procedure to localize a heart valve; and
an anomaly detection procedure.
5. The system of claim 1, wherein the at least one state of said heart includes at least one of (i) a systole state of a heart of the at least one patient, (ii) a diastole state of the heart of the at least one patient, (iii) an inflation state of the heart of the at least one patient or (iv) a deflation state of the heart of the at least one patient.
6. The system of claim 1, wherein the further procedure is configured to determine an ejection fraction using output of said view detection and part segmentation procedures, where the segmentation segments the left ventricle.
7. The system of claim 6, wherein the further procedure is configured to place at least one Doppler measuring point within an appropriate view of said heart.
8. The system of claim 1, wherein the imaging information includes a plurality of images at different resolutions, and wherein the at least one neural network includes a plurality of neural networks, each of the neural networks being associated with one of the images.
9. A method for analyzing at least one anatomical structure of at least one patient, comprising:
training at least one neural network on multiple recognition and/or analysis procedures, trained in order of their difficulty level;
receiving imaging information related to the at least one anatomical structure of the at least one patient;
using said at least one neural network, producing at least one non-procedure specific feature of the at least one anatomical structure based on the imaging information; and
performing one of said recognition and/or analysis procedures using said non-procedure-specific features to determine at least one anomaly or as input to a further procedure.
10. The method of claim 9, wherein the imaging information includes at least three images of the at least one anatomical structure.
11. The method of claim 9, wherein the imaging information includes ultrasound imaging information.
12. The method of claim 9, wherein the at least one anatomical structure is a heart and wherein said multiple recognition and/or analysis procedures comprise:
a view detection procedure to detect a view of a particular imaging frame in the imaging information;
a systole/diastole detection procedure;
a part segmentation procedure to segment parts of the heart of the at least one patient from a background;
a valve localization procedure to localize a heart valve; and
an anomaly detection procedure.
13. The method of claim 9, wherein the at least one state of said heart includes at least one of (i) a systole state of a heart of the at least one patient, (ii) a diastole state of the heart of the at least one patient, (iii) an inflation state of the heart of the at least one patient or (iv) a deflation state of the heart of the at least one patient.
14. The method of claim 9, wherein the further procedure comprises determining an ejection fraction using output of said view detection and part segmentation procedures, where the segmentation segments the left ventricle.
15. The method of claim 12, wherein the further procedure comprises placing at least one Doppler measuring point within an appropriate view of said heart.
16. The method of claim 9, wherein the imaging information includes a plurality of images at different resolutions, and wherein the at least one neural network includes a plurality of neural networks, each of the neural networks being associated with one of the images.
17. The system of claim 6, wherein the further procedure is configured to determine three dimensional volumes of the systole and diastole of the left ventricle and to fit a three dimensional mesh to each of said volumes.
18. The system of claim 6, wherein the further procedure is configured to determine said ejection fraction from said meshes.
19. The system of claim 4, wherein the further procedure is configured to generate measurements of said heart, including at least one of: a dimension of the left ventricle in systole and diastole, right ventricular assessment, LA size, measurement of the aortic valve annulus, the aortic sinuses, the ascending aorta, the pulmonary valve, the mitral valve annulus and the tricuspid valve annulus.
20. The method of claim 14, wherein the further procedure comprises determining three dimensional volumes of the systole and diastole of the left ventricle and to fit a three dimensional mesh to each of said volumes.
21. The method of claim 20, wherein the further procedure comprises determining said ejection fraction from said meshes.
22. The method of claim 12, wherein the further procedure comprises generating measurements of said heart, including at least one of: a dimension of the left ventricle in systole and diastole, right ventricular assessment, LA size, measurement of the aortic valve annulus, the aortic sinuses, the ascending aorta, the pulmonary valve, the mitral valve annulus and the tricuspid valve annulus.
US18/048,873 2017-01-19 2022-10-24 System and method for ultrasound analysis Pending US20230104045A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/048,873 US20230104045A1 (en) 2017-01-19 2022-10-24 System and method for ultrasound analysis

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201762448061P 2017-01-19 2017-01-19
PCT/US2018/014536 WO2018136805A1 (en) 2017-01-19 2018-01-19 System, method and computer-accessible medium for ultrasound analysis
US201916478507A 2019-07-17 2019-07-17
US18/048,873 US20230104045A1 (en) 2017-01-19 2022-10-24 System and method for ultrasound analysis

Related Parent Applications (2)

Application Number Title Priority Date Filing Date
US16/478,507 Continuation US11478226B2 (en) 2017-01-19 2018-01-19 System and method for ultrasound analysis
PCT/US2018/014536 Continuation WO2018136805A1 (en) 2017-01-19 2018-01-19 System, method and computer-accessible medium for ultrasound analysis

Publications (1)

Publication Number Publication Date
US20230104045A1 true US20230104045A1 (en) 2023-04-06

Family

ID=62908785

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/478,507 Active 2038-06-03 US11478226B2 (en) 2017-01-19 2018-01-19 System and method for ultrasound analysis
US18/048,873 Pending US20230104045A1 (en) 2017-01-19 2022-10-24 System and method for ultrasound analysis

Country Status (8)

Country Link
US (2) US11478226B2 (en)
EP (1) EP3570752A4 (en)
JP (2) JP2020511190A (en)
KR (1) KR20190119592A (en)
CN (1) CN110461240A (en)
IL (1) IL268141B2 (en)
RU (1) RU2019125590A (en)
WO (1) WO2018136805A1 (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7194691B2 (en) * 2017-03-28 2022-12-22 コーニンクレッカ フィリップス エヌ ヴェ Ultrasound clinical feature detection and related apparatus, systems, and methods
US10628919B2 (en) * 2017-08-31 2020-04-21 Htc Corporation Image segmentation method and apparatus
US11024025B2 (en) * 2018-03-07 2021-06-01 University Of Virginia Patent Foundation Automatic quantification of cardiac MRI for hypertrophic cardiomyopathy
EP3553740A1 (en) * 2018-04-13 2019-10-16 Koninklijke Philips N.V. Automatic slice selection in medical imaging
FI3793447T3 (en) 2018-05-15 2023-03-18 Univ New York System and method for orientating capture of ultrasound images
US11486950B2 (en) * 2018-08-01 2022-11-01 General Electric Company Systems and methods for automated graphical prescription with deep neural networks
CN109345498B (en) * 2018-10-05 2021-07-13 数坤(北京)网络科技股份有限公司 Coronary artery segmentation method fusing dual-source CT data
WO2020220126A1 (en) 2019-04-30 2020-11-05 Modiface Inc. Image processing using a convolutional neural network to track a plurality of objects
US11651487B2 (en) * 2019-07-12 2023-05-16 The Regents Of The University Of California Fully automated four-chamber segmentation of echocardiograms
JP7471895B2 (en) * 2020-04-09 2024-04-22 キヤノンメディカルシステムズ株式会社 Ultrasound diagnostic device and ultrasound diagnostic system
US20210330285A1 (en) * 2020-04-28 2021-10-28 EchoNous, Inc. Systems and methods for automated physiological parameter estimation from ultrasound image sequences
US11523801B2 (en) * 2020-05-11 2022-12-13 EchoNous, Inc. Automatically identifying anatomical structures in medical images in a manner that is sensitive to the particular view in which each image is captured
CN111598868B (en) * 2020-05-14 2022-12-30 上海深至信息科技有限公司 Lung ultrasonic image identification method and system
KR102292002B1 (en) * 2021-02-09 2021-08-23 강기운 Echocardiogram apparatus for monitoring pacing-induced cardiomyopathy
JP7475313B2 (en) * 2021-04-19 2024-04-26 富士フイルムヘルスケア株式会社 Ultrasound diagnostic system, ultrasound diagnostic device, and diagnostic support server
CN113450337B (en) * 2021-07-07 2024-05-24 东软医疗系统股份有限公司 Method and device for evaluating effusion in pericardial space, electronic equipment and storage medium
JP2023040452A (en) * 2021-09-10 2023-03-23 株式会社東芝 Data processor, method for processing data, and data processing program
KR20240057761A * 2022-10-25 2024-05-03 주식회사 온택트헬스 Method for providing information of echocardiography images and device using the same

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6106466A (en) * 1997-04-24 2000-08-22 University Of Washington Automated delineation of heart contours from images using reconstruction-based modeling
US20060111877A1 (en) 2002-12-13 2006-05-25 Haselhoff Eltjo H System and method for processing a series of image frames representing a cardiac cycle
EP1639542A1 (en) * 2003-06-16 2006-03-29 Philips Intellectual Property & Standards GmbH Image segmentation in time-series images
US7912528B2 (en) 2003-06-25 2011-03-22 Siemens Medical Solutions Usa, Inc. Systems and methods for automated diagnosis and decision support for heart related diseases and conditions
US7545965B2 (en) * 2003-11-10 2009-06-09 The University Of Chicago Image modification and detection using massive training artificial neural networks (MTANN)
US7536044B2 (en) 2003-11-19 2009-05-19 Siemens Medical Solutions Usa, Inc. System and method for detecting and matching anatomical structures using appearance and shape
US20060074315A1 (en) 2004-10-04 2006-04-06 Jianming Liang Medical diagnostic ultrasound characterization of cardiac motion
US8913830B2 (en) 2005-01-18 2014-12-16 Siemens Aktiengesellschaft Multilevel image segmentation
US8009900B2 (en) 2006-09-28 2011-08-30 Siemens Medical Solutions Usa, Inc. System and method for detecting an object in a high dimensional space
US8073215B2 (en) * 2007-09-18 2011-12-06 Siemens Medical Solutions Usa, Inc. Automated detection of planes from three-dimensional echocardiographic data
US8771189B2 (en) 2009-03-18 2014-07-08 Siemens Medical Solutions Usa, Inc. Valve assessment from medical diagnostic imaging data
US8396268B2 (en) * 2010-03-31 2013-03-12 Isis Innovation Limited System and method for image sequence processing
US9092692B2 (en) 2012-09-13 2015-07-28 Los Alamos National Security, Llc Object detection approach using generative sparse, hierarchical networks with top-down and lateral connections for combining texture/color detection and shape/contour detection
US9730643B2 (en) * 2013-10-17 2017-08-15 Siemens Healthcare Gmbh Method and system for anatomical object detection using marginal space deep neural networks
US9700219B2 (en) * 2013-10-17 2017-07-11 Siemens Healthcare Gmbh Method and system for machine learning based assessment of fractional flow reserve
US9092743B2 (en) 2013-10-23 2015-07-28 Stenomics, Inc. Machine learning system for assessing heart valves and surrounding cardiovascular tracts
US10271817B2 (en) 2014-06-23 2019-04-30 Siemens Medical Solutions Usa, Inc. Valve regurgitant detection for echocardiography
US10194888B2 (en) 2015-03-12 2019-02-05 Siemens Medical Solutions Usa, Inc. Continuously oriented enhanced ultrasound imaging of a sub-volume
CN106156793A (en) 2016-06-27 2016-11-23 西北工业大学 Extract in conjunction with further feature and the classification method of medical image of shallow-layer feature extraction
US10074038B2 (en) * 2016-11-23 2018-09-11 General Electric Company Deep learning medical systems and methods for image reconstruction and quality evaluation

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230113154A1 (en) * 2018-02-26 2023-04-13 Siemens Medical Solutions Usa, Inc. Three-Dimensional Segmentation from Two-Dimensional Intracardiac Echocardiography Imaging
US11806189B2 (en) * 2018-02-26 2023-11-07 Siemens Medical Solutions Usa, Inc. Three-dimensional segmentation from two-dimensional intracardiac echocardiography imaging

Also Published As

Publication number Publication date
JP7395142B2 (en) 2023-12-11
WO2018136805A1 (en) 2018-07-26
JP2022188108A (en) 2022-12-20
IL268141B1 (en) 2023-05-01
RU2019125590A3 (en) 2021-08-30
KR20190119592A (en) 2019-10-22
EP3570752A4 (en) 2020-01-22
US11478226B2 (en) 2022-10-25
IL268141A (en) 2019-09-26
EP3570752A1 (en) 2019-11-27
CN110461240A (en) 2019-11-15
JP2020511190A (en) 2020-04-16
US20190388064A1 (en) 2019-12-26
RU2019125590A (en) 2021-02-19
IL268141B2 (en) 2023-09-01

Similar Documents

Publication Publication Date Title
US20230104045A1 (en) System and method for ultrasound analysis
US10909681B2 (en) Automated selection of an optimal image from a series of images
US11576621B2 (en) Plaque vulnerability assessment in medical imaging
US11024025B2 (en) Automatic quantification of cardiac MRI for hypertrophic cardiomyopathy
Slomka et al. Cardiac imaging: working towards fully-automated machine analysis & interpretation
Que et al. CardioXNet: automated detection for cardiomegaly based on deep learning
Nurmaini et al. Accurate detection of septal defects with fetal ultrasonography images using deep learning-based multiclass instance segmentation
US20220012875A1 (en) Systems and Methods for Medical Image Diagnosis Using Machine Learning
US11076824B1 (en) Method and system for diagnosis of COVID-19 using artificial intelligence
JP2013545520A (en) Image search engine
US11379978B2 (en) Model training apparatus and method
Lucassen et al. Deep learning for detection and localization of B-lines in lung ultrasound
WO2024126468A1 (en) Echocardiogram classification with machine learning
Nasimov et al. Deep learning algorithm for classifying dilated cardiomyopathy and hypertrophic cardiomyopathy in transport workers
Zeng et al. A 2.5 D deep learning-based method for drowning diagnosis using post-mortem computed tomography
Ragnarsdottir et al. Interpretable prediction of pulmonary hypertension in newborns using echocardiograms
Dominguez et al. Classification of segmental wall motion in echocardiography using quantified parametric images
Jafari Towards a more robust machine learning framework for computer-assisted echocardiography
US20230316523A1 (en) Free fluid estimation
US20240233129A1 (en) Quantification of body composition using contrastive learning in ct images
KR20230168954A (en) Device and method for analyzing functional modalities
Lin et al. Deep learning for the identification of pre-and post-capillary pulmonary hypertension on cine MRI
WO2024127406A1 (en) Automatized detection of intestinal inflammation in crohn's disease using convolutional neural network
WO2024046807A1 (en) Ultrasound video feature detection using learning from unlabeled data
WO2023110680A1 (en) A computer implemented method, a method and a system

Legal Events

Date Code Title Description
AS Assignment

Owner name: YEDA RESEARCH AND DEVELOPMENT CO. LTD., ISRAEL

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KEZURER, ITAY;LIPMAN, YARON;REEL/FRAME:061587/0227

Effective date: 20191204

Owner name: NEW YORK UNIVERSITY, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LUDOMIRSKY, ACHIAU;REEL/FRAME:061587/0296

Effective date: 20191204

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION