EP4150534A1 - Motion learning without labels - Google Patents

Motion learning without labels

Info

Publication number
EP4150534A1
Authority
EP
European Patent Office
Prior art keywords
pair
training
machine learning
learning model
motion field
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP21803270.4A
Other languages
German (de)
English (en)
Other versions
EP4150534A4 (fr)
Inventor
Allen Lu
Babajide Ayinde
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
EchoNous Inc
Original Assignee
EchoNous Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by EchoNous Inc filed Critical EchoNous Inc
Publication of EP4150534A1
Publication of EP4150534A4

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/269 Analysis of motion using gradient-based methods
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/08 Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A61B8/0883 Detecting organic movements or changes, e.g. tumours, cysts, swellings for diagnosis of the heart
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/48 Diagnostic techniques
    • A61B8/485 Diagnostic techniques involving measuring strain or elastic properties
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2155 Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217 Validation; Performance evaluation; Active pattern learning techniques
    • G06F18/2178 Validation; Performance evaluation; Active pattern learning techniques based on feedback of a supervisor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/149 Segmentation; Edge detection involving deformable models, e.g. active contour models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/168 Segmentation; Edge detection involving transform domain methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/215 Motion-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/778 Active pattern-learning, e.g. online learning of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/48 Diagnostic techniques
    • A61B8/483 Diagnostic techniques involving the acquisition of a 3D volume of data
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/52 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5215 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
    • A61B8/5223 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for extracting a diagnostic or physiological parameter from medical diagnostic data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10132 Ultrasound image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30048 Heart; Cardiac
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 Recognition of patterns in medical or anatomical images
    • G06V2201/031 Recognition of patterns in medical or anatomical images of internal organs

Definitions

  • Figure 1 shows an example of one-dimensional deformation of an object. Force is applied to the object 110 to increase its initial length L0 111 to a distended length L 121, resulting in deformed object 120.
  • Equation (1) below determines the one-dimensional strain for this example.
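
The equation itself did not survive extraction; reconstructed from the quantities defined above (initial length $L_0$ and distended length $L$), the standard one-dimensional engineering strain is

$$\varepsilon = \frac{L - L_0}{L_0} = \frac{\Delta L}{L_0} \tag{1}$$

so that, for example, stretching a 10 mm segment to 11 mm gives $\varepsilon = 0.1$, i.e., 10% strain.
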
  • One-dimensional strain has only one component.
  • three-dimensional medical imaging, such as ultrasound imaging
  • Figure 2 shows three directions in which cardiac strain is commonly projected.
  • the context is the heart 200, having a wall 202 bounded by an outer surface 201 and an inner surface 203.
  • a section of the heart is shown, comprising a section 212 of wall 202, a section 211 of outer surface 201, and a section 213 of inner surface 203.
  • strain has the potential to be highly useful in the clinical setting.
  • In Myocardial strain imaging: how useful is it in clinical decision making?, published in the European Heart Journal in 2016, the author described ways in which strain may be useful as a supplementary diagnostic method, including the following (quoting):
  • reduced GLS may be used to identify systolic dysfunction.
  • Strain imaging can be used to identify sub-clinical LV dysfunction in individuals who are evaluated for cardiomyopathy. This includes family screening for HCM and the finding of reduced GLS indicates early disease.
  • Strain may be used to diagnose myocardial ischaemia, but the technology is not sufficiently standardized to be recommended as a general tool for this purpose. In unclear clinical cases, however, it may be considered as a supplementary method.
  • Peak systolic longitudinal LA strain is a promising supplementary index of LV filling pressure, but needs further validation in prospective trials.
  • Figure 3 is a data flow diagram showing a generic workflow for computing strain at a high level.
  • motion tracking is performed on a sequence of 2D ultrasound images 301-304 to generate a displacement field 320, which describes the motion of each pixel in the image over the course of the sequence.
  • Strain is computed from the displacement field, and can be shown in various representations, including a 17-segment strain map 331 showing strain at end-systole (ES), strain curves 332 for each segment, strain rate (temporal derivative of strain), and a global strain value.
  • GLS: global longitudinal strain and strain rate
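
To make the displacement-to-strain step concrete, here is a minimal NumPy sketch that derives the linearized 2D strain tensor from a dense displacement field via its spatial gradient. The function name, array layout, and pixel spacing are illustrative assumptions, not the patent's implementation; projecting the tensor onto the myocardial wall directions would then yield the longitudinal, radial, and circumferential components of Figure 2.

```python
import numpy as np

def strain_from_displacement(u, spacing=(1.0, 1.0)):
    """Infinitesimal 2D strain tensor from a dense displacement field.

    u: array of shape (2, H, W); u[0] is the row (vertical) displacement,
    u[1] the column (horizontal) displacement, in the units of `spacing`.
    Returns exx, eyy, exy, each of shape (H, W).
    """
    # Spatial derivatives of each displacement component.
    duy_dy, duy_dx = np.gradient(u[0], spacing[0], spacing[1])
    dux_dy, dux_dx = np.gradient(u[1], spacing[0], spacing[1])

    exx = dux_dx                   # normal strain along x
    eyy = duy_dy                   # normal strain along y
    exy = 0.5 * (dux_dy + duy_dx)  # shear strain
    return exx, eyy, exy
```

Global values such as GLS would then be obtained by averaging the projected strain over the segments of the 17-segment map.
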
  • Speckle tracking is a popular motion tracking approach used in clinics today.
  • Figure 4 is a data flow diagram showing a generic workflow for speckle tracking.
  • ROI: region of interest
  • the previously defined ROI patch 411/431 is matched against every possible patch 432-435 in the search region 421/430.
  • the location of the patch in the search region that has the highest similarity to the original ROI patch, patch 434/44, is designated as the location to which the pixel of interest moved from the original image frame to the subsequent image frame. It is used as a basis for determining vertical motion displacement 446 and horizontal motion displacement 447.
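
A minimal sketch of this block-matching step, assuming normalized cross-correlation as the similarity measure; the patch and search-window sizes, and the exhaustive search itself, are illustrative simplifications of clinical speckle-tracking implementations.

```python
import numpy as np

def track_patch(frame1, frame2, center, patch=7, search=15):
    """Find where the `patch`-sized ROI of `frame1` centered at `center`
    (row, col) moves to in `frame2`, by exhaustively scoring every
    candidate patch in the search window with normalized cross-correlation.
    Assumes `center` lies at least patch//2 + search//2 pixels from
    every image border."""
    pr, sr = patch // 2, search // 2
    r, c = center
    roi = frame1[r - pr:r + pr + 1, c - pr:c + pr + 1].astype(float)
    roi = (roi - roi.mean()) / (roi.std() + 1e-8)

    best_score, best_dy, best_dx = -np.inf, 0, 0
    for dy in range(-sr, sr + 1):
        for dx in range(-sr, sr + 1):
            cand = frame2[r + dy - pr:r + dy + pr + 1,
                          c + dx - pr:c + dx + pr + 1].astype(float)
            cand = (cand - cand.mean()) / (cand.std() + 1e-8)
            score = (roi * cand).mean()   # normalized cross-correlation
            if score > best_score:
                best_score, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx   # vertical and horizontal displacement
```
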
  • Deformable registration is another popular motion tracking approach but is not yet commonly used in clinics.
  • Figure 5 shows a generic workflow for deformable registration.
  • Motion is estimated between two images 510 and 520 by deforming/displacing a pre-defined grid 511 on the moving image to obtain deformed-displaced grid 521 that maximizes similarity to the fixed image.
  • the grid points are the parameters of kernels, such as B-splines and thin plate splines, that implicitly spatially smooth the displacement field. This maximization is posed as a global objective function and is usually solved in a multi-scale framework to mitigate convergence to local maxima.
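
As a sketch of this style of registration, the following uses SimpleITK's B-spline transform, whose control-point grid plays the role of the deformed grid 511/521, together with a coarse-to-fine schedule; the metric, optimizer, and mesh size are assumptions for illustration rather than a prescribed configuration.

```python
import SimpleITK as sitk

fixed = sitk.ReadImage("frame_fixed.png", sitk.sitkFloat32)
moving = sitk.ReadImage("frame_moving.png", sitk.sitkFloat32)

# Control-point grid whose displacements parameterize a B-spline kernel,
# implicitly smoothing the recovered displacement field.
mesh_size = [8] * fixed.GetDimension()
transform = sitk.BSplineTransformInitializer(fixed, mesh_size)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetInitialTransform(transform, inPlace=True)
reg.SetOptimizerAsLBFGSB(numberOfIterations=100)

# Coarse-to-fine (multi-scale) schedule to mitigate local maxima.
reg.SetShrinkFactorsPerLevel([4, 2, 1])
reg.SetSmoothingSigmasPerLevel([2.0, 1.0, 0.0])

final_transform = reg.Execute(fixed, moving)
warped = sitk.Resample(moving, fixed, final_transform, sitk.sitkLinear, 0.0)
```
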
  • Figure 1 shows an example of 1 -dimensional deformation of an object.
  • Figure 2 shows three directions in which cardiac strain is commonly projected.
  • Figure 3 is a data flow diagram showing a generic workflow for computing strain at a high level.
  • Figure 4 is a data flow diagram showing a generic workflow for speckle tracking.
  • Figure 5 shows a generic workflow for deformable registration.
  • Figure 6 is a block diagram showing some of the components typically incorporated in at least some of the computer systems and other devices on which the facility operates.
  • Figure 7 is a flow diagram showing a process performed by the facility in some embodiments to assess strain based on three-dimensional video.
  • Figure 8 is a data flow diagram depicting the facility’s training of the neural network.
  • Figure 9 is a data flow diagram depicting the facility’s application of the neural network.
  • speckle tracking relies on tracking local speckle patterns over the image sequence. As a result, it requires a consistent propagation of speckle over time and requires a high frame-rate / temporal resolution. High temporal resolution necessitates lower spatial resolution, which is an unfortunate trade-off that should be avoided.
  • the inventor has conceived and reduced to practice a software and/or hardware facility that uses deep learning to determine a displacement or velocity field (a “motion field”) from a sequence of ultrasound images without using training labels.
  • the facility uses this displacement or velocity field to compute motion-based clinical measurements such as strain and strain rate from echocardiography.
  • the facility trains a neural network using a set of training pairs of radiological images, each pair captured from the same patient at different times.
  • the facility applies the present state of the machine learning model to the images of the training pair to obtain a predicted motion field.
  • the facility applies a spatial transform to transform the first radiological image of the training pair using the predicted motion field, and compares the transformed first radiological image of the training pair to the second radiological image of the training pair to obtain a loss metric value.
  • the facility adjusts the state of the machine learning model based on the loss metric value, and continues with additional training image pairs until the loss metric value reaches an acceptably low level.
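
As a concrete illustration of this label-free training step, here is a minimal PyTorch sketch in the spirit of unsupervised registration networks: the model predicts a motion field from the concatenated pair, a differentiable spatial transform warps the first image, and its dissimilarity to the second image is the loss that is backpropagated. The MSE photometric term, the smoothness regularizer, and its weight are illustrative assumptions rather than the facility's exact objective.

```python
import torch
import torch.nn.functional as F

def warp(img, flow):
    """Differentiable spatial transform: resample `img` (N,1,H,W) at
    locations displaced by `flow` (N,2,H,W), whose channels are the
    per-pixel (x, y) displacements in pixels."""
    n, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=img.device),
                            torch.arange(w, device=img.device), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).float()          # fixed mesh grid (H,W,2)
    moved = base.unsqueeze(0) + flow.permute(0, 2, 3, 1)  # displaced grid
    # Normalize coordinates to [-1, 1], as grid_sample expects.
    gx = 2.0 * moved[..., 0] / (w - 1) - 1.0
    gy = 2.0 * moved[..., 1] / (h - 1) - 1.0
    return F.grid_sample(img, torch.stack((gx, gy), dim=-1), align_corners=True)

def train_step(model, optimizer, frame1, frame2, smooth_weight=0.01):
    flow = model(torch.cat([frame1, frame2], dim=1))  # predicted motion field
    warped = warp(frame1, flow)                       # transformed first frame
    photometric = F.mse_loss(warped, frame2)          # compare to second frame
    # Penalize spatial gradients of the flow so the field stays smooth.
    smooth = ((flow[:, :, 1:, :] - flow[:, :, :-1, :]).abs().mean()
              + (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs().mean())
    loss = photometric + smooth_weight * smooth
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```
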
  • the facility can determine a strain assessment rapidly from a patient’s video frames — such as in less than one second, permitting it to be acted on promptly — while producing good results, even for large deformations, without the need to determine ground-truth displacements for neural network training.
  • the facility improves the functioning of computer or other hardware, such as by reducing the dynamic display area, processing, storage, and/or data transmission resources needed to perform a certain task, thereby enabling the task to be performed by less capable, capacious, and/or expensive hardware devices, and/or be performed with lesser latency, and/or preserving more of the conserved resources for use in performing other tasks.
  • the facility permits a computing device used for inference to have a less powerful processor or fewer processors, or be used for additional or different tasks simultaneously.
  • Figure 6 is a block diagram showing some of the components typically incorporated in at least some of the computer systems and other devices on which the facility operates.
  • these computer systems and other devices 600 can include server computer systems, cloud computing platforms or virtual machines in other configurations, desktop computer systems, laptop computer systems, netbooks, mobile phones, personal digital assistants, televisions, cameras, automobile computers, electronic media players, etc.
  • the computer systems and devices include zero or more of each of the following: a central processing unit (“CPU”) or processor of another type 601 for executing computer programs; a computer memory 602 for storing programs and data while they are being used, including the facility and associated data, an operating system including a kernel, and device drivers; a persistent storage device 603, such as a hard drive or flash drive for persistently storing programs and data; a computer-readable media drive 604, such as a floppy, CD-ROM, or DVD drive, for reading programs and data stored on a computer-readable medium; and a network connection 605 for connecting the computer system to other computer systems to send and/or receive data, such as via the Internet or another network and its networking hardware, such as switches, routers, repeaters, and electrical cables.
  • Figure 7 is a flow diagram showing a process performed by the facility in some embodiments to assess strain based on three-dimensional video.
  • the facility uses many pairs of training video frames to train a neural network to predict a motion field — such as a displacement field or a velocity field — from a pair of production video frames.
  • the facility uses neural networks of a variety of types.
  • the facility uses a UNET-based convolutional neural network.
  • the facility uses various other types of fully-convolutional networks.
  • the facility uses experimentation to configure the neural network.
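
For concreteness, the sketch below shows a compact U-Net-style fully-convolutional network mapping a two-channel input (the concatenated frame pair) to a two-channel motion field; the depth, channel counts, and layer choices are assumptions for illustration, not a configuration the facility is known to use.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class MotionUNet(nn.Module):
    """Maps a 2-channel input (frame pair) to a 2-channel flow field."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(2, 16)
        self.enc2 = conv_block(16, 32)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.head = nn.Conv2d(16, 2, 1)  # 2 channels: (dx, dy) per pixel
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        e1 = self.enc1(x)                  # full resolution
        e2 = self.enc2(self.pool(e1))      # 1/2 resolution
        b = self.bottleneck(self.pool(e2)) # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)
```

Input height and width must be divisible by 4 for the two pooling stages; `model = MotionUNet()` pairs with the `train_step` sketch above.
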
  • Figure 8 is a data flow diagram depicting the facility’s training of the neural network.
  • the facility feeds two images, Frame 1 801 and Frame 2 802, into the neural network 810 in its present state.
  • the network outputs a displacement or velocity field 811.
  • the facility uses the outputted displacement field to displace 720 a fixed mesh grid from Frame 1, and the displaced mesh grid is used to sample the pixel intensities from transformed Frame 1 for comparison to Frame 2. This comparison produces a Loss Metric 730.
  • the facility uses the Loss Metric to adjust the training state of the neural network. The facility repeats this process until the Loss Metric converges to a minimal value.
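
Tying the Figure 8 data flow to the earlier sketches, the outer training loop might look like the following; `training_pairs` and the epoch budget are hypothetical stand-ins for the facility's training-set iteration.

```python
model = MotionUNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(100):                   # hypothetical epoch budget
    for frame1, frame2 in training_pairs:  # hypothetical iterable of frame pairs
        loss = train_step(model, optimizer, frame1, frame2)
    # In practice, training stops once the Loss Metric has converged
    # to an acceptably low value rather than after a fixed epoch count.
```
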
  • the facility receives a pair of production video frames; i.e., frames from video captured from a patient for which strain is to be assessed.
  • the facility applies the neural network trained in act 701 to the received pair of production video frames to predict a motion field for the received pair of production video frames.
  • Figure 9 is a data flow diagram depicting the facility’s application of the neural network.
  • the facility subjects production video frame 1 901 and production video frame 2 902 to trained neural network 910 to produce a predicted motion field for the pair of production video frames.
  • the facility performs strain analysis (shown as analysis 920 in Figure 9) against the motion field predicted in act 703. This produces one or more of strain representations 921-924 shown in Figure 9, described above in connection with Figure 3.
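
Combining the trained network with the `strain_from_displacement` sketch above, inference on a production pair might look like this; `prod_frame1` and `prod_frame2` are hypothetical preprocessed tensors of shape (1, 1, H, W), and `model` is the trained network.

```python
import numpy as np
import torch

with torch.no_grad():
    flow = model(torch.cat([prod_frame1, prod_frame2], dim=1))  # (1,2,H,W)

# Reorder the (x, y) channels into the (row, col) layout the strain helper expects.
flow_np = flow[0].cpu().numpy()
u = np.stack([flow_np[1], flow_np[0]])
exx, eyy, exy = strain_from_displacement(u)
```
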
  • In act 705, the facility acts on the strain determined in act 704. In various embodiments, this includes one or more of storing a produced strain representation on behalf of the patient from whom the production video frames were obtained; displaying a produced strain representation; performing diagnostic analysis for the patient based at least in part on a produced strain representation; etc. After act 705, this process continues in act 702 to receive and process the next pair of production video frames.
  • objective functions of the neural network used by the facility and their optimization are designed in a flexible manner.
  • custom objective functions and training frameworks can be designed to suit particular ultrasound image sequence analyses.
  • the facility uses one or more of the following approaches:
  • the facility jointly trains both the motion tracking network and the segmentation network, as described in K. Ta, S. S. Ahn, A. Lu, J. C. Stendahl, A. J. Sinusas and J. S. Duncan, "A Semi-Supervised Joint Learning Approach to Left Ventricular Segmentation and Motion Tracking in Echocardiography," 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 2020, pp. 1734-1737, available at ieeexplore.ieee.org/abstract/document/9098664, which is hereby incorporated by reference in its entirety.
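
A hedged sketch of the cross-task coupling behind such joint training: the predicted flow warps the frame-1 mask toward the frame-2 mask, so each network regularizes the other. The loss terms and weighting here are illustrative assumptions; the cited paper's semi-supervised scheme additionally includes supervised segmentation losses on labeled frames.

```python
def joint_step(motion_net, seg_net, frame1, frame2):
    """One joint loss evaluation; reuses `warp` from the training sketch.
    `seg_net` is a hypothetical segmentation network producing a
    single-channel mask per frame."""
    flow = motion_net(torch.cat([frame1, frame2], dim=1))
    mask1, mask2 = seg_net(frame1), seg_net(frame2)

    photometric = F.mse_loss(warp(frame1, flow), frame2)
    # Cross-task term: the warped frame-1 mask should match the frame-2 mask.
    consistency = F.mse_loss(warp(mask1, flow), mask2)
    return photometric + 0.5 * consistency   # weighting is an assumption
```
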
  • the facility bootstraps this unsupervised learning framework by first training the network using a supervised learning approach (i.e., with labels during training), where the labels are noisy and derived from a traditional motion tracking approach, such as non-rigid registration. Then, the facility uses the weights learned from non-rigid registration to restart training in an unsupervised manner. In some cases, this has the benefit of training a higher-performance network with less training data.
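
A sketch of this warm-start scheme, reusing the names from the earlier sketches; the loaders and the choice of MSE against the noisy registration-derived flows are assumptions.

```python
# Phase 1: supervised pretraining on noisy labels produced by a
# traditional method such as non-rigid registration.
for frame1, frame2, noisy_flow in registration_pairs:   # hypothetical loader
    pred = model(torch.cat([frame1, frame2], dim=1))
    loss = F.mse_loss(pred, noisy_flow)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Phase 2: restart training in an unsupervised manner from the learned weights.
for frame1, frame2 in training_pairs:
    train_step(model, optimizer, frame1, frame2)   # photometric loss, no labels
```
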
  • the facility applies approaches discussed above to a variety of other motion abnormality detection applications.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Public Health (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Multimedia (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Surgery (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Pathology (AREA)
  • Mathematical Physics (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Cardiology (AREA)
  • Image Analysis (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)
  • Image Processing (AREA)

Abstract

A machine learning model is trained without labels to predict a motion field between a pair of images. The trained model can be applied to a distinguished pair of images to predict a motion field between the distinguished pair of images.
EP21803270.4A 2020-05-11 2021-05-10 Motion learning without labels Pending EP4150534A4 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063022989P 2020-05-11 2020-05-11
PCT/US2021/031618 WO2021231320A1 (fr) 2021-05-10 Motion learning without labels

Publications (2)

Publication Number Publication Date
EP4150534A1 (fr) 2023-03-22
EP4150534A4 EP4150534A4 (fr) 2024-05-29

Family

ID=78412944

Family Applications (1)

Application Number Title Priority Date Filing Date
EP21803270.4A Pending EP4150534A4 (fr) 2020-05-11 2021-05-10 Apprentissage de mouvement sans marqueurs

Country Status (4)

Country Link
US (1) US11847786B2 (fr)
EP (1) EP4150534A4 (fr)
JP (1) JP2023525287A (fr)
WO (1) WO2021231320A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102657226B1 (ko) * 2021-10-18 2024-04-15 주식회사 온택트헬스 Method and apparatus for augmenting cardiac ultrasound image data

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8861830B2 (en) * 2011-11-07 2014-10-14 Paieon Inc. Method and system for detecting and analyzing heart mechanics
EP3380859A4 (fr) * 2015-11-29 2019-07-31 Arterys Inc. Automated cardiac volume segmentation
DE202017007512U1 (de) * 2016-04-11 2022-04-28 Magic Pony Technology Limited Motion estimation through machine learning
US11449759B2 (en) * 2018-01-03 2022-09-20 Siemens Healthcare GmbH Medical imaging diffeomorphic registration based on machine learning
WO2019238232A1 (fr) * 2018-06-14 2019-12-19 Siemens Aktiengesellschaft Method and machine-readable storage medium for classifying an image of the sky near the sun

Also Published As

Publication number Publication date
JP2023525287A (ja) 2023-06-15
WO2021231320A1 (fr) 2021-11-18
US11847786B2 (en) 2023-12-19
US20210350549A1 (en) 2021-11-11
EP4150534A4 (fr) 2024-05-29

Similar Documents

Publication Publication Date Title
Chen et al. Deep learning for cardiac image segmentation: a review
Litjens et al. State-of-the-art deep learning in cardiovascular image analysis
Moradi et al. MFP-Unet: A novel deep learning based approach for left ventricle segmentation in echocardiography
CN110475505B (zh) Automatic segmentation using fully convolutional networks
Kusunose et al. Utilization of artificial intelligence in echocardiography
US10453200B2 (en) Automated segmentation using deep learned priors
Zhuang et al. Multiatlas whole heart segmentation of CT data using conditional entropy for atlas ranking and selection
US8594398B2 (en) Systems and methods for cardiac view recognition and disease recognition
US8311303B2 (en) Method and system for semantics driven image registration
Zhou Shape regression machine and efficient segmentation of left ventricle endocardium from 2D B-mode echocardiogram
JP2019530490A (ja) 検出精度を向上させるために関心領域の異なるビューからの複数の画像を用いるコンピュータ支援検出
US9142030B2 (en) Systems, methods and computer readable storage media storing instructions for automatically segmenting images of a region of interest
Habijan et al. Overview of the whole heart and heart chamber segmentation methods
JP5955199B2 (ja) Image processing apparatus, image processing method, and image processing program
US20230394670A1 (en) Anatomically-informed deep learning on contrast-enhanced cardiac mri for scar segmentation and clinical feature extraction
Kim et al. Automatic segmentation of the left ventricle in echocardiographic images using convolutional neural networks
Wang et al. Learning-based detection and tracking in medical imaging: a probabilistic approach
Duchateau et al. Machine learning approaches for myocardial motion and deformation analysis
Shen et al. Consistent estimation of cardiac motions by 4D image registration
Avazov et al. An improvement for the automatic classification method for ultrasound images used on CNN
Arafati et al. Generalizable fully automated multi-label segmentation of four-chamber view echocardiograms based on deep convolutional adversarial networks
US11847786B2 (en) Motion learning without labels
JP6781417B2 (ja) Inter-patient brain registration
Graves et al. Siamese pyramidal deep learning network for strain estimation in 3D cardiac cine-MR
Abramson et al. Anatomically-informed deep learning on contrast-enhanced cardiac MRI for scar segmentation and clinical feature extraction

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20221212

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20240426

RIC1 Information provided on ipc code assigned before grant

Ipc: G06N 20/20 20190101ALI20240422BHEP

Ipc: G06N 3/04 20060101ALI20240422BHEP

Ipc: G06N 3/08 20060101AFI20240422BHEP