EP3975865B1 - Guided ultrasound imaging - Google Patents
Guided ultrasound imaging
- Publication number
- EP3975865B1 (application number EP20728751.7A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- image
- ultrasound
- user
- volumetric
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000012285 ultrasound imaging Methods 0.000 title claims description 23
- 238000002604 ultrasonography Methods 0.000 claims description 135
- 238000013528 artificial neural network Methods 0.000 claims description 91
- 238000000034 method Methods 0.000 claims description 59
- 238000003384 imaging method Methods 0.000 claims description 43
- 238000003709 image segmentation Methods 0.000 claims description 8
- 238000004891 communication Methods 0.000 claims description 3
- 238000002059 diagnostic imaging Methods 0.000 claims description 3
- 238000004590 computer program Methods 0.000 claims 1
- 238000012549 training Methods 0.000 description 19
- 239000000523 sample Substances 0.000 description 18
- 230000033001 locomotion Effects 0.000 description 10
- 210000003754 fetus Anatomy 0.000 description 8
- 230000006870 function Effects 0.000 description 7
- 238000012545 processing Methods 0.000 description 7
- 238000013527 convolutional neural network Methods 0.000 description 6
- 238000003491 array Methods 0.000 description 5
- 230000008901 benefit Effects 0.000 description 5
- 230000001815 facial effect Effects 0.000 description 5
- 238000012986 modification Methods 0.000 description 5
- 230000004048 modification Effects 0.000 description 5
- 238000010586 diagram Methods 0.000 description 4
- 230000005540 biological transmission Effects 0.000 description 3
- 238000001514 detection method Methods 0.000 description 3
- 238000002592 echocardiography Methods 0.000 description 3
- 230000001605 fetal effect Effects 0.000 description 3
- 210000004072 lung Anatomy 0.000 description 3
- 230000008569 process Effects 0.000 description 3
- 238000009966 trimming Methods 0.000 description 3
- 210000001015 abdomen Anatomy 0.000 description 2
- 230000001419 dependent effect Effects 0.000 description 2
- 210000002216 heart Anatomy 0.000 description 2
- 239000011159 matrix material Substances 0.000 description 2
- 230000015654 memory Effects 0.000 description 2
- 210000002569 neuron Anatomy 0.000 description 2
- 230000004044 response Effects 0.000 description 2
- 230000000007 visual effect Effects 0.000 description 2
- 238000013459 approach Methods 0.000 description 1
- 230000003190 augmentative effect Effects 0.000 description 1
- 210000000481 breast Anatomy 0.000 description 1
- 230000000747 cardiac effect Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000012790 confirmation Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000018109 developmental process Effects 0.000 description 1
- 210000005069 ears Anatomy 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 210000001061 forehead Anatomy 0.000 description 1
- 239000011521 glass Substances 0.000 description 1
- 210000003128 head Anatomy 0.000 description 1
- 230000002440 hepatic effect Effects 0.000 description 1
- 230000000977 initiatory effect Effects 0.000 description 1
- 230000002452 interceptive effect Effects 0.000 description 1
- 238000010801 machine learning Methods 0.000 description 1
- 239000000463 material Substances 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000002611 ovarian Effects 0.000 description 1
- 230000037361 pathway Effects 0.000 description 1
- 210000002826 placenta Anatomy 0.000 description 1
- 230000009237 prenatal development Effects 0.000 description 1
- 230000000306 recurrent effect Effects 0.000 description 1
- 238000009877 rendering Methods 0.000 description 1
- 230000011664 signaling Effects 0.000 description 1
- 238000004513 sizing Methods 0.000 description 1
- 230000003393 splenic effect Effects 0.000 description 1
- 238000010408 sweeping Methods 0.000 description 1
- 230000009897 systematic effect Effects 0.000 description 1
- 230000002381 testicular Effects 0.000 description 1
- 210000001685 thyroid gland Anatomy 0.000 description 1
- 210000003954 umbilical cord Anatomy 0.000 description 1
- 230000002792 vascular Effects 0.000 description 1
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/44—Constructional features of the ultrasonic, sonic or infrasonic diagnostic device
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/46—Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient
- A61B8/467—Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient characterised by special input means
- A61B8/469—Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient characterised by special input means for selection of a region of interest
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/08—Detecting organic movements or changes, e.g. tumours, cysts, swellings
- A61B8/0866—Detecting organic movements or changes, e.g. tumours, cysts, swellings involving foetal diagnosis; pre-natal or peri-natal diagnosis of the baby
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/08—Detecting organic movements or changes, e.g. tumours, cysts, swellings
- A61B8/0833—Detecting organic movements or changes, e.g. tumours, cysts, swellings involving detecting or locating foreign bodies or organic structures
- A61B8/085—Detecting organic movements or changes, e.g. tumours, cysts, swellings involving detecting or locating foreign bodies or organic structures for locating body or organic structures, e.g. tumours, calculi, blood vessels, nodules
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/42—Details of probe positioning or probe attachment to the patient
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/44—Constructional features of the ultrasonic, sonic or infrasonic diagnostic device
- A61B8/4444—Constructional features of the ultrasonic, sonic or infrasonic diagnostic device related to the probe
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/44—Constructional features of the ultrasonic, sonic or infrasonic diagnostic device
- A61B8/4483—Constructional features of the ultrasonic, sonic or infrasonic diagnostic device characterised by features of the ultrasound transducer
- A61B8/4488—Constructional features of the ultrasonic, sonic or infrasonic diagnostic device characterised by features of the ultrasound transducer the transducer being a phased array
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/48—Diagnostic techniques
- A61B8/483—Diagnostic techniques involving the acquisition of a 3D volume of data
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/52—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/5207—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of raw data to produce diagnostic data, e.g. for generating an image
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/52—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/5215—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
- A61B8/5223—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for extracting a diagnostic or physiological parameter from medical diagnostic data
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/52—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/5215—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
- A61B8/523—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for generating planar views from image data in a user selectable plane not corresponding to the acquisition plane
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H40/00—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
- G16H40/60—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
- G16H40/63—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B2503/00—Evaluating a particular growth phase or type of persons or animals
- A61B2503/02—Foetus
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/107—Measuring physical dimensions, e.g. size of the entire body or parts thereof
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/13—Tomography
- A61B8/14—Echo-tomography
- A61B8/145—Echo-tomography characterised by scanning multiple planes
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/58—Testing, adjusting or calibrating the diagnostic device
- A61B8/585—Automatic set-up of the device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10132—Ultrasound image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10132—Ultrasound image
- G06T2207/10136—3D ultrasound image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30044—Fetus; Embryo
Definitions
- The present disclosure pertains to ultrasound systems and methods for recognizing anatomical features via ultrasound imaging and guiding a user to capture a desired view of a target feature.
- Particular implementations may utilize at least one neural network and associated display configured to generate customized user instructions based on the anatomical features recognized in a current ultrasound image.
- Implementations also include ultrasound image acquisition components configured to automatically toggle between 2D and volumetric imaging modes.
- WO2019086365A1 entitled “Intelligent Ultrasound System for Detecting Image Artifacts”, for example, can be generally configured to identify and remove image artifacts from such an image frame by applying a neural network to the frame.
- US 2017/119354 A1 discloses systems and methods for displaying a 3D ultrasound image in a desired view orientation.
- The present disclosure describes systems and methods for capturing ultrasound images of various anatomical objects in accordance with a particular view selected by a user. While the examples herein specifically address prenatal imaging of a fetus to acquire an image of the face of an unborn baby, it should be understood by those skilled in the art that the disclosed systems and methods are described with respect to fetal imaging for illustrative purposes only, and that anatomical imaging can be performed in accordance with the present disclosure on a variety of anatomical features, including but not limited to the heart and lungs, for instance.
- a system may be configured to improve the accuracy, efficiency and automation of prenatal ultrasound imaging by guiding a user to acquire an image of a targeted anatomical feature, automatically selecting a region of interest (ROI) within an image of the feature, capturing a volumetric, e.g., 3D, image of the feature, and modifying the volumetric image in accordance with user preferences.
- the system may include an ultrasound transducer configured to acquire echo signals responsive to ultrasound pulses transmitted toward a target region, which may include the abdomen of a patient.
- One or more processors may be coupled with the ultrasound transducer, each processor uniquely configured to perform one or more functions based on the ultrasound data acquired by the transducer.
- a data processor can be configured to implement one or more neural networks configured to recognize certain anatomical features and guide an operator to manipulate the transducer in the manner necessary to acquire an image of a target feature.
- the data processor can be configured to perform image segmentation or another boundary detection technique to identify certain anatomical features.
- a control circuit can be configured to automatically switch the transducer into a volumetric imaging mode for sweeping through the ROI.
- the acquired volumetric image may be modified, for example by a neural network or image rendering processor, configured to apply certain image modifications for improved clarity, quality and/or artistic purposes. While specific embodiments are described herein with respect to generating 3D images, the present disclosure is not limited to 3D imaging. For example, embodiments may also be directed to additional forms of volumetric imaging, such as 4D and/or spatio-temporal image correlation (STIC) imaging.
- an ultrasound imaging system may include an ultrasound transducer configured to acquire echo signals responsive to ultrasound pulses transmitted toward a target region.
- The system may also include one or more processors in communication with the ultrasound transducer and configured to: present, to a user, one or more illustrative volumetric images of a target feature, each illustrative volumetric image corresponding to a particular view of the target image; receive a user selection of one of the illustrative volumetric images; generate two-dimensional (2D) image frames from the acquired echo signals of the target region; identify one or more anatomical landmarks corresponding to the target feature in the generated 2D image frames; based on the anatomical landmarks and the particular view of the user-selected volumetric image, provide an instruction for manipulating the ultrasound transducer to a target location in order to generate at least one 2D image frame specific to the particular view of the user-selected volumetric image; cause the ultrasound transducer to acquire additional echo signals at the target location; and generate, with the acquired additional echo signals, an actual volumetric image of the target feature corresponding to the particular view of the user-selected volumetric image.
- The one or more processors are configured to identify the one or more anatomical landmarks via image segmentation. In some examples, the one or more processors are configured to identify the one or more anatomical landmarks via implementation of a neural network trained to recognize the anatomical landmarks. In some examples, the one or more processors are further configured to apply an artificial light source to the actual volumetric image in accordance with the particular view. In some examples, the artificial light source is applied by an artificial neural network. Examples can include one or more artificial neural networks, e.g., two, three or more communicatively coupled neural networks. In some embodiments, the artificial neural networks can be further configured to apply an image contrast adjustment to the actual volumetric image in accordance with the particular view. In some examples, the target feature can include the face of an unborn baby.
- the one or more processors are configured to generate the instruction for manipulating the ultrasound transducer by inputting the 2D image frames to an artificial neural network trained to compare the 2D image frames to stored image frames embodying the target feature.
- the artificial neural network is configured to generate a new instruction for manipulating the ultrasound transducer upon repositioning of the ultrasound transducer.
- the one or more processors are further configured to define a region of interest within the 2D image frame specific to the particular view of the user-selected volumetric image.
- the ultrasound imaging system further includes a controller configured to switch the ultrasound transducer from a 2D imaging mode to a volumetric imaging mode.
- the controller can be configured to switch the ultrasound transducer from the 2D imaging mode to the volumetric imaging mode automatically upon receiving an indication from the one or more processors that the region of interest has been defined.
- Embodiments can also include a user interface communicatively coupled with the one or more processors and configured to display the instruction for manipulating the ultrasound transducer to a target location.
- the one or more processors can be further configured to cause an indicator of the target feature to be displayed on the user interface.
- a method of ultrasound imaging may involve acquiring echo signals responsive to ultrasound pulses transmitted toward a target region.
- The method may further involve presenting, to a user, one or more illustrative volumetric images of a target feature, each illustrative volumetric image corresponding to a particular view of the target image; receiving a user selection of one of the illustrative volumetric images; generating two-dimensional (2D) image frames from the acquired echo signals of the target region; identifying one or more anatomical landmarks corresponding to the target feature in the generated 2D image frames; based on the anatomical landmarks and the particular view of the user-selected volumetric image, providing an instruction for manipulating the ultrasound transducer to a target location in order to generate at least one 2D image frame specific to the particular view of the user-selected volumetric image; causing the ultrasound transducer to acquire additional echo signals at the target location; and generating, with the acquired additional echo signals, an actual volumetric image of the target feature corresponding to the particular view of the user-selected volumetric image.
- the method further involves applying an artificial light source, an image contrast adjustment, or both to the actual volumetric image.
- the target feature comprises a face of an unborn baby.
- identifying the one or more anatomical landmarks involves image segmentation or implementation of at least one neural network trained to recognize the anatomical landmarks.
- the method further involves displaying the instruction for manipulating the ultrasound transducer. Embodiments may also involve defining a region of interest within the 2D image frame specific to the particular view of the user-selected volumetric image.
- the method also involves identifying additional anatomical landmarks of the target feature upon manipulation of an ultrasound transducer; and generating additional instructions for manipulating the ultrasound transducer based on the additional anatomical landmarks identified upon manipulation of the ultrasound transducer.
- Example methods may also involve switching the ultrasound transducer from the 2D imaging mode to a volumetric imaging mode upon receiving an indication that a region of interest has been identified.
- Any of the methods described herein, or steps thereof, may be embodied in non-transitory computer-readable medium comprising executable instructions, which when executed may cause a processor of a medical imaging system to perform the method or steps embodied herein.
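- As an illustration only, the claimed workflow can be sketched as a single guidance loop followed by a volumetric acquisition. The Python outline below is a minimal, hypothetical sketch: every function passed into it (select_view, acquire_2d_frame, find_landmarks, next_instruction, show, acquire_volume) is a placeholder name supplied for illustration, not an API defined by the present disclosure.
```python
def guided_acquisition(view_options, select_view, acquire_2d_frame,
                       find_landmarks, next_instruction, show, acquire_volume):
    """Guide the operator to the user-selected view, then acquire a volumetric image."""
    chosen_view = select_view(view_options)     # user picks an illustrative view (FIG. 2)
    while True:
        frame = acquire_2d_frame()              # live 2D imaging
        landmarks = find_landmarks(frame)       # image segmentation or a trained network
        instruction = next_instruction(landmarks, chosen_view)
        if instruction is None:                 # target 2D view has been reached
            break
        show(instruction)                       # e.g. "Please Translate As Directed"
    return acquire_volume()                     # switch to volumetric mode and sweep
```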
- An ultrasound system may utilize one or more artificial neural networks implemented by a computer processor, module or circuit.
- Example networks include a convolutional neural network (CNN), deep neural network (DNN), recurrent neural network (RNN), autoencoder neural network, or the like, configured to identify one or more anatomical features, e.g., head, feet, hands or legs, present within a 2D ultrasound image, and guide a user to manipulate an ultrasound transducer in the manner necessary to capture an image of a specifically targeted anatomical feature, e.g., the face of an unborn baby.
- the artificial neural network(s) may be trained using any of a variety of currently known or later developed machine learning techniques to obtain a neural network (e.g., a machine-trained algorithm or hardware-based system of nodes) that is configured to analyze input data in the form of ultrasound image frames and determine the imaging adjustments necessary to acquire a particular view of at least one anatomical feature.
- Neural networks may provide an advantage over traditional forms of computer programming algorithms in that they can be generalized and trained to recognize data set features by analyzing data set samples rather than by reliance on specialized computer code.
- one or more neural networks of an ultrasound system can be trained to identify a plurality of anatomical features, guide a user to obtain an image of a target feature based in part on the anatomical features identified, refine a ROI encompassing the target feature, and/or obtain a 3D image of the target feature.
- An ultrasound system in accordance with principles of the present invention may include or be operatively coupled to an ultrasound transducer configured to transmit ultrasound pulses toward a medium, e.g., a human body or specific portions thereof, and generate echo signals responsive to the ultrasound pulses.
- the ultrasound system may include a beamformer configured to perform transmit and/or receive beamforming, and a display configured to display ultrasound images generated by the ultrasound imaging system, along with notifications overlaid on or adjacent to the images.
- the ultrasound imaging system may include one or more processors and at least one neural network, which may be implemented in hardware and/or software components.
- the neural network(s) can be machine trained to identify anatomical features present within 2D images, guide a user to obtain an image of a target feature, identify a region of interest within the image of the target feature, and/or modify a 3D image of the target feature.
- the neural network(s) implemented according to some examples of the present disclosure may be hardware- (e.g., neurons are represented by physical components) or software-based (e.g., neurons and pathways implemented in a software application), and can use a variety of topologies and learning algorithms for training the neural network to produce the desired output.
- a software-based neural network may be implemented using a processor (e.g., single or multi-core CPU, a single GPU or GPU cluster, or multiple processors arranged for parallel-processing) configured to execute instructions, which may be stored in computer readable medium, and which when executed cause the processor to perform a machine-trained algorithm for evaluating an image.
- the ultrasound system may include a display or graphics processor, which is operable to arrange the ultrasound images and/or additional graphical information, which may include annotations, confidence metrics, user instructions, tissue information, patient information, indicators, and other graphical components, in a display window for display on a user interface of the ultrasound system.
- the ultrasound images and associated measurements may be provided to a storage and/or memory device, such as a picture archiving and communication system (PACS) for reporting purposes or future machine training.
- FIG. 1 shows an example ultrasound system according to principles of the present disclosure.
- the ultrasound system 100 may include an ultrasound data acquisition unit 110.
- the ultrasound data acquisition unit 110 can include an ultrasound probe which includes an ultrasound sensor array 112 configured to transmit ultrasound pulses 114 into a region 116 of a subject, e.g., abdomen, and receive ultrasound echoes 118 responsive to the transmitted pulses.
- the region 116 may include a developing fetus, as shown, or a variety of other anatomical objects, such as the heart, lungs or umbilical cord.
- the ultrasound data acquisition unit 110 can include a beamformer 120, a transmit controller 121, and a signal processor 122.
- the transmit controller 121 can be configured to control the transmission of ultrasonic beams from the sensor array 112.
- An image acquisition feature that may be controlled by the transmit controller 121 is the imaging mode, e.g., 2D or 3D, implemented by the sensor array 112.
- the beamformer 120 and sensor array 112 may switch from 2D imaging to 3D imaging upon acquiring a 2D image of a region of interest.
- the signal processor 122 can be configured to generate a stream of discrete ultrasound image frames 124 from the ultrasound echoes 118 received at the array 112.
- the image frames 124 can be communicated to a data processor 126, e.g., a computational module or circuit.
- the data processor 126 may be configured to analyze the image frames 124 by implementing various image segmentation and/or boundary detection techniques.
- the data processor 126 may, in addition or alternatively, be configured to implement one or more neural networks trained to recognize various anatomical features and/or generate user instructions for manipulating the ultrasound transducer 112.
- the data processor 126 can be configured to implement a first neural network 128, a second neural network 130, and/or a third neural network 132.
- the first neural network 128 may be trained to identify one or more anatomical features visible within the image frames 124, and based on the anatomical features identified, generate instructions for manipulating the ultrasound sensor array 112 in the manner necessary to obtain an image of a target feature 117, e.g., the face of an unborn baby.
- the second neural network 130 may be trained to identify a region of interest within the image of the target feature 117, which may trigger the sensor array 112, at the direction of the transmit controller 121, to acquire a 3D image of the region of interest.
- the third neural network 132 may be trained to perform one or more post-acquisition processing steps, e.g., the application of artificial lighting, necessary to generate a desired 3D portrait of the region of interest.
- the data processor 126 can also be coupled, communicatively or otherwise, to a memory or database 134 configured to store various data types, including training data and newly acquired, patient-specific data.
- the system 100 can also include a display processor 136, e.g., a computational module or circuit, communicatively coupled with data processor 126.
- the display processor 136 is further coupled with a user interface 138, such that the display processor 136 can link the data processor 126 (and thus any neural network(s) operating thereon) to the user interface 138, thereby enabling the neural network outputs, e.g., user instructions in the form of motion control commands, to be displayed on the user interface 138.
- the display processor 136 can be configured to generate 2D ultrasound images 140 from the image frames 124 received at the data processor 126, which may then be displayed via the user interface 138 in real time as an ultrasound scan is being performed.
- the display processor 136 can be configured to generate and display (via user interface 138) one or more illustrative volumetric images of a target feature, each illustrative volumetric image corresponding to a particular view of the target image.
- the illustrative volumetric images can be selected by a user, thereby prompting the system 100 to generate and display one or more commands for acquiring and generating one or more actual volumetric images in accordance with the user-selected view.
- A specific region of interest ("ROI") 142, e.g., an ROI box, may also be displayed.
- the ROI 142 may be positioned and trimmed by the second neural network 130 in some examples.
- One or more notifications 144 may be overlaid on or displayed adjacent to the images 140 during an ultrasound scan.
- the user interface 138 can also be configured to receive a user input 146 at any time before, during, or after an ultrasound scan.
- the user interface 138 may be interactive, receiving user input 146 indicating a desired viewpoint for imaging the target feature 117 and/or indicating confirmation that an imaging instruction has been followed.
- the user interface 138 may also be configured to display 3D images 148 acquired and processed by the ultrasound data acquisition unit 110, data processor 126, and display processor 136.
- the user interface 138 may comprise a display that is positioned external to the data processor 126, for example comprising a standalone display, an augmented reality glass, or a mobile phone.
- the system 100 can be portable or stationary.
- Various portable devices e.g., laptops, tablets, smart phones, or the like, may be used to implement one or more functions of the system 100.
- the ultrasound sensor array 112 may be connectable via a USB interface, for example.
- various components may be combined.
- the data processor 126 may be merged with the display processor 136 and/or the user interface 138.
- the first, second and/or third neural networks 128, 130, 132 may be merged such that the networks constitute sub-components of a larger, layered network, for example.
- the networks may be operatively arranged in a cascade, such that the output of the first neural network 128 comprises the input for the second neural network 130, and the output of the second neural network 130 comprises the input for the third neural network 132.
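- For illustration, such a cascade can be expressed as a pipeline in which each stage consumes the output of the previous one. The sketch below uses hypothetical stand-ins for the three networks and the volumetric acquisition step; it is not the actual model code of the present disclosure.
```python
def cascaded_pipeline(frame, guidance_net, roi_net, postprocess_net, acquire_volume):
    """Each stage consumes the previous stage's output, mirroring the cascade above."""
    aligned_frame = guidance_net(frame)      # stand-in for network 128: validated 2D view
    roi = roi_net(aligned_frame)             # stand-in for network 130: ROI in that view
    volume = acquire_volume(roi)             # volumetric sweep restricted to the ROI
    return postprocess_net(volume)           # stand-in for network 132: lighting/contrast
```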
- the ultrasound data acquisition unit 110 can be configured to acquire ultrasound data from one or more regions of interest 116, which may include a fetus and features thereof.
- the ultrasound sensor array 112 may include at least one transducer array configured to transmit and receive ultrasonic energy.
- the settings of the ultrasound sensor array 112 can be adjustable during a scan.
- The ultrasound sensor array 112, under the direction of the beamformer 120 and transmit controller 121, can be configured to switch between 2D and 3D imaging modes automatically, i.e., without user input.
- the ultrasound sensor array 112 can be configured to switch to 3D imaging mode (or 4D or STIC mode) after the target feature 117 is identified in a 2D image 140 and the ROI 142 is demarcated.
- the sensor array 112 may initiate an automated sweep through the target feature 117, thereby acquiring a 3D volume of image data.
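- A minimal sketch of how such an automatic mode switch and sweep trigger might be implemented in software is shown below; the class and method names (set_mode, sweep, on_roi_defined) are assumptions made for illustration and are not taken from the present disclosure.
```python
from enum import Enum, auto

class ImagingMode(Enum):
    MODE_2D = auto()
    MODE_3D = auto()

class TransmitController:
    """Hypothetical controller that switches to volumetric mode once an ROI exists."""
    def __init__(self, sensor_array):
        self.sensor_array = sensor_array
        self.mode = ImagingMode.MODE_2D

    def on_roi_defined(self, roi):
        """Called by the data processor once the ROI has been demarcated."""
        self.mode = ImagingMode.MODE_3D          # switch without user input
        self.sensor_array.set_mode(self.mode)    # assumed method on the array driver
        self.sensor_array.sweep(roi)             # automated sweep through the ROI
```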
- transducer arrays may be used, e.g., linear arrays, convex arrays, or phased arrays.
- the number and arrangement of transducer elements included in the sensor array 112 may vary in different examples.
- the ultrasound sensor array 112 may include a 2D array of transducer elements, e.g., a matrix array probe.
- a 2D matrix array may be configured to scan electronically in both the elevational and azimuth dimensions (via phased array beamforming) for alternate 2D and 3D imaging.
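- Purely as background, electronic scanning of a phased (sub-)array relies on per-element transmit delays. The snippet below computes the textbook steering delays for a linear array; it illustrates phased-array beamforming in general and is not derived from the present disclosure.
```python
import numpy as np

def steering_delays(n_elements: int, pitch_m: float, theta_rad: float,
                    c_m_s: float = 1540.0) -> np.ndarray:
    """Per-element delays (s) that steer a linear array to angle theta from broadside."""
    x = (np.arange(n_elements) - (n_elements - 1) / 2) * pitch_m   # element positions
    delays = x * np.sin(theta_rad) / c_m_s
    return delays - delays.min()                 # shift so all delays are non-negative

# e.g. a 64-element array with 0.3 mm pitch steered to 20 degrees
delays = steering_delays(64, 0.3e-3, np.deg2rad(20.0))
```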
- imaging modalities implemented according to the disclosures herein can also include shear-wave and/or Doppler, for example.
- a variety of users may handle and operate the ultrasound data acquisition unit 110 to perform the methods described herein. In some examples, the user may be an inexperienced, novice ultrasound operator.
- the data acquisition unit 110 may also include a beamformer 120, e.g., comprising a microbeamformer or a combination of a microbeamformer and a main beamformer, coupled to the ultrasound sensor array 112.
- the beamformer 120 may control the transmission of ultrasonic energy, for example by forming ultrasonic pulses into focused beams.
- the beamformer 120 may also be configured to control the reception of ultrasound signals such that discernable image data may be produced and processed with the aid of other system components.
- the role of the beamformer 120 may vary in different ultrasound probe varieties.
- the beamformer 120 may comprise two separate beamformers: a transmit beamformer configured to receive and process pulsed sequences of ultrasonic energy for transmission into a subject, and a separate receive beamformer configured to amplify, delay and/or sum received ultrasound echo signals.
- the beamformer 120 may include a microbeamformer operating on groups of sensor elements for both transmit and receive beamforming, coupled to a main beamformer which operates on the group inputs and outputs for both transmit and receive beamforming, respectively.
- the transmit controller 121 may be communicatively, operatively and/or physically coupled with the sensor array 112, beamformer 120, signal processor 122, data processor 126, display processor 136, and/or user interface 138.
- the transmit controller 121 may be responsive to user input 146, such that the sensor array 112, via the transmit controller 121, may switch to 3D imaging mode upon receipt of a user input directing the switch.
- the transmit controller 121 may initiate the switch automatically, for example in response to an indication received from the data processor 126 that a ROI 142 has been identified in a 2D image 140 of the target feature 117.
- The signal processor 122, e.g., a computational module or circuit, may be communicatively, operatively and/or physically coupled with the sensor array 112, the beamformer 120 and/or the transmit controller 121.
- the signal processor 122 is included as an integral component of the data acquisition unit 110, but in other examples, the signal processor 122 may be a separate component.
- the signal processor 122 may be housed together with the sensor array 112 or it may be physically separate from but communicatively (e.g., via a wired or wireless connection) coupled thereto.
- the signal processor 122 may be configured to receive unfiltered and disorganized ultrasound data embodying the ultrasound echoes 118 received at the sensor array 112. From this data, the signal processor 122 may continuously generate a plurality of ultrasound image frames 124 as a user scans the region 116.
- the first neural network 128 may comprise a convolutional neural network trained to identify the presence and in some examples, orientation, of one or more anatomical features present in a 2D ultrasound image, e.g., a B-mode image. Based on this determination, the first neural network 128 may generate one or more user instructions for manipulating the sensor array 112 in the manner necessary to acquire an image of another anatomical feature, such as the face of an unborn baby, from a particular vantage point, which may be specified by a user. The instructions may be displayed sequentially in real time as the user adjusts the sensor array 112 in accordance with each instruction.
- the first neural network 128 may be configured to recognize anatomical features and generate new instructions accordingly as the user moves the sensor array 112, each new instruction based on the image data, e.g., embodying anatomical landmarks, present in each ultrasound image generated with movement of the sensor array 112.
- the user may confirm that an instruction has been implemented, e.g., via manual input 146 received at the user interface 138, thereby signaling to the system that the next instruction can be displayed.
- Instructions can include directional commands to move, tilt or rotate the sensor array 112 in specific directions.
- the first neural network 128 can be trained to align the transducer array 112 to a target image plane based on previously acquired images stored in the database 134.
- the previously acquired images can be stored in association with motion control parameters and/or labels or scores for qualifying or validating newly acquired images, for example as described in the non-prepublished applications PCT/EP2019/056072 ( WO2019175129A1 ) and PCT/EP2019/056108 ( WO2019175141A1 ).
- one or more processors e.g., data processor 126, can be configured to apply the first neural network 128 in a clinical setting to determine directional commands for the user to align the transducer array 112 to a patient in the manner necessary to acquire an image of a target feature.
- New motion control commands can be generated each time the transducer array 112 is repositioned.
- The first neural network 128 may be further configured to predict whether a candidate motion control configuration, e.g., controls for changing an imaging plane within a volumetric ultrasound image, used for repositioning the ultrasound transducer 112 will lead to an optimal imaging location for a particular target view given an input image.
- the processor may then output the directional commands in the format of instructions and/or visual indicators to the display 138 and a user may manually align the transducer array 112 to the patient based on the instructions.
- The first neural network 128 may comprise networks 128a, 128b and 128c, where network 128a comprises a predictive network trained to receive a currently acquired image and infer or deduce a motion vector with the highest probability of reaching a desired location for capturing a target image view.
- network 128b may comprise a fine-tuning network trained to verify whether a pair of images have the same quality level or select an image having a higher quality level from the pair.
- Network 128c may comprise a target network trained to determine whether a target image view has been captured.
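- One hypothetical way to combine the three sub-networks in a single guidance iteration is sketched below; the callables predict_motion, is_better and target_reached stand in for networks 128a, 128b and 128c respectively and are placeholder names, not interfaces defined by the present disclosure.
```python
def guidance_step(frame, prev_frame, predict_motion, is_better, target_reached):
    """Return the next probe instruction, or None when the target view is reached."""
    if target_reached(frame):                           # stand-in for network 128c
        return None
    if prev_frame is not None and not is_better(frame, prev_frame):   # network 128b
        return "Previous position gave a better image - move back and retry"
    return predict_motion(frame)                        # network 128a: motion vector
```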
- identification of the presence and orientation of one or more anatomical features present in an ultrasound image may not be performed by the first neural network.
- Such embodiments may involve implementation of one or more boundary detection or image segmentation techniques by one or more processors of the system 100, such as data processor 126.
- the second neural network 130 may comprise a convolutional neural network trained to define a ROI 142 within a 2D image of the target feature 117.
- the second neural network 130 may be configured to operate after the first neural network 128 has successfully guided the user to acquire an image 140 of the target feature 117.
- Defining the ROI 142 may involve placing and sizing a geometric shape, e.g., a box, within the image 140 of the target feature 117 such that all non-targeted features, e.g., placenta, legs, arms, neck, etc., are excluded from the ROI 142.
- the ROI 142 may not comprise a geometric shape, instead comprising a best-fit line positioned around the salient features of the target feature 117, e.g., nose, forehead, eyes, chin, ears, etc.
- 3D data may only be collected within the ROI, which may improve the speed and efficiency of the system 100, as well as the quality of the resulting 3D image 148.
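- If the ROI is derived from a binary segmentation mask of the target feature, a tight bounding box can be computed as in the following sketch, which assumes NumPy and is offered only as an illustration of ROI trimming.
```python
import numpy as np

def roi_from_mask(mask: np.ndarray, margin: int = 4):
    """Return (row0, row1, col0, col1) tightly bounding the non-zero mask pixels."""
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    if rows.size == 0:
        raise ValueError("target feature not found in the segmentation mask")
    r0, r1 = max(rows[0] - margin, 0), min(rows[-1] + margin, mask.shape[0] - 1)
    c0, c1 = max(cols[0] - margin, 0), min(cols[-1] + margin, mask.shape[1] - 1)
    return int(r0), int(r1), int(c0), int(c1)
```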
- the third neural network 132 may comprise a convolutional neural network trained to perform one or more post-acquisition processing steps necessary to generate a 3D panoramic image 148 of the ROI 142.
- Example post-acquisition processing steps may include applying an artificial light source to the image, such that the image includes shadows. The direction from which the artificial light source is applied may be adjusted automatically by the third neural network 132 or in response to user input 146.
- the third neural network 132 may also trim the image to remove one or more undesired features.
- the third neural network 132 may be configured to alter the imaging contrast, such that certain features are accentuated or dimmed.
- the modifications introduced by the third neural network 132 may be based, at least in part, on artistic qualities. For example, the lighting, contrast and/or trimming adjustments applied by the third neural network 132 may be designed to improve the aesthetic appearance of the 3D portrait generated by the system.
- one or more of the post-acquisition steps may be implemented without a neural network.
- one or more processors of the system 100 such as data processor 126 or display processor 136, may be configured to render the 3D image 148 of the ROI 142 in spatial relation to a lighting model such that lighting and shadowing regions of the anatomical features depicted within the ROI 142 are displayed according to a stored setting of the system 100, as disclosed for instance in US 2017/0119354 , which is incorporated by reference in its entirety herein.
- the stored setting used for the lighting model can be automatically implemented by the system 100, or the stored setting can be customizable or selectable from a plurality of setting options.
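- The sketch below illustrates two generic post-acquisition adjustments of this kind, a contrast (gamma) correction and simple Lambertian shading from a movable artificial light direction, applied to a depth-rendered surface. It is a generic image-processing example, not the rendering method of US 2017/0119354 or of the present disclosure.
```python
import numpy as np

def adjust_contrast(img: np.ndarray, gamma: float = 0.8) -> np.ndarray:
    """Normalize to [0, 1] and apply a gamma curve (gamma < 1 brightens mid-tones)."""
    img = (img - img.min()) / (np.ptp(img) + 1e-9)
    return img ** gamma

def lambert_shade(depth: np.ndarray, light_dir=(0.5, 0.5, 1.0)) -> np.ndarray:
    """Shade a depth-rendered surface with a simple directional (artificial) light."""
    gy, gx = np.gradient(depth.astype(float))            # surface slopes
    normals = np.dstack([-gx, -gy, np.ones_like(depth, dtype=float)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    return np.clip(normals @ light, 0.0, 1.0)             # darker where surface faces away
```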
- Each neural network 128, 130 and 132 may be implemented, at least in part, in a computer-readable medium comprising executable instructions, which when executed by a processor, e.g., data processor 126, may cause the processor to perform a machine-trained algorithm.
- the data processor 126 may be caused to perform a machine-trained algorithm to determine the presence and/or type of anatomical features contained in an image frame based on the acquired echo signals embodied therein.
- the data processor 126 may also be caused to perform a separate machine-trained algorithm to define the location, size and/or shape of a ROI within an image frame, such as a 2D image frame containing an image of the face of an unborn baby.
- Another machine-trained algorithm implemented by the data processor 126 may apply at least one image display setting configured to add shading and/or contrast to a 3D image.
- To train each neural network 128, 130 and 132, training sets which include multiple instances of input arrays and output classifications may be presented to the training algorithm(s) of each network (e.g., the AlexNet training algorithm, as described by Krizhevsky, A., Sutskever, I. and Hinton, G. E., "ImageNet Classification with Deep Convolutional Neural Networks," NIPS 2012, or its descendants).
- the first neural network 128 can be trained using a large clinical database of ultrasound images obtained during prenatal ultrasound scans. The images may include fetuses at various stages of development from various imaging angles and positions.
- a neural network training algorithm associated with the first neural network 128 can be presented with thousands or even millions of training data sets in order to train the neural network to identify anatomical features and determine the ultrasound probe adjustments necessary to acquire an image of a target feature based on the presence of the features identified.
- the number of ultrasound images used to train the first neural network 128 may range from about 50,000 to 200,000 or more.
- the number of images used to train the first neural network 128 may be increased if higher numbers of different anatomical features are to be recognized.
- the number of training images may differ for different anatomical features, and may depend on variability in the appearance of certain features. For example, certain features may appear more consistently at certain stages of prenatal development than other features. Training the first neural network 128 to identify features with moderate to high variability may require more training images.
- the first neural network 128 may be further trained with clinically validated instructions for manipulating an ultrasound probe, each instruction associated with an initial set of anatomical features present in a current ultrasound image and a target anatomical feature viewed from a particular vantage point. Accordingly, the first neural network 128 may be trained to recognize certain anatomical features present within a given ultrasound image and associate such features with one or more instructions necessary to acquire an image of a target feature from a vantage point.
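- A generic supervised training loop of the kind that could be used for such (image, validated-instruction) pairs is sketched below using PyTorch; the dataset, model and label encoding are assumptions made for illustration, not the training procedure actually used.
```python
from torch import nn, optim
from torch.utils.data import DataLoader

def train_guidance_net(model: nn.Module, dataset, epochs: int = 10, lr: float = 1e-4):
    """Supervised training on (2D frame, validated instruction label) pairs."""
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    criterion = nn.CrossEntropyLoss()            # instructions encoded as class labels
    optimizer = optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for frames, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(frames), labels)
            loss.backward()
            optimizer.step()
    return model
```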
- the second neural network 130 can also be trained using a large clinical database of ultrasound images obtained during prenatal ultrasound scans.
- Each of the training images may contain a defined ROI, which may include the boundaries of the face of an unborn baby.
- a neural network training algorithm associated with the second neural network 130 can be presented with thousands or even millions of training data sets in order to train the neural network to define a ROI within any given 2D ultrasound image.
- the number of ultrasound images used to train the second neural network 130 may range from about 50,000 to 200,000 or more.
- the number of training images may be increased for greater numbers of viewing options. For example, if the target feature can only be imaged from one direction, the number of training images may be less compared to embodiments in which the target feature can be imaged from multiple directions.
- the third neural network 132 can also be trained using a large clinical database of ultrasound images obtained during prenatal ultrasound scans.
- Each of the images may contain a 3D image of a target feature with one or more post-acquisition settings applied.
- each image may include an artificial light source, pixel contrast adjustments, and/or feature trimming modifications, just to name a few, so that the third neural network 132 can learn which adjustments to apply to different images to ensure that a recognizable image of the target feature, e.g., baby face, is generated in accordance with common artistic preferences.
- the number of ultrasound images used to train the third neural network 132 may range from about 50,000 to 200,000 or more.
- the number of training images may be increased to accommodate a greater number of post-acquisition modifications.
- FIG. 2 is a display of illustrative volumetric images of a target feature generated by one or more processors described herein.
- the illustrative volumetric images are presented as selectable options to a user on a user interface 238, along with a user instruction 244 to "Pick Desired 3D Baby Face View.”
- the target feature includes the face of an unborn baby.
- a user may select option A, B or C corresponding to a right profile view, center front view, and left profile view of the baby face, respectively.
- Selection of option B may trigger one or more processors operating on the system, which may be configured to implement one or more neural networks, coupled with the user interface 238 to generate user instructions for manipulating an ultrasound transducer to a target location in order to acquire and generate at least one image frame in accordance with the specific view embodied by option B.
- FIG. 3 is a display of a user instruction 344 for ultrasound probe manipulation overlaid on a live 2D image 340 of a fetus displayed on a user interface 338.
- User instructions for ultrasound probe manipulation such as user instruction 344, may be generated by data processor 126, implementing one or more neural networks and/or image segmentation techniques, and output to the user interface 138 for display.
- the user instruction 344 directs the user to "Please Translate As Directed."
- the exact language of the user instruction 344 may vary.
- the instruction may consist of one or more symbols, e.g., an arrow, only.
- The instruction may not be visually displayed at all, and may instead be conveyed as an audio cue.
- the user instruction 344 is generated based on fetal anatomical landmarks recognized by a neural network operating within the ultrasound system, and also based on the viewing option selected by the user, as shown in FIG. 2 .
- visual indicators embodying user instructions may include a graphical representation or view of the ultrasound probe, including one or more features of the probe, such as the handle, one or more adjustable knobs, and/or a switch.
- user instructions may include indicating a direction and/or an amount to dial the knobs, an instruction to turn the switch on or off, and/or a direction and/or a degree to rotate the probe.
- motion control commands can include controls for operating the probe (or other imaging device) to change an imaging plane within a volumetric ultrasound image in addition to changing a physical location of the probe.
- user instructions can include motion control commands embodying any measurable data related to a particular position or motion of an imaging device, e.g., ultrasound transducer, as further described in PCT/EP2019/056072 and PCT/EP2019/056108 .
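- One possible data structure for such a motion control command, covering both physical probe motion and an electronic imaging-plane change, is sketched below; the field names are illustrative assumptions and are not defined in the referenced applications.
```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MotionControlCommand:
    translation_mm: Tuple[float, float, float] = (0.0, 0.0, 0.0)   # slide along x, y, z
    rotation_deg: Tuple[float, float, float] = (0.0, 0.0, 0.0)     # tilt, rock, rotate
    plane_offset_deg: Optional[float] = None   # electronic imaging-plane change, if any
    text: str = ""                             # e.g. "Please Rotate As Directed"
```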
- FIG. 4 is a display of another user instruction 444 for ultrasound probe manipulation overlaid on a live 2D image 440 of a fetus displayed on a user interface 438.
- the user instruction 444 directs the user to "Please Rotate As Directed.”
- the user instruction 444 may be generated based on the fetal anatomical landmarks recognized in the image 440 by one or more processors, and also based on the viewing option selected by the user, as shown in FIG. 2 .
- FIG. 5 is a display of another user instruction 544 for ultrasound probe manipulation overlaid on a live 2D image 540 of a fetus displayed on a user interface 538.
- Here, the target feature 517, i.e., the baby's face, has been captured in the image 540 in the user-selected view.
- the user instruction 544 informs the user that "Optimal View Acquired” and thus directs the user to "Please Hold Still!"
- the user instruction 544 may be generated based on facial landmarks recognized in the image 540 by the neural network or other processing component, and also based on the viewing option selected by the user, as shown in FIG. 2 .
- FIG. 6 is a display showing automatic positioning of a refined ROI 642 along with a user notification 644 indicating the impending commencement of a 3D sweep.
- a separate neural network, in embodiments featuring multiple discrete networks, can be configured to identify and define the refined ROI 642 by distinguishing facial features from non-facial features, and trimming the preliminary ROI 541 (of FIG. 5) to include only the facial features. Additionally or alternatively, facial features may be identified and trimmed via image segmentation.
- the system may perform a 3D sweep of the target feature 617 within the refined ROI 642.
- the user interface 638 may prompt the user to confirm whether the sweep should proceed, or may alert the user that the sweep will be performed automatically, for example at the end of a countdown. Accordingly, the user notification 644 may inform the user that "Auto Positioning ROI. Sweep in 3... 2... 1"
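A minimal sketch of this ROI-trimming and countdown behavior is given below, assuming a binary face-segmentation mask is available and that the sweep is triggered by a caller-supplied function; both helpers are hypothetical rather than the disclosed implementation.

```python
# Hypothetical sketch (names and helpers are assumptions): trim the preliminary
# ROI to the bounding box of a face-segmentation mask, then announce a short
# countdown before the 3D sweep, as suggested by FIG. 6.
import time
import numpy as np


def refine_roi(face_mask: np.ndarray, margin_px: int = 5) -> tuple[int, int, int, int]:
    """face_mask: boolean 2D array marking pixels classified as facial features."""
    ys, xs = np.nonzero(face_mask)
    if ys.size == 0:
        raise ValueError("No facial features found in the mask")
    y0 = max(ys.min() - margin_px, 0)
    y1 = min(ys.max() + margin_px, face_mask.shape[0] - 1)
    x0 = max(xs.min() - margin_px, 0)
    x1 = min(xs.max() + margin_px, face_mask.shape[1] - 1)
    return x0, y0, x1, y1  # refined ROI as (left, top, right, bottom)


def countdown_and_sweep(start_sweep, seconds: int = 3) -> None:
    """start_sweep: caller-supplied function that triggers the volumetric acquisition."""
    print("Auto Positioning ROI.", end=" ")
    for s in range(seconds, 0, -1):
        print(f"Sweep in {s}...", end=" ", flush=True)
        time.sleep(1)
    start_sweep()
```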
- FIG. 7 is a display of an actual volumetric image of the target feature captured in the refined ROI 642 (of FIG. 6 ).
- the actual image comprises an acquired 3D image 748 of the refined ROI 642 presented on a user interface 738, along with a user notification 744 asking "Would you like another view?"
- the 3D image 748 includes shadows generated by an artificial light source, which may be applied by another processor and/or neural network operating on the system. If the user indicates that another view is desired, the user interface may display the initial viewing options, for example in the same or similar manner shown in FIG. 2 .
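The artificial lighting mentioned here could, for example, resemble simple Lambertian shading of the rendered surface. The sketch below illustrates that generic idea under the assumption that per-pixel surface normals are available from the rendering step; it is not the rendering pipeline disclosed in the patent.

```python
# Generic shading sketch, not the disclosed rendering pipeline: apply a simple
# Lambertian artificial light source to a rendered image, given per-pixel
# surface normals (assumed to be available from the volume rendering step).
import numpy as np


def apply_artificial_light(intensity: np.ndarray,
                           normals: np.ndarray,
                           light_dir=(0.3, -0.5, 0.8),
                           ambient: float = 0.25) -> np.ndarray:
    """intensity: (H, W) rendered image; normals: (H, W, 3) unit surface normals."""
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    # Per-pixel diffuse term: dot product of the surface normal with the light direction.
    diffuse = np.clip(np.tensordot(normals, light, axes=([2], [0])), 0.0, 1.0)
    shaded = intensity * (ambient + (1.0 - ambient) * diffuse)
    return np.clip(shaded, 0.0, intensity.max() if intensity.size else 1.0)
```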
- an underlying neural network, such as the first neural network 128 depicted in FIG. 1, may be configured to recognize which views will yield a quality image of the target feature.
- Such view(s) may be displayed on a user interface for selection by the user, thereby initiating the display of user instructions necessary to acquire the image, as shown in FIGS. 2-7 .
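One plausible way to surface only views expected to produce a quality image is to score candidate views with a trained classifier and filter on a threshold, as in the following sketch; `quality_model`, the threshold, and the function name are hypothetical assumptions.

```python
# Sketch under stated assumptions: a previously trained quality classifier
# (`quality_model`, hypothetical) scores each candidate view from the current
# 2D frames; only views above a score threshold are offered for selection.
from typing import Callable, Sequence
import numpy as np


def selectable_views(frames: Sequence[np.ndarray],
                     candidate_views: Sequence[str],
                     quality_model: Callable[[np.ndarray, str], float],
                     threshold: float = 0.7) -> list[str]:
    offered = []
    for view in candidate_views:
        # Best score across the available frames decides whether the view is offered.
        score = max(quality_model(frame, view) for frame in frames)
        if score >= threshold:
            offered.append(view)
    return offered
```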
- FIG. 8 is a flow diagram of a method of ultrasound imaging performed in accordance with principles of the present disclosure.
- the example method 800 shows steps that may be performed, in any sequence, by the systems and/or apparatuses described herein to acquire a 3D image of a target feature, such as the face of an unborn baby, even when the acquisition is performed by a novice user adhering to instructions generated by the system.
- the method 800 may be performed by an ultrasound imaging system, such as system 100, or other systems including, for example, a mobile system such as LUMIFY by Koninklijke Philips N.V. ("Philips"). Additional example systems may include SPARQ and/or EPIQ, also produced by Philips.
- the method 800 begins at block 802 by "acquiring echo signals responsive to ultrasound pulses transmitted toward a target region."
- at block 804, the method involves "presenting, to a user, one or more illustrative volumetric images of a target feature, each illustrative volumetric image corresponding to a particular view of the target image."
- at block 806, the method involves "receiving a user selection of one of the illustrative volumetric images."
- at block 808, the method involves "generating two-dimensional (2D) image frames from the acquired echo signals of the target region."
- at block 810, the method involves "identifying one or more anatomical landmarks corresponding to the target feature in the generated 2D image frames."
- the method involves "based on the anatomical landmarks and the particular view of the user-selected volumetric image, providing an instruction for manipulating the ultrasound transducer to a target location in order to generate at least one 2D image frame specific to the particular view of the user-selected volumetric image."
- at block 814, the method involves "causing the ultrasound transducer to acquire additional echo signals at the target location."
- at block 816, the method involves "generating, with the acquired additional echo signals, an actual volumetric image of the target feature and corresponding to the particular view of the user-selected volumetric image."
- in embodiments where the above-described systems and methods are implemented using a programmable device, such as a computer-based system or programmable logic, they can be implemented using any of various known or later-developed programming languages, such as "C", "C++", "FORTRAN", "Pascal", "VHDL" and the like.
- various storage media, such as magnetic computer disks, optical disks, and electronic memories, can be prepared to contain information that can direct a device, such as a computer, to implement the above-described systems and/or methods.
- the storage media can provide the information and programs to the device, thus enabling the device to perform functions of the systems and/or methods described herein.
- the computer could receive the information, appropriately configure itself, and perform the functions of the various systems and methods outlined in the diagrams and flowcharts above. That is, the computer could receive various portions of information from the disk relating to different elements of the above-described systems and/or methods, implement the individual systems and/or methods, and coordinate their functions.
- processors described herein can be implemented in hardware, software and firmware. Further, the various methods and parameters are included by way of example only and not in any limiting sense. In view of this disclosure, those of ordinary skill in the art can implement the present teachings in determining their own techniques and needed equipment to effect these techniques, while remaining within the scope of the invention.
- the functionality of one or more of the processors described herein may be incorporated into a smaller number of processing units or a single processing unit (e.g., a CPU) and may be implemented using application specific integrated circuits (ASICs) or general purpose processing circuits which are programmed responsive to executable instructions to perform the functions described herein.
- although the present system may have been described with particular reference to an ultrasound imaging system, it is also envisioned that the present system can be extended to other medical imaging systems where one or more images are obtained in a systematic manner. Accordingly, the present system may be used to obtain and/or record image information related to, but not limited to, renal, testicular, breast, ovarian, uterine, thyroid, hepatic, lung, musculoskeletal, splenic, cardiac, arterial and vascular systems, as well as other imaging applications related to ultrasound-guided interventions. Further, the present system may also include one or more programs which may be used with conventional imaging systems so that they may provide features and advantages of the present system.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Public Health (AREA)
- Medical Informatics (AREA)
- Biomedical Technology (AREA)
- General Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Pathology (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- Veterinary Medicine (AREA)
- Heart & Thoracic Surgery (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Epidemiology (AREA)
- Primary Health Care (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Gynecology & Obstetrics (AREA)
- Pregnancy & Childbirth (AREA)
- Vascular Medicine (AREA)
- Business, Economics & Management (AREA)
- General Business, Economics & Management (AREA)
- Physiology (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Ultrasonic Diagnosis Equipment (AREA)
Claims (15)
- An ultrasound imaging system (100), comprising: an ultrasound transducer (112) configured to acquire echo signals responsive to ultrasound pulses transmitted toward a target region; and one or more processors (122, 126, 136) in communication with the ultrasound transducer and configured to: present, to a user, one or more illustrative volumetric images of a target feature, each illustrative volumetric image corresponding to a particular view of the target image; receive a user selection of one of the illustrative volumetric images; generate two-dimensional (2D) image frames from the acquired echo signals of the target region; and identify one or more anatomical landmarks corresponding to the target feature in the generated 2D image frames; characterized in that the ultrasound imaging system is further configured to: based on the anatomical landmarks and the particular view of the user-selected volumetric image, provide an instruction for manipulating an ultrasound transducer to a target location in order to generate at least one 2D image frame specific to the particular view of the user-selected volumetric image; cause the ultrasound transducer to acquire additional echo signals at the target location; and generate, with the acquired additional echo signals, an actual volumetric image of the target feature and corresponding to the particular view of the user-selected volumetric image.
- The ultrasound imaging system of claim 1, wherein the one or more processors are configured to identify the one or more anatomical landmarks via image segmentation, or wherein the one or more processors are configured to identify the one or more anatomical landmarks via implementation of a neural network trained to recognize the anatomical landmarks.
- The ultrasound imaging system of claim 1, wherein the one or more processors are further configured to apply an artificial light source to the actual volumetric image in accordance with the particular view.
- The ultrasound imaging system of claim 1, wherein the one or more processors are configured to generate the instruction for manipulating the ultrasound transducer by inputting the 2D image frames into an artificial neural network trained to compare the 2D image frames to stored image frames embodying the target feature.
- The ultrasound imaging system of claim 1, wherein the one or more processors are further configured to define a region of interest within the 2D image frame specific to the particular view of the user-selected volumetric image.
- The ultrasound imaging system of claim 1, further comprising a controller configured to switch the ultrasound transducer from a 2D imaging mode to a volumetric imaging mode.
- The ultrasound imaging system of claim 6, wherein the controller is configured to automatically switch the ultrasound transducer from the 2D imaging mode to the volumetric imaging mode upon receiving an indication from the one or more processors that the region of interest has been defined.
- The ultrasound imaging system of claim 1, further comprising a user interface communicatively coupled with the one or more processors and configured to display the instruction for manipulating the ultrasound transducer to the target location.
- A method (800) of ultrasound imaging, the method comprising: acquiring (802) echo signals responsive to ultrasound pulses transmitted toward a target region; presenting (804), to a user, one or more illustrative volumetric images of a target feature, each illustrative volumetric image corresponding to a particular view of the target image; receiving (806) a user selection of one of the illustrative volumetric images; generating (808) two-dimensional (2D) image frames from the acquired echo signals of the target region; and identifying (810) one or more anatomical landmarks corresponding to the target feature in the generated 2D image frames; characterized in that the method of ultrasound imaging further comprises: based on the anatomical landmarks and the particular view of the user-selected volumetric image, providing an instruction for manipulating an ultrasound transducer to a target location in order to generate at least one 2D image frame specific to the particular view of the user-selected volumetric image; causing (814) the ultrasound transducer to acquire additional echo signals at the target location; and generating (816), with the acquired additional echo signals, an actual volumetric image of the target feature and corresponding to the particular view of the user-selected volumetric image.
- The method of claim 9, further comprising applying an artificial light source, an image contrast adjustment, or both, to the actual volumetric image.
- The method of claim 9, further comprising displaying the instruction for manipulating the ultrasound transducer.
- The method of claim 9, further comprising defining a region of interest within the 2D image frame specific to the particular view of the user-selected volumetric image.
- The method of claim 9, further comprising: identifying additional anatomical landmarks of the target feature upon manipulation of the ultrasound transducer; and generating additional instructions for manipulating the ultrasound transducer based on the additional anatomical landmarks identified upon manipulation of the ultrasound transducer.
- The method of claim 9, further comprising switching the ultrasound transducer from the 2D imaging mode to a volumetric imaging mode upon receiving an indication that a region of interest has been identified.
- A computer program product comprising executable instructions which, when executed, cause a processor of a medical imaging system to perform any of the methods of claims 9-14.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201962855453P | 2019-05-31 | 2019-05-31 | |
PCT/EP2020/064714 WO2020239842A1 (en) | 2019-05-31 | 2020-05-27 | Guided ultrasound imaging |
Publications (2)
Publication Number | Publication Date |
---|---|
EP3975865A1 EP3975865A1 (de) | 2022-04-06 |
EP3975865B1 true EP3975865B1 (de) | 2023-07-12 |
Family
ID=70861500
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP20728751.7A Active EP3975865B1 (de) | 2019-05-31 | 2020-05-27 | Geführte ultraschallbildgebung |
Country Status (5)
Country | Link |
---|---|
US (1) | US12048589B2 (de) |
EP (1) | EP3975865B1 (de) |
JP (1) | JP7442548B2 (de) |
CN (1) | CN113905670A (de) |
WO (1) | WO2020239842A1 (de) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021063807A1 (en) * | 2019-09-30 | 2021-04-08 | Koninklijke Philips N.V. | Recording ultrasound images |
WO2021099278A1 (en) * | 2019-11-21 | 2021-05-27 | Koninklijke Philips N.V. | Point-of-care ultrasound (pocus) scan assistance and associated devices, systems, and methods |
US11523801B2 (en) * | 2020-05-11 | 2022-12-13 | EchoNous, Inc. | Automatically identifying anatomical structures in medical images in a manner that is sensitive to the particular view in which each image is captured |
WO2024041917A1 (en) * | 2022-08-23 | 2024-02-29 | Koninklijke Philips N.V. | Automated neural network selection for ultrasound |
EP4331499A1 (de) * | 2022-08-29 | 2024-03-06 | Koninklijke Philips N.V. | Ultraschallbildgebungssysteme und -verfahren |
Family Cites Families (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1915738B1 (de) * | 2005-08-09 | 2018-09-26 | Koninklijke Philips N.V. | System und verfahren zum selektiven mischen von zweidimensionalen röntgenbildern und dreidimensionalen ultraschallbildern |
US8858436B2 (en) * | 2008-11-12 | 2014-10-14 | Sonosite, Inc. | Systems and methods to identify interventional instruments |
JP5462598B2 (ja) * | 2009-11-18 | 2014-04-02 | 日立アロカメディカル株式会社 | 超音波診断システム |
US20110255762A1 (en) * | 2010-04-15 | 2011-10-20 | Harald Deischinger | Method and system for determining a region of interest in ultrasound data |
US8891881B2 (en) | 2012-01-25 | 2014-11-18 | General Electric Company | System and method for identifying an optimal image frame for ultrasound imaging |
US9943286B2 (en) * | 2012-06-04 | 2018-04-17 | Tel Hashomer Medical Research Infrastructure And Services Ltd. | Ultrasonographic images processing |
EP3043864A4 (de) * | 2013-09-11 | 2017-07-26 | The Board of Trustees of The Leland Stanford Junior University | Verfahren und systeme zur strahlenintensitätsmodulation zur aktivierung schneller radiotherapien |
US11540718B2 (en) * | 2013-12-09 | 2023-01-03 | Koninklijke Philips N.V. | Imaging view steering using model-based segmentation |
RU2689172C2 (ru) * | 2014-05-09 | 2019-05-24 | Конинклейке Филипс Н.В. | Системы визуализации и способы для расположения трехмерного ультразвукового объема в требуемой ориентации |
CN105433981B (zh) | 2015-12-07 | 2018-08-07 | 深圳开立生物医疗科技股份有限公司 | 一种超声成像方法、装置及其超声设备 |
US11712221B2 (en) * | 2016-06-20 | 2023-08-01 | Bfly Operations, Inc. | Universal ultrasound device and related apparatus and methods |
CN110087555B (zh) | 2017-05-12 | 2022-10-25 | 深圳迈瑞生物医疗电子股份有限公司 | 一种超声设备及其三维超声图像的显示变换方法、系统 |
EP3435324A1 (de) * | 2017-07-27 | 2019-01-30 | Koninklijke Philips N.V. | Verarbeitung eines fötalen ultraschallbildes |
KR102063294B1 (ko) * | 2017-10-11 | 2020-01-07 | 알레시오 주식회사 | 초음파 영상을 실사 영상으로 변환하는 방법 및 그 장치 |
WO2019086365A1 (en) | 2017-11-02 | 2019-05-09 | Koninklijke Philips N.V. | Intelligent ultrasound system for detecting image artefacts |
JP7407725B2 (ja) | 2018-03-12 | 2024-01-04 | コーニンクレッカ フィリップス エヌ ヴェ | ニューラルネットワークのための超音波撮像平面整列ガイダンス並びに関連するデバイス、システム、及び方法 |
JP7401447B2 (ja) | 2018-03-12 | 2023-12-19 | コーニンクレッカ フィリップス エヌ ヴェ | ニューラルネットワークを使用した超音波撮像平面整列並びに関連するデバイス、システム、及び方法 |
US11638569B2 (en) * | 2018-06-08 | 2023-05-02 | Rutgers, The State University Of New Jersey | Computer vision systems and methods for real-time needle detection, enhancement and localization in ultrasound |
US11426142B2 (en) * | 2018-08-13 | 2022-08-30 | Rutgers, The State University Of New Jersey | Computer vision systems and methods for real-time localization of needles in ultrasound images |
EP3838163A1 (de) * | 2019-12-17 | 2021-06-23 | Koninklijke Philips N.V. | Verfahren und system zur verbesserten ultraschallebenenerfassung |
- 2020
- 2020-05-27 WO PCT/EP2020/064714 patent/WO2020239842A1/en unknown
- 2020-05-27 JP JP2021570306A patent/JP7442548B2/ja active Active
- 2020-05-27 CN CN202080040358.0A patent/CN113905670A/zh active Pending
- 2020-05-27 US US17/613,864 patent/US12048589B2/en active Active
- 2020-05-27 EP EP20728751.7A patent/EP3975865B1/de active Active
Also Published As
Publication number | Publication date |
---|---|
EP3975865A1 (de) | 2022-04-06 |
US12048589B2 (en) | 2024-07-30 |
JP7442548B2 (ja) | 2024-03-04 |
US20220211348A1 (en) | 2022-07-07 |
WO2020239842A1 (en) | 2020-12-03 |
JP2022534253A (ja) | 2022-07-28 |
CN113905670A (zh) | 2022-01-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3975865B1 (de) | Geführte ultraschallbildgebung | |
US11992369B2 (en) | Intelligent ultrasound system for detecting image artefacts | |
CN111200973B (zh) | 基于智能超声的生育力监测 | |
US11521363B2 (en) | Ultrasonic device, and method and system for transforming display of three-dimensional ultrasonic image thereof | |
US20240074675A1 (en) | Adaptive ultrasound scanning | |
US11986345B2 (en) | Representation of a target during aiming of an ultrasound probe | |
JP7358457B2 (ja) | 超音波画像による脂肪層の識別 | |
CN113795198B (zh) | 用于控制体积速率的系统和方法 | |
KR102419310B1 (ko) | 초음파 이미징 데이터로부터 태아 이미지들을 프로세싱 및 디스플레이하기 위한 방법들 및 시스템들 | |
CN112220497A (zh) | 一种超声成像显示方法及相关装置 | |
JP6501796B2 (ja) | 超音波画像のモデル・ベースのセグメンテーションのための取得方位依存特徴 | |
CN111403007A (zh) | 一种超声成像的优化方法、超声成像系统和计算机可读存储介质 | |
RU2782874C2 (ru) | Интеллектуальная ультразвуковая система для обнаружения артефактов изображений |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: UNKNOWN |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20220103 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
DAV | Request for validation of the european patent (deleted) | ||
DAX | Request for extension of the european patent (deleted) | ||
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R079 Ref document number: 602020013716 Country of ref document: DE Free format text: PREVIOUS MAIN CLASS: A61B0008080000 Ipc: G06T0007730000 Ref country code: DE Ref legal event code: R079 Free format text: PREVIOUS MAIN CLASS: A61B0008080000 Ipc: G06T0007730000 |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: G16H 50/20 20180101ALI20230102BHEP Ipc: G06T 7/00 19950101ALI20230102BHEP Ipc: A61B 8/00 19850101ALI20230102BHEP Ipc: A61B 5/107 19900101ALI20230102BHEP Ipc: A61B 8/14 19850101ALI20230102BHEP Ipc: A61B 8/08 19850101ALI20230102BHEP Ipc: G16H 40/63 20180101ALI20230102BHEP Ipc: G16H 30/40 20180101ALI20230102BHEP Ipc: G16H 30/20 20180101ALI20230102BHEP Ipc: G06T 7/73 20170101AFI20230102BHEP |
|
INTG | Intention to grant announced |
Effective date: 20230206 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602020013716 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R084 Ref document number: 602020013716 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: 746 Effective date: 20230904 |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG9D |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: MP Effective date: 20230712 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 1587937 Country of ref document: AT Kind code of ref document: T Effective date: 20230712 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230712 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231013 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230712 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231112 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230712 Ref country code: RS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230712 Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231113 Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231012 Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230712 Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230712 Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231112 Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230712 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231013 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230712 Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230712 Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230712 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230712 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602020013716 Country of ref document: DE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230712 Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230712 Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230712 Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230712 Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230712 Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230712 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230712 |
|
26N | No opposition filed |
Effective date: 20240415 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20240521 Year of fee payment: 5 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20240529 Year of fee payment: 5 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230712 |