WO2022069208A1 - Patient-specific region of interest identification based on an ultrasound image, and associated devices, systems, and methods


Info

Publication number
WO2022069208A1
Authority
WO
WIPO (PCT)
Prior art keywords
patient
interest
ultrasound
region
neural network
Application number
PCT/EP2021/075162
Other languages
English (en)
Inventor
Earl Canfield
Robert Gustav TRAHMS
Original Assignee
Koninklijke Philips N.V.
Application filed by Koninklijke Philips N.V.
Publication of WO2022069208A1

Classifications

    • G06T 7/11: Image data processing; Image analysis; Segmentation; Region-based segmentation
    • G06T 7/194: Image data processing; Image analysis; Segmentation involving foreground-background segmentation
    • G06V 10/25: Image or video recognition or understanding; Image preprocessing; Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/82: Image or video recognition or understanding using pattern recognition or machine learning; using neural networks
    • G06T 2207/10016: Indexing scheme for image analysis or image enhancement; Image acquisition modality; Video; Image sequence
    • G06T 2207/10132: Indexing scheme for image analysis or image enhancement; Image acquisition modality; Ultrasound image
    • G06T 2207/20081: Indexing scheme for image analysis or image enhancement; Special algorithmic details; Training; Learning
    • G06T 2207/20084: Indexing scheme for image analysis or image enhancement; Special algorithmic details; Artificial neural networks [ANN]
    • G06T 2207/30048: Indexing scheme for image analysis or image enhancement; Subject of image; Biomedical image processing; Heart; Cardiac
    • G06T 2207/30056: Indexing scheme for image analysis or image enhancement; Subject of image; Biomedical image processing; Liver; Hepatic
    • G06T 2207/30084: Indexing scheme for image analysis or image enhancement; Subject of image; Biomedical image processing; Kidney; Renal
    • G06V 2201/03: Indexing scheme relating to image or video recognition or understanding; Recognition of patterns in medical or anatomical images

Definitions

  • the present disclosure relates generally to systems for ultrasound imaging.
  • patient-specific deep learning networks can be trained and implemented in point-of-care settings to identify regions of interest within a patient’s anatomy and display the regions of interest to a user during ultrasound imaging examinations.
  • a medical ultrasound system may include an ultrasound transducer probe coupled to a processing system and one or more display devices.
  • the ultrasound transducer probe may include an array of ultrasound transducer elements that transmit acoustic waves into a patient’s body and record acoustic waves reflected from anatomical structures within the patient’s body, which may include tissues, blood vessels, internal organs, tumors, cysts, or other anatomical features.
  • the transmission of the acoustic waves and the reception of reflected acoustic waves or echo responses can be performed by the same set of ultrasound transducer elements or different sets of ultrasound transducer elements.
  • the processing system can apply beamforming, signal processing, and/or image processing to the received echo responses to create an image of the patient’s internal anatomical structures.
  • the image may then be presented to a user for analysis.
  • Ultrasound imaging is a safe, useful, and in some applications, non-invasive tool for diagnostic examination, interventions, or treatment. Ultrasound imaging can provide insights into an anatomy before a surgery or other major procedure is performed as well as monitor and track changes to a particular anatomical feature over time.
  • the rapid growth in point-of-care ultrasound has made ultrasound available in many point-of-care settings, such as emergency departments, critical care units, and other specialized care facilities.
  • use of point-of-care ultrasound can be challenging for users, specifically for novice users, in terms of structure identification, rapid assessment of the condition, or tracking differences in anatomical structures or features of the same patient over time between ultrasound examinations.
  • Embodiments of the present disclosure are directed to systems, devices, and methods for deep learning networks used for ultrasound applications.
  • the ultrasound imaging system described herein identifies a region of interest in a patient’s anatomy during a first ultrasound examination, labels the region of interest, and trains a deep learning network to identify the same region of interest during subsequent ultrasound examinations.
  • the same region of interest may be identified, labelled, and displayed to a user in real time in a point-of-care setting.
  • Regions of interest may include such anatomical features as tumors, cysts, blood clots, blockages, or other features.
  • the system may utilize convolutional neural networks to learn and identify regions of interest.
  • the system may intentionally overfit regions of interest identified and use anatomical features surrounding a region of interest to identify a patient’s specific anatomy and specific regions of interest within that anatomy.
  • the system may utilize high speed processors to capture ultrasound image frames at a high frame rate and process ultrasound image frames for display to a user in real time.
  • the system may additionally calculate a number of metrics associated with an identified region of interest including volume of a region of interest, blood flow, or any number of other metrics.
  • the system may assist a user to track changes to a region of interest over several ultrasound examinations by displaying image frames or ultrasound videos from different examinations simultaneously.
  • the system may generate and store several anatomy-specific deep learning network parameters associated with one patient and may store patient-specific deep learning network parameters corresponding to many patients.
  • the ultrasound imaging system described herein increases a user’s ability to diagnose, monitor, and treat various medical conditions by more easily and reliably identifying, measuring, and comparing important anatomical features in a patient’s anatomy.
  • an ultrasound imaging system includes a processor configured for communication with an ultrasound probe, the processor configured to: receive a first ultrasound image frame representative of a first patient during a first ultrasound examination of the first patient; receive first neural network parameters, wherein the first neural network parameters are associated only with the first patient; identify, using a neural network implemented with the first neural network parameters, a first region of interest within the first ultrasound image frame; and output, to a display in communication with the processor, the first ultrasound image frame and a first graphical representation of the first region of interest.
  • the processor is configured to: receive a second ultrasound image frame representative of a second patient during a second ultrasound examination of the second patient; receive second neural network parameters, wherein the second neural network parameters are associated only with the second patient; identify, using the neural network implemented with the second neural network parameters, a second region of interest within the second ultrasound image frame; and output, to the display in communication with the processor, the second ultrasound image frame and a second graphical representation of the second region of interest.
  • the first neural network parameters are associated with only a first anatomy of the first patient, and the first region of interest comprises the first anatomy of the first patient.
  • the processor is configured to: receive a second ultrasound image frame representative of the first patient; receive second neural network parameters, wherein the second neural network parameters are associated only with a second anatomy of the first patient; identify, using the neural network implemented with the second neural network parameters, a second region of interest within the second ultrasound image frame, wherein the second region of interest comprises the second anatomy of the first patient; and output, to the display in communication with the processor, the second ultrasound image frame and a second graphical representation of the second region of interest.
  • the first neural network parameters are determined based on training during a previous ultrasound examination of the first patient.
  • the first neural network parameters are intentionally overfit to the first patient.
  • the first neural network parameters are determined based on training in a point-of-care setting.
  • the processor is configured to retrieve a first neural network parameter file comprising the first neural network parameters from a memory in communication with the processor, and the first neural network parameter file is associated with only the first patient.
  • the processor is configured to retrieve the first neural network parameter file when the processor retrieves patient data associated with only the first patient to initiate the first ultrasound examination.
  • the processor is configured to modify the first neural network parameters during a training based on the first ultrasound examination.
  • the processor is configured to: determine a confidence score representative of the processor identifying the first region of interest; and output the confidence score to the display.
  • the graphical representation of the first region of interest comprises a graphical overlay on the first ultrasound image.
  • the processor is configured to: identify the first region of interest in a plurality of ultrasound image frames; and output the plurality of ultrasound image frames to the display, wherein the graphical representation of the first region of interest moves to track the first region of interest within the plurality of ultrasound image frames.
  • the processor is configured to: identify the first region of interest, using the neural network implemented with the first neural network parameters, during a plurality of ultrasound examinations of the first patient; store, in a memory in communication with the processor, a respective ultrasound image frame comprising the first region of interest for each of the plurality of ultrasound examinations; and output, to the display, a screen display simultaneously displaying each of the respective ultrasound image frames.
  • the neural network comprises a convolutional neural network (CNN) and the first neural network parameters comprise CNN parameters.
  • the processor comprises a graphics processing unit (GPU).
  • an ultrasound imaging method includes receiving, at a processor in communication with an ultrasound probe, an ultrasound image frame representative of a patient during an ultrasound examination of the patient; receiving neural network parameters at the processor, wherein the neural network parameters are associated only with the patient; identifying, by the processor, a region of interest within the ultrasound image frame using a neural network implemented with the neural network parameters; and outputting, to a display in communication with the processor, the ultrasound image frame and a graphical representation of the region of interest.
  • a medical imaging system includes: a processor configured for communication with a medical imaging device, the processor configured to: receive a user input related to a region of interest within a first set of medical images; train a neural network on the first set of medical images using the user input, thereby generating a patient-specific neural network; obtain a second set of medical images from the first patient; apply the patient-specific neural network to the second set of medical images to identify the region of interest in the second set of medical images; and provide, based on the application, a graphical representation related to the region of interest in the second set of medical images.
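  • Illustrative sketch (not part of the original disclosure): the claimed flow of loading network parameters associated only with one patient, identifying the region of interest in each incoming frame, and outputting the frame with a graphical representation and confidence score could look roughly like the following Python/PyTorch code. The class RoiNetwork, the parameter file path, and the display object are hypothetical placeholders; the actual architecture is left unspecified by the claims.

```python
import torch
import torch.nn as nn

class RoiNetwork(nn.Module):
    """Hypothetical CNN: predicts a normalized ROI bounding box (x, y, w, h)
    and a confidence score for a single grayscale ultrasound image frame."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.box_head = nn.Linear(32, 4)    # ROI location and size
        self.conf_head = nn.Linear(32, 1)   # identification confidence

    def forward(self, x):
        f = self.features(x)
        return torch.sigmoid(self.box_head(f)), torch.sigmoid(self.conf_head(f))

def run_patient_specific_exam(model, frames, patient_param_file, display):
    """Load parameters associated only with this patient, identify the ROI in
    each incoming frame, and output the frame with an overlay and confidence."""
    model.load_state_dict(torch.load(patient_param_file))  # patient-only weights
    model.eval()
    with torch.no_grad():
        for frame in frames:                                # live frame stream
            x = torch.as_tensor(frame, dtype=torch.float32)[None, None]
            box, conf = model(x)
            # The graphical representation (box/flag) tracks the ROI from frame
            # to frame; the confidence score is displayed alongside it.
            display.show(frame, roi=box.squeeze().tolist(), score=float(conf))
```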
  • FIG. 1 is a schematic diagram of an ultrasound imaging system, according to aspects of the present disclosure.
  • Fig. 2 is a schematic diagram of a plurality of patient-specific ultrasound image frames, video clips, and deep learning networks stored in a memory, according to aspects of the present disclosure.
  • FIG. 3 is a flow diagram of an ultrasound imaging method of a patient’s ultrasound imaging examination, according to aspects of the present disclosure.
  • FIG. 4 is a diagrammatic view of a graphical user interface for an ultrasound imaging system identifying a region of interest, according to aspects of the present disclosure.
  • FIG. 5 is a flow diagram of a method of training a patient-specific deep learning network to identify a predetermined region of interest, according to aspects of the present disclosure.
  • FIG. 6 is a schematic diagram of a method of training a patient-specific deep learning network to identify a predetermined region of interest, according to aspects of the present disclosure.
  • FIG. 7 is a flow diagram of a method of identifying a region of interest with a previously trained patient-specific deep learning network, according to aspects of the present disclosure.
  • FIG. 8 is a schematic diagram of a method of identifying and displaying to a user a region of interest with a previously trained patient-specific deep learning network, according to aspects of the present disclosure.
  • Fig. 9 is a diagrammatic view of a graphical user interface for an ultrasound imaging system identifying a region of interest, according to aspects of the present disclosure.
  • Fig. 10 is a diagrammatic view of a graphical user interface for an ultrasound imaging system displaying to a user a plurality of video clips of a region of interest, according to aspects of the present disclosure.
  • Fig. 11 is a schematic diagram of a processor circuit, according to aspects of the present disclosure.

DETAILED DESCRIPTION
  • the patient-specific and/or anatomy-specific deep learning network described herein can be advantageously utilized for a given patient and/or a particular anatomy for a given patient.
  • Conventionally, a deep learning network that is overfit to a specific patient or the patient’s specific anatomy is considered a poor deep learning network because it has no predictive use outside of that patient.
  • Typically, overfitting is avoided and deep learning networks are trained and implemented with many different patients.
  • Training of deep learning networks is also typically done outside of the point-of-care environment (e.g., during hardware and/or software development by a manufacturer).
  • the present disclosure intentionally and advantageously overfits the deep learning network to the given patient and/or a particular anatomy for a given patient in a point-of-care environment.
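  • As a hedged illustration of intentional overfitting (not the disclosure's own code), the sketch below trains a patient-specific network, such as the hypothetical RoiNetwork above, to memorize a single patient's labeled clip, deliberately omitting the cross-patient data, augmentation, and regularization normally used to prevent overfitting.

```python
import torch
import torch.nn as nn

def overfit_to_patient(model, clip_frames, roi_labels, epochs=200, lr=1e-3):
    """Deliberately overfit a patient-specific network: train on a single
    patient's clip only, with no dropout, weight decay, augmentation, or
    early stopping, so the network memorizes this patient's anatomy."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.SmoothL1Loss()
    model.train()
    for _ in range(epochs):
        for frame, target_box in zip(clip_frames, roi_labels):
            x = torch.as_tensor(frame, dtype=torch.float32)[None, None]
            t = torch.as_tensor(target_box, dtype=torch.float32)[None]
            pred_box, _ = model(x)
            loss = criterion(pred_box, t)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model.state_dict()   # parameters associated only with this patient
```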
  • Fig. 1 is a schematic diagram of an ultrasound imaging system 100, according to aspects of the present disclosure.
  • the system 100 is used for scanning an area or volume of a patient’s body.
  • the system 100 includes an ultrasound imaging probe 110 in communication with a host 130 over a communication interface or link 120.
  • the probe 110 may include a transducer array 112, a beamformer 114, a processor circuit 116, and a communication interface 118.
  • the host 130 may include a display 132, a processor circuit 134, a communication interface 136, and a memory 138 storing patient files 140.
  • the probe 110 is an external ultrasound imaging device including a housing configured for handheld operation by a user.
  • the transducer array 112 can be configured to obtain ultrasound data while the user grasps the housing of the probe 110 such that the transducer array 112 is positioned adjacent to or in contact with a patient’s skin.
  • the probe 110 is configured to obtain ultrasound data of anatomy within the patient’s body while the probe 110 is positioned outside of the patient’s body.
  • the probe 110 can be an external ultrasound probe and/or a transthoracic echocardiography (TTE) probe.
  • the probe 110 can be an internal ultrasound imaging device and may comprise a housing configured to be positioned within a lumen of a patient’s body, including the patient’s coronary vasculature, peripheral vasculature, esophagus, heart chamber, or other body lumen or body cavity.
  • the probe 110 may be an intravascular ultrasound (IVUS) imaging catheter or an intracardiac echocardiography (ICE) catheter.
  • probe 110 may be a transesophageal echocardiography (TEE) probe. Probe 110 may be of any suitable form for any suitable ultrasound imaging application including both external and internal ultrasound imaging.
  • aspects of the present disclosure can be implemented with medical images of patients obtained using any suitable medical imaging device and/or modality.
  • medical images and medical imaging devices include x-ray images (angiographic image, fluoroscopic images, images with or without contrast) obtained by an x-ray imaging device, computed tomography (CT) images obtained by a CT imaging device, positron emission tomography-computed tomography (PET-CT) images obtained by a PET-CT imaging device, magnetic resonance images (MRI) obtained by an MRI device, single-photon emission computed tomography (SPECT) images obtained by a SPECT imaging device, optical coherence tomography (OCT) images obtained by an OCT imaging device, and intravascular photoacoustic (IVPA) images obtained by an IVPA imaging device.
  • the medical imaging device can obtain the medical images while positioned outside the patient body, spaced from the patient body, adjacent to the patient body, in contact with the patient body, and/or inside the patient body.
  • the transducer array 112 emits ultrasound signals towards an anatomical object 105 of a patient and receives echo signals reflected from the object 105 back to the transducer array 112.
  • the ultrasound transducer array 112 can include any suitable number of acoustic elements, including one or more acoustic elements and/or a plurality of acoustic elements. In some instances, the transducer array 112 includes a single acoustic element.
  • the transducer array 112 may include an array of acoustic elements with any number of acoustic elements in any suitable configuration.
  • the transducer array 112 can include between 1 acoustic element and 10000 acoustic elements, including values such as 2 acoustic elements, 4 acoustic elements, 36 acoustic elements, 64 acoustic elements, 128 acoustic elements, 500 acoustic elements, 812 acoustic elements, 1000 acoustic elements, 3000 acoustic elements, 8000 acoustic elements, and/or other values both larger and smaller.
  • the transducer array 112 may include an array of acoustic elements with any number of acoustic elements in any suitable configuration, such as a linear array, a planar array, a curved array, a curvilinear array, a circumferential array, an annular array, a phased array, a matrix array, a one-dimensional (1D) array, a 1.x dimensional array (e.g., a 1.5D array), or a two-dimensional (2D) array.
  • the array of acoustic elements (e.g., one or more rows, one or more columns, and/or one or more orientations) can be controlled and activated uniformly or independently.
  • the transducer array 112 can be configured to obtain one-dimensional, two-dimensional, and/or three-dimensional images of a patient’s anatomy.
  • the transducer array 112 may include a piezoelectric micromachined ultrasound transducer (PMUT), capacitive micromachined ultrasonic transducer (CMUT), single crystal, lead zirconate titanate (PZT), PZT composite, other suitable transducer types, and/or combinations thereof.
  • the object 105 may include any anatomy or anatomical feature, such as blood vessels, nerve fibers, airways, mitral leaflets, cardiac structure, abdominal tissue structure, appendix, large intestine (or colon), small intestine, kidney, liver, and/or any other anatomy of a patient.
  • the object 105 may include at least a portion of a patient’s large intestine, small intestine, cecum pouch, appendix, terminal ileum, liver, epigastrium, and/or psoas muscle.
  • the present disclosure can be implemented in the context of any number of anatomical locations and tissue types, including without limitation, organs including the liver, heart, kidneys, gall bladder, pancreas, lungs; ducts; intestines; nervous system structures including the brain, dural sac, spinal cord and peripheral nerves; the urinary tract; as well as valves within the blood vessels, blood, chambers or other parts of the heart, abdominal organs, and/or other systems of the body.
  • the object 105 may include malignancies such as tumors, cysts, lesions, hemorrhages, or blood pools within any part of human anatomy.
  • the anatomy may be a blood vessel, such as an artery or a vein of a patient’s vascular system, including cardiac vasculature, peripheral vasculature, neural vasculature, renal vasculature, and/or any other suitable lumen inside the body.
  • the present disclosure can be implemented in the context of man-made structures such as, but without limitation, heart valves, stents, shunts, filters, implants and other devices.
  • the beamformer 114 is coupled to the transducer array 112.
  • the beamformer 114 controls the transducer array 112, for example, for transmission of the ultrasound signals and reception of the ultrasound echo signals.
  • the beamformer 114 may apply a time-delay to signals sent to individual acoustic transducers within the transducer array 112 such that an acoustic signal is steered in any suitable direction propagating away from the probe 110 (an illustrative delay calculation is sketched below).
  • the beamformer 114 may further provide image signals to the processor circuit 116 based on the response of the received ultrasound echo signals.
  • the beamformer 114 may include multiple stages of beamforming. The beamforming can reduce the number of signal lines for coupling to the processor circuit 116.
  • the transducer array 112 in combination with the beamformer 114 may be referred to as an ultrasound imaging component.
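  • For illustration only, the per-element transmit delays behind this kind of beam steering follow the standard linear-array relation delay_n = n * pitch * sin(theta) / c; the sketch below assumes a uniform linear array and an average tissue sound speed of 1540 m/s, neither of which is specified by the disclosure.

```python
import numpy as np

def steering_delays(num_elements, pitch_m, theta_deg, c_m_s=1540.0):
    """Per-element transmit delays (seconds) steering a linear array's beam
    by theta degrees; c_m_s is an assumed average speed of sound in tissue."""
    n = np.arange(num_elements)                               # element index
    delays = n * pitch_m * np.sin(np.radians(theta_deg)) / c_m_s
    return delays - delays.min()                              # keep delays >= 0

# Example: 128 elements at 0.3 mm pitch steered by 15 degrees.
print(steering_delays(128, 0.3e-3, 15.0)[:4])
```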
  • the processor 116 is coupled to the beamformer 114.
  • the processor 116 may also be described as a processor circuit, which can include other components in communication with the processor 116, such as a memory, beamformer 114, communication interface 118, and/or other suitable components.
  • the processor 116 may include a central processing unit (CPU), a graphical processing unit (GPU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), a controller, a field programmable gate array (FPGA) device, another hardware device, a firmware device, or any combination thereof configured to perform the operations described herein.
  • CPU central processing unit
  • GPU graphical processing unit
  • DSP digital signal processor
  • ASIC application specific integrated circuit
  • FPGA field programmable gate array
  • the processor 116 may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
  • the processor 116 is configured to process the beamformed image signals. For example, the processor 116 may perform filtering and/or quadrature demodulation to condition the image signals.
  • the processor 116 and/or 134 can be configured to control the array 112 to obtain ultrasound data associated with the object 105.
  • the communication interface 118 is coupled to the processor 116.
  • the communication interface 118 may include one or more transmitters, one or more receivers, one or more transceivers, and/or circuitry for transmitting and/or receiving communication signals.
  • the communication interface 118 can include hardware components and/or software components implementing a particular communication protocol suitable for transporting signals over the communication link 120 to the host 130.
  • the communication interface 118 can be referred to as a communication device or a communication interface module.
  • the communication link 120 may be any suitable communication link.
  • the communication link 120 may be a wired link, such as a universal serial bus (USB) link or an Ethernet link.
  • the communication link 120 may be a wireless link, such as an ultra-wideband (UWB) link, an Institute of Electrical and Electronics Engineers (IEEE) 802.11 WiFi link, or a Bluetooth link.
  • the communication interface 136 may receive the image signals.
  • the communication interface 136 may be substantially similar to the communication interface 118.
  • the host 130 may be any suitable computing and display device, such as a workstation, a personal computer (PC), a laptop, a tablet, or a mobile phone.
  • the processor 134 is coupled to the communication interface 136.
  • the processor 134 may also be described as a processor circuit, which can include other components in communication with the processor 134, such as the memory 138, the communication interface 136, and/or other suitable components.
  • the processor 134 may be implemented as a combination of software components and hardware components.
  • the processor 134 may include a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a controller, an FPGA device, another hardware device, a firmware device, or any combination thereof configured to perform the operations described herein.
  • the processor 134 may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
  • the processor 134 can be configured to generate image data from the image signals received from the probe 110.
  • the processor 134 can apply advanced signal processing and/or image processing techniques to the image signals.
  • the processor 134 can form a three-dimensional (3D) volume image from the image data.
  • the processor 134 can perform real-time processing on the image data to provide a streaming video of ultrasound images of the object 105.
  • the memory 138 is coupled to the processor 134.
  • the memory 138 may be any suitable storage device, such as a cache memory (e.g., a cache memory of the processor 134), random access memory (RAM), magnetoresistive RAM (MRAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), flash memory, solid state memory device, hard disk drives, solid state drives, other forms of volatile and nonvolatile memory, or a combination of different types of memory.
  • the memory 138 can be configured to store the patient files 140 relating to a patient’s medical history, history of procedures performed, anatomical or biological features, characteristics, or medical conditions associated with a patient, computer readable instructions, such as code, software, or other application, as well as any other suitable information or data.
  • the patient files 140 may include other forms of medical history, such as but not limited to ultrasound images, ultrasound videos, and/or any imaging information relating to the patient’s anatomy.
  • the memory 138 can also be configured to store patient files 140 relating to the training and implementation of patient-specific deep learning networks (e.g., neural networks). Mechanisms for training and implementing the patient-specific deep learning networks (e.g., patient-specific neural networks) are described in greater detail herein.
  • the display 132 is coupled to the processor circuit 134.
  • the display 132 may be a monitor or any suitable display.
  • the display 132 is configured to display the ultrasound images, image videos, and/or any imaging information of the object 105.
  • the system 100 may be used to assist a sonographer in performing an ultrasound scan at a point-of-care setting.
  • the host 130 may be a mobile device, such as a tablet, a mobile phone, or portable computer.
  • the ultrasound system can implement a patient-specific deep learning network to automatically label or flag the region of interest and place a bounding box around the region of interest.
  • the sonographer may direct the ultrasound imaging system 100 to label or flag the region of interest and place a bounding box around the region of interest to assist the sonographer in locating, comparing, and displaying the region of interest in the same or subsequent ultrasound imaging examinations in a point-of-care setting.
  • the processor 134 may train one or more new deep learning-based prediction networks to identify the region of interest selected by the sonographer within the anatomy of the patient based on input ultrasound images.
  • the training of the one or more new deep learning-based networks may include receiving a video clip comprising ultrasound image frames acquired by the probe 110, and using them to train a patient-specific deep learning network to identify a region of interest.
  • the training may further comprise using ultrasound imaging frames acquired by the probe 110 to test the deep learning network to ensure that it correctly identifies the region of interest labeled by the sonographer or other user.
  • the same ultrasound system 100 is used for training and implementation of the patient-specific deep learning networks. In other embodiments, different ultrasound systems are used for training and implementation of the patient-specific deep learning networks.
  • a patient-specific deep learning network configuration file (e.g., storing network parameters and/or weights) may be generated by one ultrasound imaging system, transferred to a different, second ultrasound imaging system (e.g., via a local network or the internet, or using physical storage media), and implemented on the second ultrasound imaging system.
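  • One plausible (hypothetical) realization of such a configuration file is a serialized parameter dictionary tagged with the patient and anatomy, written by one system and loaded on a second system that implements the same network architecture; the function names below are illustrative only.

```python
import torch

def export_patient_network(model, patient_id, anatomy, path):
    """Write only the weights (no architecture), keeping the file small and
    portable to a second ultrasound system over a network or on media."""
    torch.save({"patient_id": patient_id,
                "anatomy": anatomy,
                "state_dict": model.state_dict()}, path)

def import_patient_network(model, path, expected_patient_id):
    """Load the transferred parameters into the same architecture on the
    second system, checking that they belong to the expected patient."""
    payload = torch.load(path)
    assert payload["patient_id"] == expected_patient_id
    model.load_state_dict(payload["state_dict"])
    return model
```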
  • the host 130 of the ultrasound imaging system 100 may further include a deep learning network API and GPU hardware.
  • Fig. 2 is a schematic diagram of a plurality of patient-specific ultrasound image frames 210, video clips 220, and deep learning networks 230 stored in the memory 138, according to aspects of the present disclosure.
  • a sonographer may conduct an ultrasound imaging examination of the anatomy of a patient. During the examination, multiple ultrasound image frames 210 may be captured by the probe 110.
  • every image frame 210 captured during an ultrasound imaging examination may be stored in the memory 138 within a particular patient’s file 205. In other embodiments, only a portion of captured image frames 210 may be stored in the memory 138 in the patient’s file 205 (e.g., those image frames 210 that are selected by the user for storage).
  • the ultrasound image frames 210 may be of any suitable image format or extension, including but not limited to IMG files, high dynamic range (HDR) files, NII files associated with the NIfTI-1 data format, MNC files, DCM files, digital imaging and communications in medicine (DICOM) files, or other image file formats or extensions.
  • the ultrasound image frames 210 may include vector image files and/or raster image files.
  • the ultrasound image frames 210 may be in the form of joint photographic experts group (JPEG) files, portable network graphics (PNG) files, tagged image file (TIFF), portable document format (PDF) files, encapsulated postscript (EPS) files, raw image formats (RAW) or other file types.
  • the ultrasound image frames 210 may be captured and stored at any suitable bit depth, depending on the particular application of ultrasound imaging, characteristics of the region of interest, storage space within the memory 138, the number of frames in a given set of the ultrasound image frames 210, or other constraints.
  • the ultrasound image frames 210 may be stored or captured at a bit depth of 8 bits, 16 bits, 24 bits, 32 bits, 64 bits, 128 bits, 256 bits, or more, or at any suitable bit depth therebetween.
  • the user of the ultrasound imaging system 100 may, in a point-of-care setting during an examination or at a later time, identify and label individual image frames 210 which may be of particular interest.
  • the user may annotate, enlarge, crop, compress, or otherwise modify an individual ultrasound image frame 210 in any suitable manner.
  • the ultrasound image frames 210 may be captured at multiple patient examinations at different times. For example, a set of the ultrasound image frames 210 from a first examination may be captured and stored in the memory 138 within the patient’s file 205. At a later date and/or time, a second examination may be conducted and the same region may be examined using the ultrasound imaging system 100. At this second examination, an additional or second set of ultrasound image frames 210 may be captured and also stored in the memory 138 within the patient’s file 205. After this second examination, the patient’s file 205 may then include two sets of ultrasound image frames 210: one from a first examination or “Exam 1”, and one from a second examination or “Exam 2,” as shown in Fig. 2.
  • the anatomy of the patient may then be examined at a third examination (or any subsequent examination) using the ultrasound imaging system 100 and a third set (or any subsequent set) of ultrasound image frames 210 may also be stored. Subsequent examinations may be conducted and corresponding sets of ultrasound image frames 210 may be captured and stored in the memory 138 within the patient’s file 205 and organized according to the date and time the ultrasound image frames 210 were captured corresponding to an examination.
  • the ultrasound imaging system 100 may store, in the memory 138 in communication with the processor 134, a respective ultrasound image frame or ultrasound video clip depicting the same region of interest for each ultrasound examination of several ultrasound examinations.
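  • Purely as an illustration of this per-patient, per-exam organization (the field names are hypothetical and not taken from the disclosure), the stored data might be modeled as follows.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ExamRecord:
    """Frames and clips captured during one ultrasound examination."""
    timestamp: datetime
    image_frames: list = field(default_factory=list)    # stored frames 210
    video_clips: list = field(default_factory=list)     # stored clips 220

@dataclass
class PatientFile:
    """One patient's file 205: exams sorted by date/time plus paths to the
    patient-specific deep learning network parameter files (per anatomy)."""
    patient_id: str
    exams: list = field(default_factory=list)            # one ExamRecord per exam
    network_files: dict = field(default_factory=dict)    # anatomy -> parameter path

    def add_exam(self, exam: ExamRecord):
        self.exams.append(exam)
        self.exams.sort(key=lambda e: e.timestamp)        # organized by date/time
```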
  • Ultrasound video clips 220 may also be stored on the memory 138 within a patient’s specific file 205.
  • multiple ultrasound image frames from a set of ultrasound image frames 210 captured during any given patient examination may be compiled to form an ultrasound video clip 220.
  • the ultrasound video clips 220 may be sorted or organized based on the date and time the ultrasound image files 210 used to generate an ultrasound video clip 220 were captured.
  • the ultrasound video clips 220 may be of any suitable file format or extension.
  • the ultrasound video clips 220 may be captured, created, and/or stored in the form of an audio video interleave (AVI) file, flash video (FLV or F4V) file, Windows® media video (WMV) file, QuickTime® movie (MOV) file, motion picture experts group (MPEG) file, MP4 file, interplay multimedia (MVL) file, Volusion® 4D ultrasound scan (.V00) file, or any other suitable video file.
  • ultrasound video clips 220 may be stored or captured at a bit depth of 8 bits, 16 bits, 24 bits, 32 bits, 64 bits, 128 bits, 256 bits, or more, or any suitable bit depth therebetween.
  • video clips 220 may additionally or alternatively be video loops.
  • the system may be configured to automatically loop the video when the user plays the video.
  • the user of the ultrasound imaging system 100 may either in a point-of-care setting during an examination or at a later time, identify and label an ultrasound video clip 220 which may be of particular interest.
  • the user may annotate, enlarge, compress, or otherwise modify an individual ultrasound video clip 220 in any suitable manner.
  • the ultrasound video clips 220 may be captured, created, or stored at each patient examination. For example, a set of ultrasound video clips 220 from a first exam may be captured and stored in the memory 138 within the patient’s file 205. At a second examination (or any subsequent examination), a second set (or any subsequent set) of ultrasound video clips 220 may be captured and stored in the memory 138 within the particular patient’s file 205 and so on at any additional examinations.
  • the ultrasound video clips 220 may be stored within the memory 138, and organized according to the date and time of the patient examination in which the ultrasound video clip 220 was captured.
  • One or more training ultrasound video clips 222 may be captured by the ultrasound imaging system 100 and used to create and/or train a patient-specific deep learning network 230 corresponding to a region of interest identified by a user.
  • a training ultrasound video clip 222 may comprise a plurality of ultrasound image frames 210 which depict a region of interest selected, labeled, and/or flagged by a user of ultrasound imaging system 100. Additional details regarding the training of patient-specific deep learning networks 230 corresponding to regions of interest will be discussed in more detail hereafter, and particularly with reference to Figs. 4-7.
  • the ultrasound video clips 220 may be of any suitable length of time.
  • the ultrasound imaging system 100 may store a video clip 220 comprising every ultrasound image frame 210 captured during a given patient examination, such that the ultrasound video clip 220 is the same length as the entire patient examination.
  • the ultrasound imaging system 100 may create and store such a full-length video clip 220 within a patient’s file 205 for each patient examination with or without direction from a user of ultrasound imaging system 100.
  • a full-length video clip 220 is not stored in the memory 138.
  • an ultrasound video clip 220 may be a fraction of a second.
  • An ultrasound video clip 220 may comprise only two ultrasound image frames or may be of a duration of only 1-10 milliseconds.
  • the ultrasound video clips 220 may be of a duration of just 1 millisecond, 1 second, 10 seconds, 20 seconds, 50 seconds, 1 minute, 10 minutes, an hour, or of any suitable duration therebetween.
  • the ultrasound imaging system 100 may determine the length of any or all of the ultrasound video clips 220.
  • a user may dictate the length of any or all of the ultrasound video clips 220. Any temporal constraints may, in some embodiments, be imposed by the storage capacity of the memory 138, among other constraints. As previously mentioned, the ultrasound video clips 220 may be captured and stored at any suitable frame rate.
  • with GPU-based processing, frame rates of acquisition of ultrasound image frames 210 may be up to ten times the frame rates achievable with CPU processor circuits.
  • ultrasound imaging system 100 may capture ultrasound image frames 210 at 100 frames per second, 120 frames per second, 180 frames per second, or 200 frames per second, or at any suitable frame rate therebetween.
  • due to the high frame rate achieved by the use of GPU(s) implemented as part of the processors 116 and 134, and the ability to process frames and train and implement patient-specific deep learning networks 230 at a high rate, the ultrasound imaging system 100 can train a new patient-specific deep learning network 230 in a point-of-care setting while an ultrasound examination is being conducted.
  • the ultrasound imaging system 100 is able to implement previously trained patient-specific deep learning networks 230 and display to a user previously identified regions of interest in real time in a point-of-care setting. This allows sonographers to immediately recognize regions of interest while an examination is being performed and easily compare them to images or videos from past examinations to track changes in the anatomy.
  • the patient-specific deep learning networks 230 may be capable of recognizing and distinguishing individual regions of interest where multiple regions of interest exist.
  • although the ultrasound image frames 210 and the ultrasound video clips 220 may pertain to two-dimensional data or depictions, both may be captured, stored, saved, or displayed to a user in two-dimensional or three-dimensional formats. Data corresponding to three-dimensional ultrasound image frames 210 and ultrasound video clips 220 may be stored and organized in the memory 138 in a substantially similar way to two-dimensional data. In some embodiments, data relating to three-dimensional ultrasound image frames 210 and ultrasound video clips 220 may be of a larger file size.
  • a patient file 205 may comprise a plurality of deep learning network files 230.
  • a deep learning network file 230 stores parameters and/or weights of the deep learning networks. That is, the deep learning network file 230 stores the patient-specific and/or anatomy-specific data needed to implement the patient-specific and/or anatomy-specific deep learning network.
  • the file does not include the deep learning architecture (e.g., the various layers of a convolutional neural network). Rather, in these embodiments, the file only stores the parameters and/or weights used in the layers of the CNN. This advantageously minimizes the file size of the deep learning network file 230.
  • the ultrasound imaging system that is implementing the deep learning network can have software and/or hardware to implement the deep learning architecture.
  • the same deep learning architecture may be implemented by the ultrasound imaging system, but it is patient-specific and/or anatomy-specific when it uses the deep learning network file 230.
  • the deep learning network file 230 also stores the deep learning architecture.
  • a deep learning network file 230 may correspond to one anatomical feature within a patient’s anatomy, including an organ, tumor, lesion, diseased region, or other previously listed features. Examples are depicted within Fig. 2 including a deep learning network 230 relating to a cardiac region, a breast region, a liver region, and a kidney region. These regions are merely exemplary.
  • a deep learning network file 230 may correspond to more than one region of interest 250.
  • a first region of interest 250 identified and labeled by a user could correspond to a lesion within a lumen and a second region of interest 250 could correspond to an impedance or blockage of the same lumen or a different lumen within the anatomical region. Additional regions of interest 250 may also be included in the same deep learning network file 230.
  • a deep learning network file 230 may correspond to one, two, three, four, five, ten, 15, 20, or more regions of interest 250.
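  • Where one network file covers several regions of interest, one possible (hypothetical) design widens the network's output head so a single forward pass returns a bounding box and confidence score per region, as sketched below; the disclosure does not mandate this particular structure.

```python
import torch
import torch.nn as nn

class MultiRoiHead(nn.Module):
    """Output head that predicts a bounding box and a confidence score for
    each of num_rois regions of interest from a shared feature vector."""
    def __init__(self, feature_dim=32, num_rois=2):
        super().__init__()
        self.num_rois = num_rois
        self.boxes = nn.Linear(feature_dim, 4 * num_rois)
        self.scores = nn.Linear(feature_dim, num_rois)

    def forward(self, features):
        boxes = torch.sigmoid(self.boxes(features)).view(-1, self.num_rois, 4)
        scores = torch.sigmoid(self.scores(features))
        return boxes, scores   # e.g., one box for a lesion, one for a blockage
```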
  • the region of interest 250 depicted in Fig. 2 may, in some embodiments, be a deep learning network parameter file stored within a patient file 205.
  • a plurality of patient files 205 may be stored in the memory 138.
  • two patient files 205 associated with two different patients are depicted in Fig. 2, each containing ultrasound image frames 210 from multiple patient examinations, ultrasound video clips 220 from multiple patient examinations including training ultrasound video clips 222, and files corresponding to patient-specific deep learning networks 230.
  • Each saved deep learning network 230 corresponding to a particular anatomical region may include one or more regions of interest, as shown in Fig. 2.
  • patient files 140 may include many more patient files 205 than the two depicted.
  • Patient files 140 may include one, two, five, ten, 100, 1000, and/or any suitable number of patient files 205.
  • a patient file 205 may contain any suitable types of data or files corresponding to a patient’s health, history, or anatomy.
  • a patient file 205 may contain multiple deep learning network files 230, or may additionally include other medical/health records of the patient, such as electronic health records or electronic medical records.
  • the ultrasound imaging system 100 may access the patient’s deep learning network file 230 corresponding to a particular region or anatomy as well as any other patient data stored within the patient’s file 205.
  • the memory 138 may be any suitable storage device, or a combination of different types of memory.
  • a first set of patient files 205 may be stored on one storage device, including any type of storage device previously listed, and a second set of patient files 205 may be stored on a separate storage device.
  • the first storage device may be in communication with the second storage device, and the two may subsequently be in communication with the processor 134 of Fig. 1.
  • the total number of patient files 205 may be stored on any number of storage devices in communication with one another and with the processor 134.
  • all or some of the patient files 205 may be stored on a server, or cloud server, and accessed remotely by the host 130. All or some of the patient files 205 may further be copied such that a second copy or back-up of all or some of the patient files 205 are stored on a separate storage device, server, or cloud based server.
  • Fig. 3 is a flow diagram of an ultrasound imaging method 300 of a patient’s ultrasound imaging examination, according to aspects of the present disclosure.
  • this ultrasound imaging examination may be a first or otherwise initial examination (e.g., before subsequent examinations).
  • One or more steps of the method 300 can be performed by a processor circuit of the ultrasound imaging system 100, including, e.g., the processor 134 (Fig. 1).
  • method 300 includes a number of enumerated steps, but embodiments of method 300 may include additional steps before, after, or in between the enumerated steps.
  • one or more of the enumerated steps may be omitted, performed in a different order, or performed concurrently.
  • Fig. 4 is a schematic diagram of a graphical user interface (GUI) or screen display for the ultrasound imaging system 100 identifying a region of interest 420, according to aspects of the present disclosure.
  • a region of interest 420 is generally identified by the user for clinical diagnostic and/or treatment purposes.
  • This region of interest 420 may include an anatomical feature 450.
  • Anatomical feature 450 may be a specific organ, including any of the anatomical features or regions listed in the description of the object 105 of Fig. 1 above.
  • method 300 includes receiving ultrasound image frames 210 from a patient examination.
  • the ultrasound image frames 210 may be created corresponding to signals sent and received via the probe 110 and displayed via the display 132 (Fig. 1).
  • the ultrasound image frames 210 may be stored in the patient’s file 205 (Fig. 2) in the memory 138.
  • method 300 may not include storing the ultrasound image frames 210.
  • method 300 includes receiving a user input designating the location of an anatomical feature 450 within a region of interest 420 (Fig. 4).
  • the region of interest 420 may be substantially similar to any of the previously depicted and described regions of interest 250 in Fig. 2.
  • a user may designate the location of an anatomical feature 450 within the region of interest 420 through a user interface.
  • the region of interest 420 may comprise a portion of the anatomy of the patient, such as an anatomical feature 450, or in some embodiments may comprise an entire anatomy of the patient.
  • Designating the location of an anatomical feature 450 may be completed by the user selecting (e.g., with any suitable input device that is in communication with the processor 134 and/or part of the host 130 in Fig. 1) a location on an ultrasound image frame 210 displayed via the display 132.
  • the display 132 may comprise a touch screen and designating the location of an anatomical feature 450 may comprise touching a location within an ultrasound image frame 210.
  • Other components such as a computer keyboard, mouse, touchpad, various hard or soft buttons on an ultrasound console, or any components configured to receive user inputs may be used to designate an anatomical feature 450 within a region of interest 420.
  • once an anatomical feature 450 within a region of interest 420 is selected by a user, the user may adjust the selected location of the region of interest 420 or anatomical feature 450 by moving the selected location in any two-dimensional or three-dimensional direction via a computer mouse, keyboard, or any suitable input device, and/or by stepping forward or backward through the captured ultrasound image frames 210.
  • the anatomical feature 450 and/or the region of interest 420 within the body of a given patient is identified and designated during the ultrasound imaging examination in a point-of-care setting.
  • the anatomical feature 450 and/or the region of interest 420 may be identified and designated at some point after an examination has taken place.
  • the user only designates either the anatomical feature 450 or the region of interest 420 (and not both). In other embodiments, the user designates both the anatomical feature 450 and the region of interest 420.
  • Step 310 of method 300 may further include identifying more than one anatomical feature 450 and associated region of interest 420. For example, one, two, three, four, five, ten, 15, 20 or more anatomical features 450 may be identified during one patient ultrasound imaging examination within a region of interest 420. In embodiments involving more than one anatomical feature 450 per patient examination, the remaining steps of method 300 may be completed concurrently or simultaneously, or may be completed at different times, including completing any remaining steps with regards to one anatomical feature 450, and then completing the same steps immediately thereafter with regards to any additional anatomical features 450.
  • method 300 includes labeling the previously identified anatomical feature 450 with a graphical element 470 within any ultrasound image frames 210 which depict the region of interest 420.
  • the ultrasound imaging system 100 may label the anatomical feature 450 by any suitable method or with any suitable signifier.
  • the ultrasound imaging system 100 may label the anatomical feature 450 by placing a graphical element 470 at and/or around the user-selected location.
  • the graphical element 470 may also be referred to as a flag, label, bookmark, bookmark label, indicator, or any other suitable term and may be a graphical representation of the region of interest 420.
  • the graphical element 470 may be overlaid over the ultrasound image frame or may be a graphical overlay.
  • the graphical element 470 may be of any suitable shape, color, size, or orientation.
  • the graphical element may include a two-dimensional element and/or a three-dimensional element.
  • the graphical element 470 may be the shape of a flag or triangle as depicted in Fig. 4, or may be a circle, square, rectangle, triangle, any other polygon, or any other geometric or non-geometric shape.
  • Graphical element 470 may also include text of any length and font, numerals, alpha-numeric characters, or any other symbols.
  • the shape of the graphical element 470 may symbolize any number of appropriate characteristics pertaining to the anatomical feature 450 or region of interest 420.
  • the shape of the graphical element 470 could represent the order in which the anatomical feature 450 was identified and designated with respect to other identified anatomical features 450, the relative size of the anatomical feature 450, the level of severity of the medical condition associated with the anatomical feature 450, the urgency with which the condition portrayed by the region of interest 420 is to be treated, or any other relevant characteristic.
  • the color, shading, pattern, or any other feature of the graphical element 470 could also be used to symbolize or convey these same characteristics to a user as well as any other suitable characteristic.
  • the size, shape, color, pattern, or any other feature of the graphical element 470 used to identify an anatomical feature 450 may be selected by a user, or by the ultrasound imaging system 100. Any characteristic of a graphical element 470 may be fully customizable by a user of ultrasound imaging system 100 and additional characteristics or features of a graphical element 470 used to label an anatomical feature 450 may be added by the user.
  • the graphical element 470 used to label an anatomical feature 450 within a region of interest 420 may continue to be displayed at or around the anatomical feature 450.
  • the anatomical feature 450 may move from one location on the display 132 to another location as a user moves the probe 110, and the graphical element 470 used to label the anatomical feature 450 may move with the anatomical feature 450 on the display 132.
  • method 300 includes creating a bounding box 460 around the selected region of interest 420 within any ultrasound image frames 210 which depict the region of interest 420.
  • a bounding box 460 may be created and displayed surrounding the graphical element 470 and anatomical feature 450 and specifying the region of interest 420.
  • Bounding box 460 may be a graphical representation of the region of interest 420. It may be two-dimensional as depicted in Fig. 4, or may be a three-dimensional box.
  • the bounding box 460 may be overlaid over the ultrasound image frame or may be a graphical overlay.
  • Bounding box 460 may be automatically created and displayed to a user by the ultrasound imaging system 100 based on the location of the anatomical features 450 or other characteristics, or may be created based on user input. In some embodiments, the bounding box 460 is created automatically by the ultrasound imaging system 100 and a prompt may be displayed to a user of the ultrasound imaging system 100 allowing a user to modify the dimensions and location of bounding box 460.
  • One primary purpose of bounding box 460 may be to specify to the ultrasound imaging system 100 which portions of an ultrasound image frame 210 should be used to train the patient-specific deep learning network 230.
  • the bounding box 460 may serve additional purposes, such as but not limited to identifying regions of interest 420 to a user more clearly, or conveying other characteristics, measurements, or metrics to a user.
  • both the bounding box 460 and the graphical element 470 are generated and/or displayed. In some embodiments, only one of the bounding box 460 or the graphical element 470 is generated and/or displayed.
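As an informal illustration of how a bounding box might delimit the portion of an image frame that feeds network training, the following Python sketch crops a frame to a box. The `BoundingBox` fields and the `crop_to_box` helper are hypothetical names introduced here for illustration, not part of the disclosed system.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class BoundingBox:
    # Pixel coordinates of the region of interest within a 2-D B-mode frame.
    x_min: int
    y_min: int
    x_max: int
    y_max: int


def crop_to_box(frame: np.ndarray, box: BoundingBox) -> np.ndarray:
    """Return only the portion of the frame inside the bounding box.

    The cropped patch is one way a training pipeline could restrict the
    patient-specific network to the region of interest rather than the
    whole frame.
    """
    return frame[box.y_min:box.y_max, box.x_min:box.x_max]


# Example: a 480x640 grayscale frame with a user-adjusted box.
frame = np.zeros((480, 640), dtype=np.uint8)
roi_patch = crop_to_box(frame, BoundingBox(x_min=200, y_min=120, x_max=360, y_max=260))
print(roi_patch.shape)  # (140, 160)
```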
  • method 300 includes saving a plurality of the ultrasound image frames 210 which depict the region of interest 420 as an ultrasound video clip 222.
  • the image frames 210 and/or the video clip 222 can be saved in a memory.
  • the image frames 210 and/or the ultrasound video clip 222 may be used to train patient-specific deep learning networks 230.
  • Fig. 5 is a flow diagram of a method 500 of training a patient-specific deep learning network to identify a predetermined region of interest 420, according to aspects of the present disclosure.
  • One or more steps of the method 500 can be performed by a processor circuit of the ultrasound imaging system 100, including, e.g., the processor 134 (Fig. 1).
  • Fig. 6 is a schematic diagram of a method of training a patient-specific deep learning network to identify a region of interest, according to aspects of the present disclosure.
  • method 500 includes a number of enumerated steps, but embodiments of method 500 may include additional steps before, after, or in between the enumerated steps. In some embodiments, one or more of the enumerated steps may be omitted, performed in a different order, or performed concurrently.
  • the training of a deep learning network may be initiated immediately after an ultrasound video clip is completed and may be completed as a background process as the ultrasound imaging system 100 performs other functions.
  • method 500 includes receiving a plurality of ultrasound image frames and/or an ultrasound video clip.
  • the ultrasound image frames that form the ultrasound video clip are identified in Fig. 6 as ultrasound image frames 622.
  • the ultrasound image frames 622 may each depict the region of interest (e.g., the region of interest 420 of Fig. 4).
  • the ultrasound image frames 622 may be selected from a set 610 of the ultrasound image frames 210 captured via the ultrasound imaging system 100 during an ultrasound examination.
  • although 10 ultrasound image frames 622 are depicted in Fig. 6, there may be any suitable number of ultrasound image frames 622.
  • the ultrasound image frames 622 corresponding to the ultrasound video clip 222 may include 10, 50, 100, 200, 240, 300, 1000, or more frames, or any suitable number of frames therebetween.
  • the number of ultrasound image frames 622 may be determined by the ultrasound imaging system 100.
  • the number of ultrasound image frames 622 may be user defined.
  • a patient-specific deep learning network may be trained on a few specific ultrasound image frames 622. For example, a patient-specific deep learning network may be satisfactorily trained on about 100 ultrasound image frames 622.
  • method 500 includes a patient-specific deep learning network training.
  • Step 510 further includes a number of sub-steps.
  • a set of ultrasound image frames 622 corresponding to the ultrasound video clip 222 may be extracted from the broader set 610 of ultrasound image frames 210 and used to train a patient-specific deep learning network corresponding to the region of interest 420.
  • the deep learning network is anatomy-specific for a specific patient.
  • method 500 includes identifying a subset 630 (shown in Fig. 6) of ultrasound image frames 622.
  • the ultrasound image frames forming subset 630 are labelled as ultrasound image frames 632 in Fig. 6.
  • Each ultrasound image frame 632 may depict the region of interest 420 and may be included in ultrasound video clip 222.
  • the ultrasound image frames 632 may constitute 80% of the ultrasound image frames 622 of the ultrasound video clip 222.
  • the ultrasound image frames 632 may constitute different percentages of ultrasound image frames 622, such as 10%, 20%, 40%, 60%, 90%, or higher, or any suitable percentage therebetween.
  • method 500 includes using the subset 630 of ultrasound image frames 632 to train a patient-specific deep learning network 230 to identify the region of interest 420.
  • the ultrasound image frames 632 are used by a training component 670 to train the deep learning network, while the remaining frames 642 of the ultrasound image frames 622 may be used for testing.
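A minimal sketch of the train/test split described above, assuming the clip's frames are held in a Python list and that the 80% training fraction is configurable like the other percentages mentioned; the function name and seed handling are illustrative only.

```python
import random


def split_clip_frames(frames, train_fraction=0.8, seed=0):
    """Split the frames of a saved video clip into training and testing subsets.

    Mirrors the 80%/20% split discussed above; any other fraction could be
    passed in instead.
    """
    indices = list(range(len(frames)))
    random.Random(seed).shuffle(indices)
    n_train = int(len(indices) * train_fraction)
    train = [frames[i] for i in indices[:n_train]]
    test = [frames[i] for i in indices[n_train:]]
    return train, test


# Example with 100 placeholder frames (about the number said to suffice).
frames = [f"frame_{i:03d}" for i in range(100)]
train_frames, test_frames = split_clip_frames(frames)
print(len(train_frames), len(test_frames))  # 80 20
```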
  • the deep learning network trained in method 500 may be a neural network, such as a convolutional neural network (CNN).
  • the neural network may be a deep convolutional network (DCN), a deconvolutional network (DN), a deep convolutional inverse graphics network (DCIGN), a generative adversarial network (GAN), a deep residual network (DRN), an extreme learning machine (ELM), or any other application of machine learning or deep learning algorithms suitable for the purposes of the present application.
  • the deep learning network 230 trained at sub-step 520 may include a set of convolutional layers followed by a set of fully connected layers.
  • Each convolutional layer may include a set of filters configured to extract features from an input, such as the ultrasound image frames 632.
  • the number of convolutional and fully connected layers as well as the size of the associated filters may vary depending on the embodiments.
  • the convolutional layers and the fully connected layers may utilize a leaky rectified linear unit (ReLU) activation function and/or batch normalization.
  • the fully connected layers may be non-linear and may gradually shrink the high-dimensional output to a dimension of the prediction result (e.g., the classification output).
  • the fully connected layers may also be referred to as a classifier.
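To make the convolutional/fully connected structure described above concrete, here is a minimal sketch of such a network, using PyTorch as one possible framework. The layer counts, filter sizes, and the class name `RoiClassifier` are assumptions for illustration, not the disclosed architecture.

```python
import torch
import torch.nn as nn


class RoiClassifier(nn.Module):
    """Sketch of a convolutional network of the kind described above.

    Convolutional layers with batch normalization and leaky-ReLU activations
    extract features; fully connected layers (the "classifier") shrink the
    high-dimensional output to one confidence logit per region of interest.
    """

    def __init__(self, num_regions: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.BatchNorm2d(16),
            nn.LeakyReLU(0.1),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.LeakyReLU(0.1),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, 128),
            nn.LeakyReLU(0.1),
            nn.Linear(128, num_regions),  # one logit per region of interest
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


# Example: a batch of four 128x128 single-channel B-mode patches.
logits = RoiClassifier(num_regions=2)(torch.randn(4, 1, 128, 128))
print(logits.shape)  # torch.Size([4, 2])
```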
  • the training component 670 and/or the testing component 680 may be any suitable software and/or hardware implemented in or by a processor circuit of an ultrasound imaging system 100, e.g. the host 130 of Fig. 1.
  • a subset 630 of ultrasound image frames 632 that depict the region of interest 420 can be captured or extracted from a set 610 of ultrasound image frames 210.
  • the subset 630 of ultrasound image frames 632 may include annotated B-mode images.
  • a user may annotate the B-mode images by selecting the area with the anatomical objects and/or imaging artifacts, e.g., with a bounding box and/or label.
  • a processor and/or a processor circuit can receive a user input or user feedback related to the region of interest within the ultrasound image frames. Training of the deep learning network may consider these annotated B-mode images as the ground truth.
  • the B-mode images in the subset 630 may include annotations corresponding to the region of interest 420.
  • a processor and/or processor circuit trains a neural network on the ultrasound image frames based on the user input, thereby generating a patient-specific neural network file.
  • the deep learning network 230 can be applied to each ultrasound image frame 632 in the subset 630, for example, using forward propagation, to obtain an output for each input ultrasound image frame 632.
  • the training component 670 may adjust the coefficients of the filters in the convolutional layers and weightings in the fully connected layers, for example, by using backward propagation to minimize a prediction error (e.g., a difference between the ground truth and the prediction result).
  • the prediction result may include regions of interests identified from the input ultrasound image frames 632.
  • the training component 670 adjusts the coefficients of the filters in the convolutional layers and weightings in the fully connected layers for each input ultrasound image frame 632. In some other instances, the training component 670 applies a batch-training process to adjust the coefficients of the filters in the convolutional layers and weightings in the fully connected layers based on a prediction error obtained from a set of input images.
  • the subset 630 may store image-class pairs. For instance, each ultrasound image frame 632 may be associated with the region of interest 420.
  • the deep learning network may be fed with the image-class pairs from the subset 630 and the training component 670 can apply similar mechanisms to adjust the weightings in the convolutional layers and/or the fully-connected layers to minimize the prediction error between the ground truth (e.g., the specific region of interest 420 in the image-class pair) and the prediction output.
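The following is a hedged sketch of the training pass described above (forward propagation, a prediction error, then backward propagation to adjust coefficients and weights), assuming PyTorch and using binary cross-entropy as a stand-in for the prediction error; `train_patient_network` and its arguments are illustrative names only.

```python
import torch
import torch.nn as nn


def train_patient_network(model, train_batches, epochs=5, lr=1e-3):
    """Adjust filter coefficients and fully connected weights by backpropagation.

    `train_batches` is assumed to yield (frames, labels) pairs, where labels
    are per-region-of-interest ground-truth indicators derived from the
    user's annotations.
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    model.train()
    for _ in range(epochs):
        for frames, labels in train_batches:
            optimizer.zero_grad()
            prediction = model(frames)        # forward propagation
            error = loss_fn(prediction, labels)
            error.backward()                  # backward propagation
            optimizer.step()                  # update coefficients and weights
    return model


# Example with random stand-in data: one batch of 8 frames, 2 regions of interest.
model = nn.Sequential(nn.Flatten(), nn.Linear(128 * 128, 2))
batches = [(torch.randn(8, 1, 128, 128), torch.randint(0, 2, (8, 2)).float())]
train_patient_network(model, batches, epochs=1)
```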
  • method 500 includes identifying a subset 640 of ultrasound image frames 642 for testing.
  • This testing subset 640 may include a number of ultrasound image frames 642 that also depict the region of interest 420.
  • the ultrasound image frames 642 used for testing may also be referred to as testing frames 642 and may be included in the ultrasound video clip 222.
  • the testing frames 642 may comprise about 20% of the ultrasound image frames 622 corresponding to the ultrasound video clip 222.
  • the ultrasound image frames 642 may comprise different percentages of the ultrasound image frames 622, such as 10%, 20%, 40%, 60%, 90%, or higher, or any suitable percentage therebetween.
  • method 500 includes testing the deep learning network trained during sub-step 520 as previously described.
  • the testing frames 642 are used by the testing component 680 of the deep learning network to verify that the coefficients of the filters of the convolutional layers and the weightings in the fully connected layers are accurate.
  • a single testing frame 642 may be presented to the deep learning network and, based on the determined coefficients and weights, a confidence score output may be generated for each region of interest 420.
  • a user may determine a threshold value associated with the confidence score output of the deep learning network.
  • This threshold may be specific to one of the regions of interest 420, or generally applied to all the regions of interest 420 if multiple regions of interest have been identified.
  • the threshold confidence score may be determined and input into the ultrasound imaging system 100 before or at the point-of-care setting, by a user, or alternatively by a manufacturer of one or more components of the ultrasound imaging system 100.
  • the ultrasound imaging system 100 may determine the threshold confidence score based on characteristics of the region of interest 420, trends or other collected data from previous examinations for the same patient or from other patients’ examinations, or any other relevant criteria.
  • method 500 includes determining whether the region of interest 420 was correctly identified by the deep learning network 230. For example, this determination could be based on the confidence score output for each region of interest 420. For example, if a confidence score associated with one region of interest exceeds a predetermined threshold, the deep learning network may indicate that that region of interest is depicted in the ultrasound testing frame 642. If another confidence score output associated with an additional region of interest does not exceed a predetermined threshold, the deep learning network may indicate that that additional region of interest is not depicted in the ultrasound testing frame 642. For each testing frame 642, the testing component 680 of the deep learning network may produce a prediction error (e.g., a difference between the ground truth and the confidence score).
  • the testing component 680 of the deep learning network may determine that the region of interest 420 was correctly identified and will proceed to sub-step 530 again and an additional testing frame 642 may be presented. If, however, the prediction error is calculated to be above a certain threshold level, the testing component 680 of the deep learning network may determine that the region of interest 420 was not correctly identified and will proceed to sub-step 540 of step 510.
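A hedged sketch of the testing pass with a confidence-score threshold, again assuming PyTorch; the threshold value and the mean-absolute-difference prediction error used below are placeholders, not the disclosed criteria.

```python
import torch
import torch.nn as nn


def test_patient_network(model, test_batches, threshold=0.8):
    """Report, per testing batch, which regions of interest are detected.

    A sigmoid turns each logit into a confidence score; scores above the
    (user- or system-chosen) threshold count as "region depicted".
    """
    model.eval()
    results = []
    with torch.no_grad():
        for frames, labels in test_batches:
            confidence = torch.sigmoid(model(frames))
            detected = confidence > threshold
            # Prediction error as a simple mean absolute difference from ground truth.
            error = (confidence - labels).abs().mean().item()
            results.append((detected, error))
    return results


# Example with random stand-in data.
model = nn.Sequential(nn.Flatten(), nn.Linear(128 * 128, 2))
batches = [(torch.randn(4, 1, 128, 128), torch.randint(0, 2, (4, 2)).float())]
print(test_patient_network(model, batches)[0][1])  # prediction error for the batch
```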
  • method 500 includes adjusting the deep learning network parameters.
  • the parameters of the deep learning network may include coefficients of filters of the convolutional layers as well as weights of the fully connected layers in a convolutional neural network application.
  • the parameters of the deep learning network may include additional coefficients, weights, or other values depending on the particular type of deep learning network used and its intended application. The deep learning network may be applied, for example using forward propagation, to obtain an output 650 for each input ultrasound image frame 632.
  • the testing component 680 may adjust the coefficients of the filters in the convolutional layers and weightings in the fully connected layers by using backward propagation to minimize the prediction error.
  • the testing component 680 adjusts the coefficients of the filters in the convolutional layers and weightings in the fully connected layers for each input ultrasound image frame 642. In some other instances, the testing component 680 applies a batch-training process to adjust the coefficients of the filters in the convolutional layers and weightings in the fully connected layers based on a prediction error obtained from a set of input images. After sub-step 540 of step 510 is completed, the deep learning network then returns to sub-step 530 and an additional testing frame 642 is presented.
  • Testing component 680 may iteratively present each testing frame 642 of subset 640 and may adjust the coefficients and weights of the neural network’s convolutional and fully connected layers until multiple or all of the testing frames 642 have been presented. In some instances, after multiple or all testing frames 642 have been presented and tested, the ultrasound imaging system 100 may present an indicator to the user indicating the success of the training and testing processes. For example, if the prediction error for each subsequent testing frame 642 consistently decreased, such that a convergence was observed, an indication of success may be displayed to a user via display 132. In other instances, if the prediction error for each subsequent testing frame 642 did not decrease, such that a divergence of prediction error was observed, an indication of failure may be displayed via display 132.
  • this indication may be accompanied with a directive to the user to redo the examination, adjust the size of the bounding box 460, reposition the graphical element 470, or perform another remedial action.
  • the ultrasound imaging system 100 may train a new patient-specific deep learning network based on as few as 100 ultrasound image frames 622 in very little time. For example, a deep learning network may be trained in as little as a few minutes. In other embodiments, a deep learning network may be trained in much less time depending primarily on processing speeds of components of the ultrasound imaging system 100. Because this patient-specific deep learning network is intentionally overfitted to the anatomy of the patient, it is customized for the patient anatomy.
  • method 500 includes saving the deep learning network parameters as a deep learning network file 230. If the testing component 680 observed a convergence of the prediction error, coefficients and weights of the neural network corresponding to all regions of interest 420 within the trained neural network may be saved as a deep learning network file 230.
  • At step 550, method 500 includes storing the deep learning network file 230 in a patient’s file 205 in a memory accessible by the ultrasound imaging system (e.g., the memory 138 of Fig. 1). The deep learning network file 230 that is saved is therefore a patient-specific and/or anatomy-specific deep learning network with coefficients and weights trained or calculated to identify the associated patient’s specific regions of interest 420 within that patient’s anatomy.
  • This deep learning network file 230 may be loaded to the ultrasound imaging system 100 during subsequent examinations to assist a user in locating and identifying the same regions of interest 420 within a patient’s anatomy and comparing differences over time in any measurable or observable characteristics of the region of interest 420, as will be discussed in more detail hereafter. It is noted that any or all of the steps of method 500 may be performed by the ultrasound imaging system 100 either during a patient examination or after a patient examination. In addition, the steps of training a deep learning network as outlined above and/or according to other methods presented may be performed concurrently while the ultrasound imaging system 100 acquires ultrasound image frames during an examination, or these steps may be performed at a later time.
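A minimal sketch of how trained parameters might be saved as a patient-specific file and reloaded at a later examination, using PyTorch's state-dict serialization as one option; the directory layout, file naming, and `patient_0001` identifier are assumptions.

```python
import os

import torch
import torch.nn as nn


def save_patient_network(model, patient_id, directory="patient_files"):
    """Store trained coefficients and weights as a patient-specific file.

    An actual system would keep this file alongside the patient's other
    examination records.
    """
    os.makedirs(directory, exist_ok=True)
    path = os.path.join(directory, f"{patient_id}_roi_network.pt")
    torch.save(model.state_dict(), path)
    return path


def load_patient_network(model, path):
    """Reload the saved parameters into a model of matching architecture."""
    model.load_state_dict(torch.load(path))
    model.eval()
    return model


# Example round trip with a toy model and a placeholder patient identifier.
toy_model = nn.Linear(4, 2)
saved_path = save_patient_network(toy_model, patient_id="patient_0001")
load_patient_network(nn.Linear(4, 2), saved_path)
```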
  • Fig. 7 is a flow diagram of a method 700 of identifying a region of interest 420 with a previously trained patient-specific and/or anatomy-specific deep learning network, according to aspects of the present disclosure.
  • One or more steps of the method 700 can be performed by a processor circuit of the ultrasound imaging system 100, including, e.g., the processor 134 (Fig. 1).
  • Fig. 8 is a schematic diagram of a method of identifying and displaying to a user a region of interest 420.
  • method 700 includes a number of enumerated steps, but embodiments of method 700 may include additional steps before, after, or in between the enumerated steps.
  • method 700 includes loading a previously saved patient-specific deep learning network file 230.
  • the deep learning network file 230 may include several parameters, such as coefficients of filters of convolutional layers and weights of fully connected layers configured to recognize one or more regions of interest 420 within a patient’s anatomy.
  • method 700 includes implementing the patient-specific deep learning network trained to recognize regions of interest 420 during a subsequent ultrasound examination for that patient.
  • the patient-specific deep learning network may be loaded and implemented by a user of the ultrasound imaging system 100 in a point-of-care setting, such that the system 100 may receive and analyze ultrasound image frames 210 in real time.
  • Step 710 may be divided into several sub-steps.
  • method 700 includes receiving an ultrasound image frame 812 captured by the probe 110.
  • Sub-step 715 may be initiated at a subsequent patient examination, in which one or more regions of interest 420 are to be examined on a second, third, fourth, or subsequent occasion.
  • the ultrasound image frame 812 may depict one of the regions of interest 420 within the patient’s anatomy or may not.
  • method 700 includes determining whether the ultrasound image frame 812 received from the probe 110 depicts one of the regions of interest 420.
  • a processor and/or processor circuit can apply the patient-specific neural network to the ultrasound image frames to identify the region of interest within the ultrasound image frames.
  • the region of interest can be identified automatically (e.g., without user input required to identify the region of interest).
  • the ultrasound imaging system 100 may retrieve the deep learning network file for that patient which may contain multiple deep learning parameter files as shown by the regions of interest 250 in Fig. 2.
  • whether the ultrasound image frame 812 depicts one of the regions of interest 420 may be determined based on the confidence score output for that region of interest 420. If the ultrasound imaging system 100 determines that no region of interest 420 is depicted in the ultrasound image frame 812, the system reverts to sub-step 715, and another ultrasound image frame 812 is received. If, however, the ultrasound imaging system 100 does determine that a region of interest 420 is depicted in an ultrasound image frame 812, the system proceeds to sub-step 725 of step 710.
  • At sub-step 725 of step 710, method 700 includes labelling the anatomical feature 450 identified in a previous ultrasound examination with a graphical element 870 within the ultrasound image frame 812.
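A hedged sketch of applying a previously loaded patient-specific network to a single incoming frame and thresholding the resulting confidence scores, assuming PyTorch; the function and parameter names are illustrative, not part of the disclosed system.

```python
import torch
import torch.nn as nn


def analyze_frame(model, frame, threshold=0.8):
    """Return per-region confidence scores and detection flags for one frame.

    `frame` is assumed to be a single-channel image tensor shaped (H, W);
    a score above the threshold would prompt the system to label the frame,
    while a frame with no score above the threshold would simply be skipped.
    """
    model.eval()
    with torch.no_grad():
        confidence = torch.sigmoid(model(frame.unsqueeze(0).unsqueeze(0)))
    detected = (confidence > threshold).squeeze(0)
    return confidence.squeeze(0), detected


# Example with a toy model and a random stand-in frame.
model = nn.Sequential(nn.Flatten(), nn.Linear(128 * 128, 2))
scores, flags = analyze_frame(model, torch.randn(128, 128))
print(scores, flags)
```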
  • Graphical element 870 may be substantially similar to graphical element 470 previously mentioned and described with reference to Fig. 4.
  • the ultrasound imaging system 100 may place the graphical element 870 in the same location in relation to the anatomical feature 450 and/or other anatomical features surrounding the region of interest 420 as the user placed graphical element 470 in relation to the same elements during a patient’s first examination.
  • the graphical element 870 may be placed at a different location. This different location could correspond to a movement or shifting in the anatomical feature 850 or denote any other suitable characteristic of anatomical feature 450.
  • the graphical element 870 may be substantially different from the graphical element 470.
  • the graphical element 870 may be of any suitable shape, color, size, or orientation, including any of the features previously described in relation to the graphical element 470 of Fig. 4.
  • changes between graphical element 470 and graphical element 870 may symbolize to a user any number of appropriate characteristics pertaining to the region of interest 420.
  • changes could reflect changes in the size of the anatomical feature 450 within the region of interest 420, the level of urgency with which the condition portrayed by the region of interest 420 is to be treated, or any other relevant characteristic.
  • method 700 includes calculating one or more metrics associated with the anatomical feature 450 or region of interest 420.
  • Metrics associated with the region of interest 420 may include, but are not limited to, blood flow through a lumen or body cavity; the volume of a particular region of interest 420, such as a body cavity, a tumor, a cyst, or any other suitable region of interest 420; and other dimensions of an anatomical feature 450, including length, width, depth, circumference, diameter, area, and other metrics.
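As a narrow illustration of a computed metric, the sketch below derives simple planar dimensions from a bounding box. It assumes a fixed pixel spacing and does not attempt flow or volume metrics, which would require additional data (e.g., 3-D frames or Doppler); the function name and spacing value are hypothetical.

```python
def box_metrics(x_min, y_min, x_max, y_max, mm_per_pixel=0.2):
    """Approximate width, height, and area of a region of interest from its box.

    The pixel spacing is a placeholder; real measurements would come from
    the imaging geometry of the system.
    """
    width_mm = (x_max - x_min) * mm_per_pixel
    height_mm = (y_max - y_min) * mm_per_pixel
    return {
        "width_mm": width_mm,
        "height_mm": height_mm,
        "area_mm2": width_mm * height_mm,
    }


print(box_metrics(200, 120, 360, 260))
```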
  • method 700 includes creating a bounding box 860 around the region of interest 420 in an ultrasound image frame 812.
  • the bounding box 860 may be substantially similar to the bounding box 460 described with reference to Fig. 4.
  • the bounding box 860 may be two-dimensional as depicted in Fig. 8, or may be a three-dimensional box.
  • One purpose of the bounding box 860 may be to specify the boundaries of the region of interest 420 and therefore which features should be used by the ultrasound imaging system 100 to further train the deep learning network.
  • the size or other features of the bounding box 860 may be determined in a similar manner as the bounding box 460 may be generated or modified as discussed previously.
  • the ultrasound imaging system 100 may prompt a user to decrease or increase the size of the bounding box 860 depending on the prediction error calculated by the ultrasound imaging system 100 and whether or not the prediction error converges or diverges based on the data used to train and test the deep learning network.
  • both the bounding box 860 and the graphical element 870 are generated and/or displayed. In some embodiments, only one of the bounding box 860 or the graphical element 870 is generated and/or displayed.
  • method 700 includes outputting the ultrasound image frame 812 showing the graphical element 870 added in sub-step 725, the bounding box 860 added in sub-step 735, and/or any calculated metrics added in sub-step 730 to a user display 132.
  • the processor and/or processor circuit can provide (e.g., to a display device in communication therewith) a graphical representation related to the region of interest in the ultrasound image frames. A user may then see an image similar to that shown in Fig. 8, showing an ultrasound image frame 812 depicting an anatomical feature 850 within a region of interest 420 with a graphical element 870 and bounding box 860 positioned nearby.
  • these elements may be displayed in real time in a point-of-care setting such that as a user moves the probe 110 on or within a patient’s anatomy, the graphical element 870 and the bounding box 860 may move along display 132 together with the region of interest 420.
  • the ultrasound imaging system 100 returns to sub-step 715 in which an additional ultrasound image frame 812 is received from the probe 110 and the ultrasound image frame 812 is again analyzed according to the same process and displayed to the user.
  • the ultrasound image frames 812 may be analyzed to determine if a region of interest 420 is depicted, the graphical element 870, bounding box 860, and/or any calculated metrics may be added, and the frame 812 may be displayed in real time, or at substantially the same frame rate at which the ultrasound image frames 812 are captured by the probe 110.
  • the ability of the ultrasound imaging system 100 to display to a user in real time the locations of the regions of interest 420 identified in previous examinations along with metrics relating to the regions of interest 420 provides the user with valuable insight as to the current state of any depicted anatomical features 450 or regions of interest 420 within a patient’s anatomy. In addition, it allows a user to more easily track changes to the regions of interest 420 over time, and determine the best method of diagnosing and/or remedying medical issues relating to any of the regions of interest 420.
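A hedged sketch of overlaying a bounding box, a text label, and a confidence score on a frame for display, using OpenCV purely as a convenient drawing library; the coordinates, label text, and colors are placeholders standing in loosely for elements such as 860, 870, and 910.

```python
import cv2  # OpenCV, used here only as one convenient drawing library
import numpy as np


def draw_overlay(frame, box, label, confidence):
    """Draw a bounding box, label, and confidence score onto a grayscale frame.

    `frame` is an 8-bit grayscale image; the annotated color copy is returned
    so the original frame data is left untouched.
    """
    annotated = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
    x_min, y_min, x_max, y_max = box
    cv2.rectangle(annotated, (x_min, y_min), (x_max, y_max), (0, 255, 0), 2)
    cv2.putText(annotated, f"{label}: {confidence:.2f}",
                (x_min, max(y_min - 8, 12)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return annotated


# Example with a blank stand-in frame.
frame = np.zeros((480, 640), dtype=np.uint8)
out = draw_overlay(frame, (200, 120, 360, 260), "ROI 1", 0.93)
print(out.shape)  # (480, 640, 3)
```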
  • method 700 includes saving a set of ultrasound image frames 812 as an additional video clip 820.
  • the ultrasound imaging system may save the frames as a video clip in response to a user input to record the particular frames.
  • Step 745 may be completed simultaneously with step 710 while a user moves the probe 110 along or within a patient’s body, or it may be completed after the probe 110 has completed capturing the frames 812 during a patient examination. Additionally, step 745 may occur during a point-of-care setting or may occur afterwards.
  • An additional video clip 820 may include ultrasound image frames 822 which depict the region(s) of interest 420 within a patient’s anatomy.
  • the additional video clip 820 may be of any suitable length of time, similar to the video clip 222 previously discussed.
  • the additional ultrasound video clip 820 may be stored within a patient’s file 205 along with the other ultrasound video clips 220 previously stored.
  • the additional ultrasound video clip 820 may be organized according to the date on which the procedure was completed, its use in training the deep learning network, or by any other suitable characteristic.
  • the ultrasound imaging system 100 may generate and store any number of additional ultrasound video clips 820 during subsequent patient examinations.
  • method 700 includes initiating the patient-specific deep learning network training step 510 of method 500 using the additional video clip 820 rather than the ultrasound video clip 222 from a patient’s initial ultrasound imaging procedure.
  • image frames from the additional video clip 820 may be combined with image frames from the initial ultrasound video clip 222 to further train the deep learning network.
  • the deep learning network may more effectively identify and monitor changes in regions of interest.
  • Step 510 has been previously discussed with reference to Fig. 5 and Fig. 6 and the process may be substantially similar using the additional video clip 820.
  • the frames 822 of the additional video clip 820 may be divided into two sets.
  • a set 830 of ultrasound image frames 832 selected from the ultrasound image frames 822 may be used to further train the deep learning network. This process may include adjusting the coefficients of filters of convolutional layers and weights of fully connected layers such that the prediction error may further decrease and the deep learning network is better able to identify the regions of interest 420.
  • An additional set 840 of ultrasound image frames 842 may be additionally selected by the ultrasound imaging system 100 to test the deep learning network. This process may include presenting an ultrasound image frame 842 to the deep learning network and testing if the network correctly identifies the region(s) of interest 420. If the network identifies one of the regions of interest 420, an additional frame 842 is presented.
  • the deep learning network parameters may be further adjusted to reduce prediction error.
  • the new parameters, if any, may be saved as the deep learning network file 230 within the patient’s file 205 to be recalled at a later ultrasound examination.
  • Fig. 9 is a diagrammatic view of a graphical user interface (GUI) or screen display for the ultrasound imaging system 100 identifying a region of interest 420, according to aspects of the present disclosure.
  • Fig. 9 may represent an exemplary display 132 which a user of the ultrasound imaging system 100 may see and interact with during a point-of-care setting ultrasound examination.
  • an ultrasound image frame 822 is displayed to a user.
  • a region of interest 420 is depicted within the frame 822 and identified by the ultrasound imaging system 100.
  • a bounding box 860 may adequately identify to a user the region of interest 420 or an anatomical feature 450 within a region of interest 420.
  • a bounding box 860 is placed around the region of interest 420 by the ultrasound imaging system 100 so as to identify an anatomical feature 450 within the region of interest 420 and specify to the system 100 which features within the anatomy of the patient are to be used to further train the patient-specific deep learning network.
  • a confidence score 910 is depicted in Fig. 9.
  • the classification output may indicate the confidence score 910 for each region of interest 420 based on the input ultrasound image frame 822.
  • one region of interest 420 may be assigned to one tumor, another region of interest 420 may be assigned to another tumor, and so on.
  • the deep learning network may then output a confidence score for each region of interest 420 within the deep learning network file 230.
  • a high confidence score 910 for a region of interest 420 indicates that the input ultrasound image frame 812 is likely to include that region of interest 420.
  • a low confidence score 910 for a region of interest 420 indicates that the input ultrasound image frame 812 is unlikely to include that region of interest 420.
  • a confidence score 910 may be concurrently displayed for each region of interest 420, or only the confidence scores corresponding to the regions of interest 420 depicted within a frame 812 on display 132 may be displayed to the user.
  • Confidence scores 910 may be positioned in any suitable location relative to their corresponding regions of interest.
  • confidence scores 910 may be positioned proximate and/or adjacent to the bounding box 860, overlaying the ultrasound image frame 822, beside the ultrasound image frame 822, or in any other suitable position.
  • any calculated metrics 920 may be displayed to a user. The metrics 920 may be displayed in any suitable position within the display 132.
  • the metrics 920 may be displayed to the right of, left of, above, below or overlaid on top of the ultrasound image frame 812.
  • Any number of other suitable indicators may also be included on the display 132 relating to, for example, the processing speed of the probe 110, the processing speed of the host 130, display qualities, characteristics, or settings associated with the display 132, battery life, position of the probe 110 relative to the regions of interest 420 or other notable landmarks within a patient’s anatomy, or any other suitable indicator or image.
  • Fig. 10 is a diagrammatic view of a graphical user interface (GUI) or screen display for the ultrasound imaging system 100 displaying to a user a plurality of video clips 220 depicting the region of interest 420, according to aspects of the present disclosure.
  • Fig. 10 may be representative of a longitudinal evaluation of the patient (e.g., the same region of interest of the same patient over time).
  • the present disclosure also enables a user to view the same region of interest 420 in video clips 220 from different examinations to compare and track changes.
  • the ultrasound imaging system 100 may display to a user a plurality of video clips 220 simultaneously within the display 132.
  • a user of the ultrasound imaging system 100 may select which ultrasound video clips 220 to display and may determine the placement of each video clip 220 within the display 132. In some embodiments, the ultrasound imaging system 100 may determine the order and position of the ultrasound video clips 220 for display. The ultrasound imaging system 100 may be capable of displaying all video clips 220 saved to a patient’s file 205 simultaneously. In other embodiments, a user may select to view and compare a plurality of the ultrasound image frames 210 rather than video clips 220. Within the ultrasound image frames 210 or video clips 220 displayed, the graphical element 870, bounding box 860 and/or region of interest 420 may be displayed. The anatomical feature 450 may appear substantially different or may not appear at all in different image frames 210 or video clips 220.
  • the anatomical feature 450 (e.g., a tumor) may be of a different volume or size.
  • a label 1010 may be included with each displayed ultrasound image frame 210 or video clip 220.
  • the label 1010 may comprise the date and time 1020 of a patient’s examination as well as other metrics 1030 associated with the anatomical feature 450.
  • graphical user interfaces and display designs may be implemented in accordance with the presently disclosed application.
  • FIG. 11 is a schematic diagram of a processor circuit 1100, according to embodiments of the present disclosure.
  • the processor circuit 1100 may be implemented in the probe 110 and/or the host 130 of FIG. 1.
  • the processor circuit 1100 may be in communication with the transducer array 112 in the probe 110.
  • One or more processor circuits 1100 are configured to execute the operations described herein.
  • the processor circuit 1100 may include a processor 1160, a memory 1164, and a communication module 1168. These elements may be in direct or indirect communication with each other, for example via one or more buses.
  • the processor 1160 may include a CPU, a GPU, a DSP, an application-specific integrated circuit (ASIC), a controller, an FPGA, another hardware device, a firmware device, or any combination thereof configured to perform the operations described herein.
  • the processor 1160 may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
  • the memory 1164 may include a cache memory (e.g., a cache memory of the processor 1160), random access memory (RAM), magnetoresistive RAM (MRAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), flash memory, solid state memory device, hard disk drives, other forms of volatile and non-volatile memory, or a combination of different types of memory.
  • the memory 1164 includes a non-transitory computer-readable medium.
  • the memory 1164 may store instructions 1166.
  • the instructions 1166 may include instructions that, when executed by the processor 1160, cause the processor 1160 to perform the operations described herein with reference to the probe 110 and/or the host 130 (FIG. 1). Instructions 1166 may also be referred to as code.
  • the terms “instructions” and “code” should be interpreted broadly to include any type of computer- readable statement(s). For example, the terms “instructions” and “code” may refer to one or more programs, routines, sub-routines, functions, procedures, etc. “Instructions” and “code” may include a single computer-readable statement or many computer-readable statements.
  • the communication module 1168 can include any electronic circuitry and/or logic circuitry to facilitate direct or indirect communication of data between the processor circuit 1100, the probe 110, and/or the display 132.
  • the communication module 1168 can be an input/output (I/O) device.
  • the communication module 1168 facilitates direct or indirect communication between various elements of the processor circuit 1100 and/or the probe 110 (FIG. 1) and/or the host 130 (FIG. 1).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Ultra Sonic Diagnosis Equipment (AREA)

Abstract

A medical imaging system includes a processor configured to communicate with a medical imaging device (e.g., an ultrasound probe). The processor receives a user input related to a region of interest in a first set of medical images (e.g., ultrasound images). The processor trains a neural network on the first set of medical images using the user input, thereby generating a patient-specific neural network. The processor obtains a second set of medical images from the first patient. The processor applies the patient-specific neural network to the second set of medical images to identify the region of interest. The processor provides, based on the application, a graphical representation related to the region of interest in the second set of medical images.
PCT/EP2021/075162 2020-09-29 2021-09-14 Identification de région d'intérêt spécifique à un patient basée sur une image ultrasonore et dispositifs, systèmes et procédés associés WO2022069208A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063084926P 2020-09-29 2020-09-29
US63/084,926 2020-09-29

Publications (1)

Publication Number Publication Date
WO2022069208A1 true WO2022069208A1 (fr) 2022-04-07

Family

ID=77989770

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2021/075162 WO2022069208A1 (fr) 2020-09-29 2021-09-14 Identification de région d'intérêt spécifique à un patient basée sur une image ultrasonore et dispositifs, systèmes et procédés associés

Country Status (1)

Country Link
WO (1) WO2022069208A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116823829A (zh) * 2023-08-29 2023-09-29 深圳微创心算子医疗科技有限公司 医疗影像的分析方法、装置、计算机设备和存储介质

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019219387A1 (fr) * 2018-05-16 2019-11-21 Koninklijke Philips N.V. Identification automatisée de tumeur au cours d'une chirurgie à l'aide d'un apprentissage automatique

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019219387A1 (fr) * 2018-05-16 2019-11-21 Koninklijke Philips N.V. Identification automatisée de tumeur au cours d'une chirurgie à l'aide d'un apprentissage automatique

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116823829A (zh) * 2023-08-29 2023-09-29 深圳微创心算子医疗科技有限公司 医疗影像的分析方法、装置、计算机设备和存储介质
CN116823829B (zh) * 2023-08-29 2024-01-09 深圳微创心算子医疗科技有限公司 医疗影像的分析方法、装置、计算机设备和存储介质

Similar Documents

Publication Publication Date Title
CN109758178B (zh) 超声成像中的机器辅助工作流
US11890137B2 (en) Intraluminal ultrasound imaging with automatic and assisted labels and bookmarks
US10290097B2 (en) Medical imaging device and method of operating the same
JP7462672B2 (ja) 超音波撮像におけるセグメンテーション及びビューガイダンス並びに関連するデバイス、システム及び方法
US11826201B2 (en) Ultrasound lesion assessment and associated devices, systems, and methods
EP4115389B1 (fr) Enregistrement d'image médicale à modes multiples, et dispositifs, systèmes et procédés associés
US20180089845A1 (en) Method and apparatus for image registration
WO2021099278A1 (fr) Assistance de balayage d'ultrasons au point d'intervention (pocus) et dispositifs, systèmes et procédés associés
CN112447276A (zh) 用于提示数据捐赠以用于人工智能工具开发的方法和系统
WO2022069208A1 (fr) Identification de région d'intérêt spécifique à un patient basée sur une image ultrasonore et dispositifs, systèmes et procédés associés
US20240074738A1 (en) Ultrasound image-based identification of anatomical scan window, probe orientation, and/or patient position
US20230112722A1 (en) Intraluminal image visualization with adaptive scaling and associated systems, methods, and devices
KR20230165284A (ko) 진단 또는 중재 이용을 위해 전자 의료 이미지들을 처리하기 위한 시스템들 및 방법들
US20230329674A1 (en) Ultrasound imaging
US20230316523A1 (en) Free fluid estimation
EP4327750A1 (fr) Imagerie ultrasonore guidée pour stadification de point d'intervention d'états médicaux
WO2023202887A1 (fr) Imagerie ultrasonore
JP7421548B2 (ja) 診断支援装置及び診断支援システム
WO2023186640A1 (fr) Complétude de la vue de l'anatomie dans l'imagerie ultrasonore et systèmes, dispositifs et procédés associés
WO2024042044A1 (fr) Échographie guidée pour la stadification au point de soins de pathologies médicales
WO2024104857A1 (fr) Détection automatique de point de mesure pour une mesure d'anatomie dans des images anatomiques
Senthilkumar Virtual-reality-assisted surgery using CNN paradigm and image-processing techniques

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21782450

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21782450

Country of ref document: EP

Kind code of ref document: A1