US11532144B2 - Method and apparatus for actuating a medical imaging device - Google Patents

Method and apparatus for actuating a medical imaging device

Info

Publication number
US11532144B2
US11532144B2 (application US17/029,257 / US202017029257A)
Authority
US
United States
Prior art keywords
image dataset
region
dimensional image
patient
imaging device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US17/029,257
Other languages
English (en)
Other versions
US20210097322A1 (en)
Inventor
Kerstin Mueller
Stefan Lautenschlaeger
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Siemens Healthineers AG
Original Assignee
Siemens Healthcare GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens Healthcare GmbH filed Critical Siemens Healthcare GmbH
Assigned to SIEMENS HEALTHCARE GMBH reassignment SIEMENS HEALTHCARE GMBH ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LAUTENSCHLAEGER, STEFAN, MUELLER, KERSTIN
Publication of US20210097322A1 publication Critical patent/US20210097322A1/en
Application granted granted Critical
Publication of US11532144B2 publication Critical patent/US11532144B2/en
Assigned to SIEMENS HEALTHINEERS AG reassignment SIEMENS HEALTHINEERS AG ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SIEMENS HEALTHCARE GMBH
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B 6/5211 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B 6/5217 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data extracting a diagnostic or physiological parameter from medical diagnostic data
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/02 Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B 6/03 Computed tomography [CT]
    • A61B 6/032 Transmission computed tomography [CT]
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/08 Auxiliary means for directing the radiation beam to a particular spot, e.g. using light beams
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/44 Constructional features of apparatus for radiation diagnosis
    • A61B 6/4405 Constructional features of apparatus for radiation diagnosis the apparatus being movable or portable, e.g. handheld or mounted on a trolley
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/46 Arrangements for interfacing with the operator or the patient
    • A61B 6/467 Arrangements for interfacing with the operator or the patient characterised by special input means
    • A61B 6/469 Arrangements for interfacing with the operator or the patient characterised by special input means for selecting a region of interest [ROI]
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/48 Diagnostic techniques
    • A61B 6/481 Diagnostic techniques involving the use of contrast agents
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/48 Diagnostic techniques
    • A61B 6/486 Diagnostic techniques involving generating temporal series of image data
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/48 Diagnostic techniques
    • A61B 6/488 Diagnostic techniques involving pre-scan acquisition
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/50 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications
    • A61B 6/501 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications for diagnosis of the head, e.g. neuroimaging or craniography
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/50 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications
    • A61B 6/504 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications for diagnosis of blood vessels, e.g. by angiography
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/50 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications
    • A61B 6/507 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications for determination of haemodynamic parameters, e.g. perfusion CT
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/54 Control of apparatus or devices for radiation diagnosis
    • A61B 6/545 Control of apparatus or devices for radiation diagnosis involving automatic set-up of acquisition parameters
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61G TRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
    • A61G 3/00 Ambulance aspects of vehicles; Vehicles with special provisions for transporting patients or disabled persons, or their personal conveyances, e.g. for facilitating access of, or for loading, wheelchairs
    • A61G 3/001 Vehicles provided with medical equipment to perform operations or examinations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/003 Reconstruction from projections, e.g. tomography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/10 Image acquisition
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61G TRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
    • A61G 2210/00 Devices for specific treatment or diagnosis
    • A61G 2210/50 Devices for specific treatment or diagnosis for radiography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/04 Indexing scheme for image data processing or generation, in general involving 3D image data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20112 Image segmentation details
    • G06T 2207/20128 Atlas-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30016 Brain
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/03 Recognition of patterns in medical or anatomical images
    • G06V 2201/031 Recognition of patterns in medical or anatomical images of internal organs

Definitions

  • Embodiments of the invention generally relate to a method and an apparatus for actuating a medical imaging device via a control unit for generating a second three-dimensional image dataset which includes a target region in a region of interest of a patient with a functional impairment.
  • Embodiments of the invention further generally relate to a vehicle including an apparatus for actuating a medical imaging device and a computer program product.
  • a diagnosis is typically made by a physician with the aid of medical image datasets.
  • three-dimensional volume image data is frequently provided via a computed tomography device (CT device).
  • This three-dimensional volume image data can also be used as the basis for the generation of two-dimensional slice image datasets which in each case depict a sectional image through the mapped volume.
  • a typical imaging series, in particular in the case of a stroke patient or a suspected stroke, frequently starts with a native CT image dataset of the patient's head without contrast medium enhancement, in order to enable other conditions or a hemorrhage-induced stroke to be excluded.
  • It is further possible for a CT angiography image dataset to be provided in order to identify the blood vessel occluded by a blood clot, i.e. the vessel affected by a vascular occlusion.
  • In addition, a CT perfusion image dataset is generated in order to identify the ischemic core region and the regions of the brain affected by a reduced blood flow but which are still potentially salvageable. This image data can then be accordingly used as the basis of a targeted treatment decision, for example as to whether observation only, intravenous thrombolysis and/or even mechanical thrombectomy is required.
  • Embodiments of the invention provide an improved method and an improved apparatus for actuating a medical imaging device for generating a three-dimensional image dataset.
  • At least one embodiment of the invention relates to a method for actuating a medical imaging device, in particular a mobile imaging device, via a control unit for generating a second three-dimensional image dataset which includes a target region in a region of interest of a patient with a functional impairment.
  • the method according to an embodiment of the invention comprises: providing a first three-dimensional image dataset including the region of interest of the patient via a data processing unit; identifying the target region in the first three-dimensional image dataset, wherein a partial region of the region of interest affected by the functional impairment is determined; determining an imaging parameter based on the identified target region for generating the second three-dimensional image dataset; and actuating the medical imaging device, via the control unit, based on the imaging parameter determined.
  • At least one embodiment of the invention also relates to an apparatus for actuating a medical imaging device, in particular a mobile medical imaging device, for generating a second 3D image dataset including at least one target region of a region of interest of a patient with a functional impairment.
  • the apparatus includes a data processing unit embodied to provide a first 3D image dataset, to identify the target region in the first 3D image dataset, wherein a partial region of the region of interest affected by the functional impairment is determined, and to determine an imaging parameter based on the identified target region for generating the second 3D image dataset via the medical imaging device.
  • the apparatus according to at least one embodiment of the invention also includes a control unit embodied to actuate the medical imaging device based on the imaging parameter determined for generating the second 3D image dataset.
  • At least one embodiment of the invention furthermore relates to a vehicle, in particular an ambulance, including a medical imaging device and an apparatus according to at least one embodiment of the invention.
  • At least one embodiment of the invention is directed to a computer program product, which includes a computer program and can be loaded directly into a memory of a data processing unit, and program segments, for example libraries and auxiliary functions for executing a method for actuating a medical imaging device when the computer program product is executed.
  • At least one embodiment of the present invention is directed to a method for actuating a medical imaging device via a control unit for generating a second three-dimensional image dataset including a target region in a region of interest of a patient with a functional impairment, the method comprising: providing a first three-dimensional image dataset including the region of interest of the patient; identifying the target region in the first three-dimensional image dataset, a partial region of the region of interest affected by the functional impairment being determined; determining an imaging parameter based on the target region identified; and actuating the medical imaging device based on the imaging parameter determined, to generate the second three-dimensional image dataset.
  • At least one embodiment of the present invention is directed to an apparatus for actuating a medical imaging device for generating a second three-dimensional image dataset including at least one target region in a region of interest of a patient with a functional impairment, the apparatus comprising: a data processing unit embodied to provide a first three-dimensional image dataset, identify the target region in the first three-dimensional image dataset, and determine an imaging parameter based on the identified target region; and a control unit embodied to actuate the medical imaging device based on the imaging parameter determined for generating the second three-dimensional image dataset.
  • At least one embodiment of the present invention is directed to a non-transitory computer program product storing a computer program, directly loadable into a memory of a data processing unit, including program segments for executing the method for actuating a medical imaging device of an embodiment when the computer program is executed by the data processing unit.
  • At least one embodiment of the present invention is directed to a non-transitory computer readable medium storing a computer program, readable and executable by a data processing unit, including program segments for executing the method for actuating a medical imaging device of an embodiment when the computer program is executed by the data processing unit.
  • FIG. 1 shows a method according to an embodiment of the invention for actuating a medical imaging device;
  • FIG. 2 shows a schematic depiction of a longitudinal section through a 3D image dataset;
  • FIG. 3 shows a further schematic depiction of a cross section through a 3D image dataset;
  • FIG. 4 shows a vehicle including a medical imaging device and an apparatus according to an embodiment of the invention.
  • first, second, etc. may be used herein to describe various elements, components, regions, layers, and/or sections, these elements, components, regions, layers, and/or sections, should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments of the present invention.
  • the term “and/or,” includes any and all combinations of one or more of the associated listed items. The phrase “at least one of” has the same meaning as “and/or”.
  • spatially relative terms such as “beneath,” “below,” “lower,” “under,” “above,” “upper,” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as “below,” “beneath,” or “under,” other elements or features would then be oriented “above” the other elements or features. Thus, the example terms “below” and “under” may encompass both an orientation of above and below.
  • the device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
  • the element when an element is referred to as being “between” two elements, the element may be the only element between the two elements, or one or more other intervening elements may be present.
  • Spatial and functional relationships between elements are described using various terms, including “connected,” “engaged,” “interfaced,” and “coupled.” Unless explicitly described as being “direct,” when a relationship between first and second elements is described in the above disclosure, that relationship encompasses a direct relationship where no other intervening elements are present between the first and second elements, and also an indirect relationship where one or more intervening elements are present (either spatially or functionally) between the first and second elements. In contrast, when an element is referred to as being “directly” connected, engaged, interfaced, or coupled to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., “between,” versus “directly between,” “adjacent,” versus “directly adjacent,” etc.).
  • the term “and/or” includes any and all combinations of one or more of the associated listed items. Expressions such as “at least one of,” when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list. Also, the term “exemplary” is intended to refer to an example or illustration.
  • Units and/or devices may be implemented using hardware, software, and/or a combination thereof.
  • hardware devices may be implemented using processing circuitry such as, but not limited to, a processor, central processing unit (CPU), a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a System-on-Chip (SoC), a programmable logic unit, a microprocessor, or any other device capable of responding to and executing instructions in a defined manner.
  • the term ‘module’ or the term ‘controller’ may be replaced with the term ‘circuit.’
  • module may refer to, be part of, or include processor hardware (shared, dedicated, or group) that executes code and memory hardware (shared, dedicated, or group) that stores code executed by the processor hardware.
  • the module may include one or more interface circuits.
  • the interface circuits may include wired or wireless interfaces that are connected to a local area network (LAN), the Internet, a wide area network (WAN), or combinations thereof.
  • the functionality of any given module of the present disclosure may be distributed among multiple modules that are connected via interface circuits. For example, multiple modules may allow load balancing.
  • a server (also known as remote, or cloud) module may accomplish some functionality on behalf of a client module.
  • Software may include a computer program, program code, instructions, or some combination thereof, for independently or collectively instructing or configuring a hardware device to operate as desired.
  • the computer program and/or program code may include program or computer-readable instructions, software components, software modules, data files, data structures, and/or the like, capable of being implemented by one or more hardware devices, such as one or more of the hardware devices mentioned above.
  • Examples of program code include both machine code produced by a compiler and higher level program code that is executed using an interpreter.
  • a hardware device is a computer processing device (e.g., a processor, central processing unit (CPU), a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a microprocessor, etc.)
  • the computer processing device may be configured to carry out program code by performing arithmetical, logical, and input/output operations, according to the program code.
  • the computer processing device may be programmed to perform the program code, thereby transforming the computer processing device into a special purpose computer processing device.
  • the processor becomes programmed to perform the program code and operations corresponding thereto, thereby transforming the processor into a special purpose processor.
  • Software and/or data may be embodied permanently or temporarily in any type of machine, component, physical or virtual equipment, or computer storage medium or device, capable of providing instructions or data to, or being interpreted by, a hardware device.
  • the software also may be distributed over network coupled computer systems so that the software is stored and executed in a distributed fashion.
  • software and data may be stored by one or more computer readable recording mediums, including the tangible or non-transitory computer-readable storage media discussed herein.
  • any of the disclosed methods may be embodied in the form of a program or software.
  • the program or software may be stored on a non-transitory computer readable medium and is adapted to perform any one of the aforementioned methods when run on a computer device (a device including a processor).
  • the non-transitory, tangible computer readable medium is adapted to store information and is adapted to interact with a data processing facility or computer device to execute the program of any of the above mentioned embodiments and/or to perform the method of any of the above mentioned embodiments.
  • Example embodiments may be described with reference to acts and symbolic representations of operations (e.g., in the form of flow charts, flow diagrams, data flow diagrams, structure diagrams, block diagrams, etc.) that may be implemented in conjunction with units and/or devices discussed in more detail below.
  • a function or operation specified in a specific block may be performed differently from the flow specified in a flowchart, flow diagram, etc.
  • functions or operations illustrated as being performed serially in two consecutive blocks may actually be performed simultaneously, or in some cases be performed in reverse order.
  • computer processing devices may be described as including various functional units that perform various operations and/or functions to increase the clarity of the description.
  • computer processing devices are not intended to be limited to these functional units.
  • the various operations and/or functions of the functional units may be performed by other ones of the functional units.
  • the computer processing devices may perform the operations and/or functions of the various functional units without sub-dividing the operations and/or functions of the computer processing devices into these various functional units.
  • Units and/or devices may also include one or more storage devices.
  • the one or more storage devices may be tangible or non-transitory computer-readable storage media, such as random access memory (RAM), read only memory (ROM), a permanent mass storage device (such as a disk drive), solid state (e.g., NAND flash) device, and/or any other like data storage mechanism capable of storing and recording data.
  • the one or more storage devices may be configured to store computer programs, program code, instructions, or some combination thereof, for one or more operating systems and/or for implementing the example embodiments described herein.
  • the computer programs, program code, instructions, or some combination thereof may also be loaded from a separate computer readable storage medium into the one or more storage devices and/or one or more computer processing devices using a drive mechanism.
  • a separate computer readable storage medium may include a Universal Serial Bus (USB) flash drive, a memory stick, a Blu-ray/DVD/CD-ROM drive, a memory card, and/or other like computer readable storage media.
  • the computer programs, program code, instructions, or some combination thereof may be loaded into the one or more storage devices and/or the one or more computer processing devices from a remote data storage device via a network interface, rather than via a local computer readable storage medium.
  • the computer programs, program code, instructions, or some combination thereof may be loaded into the one or more storage devices and/or the one or more processors from a remote computing system that is configured to transfer and/or distribute the computer programs, program code, instructions, or some combination thereof, over a network.
  • the remote computing system may transfer and/or distribute the computer programs, program code, instructions, or some combination thereof, via a wired interface, an air interface, and/or any other like medium.
  • the one or more hardware devices, the one or more storage devices, and/or the computer programs, program code, instructions, or some combination thereof, may be specially designed and constructed for the purposes of the example embodiments, or they may be known devices that are altered and/or modified for the purposes of example embodiments.
  • a hardware device such as a computer processing device, may run an operating system (OS) and one or more software applications that run on the OS.
  • the computer processing device also may access, store, manipulate, process, and create data in response to execution of the software.
  • a hardware device may include multiple processing elements or processors and multiple types of processing elements or processors.
  • a hardware device may include multiple processors or a processor and a controller.
  • other processing configurations are possible, such as parallel processors.
  • the computer programs include processor-executable instructions that are stored on at least one non-transitory computer-readable medium (memory).
  • the computer programs may also include or rely on stored data.
  • the computer programs may encompass a basic input/output system (BIOS) that interacts with hardware of the special purpose computer, device drivers that interact with particular devices of the special purpose computer, one or more operating systems, user applications, background services, background applications, etc.
  • the one or more processors may be configured to execute the processor executable instructions.
  • the computer programs may include: (i) descriptive text to be parsed, such as HTML (hypertext markup language) or XML (extensible markup language), (ii) assembly code, (iii) object code generated from source code by a compiler, (iv) source code for execution by an interpreter, (v) source code for compilation and execution by a just-in-time compiler, etc.
  • source code may be written using syntax from languages including C, C++, C#, Objective-C, Haskell, Go, SQL, R, Lisp, Java®, Fortran, Perl, Pascal, Curl, OCaml, Javascript®, HTML5, Ada, ASP (active server pages), PHP, Scala, Eiffel, Smalltalk, Erlang, Ruby, Flash®, Visual Basic®, Lua, and Python®.
  • At least one embodiment of the invention relates to the non-transitory computer-readable storage medium including electronically readable control information (processor executable instructions) stored thereon, configured such that when the storage medium is used in a controller of a device, at least one embodiment of the method may be carried out.
  • the computer readable medium or storage medium may be a built-in medium installed inside a computer device main body or a removable medium arranged so that it can be separated from the computer device main body.
  • the term computer-readable medium, as used herein, does not encompass transitory electrical or electromagnetic signals propagating through a medium (such as on a carrier wave); the term computer-readable medium is therefore considered tangible and non-transitory.
  • Non-limiting examples of the non-transitory computer-readable medium include, but are not limited to, rewriteable non-volatile memory devices (including, for example flash memory devices, erasable programmable read-only memory devices, or a mask read-only memory devices); volatile memory devices (including, for example static random access memory devices or a dynamic random access memory devices); magnetic storage media (including, for example an analog or digital magnetic tape or a hard disk drive); and optical storage media (including, for example a CD, a DVD, or a Blu-ray Disc).
  • Examples of the media with a built-in rewriteable non-volatile memory include but are not limited to memory cards; and media with a built-in ROM, including but not limited to ROM cassettes; etc.
  • various information regarding stored images, for example property information, may be stored in any other form, or it may be provided in other ways.
  • code may include software, firmware, and/or microcode, and may refer to programs, routines, functions, classes, data structures, and/or objects.
  • Shared processor hardware encompasses a single microprocessor that executes some or all code from multiple modules.
  • Group processor hardware encompasses a microprocessor that, in combination with additional microprocessors, executes some or all code from one or more modules.
  • References to multiple microprocessors encompass multiple microprocessors on discrete dies, multiple microprocessors on a single die, multiple cores of a single microprocessor, multiple threads of a single microprocessor, or a combination of the above.
  • Shared memory hardware encompasses a single memory device that stores some or all code from multiple modules.
  • Group memory hardware encompasses a memory device that, in combination with other memory devices, stores some or all code from one or more modules.
  • memory hardware is a subset of the term computer-readable medium.
  • the apparatuses and methods described in this application may be partially or fully implemented by a special purpose computer created by configuring a general purpose computer to execute one or more particular functions embodied in computer programs.
  • the functional blocks and flowchart elements described above serve as software specifications, which can be translated into the computer programs by the routine work of a skilled technician or programmer.
  • At least one embodiment of the invention relates to a method for actuating a medical imaging device, in particular a mobile imaging device, via a control unit for generating a second three-dimensional image dataset which includes a target region in a region of interest of a patient with a functional impairment.
  • the method according to an embodiment of the invention comprises: providing the first 3D image dataset; identifying the target region in the first 3D image dataset; determining an imaging parameter based on the identified target region; and actuating the medical imaging device based on the imaging parameter determined.
  • the medical imaging device can be used to generate a three-dimensional image dataset (3D image dataset), for example a computed tomography image dataset (CT image dataset).
  • the generation of a 3D image dataset can include image acquisition, i.e. the recording of the measurement data via a measurement data recording unit of the medical imaging device.
  • the measurement data recording unit includes, for example, an X-ray source and an X-ray detector, which is positioned relative to the patient for the recording of measurement data.
  • the generation can also include image reconstruction of the 3D image dataset on the basis of recorded measurement data.
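  • For illustration, the following is a minimal sketch of this reconstruction step, using the parallel-beam Radon transform from scikit-image as a stand-in for the measurement data; the phantom, image size and filter choice are illustrative assumptions, not the patent's method, and a real CT device uses fan- or cone-beam geometries with vendor-specific reconstruction.

```python
# Hedged sketch: reconstruct a 2D slice from projection ("measurement") data
# via filtered back projection. Purely illustrative, not the patented method.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

slice_image = resize(shepp_logan_phantom(), (128, 128))  # ground-truth slice
angles = np.linspace(0.0, 180.0, 120, endpoint=False)    # projection angles in degrees

sinogram = radon(slice_image, theta=angles)               # simulated measurement data
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")  # image reconstruction

rms_error = np.sqrt(np.mean((reconstruction - slice_image) ** 2))
print(f"RMS reconstruction error: {rms_error:.4f}")
```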
  • a 3D image dataset permits a three-dimensional, in particular spatially three-dimensional, depiction of the region of interest of the patient.
  • a 3D image dataset can also be depicted as a plurality of slice image datasets.
  • a slice image dataset in each case includes a slice of the 3D image dataset at a position along a marked axis.
  • a slice image dataset then in each case permits a two-dimensional, in particular spatially two-dimensional, depiction of the respective slice.
  • the 3D image dataset advantageously includes a plurality of voxels, in particular image points.
  • each voxel can preferably in each case have a value, in particular an image value, for example a gray value and/or an RGB color value and/or an intensity value.
  • a slice image dataset can include a plurality of pixels, in particular image points.
  • each pixel can preferably in each case have a value, in particular an image value, for example a gray value and/or an RGB color value and/or an intensity value.
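  • As a minimal sketch of this voxel/slice view of a 3D image dataset (array shapes and values are illustrative assumptions):

```python
# A 3D image dataset as a voxel array; each 2D slice along the first axis is
# one slice image dataset, and each voxel carries an image value.
import numpy as np

rng = np.random.default_rng(seed=0)
volume = rng.integers(0, 80, size=(64, 128, 128))  # (slices, rows, columns)

slice_index = 32
slice_image_dataset = volume[slice_index]          # two-dimensional depiction of one slice
print(volume.shape, slice_image_dataset.shape)     # (64, 128, 128) (128, 128)
print(int(volume[32, 64, 64]))                     # image value of a single voxel
```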
  • the medical imaging device can, for example, include a CT device or a C-arm X-ray device. Also conceivable are other imaging devices embodied to generate a 3D image dataset.
  • the patient can be an animal patient and/or a human patient.
  • the region of interest of the patient can include an anatomical and/or spatial region of the patient which includes a predetermined tissue region and/or a spatial region required for a diagnosis.
  • the region of interest can include a body region, for example the head, the thorax or an arm.
  • the region of interest can optionally also include the entire patient's body.
  • the first 3D image dataset provided in particular includes the region of interest of the patient.
  • the provision of the first 3D image dataset can, for example, include acquiring and/or reading out a computer-readable data memory and/or reception from a memory unit.
  • the method according to the invention can further include the generation of the first 3D image dataset via the medical imaging device, which can subsequently be provided for the further steps of the method.
  • the first 3D image dataset can be or have been generated by the same medical imaging device as the second 3D image dataset.
  • the first 3D image dataset can be or have been generated via a different recording mode of the medical imaging device than the second 3D image dataset.
  • For example, the first 3D image dataset is a native CT image dataset or a CT angiography image dataset and the second 3D image dataset is a CT perfusion image dataset.
  • Other recording modes or image datasets are also conceivable.
  • the second 3D image dataset includes a target region of the region of interest of a patient with a functional impairment.
  • a functional impairment can mean that the normal, bodily function of a functional unit of the patient's body included by the region of interest is disrupted or restricted.
  • a functional unit can, for example, be an organ, a bone, a blood vessel or the like. In the specific example of a stroke patient, the functional unit can, for example, be the patient's brain.
  • the target region included by the region of interest can then at least include the partial region of the region of interest with the functional impairment, i.e. in which the functional impairment can be localized based on the first 3D image dataset.
  • the target region can at least include the partial region of the region of interest with which the functional impairment can be associated, i.e. linked, in the first 3D image dataset.
  • the target region can include a functional unit affected by the functional impairment, a subunit of the affected functional unit to which the functional impairments can be restricted, or also a specific point of origin of the functional impairment within the functional unit.
  • the step of identifying the target region can then include a localization of the partial region of the region of interest in the first 3D image dataset which can be linked with the functional impairment based on the first 3D image dataset. This means that it is possible to identify image regions corresponding to the partial region in the first 3D image dataset. It can also include the localization of a site, i.e. a site of occurrence or a site of origin, of the functional impairment in the region of interest based on the first 3D image dataset.
  • the identifying step can, for example, include the localization or determination of a functional unit affected by the functional impairment, a subunit of the affected functional unit to which the occurrence of the functional impairments can be restricted, or also a specific site of origin of the functional impairment within the functional unit based on the first 3D image dataset.
  • the identified target region can be identified as a three-dimensional volume in the first 3D image dataset. It can also be identified as a two-dimensional area in one or in each of a plurality of slice image datasets of the first 3D image dataset.
  • the identified target region can also be determined by one or a plurality of pixel or voxel coordinates in the first 3D image dataset or one of its slice image datasets. For example, the target region identified is highlighted in a depiction of the first 3D image dataset for a user.
  • a marking can be depicted in the form of a three-dimensional or two-dimensional box in the image dataset superimposed on the image data, which marks the identified target region in the first 3D image dataset. This can optionally enable a user to check the identified target region.
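  • The following minimal sketch shows one way such a target region could be reduced to voxel coordinates and a three-dimensional box; the synthetic mask is an illustrative assumption, not output of the patented identification step:

```python
# Derive a 3D bounding box (for marking or overlay) and the per-axis extent
# from a binary mask of the identified target region.
import numpy as np

mask = np.zeros((64, 128, 128), dtype=bool)
mask[20:35, 40:80, 50:90] = True            # stand-in for the identified target region

coords = np.argwhere(mask)                  # voxel coordinates belonging to the region
lower = coords.min(axis=0)                  # corner of the box nearest the origin
upper = coords.max(axis=0) + 1              # exclusive opposite corner

extent_voxels = upper - lower               # maximum extent per spatial dimension
print("box:", lower, upper, "extent:", extent_voxels)
```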
  • An example of functional impairment is an insufficient supply of oxygen and glucose to a region of the brain following a vascular occlusion of a cerebral artery, i.e. a perfusion deficit in the context of an ischemic infarction.
  • the target region can then include the brain as a functional unit. This means that, during the identification, the image regions associated with the brain of a patient can be determined in the first 3D image dataset.
  • the target region can also include the part of the brain as a subunit of the affected functional unit, which can be determined as affected by the perfusion deficit or linked thereto in the first 3D image dataset. This means that, during the identification, it is possible to determine the image regions of the first 3D image dataset that can be linked with the perfusion deficit.
  • the target region can include the actual ischemic core region depicting the center of the perfusion deficit.
  • In the ischemic core region, the maximum blood flow is only 20% of the normal perfusion amount, which leads to an irreversible loss of function in an extremely short time.
  • In the penumbra surrounding the core region, also called the core shadow, it is possible for the tissue to recover as a result of residual blood flow in the brain tissue, provided that the patient is given the appropriate treatment promptly. This means that, during the identification, it is possible to identify the image regions that can be linked to the ischemic core region in the first 3D image dataset.
  • Another example of a functional impairment is a fracture of a skeletal bone.
  • the target region then, for example, includes at least the region of the site of the fracture.
  • the target region can also, for example, include the entire, affected bone or the section of the patient's body including the bone.
  • A further example of a functional impairment is a vascular occlusion of a blood vessel outside the brain, for example in the heart.
  • the target region then, for example, includes at least the site or the vascular segment at which the vascular occlusion is present.
  • It is also possible for the identification to include the determination of an extent of the functional impairment or a volume of a region affected by the functional impairment, for example a maximum extent along at least one spatial dimension of the 3D image dataset.
  • the identification can take place in the first 3D image dataset and/or based on slice image datasets of the 3D image datasets.
  • the identification can in particular be based on an analysis or further processing of the image values or the structures mapped thereby in the first 3D image dataset or its slice image datasets.
  • the imaging parameter determined based on the target region can include an imaging parameter relating to the image acquisition and/or the image reconstruction.
  • the imaging parameter determined can in particular be targeted to match the generation of the second 3D image dataset to the identified target region, i.e. in particular the partial region of the region of interest with the functional impairment defined by the target region identified via the first 3D image dataset. This can result in an optimal depiction of the region with the functional restriction via the second 3D image dataset.
  • the imaging parameter can in particular include a recording region, also called a scan region.
  • the recording region can correspond to the part of the patient or the region of interest for which measurement data is to be recorded via the medical imaging device for the generation of the second 3D image dataset.
  • the imaging parameter can include a positioning parameter of the medical imaging device, in particular the measurement data recording unit thereof, relative to the patient or relative to the region of interest.
  • the imaging parameter can also include a reconstruction volume for the reconstruction of the 3D image dataset or also another type of reconstruction parameter.
  • other imaging parameters are also conceivable.
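  • As a hedged sketch of how a recording region could be derived from an identified target region (the margin, the 100 mm device limit and all names are assumptions for illustration only):

```python
# Turn a target region's extent along the patient axis into a recording (scan)
# region with a safety margin, respecting a limited recording region such as a
# mobile device might impose.
def recording_region_mm(target_start_mm, target_end_mm,
                        margin_mm=10.0, max_scan_length_mm=100.0):
    start = target_start_mm - margin_mm
    end = target_end_mm + margin_mm
    if end - start > max_scan_length_mm:
        # Device limit reached: center the available scan range on the target.
        center = 0.5 * (target_start_mm + target_end_mm)
        start = center - 0.5 * max_scan_length_mm
        end = center + 0.5 * max_scan_length_mm
    return start, end

print(recording_region_mm(120.0, 195.0))  # fits, with margin: (110.0, 205.0)
print(recording_region_mm(120.0, 260.0))  # clipped to the device limit: (140.0, 240.0)
```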
  • the steps of identifying and/or determining can be carried out fully automatically and without further user intervention via the data processing unit.
  • however, interaction with a user can also be provided, for example a confirmation or a possibility of correction for the target region or the imaging parameter or the like by a user.
  • an input unit can be provided which is embodied to convert user inputs in the form of touch, gestures, speech or the like into instructions that can be interpreted for the data processing.
  • the medical imaging device is actuated via the control unit.
  • the imaging parameter can, for example, be output via an interface to the control unit of the medical imaging device.
  • the imaging parameter can be translated into a control instruction for the medical imaging device, which can be interpreted by the control unit for the actuation of the imaging process, i.e. the generation of the second 3D image dataset.
  • actuation via the control unit can include actuation of the image acquisition and/or actuation of the image reconstruction.
  • the actuation step can be carried out fully automatically and without further user intervention via the control unit based on the imaging parameter determined.
  • interaction with a user can also be provided, for example a confirmation, a starting or stopping, a possibility of correction or the like by a user.
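  • A minimal sketch of such a translation of an imaging parameter into a control instruction follows; the instruction format and field names are entirely hypothetical, not a real device protocol:

```python
# Translate a determined imaging parameter into an instruction a control unit
# could interpret for actuating image acquisition and reconstruction.
from dataclasses import dataclass

@dataclass
class ImagingParameter:
    scan_start_mm: float
    scan_end_mm: float
    recon_volume_voxels: tuple  # (slices, rows, columns) for image reconstruction

def to_control_instruction(p: ImagingParameter) -> dict:
    return {
        "command": "ACQUIRE_3D",
        "scan_range_mm": [p.scan_start_mm, p.scan_end_mm],
        "reconstruction": {"volume_voxels": list(p.recon_volume_voxels)},
    }

instruction = to_control_instruction(ImagingParameter(110.0, 210.0, (64, 256, 256)))
print(instruction)  # would be output via an interface to the control unit
```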
  • the second 3D image dataset can in particular depict a partial section of the first 3D image dataset which includes at least the target region.
  • the second 3D image dataset depicts the same image section as the first 3D image dataset, provided, for example, that such a recording region is determined as an imaging parameter via the method according to at least one embodiment of the invention.
  • the second 3D image dataset is preferably generated in chronological proximity to the first 3D image dataset.
  • In this way, a movement of the patient, in particular the region of interest, between the first and the second image dataset is avoided as far as possible.
  • the inventors have recognized that in many imaging procedures at least two successive image datasets are recorded in chronological proximity, so that it is possible to use a pre-existing first image dataset to optimize and, if possible, also automate the imaging process for generating the subsequent second image dataset.
  • frequently, recordings in different imaging modes, for example a native CT image dataset and a CT angiography image dataset or a CT perfusion image dataset, are to be performed in a short time sequence, preferably without moving the patient, and combined for a comprehensive diagnosis.
  • the method enables a particularly time-efficient workflow.
  • the method can advantageously help to minimize the dose in that an imaging parameter is selected, for example an optimized recording region, thus enabling unnecessary radiation exposure to be avoided and repetitions of recordings to be reduced.
  • there are also mobile CT devices, for example in ambulances (so-called mobile stroke units) or even in helicopters, that enable a patient to be examined in an even shorter time.
  • these are usually subject to restrictions, for example due to the limited space available. For example, only a limited recording region may be available.
  • the method according to at least one embodiment of the invention enables it to be ensured that a relevant partial region can be selected optimally and mapped via the second 3D image dataset.
  • the region of interest includes at least one part of the patient's head and the target region at least one part of the patient's brain.
  • the target region can include the whole brain or only a part of the brain.
  • the patient can be a trauma patient with potential injuries or bleeding in the brain.
  • the patient can be a stroke patient with an ischemic infarction.
  • frequently, the combination with at least one second recording mode, for example a CT angiography image dataset or a CT perfusion image dataset, is provided for the diagnosis, so that advantageously, as a rule, a first three-dimensional image dataset is already available and can be used for the actuation of the medical imaging device for generating the second 3D image dataset.
  • a time-efficient, exact determination of a target region and an actuation matched thereto can in particular be essential for a successful generation of the second three-dimensional image dataset if the time-critical application is accompanied by restrictions of the medical imaging device.
  • the patient is a stroke patient and, in the step of identifying the target region as a partial region, a part of the brain of a patient affected by an ischemic infarction is determined.
  • a stroke represents a particularly time-critical application, wherein a diagnosis that is as fast and comprehensive as possible is essential with respect to the extent of subsequent impairments and the survival of the patient, so that the method according to at least one embodiment of the invention can be used particularly advantageously here.
  • the determination of the partial region can in particular be based on an analysis or further processing of the image values that occur or structures generated thereby in the first 3D image dataset or its slice image datasets.
  • a patient's brain can be divided into different brain areas based on the first 3D image dataset.
  • the brain areas can be, for example, but do not necessarily have to be, linked with specific bodily functions.
  • the brain can, for example, be divided into the brain areas based on an anatomical atlas.
  • the brain areas can in each case be evaluated with respect to the probability of their being affected by the ischemic infarction.
  • a respective brain area can be partially or wholly affected by the ischemic infarction.
  • the target region can then include at least one brain area evaluated as affected.
  • the target region can also include more than one brain area evaluated as affected.
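  • The following minimal sketch evaluates atlas-defined brain areas with respect to being affected, as described above; the synthetic atlas labels and lesion mask are illustrative assumptions (a real atlas-based division additionally requires registration of the atlas to the patient):

```python
# For each brain area from an anatomical atlas, compute the fraction of its
# voxels overlapping a region linked to the ischemic infarction.
import numpy as np

rng = np.random.default_rng(1)
atlas = rng.integers(1, 5, size=(32, 64, 64))      # labels 1..4 stand in for brain areas
lesion_mask = np.zeros(atlas.shape, dtype=bool)
lesion_mask[10:20, 20:40, 20:40] = True            # region linked to the infarction

for area in np.unique(atlas):
    area_voxels = atlas == area
    affected = (area_voxels & lesion_mask).sum() / area_voxels.sum()
    if affected > 0:
        print(f"brain area {area}: {affected:.1%} of voxels affected")
```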
  • the evaluation can be based on the analysis or further processing of the image values.
  • the partial region affected by an ischemic infarction can also be determined independently of a previous division into different brain areas in that the image values that occur or structures generated thereby in the first 3D image dataset or its slice image datasets are analyzed or further processed.
  • An affected partial region can, for example, in the first 3D image dataset or in a respective slice image dataset of the first 3D image dataset, have local or global image values, spatially coherent groups of image values, an average image value or the like which differ, for example, from an expected value.
  • an expected image value can be based on non-affected brain areas of the same patient, for example by comparing the right and left hemispheres of the brain, or also on empirical values from previously recorded image datasets of the same patient or of a plurality of patients.
  • the partial region determined can represent a three-dimensional volume in the first 3D image dataset or, when viewing the first 3D image dataset slice-by-slice, a two-dimensional area in a slice of the first 3D image dataset.
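  • As a minimal sketch of such an expected-value comparison between the two hemispheres (the symmetric split, the uniform test volume and the 5 HU threshold are illustrative assumptions; real data would first require alignment of the head):

```python
# Flag slices whose left/right hemisphere mean image values differ markedly,
# a simple proxy for comparing against an expected value from non-affected tissue.
import numpy as np

def asymmetric_slices(volume_hu, threshold_hu=5.0):
    flagged = []
    mid = volume_hu.shape[2] // 2                  # split columns into hemispheres
    for z in range(volume_hu.shape[0]):
        left_mean = volume_hu[z, :, :mid].mean()
        right_mean = volume_hu[z, :, mid:].mean()
        if abs(left_mean - right_mean) > threshold_hu:
            flagged.append(z)
    return flagged

volume = np.full((8, 64, 64), 35.0)                # uniform "healthy" image values
volume[3:6, 20:40, 40:60] = 5.0                    # hypodense region in one hemisphere
print(asymmetric_slices(volume))                   # -> [3, 4, 5]
```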
  • an ischemic core region is determined in the first 3D image dataset in the identifying step.
  • the ischemic core region includes the region with irreversible damage to the brain caused by the perfusion deficit.
  • the ischemic core region is located in the center of a brain area affected by a perfusion deficit and is surrounded by the penumbra, in which there is residual blood flow.
  • the ischemic core region can particularly advantageously be used for localization and determination of the relevant region.
  • the determination of the ischemic core region can also optionally enable conclusions to be drawn regarding the origin and extent of and the damage caused by the ischemic infarction.
  • One variant of the method according to at least one embodiment of the invention provides that the target region is identified automatically in the first 3D image dataset based on a segmentation algorithm or a threshold method.
  • the segmentation algorithm can, for example, be based on pixel-based, voxel-based, edge-based, area-based and/or region-based segmentation.
  • the segmentation can also be based on a model-based, atlas-based method in conjunction with assumptions about the object to be segmented.
  • the segmentation can proceed in a slice-by-slice manner, i.e. on the basis of the two-dimensional slice images, or it is also possible to use a three-dimensional segmentation method.
  • the segmentation step can also be implemented in a semi-automatic manner. For example, starting points or seed points or rough contour information can be set manually for the segmentation.
  • a segmentation algorithm can advantageously be used to identify coherent structures. For example, the brain, brain areas or partial regions of the brain affected by ischemic infarction can be segmented.
  • a threshold method can include a comparison of image values of the first 3D image dataset, for example given in HU (“Hounsfield Units”), with a threshold value. This can, for example, be used to differentiate whether or not the pixel or voxel with the image value should be assigned to the target region.
  • the threshold value can correspond to an above-described expected value.
  • a segmentation algorithm can be based on a threshold method or be preceded by one, wherein image regions with an image value below or above the specified threshold value are assigned to a region to be segmented, or the image dataset is prepared for a segmentation via a threshold method.
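
As a hedged illustration only: a threshold method combined with a subsequent connected-component step, as outlined above, might be sketched in Python with NumPy and SciPy as follows. The HU window is an example value and not taken from the patent.

    import numpy as np
    from scipy import ndimage

    def segment_target_region(volume_hu, lower_hu, upper_hu):
        """Keep voxels whose image value lies within [lower_hu, upper_hu],
        then retain only the largest spatially coherent component as the
        target-region candidate."""
        binary = (volume_hu >= lower_hu) & (volume_hu <= upper_hu)
        labels, n_components = ndimage.label(binary)  # 3D connected components
        if n_components == 0:
            return np.zeros_like(binary)
        sizes = ndimage.sum(binary, labels, index=range(1, n_components + 1))
        largest = int(np.argmax(sizes)) + 1
        return labels == largest

    # Brain parenchyma in a native CT lies roughly in the 20-45 HU range;
    # these bounds are illustrative, not the patent's values.
    # brain_mask = segment_target_region(ct_volume, 20, 45)
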
  • the identifying step includes the application of a trained function.
  • a trained function can be used to determine the target region automatically and in a time-efficient manner.
  • a trained function can, for example, be used for a localization, a segmentation and/or an evaluation of brain areas.
  • a trained function can optionally even be used to derive the imaging parameter based on the target region.
  • the trained function can advantageously be trained using a machine learning method.
  • the trained function can be a neural network, in particular a convolutional neural network (CNN) or a network including a convolutional layer.
  • a trained function maps input data onto output data.
  • the output data can in particular furthermore depend upon one or more parameters of the trained function.
  • the one or more parameters of the trained function can be determined and/or adjusted by training.
  • the determination and/or the adjustment of the one or more parameters of the trained function can in particular be based on a pair consisting of training input data and associated training output data, wherein the trained function is applied to the training input data in order to generate output data.
  • the evaluation can be based on a comparison of the generated output data and the training output data.
  • a trainable function, i.e. a function with one or more parameters that have not yet been adjusted, is generally also referred to as a trained function.
  • Other terms for trained function are: trained mapping rule, mapping rule with trained parameters, function with trained parameters, algorithm based on artificial intelligence, and machine learning algorithm.
  • An example of a trained function is an artificial neural network, wherein the edge weights of the artificial neural network correspond to the parameters of the trained function.
  • a trained function can also be a deep artificial neural network (or deep neural network).
  • a further example of a trained function is a support vector machine; in particular, other machine learning algorithms can also be used as a trained function.
  • the trained function can, for example, be trained via backpropagation. First, output data can be determined by applying the trained function to training input data. The difference between the generated output data and the training output data can then be quantified via an error function.
  • At least one parameter, in particular a weighting, of the trained function, in particular of the neural network, can be adjusted iteratively based on a gradient of the error function with respect to the at least one parameter. This can advantageously enable the difference between the generated output data and the training output data to be minimized during the training of the trained function.
  • the training input data can, for example, include a plurality of 3D training image datasets with a functional impairment.
  • the training output data can, for example, include training image datasets with localized partial regions or linked target regions, evaluations of brain areas or imaging parameters.
  • an artificial intelligence system, i.e. a trained function, can thus be employed for the identification.
  • a trained function enables all influencing variables relevant for the identification to be taken into account, including those whose connection with the identification a user would be unable to estimate.
  • a trained function can enable automated identification in a particularly reliable and time-efficient manner.
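
Purely as an illustrative sketch of the training procedure described above (backpropagation over pairs of training input and training output data), and expressly not the patent's actual network: a minimal PyTorch example with a small convolutional network mapping a CT slice to a per-pixel target-region probability. Architecture, error function and learning rate are arbitrary assumptions.

    import torch
    import torch.nn as nn

    # Hypothetical minimal CNN: one input channel (CT slice), one output
    # channel (per-pixel probability of belonging to the target region).
    model = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, kernel_size=3, padding=1), nn.Sigmoid(),
    )
    error_function = nn.BCELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    def training_step(training_input, training_output):
        """One backpropagation step: apply the function to the training
        input data, compare the generated output data with the training
        output data via the error function, and adjust the parameters
        (edge weights) along the gradient of the error."""
        optimizer.zero_grad()
        generated_output = model(training_input)
        error = error_function(generated_output, training_output)
        error.backward()   # gradients of the error w.r.t. the weights
        optimizer.step()   # iterative parameter adjustment
        return error.item()

    # Example shapes: a batch of four 64x64 slices with binary masks.
    # x = torch.randn(4, 1, 64, 64)
    # y = torch.randint(0, 2, (4, 1, 64, 64)).float()
    # training_step(x, y)
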
  • the imaging parameter is an image acquisition parameter for recording measurement data for generating the second 3D image dataset and/or an image reconstruction parameter for the image reconstruction of the second 3D image dataset.
  • automation, or at least partial automation, of the imaging process, i.e. of the generation of the second 3D image dataset, is enabled.
  • it is possible to optimize the imaging process by avoiding error-prone manual steps and by improved time efficiency.
  • an image acquisition parameter can substantially include any parameter relating to the recording of measurement data.
  • the imaging parameter can relate to a recording region.
  • the recording region can specify which part of the region of interest is to be scanned via the medical imaging device for the generation of the second 3D image dataset. The imaging parameter can also include other parameters for the recording of measurement data, for example setting parameters for the measurement data recording unit, such as settings for the X-ray source or the X-ray detector.
  • the imaging parameter can also relate to a positioning parameter of the measurement data recording unit relative to the patient or a movement parameter of the measurement data recording unit relative to the patient, for example a speed of the movement.
  • the imaging parameter determined includes at least one recording region.
  • an optimized recording region, which in particular can also take the circumstances of the medical imaging device into account, can be provided.
  • a dose-optimized recording region can be provided.
  • the method according to at least one embodiment of the invention can include that the imaging parameter determined includes a recording region for the recording of measurement data via the medical imaging device for generating the second 3D image dataset and the actuation includes positioning a measurement data recording unit of the medical imaging device relative to the patient based on the recording region determined for the recording of the measurement data.
  • a positioning parameter for the measurement data recording unit for recording measurement data of the recording region can be derived from the recording region determined.
  • the positioning parameters can furthermore be converted by the control unit into control commands for positioning the measurement data recording unit, on the basis of which the medical imaging device is actuated.
  • particularly time-efficient and optimally matched generation of the second 3D image dataset is enabled.
  • the positioning can be automated or at least partially automated, and error-prone, possibly laborious manual positioning can be dispensed with. Overall, this can advantageously enable an optimized, and in particular automated, imaging process and hence a time-efficient and dose-efficient procedure.
  • this can in particular include an optimal selection of the region relevant for a more in-depth diagnosis.
  • the positioning of a measurement data recording unit of the medical imaging device relative to the patient can include the positioning of the actual measurement data recording unit.
  • the positioning of a measurement data recording unit of the medical imaging device relative to the patient can also include positioning the patient via a mobile mounting apparatus of the medical imaging device, which can be actuated via the control unit and on which the patient is mounted for the generation of the second 3D image dataset.
  • the imaging parameter determined can include a parameter relating to the image reconstruction.
  • For example, it is possible to determine a reconstruction volume, a parameter of a reconstruction method, a parameter relating to an artifact correction included in the reconstruction, or another type of parameter.
  • a reconstruction matched to the target region can be provided automatically.
  • the method according to at least one embodiment of the invention provides that the imaging parameter determined includes a reconstruction volume.
  • the image reconstruction of the second 3D image dataset can be carried out automatically or at least semi-automatically and enable a time-efficient and optimized imaging process.
  • An in particular automatic matching of the image acquisition parameters and/or image reconstruction parameters based on the target region identified in the first 3D image dataset can advantageously enable an optimized imaging process and hence a time- and dose-efficient process.
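
A minimal sketch, under stated assumptions, of how a recording region and a reconstruction volume could be derived from an identified target region: the target region is assumed to be a binary 3D mask in (z, y, x) order with known voxel spacing, and the safety margin and all names are illustrative, not the patent's method.

    import numpy as np

    def derive_imaging_parameters(target_mask, voxel_size_mm, margin_mm=10.0):
        """Compute a recording region (extent along the system axis z) and
        a reconstruction volume (3D bounding box) from the target mask,
        each enlarged by a safety margin."""
        voxel_size_mm = np.asarray(voxel_size_mm, dtype=float)
        indices = np.argwhere(target_mask)            # voxel indices (z, y, x)
        lower_mm = indices.min(axis=0) * voxel_size_mm - margin_mm
        upper_mm = (indices.max(axis=0) + 1) * voxel_size_mm + margin_mm
        return {
            "recording_region_z_mm": (float(lower_mm[0]), float(upper_mm[0])),
            "reconstruction_volume_mm": (lower_mm.tolist(), upper_mm.tolist()),
        }

    # The control unit could translate recording_region_z_mm into positioning
    # commands for the gantry or the patient mounting apparatus.
    # params = derive_imaging_parameters(target_mask, voxel_size_mm=(1.0, 0.5, 0.5))
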
  • the first three-dimensional image dataset is a native image dataset, in particular a native CT image dataset, or an angiography image dataset, in particular a CT angiography image dataset.
  • the second image dataset is a perfusion image dataset, in particular a CT perfusion image dataset.
  • a CT perfusion image dataset is frequently used in very time-critical applications, in particular with a stroke patient.
  • a previous image dataset, i.e. a native CT image dataset or a CT angiography image dataset, is frequently already available in such cases and can be provided as the first image dataset.
  • a CT perfusion image dataset is frequently associated with restrictions with respect to the scannable recording region.
  • the method according to at least one embodiment of the invention enables an optimized imaging process which makes optimal use of the possibly restricted recording region or at least enables a generation of the perfusion image dataset that is as time- and dose-efficient as possible.
  • the method according to at least one embodiment of the invention can include that, in the case of a plurality of first image datasets, the first image dataset with the shortest time interval to the time of the provision is provided for the identification.
  • if both a native CT image dataset without the addition of contrast medium and a CT angiography image dataset are available, it is in particular possible for the most recently recorded image dataset to be used and provided for the method.
  • in this case, the smallest possible change to the patient position, i.e. movement of the region of interest, between the first and the second image dataset is to be expected.
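
Illustrative sketch only of the selection rule just described (shortest time interval to the time of provision), assuming each candidate dataset carries a hypothetical acquired_at timestamp:

    from datetime import datetime

    def select_first_dataset(datasets, provision_time=None):
        """Among several candidate first image datasets (e.g. a native CT
        and a CT angiography dataset), pick the most recently acquired one,
        i.e. the one with the shortest interval to the time of provision."""
        provision_time = provision_time or datetime.now()
        return min(datasets, key=lambda d: provision_time - d.acquired_at)
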
  • At least one embodiment of the invention also relates to an apparatus for actuating a medical imaging device, in particular a mobile medical imaging device, for generating a second 3D image dataset including at least one target region of a region of interest of a patient with a functional impairment.
  • the apparatus includes a data processing unit embodied to provide a first 3D image dataset, to identify the target region in the first 3D image dataset, wherein a partial region of the region of interest affected by the functional impairment is determined, and to determine an imaging parameter based on the identified target region for generating the second 3D image dataset via the medical imaging device.
  • the apparatus according to at least one embodiment of the invention also includes a control unit embodied to actuate the medical imaging device based on the imaging parameter determined for generating the second 3D image dataset.
  • An apparatus according to at least one embodiment of the invention for actuating a medical imaging device can in particular be embodied to execute the above-described method according to at least one embodiment of the invention and the aspects thereof.
  • the apparatus for actuating a medical imaging device can be embodied to execute the method and the aspects thereof in that the data processing unit and the control unit are embodied to execute the corresponding method steps.
  • the medical imaging device is in particular a CT device.
  • the CT device can in particular be embodied to generate, in addition to native CT image datasets, angiography CT image datasets and perfusion CT image datasets.
  • At least one embodiment of the invention furthermore relates to a vehicle, in particular an ambulance, including a medical imaging device and an apparatus according to at least one embodiment of the invention.
  • At least one embodiment of the invention is directed to a computer program product which includes a computer program that can be loaded directly into a memory of a data processing unit, with program segments, for example libraries and auxiliary functions, for executing a method for actuating a medical imaging device when the computer program product is executed.
  • the computer program product can include software with a source code that still has to be compiled and linked or only has to be interpreted or an executable software code that only needs to be loaded into the data processing unit for execution.
  • the computer program product enables the method for actuating a medical imaging device to be executed quickly and robustly, and to be identically repeated.
  • the computer program product is configured such that it can execute the method steps according to at least one embodiment of the invention via the data processing unit.
  • the data processing unit must in each case fulfil the requisite conditions such as, for example, an appropriate random-access memory, an appropriate graphics card or an appropriate logic unit so that the respective method steps can be executed efficiently.
  • the computer program product is, for example, stored on a computer-readable medium or held on a network or server, from where it can be loaded into the processor of a data processing unit; this processor can be directly connected to the data processing unit or embodied as part of it.
  • program segments of the computer program product can be stored on an electronically readable data carrier.
  • the program segments of the electronically readable data carrier can be embodied such that they carry out a method according to at least one embodiment of the invention when the data carrier is used in a processing unit.
  • Examples of electronically readable data carriers are a DVD, a magnetic tape or a USB stick, on which electronically readable program means, in particular software, are stored.
  • the invention can also be based on the computer-readable medium and/or the electronically readable data carrier.
  • An extensively software-based implementation has the advantage that it is possible to retrofit apparatuses used to date for actuating a medical imaging device in a simple way via a software update in order to work in the manner according to at least one embodiment of the invention.
  • a computer program product can optionally include additional parts, such as, for example, documentation and/or additional components and hardware components, such as, for example, hardware keys (dongles etc.) for using the software.
  • FIG. 1 depicts an example schematic process of a method for actuating a medical imaging device 101, in particular a mobile medical imaging device, via a control unit 113 for generating a second three-dimensional image dataset which includes a target region 11 in a region of interest 3 of a patient 103 with a functional impairment.
  • In a first providing step S1, a first three-dimensional image dataset including the region of interest 3 of the patient 103 is provided via a data processing unit 115.
  • In a next identifying step S2, the target region 11 is identified based on the first three-dimensional image dataset via the data processing unit 115, wherein a partial region of the region of interest 3 with the functional impairment is determined.
  • the region of interest 3 includes at least one part of the head of the patient 103, and the target region 11 includes at least one part of the brain 13 of the patient 103.
  • the patient 103 can be a stroke patient, wherein in the identifying step S2 a part 8, 9 of the brain 13 of the patient 103 affected by an ischemic infarction is determined as the partial region.
  • in particular, the ischemic core region 9 itself can be determined.
  • the target region 11 can be identified based on a segmentation algorithm, a threshold method and/or with the aid of an anatomical atlas and in particular automatically based on the first three-dimensional image dataset.
  • the identifying step S 2 can include the application of a trained function.
  • In a further determining step, an imaging parameter 4 for generating the second three-dimensional image dataset is determined based on the identified target region 11 via the data processing unit 115.
  • the imaging parameter 4 can be an image acquisition parameter 6, 7 for recording measurement data for generating the second three-dimensional image dataset and/or an image reconstruction parameter 5 for image reconstruction of the second three-dimensional image dataset.
  • the imaging parameter 4 can be a recording region 6, 7 and/or a reconstruction volume 5.
  • In a further actuating step S5, the medical imaging device 101 is actuated via the control unit 113, based on the imaging parameter 4 determined, for the generation of the second three-dimensional image dataset.
  • the imaging parameter 4 determined can be a recording region 6, 7 for the recording of measurement data via the medical imaging device 101 for generating the second three-dimensional image dataset, and the actuation can include a positioning of a measurement data recording unit 102, 104 of the medical imaging device 101 relative to the patient 103.
  • the method can further include a first generating step S0 of generating the first 3D image dataset.
  • the method can further include a second generating step S6 of generating the second 3D image dataset in the context of the actuation of the medical imaging device.
  • the first three-dimensional image dataset can be a native image dataset or an angiography image dataset, and the second three-dimensional image dataset can be a perfusion image dataset.
  • in the case of a plurality of first image datasets, the first three-dimensional image dataset with the shortest time interval to the time of the provision can be provided.
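
For orientation only, the steps discussed for FIG. 1 can be summarized as the following hypothetical Python flow; every name is an assumption standing in for the units described in the text, not an actual device API.

    def run_method(data_processing_unit, control_unit, imaging_device):
        """End-to-end flow of the method steps of FIG. 1 (hypothetical)."""
        first_3d = data_processing_unit.provide_first_dataset()          # S1
        target_region = data_processing_unit.identify_target_region(     # S2
            first_3d)
        imaging_parameter = data_processing_unit.derive_imaging_parameter(
            target_region)                               # determining step
        control_unit.actuate(imaging_device, imaging_parameter)          # S5
        return imaging_device.generate_second_dataset()                  # S6
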
  • FIG. 2 and FIG. 3 each show a schematic depiction of a longitudinal section and of a cross section, respectively, through a first 3D image dataset.
  • Such a section can substantially correspond to a slice image dataset of the first 3D image dataset.
  • the region of interest 3 included in the first 3D image datasets depicted by way of example in FIGS. 2 and 3 is in particular at least one part of the head 3 of a stroke patient 103 with a functional impairment due to an ischemic infarction.
  • indicated in each depicted sectional image of the first 3D image dataset are the cranial bone 15 and the brain 13 of the patient 103.
  • the region 9 substantially corresponds to the ischemic core region in each case.
  • the target region 11 is identified based on the first 3D image dataset via the data processing unit 115 , wherein a partial region of the region of interest 3 with the functional impairment is determined. This can substantially include the identification of the image regions which can be linked with the functional impairment in the first 3D image dataset.
  • FIG. 2 depicts by way of example that the target region 11 can include the entire brain 13 of the patient 103 as a partial region of the head 3 with the functional impairment.
  • the brain 13 of the patient can be identified or localized in the identifying step based on the image values in the first 3D image dataset via a threshold method, a segmentation method or via a trained function.
  • a determined recording region 6 can, for example, be used as the basis for determining a positioning parameter for the medical imaging device 101 for generating the second 3D image dataset, which is subsequently applied in the actuating step via the control unit 113.
  • the ischemic core region 9 can optionally be identified based on the first 3D image dataset and determined as a target region 11 .
  • the image values in the ischemic core region differ from the image values away from the ischemic core region.
  • the ischemic core region 9 can be identified automatically based on a segmentation algorithm or a threshold method in the first 3D image dataset.
  • alternatively, the ischemic core region 9 can also be identified via a machine learning method, i.e. the application of a trained function.
  • an imaging parameter matched to the target region is determined in the form of a recording region 7 or reconstruction volume 7 .
  • the recording region 7 can be reduced to a part of the brain 13 of the patient 103 , thus advantageously enabling dose savings and better account to be taken of any restrictions of the medical imaging device 101 with respect to a recording region.
  • the brain 13 of a patient 103 can also first be divided into a plurality of brain areas 8, 10, 12 based on the first 3D image dataset.
  • the brain areas 8, 10, 12 can, for example, but do not necessarily have to, be linked with specific bodily functions.
  • a brain area 8 can be evaluated as affected by the functional impairment and identified as a target region 11 .
  • the brain area 8 can be identified as a target region 11 in that the image values or structures that occur in the first 3D image dataset are analyzed or further processed separately for each of the brain areas 8, 10, 12.
  • the brain area 8 can, for example, have an average image value which differs from an expected value.
  • an expected image value can be based on non-affected brain areas of the patient 103, for example by a comparison of the right and left hemispheres of the brain 13, or on empirical values from previously recorded image datasets of the same patient 103 or of a plurality of patients.
  • a brain area 8, 10, 12 can also be determined in another way as a partial region affected by the functional impairment, for example if local or global image values, spatially coherent groups of image values, or other structures, for example fluctuations between the image values, occur which can be linked with a functional impairment.
  • the application of a trained function can ensure that an affected brain area is identified in a time-efficient and reliable manner.
  • a recording region and/or a reconstruction volume can be restricted to a part of the brain including the area 8 , thus advantageously enabling dose savings and better account to be taken of any restrictions of the medical imaging device with respect to a recording region.
  • FIG. 4 shows a vehicle, in particular an ambulance, including a medical imaging device 101 and an apparatus for actuating the medical imaging device 101 for generating a second three-dimensional image dataset including at least one target region 11 in a region of interest of a patient 103 , in particular a stroke patient, with a functional impairment.
  • the medical imaging device 101 is a computed tomography device with a measurement data recording unit 102, 104 including an X-ray detector 104 and, opposite thereto, an X-ray source 102, which are arranged in a gantry 106 that enables rotation of the measurement data recording unit 102, 104 about a common axis and hence the recording of measurement data, i.e. in particular X-ray projection measurement data, of a patient 103 from different angular ranges.
  • This measurement data can then be used as the basis for reconstructing a first or second three-dimensional image dataset, or corresponding slice image datasets, for example via a filtered back projection reconstruction algorithm.
  • the patient 103 is mounted on a patient mounting apparatus 105 of the medical imaging device 101 .
  • the measurement data recording unit 102, 104 is positioned relative to the patient so that a determined recording region 6, 7 can be scanned via the measurement data recording unit 102, 104.
  • a positioning, indicated by way of example by the arrows, can be enabled by moving or positioning the patient mounting apparatus 105 and/or by moving or positioning the measurement data recording unit 102, 104, i.e. substantially the gantry 106.
  • the apparatus for actuating the medical imaging device 101 in particular comprises a data processing unit 115 embodied to provide a first three-dimensional image dataset including the region of interest 3 of the patient 103 , to identify the target region 11 based on the first three-dimensional image dataset, wherein a partial region of the region of interest with the functional impairment is determined, and to determine an imaging parameter 4 based on the identified target region 11 for generating the second three-dimensional image dataset.
  • the apparatus also comprises a control unit 113 embodied to actuate the medical imaging device 101 based on the determined imaging parameter 4 for generating the second three-dimensional image dataset.
  • the data processing unit 115 and the control unit 113 can be implemented in the form of a computer, a microcontroller or an integrated circuit.
  • the data processing unit 115 and the control unit 113 can comprise hardware elements or software elements, for example a microprocessor or a so-called FPGA (field programmable gate array). They can also be implemented as a group of computers (also referred to as a "cluster").
  • the apparatus can also include a storage unit 117 .
  • the storage unit 117 can also be included in the data processing unit. It can be implemented as a random-access memory (RAM) or as a permanent mass storage device (hard disk, USB stick, SD card, solid-state disk).
  • An interface 15 can be a hardware or software interface (for example PCI bus, USB or FireWire).
  • the storage unit 117 can, for example, be configured to buffer a first 3D image dataset for usage in accordance with the method according to an embodiment of the invention.
  • the apparatus preferably furthermore comprises at least one input unit and/or at least one output unit, which are not depicted here.
  • An input unit and/or output unit enables, for example, manual interaction by a user, such as starting or stopping the method according to the invention, or a confirmation or correction.
  • An output unit in the form of a depiction unit can also enable the depiction of the first and/or second three-dimensional image dataset.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Radiology & Medical Imaging (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Surgery (AREA)
  • Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Optics & Photonics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Biophysics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physiology (AREA)
  • Human Computer Interaction (AREA)
  • Quality & Reliability (AREA)
  • Pulmonology (AREA)
  • Neurosurgery (AREA)
  • Neurology (AREA)
  • Vascular Medicine (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Multimedia (AREA)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
EP19200340.8 2019-09-30
EP19200340 2019-09-30
EP19200340.8A EP3797692B1 (de) 2019-09-30 2019-09-30 Method and apparatus for actuating a medical imaging device

Publications (2)

Publication Number Publication Date
US20210097322A1 US20210097322A1 (en) 2021-04-01
US11532144B2 true US11532144B2 (en) 2022-12-20

Family

ID=68136148

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/029,257 Active 2041-01-07 US11532144B2 (en) 2019-09-30 2020-09-23 Method and apparatus for actuating a medical imaging device

Country Status (2)

Country Link
US (1) US11532144B2 (de)
EP (1) EP3797692B1 (de)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117079080B (zh) * 2023-10-11 2024-01-30 青岛美迪康数字工程有限公司 Training optimization method, apparatus and device for an intelligent coronary CTA segmentation model

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120093383A1 (en) 2007-03-30 2012-04-19 General Electric Company Sequential image acquisition method
US20150254837A1 (en) 2012-09-05 2015-09-10 Mayank Goyal Systems and methods for diagnosing strokes
US20150289779A1 (en) 2012-10-18 2015-10-15 Bruce Fischl System and method for diagnosis of focal cortical dysplasia
US20180365824A1 (en) * 2015-12-18 2018-12-20 The Regents Of The University Of California Interpretation and Quantification of Emergency Features on Head Computed Tomography
US20190357862A1 (en) * 2016-04-11 2019-11-28 Dedicated2Imaging, Llc Improved ct imaging systems
US20170323447A1 (en) 2016-05-09 2017-11-09 Toshiba Medical Systems Corporation Medical image capturing apparatus and method
US20180025255A1 (en) 2016-07-21 2018-01-25 Toshiba Medical Systems Corporation Classification method and apparatus
US20190117179A1 (en) 2017-10-23 2019-04-25 Mg Stroke Analytics Inc. Systems And Methods For Deciding Management Strategy in Acute Ischemic Strokes Using Rotational Angiography
US20190200274A1 (en) 2017-12-27 2019-06-27 Siemens Healthcare Gmbh Method for providing image data to a central unit

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Extended European Search Report for European Application No. 19200340.8 dated Mar. 24, 2020.

Also Published As

Publication number Publication date
EP3797692B1 (de) 2023-01-18
US20210097322A1 (en) 2021-04-01
EP3797692A1 (de) 2021-03-31

Similar Documents

Publication Publication Date Title
AU2019222619B2 (en) Atlas-based segmentation using deep-learning
US20190130571A1 (en) Method and system for compensating for motion artifacts by means of machine learning
US11101025B2 (en) Providing a patient model of a patient
US10930029B2 (en) Method for processing medical image data and image processing system for medical image data
US10388037B2 (en) Selection method for an artifact correction algorithm, data processing facility for performing the method, and medical imaging system
US11170499B2 (en) Method and device for the automated evaluation of at least one image data record recorded with a medical image recording device, computer program and electronically readable data carrier
US10729919B2 (en) Method for supporting radiation treatment planning for a patient
US20200126272A1 (en) Method for the reconstruction of an image data set of computed tomography, computed tomography apparatus, computer program and electronically readable data carrier
US11615528B2 (en) Method and device for computed tomography imaging
US11229773B2 (en) Determining a vessel puncture position
US20190295294A1 (en) Method for processing parameters of a machine-learning method and reconstruction method
US10121244B2 (en) Transformation determination device and method for determining a transformation for image registration
US11532144B2 (en) Method and apparatus for actuating a medical imaging device
US11925501B2 (en) Topogram-based fat quantification for a computed tomography examination
US20190066301A1 (en) Method for segmentation of an organ structure of an examination object in medical image data
US20220351832A1 (en) Methods for operating an evaluation system for medical image data sets, evaluation systems, computer programs and electronically readable storage mediums
US20220301289A1 (en) Method for providing a trainable function for determination of synthetic image data
US11854125B2 (en) Method and apparatus for providing an artifact-reduced x-ray image dataset
US20230070656A1 (en) Method for providing medical imaging decision support data and method for providing ground truth in 2d image space
US20220405941A1 (en) Computer-implemented segmentation and training method in computed tomography perfusion, segmentation and training system, computer program and electronically readable storage medium
US20230129687A1 (en) Method for creating a three-dimensional medical image
US20240144479A1 (en) Method for providing a virtual, noncontrast image dataset
US11010897B2 (en) Identifying image artifacts by means of machine learning
US20230157650A1 (en) Ct imaging depending on an intrinsic respiratory surrogate of a patient
US12008759B2 (en) Method and system for identifying pathological changes in follow-up medical images

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED

AS Assignment

Owner name: SIEMENS HEALTHCARE GMBH, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MUELLER, KERSTIN;LAUTENSCHLAEGER, STEFAN;SIGNING DATES FROM 20201029 TO 20201110;REEL/FRAME:055228/0295

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: SIEMENS HEALTHINEERS AG, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SIEMENS HEALTHCARE GMBH;REEL/FRAME:066267/0346

Effective date: 20231219