WO2019200351A1 - Systems and methods for an imaging system express mode - Google Patents

Systems and methods for an imaging system express mode

Info

Publication number
WO2019200351A1
WO2019200351A1 (PCT/US2019/027376)
Authority
WO
WIPO (PCT)
Prior art keywords
subject
scan
imaging system
data
imaging
Application number
PCT/US2019/027376
Other languages
French (fr)
Inventor
Sandeep Dutta
David Erik Chevalier
Saad Sirohey
Raja RAMNARAYAN
Gregory OHME
Original Assignee
General Electric Company
Application filed by General Electric Company
Priority to CN201980023558.2A (published as CN112004471A)
Publication of WO2019200351A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B 6/54 Control of apparatus or devices for radiation diagnosis
    • A61B 6/545 Control of apparatus or devices for radiation diagnosis involving automatic set-up of acquisition parameters
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B 6/46 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment with special arrangements for interfacing with the operator or the patient
    • A61B 6/461 Displaying means of special interest
    • A61B 6/463 Displaying means of special interest characterised by displaying multiple images or images and diagnostic data on one display
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B 6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B 6/5294 Devices using data or image processing specially adapted for radiation diagnosis involving using additional data, e.g. patient information, image labeling, acquisition parameters
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B 6/56 Details of data transmission or power supply, e.g. use of slip rings
    • A61B 6/563 Details of data transmission or power supply, e.g. use of slip rings involving image data transmission via a network
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 10/00 ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H 10/60 ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 40/00 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H 40/60 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B 6/02 Devices for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B 6/03 Computerised tomographs
    • A61B 6/032 Transmission computed tomography [CT]
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B 6/04 Positioning of patients; Tiltable beds or the like
    • A61B 6/0487 Motor-assisted positioning
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B 6/48 Diagnostic techniques
    • A61B 6/488 Diagnostic techniques involving pre-scan acquisition
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B 6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B 6/5211 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B 6/5217 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data extracting a diagnostic or physiological parameter from medical diagnostic data
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS

Definitions

  • Embodiments of the subject matter disclosed herein relate to non-invasive diagnostic imaging, and more particularly, to imaging a subject with minimal input or intervention from an operator.
  • Non-invasive imaging technologies allow images of the internal structures of a patient or object to be obtained without performing an invasive procedure on the patient or object.
  • In particular, technologies such as computed tomography (CT) use various physical principles, such as the differential transmission of X-rays through the target volume, to acquire image data and to construct tomographic images (e.g., three-dimensional representations of the interior of the human body or of other imaged structures).
  • Radiologists and technicians must be trained to select and prepare the correct scan protocols for each patient, who may also be positioned differently within the imaging system for a given protocol. If any portion of the scan preparation is incorrect, the quality of the resulting image(s) may be too poor for clinical use and so the scan must be performed again. Further, image analysis must be performed by radiologists/physicians and so the turn-around time for a clinical report may be slow.
  • a method for an imaging system comprises receiving an identification of a subject to be scanned, automatically determining a personalized scan protocol for the subject, automatically performing a scan of the subject according to the personalized scan protocol to acquire imaging data, and displaying an image and decision support, the image and the decision support automatically generated from the imaging data.
  • the imaging of a subject such as a patient may be performed with minimal input or intervention from an operator of the imaging system.
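  • The express-mode flow described above (identify the subject, derive a personalized scan protocol, perform the scan, then present images with decision support) can be sketched as a simple pipeline. The function names, the protocol fields, and the weight-based voltage choice below are illustrative assumptions, not details disclosed by the patent:

```python
# Illustrative sketch of the express-mode pipeline; all names and values
# are hypothetical placeholders.

def express_mode_scan(subject_id, emr_lookup, scanner):
    """Run a scan with minimal operator input or intervention."""
    # 1. Retrieve subject metadata (e.g., age, height, weight) from the EMR.
    metadata = emr_lookup(subject_id)
    # 2. Personalize the scan protocol from the metadata (assumed rule).
    protocol = {
        "tube_voltage_kv": 100 if metadata["weight_kg"] < 70 else 120,
        "scan_region": metadata["ordered_exam"],
    }
    # 3. Acquire imaging data according to the personalized protocol.
    imaging_data = scanner(protocol)
    # 4. Return the image and protocol for display alongside decision support.
    return {"image": imaging_data, "protocol": protocol}

# Minimal stand-ins for the EMR and scanner interfaces.
fake_emr = lambda sid: {"weight_kg": 65, "ordered_exam": "head"}
fake_scanner = lambda protocol: f"imaging data for {protocol['scan_region']}"

result = express_mode_scan("patient-001", fake_emr, fake_scanner)
```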
  • FIG. 1 shows a block schematic diagram of a simplified example system for extending the capabilities of an imaging system according to an embodiment
  • FIG. 2 shows a high-level flowchart illustrating an example method for synchronizing an imaging system with an edge computing system according to an embodiment
  • FIG. 3 shows a high-level flowchart illustrating an example method for generating deep learning training data with an imaging system according to an embodiment
  • FIG. 4 shows a high-level flowchart illustrating an example method for an imaging system express mode according to an embodiment
  • FIG. 5 shows a high-level flowchart illustrating an example method for managing applications on an edge computing system according to an embodiment
  • FIG. 6 shows a pictorial view of an imaging system according to an embodiment
  • FIG. 7 shows a block schematic diagram of an exemplary imaging system according to an embodiment
  • FIG. 8 shows a block schematic diagram of a more detailed example system than that of FIG. 1 for extending the capabilities of an imaging system according to an embodiment
  • FIG. 9 shows a flowchart illustrating an example method for generating decision support output for an imaging system using an edge computing system according to an embodiment.
  • the processing capabilities of an imaging system may be expanded by coupling the imaging system to an ECS, as shown in FIG. 1.
  • Imaging data acquired by the imaging system may be transmitted or streamed during a scan to the ECS.
  • a method for synchronizing the transmission of imaging data to the ECS, such as the method depicted in FIG. 2, allows the imaging data to be processed by deep learning (DL) applications concurrently with the scan, thereby reducing the amount of time to obtain decision support. Additional information relating to the acquisition of imaging data, as well as information characterizing the subject being scanned, may be leveraged to train the DL applications, as depicted in FIG. 3.
  • the expanded processing capabilities of the imaging system when coupled with the ECS allow the imaging system to operate in an “express mode,” wherein a scan is performed with minimal intervention by or input from an operator of the imaging system, as depicted in FIG. 4.
  • New and updated DL applications may be retrieved from a remote repository, as depicted in FIG. 5, thereby allowing the ECS to provide the latest DL capabilities that are compatible with the imaging system.
  • An example of a CT imaging system that may be used to acquire images processed in accordance with the present techniques is provided in FIGS. 6 and 7.
  • the ECS is connected to an imaging system/scanner.
  • the ECS includes CPUs/GPUs running one or more virtual machines (VMs) configured for different types of tasks.
  • Data is streamed in real-time from the scanner to the ECS which processes the data (in image and/or projection space) and returns the results.
  • the imaging system appears to have additional processing power because the post-processing performed by the ECS is output alongside reconstructed images by the user interface of the imaging system.
  • the streaming of data to the ECS is synchronized with the state of the scanner. Data is only transferred from the scanner to the ECS when the scanner is not in a critical state.
  • an interventional mode e.g., when the doctor is at the scanner performing an intervention such as contrast injection
  • the scanner does not transfer data at all to avoid data corruption.
  • the ECS provides task-based decision support.
  • a particular task input to the imaging system triggers a secondary task input to and carried out by the ECS.
  • a task may prescribe a particular scan protocol and/or type of image reconstruction by the scanner, while the secondary task may prescribe the application of relevant post-processing techniques to the acquired data. These post-processing techniques may include deep learning analysis of the acquired data.
  • the ECS may select an appropriate DL application based on the secondary task, exam type, and/or other information, and generate the decision support with the selected DL application.
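  • Selecting a DL application from the secondary task and exam type amounts to a lookup in an application registry. The registry keys and application names below are hypothetical; the patent only describes the selection behavior, not a concrete data structure:

```python
# Hypothetical registry mapping (secondary task, exam type) to a DL
# application on the ECS; names are illustrative only.
DL_APP_REGISTRY = {
    ("lesion_detection", "liver"): "liver_lesion_net_v2",
    ("bleed_detection", "head"): "brain_bleed_net_v1",
    ("segmentation", "abdomen"): "organ_segmenter_v3",
}

def select_dl_application(secondary_task, exam_type):
    """Return the DL application for a secondary task, or None if unsupported."""
    return DL_APP_REGISTRY.get((secondary_task, exam_type))

app = select_dl_application("bleed_detection", "head")
```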
  • Each instance of a decision support that is generated at the ECS may be saved with associated data (e.g., DL application that was used, exam type, final/edited exam report).
  • one or more of the DL applications may be re-trained with the new data, both locally (e.g., DL applications stored on the ECS) and globally (e.g., updated model weights may be sent to a central server to create an updated global model along with data from other locations).
  • FIG. 1 shows a block schematic diagram of an example system 100 for extending the capabilities of an imaging system 101 with an edge computing system (ECS) 110 according to an embodiment.
  • the imaging system 101 may comprise any suitable non-invasive imaging system, including but not limited to a computed tomography (CT) imaging system, a positron emission tomography (PET) imaging system, a single-photon emission computed tomography (SPECT) imaging system, a magnetic resonance (MR) imaging system, an X-ray imaging system, an ultrasound system, and combinations thereof (e.g., a multi-modality imaging system such as a PET/CT, PET/MR or SPECT/CT imaging system).
  • the imaging system 101 includes a processor 103 and a non-transitory memory 104.
  • One or more methods described herein may be implemented as executable instructions in the non-transitory memory 104 that when executed by the processor 103 cause the processor 103 to perform various actions. Such methods are described further herein with regard to FIGS. 2-4.
  • the imaging system 101 further comprises a scanner 105 for scanning a subject such as a patient to acquire imaging data.
  • the scanner 105 may comprise multiple components necessary for scanning the subject.
  • the imaging system 101 comprises a CT imaging system
  • the scanner 105 may comprise a CT tube and a detector array, as well as various components for controlling the CT tube and the detector array, as discussed further herein with regard to FIGS. 6 and 7.
  • the imaging system 101 comprises an ultrasound imaging system
  • the scanner 105 may comprise an ultrasound transducer.
  • the term “scanner” as used herein refers to the components of the imaging system which are used and controlled to perform a scan of a subject.
  • the type of imaging data acquired by the scanner 105 also depends on the type of imaging system 101.
  • the imaging system 101 comprises a CT imaging system
  • the imaging data acquired by the scanner 105 may comprise projection data.
  • the imaging system 101 comprises an ultrasound imaging system
  • the imaging data acquired by the scanner 105 may comprise analog and/or digital echoes of ultrasonic waves emitted into the subject by the ultrasound transducer.
  • the imaging system 101 includes a protocol engine 106 for automatically selecting and adjusting a scan protocol for scanning a subject.
  • a scan protocol selected by protocol engine 106 prescribes a variety of settings for controlling the scanner 105 during a scan of the subject.
  • protocol engine 106 may select or determine a scan protocol based on an indicated primary task.
  • While the protocol engine 106 is depicted as a separate component from the non-transitory memory 104, it should be understood that in some examples the protocol engine 106 may comprise a software module stored in non-transitory memory 104 as executable instructions that, when executed by the processor 103, cause the processor 103 to select and adjust a scan protocol.
  • the imaging system 101 further comprises a user interface 107 configured to receive input from an operator of the imaging system 101 as well as display information to the operator.
  • user interface 107 may comprise one or more of an input device, including but not limited to a keyboard, a mouse, a touchscreen device, a microphone, and so on, and an output device, including but not limited to a display device, a printer, and so on.
  • the imaging system 101 further comprises a camera 108 for assisting with the automatic positioning of the subject within the imaging system.
  • the camera 108 may capture live images of the subject within the imaging system, while the processor 103 determines a position of the subject within the imaging system based on the live images.
  • the processor 103 may then control a table motor controller, for example, to adjust the position of a table upon which the subject is resting in order to position at least a region of interest (ROI) of the subject within the imaging system.
  • a scan range may be at least approximately determined based on the live images captured by the camera 108.
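  • The camera-assisted positioning described above reduces to two small computations: a table offset that brings the detected ROI to the scanner isocenter, and an approximate scan range padded around the ROI extent. The coordinate convention, units, and margin below are assumptions for illustration; the detection of the ROI itself is outside this sketch:

```python
# Illustrative sketch of camera-assisted positioning; coordinates, units,
# and the safety margin are assumed, not specified by the patent.

def table_offset_mm(roi_center_mm, isocenter_mm):
    """Table travel needed to bring the ROI center to the scanner isocenter."""
    return isocenter_mm - roi_center_mm

def scan_range_mm(roi_top_mm, roi_bottom_mm, margin_mm=20.0):
    """Approximate scan range from the ROI extent, padded by a safety margin."""
    return (roi_top_mm - margin_mm, roi_bottom_mm + margin_mm)

offset = table_offset_mm(roi_center_mm=310.0, isocenter_mm=400.0)
rng = scan_range_mm(roi_top_mm=200.0, roi_bottom_mm=450.0)
```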
  • the system 100 further comprises the ECS 110 that is communicatively coupled to the imaging system 101 via a wired or wireless connection.
  • the ECS 110 comprises a plurality of processors 113 running one or more virtual machines (VMs) 114 configured for different types of tasks.
  • the plurality of processors 113 comprises one or more graphics processing units (GPUs) and/or one or more central processing units (CPUs).
  • the ECS 110 further comprises a non-transitory memory 115 storing executable instructions that may be executed by one or more of the plurality of processors 113.
  • the non-transitory memory 115 may further include a deep learning (DL) model 116 that may be executed by a virtual machine 114 of the plurality of processors 113. While only one DL model is depicted in FIG. 1, it is to be understood that ECS 110 may include more than one DL model stored in non-transitory memory.
  • the system 100 further comprises one or more external databases 130 that the imaging system 101 and the ECS 110 may be communicatively coupled to via a network 120.
  • the one or more external databases 130 may comprise, as exemplary and non-limiting examples, one or more of a hospital information system (HIS), a radiology information system (RIS), a picture archive and communication system (PACS), and an electronic medical record (EMR) system.
  • the imaging system 101 and/or the ECS 110 may retrieve information such as subject metadata, which may include metadata describing or relating to a particular subject to be scanned (e.g., patient age, gender, height, and weight), which may be retrieved from an EMR for the subject.
  • subject metadata may include metadata describing or relating to a particular subject to be scanned (e.g., patient age, gender, height, and weight), which may be retrieved from an EMR for the subject.
  • the imaging system 101 and/or the ECS 110 may use subject metadata retrieved from the one or more external databases 130 to automatically determine a scan protocol, train DL applications, and so on.
  • the system 100 further comprises a repository 140 communicatively coupled to one or more of the imaging system 101 and the ECS 110 via the network 120.
  • the repository 140 stores a plurality of applications 142 that may be utilized by one or more of the imaging system 101 and the ECS 110.
  • the imaging system 101 and/or the ECS 110 may retrieve one or more applications of the plurality of applications 142 from the repository 140 via the network 120.
  • the repository 140 may push an application of the plurality of applications 142 to one or more of the imaging system 101 and the ECS 110.
  • a method for retrieving an application from the repository 140 is described further herein with regard to FIG. 5.
  • FIG. 2 shows a high-level flowchart illustrating an example method 200 for synchronizing an imaging system with an edge computing system (ECS) according to an embodiment.
  • method 200 relates to streaming imaging data acquired during a scan from an imaging system such as imaging system 101 to an ECS such as ECS 110 for processing the imaging data concurrently with the scan.
  • Method 200 is described with regard to the systems and components depicted in FIG. 1 and described hereinabove, though it should be appreciated that the method may be implemented with other systems and components without departing from the scope of the present disclosure.
  • Method 200 may be implemented as executable instructions in non-transitory memory 104 of an imaging system 101 which may be executed by a processor 103 of the imaging system 101.
  • Method 200 begins at 205.
  • method 200 receives an indication of an initial task, also referred to herein as a primary task.
  • the initial task comprises a clinical task to be performed by the imaging system, and thus generally specifies what type of scan should be performed by the imaging system.
  • the initial tasks are generally clinical actions that must be completed during pre-scan and scan of an imaging prescription.
  • Some examples of pre-scan related tasks are patient set-up; receiving and reviewing data from the EMR, CDSS, and HIS/RIS; and selecting the protocol(s) for the scan(s).
  • Some examples of scan related tasks are patient positioning, image acquisition and image reconstruction.
  • method 200 may receive the indication of the initial task, for example, via a user interface 107 of the imaging system 101. That is, an operator of the imaging system 101 may select the initial task or otherwise indicate the initial task via the user interface 107.
  • method 200 may automatically identify the initial task, for example by evaluating an EMR of the subject to be scanned.
  • an initial task may include or be defined by a diagnostic goal of the imaging scan, e.g., the reason a patient was referred for the imaging scan.
  • Example diagnostic goals may include diagnosing presence (or absence) of a brain bleed, diagnosing presence (or absence) of a liver lesion, determining a presence or extent of a spine fracture, and performing organ segmentation to plan for radiation therapy.
  • Each diagnostic goal may be targeted to a specific anatomy or set of anatomies and thus may dictate a type of scan/exam to be performed.
  • the diagnosis of the brain bleed may be targeted to the brain and thus may dictate that a non-contrast head scan be performed
  • the diagnosis of the liver lesion may be targeted to the liver and thus may dictate that an abdominal scan be performed
  • the diagnosis of the spine fracture may be targeted to the neck and/or back and thus may dictate that a full spine non-contrast scan be performed
  • the organ segmentation may be targeted to the entire abdominal region (or even the whole body) and thus may dictate that a non-contrast whole body scan be performed.
  • Each different scan type may have an associated set of scan parameters that dictate how the scanner is to be controlled during the scan. For example, for computed tomography scans, each different scan type may dictate the specific tube current (mA), tube voltage (kV), and gantry rotational speed for the CT scanner during the scan.
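  • The mapping from diagnostic goal to scan type and scan parameters described above is effectively a pair of lookup tables. The tube current, tube voltage, and rotation values below are illustrative placeholders only, not clinical recommendations or values from the patent:

```python
# Hypothetical per-scan-type CT acquisition parameters; the numbers are
# placeholders for illustration, not clinical values.
SCAN_PARAMETERS = {
    "non_contrast_head": {"tube_current_ma": 250, "tube_voltage_kv": 120, "rotation_s": 1.0},
    "abdomen":           {"tube_current_ma": 300, "tube_voltage_kv": 120, "rotation_s": 0.5},
    "full_spine":        {"tube_current_ma": 200, "tube_voltage_kv": 140, "rotation_s": 0.8},
}

# Each diagnostic goal is targeted to an anatomy and dictates a scan type.
DIAGNOSTIC_GOAL_TO_SCAN = {
    "brain_bleed": "non_contrast_head",
    "liver_lesion": "abdomen",
    "spine_fracture": "full_spine",
}

def parameters_for_goal(goal):
    """Resolve a diagnostic goal to its scan type and acquisition parameters."""
    scan_type = DIAGNOSTIC_GOAL_TO_SCAN[goal]
    return scan_type, SCAN_PARAMETERS[scan_type]

scan_type, params = parameters_for_goal("brain_bleed")
```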
  • method 200 determines if the initial task is associated with a secondary task. Whereas the initial task specifies how imaging data may be acquired, a secondary task specifies how the imaging data may be processed. As a non-limiting example, a secondary task may comprise automatic lesion detection, organ classification, segmentation, or any type of post-processing method that may be applied to imaging data.
  • the secondary tasks are generally clinical actions that must be completed during post-scan of an imaging prescription.
  • post-scan related tasks are post-processing of images by applying post-processing applications.
  • a particular initial task may always specify one or more secondary tasks to be performed in conjunction with the initial or primary task.
  • the initial task indication received at 205 may specify one or more secondary tasks.
  • an initial task may not specify a secondary task, but an operator of the imaging system may select a secondary task when indicating the initial task.
  • the initial task may specify a brain scan and a secondary task may comprise lesion detection. Since lesion detection may not be necessary for every brain scan, the operator of the imaging system may optionally select a secondary task comprising lesion detection for the initial task of a brain scan for particular cases where the presence of a lesion may be suspected.
  • method 200 may automatically determine a secondary task based on the initial task as well as subject metadata, such as an EMR for the subject.
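  • The resolution of a secondary task sketched above (an explicit operator selection takes precedence; otherwise a default associated with the initial task applies, which may be none) can be expressed compactly. The task names and the association table here are hypothetical:

```python
# Sketch of secondary-task resolution; task names and the association
# table are assumed for illustration.
DEFAULT_SECONDARY = {
    "brain_scan": None,               # no secondary task unless the operator selects one
    "liver_scan": "lesion_detection", # always paired with lesion detection (assumed)
}

def resolve_secondary_task(initial_task, operator_choice=None):
    """Prefer the operator's selection; otherwise use the task's default, if any."""
    if operator_choice is not None:
        return operator_choice
    return DEFAULT_SECONDARY.get(initial_task)

# Operator explicitly requests lesion detection for a brain scan.
chosen = resolve_secondary_task("brain_scan", operator_choice="lesion_detection")
# A liver scan picks up its associated secondary task automatically.
default = resolve_secondary_task("liver_scan")
```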
  • method 200 proceeds to 212.
  • method 200 performs a scan in accordance with the initial task indication.
  • Method 200 may further reconstruct and output one or more images from imaging data acquired during the scan. Since no secondary task is associated with the initial task, no imaging data acquired at 212 is streamed to the ECS. Method 200 then ends.
  • method 200 continues to 215.
  • method 200 outputs a secondary task indication to the ECS 110.
  • the ECS 110 performs post-processing of imaging data based on the secondary task indication.
  • method 200 begins scanning the subject with the scanner 105 in accordance with the initial task indication to acquire imaging data.
  • method 200 begins scanning the subject in accordance with a scan protocol associated with the initial task.
  • the scan protocol may be selected by protocol engine 106, as a non-limiting example.
  • method 200 synchronizes the imaging system 101 to the ECS 110 based on the state of the scanner 105. To that end, at 225, method 200 evaluates the state of the scanner 105. At 230, method 200 determines if the scanner 105 is in a critical state.
  • the scanner 105 may be in a critical state when asynchronous external operations could negatively impact its ability to meet essential safety requirements. For example, the scanner 105 may be in a critical state when active data acquisition is occurring, such as when the X-ray source is on and image data is being stored to a disk or memory storage location.
  • a scan failure could result (e.g., due to data being lost, delayed, or other issues) and the patient would need to be re-scanned, exposing the patient to additional radiation dose.
  • a CT scanner is in a critical state when the X-ray source is on or during rapid sequences of acquisitions.
  • Another example of a critical state is when the system is performing an interventional procedure, where both the scan data storage and image display operations are time sensitive and essential to safety.
  • method 200 proceeds to 232, wherein method 200 continues the scan. More specifically, method 200 continues scanning the subject but does not transmit acquired imaging data to the ECS 110.
  • method 200 continues to 233, wherein method 200 transmits acquired imaging data to the ECS 110 for post-processing.
  • the ECS 110 processes the transmitted imaging data in accordance with the secondary task.
  • Method 200 proceeds from both 232 and 233 to 235.
  • method 200 determines if the scan is complete. If the scan is not complete (“NO”), method 200 proceeds to 237, wherein method 200 continues the scan.
  • Method 200 proceeds to 225 to re-evaluate the scanner state.
  • method 200 continually evaluates the state of the scanner 105 during a scan to determine whether the scanner 105 is in a critical state, and streams or transmits the imaging data acquired during the scan to the ECS 110 only when the scanner 105 is not in a critical state. In other words, method 200 continually streams imaging data as it is acquired to the ECS 110 during the scan unless the scanner 105 is in a critical state.
  • the ECS 110 may receive and process the imaging data concurrently with the scan. Furthermore, the imaging data is not corrupted by transmitting it during a critical state, nor is the functioning of the imaging system 101 disturbed when the scanner 105 is operating in a critical state.
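  • The critical-state-gated streaming just described can be sketched as a loop that withholds transmission while the scanner is in a critical state. Whether withheld data is buffered and flushed once the state clears is an assumption of this sketch (the patent only states that data is not transferred during a critical state), as is the scanner interface:

```python
# Sketch of critical-state-gated streaming to the ECS. Frames acquired
# during a critical state are buffered (an assumption) and streamed once
# the scanner leaves the critical state.
def stream_scan(frames, is_critical, send_to_ecs):
    """Iterate over acquired frames, streaming only outside critical states."""
    buffered = []
    for frame, critical in zip(frames, is_critical):
        buffered.append(frame)
        if not critical:
            # Scanner is not in a critical state: safe to transmit.
            for f in buffered:
                send_to_ecs(f)
            buffered.clear()
    return buffered  # frames still held if the scan ended in a critical state

sent = []
leftover = stream_scan(
    frames=["f1", "f2", "f3"],
    is_critical=[True, False, True],  # e.g., X-ray source on during f1 and f3
    send_to_ecs=sent.append,
)
```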
  • method 200 determines that the scan is complete at 235 (“YES”), method 200 proceeds to 240.
  • method 200 reconstructs one or more images from the acquired imaging data.
  • Method 200 may reconstruct the one or more images using any suitable iterative or analytical image reconstruction algorithm, as a non-limiting example.
  • method 200 receives decision support from the ECS 110.
  • the decision support comprises the results of one or more post-processing algorithms applied to the imaging data streamed to the ECS 110 during the scan at 233. For example, if the exam is a brain bleed exam where a scan of the head is performed, the decision support output may include an indication of whether or not bleeding is detected. If the exam is a liver lesion exam where the liver is scanned, the decision support output may include whether or not a lesion is detected, and if so, the decision support output may include an indication of the size, shape, position, etc., of the lesion.
  • method 200 outputs the one or more images and the decision support.
  • Method 200 may output the one or more images and the decision support to a display device for display to the operator of the imaging system 101, for example. Additionally or alternatively, method 200 may output the one or more images and the decision support to a mass storage for later review, or to a picture archiving and communication system (PACS) for review at a remote workstation, for example.
  • a method for an imaging system comprises performing a scan of a subject to acquire imaging data, transmitting the imaging data during the scan to a computing device communicatively coupled to and positioned external to the imaging system, receiving a decision support output from the computing device, the decision support output calculated by the computing device from the imaging data, and displaying an image reconstructed from the imaging data and the decision support output.
  • deep learning techniques can be used to provide decision support in parallel with the acquisition of imaging data, thereby reducing the time necessary to make an informed diagnosis.
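The parallel acquisition-and-analysis flow described above can be sketched with a queue feeding a worker thread; everything here (view contents, the placeholder inference step, the result format) is illustrative only, not the disclosed implementation:

```python
import queue
import threading

def acquire_views(n_views, out_q):
    """Simulate acquisition: push each view onto a queue as it is acquired."""
    for i in range(n_views):
        out_q.put({"view": i, "data": [i] * 4})  # placeholder projection data
    out_q.put(None)  # sentinel: scan complete

def edge_worker(in_q, results):
    """Process views as they stream in, concurrently with acquisition."""
    while True:
        item = in_q.get()
        if item is None:
            break
        # Placeholder for a DL inference step on the streamed data.
        results.append(sum(item["data"]))

q = queue.Queue()
results = []
worker = threading.Thread(target=edge_worker, args=(q, results))
worker.start()
acquire_views(8, q)
worker.join()
decision_support = {"views_processed": len(results)}
```

Because processing overlaps acquisition, the decision support is available essentially as soon as the scan completes rather than only afterward.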
  • the output data includes scan data, image data, EMR data for the patient, a type of scanner (as well as other scanner metadata), scan protocol, decision support output, and any clinical diagnosis associated with the scan.
  • the imaging system may retrieve data from a hospital information system (HIS) and/or radiology information system (RIS) as well as an EMR for a given patient (to obtain patient data as well as the clinical outcome).
  • the output data can be used by DL algorithms to optimize/personalize scan protocols for different patients and image quality targets, improve decision support, and potentially automate clinical diagnosis.
  • FIG. 3 shows a high-level flowchart illustrating an example method 300 for generating deep learning training data with an imaging system according to an embodiment.
  • method 300 relates to leveraging all information that may be potentially relevant to the quality of a reconstructed image and its analysis for the training of a deep learning model.
  • Method 300 is described with regard to the systems and components of FIG. 1, though it should be appreciated that the method may be implemented with other systems and components without departing from the scope of the present disclosure.
  • Method 300 may be implemented as executable instructions in non-transitory memory 104 of an imaging system 101 and executed by a processor 103 of the imaging system 101.
  • Method 300 begins at 305.
  • method 300 receives a selection of a scan protocol.
  • the scan protocol may be manually selected, for example by an operator via a user interface 107, or may be automatically selected, for example via a protocol engine 106.
  • method 300 retrieves subject metadata for the subject to be scanned from one or more external databases.
  • the subject metadata may comprise at least a subset of information relating to the subject, and as such may include but is not limited to demographic information and medical history.
  • Method 300 may retrieve the subject metadata, for example, from the one or more databases 130, including one or more of a HIS, RIS, and EMR database, via the network 120.
  • method 300 performs a scan of the subject according to the scan protocol to acquire imaging data.
  • method 300 may control the scanner 105 to scan the subject, wherein the scan protocol selected at 305 prescribes the control parameters of the scanner 105.
  • method 300 reconstructs an image from the imaging data acquired during the scan at 315.
  • Method 300 may reconstruct the image using any suitable image reconstruction algorithm in accordance with the modality of the imaging system.
  • method 300 transmits the image reconstructed at 320 and/or imaging data acquired at 315 to the ECS 110.
  • the ECS 110 processes the image and/or the imaging data using one or more DL algorithms to generate decision support output.
  • While transmitting the imaging data is depicted as occurring after the scan in FIG. 3, it should be appreciated that in some examples method 300 may transmit the imaging data during the scan at 315, such that the ECS 110 processes the imaging data concurrently with the scan at 315, as discussed hereinabove with regard to FIG. 2.
  • the ECS 110 processes the image and/or the imaging data either during the scan or after the scan is completed to generate the decision support output.
  • method 300 receives, from the ECS 110, decision support output calculated for the image and/or imaging data with a learning model (e.g., a DL algorithm such as a neural network).
  • method 300 displays the image and the decision support output. Both the image and the decision support output may be displayed via a display device of the imaging system 101, for example. The decision support output may be displayed superimposed or alongside the image, depending on the type of decision support output.
  • method 300 receives an outcome decision relating to the displayed image and decision support output.
  • the outcome decision comprises, for example, a clinical diagnosis made by a physician or radiologist based on the displayed image and decision support output.
  • the outcome decision may comprise a ground truth relating to the decision support output.
  • the decision support output comprises a detection or classification of an object such as a lesion in the image
  • a ground truth for the decision support output may comprise an indication that the detection or classification is correct or incorrect.
  • method 300 updates a training dataset with the imaging data, the image, the scan protocol, the subject metadata, the decision support output, the outcome decision, and system metadata.
  • updating the training dataset may comprise aggregating the imaging data, the image, the scan protocol, the subject metadata, the decision support output, the outcome decision, and the system metadata into a single case to be added to the training dataset.
  • the system metadata includes information that characterizes the imaging system 101 itself, such as a model number of the imaging system 101, a manufacturing date of the imaging system 101, and so on.
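Aggregating the listed items into a single training case, as described at 340, might look like the following sketch; the field names and example values are hypothetical, chosen only to mirror the data categories named above:

```python
from dataclasses import dataclass, asdict
from typing import Any

@dataclass
class TrainingCase:
    imaging_data: Any
    image: Any
    scan_protocol: dict
    subject_metadata: dict   # e.g., demographics, medical history
    system_metadata: dict    # e.g., model number, manufacturing date
    decision_support: dict
    outcome_decision: str    # clinical diagnosis / ground truth

training_dataset = []
case = TrainingCase(
    imaging_data=[0.1, 0.2],
    image=[[0, 1], [1, 0]],
    scan_protocol={"kVp": 120, "scan_range_mm": 250},
    subject_metadata={"age": 54, "history": ["hypertension"]},
    system_metadata={"model": "CT-X", "manufactured": "2017-03"},
    decision_support={"lesion_detected": True},
    outcome_decision="lesion confirmed",
)
training_dataset.append(asdict(case))
```

Bundling every case into one record keeps the imaging data, its context, and its ground truth together, so the learning model can be retrained on factors beyond the pixels alone.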
  • method 300 transmits the updated training dataset to the ECS 110 for updating the learning model used to generate the decision support output received at 330.
  • additional data that may impact the performance of the learning model such as the subject metadata, the system metadata, the scan protocol, and a clinical diagnosis made by a physician based on the reconstructed image and/or decision support output, may be leveraged to improve the performance of the learning model.
  • Method 300 then ends.
  • a method for an imaging system comprises performing a scan of a subject according to a scan protocol to acquire imaging data; displaying an image and decision support associated with the image, the image reconstructed from the imaging data, and the decision support calculated with a learning model and the imaging data; and updating a training dataset for the learning model with the imaging data, the image, the scan protocol, subject metadata describing the subject, the decision support, an outcome decision relating to the image and the decision support, and system metadata relating to the imaging system.
  • Radiologists and technicians must be trained to select and prepare the correct scan protocols for each patient, who may also be positioned differently within the imaging system for a given protocol. If any portion of the scan preparation is incorrect, the quality of the resulting image(s) may be too poor for clinical use and so the scan must be performed again. Further, image analysis must be performed by radiologists/physicians and so the turn-around time for a clinical report may be slow.
  • As described further herein below with regard to FIG. 4, the imaging system 101 may therefore include an “express mode” that utilizes the extended processing power provided by the ECS 110 as well as the more sophisticated DL applications 116 to automate as many parts of the imaging procedure as possible.
  • Hospital information systems, radiology information systems, and electronic medical record data are used by the protocol engine 106 to select and personalize a scan protocol for a given patient, which in some examples may be assisted by deep learning.
  • the patient is automatically positioned based on the imaging protocol and with the assistance of a camera.
  • the camera further enables a determination of the correct scan range for the patient, so that the scan protocol can be further adjusted and personalized for the patient.
  • Scanning and post-processing/decision support are automatically performed by the imaging system 101 and the ECS 110.
  • DL applications 116 executed by the plurality of processors 113 of the ECS 110 provide automatic processing of acquired data and at least a draft assessment of reconstructed images.
  • FIG. 4 shows a high-level flowchart illustrating an example method 400 for an imaging system express mode according to an embodiment.
  • method 400 relates to controlling an imaging system to scan a subject with minimal operator intervention or input.
  • Method 400 is described with regard to the systems and components of FIG. 1, though it should be appreciated that the method may be implemented with other systems and components without departing from the scope of the present disclosure.
  • Method 400 may be implemented as executable instructions in non-transitory memory 104 of the imaging system 101 and non-transitory memory 115 of the ECS 110, and may be executed by the processor 103 of the imaging system 101 and the plurality of processors 113 of the ECS 110.
  • Method 400 begins at 405. At 405, method 400 receives an indication of an express mode for a scan. Method 400 may receive the indication of the express mode for the scan, for example, via the user interface 107 of the imaging system 101.
  • the user interface 107 may include a dedicated button, switch, or another mechanism for indicating a desire to use an express mode of the imaging system 101
  • method 400 receives an identification of the subject to be scanned.
  • method 400 may receive the identification of the subject to be scanned via the user interface 107.
  • an operator of the imaging system 101 may manually input an identification of the subject to be scanned.
  • method 400 may automatically identify the subject to be scanned.
  • method 400 may obtain an image of a face of the subject via the camera 108, and may employ facial recognition techniques to identify the subject based on the image.
  • method 400 retrieves an EMR for the subject identified at 410, for example by accessing the EMR in the one or more databases 130 via the network 120.
  • method 400 determines an initial or primary task based on the EMR.
  • the primary task may indicate the clinical context of the scan and thus may indicate what type of scan should be performed.
  • Method 400 may determine the primary task from the EMR which may include a prescription of a particular type of scan by a physician.
  • method 400 determines a scan protocol for the primary task. For example, method 400 may input the primary task to the protocol engine 106 of the imaging system 101 to determine the scan protocol. In other examples, the primary task may be associated with a particular scan protocol. Furthermore, the scan protocol may be determined or adjusted based on the subject. For example, the scan protocol may prescribe more or less radiation dose according to the age and size of the subject.
  • At 430, method 400 positions the subject within the imaging system. In some examples, method 400 may determine the position of the subject on a moveable table relative to the imaging system, for example by processing live images of the subject captured by the camera 108.
  • Method 400 may control a table motor controller to adjust the position of the table, and thus the subject, such that the region of interest to be scanned is within an imaging region of the imaging system (e.g., positioned within a gantry between a radiation source and a radiation detector).
  • method 400 determines a scan range for the subject.
  • Method 400 may determine the scan range for the subject based on live images captured by the camera 108. For example, different subjects have bodies of different sizes, and so the scan range should be adjusted accordingly. Method 400 may therefore evaluate the images captured by the camera 108 to determine the size and proportions of the subject, and may in turn set an appropriate scan range that would scan the ROI of the subject.
  • method 400 adjusts the scan protocol with the scan range determined at 435.
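The camera-based scan-range adjustment at 435–440 can be sketched as below; the landmark positions, margin, and protocol fields are hypothetical placeholders standing in for values a real camera pipeline would estimate:

```python
def scan_range_from_landmarks(landmarks_mm, roi, margin_mm=20.0):
    """Estimate a per-subject scan range (start, end) along the table axis
    from body landmarks located in camera images."""
    start, end = landmarks_mm[roi]
    return (start - margin_mm, end + margin_mm)

def adjust_protocol(protocol, scan_range):
    """Return a copy of the scan protocol with the personalized range set."""
    adjusted = dict(protocol)
    adjusted["scan_start_mm"], adjusted["scan_end_mm"] = scan_range
    return adjusted

# Hypothetical landmark extents derived from the camera (mm from table origin).
landmarks = {"head": (0.0, 240.0), "abdomen": (550.0, 850.0)}
protocol = {"kVp": 120, "mA": 200}
adjusted = adjust_protocol(protocol, scan_range_from_landmarks(landmarks, "head"))
```

Keeping the adjustment as a copy of the base protocol preserves the original protocol for reuse with the next subject.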
  • method 400 performs a scan of the subject according to the adjusted scan protocol to acquire imaging data.
  • method 400 controls the scanner 105 to scan the subject according to the adjusted scan protocol.
  • method 400 outputs imaging data to the ECS 110.
  • the imaging data may be output to the ECS 110 for processing during the scan.
  • the ECS 110 processes the imaging data using a learning model to generate decision support output.
  • method 400 receives the decision support output from the ECS 110.
  • method 400 reconstructs an image from the imaging data acquired at 445.
  • Method 400 may reconstruct the image from the imaging data using any suitable image reconstruction technique.
  • While FIG. 4 depicts the reconstruction of the image as occurring after outputting the imaging data to the ECS 110, it should be understood that in some examples, the reconstructed image may be transmitted to the ECS 110, and the decision support output received at 455 may be generated by the ECS 110 based on the reconstructed image rather than the raw imaging data.
  • method 400 outputs the image and the decision support output to one or more of a display device for display, a mass storage for subsequent retrieval and review, and a PACS. Method 400 then ends.
  • a method for an imaging system comprises receiving an identification of a subject to be scanned, automatically determining a personalized scan protocol for the subject, automatically performing a scan of the subject according to the personalized scan protocol to acquire imaging data, and displaying an image and decision support, the image and the decision support automatically generated from the imaging data.
  • an application repository enables remote deployment of software applications for the imaging system.
  • the imaging system 101 and/or the ECS 110 may be coupled to the application repository 140 via a network 120.
  • the imaging system 101 or the ECS 110 may retrieve a new or updated application 142 from the application repository 140.
  • the application repository 140 may push an application 142 to the ECS 110 and/or the imaging system 101.
  • certain applications may only be compatible with particular combinations of an imaging system 101 and an ECS 110.
  • different applications may be displayed/deployable to an updated high-power imaging system coupled to a low-power ECS versus an outdated low-power imaging system coupled to a high-power ECS.
  • FIG. 5 shows a high-level flowchart illustrating an example method 500 for managing applications on an edge computing system (ECS) according to an embodiment.
  • method 500 relates to retrieving a new or updated application for processing images and/or imaging data from an external repository.
  • Method 500 is described with regard to the systems and components of FIG. 1, though it should be appreciated that the method may be implemented with other systems and components without departing from the scope of the present disclosure.
  • Method 500 may be implemented as executable instructions in the non-transitory memory 115 of the ECS 110, and may be executed by one or more of the plurality of processors 113 of the ECS 110.
  • Method 500 begins at 505.
  • method 500 transmits an access request to a repository, such as repository 140.
  • the access request may include an identification of the imaging system 101 and the ECS 110.
  • the repository 140 includes a plurality of applications that may or may not be compatible with one or more of the imaging system 101 and the ECS 110. Therefore, the repository 140 determines, based on the identification of the imaging system 101 and the ECS 110, which applications of the plurality of applications are compatible with the given combination of the imaging system 101 and the ECS 110.
  • method 500 receives a list of compatible applications from the repository 140.
  • the list of compatible applications may comprise a subset of the plurality of applications stored in the repository 140.
  • method 500 receives a selection of an application in the list of compatible applications.
  • an operator of the imaging system 101 may view the list of compatible applications and select a desired application from the list via the user interface 107.
  • the selection may thus be transmitted from the imaging system 101 to the ECS 110.
  • the ECS 110 may include its own user interface for enabling the selection of an application, and so the selection of the application may be received via said user interface.
  • the selection of the application may be automatic. For example, if the repository 140 includes an updated version of an application already installed on the ECS 110, method 500 may automatically select the updated version of the application from the list of compatible applications.
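The compatibility filtering and automatic update selection described above can be sketched as follows; the catalog entries, identifiers, and version tuples are invented for illustration:

```python
def compatible_apps(catalog, system_id, ecs_id):
    """Filter the repository catalog down to applications supporting
    this particular imaging-system/ECS combination."""
    return [a for a in catalog
            if system_id in a["systems"] and ecs_id in a["ecs"]]

def pick_updates(compatible, installed):
    """Auto-select apps whose repository version is newer than the
    version already installed on the ECS."""
    updates = []
    for app in compatible:
        cur = installed.get(app["name"])
        if cur is not None and app["version"] > cur:
            updates.append(app)
    return updates

catalog = [
    {"name": "segmenter", "version": (2, 1),
     "systems": {"ct-700"}, "ecs": {"ecs-1"}},
    {"name": "bleed-detect", "version": (1, 0),
     "systems": {"mr-900"}, "ecs": {"ecs-1"}},
]
apps = compatible_apps(catalog, "ct-700", "ecs-1")
updates = pick_updates(apps, {"segmenter": (2, 0)})
```

Filtering on both identifiers mirrors the point made above that an application may be deployable to one imaging-system/ECS pairing but not another.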
  • method 500 retrieves the selected application from the repository 140 and installs it locally in non-transitory memory 115.
  • the DL application 116 in non-transitory memory 115 of the ECS 110 depicted in FIG. 1 may thus comprise an application retrieved from the repository 140.
  • method 500 generates a decision support calculation from imaging data received from the imaging system 101 with the application. For example, if the application comprises a segmentation application, method 500 may segment an image reconstructed from the imaging data acquired with the imaging system 101, and the segmentation of the image comprises the decision support calculation. At 530, method 500 outputs the decision support calculation to the imaging system 101. Method 500 then ends.
  • a method for an ECS communicatively coupled to and positioned external to an imaging system comprises receiving, from an application repository communicatively coupled to the ECS via a network, a list of compatible deep learning applications, retrieving, from the application repository, a deep learning application selected from the list, receiving imaging data from the imaging system, processing the imaging data with the deep learning application to generate a decision support calculation, and outputting the decision support calculation to the imaging system.
  • FIG. 6 illustrates an exemplary CT system 600 configured to allow fast and iterative image reconstruction.
  • the CT system 600 is configured to image a subject 612 such as a patient, an inanimate object, one or more manufactured parts, and/or foreign objects such as dental implants, stents, and/or contrast agents present within the body.
  • the CT system 600 includes a gantry 602, which in turn, may further include at least one X-ray radiation source 604 configured to project a beam of X-ray radiation 606 for use in imaging the subject 612.
  • the X-ray radiation source 604 is configured to project the X-rays 606 towards a detector array 608 positioned on the opposite side of the gantry 602.
  • While FIG. 6 depicts only a single X-ray radiation source 604, in certain embodiments, multiple X-ray radiation sources may be employed to project a plurality of X-rays 606 for acquiring projection data corresponding to the subject 612 at different energy levels.
  • While a CT system is described by way of example, it should be understood that the present techniques may also be useful when applied to images acquired using other imaging modalities, such as tomosynthesis, MRI, C-arm angiography, and so forth.
  • the present discussion of a CT imaging modality is provided merely as an example of one suitable imaging modality.
  • the CT system 600 is communicatively coupled to an edge computing system (ECS) 610 configured to process projection data with DL techniques.
  • the CT system 600 may transmit projection data as it is acquired to the ECS 610, and in turn the ECS 610 processes the projection data to provide decision support alongside images reconstructed from the projection data.
  • a radiation source projects a fan-shaped beam which is collimated to lie within an X-Y plane of a Cartesian coordinate system and generally referred to as an “imaging plane.”
  • the radiation beam passes through an object being imaged, such as the patient or subject 612.
  • the beam after being attenuated by the object, impinges upon an array of radiation detectors.
  • the intensity of the attenuated radiation beam received at the detector array is dependent upon the attenuation of a radiation beam by the object.
  • Each detector element of the array produces a separate electrical signal that is a measurement of the beam attenuation at the detector location.
  • the attenuation measurements from all the detectors are acquired separately to produce a transmission profile.
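The relationship between beam attenuation and the detector measurement described above follows the Beer–Lambert law; a minimal numeric sketch (the coefficient values and path length are illustrative):

```python
import math

def transmitted_intensity(i0, mus, dl):
    """Beer–Lambert attenuation along one ray: I = I0 * exp(-sum(mu_i * dl))."""
    return i0 * math.exp(-sum(mu * dl for mu in mus))

def line_integral(i0, i):
    """Recover the attenuation line integral measured by one detector:
    p = -ln(I / I0)."""
    return -math.log(i / i0)

mus = [0.02, 0.19, 0.02]  # attenuation coefficients along the ray (1/mm)
dl = 1.0                  # path length through each voxel (mm)
i = transmitted_intensity(1000.0, mus, dl)
p = line_integral(1000.0, i)  # equals sum(mus) * dl
```

Taking the negative logarithm of the transmission is what converts the raw detector reading into the line-integral projection data that reconstruction operates on.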
  • the radiation source and the detector array are rotated with a gantry within the imaging plane and around the object to be imaged such that an angle at which the radiation beam intersects the object constantly changes.
  • a group of radiation attenuation measurements, i.e., projection data, from the detector array at one gantry angle is referred to as a “view.”
  • A “scan” of the object includes a set of views made at different gantry angles, or view angles, during one revolution of the radiation source and detector. It is contemplated that the benefits of the methods described herein accrue to medical imaging modalities other than CT, so as used herein the term view is not limited to the use as described above with respect to projection data from one gantry angle.
  • the term “view” is used to mean one data acquisition whenever there are multiple data acquisitions from different angles, whether from a CT, PET, or SPECT acquisition, and/or any other modality including modalities yet to be developed as well as combinations thereof in fused embodiments.
  • the projection data is processed to reconstruct an image that corresponds to a two-dimensional slice taken through the object.
  • One method for reconstructing an image from a set of projection data is referred to in the art as the filtered back-projection (FBP) technique.
  • Transmission and emission tomography reconstruction techniques also include statistical iterative methods such as maximum likelihood expectation maximization (MLEM) and ordered-subsets expectation maximization, as well as other iterative reconstruction techniques.
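The MLEM update mentioned above can be sketched on a toy problem; the system matrix, sizes, and iteration count below are illustrative only, not a clinical configuration:

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Basic MLEM iteration: x <- x * (A^T (y / (A x))) / (A^T 1),
    starting from a uniform positive image."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)  # sensitivity image, A^T 1
    for _ in range(n_iter):
        x *= (A.T @ (y / (A @ x))) / sens
    return x

# Tiny system matrix: 3 measurements of 2 unknown pixel values.
A = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [0.7, 0.7]])
x_true = np.array([2.0, 3.0])
y = A @ x_true            # consistent, noiseless measurements
x_est = mlem(A, y)
```

With consistent noiseless data and a well-conditioned system, the multiplicative update converges to the true values while keeping every estimate nonnegative, which is the property that makes MLEM attractive for emission data.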
  • This process converts the attenuation measurements from a scan into integers called “CT numbers” or “Hounsfield units,” which are used to control the brightness of a corresponding pixel on a display device.
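The conversion to Hounsfield units is a linear rescaling of the reconstructed attenuation coefficient relative to water; a minimal sketch (the water coefficient value is an approximate assumption):

```python
def hounsfield(mu, mu_water):
    """CT number in Hounsfield units: HU = 1000 * (mu - mu_water) / mu_water."""
    return 1000.0 * (mu - mu_water) / mu_water

MU_WATER = 0.019  # approximate linear attenuation coefficient of water (1/mm)
hu_water = hounsfield(MU_WATER, MU_WATER)  # water is 0 HU by definition
hu_air = hounsfield(0.0, MU_WATER)         # air is -1000 HU
```

On this scale water is fixed at 0 HU and air at -1000 HU, which is what allows CT numbers from different scanners to be compared and mapped consistently to display brightness.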
  • a “helical” scan may be performed.
  • the patient is moved while the data for the prescribed number of slices is acquired.
  • Such a system generates a single helix from a cone beam helical scan.
  • the helix mapped out by the cone beam yields projection data from which images in each prescribed slice may be reconstructed.
  • the phrase “reconstructing an image” is not intended to exclude embodiments of the present disclosure in which data representing an image is generated but a viewable image is not. Therefore, as used herein the term “image” broadly refers to both viewable images and data representing a viewable image. However, many embodiments generate (or are configured to generate) at least one viewable image.
  • the CT system 600 may be communicatively coupled to a “cloud” network 620 through communication with the ECS 610.
  • FIG. 7 illustrates an exemplary imaging system 700 similar to the CT system 600 of FIG. 6.
  • the imaging system 700 may comprise the imaging system 101 described hereinabove with regard to FIG. 1.
  • the imaging system 700 includes the detector array 608 (see FIG. 6).
  • the detector array 608 further includes a plurality of detector elements 702 that together sense the X-ray beams 606 (see FIG. 6) that pass through a subject 704 such as a patient to acquire corresponding projection data.
  • the detector array 608 is fabricated in a multi-slice configuration including the plurality of rows of cells or detector elements 702. In such a configuration, one or more additional rows of the detector elements 702 are arranged in a parallel configuration for acquiring the projection data.
  • the imaging system 700 is configured to traverse different angular positions around the subject 704 for acquiring desired projection data.
  • the gantry 602 and the components mounted thereon may be configured to rotate about a center of rotation 706 for acquiring the projection data, for example, at different energy levels.
  • the mounted components may be configured to move along a general curve rather than along a segment of a circle.
  • the detector array 608 collects data of the attenuated X-ray beams.
  • the data collected by the detector array 608 undergoes pre-processing and calibration to condition the data to represent the line integrals of the attenuation coefficients of the scanned subject 704.
  • the processed data are commonly called projections.
  • two or more sets of projection data are typically obtained for the imaged object at different tube peak kilovoltage (kVp) levels, which change the peak and spectrum of energy of the incident photons comprising the emitted X-ray beams or, alternatively, at a single tube kVp level or spectrum with an energy resolving detector of the detector array 608.
  • the acquired sets of projection data may be used for basis material decomposition (BMD).
  • the measured projections are converted to a set of density line-integral projections.
  • the density line-integral projections may be reconstructed to form a density map or image of each respective basis material, such as bone, soft tissue, and/or contrast agent maps.
  • the density maps or images may be, in turn, associated to form a volume rendering of the basis material, for example, bone, soft tissue, and/or contrast agent, in the imaged volume.
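The two-energy decomposition into basis-material density line integrals described above amounts to solving a small linear system per ray; the coefficient values here are invented placeholders, not calibrated material responses:

```python
import numpy as np

# Hypothetical basis-material responses: rows are the low- and high-kVp
# measurements, columns are the (soft tissue, bone) basis materials.
M = np.array([[0.25, 0.60],
              [0.18, 0.30]])

def decompose(p_low, p_high):
    """Solve the 2x2 system relating the two measured line integrals
    to the two basis-material density line integrals."""
    return np.linalg.solve(M, np.array([p_low, p_high]))

# Forward-simulate measurements for known densities, then invert.
d_true = np.array([1.0, 0.4])  # soft tissue, bone along one ray
p_low, p_high = M @ d_true
d_est = decompose(p_low, p_high)
```

Repeating this per detector ray yields the density line-integral projections that are then reconstructed into the per-material density maps mentioned above.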
  • the basis material image produced by the imaging system 700 reveals internal features of the subject 704, expressed in the densities of the two basis materials.
  • the density image may be displayed to show these features.
  • a radiologist or physician would consider a hard copy or display of the density image to discern characteristic features of interest.
  • Such features might include lesions, sizes and shapes of particular anatomies or organs, and other features that would be discernable in the image based upon the skill and knowledge of the individual practitioner.
  • the imaging system 700 includes a control mechanism 708 to control movement of the components such as rotation of the gantry 602 and the operation of the X-ray radiation source 604.
  • the control mechanism 708 further includes an X-ray controller 710 configured to provide power and timing signals to the X-ray radiation source 604.
  • the control mechanism 708 includes a gantry motor controller 712 configured to control a rotational speed and/or position of the gantry 602 based on imaging requirements.
  • the control mechanism 708 further includes a data acquisition system (DAS) 714 configured to sample analog data received from the detector elements 702 and convert the analog data to digital signals for subsequent processing.
  • the data sampled and digitized by the DAS 714 is transmitted to a computer or computing device 716.
  • the computing device 716 stores the data in a storage device such as mass storage 718.
  • the mass storage 718 may include a hard disk drive, a floppy disk drive, a compact disk-read/write (CD-R/W) drive, a Digital Versatile Disc (DVD) drive, a flash drive, and/or a solid-state storage drive.
  • the computing device 716 provides commands and parameters to one or more of the DAS 714, the X-ray controller 710, and the gantry motor controller 712 for controlling system operations such as data acquisition and/or processing.
  • the computing device 716 controls system operations based on operator input.
  • the computing device 716 receives the operator input, for example, including commands and/or scanning parameters via an operator console 720 operatively coupled to the computing device 716.
  • the operator console 720 may include a keyboard (not shown) and/or a touchscreen to allow the operator to specify the commands and/or scanning parameters.
  • While FIG. 7 illustrates only one operator console 720, more than one operator console may be coupled to the imaging system 700, for example, for inputting or outputting system parameters, requesting examinations, and/or viewing images.
  • the imaging system 700 may be coupled to multiple displays, printers, workstations, and/or similar devices located either locally or remotely, for example, within an institution or hospital, or in an entirely different location via one or more configurable wired and/or wireless networks such as the Internet and/or virtual private networks.
  • the imaging system 700 either includes or is coupled to a picture archiving and communications system (PACS) 724.
  • the PACS 724 is further coupled to a remote system such as a radiology department information system, hospital information system, and/or to an internal or external network (not shown) to allow operators at different locations to supply commands and parameters and/or gain access to the image data.
  • the computing device 716 uses the operator-supplied and/or system-defined commands and parameters to operate a table motor controller 726, which in turn, may control a table 728 which may comprise a motorized table.
  • the table motor controller 726 moves the table 728 for appropriately positioning the subject 704 in the gantry 602 for acquiring projection data corresponding to the target volume of the subject 704.
  • an image reconstructor 730 uses the sampled and digitized X-ray data to perform high-speed reconstruction.
  • While FIG. 7 illustrates the image reconstructor 730 as a separate entity, in certain embodiments, the image reconstructor 730 may form part of the computing device 716. Alternatively, the image reconstructor 730 may be absent from the imaging system 700 and instead the computing device 716 may perform one or more of the functions of the image reconstructor 730. Moreover, the image reconstructor 730 may be located locally or remotely, and may be operatively connected to the imaging system 700 using a wired or wireless network. Particularly, one exemplary embodiment may use computing resources in a “cloud” network cluster for the image reconstructor 730.
  • the image reconstructor 730 stores the images reconstructed in the storage device or mass storage 718. Alternatively, the image reconstructor 730 transmits the reconstructed images to the computing device 716 for generating useful patient information for diagnosis and evaluation. In certain embodiments, the computing device 716 transmits the reconstructed images and/or the patient information to a display 732 communicatively coupled to the computing device 716 and/or the image reconstructor 730.
  • image reconstructor 730 may include such executable instructions in non-transitory memory, and may apply the methods described herein to reconstruct an image from scanning data.
  • computing device 716 may include the instructions in non-transitory memory, and may apply the methods described herein, at least in part, to a reconstructed image after receiving the reconstructed image from image reconstructor 730.
  • the methods and processes described herein may be distributed across image reconstructor 730 and computing device 716.
  • the display 732 allows the operator to evaluate the imaged anatomy.
  • the display 732 may also allow the operator to select a volume of interest (VOI) and/or request patient information, for example, via a graphical user interface (GUI) for a subsequent scan or processing.
  • FIG. 8 shows a block schematic diagram of a more detailed example system 800 for extending the capabilities of an imaging system according to an embodiment.
  • the system 800 may include a plurality of imaging systems 802, such as any suitable non-invasive imaging system, including but not limited to a computed tomography (CT) imaging system, a positron emission tomography (PET) imaging system, a magnetic resonance (MR) imaging system, an ultrasound system, and combinations thereof (e.g., a multi-modality imaging system such as a PET/CT or PET/MR imaging system), or other devices 802, such as an advanced workstation (AW) for visualizing images, a PACS, and mobile client devices, such as a tablet, iPad, smart phone, iPhone, etc.
  • the plurality of imaging systems and other devices 802 may be communicatively coupled to an edge computing system 804 via a network.
  • the plurality of imaging systems and other devices 802 and edge computing system 804 may be communicatively coupled to a cloud network 806. Communication between the plurality of imaging systems and other devices 802, the ECS 804, and the cloud network 806 is secure.
  • the ECS 804 may also be known as an on-premises cloud and may include data 808 from a variety of inputs and sources, including electronic medical record (EMR) data, HIS/RIS data, holistic data, etc., which may be communicatively coupled to a deep learning (DL) interface 810 and DL training 812.
  • the ECS 804 may further include at least one edge host 814, which may include at least one edge platform 816 and an application store 818 with a plurality of applications 820.
  • the cloud network 806 may include data 822 from a variety of inputs and sources and DL training 824.
  • the cloud network 806 may include an application store with a plurality of applications 826.
  • the cloud network 806 may further include an artificial intelligence (AI) interface for generating learning models or other models useful for system 800.
  • Data from the system 800 may be shared and/or stored between the plurality of imaging systems and other devices 802, the ECS 804, and the cloud network 806. Post-processing may be accomplished on data that is sent to the ECS prior to being sent to the DICOM. Quantification and segmentation may occur on imaging systems prior to sending to PACS.
  • Certain examples provide core processing ability organized into units or modules that can be deployed in a variety of locations. Off-device processing can be leveraged to provide a micro-cloud, mini-cloud, and/or a global cloud, etc.
  • the micro-cloud provides a one-to-one configuration with an imaging device console targeted for ultra-low latency processing (e.g., stroke, etc.) for customers that do not have cloud connectivity, etc.
  • the mini-cloud is deployed on a customer network, etc., targeted for low-latency processing for customers who prefer to keep their data in-house, for example.
  • the global cloud is deployed across customer organizations for high-performance computing and management of information technology infrastructure with operational excellence.
  • off-device processing engine(s) (e.g., an acquisition engine, a reconstruction engine, a diagnosis engine, etc., and their associated deployed deep learning network devices) may be included in system 800.
  • a purpose for an exam, electronic medical record information, heart rate and/or heart rate variability, blood pressure, weight, visual assessment of prone/supine positioning, head first or feet first, etc., can be used to determine one or more acquisition settings such as default field of view (DFOV), center, pitch, orientation, contrast injection rate, contrast injection timing, voltage, current, etc., thereby providing a “one-click” imaging device.
  • kernel information, slice thickness, slice interval, etc. can be used to determine one or more reconstruction parameters including image quality feedback, for example.
  • Acquisition feedback, reconstruction feedback, etc. can be provided to the system design engine to provide real-time (or substantially real-time given processing and/or transmission delay) health analytics for the imaging device as represented by one or more digital models (e.g., deep learning models, machine models, digital twin, etc.).
  • the digital model(s) can be used to predict component health for the imaging device in real-time (or substantially real time given a processing and/or transmission delay).
  • FIG. 9 shows a flow chart illustrating a method 900 for generating decision support output using one or more deep learning (DL) applications. Method 900 is described with regard to the systems and components of FIG. 1.
  • Method 900 may be implemented as executable instructions in the non-transitory memory 115 of the ECS 110, and may be executed by one or more of the plurality of processors 113 of the ECS 110.
  • an exam type and secondary task for a decision support output related to an imaging scan is received.
  • the operator of the scanner may input an initial or primary task (as explained above with respect to FIG. 2) or the initial/primary task may be obtained from an EMR of the patient (as explained above with respect to FIG. 4).
  • the initial/primary task may include an indication of an exam type and may include an associated secondary task.
  • the exam type may include the target anatomy to be scanned and certain features of the scan used to carry out the exam, such as a brain bleed exam, liver lesion exam, etc.
  • the associated secondary task may indicate the decision support that is to be generated and output along with the images obtained during the imaging scan.
  • the decision support may include bleed detection, lesion detection, organ segmentation, etc.
  • an application is selected to be executed in order to generate the decision support output.
  • the application that is selected may be a deep learning application (DL application) that utilizes the imaging information sent from the scanner (as explained above with respect to FIG. 2, for example) as an input into a DL model.
  • the DL model then generates the decision support output using the DL model and input imaging information.
  • the DL application may be selected based on the secondary task and exam type. For example, if the exam is a brain exam that is to be performed to detect presence of bleeding, a brain bleed DL application may be selected. If the exam is a liver scan that is to be performed to detect presence of a lesion, a liver lesion DL application may be selected.
  • the same DL application may apply to more than one exam type.
  • a lesion detection DL application may be selected for both a liver lesion exam and a kidney lesion exam.
  • separate DL applications may be selected for each different exam type/secondary task combination.
  • different DL applications may be selected for the liver and kidney lesion detection exams.
  • the appropriate DL application may be selected according to a suitable mechanism.
  • the ECS may store a look-up table in memory that indexes DL applications as a function of exam type and secondary task.
  • Other mechanisms for selecting a DL application are possible.
  • the operator may select a desired DL application and an indication of the selected DL application may be sent to the ECS along with the exam type and secondary task.
  • a selection of a specific DL application may automatically trigger a sequence of other DL applications to be triggered for pre-processing or post-processing of the data for the main task.
  • selection of a DL application for liver lesion detection may trigger an anatomy localization DL application that may be executed to narrow down acquired images to only the images that include the liver.
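The selection mechanism described above (a look-up table indexed by exam type and secondary task, with a selected application optionally triggering a chain of pre-processing applications) could be sketched roughly as follows. All application names and table entries here are illustrative assumptions, not part of the disclosure:

```python
# Hypothetical sketch of DL-application selection via a look-up table
# keyed by (exam type, secondary task). All names are illustrative.
APP_TABLE = {
    ("brain", "bleed_detection"): "brain_bleed_app",
    ("liver", "lesion_detection"): "lesion_detection_app",
    ("kidney", "lesion_detection"): "lesion_detection_app",  # shared app
}

# A selected application may trigger pre-processing applications,
# e.g., anatomy localization to narrow images down to the target organ.
PREPROCESS_CHAIN = {
    "lesion_detection_app": ["anatomy_localization_app"],
}

def select_applications(exam_type, secondary_task):
    """Return the ordered list of DL applications to execute."""
    main_app = APP_TABLE[(exam_type, secondary_task)]
    return PREPROCESS_CHAIN.get(main_app, []) + [main_app]
```

As in the example above, the same main application can be indexed under several exam types, while each entry may still resolve to a distinct pre-processing chain.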
  • a request for additional input is optionally sent.
  • the additional input may include subject and/or scanner metadata or other information that may be utilized by the selected DL application to generate the decision support output. Accordingly, the request for additional input may be sent to the EMR, PACS, or other database where the additional input is stored.
  • the DL applications may utilize various information in addition to the images/imaging data obtained during the imaging scan of the subject as inputs to the respective DL models, including but not limited to subject information (e.g., demographics, medical history) and scanner information.
  • the scanner information may include type of scanner (such as CT scanner or ultrasound), imaging category/type (such as contrast imaging, non-contrast imaging, Doppler, B-mode), scanner settings (such as tube current and tube voltage), and so forth.
  • the additional input may further include lab test results for the subject, previous diagnostic imaging scan results, and complementary images from other imaging modalities.
  • what additional input is requested and/or used by the selected DL application may be based on the exam type and secondary task.
  • the ECS may store a look-up table that stores the additional input to be requested as a function of exam type and secondary task.
  • the additional input that is requested may include lab reports, a neuro physiological examination report, the X-ray tube kVp used during the scan, and/or the type of CT scan.
  • the additional input may include the type of scan and X-ray tube kVp used during the scan, contrast timing, volume of contrast injected, ejection fraction of the heart, and/or previous CT studies for the patient.
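The per-exam look-up of additional inputs described above might be sketched as below; the keys and input names are illustrative assumptions based on the examples in the text:

```python
# Hypothetical look-up table of additional inputs to request from the
# EMR/PACS, indexed by (exam type, secondary task). Entries are illustrative.
ADDITIONAL_INPUTS = {
    ("brain", "bleed_detection"): [
        "lab_reports",
        "neuro_physiological_exam_report",
        "xray_tube_kvp",
        "ct_scan_type",
    ],
    ("cardiac", "lesion_detection"): [
        "ct_scan_type",
        "xray_tube_kvp",
        "contrast_timing",
        "contrast_volume_injected",
        "ejection_fraction",
        "prior_ct_studies",
    ],
}

def build_input_request(exam_type, secondary_task):
    """Return the list of additional inputs to request for this exam."""
    return ADDITIONAL_INPUTS.get((exam_type, secondary_task), [])
```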
  • images and/or imaging data is received from the scanner.
  • the scanner may send reconstructed images and/or raw imaging data to the ECS during and/or after the imaging scan.
  • decision support output is generated using the selected DL application.
  • the selected DL application may enter the received images/image data as input into the DL model executed by the DL application, along with any additional information dictated by the DL application.
  • the DL model may then output the decision support.
  • the decision support may include organ segmentation, anatomy identification, an indication of whether or not bleeding, lesions, fractures, etc., are detected, and so forth.
  • the decision support output is sent to the requesting device, such as the scanner that sent the imaging information, and/or another device, such as the PACS, a care provider device, or other suitable device.
  • the decision support output may be displayed or otherwise presented along with the reconstructed images from the imaging scan.
  • the decision support output may support a diagnosis or determination of a clinical finding via the reconstructed images.
  • the decision support output and reconstructed images may be analyzed by one or more clinicians, such as a radiologist. The one or more clinicians may accept or reject the decision support output.
  • the clinician may agree with the lesion detection and accept the output or disagree that the identified lesion is actually a lesion and reject the output.
  • the clinician may edit or update the contours defining the organs, where the contours are generated by the DL application. Further, the clinician may generate a final report with detailed and, if indicated, edited findings (e.g., updated contours, rejected decision support, etc.). The final report may be saved in the EMR of the subject.
  • the final report, which may include edited results, is received (e.g., from the EMR database).
  • the final report may be analyzed by the ECS (e.g., by a training data aggregator module of the ECS) in order to identify the exam type and DL application used to generate the decision support included in the report, which may be labeled/stored with the final report.
  • the labeled report (e.g., the final report, including edited results where appropriate and labeled with exam type and DL application) may then be added to one or more datasets.
  • Each dataset may be stored on the ECS. Further, each dataset may be specific to a DL application.
  • each DL application may have a specific training dataset, testing dataset, and validation dataset.
  • the labeled report may be placed into a dataset based on the DL application used to generate the decision support listed in that report.
  • the labeled report may be randomly or semi-randomly assigned to one of the training, testing, and/or validation datasets, at least in some embodiments.
  • method 900 includes determining if a training dataset includes a threshold number of reports.
  • the threshold number of reports may be a sufficient number to accurately reflect functionality of the DL model executed by the DL application, such as 100 reports.
  • If the training dataset includes the threshold number of reports, the answer at 945 is “YES”; otherwise, the answer at 945 is “NO.” If the answer at 945 is “NO,” method 900 returns. If the answer is “YES,” method 900 proceeds to 950 to re-train that DL application with the new data in the dataset. In this way, the DL model may be fine-tuned by re-training with the new data.
  • the updated weights may be deployed locally for a site-tuned model.
  • the DL model executed by the DL application on the ECS may include weighted connections, decision trees, etc., and the DL model may be re-trained so that the weights are updated to better fit the new data. As more and more data is collected, the DL model may become more accurate.
  • site-specific preferences in deploying the DL model may be preserved. For example, a particular medical facility may prefer an aggressive approach to lesion detection to ensure false negative detections are reduced, while a different medical facility may prefer to reduce false positives by taking a less aggressive approach to detecting lesions.
  • a subset or all of the data and/or the updated weights may be sent to a central device (e.g., the cloud) to create an updated global model along with data from other sites, as indicated at 955.
  • the data sent back to the cloud may include images and other specific datasets that may be used for training the particular model.
  • the global models may also be sent back to the locally-stored DL application models.
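The dataset-accumulation and threshold-based retraining check described above could be sketched as follows; the 80/10/10 split, the threshold of 100 reports, and all names are assumptions for illustration only:

```python
import random

# Sketch of accumulating labeled reports into per-application datasets
# and checking whether retraining is due. Threshold and split are assumed.
REPORT_THRESHOLD = 100

datasets = {"training": [], "testing": [], "validation": []}
_rng = random.Random(0)  # seeded for reproducibility in this sketch

def store_labeled_report(report):
    """Semi-randomly assign a labeled report to a dataset (80/10/10 split)."""
    split = _rng.choices(
        ["training", "testing", "validation"], weights=[8, 1, 1]
    )[0]
    datasets[split].append(report)

def retraining_due():
    """True once the training dataset holds at least the threshold of reports."""
    return len(datasets["training"]) >= REPORT_THRESHOLD
```

When `retraining_due()` returns true, the DL model would be fine-tuned on the accumulated training data, with the updated weights deployed locally and/or forwarded to a central server for the global model, as described above.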
  • a technical effect of the disclosure is the automatic scanning of a subject with minimal input or intervention from a human operator. Another technical effect of the disclosure is the scanning of a patient in accordance with a scan protocol automatically selected by evaluating an EMR of the patient. Yet another technical effect of the disclosure is the automatic creation of a clinical report for a diagnostic image. Another technical effect of the disclosure is the automatic positioning of a subject within an imaging system.
  • a method for an imaging system comprises receiving an identification of a subject to be scanned, automatically determining a personalized scan protocol for the subject, automatically performing a scan of the subject according to the personalized scan protocol to acquire imaging data, and displaying an image and decision support, the image and the decision support automatically generated from the imaging data.
  • automatically determining a personalized scan protocol for the subject comprises: retrieving an electronic medical record (EMR) for the subject; determining a primary task for the scan based on the EMR; and selecting a scan protocol according to the primary task.
  • automatically determining a personalized scan protocol for the subject further comprises: determining, with a camera, a scan range for the subject; and adjusting the selected scan protocol with the scan range to create the personalized scan protocol.
  • the method further comprises automatically positioning the subject within the imaging system based on live images captured by the camera.
  • the method further comprises transmitting the imaging data to an edge computing system (ECS) communicatively coupled to and positioned external to the imaging system, and receiving, from the ECS, the decision support generated by the ECS from the imaging data.
  • the method further comprises transmitting a secondary task associated with the primary task to the ECS, wherein the ECS generates the decision support according to the secondary task.
  • the ECS generates the decision support by inputting the imaging data into a deep learning model.
  • the decision support includes a clinical diagnosis.
  • receiving the identification of the subject to be scanned comprises processing live images of the subject captured by the camera with facial recognition techniques to automatically identify the subject.
  • the method further comprises reconstructing the image from the imaging data.
  • a system comprises an imaging system including at least a scanner and a processor, the processor configured to reconstruct an image from data acquired during a scan of a subject via the scanner, and a computing device communicatively coupled to and positioned external to the imaging system, the computing device configured to generate a decision support calculation based on the data, wherein the processor is configured with executable instructions that when executed cause the processor to: retrieve, from a database via a network, an electronic medical record (EMR) for the subject; determine a primary task based on the EMR; determine a scan protocol based on the primary task; position the subject within the imaging system; determine a scan range for the subject; adjust the scan protocol with the scan range; perform the scan to acquire the data; and output the image reconstructed from the data and the decision support calculation generated based on the data.
  • the processor automatically executes the executable instructions responsive to receiving an indication to operate in an express mode.
  • the imaging system further includes a display device, wherein the processor is configured to display, via the display device, the image and the decision support calculation.
  • the imaging system further comprises a camera configured to capture live images of the subject within the imaging system.
  • the subject is automatically positioned within the imaging system based on the live images captured by the camera.
  • the scan range is determined from the live images captured by the camera.
  • the subject is automatically identified based on the live images captured by the camera.
  • an imaging system comprises an X-ray source that emits a beam of X-rays towards a subject to be imaged, a detector that receives the X-rays attenuated by the subject, a data acquisition system (DAS) operably connected to the detector, and a computing device operably connected to the DAS and configured with executable instructions in non-transitory memory that when executed cause the computing device to: retrieve, from a database via a network, an electronic medical record (EMR) for the subject; automatically determine a scan protocol based on the EMR; automatically position a region of interest (ROI) of the subject between the X-ray source and the detector; determine a scan range for the ROI; adjust the scan protocol with the scan range; control the X-ray source to perform the scan; receive, via the DAS, projection data acquired during the scan; and output an image reconstructed from the projection data and a decision support calculation generated from the projection data.
  • the computing device is further configured with executable instructions in non-transitory memory that when executed cause the computing device to transmit the projection data to an edge computing system (ECS) communicatively coupled to and positioned external to the imaging system, and receive, from the ECS, the decision support calculation generated by the ECS with a deep learning model from the projection data.
  • the imaging system further comprises a table motor controller configured to adjust a position of a table upon which the subject is placed, wherein automatically positioning the ROI of the subject between the X-ray source and the detector comprises commanding the table motor controller to adjust the position of the table such that the ROI is positioned between the X-ray source and the detector.

Abstract

Methods and systems are provided for an express mode for an imaging system. In one embodiment, a method for an imaging system comprises receiving an identification of a subject to be scanned, automatically determining a personalized scan protocol for the subject, automatically performing a scan of the subject according to the personalized scan protocol to acquire imaging data, and displaying an image and decision support, the image and the decision support automatically generated from the imaging data. In this way, the imaging of a subject such as a patient may be performed with minimal input or intervention from an operator of the imaging system.

Description

SYSTEMS AND METHODS FOR AN IMAGING SYSTEM EXPRESS MODE
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] The present application claims priority to U.S. Provisional Application No. 62/657,632, entitled “SYSTEMS AND METHODS FOR AN IMAGING SYSTEM EXPRESS MODE,” and filed on April 13, 2018. The entire contents of the above-identified application are hereby incorporated by reference for all purposes.
FIELD
[0002] Embodiments of the subject matter disclosed herein relate to non-invasive diagnostic imaging, and more particularly, to imaging a subject with minimal input or intervention from an operator.
BACKGROUND
[0003] Non-invasive imaging technologies allow images of the internal structures of a patient or object to be obtained without performing an invasive procedure on the patient or object. In particular, technologies such as computed tomography (CT) use various physical principles, such as the differential transmission of X-rays through the target volume, to acquire image data and to construct tomographic images (e.g., three-dimensional representations of the interior of the human body or of other imaged structures).
[0004] As the functionality of an imaging system becomes more powerful and complex, users may find operation of the imaging system to be too complicated. Radiologists and technicians must be trained to select and prepare the correct scan protocols for each patient, who may also be positioned differently within the imaging system for a given protocol. If any portion of the scan preparation is incorrect, the quality of the resulting image(s) may be too poor for clinical use and so the scan must be performed again. Further, image analysis must be performed by radiologists/physicians and so the turn-around time for a clinical report may be slow.
BRIEF DESCRIPTION
[0005] In one embodiment, a method for an imaging system comprises receiving an identification of a subject to be scanned, automatically determining a personalized scan protocol for the subject, automatically performing a scan of the subject according to the personalized scan protocol to acquire imaging data, and displaying an image and decision support, the image and the decision support automatically generated from the imaging data. In this way, the imaging of a subject such as a patient may be performed with minimal input or intervention from an operator of the imaging system.
[0006] It should be understood that the brief description above is provided to introduce in simplified form a selection of concepts that are further described in the detailed description. It is not meant to identify key or essential features of the claimed subject matter, the scope of which is defined uniquely by the claims that follow the detailed description. Furthermore, the claimed subject matter is not limited to implementations that solve any disadvantages noted above or in any part of this disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] The present invention will be better understood from reading the following description of non-limiting embodiments, with reference to the attached drawings, wherein below:
[0008] FIG. 1 shows a block schematic diagram of a simplified example system for extending the capabilities of an imaging system according to an embodiment;
[0009] FIG. 2 shows a high-level flowchart illustrating an example method for synchronizing an imaging system with an edge computing system according to an embodiment;
[0010] FIG. 3 shows a high-level flowchart illustrating an example method for generating deep learning training data with an imaging system according to an embodiment;
[0011] FIG. 4 shows a high-level flowchart illustrating an example method for an imaging system express mode according to an embodiment;
[0012] FIG. 5 shows a high-level flowchart illustrating an example method for managing applications on an edge computing system according to an embodiment;
[0013] FIG. 6 shows a pictorial view of an imaging system according to an embodiment;
[0014] FIG. 7 shows a block schematic diagram of an exemplary imaging system according to an embodiment;
[0015] FIG. 8 shows a block schematic diagram of a more detailed example system than that of FIG. 1 for extending the capabilities of an imaging system according to an embodiment; and
[0016] FIG. 9 shows a flowchart illustrating an example method for generating decision support output for an imaging system using an edge computing system according to an embodiment.
DETAILED DESCRIPTION
[0017] The following description relates to various embodiments of diagnostic imaging. In particular, systems and methods are provided for imaging a subject with minimal input or intervention from an operator. The processing capabilities of an imaging system may be expanded by coupling the imaging system to an ECS, as shown in FIG. 1. Imaging data acquired by the imaging system may be transmitted or streamed during a scan to the ECS. A method for synchronizing the transmission of imaging data to the ECS, such as the method depicted in FIG. 2, allows the imaging data to be processed by deep learning (DL) applications concurrently with the scan, thereby reducing the amount of time to obtain decision support. Additional information relating to the acquisition of imaging data, as well as information characterizing the subject being scanned, may be leveraged to train the DL applications, as depicted in FIG. 3. The expanded processing capabilities of the imaging system when coupled with the ECS allow the imaging system to operate in an “express mode,” wherein a scan is performed with minimal intervention by or input from an operator of the imaging system, as depicted in FIG. 4. New and updated DL applications may be retrieved from a remote repository, as depicted in FIG. 5, thereby allowing the ECS to provide the latest DL capabilities that are compatible with the imaging system. An example of a CT imaging system that may be used to acquire images processed in accordance with the present techniques is provided in FIGS. 6 and 7.
[0018] The ECS is connected to an imaging system/scanner. The ECS includes CPUs/GPUs running one or more virtual machines (VMs) configured for different types of tasks. Data is streamed in real-time from the scanner to the ECS, which processes the data (in image and/or projection space) and returns the results.
In this way, the imaging system appears to have additional processing power because the post- processing performed by the ECS is output alongside reconstructed images by the user interface of the imaging system.
[0019] The streaming of data to the ECS is synchronized with the state of the scanner. Data is only transferred from the scanner to the ECS when the scanner is not in a critical state. When the scanner is in an interventional mode (e.g., when the doctor is at the scanner performing an intervention such as contrast injection), the scanner does not transfer data at all to avoid data corruption.
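A minimal sketch of this state-synchronized streaming, assuming illustrative state names and an in-memory buffer standing in for the network transport:

```python
from collections import deque

# Illustrative scanner states; data is held whenever the scanner is in a
# critical or interventional state, and flushed once it leaves that state.
BLOCKED_STATES = {"critical", "interventional"}

class ScannerStream:
    def __init__(self):
        self.state = "idle"
        self.pending = deque()  # frames buffered while transfer is blocked
        self.sent = []          # stand-in for frames delivered to the ECS

    def push(self, frame):
        """Buffer a frame, then forward it if the scanner state permits."""
        self.pending.append(frame)
        self.flush()

    def flush(self):
        if self.state in BLOCKED_STATES:
            return  # hold data to avoid corrupting an intervention
        while self.pending:
            self.sent.append(self.pending.popleft())
```

In this sketch, frames pushed during an interventional state simply accumulate in the buffer and are delivered in order once the scanner returns to a safe state.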
[0020] The ECS provides task-based decision support. A particular task input to the imaging system triggers a secondary task input to and carried out by the ECS. For example, a task may prescribe a particular scan protocol and/or type of image reconstruction by the scanner, while the secondary task may prescribe the application of relevant post-processing techniques to the acquired data. These post-processing techniques may include deep learning analysis of the acquired data. As depicted in the method of FIG. 9, the ECS may select an appropriate DL application based on the secondary task, exam type, and/or other information, and generate the decision support with the selected DL application. Each instance of a decision support that is generated at the ECS may be saved with associated data (e.g., the DL application that was used, exam type, final/edited exam report). After a threshold number of decision supports have been generated and saved on the ECS, one or more of the DL applications may be re-trained with the new data, both locally (e.g., DL applications stored on the ECS) and globally (e.g., updated model weights may be sent to a central server to create an updated global model along with data from other locations).
[0021] FIG. 1 shows a block schematic diagram of an example system 100 for extending the capabilities of an imaging system 101 with an edge computing system (ECS) 110 according to an embodiment. The imaging system 101 may comprise any suitable non-invasive imaging system, including but not limited to a computed tomography (CT) imaging system, a positron emission tomography (PET) imaging system, a single-photon emission computed tomography (SPECT) imaging system, a magnetic resonance (MR) imaging system, an X-ray imaging system, an ultrasound system, and combinations thereof (e.g., a multi-modality imaging system such as a PET/CT, PET/MR or SPECT/CT imaging system). An example imaging system is described further herein with regard to FIGS. 6 and 7.
[0022] The imaging system 101 includes a processor 103 and a non-transitory memory 104. One or more methods described herein may be implemented as executable instructions in the non-transitory memory 104 that when executed by the processor 103 cause the processor 103 to perform various actions. Such methods are described further herein with regard to FIGS. 2-4.
[0023] The imaging system 101 further comprises a scanner 105 for scanning a subject such as a patient to acquire imaging data. Depending on the type of imaging system 101, the scanner 105 may comprise multiple components necessary for scanning the subject. For example, if the imaging system 101 comprises a CT imaging system, the scanner 105 may comprise a CT tube and a detector array, as well as various components for controlling the CT tube and the detector array, as discussed further herein with regard to FIGS. 6 and 7. As another example, if the imaging system 101 comprises an ultrasound imaging system, the scanner 105 may comprise an ultrasound transducer. Thus, the term “scanner” as used herein refers to the components of the imaging system which are used and controlled to perform a scan of a subject.
[0024] The type of imaging data acquired by the scanner 105 also depends on the type of imaging system 101. For example, if the imaging system 101 comprises a CT imaging system, the imaging data acquired by the scanner 105 may comprise projection data. Similarly, if the imaging system 101 comprises an ultrasound imaging system, the imaging data acquired by the scanner 105 may comprise analog and/or digital echoes of ultrasonic waves emitted into the subject by the ultrasound transducer.
[0025] In some examples, the imaging system 101 includes a protocol engine 106 for automatically selecting and adjusting a scan protocol for scanning a subject. A scan protocol selected by protocol engine 106 prescribes a variety of settings for controlling the scanner 105 during a scan of the subject. As discussed further herein, protocol engine 106 may select or determine a scan protocol based on an indicated primary task. Although the protocol engine 106 is depicted as a separate component from the non-transitory memory 104, it should be understood that in some examples the protocol engine 106 may comprise a software module stored in non-transitory memory 104 as executable instructions that when executed by the processor 103 cause the processor 103 to select and adjust a scan protocol.
[0026] The imaging system 101 further comprises a user interface 107 configured to receive input from an operator of the imaging system 101 as well as display information to the operator. To that end, user interface 107 may comprise one or more of an input device, including but not limited to a keyboard, a mouse, a touchscreen device, a microphone, and so on, and an output device, including but not limited to a display device, a printer, and so on.
[0027] In some examples, the imaging system 101 further comprises a camera 108 for assisting with the automatic positioning of the subject within the imaging system. For example, the camera 108 may capture live images of the subject within the imaging system, while the processor 103 determines a position of the subject within the imaging system based on the live images. The processor 103 may then control a table motor controller, for example, to adjust the position of a table upon which the subject is resting in order to position at least a region of interest (ROI) of the subject within the imaging system. Furthermore, in some examples, a scan range may be at least approximately determined based on the live images captured by the camera 108.
[0028] The system 100 further comprises the ECS 110 that is communicatively coupled to the imaging system 101 via a wired or wireless connection. The ECS 110 comprises a plurality of processors 113 running one or more virtual machines (VMs) 114 configured for different types of tasks. The plurality of processors 113 comprises one or more graphics processing units (GPUs) and/or one or more central processing units (CPUs). The ECS 110 further comprises a non-transitory memory 115 storing executable instructions that may be executed by one or more of the plurality of processors 113. The non-transitory memory 115 may further include a deep learning (DL) model 116 that may be executed by a virtual machine 114 of the plurality of processors 113. While only one DL model is depicted in FIG. 1, it is to be understood that ECS 110 may include more than one DL model stored in non-transitory memory.
[0029] In some examples, the system 100 further comprises one or more external databases 130 that the imaging system 101 and the ECS 110 may be communicatively coupled to via a network 120. The one or more external databases 130 may comprise, as exemplary and non-limiting examples, one or more of a hospital information system (HIS), a radiology information system (RIS), a picture archive and communication system (PACS), and an electronic medical record (EMR) system. The imaging system 101 and/or the ECS 110 may retrieve information such as subject metadata, which may include metadata describing or relating to a particular subject to be scanned (e.g., patient age, gender, height, and weight), which may be retrieved from an EMR for the subject. As described further herein, the imaging system 101 and/or the ECS 110 may use subject metadata retrieved from the one or more external databases 130 to automatically determine a scan protocol, train a deep learning model, and so on.
[0030] In some examples, the system 100 further comprises a repository 140 communicatively coupled to one or more of the imaging system 101 and the ECS 110 via the network 120. The repository 140 stores a plurality of applications 142 that may be utilized by one or more of the imaging system 101 and the ECS 110. To that end, the imaging system 101 and/or the ECS 110 may retrieve one or more applications of the plurality of applications 142 from the repository 140 via the network 120. Alternatively, the repository 140 may push an application of the plurality of applications 142 to one or more of the imaging system 101 and the ECS 110. A method for retrieving an application from the repository 140 is described further herein with regard to FIG. 5.
[0031] FIG. 2 shows a high-level flowchart illustrating an example method 200 for synchronizing an imaging system with an edge computing system (ECS) according to an embodiment. In particular, method 200 relates to streaming imaging data acquired during a scan from an imaging system such as imaging system 101 to an ECS such as ECS 110 for processing the imaging data concurrently with the scan. Method 200 is described with regard to the systems and components depicted in FIG. 1 and described hereinabove, though it should be appreciated that the method may be implemented with other systems and components without departing from the scope of the present disclosure. Method 200 may be implemented as executable instructions in non-transitory memory 104 of an imaging system 101 which may be executed by a processor 103 of the imaging system 101.

[0032] Method 200 begins at 205. At 205, method 200 receives an indication of an initial task, also referred to herein as a primary task. The initial task comprises a clinical task to be performed by the imaging system, and thus generally specifies what type of scan should be performed by the imaging system. The initial tasks are generally clinical actions that must be completed during pre-scan and scan of an imaging prescription. Some examples of pre-scan related tasks are patient set-up, receiving and reviewing data from EMR, CDSS and HIS/RIS, and selecting the protocol(s) for the scan(s). Some examples of scan related tasks are patient positioning, image acquisition and image reconstruction. In one example, method 200 may receive the indication of the initial task, for example, via a user interface 107 of the imaging system 101. That is, an operator of the imaging system 101 may select the initial task or otherwise indicate the initial task via the user interface 107. In another example, method 200 may automatically identify the initial task, for example by evaluating an EMR of the subject to be scanned.
[0033] For example, an initial task may include or be defined by a diagnostic goal of the imaging scan, e.g., the reason a patient was referred for the imaging scan. Example diagnostic goals may include diagnosing presence (or absence) of a brain bleed, diagnosing presence (or absence) of a liver lesion, determining a presence or extent of a spine fracture, and performing organ segmentation to plan for radiation therapy. Each diagnostic goal may be targeted to a specific anatomy or set of anatomies and thus may dictate a type of scan/exam to be performed. For example, the diagnosis of the brain bleed may be targeted to the brain and thus may dictate that a non-contrast head scan be performed, the diagnosis of the liver lesion may be targeted to the liver and thus may dictate that an abdominal scan be performed, and specifically that a multiphase liver study be conducted with contrast at different timings, the diagnosis of the spine fracture may be targeted to the neck and/or back and thus may dictate that a full spine non-contrast scan be performed, and the organ segmentation may be targeted to the entire abdominal region (or even the whole body) and thus may dictate that a non-contrast whole body scan be performed. Each different scan type may have an associated set of scan parameters that dictate how the scanner is to be controlled during the scan. For example, for computed tomography scans, each different scan type may dictate the specific tube current (mA), tube voltage (kV), and gantry rotational speed for the CT scanner during the scan.
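The mapping described above, from a diagnostic goal to a scan type and its associated scan parameters, can be sketched as a simple lookup. The goal keys and the tube voltage (kV), tube current (mA), and rotation-speed values below are illustrative assumptions rather than values taken from this disclosure:

```python
# Hypothetical sketch: the disclosure does not specify data structures,
# so the goal names and parameter values here are illustrative only.
SCAN_TYPES = {
    "brain_bleed": {"scan": "non-contrast head", "kv": 120, "ma": 300, "rotation_s": 1.0},
    "liver_lesion": {"scan": "multiphase liver with contrast", "kv": 100, "ma": 400, "rotation_s": 0.5},
    "spine_fracture": {"scan": "full spine non-contrast", "kv": 120, "ma": 250, "rotation_s": 0.5},
    "organ_segmentation": {"scan": "non-contrast whole body", "kv": 120, "ma": 200, "rotation_s": 0.5},
}

def scan_parameters_for(diagnostic_goal: str) -> dict:
    """Return the scanner control parameters dictated by a diagnostic goal."""
    try:
        return SCAN_TYPES[diagnostic_goal]
    except KeyError:
        raise ValueError(f"No scan type mapped to goal: {diagnostic_goal}")
```

In this sketch, each diagnostic goal dictates both a scan type and the scanner control parameters for that type, mirroring the brain-bleed, liver-lesion, spine-fracture, and organ-segmentation examples above.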
[0034] Continuing at 210, method 200 determines if the initial task is associated with a secondary task. Whereas the initial task specifies how imaging data may be acquired, a secondary task specifies how the imaging data may be processed. As a non-limiting example, a secondary task may comprise an automatic lesion detection, organ classification, segmentation, or any type of post-processing method that may be applied to imaging data. The secondary tasks are generally clinical actions that must be completed during post-scan of an imaging prescription. Some examples of post-scan related tasks are post-processing of images by applying post-processing applications.
[0035] In some examples, a particular initial task may always specify one or more secondary tasks to be performed in conjunction with the initial or primary task. Additionally or alternatively, the initial task indication received at 205 may specify one or more secondary tasks. For example, an initial task may not specify a secondary task, but an operator of the imaging system may select a secondary task when indicating the initial task. As a non-limiting example, the initial task may specify a brain scan and a secondary task may comprise lesion detection. Since lesion detection may not be necessary for every brain scan, the operator of the imaging system may optionally select a secondary task comprising lesion detection for the initial task of a brain scan for particular cases where the presence of a lesion may be suspected. Additionally or as an alternative to an operator manually selecting a secondary task, in some examples method 200 may automatically determine a secondary task based on the initial task as well as subject metadata, such as an EMR for the subject.
[0036] If the initial task is not associated with a secondary task (“NO”), method 200 proceeds to 212. At 212, method 200 performs a scan in accordance with the initial task indication. Method 200 may further reconstruct and output one or more images from imaging data acquired during the scan. Since no secondary task is associated with the initial task, no imaging data acquired at 212 is streamed to the ECS. Method 200 then ends.
[0037] However, referring again to 210, if the initial task is associated with a secondary task (“YES”), method 200 continues to 215. At 215, method 200 outputs a secondary task indication to the ECS 110. The ECS 110 performs post-processing of imaging data based on the secondary task indication.
[0038] At 220, method 200 begins scanning the subject with the scanner 105 in accordance with the initial task indication to acquire imaging data. For example, method 200 begins scanning the subject in accordance with a scan protocol associated with the initial task. The scan protocol may be selected by protocol engine 106, as a non-limiting example.
[0039] During the scan begun at 220, method 200 synchronizes the imaging system 101 to the ECS 110 based on the state of the scanner 105. To that end, at 225, method 200 evaluates the state of the scanner 105. At 230, method 200 determines if the scanner 105 is in a critical state. The scanner 105 may be in a critical state when asynchronous external operations could negatively impact its ability to meet essential safety requirements. For example, the scanner 105 may be in a critical state when active data acquisition is occurring, such as when the X-ray source is on and image data is being stored to a disk or memory storage location. If the imaging system were synchronized with the ECS (so that imaging data could be sent to the ECS from the imaging system) during such a time, a scan failure could result (e.g., due to data being lost, delayed, or other issues) and the patient would need to be re-scanned, exposing the patient to additional radiation dose. Primarily, a CT scanner is in a critical state when the X-ray source is on or during rapid sequences of acquisitions. Another example of a critical state is when the system is performing an interventional procedure, where both the scan data storage and image display operations are time sensitive and essential to safety.
[0040] If the scanner is in a critical state (“YES”), method 200 proceeds to 232, wherein method 200 continues the scan. More specifically, method 200 continues scanning the subject but does not transmit acquired imaging data to the ECS 110.
[0041] However, referring again to 230, if the scanner is not in a critical state (“NO”), method 200 continues to 233, wherein method 200 transmits acquired imaging data to the ECS 110 for post-processing. The ECS 110 processes the transmitted imaging data in accordance with the secondary task.
[0042] Method 200 proceeds from both 232 and 233 to 235. At 235, method 200 determines if the scan is complete. If the scan is not complete (“NO”), method 200 proceeds to 237, wherein method 200 continues the scan. Method 200 proceeds to 225 to re-evaluate the scanner state. Thus, method 200 continually evaluates the state of the scanner 105 during a scan to determine whether the scanner 105 is in a critical state, and streams or transmits the imaging data acquired during the scan to the ECS 110 only when the scanner 105 is not in a critical state. In other words, method 200 continually streams imaging data as it is acquired to the ECS 110 during the scan unless the scanner 105 is in a critical state. In this way, the ECS 110 may receive and process the imaging data concurrently with the scan. Furthermore, the imaging data is not corrupted by transmitting it during a critical state, nor is the functioning of the imaging system 101 disturbed when the scanner 105 is operating in a critical state.
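The synchronization loop of steps 225–237 can be sketched as follows. The scanner and ECS interfaces are hypothetical, and buffering data acquired during a critical state (for transmission once the critical state ends) is an assumption; the disclosure states only that no data is transmitted while the scanner is in a critical state:

```python
def run_scan_with_streaming(scanner, ecs):
    """Stream imaging data to the ECS during a scan, pausing transmission
    whenever the scanner is in a critical state (e.g., X-ray source on).

    `scanner` and `ecs` are hypothetical objects; buffering held data for
    later transmission is an assumption not stated in the disclosure."""
    buffered = []
    while not scanner.scan_complete():
        data = scanner.acquire_next()
        if scanner.in_critical_state():
            buffered.append(data)       # hold data; do not transmit now
        else:
            for chunk in buffered:      # flush data held during a critical state
                ecs.transmit(chunk)
            buffered.clear()
            ecs.transmit(data)
    for chunk in buffered:              # flush any remainder after the scan ends
        ecs.transmit(chunk)
```

Because transmission happens inside the acquisition loop, the ECS receives and processes imaging data concurrently with the scan, while critical-state acquisitions are never interrupted by external transfers.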
[0043] Once method 200 determines that the scan is complete at 235 (“YES”), method 200 proceeds to 240. At 240, method 200 reconstructs one or more images from the acquired imaging data. Method 200 may reconstruct the one or more images using any suitable iterative or analytical image reconstruction algorithm, as a non-limiting example.
[0044] At 245, method 200 receives decision support from the ECS 110. The decision support comprises the results of one or more post-processing algorithms applied to the imaging data streamed to the ECS 110 during the scan at 233. For example, if the exam is a brain bleed exam where a scan of the head is performed, the decision support output may include an indication of whether or not bleeding is detected. If the exam is a liver lesion exam where the liver is scanned, the decision support output may include whether or not a lesion is detected, and if so, the decision support output may include an indication of the size, shape, position, etc., of the lesion.
[0045] At 250, method 200 outputs the one or more images and the decision support. Method 200 may output the one or more images and the decision support to a display device for display to the operator of the imaging system 101, for example. Additionally or alternatively, method 200 may output the one or more images and the decision support to a mass storage for later review, or to a picture archiving and communication system (PACS) for review at a remote workstation, for example. Method 200 then ends.
[0046] Thus, a method for an imaging system comprises performing a scan of a subject to acquire imaging data, transmitting the imaging data during the scan to a computing device communicatively coupled to and positioned external to the imaging system, receiving a decision support output from the computing device, the decision support output calculated by the computing device from the imaging data, and displaying an image reconstructed from the imaging data and the decision support output. In this way, deep learning techniques can be used to provide decision support in parallel with the acquisition of imaging data, thereby reducing the time necessary to make an informed diagnosis.
[0047] Automatically analyzing clinical images with deep learning (DL) techniques could drastically simplify and improve clinical diagnoses made by physicians using such images. However, preparing a dataset to train a DL algorithm can be difficult and time-consuming since images and corresponding diagnoses must be manually collated and prepared for input to the algorithm. Further, such a training dataset is unnecessarily limited with respect to the amount of information potentially available that is relevant to a scan and that could be leveraged by the DL algorithm. Thus, as described herein below with regard to FIG. 3, the imaging system 101 outputs data appropriate for input to DL algorithms such as DL application 116 in the ECS 110. The output data includes scan data, image data, EMR data for the patient, a type of scanner (as well as other scanner metadata), scan protocol, decision support output, and any clinical diagnosis associated with the scan. To that end, the imaging system may retrieve data from a hospital information system (HIS) and/or radiology information system (RIS) as well as an EMR for a given patient (to obtain patient data as well as the clinical outcome). The output data can be used by DL algorithms to optimize/personalize scan protocols for different patients and image quality targets, improve decision support, and potentially automate clinical diagnosis.
[0048] FIG. 3 shows a high-level flowchart illustrating an example method 300 for generating deep learning training data with an imaging system according to an embodiment. In particular, method 300 relates to leveraging all information that may be potentially relevant to the quality of a reconstructed image and its analysis for the training of a deep learning model. Method 300 is described with regard to the systems and components of FIG. 1, though it should be appreciated that the method may be implemented with other systems and components without departing from the scope of the present disclosure. Method 300 may be implemented as executable instructions in non-transitory memory 104 of an imaging system 101 and executed by a processor 103 of the imaging system 101.
[0049] Method 300 begins at 305. At 305, method 300 receives a selection of a scan protocol. The scan protocol may be manually selected, for example by an operator via a user interface 107, or may be automatically selected, for example via a protocol engine 106.
[0050] At 310, method 300 retrieves subject metadata for the subject to be scanned from one or more external databases. The subject metadata may comprise at least a subset of information relating to the subject, and as such may include but is not limited to demographic information and medical history. Method 300 may retrieve the subject metadata, for example, from the one or more databases 130, including one or more of a HIS, RIS, and EMR database, via the network 120.
[0051] At 315, method 300 performs a scan of the subject according to the scan protocol to acquire imaging data. For example, method 300 may control the scanner 105 to scan the subject, wherein the scan protocol selected at 305 prescribes the control parameters of the scanner 105. At 320, method 300 reconstructs an image from the imaging data acquired during the scan at 315. Method 300 may reconstruct the image using any suitable image reconstruction algorithm in accordance with the modality of the imaging system.
[0052] At 325, method 300 transmits the image reconstructed at 320 and/or imaging data acquired at 315 to the ECS 110. The ECS 110 processes the image and/or the imaging data using one or more DL algorithms to generate decision support output. Although transmitting the imaging data is depicted as occurring after the scan in FIG. 3, it should be appreciated that in some examples method 300 may transmit the imaging data during the scan at 315, such that the ECS 110 processes the imaging data concurrently with the scan at 315, as discussed hereinabove with regard to FIG. 2.
[0053] The ECS 110 processes the image and/or the imaging data either during the scan or after the scan is completed to generate the decision support output. At 330, method 300 receives, from the ECS 110, decision support output calculated for the image and/or imaging data with a learning model (e.g., a DL algorithm such as a neural network).

[0054] At 335, method 300 displays the image and the decision support output. Both the image and the decision support output may be displayed via a display device of the imaging system 101, for example. The decision support output may be displayed superimposed on or alongside the image, depending on the type of decision support output.
[0055] At 340, method 300 receives an outcome decision relating to the displayed image and decision support output. The outcome decision comprises, for example, a clinical diagnosis made by a physician or radiologist based on the displayed image and decision support output. Additionally or alternatively, the outcome decision may comprise a ground truth relating to the decision support output. For example, if the decision support output comprises a detection or classification of an object such as a lesion in the image, a ground truth for the decision support output may comprise an indication that the detection or classification is correct or incorrect.
[0056] At 345, method 300 updates a training dataset with the imaging data, the image, the scan protocol, the subject metadata, the decision support output, the outcome decision, and system metadata. As the training dataset may be located remotely from the imaging system 101 rather than in the non-transitory memory 104 of the imaging system, in some examples updating the training dataset may comprise aggregating the imaging data, the image, the scan protocol, the subject metadata, the decision support output, the outcome decision, and the system metadata into a single case to be added to the training dataset. The system metadata includes information that characterizes the imaging system 101 itself, such as a model number of the imaging system 101, a manufacturing date of the imaging system 101, and so on.
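Aggregating one case for the training dataset, as described at step 345, might be sketched as follows; the field names and record structure are illustrative assumptions, not taken from this disclosure:

```python
from dataclasses import dataclass, asdict
from typing import Any

# Hypothetical sketch of one aggregated training case; field names are
# illustrative and follow the items listed at step 345.
@dataclass
class TrainingCase:
    imaging_data: Any
    image: Any
    scan_protocol: dict
    subject_metadata: dict   # e.g., patient age, gender, height, weight from the EMR
    decision_support: dict
    outcome_decision: dict   # clinical diagnosis and/or ground truth
    system_metadata: dict    # e.g., model number, manufacturing date

def add_case(training_dataset: list, case: TrainingCase) -> list:
    """Append one aggregated case to the (possibly remote) training dataset."""
    training_dataset.append(asdict(case))
    return training_dataset
```

Packaging all seven items into a single case record matches the aggregation described above for a training dataset located remotely from the imaging system.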
[0057] At 350, method 300 transmits the updated training dataset to the ECS 110 for updating the learning model used to generate the decision support output received at 330. In this way, additional data that may impact the performance of the learning model, such as the subject metadata, the system metadata, the scan protocol, and a clinical diagnosis made by a physician based on the reconstructed image and/or decision support output, may be leveraged to improve the performance of the learning model. Method 300 then ends.
[0058] Thus, a method for an imaging system comprises performing a scan of a subject according to a scan protocol to acquire imaging data; displaying an image and decision support associated with the image, the image reconstructed from the imaging data, and the decision support calculated with a learning model and the imaging data; and updating a training dataset for the learning model with the imaging data, the image, the scan protocol, subject metadata describing the subject, the decision support, an outcome decision relating to the image and the decision support, and system metadata relating to the imaging system.
[0059] As the functionality of an imaging system becomes more powerful and complex, users may find operation of the imaging system to be too complicated. Radiologists and technicians must be trained to select and prepare the correct scan protocols for each patient, who may also be positioned differently within the imaging system for a given protocol. If any portion of the scan preparation is incorrect, the quality of the resulting image(s) may be too poor for clinical use and so the scan must be performed again. Further, image analysis must be performed by radiologists/physicians and so the turn-around time for a clinical report may be slow. As described further herein below with regard to FIG. 4, the imaging system 101 may therefore include an “express mode” that utilizes the extended processing power provided by the ECS 110 as well as the more sophisticated DL applications 116 to automate as many parts of the imaging procedure as possible. Hospital information systems, radiology information systems, and electronic medical record data are used by the protocol engine 106 to select and personalize a scan protocol for a given patient, which in some examples may be assisted by deep learning. The patient is automatically positioned based on the imaging protocol and with the assistance of a camera. The camera further enables a determination of the correct scan range for the patient, so that the scan protocol can be further adjusted and personalized for the patient. Scanning and the post-processing/decision support are automatically performed by the imaging system 101 and the ECS 110. DL applications 116 executed by the plurality of processors 113 of the ECS 110 provide automatic processing of acquired data and at least a draft assessment of reconstructed images.
[0060] FIG. 4 shows a high-level flowchart illustrating an example method 400 for an imaging system express mode according to an embodiment. In particular, method 400 relates to controlling an imaging system to scan a subject with minimal operator intervention or input. Method 400 is described with regard to the systems and components of FIG. 1, though it should be appreciated that the method may be implemented with other systems and components without departing from the scope of the present disclosure. Method 400 may be implemented as executable instructions in non-transitory memory 104 of the imaging system 101 and non-transitory memory 115 of the ECS 110, and may be executed by the processor 103 of the imaging system 101 and the plurality of processors 113 of the ECS 110.
[0061] Method 400 begins at 405. At 405, method 400 receives an indication of an express mode for a scan. Method 400 may receive the indication of the express mode for the scan, for example, via the user interface 107 of the imaging system 101. In some examples, the user interface 107 may include a dedicated button, switch, or another mechanism for indicating a desire to use an express mode of the imaging system 101.
[0062] At 410, method 400 receives an identification of the subject to be scanned. In some examples, method 400 may receive the identification of the subject to be scanned via the user interface 107. For example, an operator of the imaging system 101 may manually input an identification of the subject to be scanned. In other examples, method 400 may automatically identify the subject to be scanned. As an illustrative and non-limiting example, method 400 may obtain an image of a face of the subject via the camera 108, and may employ facial recognition techniques to identify the subject based on the image.
[0063] At 415, method 400 retrieves an EMR for the subject identified at 410, for example by accessing the EMR in the one or more databases 130 via the network 120. At 420, method 400 determines an initial or primary task based on the EMR. As discussed hereinabove, the primary task may indicate the clinical context of the scan and thus may indicate what type of scan should be performed. Method 400 may determine the primary task from the EMR which may include a prescription of a particular type of scan by a physician.
[0064] At 425, method 400 determines a scan protocol for the primary task. For example, method 400 may input the primary task to the protocol engine 106 of the imaging system 101 to determine the scan protocol. In other examples, the primary task may be associated with a particular scan protocol. Furthermore, the scan protocol may be determined or adjusted based on the subject. For example, the scan protocol may prescribe more or less radiation dose according to the age and size of the subject.

[0065] At 430, method 400 positions the subject within the imaging system. In some examples, method 400 may determine the position of the subject on a moveable table relative to the imaging system, for example by processing live images of the subject captured by the camera 108. Method 400 may control a table motor controller to adjust the position of the table, and thus the subject, such that the region of interest to be scanned is within an imaging region of the imaging system (e.g., positioned within a gantry between a radiation source and a radiation detector).
[0066] At 435, method 400 determines a scan range for the subject. Method 400 may determine the scan range for the subject based on live images captured by the camera 108. For example, different subjects have bodies of different sizes, and so the scan range should be adjusted accordingly. Method 400 may therefore evaluate the images captured by the camera 108 to determine the size and proportions of the subject, and may in turn set an appropriate scan range that would scan the ROI of the subject. At 440, method 400 adjusts the scan protocol with the scan range determined at 435.
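One way to sketch the scan-range determination at step 435 is to convert region-of-interest landmarks detected in a camera image into a physical range along the table axis. The calibration factor and table offset are assumptions, as the disclosure does not specify how the scan range is computed from the camera images:

```python
def scan_range_from_landmarks(roi_top_px: int, roi_bottom_px: int,
                              mm_per_px: float,
                              table_offset_mm: float = 0.0) -> tuple:
    """Convert ROI landmarks detected in a camera image (pixel rows) into a
    physical scan range along the table axis, in mm.

    The linear pixel-to-mm calibration and the table offset are hypothetical;
    a real system would derive them from the camera geometry."""
    start = table_offset_mm + roi_top_px * mm_per_px
    end = table_offset_mm + roi_bottom_px * mm_per_px
    if end <= start:
        raise ValueError("ROI bottom must lie below ROI top in the image")
    return (start, end)
```

The resulting range would then be written into the scan protocol at step 440, personalizing the protocol to the size and proportions of the subject.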
[0067] At 445, method 400 performs a scan of the subject according to the adjusted scan protocol to acquire imaging data. For example, method 400 controls the scanner 105 to scan the subject according to the adjusted scan protocol. At 450, method 400 outputs imaging data to the ECS 110. As discussed hereinabove with regard to FIG. 2, in some examples the imaging data may be output to the ECS 110 for processing during the scan. The ECS 110 processes the imaging data using a learning model to generate decision support output. At 455, method 400 receives the decision support output from the ECS 110.
[0068] Continuing at 460, method 400 reconstructs an image from the imaging data acquired at 445. Method 400 may reconstruct the image from the imaging data using any suitable image reconstruction technique. Although FIG. 4 depicts the reconstruction of the image as occurring after outputting the imaging data to the ECS 110, it should be understood that in some examples, the reconstructed image may be transmitted to the ECS 110, and the decision support output received at 455 may be generated by the ECS 110 based on the reconstructed image rather than the raw imaging data. At 465, method 400 outputs the image and the decision support output to one or more of a display device for display, a mass storage for subsequent retrieval and review, and a PACS. Method 400 then ends.

[0069] Thus, a method for an imaging system comprises receiving an identification of a subject to be scanned, automatically determining a personalized scan protocol for the subject, automatically performing a scan of the subject according to the personalized scan protocol to acquire imaging data, and displaying an image and decision support, the image and the decision support automatically generated from the imaging data.
[0070] As mentioned above, post-processing techniques for imaging systems are regularly improved over time. However, once an imaging system is installed, it is difficult to update the imaging system to include improved algorithms and new functionality. As discussed further herein with regard to FIG. 5, an application repository enables remote deployment of software applications for the imaging system. As discussed hereinabove with regard to FIG. 1, the imaging system 101 and/or the ECS 110 may be coupled to the application repository 140 via a network 120. In some examples, the imaging system 101 or the ECS 110 may retrieve a new or updated application 142 from the application repository 140. Alternatively, the application repository 140 may push an application 142 to the ECS 110 and/or the imaging system 101. Further, as discussed herein below, certain applications may only be compatible with particular combinations of an imaging system 101 and an ECS 110. For example, different applications may be displayed/deployable to an updated high-power imaging system coupled to a low-power ECS versus an outdated low-power imaging system coupled to a high-power ECS.
[0071] FIG. 5 shows a high-level flowchart illustrating an example method 500 for managing applications on an edge computing system (ECS) according to an embodiment. In particular, method 500 relates to retrieving a new or updated application for processing images and/or imaging data from an external repository. Method 500 is described with regard to the systems and components of FIG. 1, though it should be appreciated that the method may be implemented with other systems and components without departing from the scope of the present disclosure. Method 500 may be implemented as executable instructions in the non-transitory memory 115 of the ECS 110, and may be executed by one or more of the plurality of processors 113 of the ECS 110.
[0072] Method 500 begins at 505. At 505, method 500 transmits an access request to a repository, such as repository 140. The access request may include an identification of the imaging system 101 and the ECS 110. The repository 140 includes a plurality of applications that may or may not be compatible with one or more of the imaging system 101 and the ECS 110. Therefore, the repository 140 determines, based on the identification of the imaging system 101 and the ECS 110, which applications of the plurality of applications are compatible with the given combination of the imaging system 101 and the ECS 110.
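As a non-limiting illustration, the repository-side compatibility determination at 505 may be sketched as follows. The application records, capability fields (`min_scanner_gen`, `min_ecs_gpus`), and identifiers below are hypothetical and not part of the disclosed system; they merely make concrete how the repository could filter its catalog based on the identified imaging system/ECS combination:

```python
# Illustrative application catalog; each record declares the minimum
# imaging-system generation and ECS GPU count it requires (hypothetical fields).
APPLICATIONS = [
    {"name": "brain_bleed_detect", "min_scanner_gen": 3, "min_ecs_gpus": 1},
    {"name": "liver_lesion_detect", "min_scanner_gen": 2, "min_ecs_gpus": 2},
    {"name": "organ_segmentation", "min_scanner_gen": 1, "min_ecs_gpus": 1},
]

def compatible_applications(scanner_gen, ecs_gpus):
    """Return the subset of applications compatible with the given
    imaging-system generation and ECS GPU count."""
    return [app["name"] for app in APPLICATIONS
            if scanner_gen >= app["min_scanner_gen"]
            and ecs_gpus >= app["min_ecs_gpus"]]

# An updated scanner with a low-power ECS yields a different list than
# an outdated scanner with a high-power ECS, as described above.
print(compatible_applications(scanner_gen=3, ecs_gpus=1))
print(compatible_applications(scanner_gen=1, ecs_gpus=4))
```

The list returned here corresponds to the list of compatible applications received at 510.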
[0073] At 510, method 500 receives a list of compatible applications from the repository 140. The list of compatible applications may comprise a subset of the plurality of applications stored in the repository 140.
[0074] At 515, method 500 receives a selection of an application in the list of compatible applications. For example, an operator of the imaging system 101 may view the list of compatible applications and select a desired application from the list via the user interface 107. The selection may thus be transmitted from the imaging system 101 to the ECS 110. In other examples, the ECS 110 may include its own user interface for enabling the selection of an application, and so the selection of the application may be received via said user interface.
[0075] Additionally or alternatively, the selection of the application may be automatic. For example, if the repository 140 includes an updated version of an application already installed on the ECS 110, method 500 may automatically select the updated version of the application from the list of compatible applications.
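The automatic selection of an updated version may be sketched as a simple version comparison between the applications installed on the ECS and the compatible list; the version tuples and application names below are illustrative assumptions:

```python
# Hypothetical sketch: select any application in the compatible list whose
# version is newer than the version currently installed on the ECS.
def auto_select_updates(installed, compatible):
    """installed and compatible map application name -> (major, minor)."""
    return {name: version for name, version in compatible.items()
            if name in installed and version > installed[name]}

installed = {"liver_lesion_detect": (1, 2), "organ_segmentation": (2, 0)}
compatible = {"liver_lesion_detect": (1, 3), "organ_segmentation": (2, 0),
              "brain_bleed_detect": (1, 0)}

# Only liver_lesion_detect has a newer compatible version, so it is
# selected automatically without operator input.
print(auto_select_updates(installed, compatible))
```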
[0076] At 520, method 500 retrieves the selected application from the repository 140 and installs it locally in non-transitory memory 115. The DL application 116 in non-transitory memory 115 of the ECS 110 depicted in FIG. 1 may thus comprise an application retrieved from the repository 140.
[0077] At 525, method 500 generates a decision support calculation from imaging data received from the imaging system 101 with the application. For example, if the application comprises a segmentation application, method 500 may segment an image reconstructed from the imaging data acquired with the imaging system 101, and the segmentation of the image comprises the decision support calculation. At 530, method 500 outputs the decision support calculation to the imaging system 101. Method 500 then ends.

[0078] Thus, a method for an ECS communicatively coupled to and positioned external to an imaging system comprises receiving, from an application repository communicatively coupled to the ECS via a network, a list of compatible deep learning applications, retrieving, from the application repository, a deep learning application selected from the list, receiving imaging data from the imaging system, processing the imaging data with the deep learning application to generate a decision support calculation, and outputting the decision support calculation to the imaging system.
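As a minimal stand-in for the segmentation step at 525: a deployed DL application would run a trained model, but a simple intensity threshold (purely illustrative, not the disclosed method) makes the data flow concrete, taking an image in and producing a binary mask as the decision support calculation:

```python
# Toy segmentation: mark pixels whose intensity exceeds a threshold.
# A real DL application would replace this with model inference.
def segment(image, threshold):
    """Return a binary mask: 1 where pixel intensity exceeds threshold."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

image = [[10, 80, 75],
         [12, 90, 11],
         [ 9, 14, 13]]
mask = segment(image, threshold=50)
print(mask)  # bright pixels are flagged as the segmented region
```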
[0079] FIG. 6 illustrates an exemplary CT system 600 configured to allow fast and iterative image reconstruction. Particularly, the CT system 600 is configured to image a subject 612 such as a patient, an inanimate object, one or more manufactured parts, and/or foreign objects such as dental implants, stents, and/or contrast agents present within the body. In one embodiment, the CT system 600 includes a gantry 602, which in turn, may further include at least one X-ray radiation source 604 configured to project a beam of X-ray radiation 606 for use in imaging the subject 612. Specifically, the X-ray radiation source 604 is configured to project the X-rays 606 towards a detector array 608 positioned on the opposite side of the gantry 602. Although FIG. 6 depicts only a single X-ray radiation source 604, in certain embodiments, multiple X-ray radiation sources may be employed to project a plurality of X-rays 606 for acquiring projection data corresponding to the subject 612 at different energy levels.
[0080] Though a CT system is described by way of example, it should be understood that the present techniques may also be useful when applied to images acquired using other imaging modalities, such as tomosynthesis, MRI, C-arm angiography, and so forth. The present discussion of a CT imaging modality is provided merely as an example of one suitable imaging modality.
[0081] In certain embodiments, the CT system 600 is communicatively coupled to an edge computing system (ECS) 610 configured to process projection data with DL techniques. For example, as discussed above, the CT system 600 may transmit projection data as it is acquired to the ECS 610, and in turn the ECS 610 processes the projection data to provide decision support alongside images reconstructed from the projection data.
[0082] In some known CT imaging system configurations, a radiation source projects a fan-shaped beam which is collimated to lie within an X-Y plane of a Cartesian coordinate system and generally referred to as an “imaging plane.” The radiation beam passes through an object being imaged, such as the patient or subject 612. The beam, after being attenuated by the object, impinges upon an array of radiation detectors. The intensity of the attenuated radiation beam received at the detector array is dependent upon the attenuation of the radiation beam by the object. Each detector element of the array produces a separate electrical signal that is a measurement of the beam attenuation at the detector location. The attenuation measurements from all the detectors are acquired separately to produce a transmission profile.
[0083] In some CT systems, the radiation source and the detector array are rotated with a gantry within the imaging plane and around the object to be imaged such that an angle at which the radiation beam intersects the object constantly changes. A group of radiation attenuation measurements, i.e., projection data, from the detector array at one gantry angle is referred to as a “view.” A “scan” of the object includes a set of views made at different gantry angles, or view angles, during one revolution of the radiation source and detector. It is contemplated that the benefits of the methods described herein accrue to medical imaging modalities other than CT, so as used herein the term view is not limited to the use as described above with respect to projection data from one gantry angle. The term “view” is used to mean one data acquisition whenever there are multiple data acquisitions from different angles, whether from a CT, PET, or SPECT acquisition, and/or any other modality including modalities yet to be developed as well as combinations thereof in fused embodiments.
[0084] In an axial scan, the projection data is processed to reconstruct an image that corresponds to a two-dimensional slice taken through the object. One method for reconstructing an image from a set of projection data is referred to in the art as the filtered back-projection (FBP) technique. Transmission and emission tomography reconstruction techniques also include statistical iterative methods such as maximum likelihood expectation maximization (MLEM) and ordered-subsets expectation maximization (OSEM) techniques, as well as other iterative reconstruction techniques. This process converts the attenuation measurements from a scan into integers called “CT numbers” or “Hounsfield units,” which are used to control the brightness of a corresponding pixel on a display device.

[0085] To reduce the total scan time, a “helical” scan may be performed. To perform a helical scan, the patient is moved while the data for the prescribed number of slices is acquired. Such a system generates a single helix from a cone beam helical scan. The helix mapped out by the cone beam yields projection data from which images in each prescribed slice may be reconstructed.
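The conversion of attenuation measurements into the CT numbers (Hounsfield units) mentioned in paragraph [0084] follows the standard definition HU = 1000 × (μ − μ_water) / μ_water, where μ is the reconstructed linear attenuation coefficient. A minimal sketch, with an approximate (illustrative) attenuation coefficient for water:

```python
# Convert a linear attenuation coefficient to a CT number (Hounsfield unit).
MU_WATER = 0.195  # approximate linear attenuation of water (~120 kVp), 1/cm

def to_hounsfield(mu, mu_water=MU_WATER):
    return round(1000.0 * (mu - mu_water) / mu_water)

print(to_hounsfield(0.0))       # air: -1000 HU
print(to_hounsfield(MU_WATER))  # water: 0 HU by definition
```

These integer values are then mapped to display brightness as described above.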
[0086] As used herein, the phrase “reconstructing an image” is not intended to exclude embodiments of the present disclosure in which data representing an image is generated but a viewable image is not. Therefore, as used herein the term “image” broadly refers to both viewable images and data representing a viewable image. However, many embodiments generate (or are configured to generate) at least one viewable image.
[0087] Additionally or alternatively, the CT system 600 may be communicatively coupled to a “cloud” network 620 through communication with the ECS 610.
[0088] FIG. 7 illustrates an exemplary imaging system 700 similar to the CT system 600 of FIG. 6. The imaging system 700 may comprise the imaging system 101 described hereinabove with regard to FIG. 1. In one embodiment, the imaging system 700 includes the detector array 608 (see FIG. 6). The detector array 608 further includes a plurality of detector elements 702 that together sense the X-ray beams 606 (see FIG. 6) that pass through a subject 704 such as a patient to acquire corresponding projection data. Accordingly, in one embodiment, the detector array 608 is fabricated in a multi-slice configuration including the plurality of rows of cells or detector elements 702. In such a configuration, one or more additional rows of the detector elements 702 are arranged in a parallel configuration for acquiring the projection data.
[0089] In certain embodiments, the imaging system 700 is configured to traverse different angular positions around the subject 704 for acquiring desired projection data. Accordingly, the gantry 602 and the components mounted thereon may be configured to rotate about a center of rotation 706 for acquiring the projection data, for example, at different energy levels. Alternatively, in embodiments where a projection angle relative to the subject 704 varies as a function of time, the mounted components may be configured to move along a general curve rather than along a segment of a circle.
[0090] As the X-ray radiation source 604 and the detector array 608 rotate, the detector array 608 collects data of the attenuated X-ray beams. The data collected by the detector array 608 undergoes pre-processing and calibration to condition the data to represent the line integrals of the attenuation coefficients of the scanned subject 704. The processed data are commonly called projections.
[0091] In dual or multi-energy imaging, two or more sets of projection data are typically obtained for the imaged object at different tube peak kilovoltage (kVp) levels, which change the peak and spectrum of energy of the incident photons comprising the emitted X-ray beams, or, alternatively, at a single tube kVp level or spectrum with an energy-resolving detector of the detector array 608.
[0092] The acquired sets of projection data may be used for basis material decomposition (BMD). During BMD, the measured projections are converted to a set of density line-integral projections. The density line-integral projections may be reconstructed to form a density map or image of each respective basis material, such as bone, soft tissue, and/or contrast agent maps. The density maps or images may be, in turn, associated to form a volume rendering of the basis material, for example, bone, soft tissue, and/or contrast agent, in the imaged volume.
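With two energy measurements and two basis materials, the basis material decomposition (BMD) step described above reduces, per ray, to solving a 2×2 linear system that maps the measured projection pair to the two density line integrals. The mass-attenuation coefficients below are illustrative, not calibrated values:

```python
# Illustrative BMD sketch: rows are energies (low, high), columns are
# basis materials (e.g., soft tissue/water, bone). Values are made up.
A = [[0.20, 0.45],
     [0.16, 0.25]]

def decompose(p_low, p_high):
    """Solve A @ d = p for the two basis-material density line integrals d
    by direct 2x2 inversion (Cramer's rule)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    d_water = (p_low * A[1][1] - A[0][1] * p_high) / det
    d_bone = (A[0][0] * p_high - p_low * A[1][0]) / det
    return d_water, d_bone

# Forward-project known basis densities, then recover them exactly.
d_true = (2.0, 0.5)
p = (A[0][0] * d_true[0] + A[0][1] * d_true[1],
     A[1][0] * d_true[0] + A[1][1] * d_true[1])
print(decompose(*p))
```

Reconstructing the resulting density line-integral projections for each material then yields the per-material density maps described above.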
[0093] Once reconstructed, the basis material image produced by the imaging system 700 reveals internal features of the subject 704, expressed in the densities of the two basis materials. The density image may be displayed to show these features. In traditional approaches to diagnosis of medical conditions, such as disease states, and more generally of medical events, a radiologist or physician would consider a hard copy or display of the density image to discern characteristic features of interest. Such features might include lesions, sizes and shapes of particular anatomies or organs, and other features that would be discernable in the image based upon the skill and knowledge of the individual practitioner.
[0094] In one embodiment, the imaging system 700 includes a control mechanism 708 to control movement of the components such as rotation of the gantry 602 and the operation of the X-ray radiation source 604. In certain embodiments, the control mechanism 708 further includes an X-ray controller 710 configured to provide power and timing signals to the X-ray radiation source 604. Additionally, the control mechanism 708 includes a gantry motor controller 712 configured to control a rotational speed and/or position of the gantry 602 based on imaging requirements.

[0095] In certain embodiments, the control mechanism 708 further includes a data acquisition system (DAS) 714 configured to sample analog data received from the detector elements 702 and convert the analog data to digital signals for subsequent processing. The data sampled and digitized by the DAS 714 is transmitted to a computer or computing device 716. In one example, the computing device 716 stores the data in a storage device such as mass storage 718. The mass storage 718, for example, may include a hard disk drive, a floppy disk drive, a compact disk-read/write (CD-R/W) drive, a Digital Versatile Disc (DVD) drive, a flash drive, and/or a solid-state storage drive.
[0096] Additionally, the computing device 716 provides commands and parameters to one or more of the DAS 714, the X-ray controller 710, and the gantry motor controller 712 for controlling system operations such as data acquisition and/or processing. In certain embodiments, the computing device 716 controls system operations based on operator input. The computing device 716 receives the operator input, for example, including commands and/or scanning parameters via an operator console 720 operatively coupled to the computing device 716. The operator console 720 may include a keyboard (not shown) and/or a touchscreen to allow the operator to specify the commands and/or scanning parameters.
[0097] Although FIG. 7 illustrates only one operator console 720, more than one operator console may be coupled to the imaging system 700, for example, for inputting or outputting system parameters, requesting examinations, and/or viewing images. Further, in certain embodiments, the imaging system 700 may be coupled to multiple displays, printers, workstations, and/or similar devices located either locally or remotely, for example, within an institution or hospital, or in an entirely different location via one or more configurable wired and/or wireless networks such as the Internet and/or virtual private networks.
[0098] In one embodiment, for example, the imaging system 700 either includes or is coupled to a picture archiving and communications system (PACS) 724. In an exemplary implementation, the PACS 724 is further coupled to a remote system such as a radiology department information system, hospital information system, and/or to an internal or external network (not shown) to allow operators at different locations to supply commands and parameters and/or gain access to the image data.

[0099] The computing device 716 uses the operator-supplied and/or system-defined commands and parameters to operate a table motor controller 726, which in turn, may control a table 728 which may comprise a motorized table. Particularly, the table motor controller 726 moves the table 728 for appropriately positioning the subject 704 in the gantry 602 for acquiring projection data corresponding to the target volume of the subject 704.
[0100] As previously noted, the DAS 714 samples and digitizes the projection data acquired by the detector elements 702. Subsequently, an image reconstructor 730 uses the sampled and digitized X-ray data to perform high-speed reconstruction. Although FIG. 7 illustrates the image reconstructor 730 as a separate entity, in certain embodiments, the image reconstructor 730 may form part of the computing device 716. Alternatively, the image reconstructor 730 may be absent from the imaging system 700 and instead the computing device 716 may perform one or more of the functions of the image reconstructor 730. Moreover, the image reconstructor 730 may be located locally or remotely, and may be operatively connected to the imaging system 700 using a wired or wireless network. Particularly, one exemplary embodiment may use computing resources in a “cloud” network cluster for the image reconstructor 730.
[0101] In one embodiment, the image reconstructor 730 stores the images reconstructed in the storage device or mass storage 718. Alternatively, the image reconstructor 730 transmits the reconstructed images to the computing device 716 for generating useful patient information for diagnosis and evaluation. In certain embodiments, the computing device 716 transmits the reconstructed images and/or the patient information to a display 732 communicatively coupled to the computing device 716 and/or the image reconstructor 730.
[0102] The various methods and processes described further herein may be stored as executable instructions in non-transitory memory on a computing device in imaging system 700. For example, image reconstructor 730 may include such executable instructions in non-transitory memory, and may apply the methods described herein to reconstruct an image from scanning data. In another embodiment, computing device 716 may include the instructions in non-transitory memory, and may apply the methods described herein, at least in part, to a reconstructed image after receiving the reconstructed image from image reconstructor 730. In yet another embodiment, the methods and processes described herein may be distributed across image reconstructor 730 and computing device 716.
[0103] In one embodiment, the display 732 allows the operator to evaluate the imaged anatomy. The display 732 may also allow the operator to select a volume of interest (VOI) and/or request patient information, for example, via a graphical user interface (GUI) for a subsequent scan or processing.
[0104] FIG. 8 shows a block schematic diagram of an example system 800 for extending the capabilities of an imaging system or other device 802 according to an embodiment, shown in more detail than the system of FIG. 1. The system 800 may include a plurality of imaging systems 802, such as any suitable non-invasive imaging system, including but not limited to a computed tomography (CT) imaging system, a positron emission tomography (PET) imaging system, a magnetic resonance (MR) imaging system, an ultrasound system, and combinations thereof (e.g., a multi-modality imaging system such as a PET/CT or PET/MR imaging system), or other devices 802, such as an advanced workstation (AW) for visualizing images, a PACS, and mobile client devices, such as a tablet, iPad, smart phone, iPhone, etc. The plurality of imaging systems and other devices 802 may be communicatively coupled to an edge computing system 804 via a network. The plurality of imaging systems and other devices 802 and the edge computing system 804 may be communicatively coupled to a cloud network 806. Communication between the plurality of imaging systems and other devices 802, the ECS 804, and the cloud network 806 is secure.
[0105] The ECS 804 may also be known as an on-premises cloud and may include data 808 from a variety of inputs and sources, including EMR, HIS/RIS data, holistic data, etc., which may be communicatively coupled to a DL interface 810 and DL training 812. The ECS 804 may further include at least one edge host 814, which may include at least one edge platform 816 and an application store 818 with a plurality of applications 820.
[0106] The cloud network 806 may include data 822 from a variety of inputs and sources and DL training 824. Additionally or alternatively, the cloud network 806 may include an application store with a plurality of applications 826. The cloud network 806 may further include an artificial intelligence (AI) interface for generating learning models or other models useful for system 800.

[0107] Data from the system 800 may be shared and/or stored between the plurality of imaging systems and other devices 802, the ECS 804, and the cloud network 806. Post-processing may be accomplished on data that is sent to the ECS prior to being sent to a DICOM archive. Quantification and segmentation may occur on imaging systems prior to sending to PACS.
[0108] Certain examples provide core processing ability organized into units or modules that can be deployed in a variety of locations. Off-device processing can be leveraged to provide a micro-cloud, mini-cloud, and/or a global cloud, etc. For example, the micro-cloud provides a one-to-one configuration with an imaging device console targeted for ultra-low latency processing (e.g., stroke, etc.) for customers that do not have cloud connectivity, etc. The mini-cloud is deployed on a customer network, etc., targeted for low-latency processing for customers who prefer to keep their data in-house, for example. The global cloud is deployed across customer organizations for high-performance computing and management of information technology infrastructure with operational excellence.
[0109] In certain other examples, off-device processing engine(s) (e.g., an acquisition engine, a reconstruction engine, a diagnosis engine, etc., and their associated deployed deep learning network devices, etc.), may be included in system 800. For example, a purpose for an exam, electronic medical record information, heart rate and/or heart rate variability, blood pressure, weight, visual assessment of prone/supine, head first or feet first, etc., can be used to determine one or more acquisition settings such as default field of view (DFOV), center, pitch, orientation, contrast injection rate, contrast injection timing, voltage, current, etc., thereby providing a “one-click” imaging device. Similarly, kernel information, slice thickness, slice interval, etc., can be used to determine one or more reconstruction parameters including image quality feedback, for example. Acquisition feedback, reconstruction feedback, etc., can be provided to the system design engine to provide real-time (or substantially real-time given processing and/or transmission delay) health analytics for the imaging device as represented by one or more digital models (e.g., deep learning models, machine models, digital twin, etc.). The digital model(s) can be used to predict component health for the imaging device in real-time (or substantially real time given a processing and/or transmission delay).

[0110] FIG. 9 shows a flow chart illustrating a method 900 for generating decision support output using one or more deep learning (DL) applications. Method 900 is described with regard to the systems and components of FIG. 1, though it should be appreciated that the method may be implemented with other systems and components without departing from the scope of the present disclosure. Method 900 may be implemented as executable instructions in the non-transitory memory 115 of the ECS 110, and may be executed by one or more of the plurality of processors 113 of the ECS 110.
[0111] At 905, an exam type and secondary task for a decision support output related to an imaging scan are received. As explained above, when an imaging scan is performed in order to diagnose a patient condition, the operator of the scanner may input an initial or primary task (as explained above with respect to FIG. 2) or the initial/primary task may be obtained from an EMR of the patient (as explained above with respect to FIG. 4). The initial/primary task may include an indication of an exam type and may include an associated secondary task. The exam type may include the target anatomy to be scanned and certain features of the scan used to carry out the exam, such as a brain bleed exam, liver lesion exam, etc. The associated secondary task may indicate the decision support that is to be generated and output along with the images obtained during the imaging scan. The decision support may include bleed detection, lesion detection, organ segmentation, etc.
[0112] At 910, an application is selected to be executed in order to generate the decision support output. The application that is selected may be a deep learning application (DL application) that utilizes the imaging information sent from the scanner (as explained above with respect to FIG. 2, for example) as an input into a DL model. The DL application then generates the decision support output using the DL model and the input imaging information. The DL application may be selected based on the secondary task and exam type. For example, if the exam is a brain exam that is to be performed to detect presence of bleeding, a brain bleed DL application may be selected. If the exam is a liver scan that is to be performed to detect presence of a lesion, a liver lesion DL application may be selected. In some examples, the same DL application may apply to more than one exam type. For example, a lesion detection DL application may be selected for a liver lesion exam and for a kidney lesion detection exam. In other examples, separate DL applications may be selected for each different exam type/secondary task combination. In such an example, different DL applications may be selected for the liver and kidney lesion detection exams.
[0113] The appropriate DL application may be selected according to a suitable mechanism. In some embodiments, the ECS may store a look-up table in memory that indexes DL applications as a function of exam type and secondary task. Other mechanisms for selecting a DL application are possible. For example, rather than the ECS selecting an appropriate DL application, the operator may select a desired DL application and an indication of the selected DL application may be sent to the ECS along with the exam type and secondary task. In some embodiments, a selection of a specific DL application may automatically trigger a sequence of other DL applications for pre-processing or post-processing of the data for the main task. For example, selection of a DL application for liver lesion detection may trigger an anatomy localization DL application that may be executed to narrow down acquired images to only the images that include the liver.
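The look-up-table mechanism described above may be sketched as follows; the keys and application names are illustrative assumptions, not part of the disclosure:

```python
# Hypothetical look-up table indexing DL applications as a function of
# (exam type, secondary task), as the ECS may store in memory.
DL_APP_LUT = {
    ("head_ct_non_contrast", "bleed_detection"): "brain_bleed_app",
    ("liver_ct_multiphase", "lesion_detection"): "liver_lesion_app",
    ("kidney_ct", "lesion_detection"): "kidney_lesion_app",
    ("abdomen_ct", "organ_segmentation"): "organ_segmentation_app",
}

def select_dl_application(exam_type, secondary_task):
    try:
        return DL_APP_LUT[(exam_type, secondary_task)]
    except KeyError:
        raise ValueError(f"no DL application for {exam_type}/{secondary_task}")

print(select_dl_application("head_ct_non_contrast", "bleed_detection"))
```

An operator-driven selection would simply bypass the table and pass the chosen application name to the ECS directly.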
[0114] At 915, a request for additional input is optionally sent. The additional input may include subject and/or scanner metadata or other information that may be utilized by the selected DL application to generate the decision support output. Accordingly, the request for additional input may be sent to the EMR, PACS, or other database where the additional input is stored. For example, as explained above with respect to FIG. 3, the DL applications may utilize various information in addition to the images/imaging data obtained during the imaging scan of the subject as inputs to the respective DL models, including but not limited to subject information (e.g., demographics, medical history) and scanner information. The scanner information may include type of scanner (such as CT scanner or ultrasound), imaging category/type (such as contrast imaging, non-contrast imaging, Doppler, B-mode), scanner settings (such as tube current and tube voltage), and so forth. The additional input may further include lab test results for the subject, previous diagnostic imaging scan results, and complementary images from other imaging modalities. In some embodiments, what additional input is requested and/or used by the selected DL application may be based on the exam type and secondary task. For example, the ECS may store a look-up table that stores the additional input to be requested as a function of exam type and secondary task. As one non-limiting example, if the exam type is a non-contrast head CT scan to be analyzed by a brain bleed detection DL application, the additional input that is requested may include lab reports, a neurophysiological examination report, the X-ray tube kVp used during the scan, and/or the type of CT scan.
As another non-limiting example, if the exam type is a multiphase liver study with contrast administered at different timings to be analyzed by a liver lesion detection module, the additional input may include the type of scan and X-ray tube kVp used during the scan, contrast timing, volume of contrast injected, ejection fraction of the heart, and/or previous CT studies for the patient.
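The second look-up table described above, mapping exam type and secondary task to the additional inputs to request, may be sketched in the same style; the entries mirror the two non-limiting examples in the text, and all key and field names are illustrative:

```python
# Hypothetical look-up table: (exam type, secondary task) -> additional
# inputs the ECS should request from the EMR, PACS, or other database.
ADDITIONAL_INPUT_LUT = {
    ("head_ct_non_contrast", "bleed_detection"):
        ["lab_reports", "neurophysiological_exam_report",
         "tube_kvp", "ct_scan_type"],
    ("liver_ct_multiphase", "lesion_detection"):
        ["ct_scan_type", "tube_kvp", "contrast_timing",
         "contrast_volume", "ejection_fraction", "prior_ct_studies"],
}

def additional_inputs(exam_type, secondary_task):
    # Unknown combinations simply request no additional input.
    return ADDITIONAL_INPUT_LUT.get((exam_type, secondary_task), [])

print(additional_inputs("liver_ct_multiphase", "lesion_detection"))
```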
[0115] At 920, images and/or imaging data are received from the scanner. As explained above with respect to FIGS. 2 and 3, the scanner may send reconstructed images and/or raw imaging data to the ECS during and/or after the imaging scan. At 925, decision support output is generated using the selected DL application. To generate the decision support output, the selected DL application may enter the received images/image data as input into the DL model executed by the DL application, along with any additional information dictated by the DL application. The DL model may then output the decision support. As explained above, the decision support may include organ segmentation, anatomy identification, an indication of whether or not bleeding, lesions, fractures, etc., are detected, and so forth. At 930, the decision support output is sent to the requesting device, such as the scanner that sent the imaging information, and/or another device, such as the PACS, a care provider device, or other suitable device. As explained above, the decision support output may be displayed or otherwise presented along with the reconstructed images from the imaging scan. The decision support output may support a diagnosis or determination of a clinical finding via the reconstructed images. The decision support output and reconstructed images may be analyzed by one or more clinicians, such as a radiologist. The one or more clinicians may accept or reject the decision support output. For example, if the decision support output includes an indication of a lesion on a liver of a subject, the clinician may agree with the lesion detection and accept the output or disagree that the identified lesion is actually a lesion and reject the output. In examples where the decision support output includes organ segmentation, the clinician may edit or update the contours defining the organs, where the contours are generated by the DL application.
Further, the clinician may generate a final report with detailed and, if indicated, edited findings (e.g., updated contours, rejected decision support, etc.). The final report may be saved in the EMR of the subject.
[0116] At 935, the final report, which may include edited results, is received (e.g., from the EMR database). The final report may be analyzed by the ECS (e.g., by a training data aggregator module of the ECS) in order to identify the exam type and DL application used to generate the decision support included in the report, which may be labeled/stored with the final report. At 940, the labeled report (e.g., the final report, including edited results where appropriate and labeled with exam type and DL application) is placed into a training, testing, and/or validation dataset. Each dataset may be stored on the ECS. Further, each dataset may be specific to a DL application. For example, each DL application may have a specific training dataset, testing dataset, and validation dataset. Thus, the labeled report may be placed into a dataset based on the DL application used to generate the decision support listed in that report. Within the datasets for the specific DL application, the labeled report may be randomly or semi-randomly assigned to one of the training, testing, and/or validation datasets, at least in some embodiments.
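The per-application dataset assignment at 940 may be sketched as follows; the split ratios and seeding are illustrative assumptions, not part of the disclosure:

```python
import random

# (DL application, split) -> list of labeled reports
DATASETS = {}

def assign_report(report, dl_app, rng, ratios=(0.7, 0.15, 0.15)):
    """Semi-randomly route a labeled report into the training, testing,
    or validation dataset for its DL application."""
    r = rng.random()
    if r < ratios[0]:
        split = "training"
    elif r < ratios[0] + ratios[1]:
        split = "testing"
    else:
        split = "validation"
    DATASETS.setdefault((dl_app, split), []).append(report)
    return split

rng = random.Random(0)  # seeded here only for reproducibility
for i in range(10):
    assign_report({"id": i}, "liver_lesion_app", rng)

# Every report lands in exactly one split of its application's datasets.
print(sum(len(v) for v in DATASETS.values()))
```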
[0117] At 945, method 900 includes determining if a training dataset includes a threshold number of reports. The threshold number of reports may be a number sufficient to accurately reflect functionality of the DL model executed by the DL application, such as 100 reports. Thus, if a training dataset for a given DL application reaches 100 reports, the answer at 945 includes "YES"; otherwise, the answer at 945 includes "NO." If the answer at 945 is "NO," method 900 returns. If the answer is "YES," method 900 proceeds to 950 to re-train that DL application with the new data in the dataset. In this way, the DL model may be fine-tuned by re-training with the new data. The updated weights may be deployed locally for a site-tuned model. For example, the DL model executed by the DL application on the ECS may include weighted connections, decision trees, etc., and the DL model may be re-trained so that the weights are updated to better fit the new data. As more and more data is collected, the DL model may become more accurate. By re-training the DL model locally (e.g., on the ECS) rather than globally via a central device such as the cloud, site-specific preferences in deploying the DL model may be preserved. For example, a particular medical facility may prefer an aggressive approach to lesion detection to ensure false negative detections are reduced, while a different medical facility may prefer to reduce false positives by taking a less aggressive approach to detecting lesions. By allowing local re-training of the appropriate DL model, such preferences may be established and preserved. Further, local re-training of the DL model may be useful for tailoring the DL model to a specific patient demographic in certain regions of the world. For example, if the majority of the patient population at a hospital is obese, the DL models may need to be tuned to the imaging characteristics of that patient demographic.
However, at least in some examples, a subset or all of the data and/or the updated weights may be sent to a central device (e.g., the cloud) to create an updated global model along with data from other sites, as indicated at 955. The data sent back to the cloud may include images and other specific datasets that may be used for training the particular model. The updated global models may also be sent back to update the locally-stored DL application models.
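The threshold-triggered re-training of [0117] reduces to a simple gate, sketched below. The callables `retrain_local` and `upload_to_cloud` are hypothetical stand-ins for the site-tuned re-training and the optional contribution to a global model; only the 100-report threshold comes from the text.

```python
REPORT_THRESHOLD = 100  # threshold number of reports given as an example in [0117]


def maybe_retrain(training_dataset, retrain_local, upload_to_cloud=None):
    """Re-train the site-tuned DL model once enough new reports accumulate.

    Returns the updated weights, or None if the threshold is not yet met
    (corresponding to the "NO" branch at 945, where method 900 returns).
    """
    if len(training_dataset) < REPORT_THRESHOLD:
        return None
    # "YES" branch at 945: proceed to 950 and re-train with the new data.
    updated_weights = retrain_local(training_dataset)
    if upload_to_cloud is not None:
        # Optionally send data and/or weights to a central device (955)
        # to contribute to an updated global model.
        upload_to_cloud(training_dataset, updated_weights)
    return updated_weights
```

Keeping `upload_to_cloud` optional mirrors the design point above: sites may re-train purely locally to preserve site-specific preferences, or additionally contribute to the global model.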
[0118] A technical effect of the disclosure is the automatic scanning of a subject with minimal input or intervention from a human operator. Another technical effect of the disclosure is the scanning of a patient in accordance with a scan protocol automatically selected by evaluating an EMR of the patient. Yet another technical effect of the disclosure is the automatic creation of a clinical report for a diagnostic image. Another technical effect of the disclosure is the automatic positioning of a subject within an imaging system.
[0119] In one embodiment, a method for an imaging system comprises receiving an identification of a subject to be scanned, automatically determining a personalized scan protocol for the subject, automatically performing a scan of the subject according to the personalized scan protocol to acquire imaging data, and displaying an image and decision support, the image and the decision support automatically generated from the imaging data.
[0120] In a first example of the method, automatically determining a personalized scan protocol for the subject comprises: retrieving an electronic medical record (EMR) for the subject; determining a primary task for the scan based on the EMR; and selecting a scan protocol according to the primary task. In a second example of the method optionally including the first example, automatically determining a personalized scan protocol for the subject further comprises: determining, with a camera, a scan range for the subject; and adjusting the selected scan protocol with the scan range to create the personalized scan protocol. In a third example of the method optionally including one or more of the first and second examples, the method further comprises automatically positioning the subject within the imaging system based on live images captured by the camera. In a fourth example of the method optionally including one or more of the first through third examples, the method further comprises transmitting the imaging data to an edge computing system (ECS) communicatively coupled to and positioned external to the imaging system, and receiving, from the ECS, the decision support generated by the ECS from the imaging data. In a fifth example of the method optionally including one or more of the first through fourth examples, the method further comprises transmitting a secondary task associated with the primary task to the ECS, wherein the ECS generates the decision support according to the secondary task. In a sixth example of the method optionally including one or more of the first through fifth examples, the ECS generates the decision support by inputting the imaging data into a deep learning model. In a seventh example of the method optionally including one or more of the first through sixth examples, the decision support includes a clinical diagnosis.
In an eighth example of the method optionally including one or more of the first through seventh examples, receiving the identification of the subject to be scanned comprises processing live images of the subject captured by the camera with facial recognition techniques to automatically identify the subject. In a ninth example of the method optionally including one or more of the first through eighth examples, the method further comprises reconstructing the image from the imaging data.
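The express-mode method of [0119]–[0120] can be summarized as a single pipeline. The sketch below is illustrative only: every callable name and dictionary key is an assumption standing in for the steps named in the embodiment (subject identification, EMR-based protocol selection, camera-derived scan range, scanning, reconstruction, and decision support).

```python
def express_mode_scan(identify, get_emr, select_protocol, get_scan_range,
                      perform_scan, reconstruct, get_decision_support, display):
    """One pass of the express-mode workflow described in [0119]-[0120]."""
    subject_id = identify()                    # e.g., facial recognition on camera images
    emr = get_emr(subject_id)                  # retrieve the subject's EMR
    primary_task = emr["primary_task"]         # primary task determined from the EMR
    protocol = select_protocol(primary_task)   # scan protocol for the primary task
    protocol["scan_range"] = get_scan_range()  # camera-determined scan range
    imaging_data = perform_scan(protocol)      # automatic scan per personalized protocol
    image = reconstruct(imaging_data)
    decision_support = get_decision_support(imaging_data)  # e.g., from the ECS
    display(image, decision_support)
    return image, decision_support
```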
[0121] In another embodiment, a system comprises an imaging system including at least a scanner and a processor, the processor configured to reconstruct an image from data acquired during a scan of a subject via the scanner, and a computing device communicatively coupled to and positioned external to the imaging system, the computing device configured to generate a decision support calculation based on the data, wherein the processor is configured with executable instructions that when executed cause the processor to: retrieve, from a database via a network, an electronic medical record (EMR) for the subject; determine a primary task based on the EMR; determine a scan protocol based on the primary task; position the subject within the imaging system; determine a scan range for the subject; adjust the scan protocol with the scan range; perform the scan to acquire the data; and output the image reconstructed from the data and the decision support calculation generated based on the data.
[0122] In a first example of the system, the processor automatically executes the executable instructions responsive to receiving an indication to operate in an express mode. In a second example of the system optionally including the first example, the imaging system further includes a display device, wherein the processor is configured to display, via the display device, the image and the decision support calculation. In a third example of the system optionally including one or more of the first and second examples, the imaging system further comprises a camera configured to capture live images of the subject within the imaging system. In a fourth example of the system optionally including one or more of the first through third examples, the subject is automatically positioned within the imaging system based on the live images captured by the camera. In a fifth example of the system optionally including one or more of the first through fourth examples, the scan range is determined from the live images captured by the camera. In a sixth example of the system optionally including one or more of the first through fifth examples, the subject is automatically identified based on the live images captured by the camera.
[0123] In yet another embodiment, an imaging system comprises an X-ray source that emits a beam of X-rays towards a subject to be imaged, a detector that receives the X-rays attenuated by the subject, a data acquisition system (DAS) operably connected to the detector, and a computing device operably connected to the DAS and configured with executable instructions in non-transitory memory that when executed cause the computing device to: retrieve, from a database via a network, an electronic medical record (EMR) for the subject; automatically determine a scan protocol based on the EMR; automatically position a region of interest (ROI) of the subject between the X-ray source and the detector; determine a scan range for the ROI; adjust the scan protocol with the scan range; control the X-ray source to perform the scan; receive, via the DAS, projection data acquired during the scan; and output an image reconstructed from the projection data and a decision support calculation generated from the projection data.
[0124] In a first example of the imaging system, the computing device is further configured with executable instructions in non-transitory memory that when executed cause the computing device to transmit the projection data to an edge computing system (ECS) communicatively coupled to and positioned external to the imaging system, and receive, from the ECS, the decision support calculation generated by the ECS with a deep learning model from the projection data. In a second example of the imaging system optionally including the first example, the imaging system further comprises a table motor controller configured to adjust a position of a table upon which the subject is placed, wherein automatically positioning the ROI of the subject between the X-ray source and the detector comprises commanding the table motor controller to adjust the position of the table such that the ROI is positioned between the X-ray source and the detector.
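The automatic ROI positioning via the table motor controller in [0123]–[0124] can be sketched as a bounded closed-loop move. The controller interface (`move_by`), step bound, and tolerance are all assumptions; the disclosure says only that the table is commanded so the ROI lies between the X-ray source and the detector.

```python
def position_roi(table_controller, roi_offset_mm: float, tolerance_mm: float = 1.0):
    """Command a table motor controller until the ROI sits between the
    X-ray source and the detector.

    roi_offset_mm: signed distance from the ROI to the imaging plane,
    e.g., estimated from live camera images of the subject.
    """
    while abs(roi_offset_mm) > tolerance_mm:
        # Bound each commanded move to an assumed 50 mm step.
        step = max(min(roi_offset_mm, 50.0), -50.0)
        table_controller.move_by(step)  # assumed controller API
        roi_offset_mm -= step
    return roi_offset_mm  # residual offset, within tolerance
```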
[0125] As used herein, an element or step recited in the singular and preceded by the word "a" or "an" should be understood as not excluding plural of said elements or steps, unless such exclusion is explicitly stated. Furthermore, references to "one embodiment" of the present invention are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features. Moreover, unless explicitly stated to the contrary, embodiments "comprising," "including," or "having" an element or a plurality of elements having a particular property may include additional such elements not having that property. The terms "including" and "in which" are used as the plain-language equivalents of the respective terms "comprising" and "wherein." Moreover, the terms "first," "second," and "third," etc. are used merely as labels, and are not intended to impose numerical requirements or a particular positional order on their objects.
[0126] This written description uses examples to disclose the invention, including the best mode, and also to enable a person of ordinary skill in the relevant art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined by the claims, and may include other examples that occur to those of ordinary skill in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.

Claims

CLAIMS:
1. A method for an imaging system, comprising:
receiving an identification of a subject to be scanned;
automatically determining a personalized scan protocol for the subject;
automatically performing a scan of the subject according to the personalized scan protocol to acquire imaging data; and
displaying an image and decision support, the image and the decision support automatically generated from the imaging data.
2. The method of claim 1, wherein automatically determining a personalized scan protocol for the subject comprises:
retrieving an electronic medical record (EMR) for the subject;
determining a primary task for the scan based on the EMR; and
selecting a scan protocol according to the primary task.
3. The method of claim 2, wherein automatically determining a personalized scan protocol for the subject further comprises:
determining, with a camera, a scan range for the subject; and
adjusting the selected scan protocol with the scan range to create the personalized scan protocol.
4. The method of claim 3, further comprising automatically positioning the subject within the imaging system based on live images captured by the camera.
5. The method of claim 3, further comprising transmitting the imaging data to an edge computing system (ECS) communicatively coupled to and positioned external to the imaging system, and receiving, from the ECS, the decision support generated by the ECS from the imaging data.
6. The method of claim 5, further comprising transmitting a secondary task associated with the primary task to the ECS, wherein the ECS generates the decision support according to the secondary task.
7. The method of claim 5, wherein the ECS generates the decision support by inputting the imaging data into a deep learning model.
8. The method of claim 7, wherein the decision support includes a clinical diagnosis.
9. The method of claim 3, wherein receiving the identification of the subject to be scanned comprises processing live images of the subject captured by the camera with facial recognition techniques to automatically identify the subject.
10. The method of claim 1, further comprising reconstructing the image from the imaging data.
11. A system, comprising:
an imaging system including at least a scanner and a processor, the processor configured to reconstruct an image from data acquired during a scan of a subject via the scanner; and
a computing device communicatively coupled to and positioned external to the imaging system, the computing device configured to generate a decision support calculation based on the data;
wherein the processor is configured with executable instructions that when executed cause the processor to:
retrieve, from a database via a network, an electronic medical record (EMR) for the subject;
determine a primary task based on the EMR;
determine a scan protocol based on the primary task;
position the subject within the imaging system;
determine a scan range for the subject;
adjust the scan protocol with the scan range;
perform the scan to acquire the data; and
output the image reconstructed from the data and the decision support calculation generated based on the data.
12. The system of claim 11, wherein the processor automatically executes the executable instructions responsive to receiving an indication to operate in an express mode.
13. The system of claim 11, wherein the imaging system further includes a display device, wherein the processor is configured to display, via the display device, the image and the decision support calculation.
14. The system of claim 11, wherein the imaging system further comprises a camera configured to capture live images of the subject within the imaging system.
15. The system of claim 14, wherein the subject is automatically positioned within the imaging system based on the live images captured by the camera.
16. The system of claim 14, wherein the scan range is determined from the live images captured by the camera.
17. The system of claim 14, wherein the subject is automatically identified based on the live images captured by the camera.
18. An imaging system, comprising:
an X-ray source that emits a beam of X-rays towards a subject to be imaged;
a detector that receives the X-rays attenuated by the subject;
a data acquisition system (DAS) operably connected to the detector; and
a computing device operably connected to the DAS and configured with executable instructions in non-transitory memory that when executed cause the computing device to:
retrieve, from a database via a network, an electronic medical record (EMR) for the subject;
automatically determine a scan protocol based on the EMR;
automatically position a region of interest (ROI) of the subject between the X-ray source and the detector;
determine a scan range for the ROI;
adjust the scan protocol with the scan range;
control the X-ray source to perform the scan;
receive, via the DAS, projection data acquired during the scan; and
output an image reconstructed from the projection data and a decision support calculation generated from the projection data.
19. The imaging system of claim 18, wherein the computing device is further configured with executable instructions in non-transitory memory that when executed cause the computing device to transmit the projection data to an edge computing system (ECS) communicatively coupled to and positioned external to the imaging system, and receive, from the ECS, the decision support calculation generated by the ECS with a deep learning model from the projection data.
20. The imaging system of claim 18, further comprising a table motor controller configured to adjust a position of a table upon which the subject is placed, wherein automatically positioning the ROI of the subject between the X-ray source and the detector comprises commanding the table motor controller to adjust the position of the table such that the ROI is positioned between the X-ray source and the detector.
PCT/US2019/027376 2018-04-13 2019-04-12 Systems and methods for an imaging system express mode WO2019200351A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201980023558.2A CN112004471A (en) 2018-04-13 2019-04-12 System and method for imaging system shortcut mode

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862657632P 2018-04-13 2018-04-13
US62/657,632 2018-04-13

Publications (1)

Publication Number Publication Date
WO2019200351A1 true WO2019200351A1 (en) 2019-10-17

Family

ID=66323981

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/027376 WO2019200351A1 (en) 2018-04-13 2019-04-12 Systems and methods for an imaging system express mode

Country Status (2)

Country Link
CN (1) CN112004471A (en)
WO (1) WO2019200351A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4068300A1 (en) * 2021-04-01 2022-10-05 Siemens Healthcare GmbH Method, medical imaging device and control unit for performing a medical workflow
EP4066155A4 (en) * 2019-12-31 2023-01-11 Shanghai United Imaging Healthcare Co., Ltd. Imaging systems and methods
EP4066260A4 (en) * 2019-11-29 2024-03-13 Ge Prec Healthcare Llc Automated protocoling in medical imaging systems

Citations (4)

Publication number Priority date Publication date Assignee Title
US20010043729A1 (en) * 2000-02-04 2001-11-22 Arch Development Corporation Method, system and computer readable medium for an intelligent search workstation for computer assisted interpretation of medical images
US20050121505A1 (en) * 2003-12-09 2005-06-09 Metz Stephen W. Patient-centric data acquisition protocol selection and identification tags therefor
US20050267348A1 (en) * 2004-06-01 2005-12-01 Wollenweber Scott D Methods and apparatus for automatic protocol selection
US20170311921A1 (en) * 2016-04-29 2017-11-02 Siemens Healthcare Gmbh Defining scanning parameters of a ct scan using external image capture

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN102576387B (en) * 2009-10-22 2016-08-03 皇家飞利浦电子股份有限公司 Sweep parameter strategy
US10159454B2 (en) * 2011-09-09 2018-12-25 Siemens Healthcare Gmbh Contrast agent perfusion adaptive imaging system
CN104224218B (en) * 2014-08-06 2016-10-05 沈阳东软医疗系统有限公司 A kind of method and device of the reference information that scan protocols is provided
US10383590B2 (en) * 2015-09-28 2019-08-20 General Electric Company Methods and systems for adaptive scan control
CN107252353B (en) * 2017-06-01 2021-01-26 上海联影医疗科技股份有限公司 Control method of medical imaging equipment and medical imaging equipment


Also Published As

Publication number Publication date
CN112004471A (en) 2020-11-27

Similar Documents

Publication Publication Date Title
US20220117570A1 (en) Systems and methods for contrast flow modeling with deep learning
US10304198B2 (en) Automatic medical image retrieval
KR102522539B1 (en) Medical image displaying apparatus and medical image processing method thereof
US20180182130A1 (en) Diagnostic imaging method and apparatus, and recording medium thereof
WO2019200349A1 (en) Systems and methods for training a deep learning model for an imaging system
US10755407B2 (en) Systems and methods for capturing deep learning training data from imaging systems
US10679346B2 (en) Systems and methods for capturing deep learning training data from imaging systems
JP2005161054A (en) Method and system for computer-supported target
US11393579B2 (en) Methods and systems for workflow management
JP2004105728A (en) Computer aided acquisition of medical image
US10143433B2 (en) Computed tomography apparatus and method of reconstructing a computed tomography image by the computed tomography apparatus
US20190231224A1 (en) Systems and methods for profile-based scanning
CN111374690A (en) Medical imaging method and system
US20230071965A1 (en) Methods and systems for automated scan protocol recommendation
WO2019200351A1 (en) Systems and methods for an imaging system express mode
US20220399107A1 (en) Automated protocoling in medical imaging systems
WO2019200346A1 (en) Systems and methods for synchronization of imaging systems and an edge computing system
GB2573193A (en) System and method for using imaging quality metric ranking
WO2019200353A1 (en) Systems and methods for deploying deep learning applications to an imaging system
US20230154594A1 (en) Systems and methods for protocol recommendations in medical imaging
US11955228B2 (en) Methods and system for simulated radiology studies based on prior imaging data
US11941022B2 (en) Systems and methods for database synchronization
US20230048231A1 (en) Method and systems for aliasing artifact reduction in computed tomography imaging
US20240029415A1 (en) Simulating pathology images based on anatomy data
WO2021252751A1 (en) Systems and methods for generating synthetic baseline x-ray images from computed tomography for longitudinal analysis

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19720302

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19720302

Country of ref document: EP

Kind code of ref document: A1