US20190180861A1 - Methods and systems for displaying an image - Google Patents
- Publication number
- US20190180861A1 (application US 15/840,768)
- Authority: United States (US)
- Prior art keywords
- images
- medical
- image
- classification
- electronic processor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06V10/82 — Image or video recognition using neural networks
- G06F17/212
- G06F18/24133 — Classification based on distances to prototypes
- G06F18/2431 — Classification involving multiple classes
- G06F40/106 — Display of layout of documents; previewing
- G06K9/628
- G06V10/764 — Image or video recognition using classification, e.g. of video objects
- G06V20/30 — Scenes or scene-specific elements in albums, collections or shared content
- G16H10/60 — ICT for patient-specific data, e.g. electronic patient records
- G16H30/20 — ICT for handling medical images, e.g. DICOM, HL7 or PACS
- G16H40/63 — ICT for local operation of medical equipment or devices
- G16H40/67 — ICT for remote operation of medical equipment or devices
- G16H50/20 — ICT for computer-aided diagnosis, e.g. based on medical expert systems
- G06V2201/03 — Recognition of patterns in medical or anatomical images
Definitions
- Embodiments described herein relate to methods and systems for displaying an image, and more particularly, to displaying an image based on a classification of the image using image analytics.
- PACS: picture archiving and communication system
- a hanging protocol is a rule-based protocol for the automated presentation of medical images.
- the hanging protocol is generally based on a reviewer's preferences, such as a reviewer's preferences per modality, exam type, exam group (for example, a collection of exam types), and the like. For example, a hanging protocol determines which and how many comparison exams are selected, a number of medical images presented from each exam, how the medical images are arranged, and the like.
- PACS may allow a reviewer to “toggle” or “shuffle” between medical images to improve the speed and accuracy of image interpretation.
- PACS typically depend on information (for example, metadata) in associated meta-files, such as digital imaging and communications in medicine (“DICOM”) information, to characterize images and invoke a set of rules based on image characteristics.
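As a sketch of this metadata-driven approach (the rule keys and rule names below are invented for illustration; `Modality` and `BodyPartExamined` are standard DICOM attribute names), a PACS rule set keyed on metadata might look like the following. It also illustrates why metadata alone can be insufficient: a scanned form stored as a DICOM image may carry the same tags as a real radiograph.

```python
# Hypothetical sketch: choosing a display rule from image metadata alone.
# The rule names are illustrative, not an actual PACS configuration.
RULES = {
    ("CR", "CHEST"): "two_up_pa_lateral",  # chest radiographs: side-by-side layout
    ("CT", "HEAD"): "stack_axial",         # CT head: stacked axial slices
}

def pick_rule(metadata):
    """Return a display rule for the image, or None if metadata is missing or insufficient."""
    key = (metadata.get("Modality"), metadata.get("BodyPartExamined"))
    return RULES.get(key)

# A scanned billing form digitized as a chest "CR" image would match the same
# rule as a genuine radiograph, so metadata alone cannot tell the two apart.
print(pick_rule({"Modality": "CR", "BodyPartExamined": "CHEST"}))  # two_up_pa_lateral
print(pick_rule({}))  # None
```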
- meta-files may not contain accurate or sufficient information to differentiate between various classifications of medical images.
- An exam may refer to, for example, a collection of medical images.
- An anatomical image may refer to a medical image that represents anatomy.
- a textual image may refer to a medical image that shows text or other data.
- a textual image may include, for example, scanned or pictured forms, such as a radiation dose report, or even native DICOM images.
- an exam might include a posterior anterior (“PA”) chest radiograph, a lateral chest radiograph, and other formatted images of various text files, such as referral forms, technologist worksheets, medical record request forms, billing forms, screening forms, and the like.
- the inclusion of textual images within the exam interferes with the application of hanging protocols, and ultimately, with the display of the exam to a reviewer.
- embodiments described herein provide methods and systems for displaying a medical image based on a classification of the medical image.
- the methods and systems process an exam to determine a classification of each medical image included in the exam.
- the methods and systems display the medical images to a user based on the classifications associated with each medical image.
- the methods and systems may display the medical images based on a set of rules (for example, a hanging protocol).
- one embodiment provides a system for displaying a medical image captured as part of a medical imaging procedure.
- the system includes an electronic processor configured to receive a plurality of images from an image database, the plurality of images included in a medical exam associated with the procedure.
- the electronic processor is also configured to automatically determine a classification for each of the plurality of images using a classification model analyzing content of each of the plurality of images, the classification including one of a textual image and an anatomical image.
- the electronic processor is also configured to determine a set of rules for displaying the plurality of images and display a subset of the plurality of images based on the classification determined for each of the plurality of images and the set of rules.
- Another embodiment provides a method for displaying a medical image captured as part of a medical imaging procedure.
- the method includes receiving, with an electronic processor, a plurality of medical images from a medical image database.
- the method also includes determining, with the electronic processor, a classification of each of the plurality of medical images using a classification model, wherein the classification of each of the plurality of medical images includes one of a textual image and an anatomical image.
- the method also includes determining, with the electronic processor, a set of rules for the plurality of medical images.
- the method also includes displaying, with the electronic processor, a first display of at least a subset of the plurality of medical images to a user via a display device based on the classification of each of the plurality of medical images and the set of rules.
- the method also includes receiving, with the electronic processor, a user interaction with at least one of the plurality of medical images and displaying, with the electronic processor, a second display of the medical images based on the user interaction.
- the set of functions includes receiving a first medical image from a medical image database in response to a request from a user for images included in a medical exam.
- the set of functions also includes automatically determining a first classification of the first medical image using a classification model, the classification model classifying an image as a textual image or an anatomical image.
- the set of functions also includes automatically determining whether to display the first medical image to a user based on the first classification.
- the set of functions also includes receiving a second medical image from the medical image database in response to the request and automatically determining a second classification of the second medical image using the classification model.
- the set of functions also includes automatically determining whether to display the second medical image to the user based on the second classification.
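The per-image flow described in this set of functions can be sketched as follows (all function names and the toy brightness-based classifier are hypothetical; the embodiments use a learned classification model rather than a fixed threshold):

```python
def should_display(classification, rules):
    """Decide whether an image is shown, based on its classification and the rule set."""
    return classification in rules["show"]

def process_exam(images, classify, rules):
    """Classify each image in the exam and keep only those the rules allow."""
    shown = []
    for image in images:
        label = classify(image)  # e.g. "anatomical" or "textual"
        if should_display(label, rules):
            shown.append(image)
    return shown

# Toy stand-in classifier: treat very bright images (scanned forms) as textual.
classify = lambda img: "textual" if img["mean_brightness"] > 0.8 else "anatomical"
rules = {"show": {"anatomical"}}  # hanging protocol: anatomical images only
exam = [{"id": 1, "mean_brightness": 0.9},   # e.g. a scanned billing form
        {"id": 2, "mean_brightness": 0.3}]   # e.g. a chest radiograph
print([img["id"] for img in process_exam(exam, classify, rules)])  # [2]
```

The second medical image is handled the same way as the first: each image is classified independently, and the display decision follows from its own classification.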
- FIG. 1 illustrates a set of display devices displaying a medical exam according to some embodiments.
- FIG. 2 schematically illustrates a system for displaying a medical image according to some embodiments.
- FIG. 3 is a flowchart illustrating a method for displaying a medical image using the system of FIG. 2 according to some embodiments.
- embodiments described herein may include hardware, software, and electronic components or modules that, for purposes of discussion, may be illustrated and described as if the majority of the components were implemented solely in hardware.
- electronic-based aspects of the embodiments described herein may be implemented in software (for example, stored on non-transitory computer-readable medium) executable by one or more processors.
- a mobile device may include one or more electronic processors, one or more memory modules including non-transitory computer-readable medium, one or more input/output interfaces, and various connections (for example, a system bus) connecting the components.
- An anatomical image may refer to a medical image that represents anatomy, such as a lateral chest radiograph.
- a textual image may refer to a medical image that shows text or other data.
- a textual image may include, for example, scanned or pictured forms, such as a radiation dose report, or even native DICOM images.
- an exam might include a posterior anterior (“PA”) chest radiograph, a lateral chest radiograph, and other formatted images of various text files, such as referral forms, technologist worksheets, medical record request forms, billing forms, screening forms, and the like.
- the variety of image classifications or types included in an exam interferes with the application of hanging protocols, and ultimately, with the display of the exam to a reviewer.
- FIG. 1 illustrates a first display device 10 and a second display device 15 displaying a medical exam.
- the medical exam may include a plurality of medical images.
- the medical images included in the medical exam may include a variety of image classifications or types, including anatomical images and textual images.
- the first display device 10 displays a textual image, such as a scanned document.
- the second display device 15 displays a plurality of anatomical images.
- as a reviewer toggles or shuffles through the medical images included in the exam, the reviewer may be interrupted by one or more textual images that are irrelevant to the reviewer's objective.
- for example, a reviewing physician may shuffle through medical images to determine a diagnosis of a patient and be interrupted by a billing form (for example, a textual image) that is irrelevant to the diagnosis of the patient.
- the difference in characteristics between textual images and medical images may cause eye strain and fatigue, such as when the reviewer shifts his or her attention from a bright textual image to a darker anatomical image. Accordingly, to solve these and other problems, embodiments described herein determine a classification for each medical image included in an exam and display each medical image based on the classification associated with each medical image.
- FIG. 2 schematically illustrates a system 100 for displaying an image (for example, a medical image) according to some embodiments.
- the system 100 includes a server 105 , a medical image database 115 , and a user device 117 .
- the system 100 includes fewer, additional, or different components than illustrated in FIG. 2 .
- the system 100 may include multiple servers 105 , medical image databases 115 , user devices 117 , or a combination thereof.
- the server 105 , the medical image database 115 , and the user device 117 communicate over one or more wired or wireless communication networks 120 .
- Portions of the communication network 120 may be implemented using a wide area network, such as the Internet, a local area network, such as a Bluetooth™ network or Wi-Fi, and combinations or derivatives thereof.
- components of the system 100 may communicate directly rather than through the communication network 120.
- the components of the system 100 communicate through one or more intermediary devices not illustrated in FIG. 2 .
- the server 105 is a computing device, which may serve as a gateway for the medical image database 115.
- the server 105 may be a PACS server.
- the server 105 may be a server that communicates with a PACS server to access the medical image database 115 .
- the server 105 includes an electronic processor 125 , a memory 130 , and a communication interface 135 .
- the electronic processor 125 , the memory 130 , and the communication interface 135 communicate wirelessly, over one or more communication lines or buses, or a combination thereof.
- the server 105 may include additional components than those illustrated in FIG. 2 in various configurations.
- the server 105 may also perform additional functionality other than the functionality described herein.
- the functionality described herein as being performed by the server 105 may be distributed among multiple devices, such as multiple servers included in a cloud service environment.
- the user device 117 may be configured to perform all or a portion of the functionality described herein as being performed by the server 105 .
- the electronic processor 125 includes a microprocessor, an application-specific integrated circuit (ASIC), or another suitable electronic device for processing data.
- the memory 130 includes a non-transitory computer-readable medium, such as read-only memory (“ROM”), random access memory (“RAM”) (for example, dynamic RAM (“DRAM”), synchronous DRAM (“SDRAM”), and the like), electrically erasable programmable read-only memory (“EEPROM”), flash memory, a hard disk, a secure digital (“SD”) card, another suitable memory device, or a combination thereof.
- the electronic processor 125 is configured to access and execute computer-readable instructions (“software”) stored in the memory 130 .
- the software may include firmware, one or more applications, program data, filters, rules, one or more program modules, and other executable instructions.
- the software may include instructions and associated data for performing a set of functions, including the methods described herein.
- the memory 130 may store a learning engine 145 and a classification model database 150 .
- the learning engine 145 develops a classification model using one or more machine learning functions.
- Machine learning functions are generally functions that allow a computer application to learn without being explicitly programmed.
- a computer application performing machine learning functions (sometimes referred to as a learning engine) is configured to develop an algorithm based on training data.
- the training data includes example inputs and corresponding desired (for example, actual) outputs, and the learning engine progressively develops a model (for example, a classification model) that maps inputs to the outputs included in the training data.
- Machine learning may be performed using various types of methods and mechanisms including but not limited to decision tree learning, association rule learning, artificial neural networks, inductive logic programming, support vector machines, clustering, Bayesian networks, reinforcement learning, representation learning, similarity and metric learning, sparse dictionary learning, and genetic algorithms. Using all of these approaches, a computer program may ingest, parse, and understand data and progressively refine models for data analytics, including image analytics.
- the learning engine 145 may perform machine learning using training data to develop a classification model that maps medical images 165 to one or more classifications.
- the training data may include, for example, medical images and their associated classifications.
- the learning engine 145 may identify one or more unique characteristics of a medical image (for example, objects within the medical image, metadata associated with the medical image, and the like) and develop a classification model that maps the one or more unique characteristics to a particular classification. Accordingly, when a subsequent medical image is received, the electronic processor 125 may determine a classification for that subsequent medical image using the classification model developed by the learning engine 145 .
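One minimal way to realize such a learned mapping from image characteristics to classifications is a nearest-prototype classifier (the feature choices, labels, and training values below are invented for illustration; the patent's learning engine may use any of the machine learning methods listed above):

```python
def train_prototypes(training_data):
    """Average the feature vectors per label to form one prototype per class."""
    sums, counts = {}, {}
    for features, label in training_data:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def classify(features, prototypes):
    """Assign the label of the nearest prototype (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda label: dist(features, prototypes[label]))

# Toy features: (mean brightness, edge density). Textual images (scanned
# forms) tend to be bright with dense, sharp edges from printed text.
training = [((0.95, 0.80), "textual"), ((0.90, 0.75), "textual"),
            ((0.30, 0.20), "anatomical"), ((0.40, 0.25), "anatomical")]
prototypes = train_prototypes(training)
print(classify((0.88, 0.70), prototypes))  # textual
```

A subsequently received medical image is then classified by extracting the same features and comparing them against the stored prototypes, mirroring how the electronic processor 125 applies a model developed by the learning engine 145.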
- the electronic processor 125 determines a classification for each of the medical images 165 using a classification model that analyzes content of each of the medical images 165 .
- the electronic processor 125 may use a classification model to determine what images should (and should not) be displayed to a particular user, such as based on a user's role, permissions, and the like.
- the classification model may also specify display or presentation parameters for the medical images, such as a hanging protocol, and filtering or modifications (for example, greyscale matching within a sequence of images), and the like.
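As one hypothetical example of such a modification, images in a sequence could be normalized to a shared mean brightness so the reviewer's eyes need not readjust between a bright scanned form and a dark radiograph (this simple mean-shift approach is an assumption; the patent does not specify the matching algorithm):

```python
def match_greyscale(sequence, target_mean=0.5):
    """Shift each image's pixel values so all images in the sequence share the
    same mean brightness; values are clamped to the valid [0, 1] range."""
    matched = []
    for pixels in sequence:
        mean = sum(pixels) / len(pixels)
        shift = target_mean - mean
        matched.append([min(1.0, max(0.0, p + shift)) for p in pixels])
    return matched

bright = [0.9, 0.8, 1.0, 0.9]  # e.g. a scanned form
dark = [0.1, 0.2, 0.0, 0.1]    # e.g. a radiograph
for img in match_greyscale([bright, dark]):
    print(round(sum(img) / len(img), 2))  # both means come out near 0.5
```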
- Classification models generated by the learning engine 145 may be stored in the classification model database 150 .
- the classification model database 150 is included in the memory 130 of the server 105 . It should be understood that, in some embodiments, the classification model database 150 is included in a separate device accessible by the server 105 (included in the server 105 or external to the server 105 ).
- the communication interface 135 allows the server 105 to communicate with devices external to the server 105 .
- the server 105 may communicate with the medical image database 115 through the communication interface 135 .
- the communication interface 135 may include a port for receiving a wired connection to an external device (for example, a universal serial bus (“USB”) cable and the like), a transceiver for establishing a wireless connection to an external device (for example, over one or more communication networks 120 , such as the Internet, local area network (“LAN”), a wide area network (“WAN”), and the like), or a combination thereof.
- USB universal serial bus
- the user device 117 is also a computing device and may include a desktop computer, a terminal, a workstation, a laptop computer, a tablet computer, a smart watch or other wearable, a smart television or whiteboard, or the like.
- the user device 117 may include similar components as the server 105 (an electronic processor, a memory, and a communication interface).
- the user device 117 may also include a human-machine interface 140 for interacting with a user.
- the human-machine interface 140 may include one or more input devices, one or more output devices, or a combination thereof. Accordingly, in some embodiments, the human-machine interface 140 allows a user to interact with (for example, provide input to and receive output from) the user device 117 .
- the human-machine interface 140 may include a keyboard, a cursor-control device (for example, a mouse), a touch screen, a scroll ball, a mechanical button, a display device (for example, a liquid crystal display (“LCD”)), a printer, a speaker, a microphone, or a combination thereof.
- the human-machine interface 140 includes a display device 160 .
- the display device 160 may be included in the same housing as the user device 117 or may communicate with the user device 117 over one or more wired or wireless connections.
- the display device 160 is a touchscreen included in a laptop computer or a tablet computer.
- the display device 160 is a monitor, a television, or a projector coupled to a terminal, desktop computer, or the like via one or more cables.
- the medical image database 115 stores a plurality of medical images 165 .
- the medical image database 115 is combined with the server 105 .
- the medical images 165 may be stored within a plurality of databases, such as within a cloud service.
- the medical image database 115 may include components similar to the server 105 , such as an electronic processor, a memory, a communication interface and the like.
- the medical image database 115 may include a communication interface configured to communicate (for example, receive data and transmit data) over the communication network 120 .
- the medical images 165 stored in the medical image database 115 may include a variety of classifications or types.
- the medical images 165 may include anatomical images, such as a lateral chest radiograph, a PA chest radiograph, and the like.
- the medical images 165 may include textual images, such as a referral form, a technologist worksheet, a medical record request form, a billing form, a screening form, another type of administrative form, and the like.
- a medical professional may capture a picture of a billing form by scanning the billing form. The scanned version of the billing form may be stored in the medical image database 115 as a medical image 165 .
- a memory of the medical image database 115 stores the medical images 165 and associated data (for example, reports, metadata, and the like).
- the medical image database 115 may include a picture archiving and communication system (“PACS”), a radiology information system (“RIS”), an electronic medical record (“EMR”), a hospital information system (“HIS”), an image study ordering system, and the like.
- a user may use the user device 117 to access and view the medical images 165 and interact with the medical images 165 .
- the user may access the medical images 165 from the medical image database 115 (through a browser application or a dedicated application stored on the user device 117 that communicates with the server 105 ) and view the medical images 165 on the display device 160 associated with the user device 117 .
- the variety of image classifications included in an exam may interfere with the application of hanging protocols (for example, a set of rules), and ultimately, with the display of the exam to a reviewer.
- the system 100 is configured to automatically determine a classification of one or more medical images 165 (for example, a first medical image, a second medical image, and the like). Based on the classification of the medical images 165, the methods and systems described herein display the medical images 165 to a user (for example, a reviewer).
- FIG. 3 is a flowchart illustrating a method 200 for displaying an image (for example, a medical image 165 ) according to some embodiments.
- the method 200 is described here as being performed by the server 105 (the electronic processor 125 executing instructions). However, as noted above, the functionality performed by the server 105 (or a portion thereof) may be performed by other devices, including, for example, the user device 117 (via an electronic processor executing instructions).
- the method 200 includes receiving, with the electronic processor 125 , one or more medical images 165 from the medical image database 115 (at block 205 ). In some embodiments, the electronic processor 125 receives the medical images 165 via the communication interface 135 from the medical image database 115 over the communication network 120 .
- one or more medical images 165 may be stored at additional or different databases, servers, devices, or a combination thereof. Accordingly, in some embodiments, the electronic processor 125 receives the medical images 165 from additional or different databases, servers, devices, or a combination thereof.
- the image received by the server 105 may be based on a request for a particular medical exam received from the user device 117 .
- the request for the medical exam also includes information regarding a user making the request.
- the classification process performed by the server 105 as described above may be performed in response to other triggering events, including, for example, the generation and storage of a new medical exam in the medical image database 115 .
- after receiving the medical images 165 from the medical image database 115 (at block 205), the electronic processor 125 automatically determines a classification of each of the medical images 165 (at block 210).
- a classification of a medical image 165 may include, for example, an anatomical image or a textual image. Accordingly, the electronic processor 125 may determine that a first medical image is an anatomical image while a second medical image is a textual image and vice versa. In some embodiments, the electronic processor 125 is further configured to determine a subcategory for a medical image.
- the electronic processor 125 when the electronic processor 125 determines that the medical image 165 is a textual image, the electronic processor 125 further determines a subcategory for the medical image 165 .
- the subcategory may include, for example, a referral form, a technologist worksheet, a medical record request form, a billing form, a screening form, another administrative or textual form, or a combination thereof.
- the electronic processor 125 displays the medical images 165 as described below based on the classification and the subcategory of the medical images 165 determined to be textual images.
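A crude sketch of subcategory assignment is keyword matching over text extracted from the image, for example by OCR (the keyword lists and fallback label below are invented; the embodiments determine subcategories with a classification model rather than fixed keywords):

```python
# Hypothetical keyword lists mapping extracted text to a textual-image subcategory.
SUBCATEGORIES = {
    "billing form": ["invoice", "amount due", "billing"],
    "referral form": ["referred by", "referral"],
    "screening form": ["screening", "questionnaire"],
}

def subcategorize(extracted_text):
    """Pick the first subcategory whose keywords appear in the extracted text."""
    text = extracted_text.lower()
    for subcategory, keywords in SUBCATEGORIES.items():
        if any(kw in text for kw in keywords):
            return subcategory
    return "other textual form"

print(subcategorize("Patient referred by Dr. Smith for PA chest radiograph"))
```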
- the electronic processor 125 determines the classification (and optionally subcategory) of each of the medical images 165 using one or more classification models stored in the classification model database 150 .
- the electronic processor 125 may access the classification models stored in the classification model database 150 to determine a classification of each of the medical images 165 .
- the server 105 identifies one or more unique characteristics of the medical image 165 and uses the one or more unique characteristics of the medical image 165 to select a classification model from the classification models stored in the classification model database 150 .
- the learning engine 145 may develop a plurality of classification models, wherein each classification model is associated with similar unique characteristics of medical images 165 (for example, an object within a medical image 165 , metadata associated with the medical images 165 , another unique characteristic of the medical image 165 , or a combination thereof). Accordingly, the server 105 may use the identified unique characteristics of the medical image 165 to select a classification model associated with medical images most similar to the identified unique characteristics.
- each classification model may take the medical image 165 (and, optionally, additional data) and output one or more classifications for each of the medical images 165 .
- the electronic processor 125 applies the selected classification model to a received medical image 165 to determine a classification of the received medical image 165 , such as an anatomical image or a textual image.
- the electronic processor 125 may also automatically determine a set of rules for the medical images 165 (at block 215 ).
- the set of rules are rules related to the automated presentation of the medical images 165 based on a user's preferences.
- the set of rules may be a “hanging protocol” feature of a PACS.
- the set of rules are configurable.
- the set of rules may define, for example, a display preference, a software application preference, a display device preference, a viewing environment preference, an image arrangement preference, an image comparison preference, another preference, or a combination thereof.
- the electronic processor 125 determines the set of rules for the medical images 165 based on, for example, a user identification, a user role, a location of service, a patient demographic, a modality, an exam type, an exam group, another parameter, or a combination thereof.
- the set of rules may be stored in the memory 130 of the server 105 , an external device, server, or database, and the like.
- the electronic processor 125 receives (via the user device 117 ) a user selection of a set of rules for the medical images 165 (for example, with a request for a particular medical exam or separate from a request for an exam, such as part of configuring the system 100 ).
- the electronic processor 125 may use machine learning functions and techniques to develop the set of rules for the medical images 165 , as similarly described above with respect to developing and applying the one or more classification models.
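A minimal sketch of determining the set of rules from parameters such as user role and exam type might look like the following; the rule keys and values are invented for illustration and are far simpler than a real hanging protocol.

```python
# Hypothetical rule lookup keyed on (user_role, exam_type). All keys and
# values are assumptions for illustration.

RULE_SETS = {
    ("technologist", "mammo_screening"): {
        "show_textual": True,
        "require_acknowledgment": ["screening form"],
    },
    ("physician", "chest_xray"): {
        "show_textual": False,
        "textual_display": 1,    # textual images routed to display device 1
        "anatomical_display": 2, # anatomical images routed to display device 2
    },
}
DEFAULT_RULES = {"show_textual": True}

def rules_for(user_role, exam_type):
    """Return the configured rule set, or a default when none matches."""
    return RULE_SETS.get((user_role, exam_type), DEFAULT_RULES)

print(rules_for("physician", "chest_xray")["show_textual"])  # False
```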
- the electronic processor 125 displays, via the display device 160 of the user device 117 , a first display of the one or more medical images 165 to a user based on the classification of each of the medical images 165 (at block 220 ).
- the first display of the medical images 165 is also based on the set of rules described above.
- how the electronic processor 125 displays the medical images 165 may be dependent on the classification of each of the medical images 165 , the set of rules (for example, a hanging protocol) for the medical images 165 , or a combination thereof.
- the electronic processor 125 may display a screening form to a technologist requiring acknowledgment from the technologist (based on the set of rules for a user having the role of a technologist).
- the electronic processor 125 may display a referral form to a physician (based on the set of rules for a user having the role of a physician).
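The two role-based examples above can be sketched as a small dispatch table; the role names and subcategories follow the text, while the dispatch structure itself is an illustrative assumption.

```python
# Hypothetical role-based routing of textual subcategories: a technologist
# sees the screening form (with a required acknowledgment), a physician
# sees the referral form.

ROLE_FORMS = {
    "technologist": {"screening form": {"require_acknowledgment": True}},
    "physician":    {"referral form":  {"require_acknowledgment": False}},
}

def forms_to_display(role, textual_images):
    """Return the textual images relevant to this role, paired with any
    extra handling (e.g., a required acknowledgment)."""
    relevant = ROLE_FORMS.get(role, {})
    return [(img["id"], relevant[img["subcategory"]])
            for img in textual_images if img["subcategory"] in relevant]

forms = [
    {"id": "f-1", "subcategory": "screening form"},
    {"id": "f-2", "subcategory": "billing form"},
]
print(forms_to_display("technologist", forms))
# [('f-1', {'require_acknowledgment': True})]
```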
- the electronic processor 125 may display medical images 165 classified as textual images on a first display device (for example, the first display device 10 ) and medical images 165 classified as anatomical images on a second display device (for example, the second display device 15 ), as illustrated in FIG. 1 .
- the electronic processor 125 displays the medical images 165 based on the classification of each medical image 165 as well as the set of rules (for example, textual images displayed on a first display device and anatomical images displayed on a second display device).
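A sketch of this classification-plus-rules routing for the first display (at block 220), under assumed field names and device numbering:

```python
# Hypothetical routing of classified images to display devices per the set
# of rules: textual images to one device, anatomical images to another.

def build_first_display(images, rules):
    """Group classified images by target display device."""
    layout = {rules["textual_display"]: [], rules["anatomical_display"]: []}
    for image in images:
        device = (rules["textual_display"]
                  if image["classification"] == "textual"
                  else rules["anatomical_display"])
        layout[device].append(image["id"])
    return layout

exam = [
    {"id": "img-1", "classification": "anatomical"},
    {"id": "img-2", "classification": "textual"},
    {"id": "img-3", "classification": "anatomical"},
]
rules = {"textual_display": 1, "anatomical_display": 2}
print(build_first_display(exam, rules))
# {1: ['img-2'], 2: ['img-1', 'img-3']}
```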
- the electronic processor 125 may optionally receive a user interaction with the first display of the medical images 165 (at block 225 ).
- the user interaction may be with one or more of the medical images 165 included in the first display.
- the electronic processor 125 may receive the user interaction via the human-machine interface 140 of the user device 117 .
- the user interaction is a modification to a classification of one or more of the medical images 165 , a modification to the set of rules for the medical images 165 , or a combination thereof.
- a user of the user device 117 viewing the first display of medical images 165 may identify an error in the classification of one of the medical images 165 .
- the user may interact with that medical image 165 , the first display, or a combination thereof to correct the classification of that medical image 165 .
- the user of the user device 117 viewing the first display of the medical images 165 may desire to alter a display preference (for example, a rule included in the set of rules) for viewing the first display.
- the user may interact with the medical image 165 , the first display, or a combination thereof to modify the display preference (for example, a rule included in the set of rules).
- the electronic processor 125 may optionally use the user interaction as feedback (for example, as training data for the learning engine 145 ). For example, the electronic processor 125 may provide the user interaction to the learning engine 145 to update or tune a previously generated classification model, develop a new classification model, or a combination thereof (for example, a modified classification model).
- the electronic processor 125 may use the user interaction to update one or more rules included in the set of rules. Accordingly, the user interaction may provide a closed feedback loop for the system 100 .
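The closed feedback loop can be sketched as follows: a user's correction takes effect on the image immediately and is also queued as a training example for the learning engine 145. All field names and structures are illustrative assumptions.

```python
# Hypothetical feedback loop: corrections are applied and simultaneously
# collected as (features, corrected label) training examples.

feedback_queue = []  # pending training examples for the learning engine

def apply_correction(image, corrected_classification):
    """Apply a user's classification fix and record it as training data."""
    feedback_queue.append((image["features"], corrected_classification))
    image["classification"] = corrected_classification
    return image

image = {"id": "img-7", "features": [0.9, 0.8], "classification": "anatomical"}
apply_correction(image, "textual")  # user spotted a misclassified scanned form
print(image["classification"])      # textual
```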
- the electronic processor 125 displays a second display of the medical images 165 (at block 230 ).
- the second display of the medical images 165 is based on the user interaction received via the human-machine interface 140 . Accordingly, the second display of the medical images 165 reflects any modifications or updates made by the electronic processor 125 in response to receiving the user interaction.
- the second display of the medical images 165 may be different from the first display. For example, the second display may be an updated version of the first display. Alternatively, in some embodiments, the second display of the medical images 165 is the same as the first display of the medical images 165 .
- classifications may be stored with an image so that the image does not subsequently need to be classified.
- the classification can be stored as part of metadata for the image. This metadata may be editable by a user, which allows a user to modify the automatic classification performed by the system. Also, in some embodiments, when metadata for an image is updated, the system 100 as described herein may be configured to re-assess any previous classifications to dynamically adjust to such metadata changes.
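A sketch of this metadata caching and re-assessment behavior, with assumed field names and a stand-in classifier:

```python
# Hypothetical caching of a classification in image metadata, invalidated
# and recomputed whenever the metadata is edited.

def get_classification(image, classify):
    """Return the cached classification, computing and storing it once."""
    if "classification" not in image["metadata"]:
        image["metadata"]["classification"] = classify(image)
    return image["metadata"]["classification"]

def update_metadata(image, key, value, classify):
    """Edit metadata, drop the stale cached classification, and re-assess."""
    image["metadata"][key] = value
    image["metadata"].pop("classification", None)
    return get_classification(image, classify)

# Stand-in classifier: treat secondary-capture ("SC") images as textual.
classify = lambda img: ("textual"
                        if img["metadata"].get("modality") == "SC"
                        else "anatomical")

image = {"metadata": {"modality": "CR"}}
print(get_classification(image, classify))                 # anatomical
print(update_metadata(image, "modality", "SC", classify))  # textual
```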
- embodiments described herein apply one or more classification models (which may be generated using machine learning) to automatically distinguish anatomical images from other types of images, such as textual images, so that the anatomical images can be displayed without interference.
- One or more rules may also be applied to determine the precise display, timing, and presentation of images based on the classifications, such as to whom a particular image is displayed, when, or in what sequence.
Abstract
Description
- Embodiments described herein relate to methods and systems for displaying an image, and more particularly, to displaying an image based on a classification of the image using image analytics.
- Physicians and other medical professionals typically use a commercial picture archive and communication system (“PACS”) when reviewing medical images (for example, medical image studies or exams). PACS provide an automated presentation of medical images in accordance with a “hanging” protocol. A hanging protocol is a rule-based protocol for the automated presentation of medical images. The hanging protocol is generally based on a reviewer's preferences, such as a reviewer's preferences per modality, exam type, exam group (for example, a collection of exam types), and the like. For example, a hanging protocol determines which and how many comparison exams are selected, a number of medical images presented from each exam, how the medical images are arranged, and the like. Additionally, PACS may allow a reviewer to “toggle” or “shuffle” between medical images to improve the speed and accuracy of image interpretation.
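As a purely illustrative sketch (not from the patent), a hanging-protocol rule of the kind described above might be represented as data keyed by modality and exam type; all keys and values here are invented.

```python
# Hypothetical hanging-protocol entry: per (modality, exam type), how many
# comparison exams to pull, how many images per exam, and the arrangement.

HANGING_PROTOCOL = {
    ("CR", "chest 2-view"): {
        "comparison_exams": 1,   # most recent prior exam
        "images_per_exam": 2,    # PA and lateral views
        "arrangement": "current_left_prior_right",
    },
}

def protocol_for(modality, exam_type):
    """Return the reviewer's configured rule, or None if none exists."""
    return HANGING_PROTOCOL.get((modality, exam_type))

print(protocol_for("CR", "chest 2-view")["images_per_exam"])  # 2
```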
- Whether using image toggling, image shuffling, or another presentation arrangement, PACS typically depend on information (for example, metadata) in associated meta-files, such as digital imaging and communications in medicine (“DICOM”) information, to characterize images and invoke a set of rules based on image characteristics. Unfortunately, such meta-files may not contain accurate or sufficient information to differentiate between various classifications of medical images.
- Exams (for example, a collection of medical images) often include a variety of image types, such as anatomical images and textual images. An anatomical image may refer to a medical image that represents anatomy. A textual image may refer to a medical image that shows text or other data. A textual image may include, for example, scanned or pictured forms, such as a radiation dose report, or even native DICOM images. For example, an exam might include a posterior anterior (“PA”) chest radiograph, a lateral chest radiograph, and other formatted images of various text files, such as referral forms, technologist worksheets, medical record request forms, billing forms, screening forms, and the like. The inclusion of textual images within the exam interferes with the application of hanging protocols, and ultimately, with the display of the exam to a reviewer.
- To solve these and other problems, embodiments described herein provide methods and systems for displaying a medical image based on a classification of the medical image. The methods and systems process an exam to determine a classification of each medical image included in the exam. The methods and systems display the medical images to a user based on the classifications associated with each medical image. Alternatively or in addition, the methods and systems may display the medical images based on a set of rules (for example, a hanging protocol).
- For example, one embodiment provides a system for displaying a medical image captured as part of a medical imaging procedure. The system includes an electronic processor configured to receive a plurality of images from an image database, the plurality of images included in a medical exam associated with the procedure. The electronic processor is also configured to automatically determine a classification for each of the plurality of images using a classification model analyzing content of each of the plurality of images, the classification including one of a textual image and an anatomical image. The electronic processor is also configured to determine a set of rules for displaying the plurality of images and to display a subset of the plurality of images based on the classification determined for each of the plurality of images and the set of rules.
- Another embodiment provides a method for displaying a medical image captured as part of a medical imaging procedure. The method includes receiving, with an electronic processor, a plurality of medical images from a medical image database. The method also includes determining, with the electronic processor, a classification of each of the plurality of medical images using a classification model, wherein the classification of each of the plurality of medical images includes one of a textual image and an anatomical image. The method also includes determining, with the electronic processor, a set of rules for the plurality of medical images. The method also includes displaying, with the electronic processor, a first display of at least a subset of the plurality of medical images to a user via a display device based on the classification of each of the plurality of medical images and the set of rules. The method also includes receiving, with the electronic processor, a user interaction with at least one of the plurality of medical images and displaying, with the electronic processor, a second display of the medical images based on the user interaction.
- Another embodiment provides a non-transitory, computer-readable medium storing instructions that, when executed by an electronic processor, perform a set of functions. The set of functions includes receiving a first medical image from a medical image database in response to a request from a user for images included in a medical exam. The set of functions also includes automatically determining a first classification of the first medical image using a classification model, the classification model classifying an image as a textual image or an anatomical image. The set of functions also includes automatically determining whether to display the first medical image to a user based on the first classification. The set of functions also includes receiving a second medical image from the medical image database in response to the request and automatically determining a second classification of the second medical image using the classification model. The set of functions also includes automatically determining whether to display the second medical image to the user based on the second classification.
- Other aspects of the embodiments described herein will become apparent by consideration of the detailed description and accompanying drawings.
- FIG. 1 illustrates a set of display devices displaying a medical exam according to some embodiments.
- FIG. 2 schematically illustrates a system for displaying a medical image according to some embodiments.
- FIG. 3 is a flowchart illustrating a method for displaying a medical image using the system of FIG. 2 according to some embodiments.
- Other aspects of the embodiments described herein will become apparent by consideration of the detailed description.
- Before embodiments of the invention are explained in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the accompanying drawings. The invention is capable of other embodiments and of being practiced or of being carried out in various ways.
- Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising” or “having” and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. The terms “mounted,” “connected” and “coupled” are used broadly and encompass both direct and indirect mounting, connecting and coupling. Further, “connected” and “coupled” are not restricted to physical or mechanical connections or couplings, and may include electrical connections or couplings, whether direct or indirect. Also, electronic communications and notifications may be performed using any known means including direct connections, wireless connections, etc.
- A plurality of hardware and software based devices, as well as a plurality of different structural components may be utilized to implement the embodiments described herein. In addition, embodiments described herein may include hardware, software, and electronic components or modules that, for purposes of discussion, may be illustrated and described as if the majority of the components were implemented solely in hardware. However, one of ordinary skill in the art, and based on a reading of this detailed description, would recognize that, in at least one embodiment, the electronic-based aspects of the embodiments described herein may be implemented in software (for example, stored on non-transitory computer-readable medium) executable by one or more processors. As such, it should be noted that a plurality of hardware and software based devices, as well as a plurality of different structural components, may be utilized to implement the embodiments described herein. For example, “mobile device,” “computing device,” and “server” as described in the specification may include one or more electronic processors, one or more memory modules including non-transitory computer-readable medium, one or more input/output interfaces, and various connections (for example, a system bus) connecting the components.
- As described above, exams (for example, a collection of medical images) often include a variety of image classifications or image types, such as anatomical images and textual images. An anatomical image may refer to a medical image that represents anatomy, such as a lateral chest radiograph. A textual image may refer to a medical image that shows text or other data. A textual image may include, for example, scanned or pictured forms, such as a radiation dose report, or even native DICOM images. For example, an exam might include a posterior anterior (“PA”) chest radiograph, a lateral chest radiograph, and other formatted images of various text files, such as referral forms, technologist worksheets, medical record request forms, billing forms, screening forms, and the like. The variety of image classifications or types included in an exam interferes with the application of hanging protocols, and ultimately, with the display of the exam to a reviewer.
- For example, FIG. 1 illustrates a first display device 10 and a second display device 15 displaying a medical exam. As noted above, the medical exam may include a plurality of medical images. The medical images included in the medical exam may include a variety of image classifications or types, including anatomical images and textual images. For example, as illustrated in FIG. 1, the first display device 10 displays a textual image, such as a scanned document, and the second display device 15 displays a plurality of anatomical images. While a reviewer toggles or shuffles through the medical images included in the exam, the reviewer may be interrupted by one or more textual images that are irrelevant to the objective of the reviewer. For example, a reviewing physician may shuffle through medical images to determine a diagnosis of a patient. However, as the reviewing physician shuffles through various anatomical images, a billing form (for example, a textual image) may be displayed to the reviewing physician. Accordingly, the reviewing physician is interrupted by a billing form that is irrelevant to the diagnosis of the patient. Also, the difference in characteristics between textual images and anatomical images may cause eye strain and fatigue, such as when the reviewer shifts his or her attention from a bright textual image to a darker anatomical image. Accordingly, to solve these and other problems, embodiments described herein determine a classification for each medical image included in an exam and display each medical image based on the classification associated with each medical image. -
FIG. 2 schematically illustrates a system 100 for displaying an image (for example, a medical image) according to some embodiments. The system 100 includes a server 105, a medical image database 115, and a user device 117. In some embodiments, the system 100 includes fewer, additional, or different components than illustrated in FIG. 2. For example, the system 100 may include multiple servers 105, medical image databases 115, user devices 117, or a combination thereof. - The
server 105, the medical image database 115, and the user device 117 communicate over one or more wired or wireless communication networks 120. Portions of the communication network 120 may be implemented using a wide area network, such as the Internet, a local area network, such as a Bluetooth™ network or Wi-Fi, and combinations or derivatives thereof. Alternatively or in addition, in some embodiments, components of the system 100 communicate directly as compared to through the communication network 120. Also, in some embodiments, the components of the system 100 communicate through one or more intermediary devices not illustrated in FIG. 2. - The
server 105 is a computing device, which may serve as a gateway for the medical image database 115. For example, in some embodiments, the server 105 may be a PACS server. Alternatively, in some embodiments, the server 105 may be a server that communicates with a PACS server to access the medical image database 115. As illustrated in FIG. 2, the server 105 includes an electronic processor 125, a memory 130, and a communication interface 135. The electronic processor 125, the memory 130, and the communication interface 135 communicate wirelessly, over one or more communication lines or buses, or a combination thereof. The server 105 may include additional components beyond those illustrated in FIG. 2 in various configurations. The server 105 may also perform additional functionality other than the functionality described herein. Also, the functionality described herein as being performed by the server 105 may be distributed among multiple devices, such as multiple servers included in a cloud service environment. In addition, in some embodiments, the user device 117 may be configured to perform all or a portion of the functionality described herein as being performed by the server 105. - The
electronic processor 125 includes a microprocessor, an application-specific integrated circuit (ASIC), or another suitable electronic device for processing data. The memory 130 includes a non-transitory computer-readable medium, such as read-only memory (“ROM”), random access memory (“RAM”) (for example, dynamic RAM (“DRAM”), synchronous DRAM (“SDRAM”), and the like), electrically erasable programmable read-only memory (“EEPROM”), flash memory, a hard disk, a secure digital (“SD”) card, another suitable memory device, or a combination thereof. The electronic processor 125 is configured to access and execute computer-readable instructions (“software”) stored in the memory 130. The software may include firmware, one or more applications, program data, filters, rules, one or more program modules, and other executable instructions. For example, the software may include instructions and associated data for performing a set of functions, including the methods described herein. - For example, as illustrated in
FIG. 2, the memory 130 may store a learning engine 145 and a classification model database 150. In some embodiments, the learning engine 145 develops a classification model using one or more machine learning functions. Machine learning functions are generally functions that allow a computer application to learn without being explicitly programmed. In particular, a computer application performing machine learning functions (sometimes referred to as a learning engine) is configured to develop an algorithm based on training data. For example, to perform supervised learning, the training data includes example inputs and corresponding desired (for example, actual) outputs, and the learning engine progressively develops a model (for example, a classification model) that maps inputs to the outputs included in the training data. Machine learning may be performed using various types of methods and mechanisms including but not limited to decision tree learning, association rule learning, artificial neural networks, inductive logic programming, support vector machines, clustering, Bayesian networks, reinforcement learning, representation learning, similarity and metric learning, sparse dictionary learning, and genetic algorithms. Using all of these approaches, a computer program may ingest, parse, and understand data and progressively refine models for data analytics, including image analytics. - Accordingly, the learning engine 145 (as executed by the electronic processor 125) may perform machine learning using training data to develop a classification model that maps
medical images 165 to one or more classifications. The training data may include, for example, medical images and their associated classifications. For example, the learning engine 145 may identify one or more unique characteristics of a medical image (for example, objects within the medical image, metadata associated with the medical image, and the like) and develop a classification model that maps the one or more unique characteristics to a particular classification. Accordingly, when a subsequent medical image is received, the electronic processor 125 may determine a classification for that subsequent medical image using the classification model developed by the learning engine 145. In other words, the electronic processor 125 determines a classification for each of the medical images 165 using a classification model that analyzes content of each of the medical images 165. Similarly, as described in more detail below, in some embodiments, the electronic processor 125 may use a classification model to determine what images should (and should not) be displayed to a particular user, such as based on a user's role, permissions, and the like. In some embodiments, the classification model may also specify display or presentation parameters for the medical images, such as a hanging protocol, any filtering or modifications (for example, greyscale matching within a sequence of images), and the like. - Classification models generated by the
learning engine 145 may be stored in the classification model database 150. As illustrated in FIG. 2, the classification model database 150 is included in the memory 130 of the server 105. It should be understood that, in some embodiments, the classification model database 150 is included in a separate device accessible by the server 105 (included in the server 105 or external to the server 105). - The
communication interface 135 allows the server 105 to communicate with devices external to the server 105. For example, as illustrated in FIG. 2, the server 105 may communicate with the medical image database 115 through the communication interface 135. In particular, the communication interface 135 may include a port for receiving a wired connection to an external device (for example, a universal serial bus (“USB”) cable and the like), a transceiver for establishing a wireless connection to an external device (for example, over one or more communication networks 120, such as the Internet, a local area network (“LAN”), a wide area network (“WAN”), and the like), or a combination thereof. - The
user device 117 is also a computing device and may include a desktop computer, a terminal, a workstation, a laptop computer, a tablet computer, a smart watch or other wearable, a smart television or whiteboard, or the like. Although not illustrated, the user device 117 may include similar components as the server 105 (an electronic processor, a memory, and a communication interface). The user device 117 may also include a human-machine interface 140 for interacting with a user. The human-machine interface 140 may include one or more input devices, one or more output devices, or a combination thereof. Accordingly, in some embodiments, the human-machine interface 140 allows a user to interact with (for example, provide input to and receive output from) the user device 117. For example, the human-machine interface 140 may include a keyboard, a cursor-control device (for example, a mouse), a touch screen, a scroll ball, a mechanical button, a display device (for example, a liquid crystal display (“LCD”)), a printer, a speaker, a microphone, or a combination thereof. As illustrated in FIG. 2, in some embodiments, the human-machine interface 140 includes a display device 160. The display device 160 may be included in the same housing as the user device 117 or may communicate with the user device 117 over one or more wired or wireless connections. For example, in some embodiments, the display device 160 is a touchscreen included in a laptop computer or a tablet computer. In other embodiments, the display device 160 is a monitor, a television, or a projector coupled to a terminal, desktop computer, or the like via one or more cables. - The
medical image database 115 stores a plurality of medical images 165. In some embodiments, the medical image database 115 is combined with the server 105. Alternatively or in addition, the medical images 165 may be stored within a plurality of databases, such as within a cloud service. Although not illustrated in FIG. 2, the medical image database 115 may include components similar to the server 105, such as an electronic processor, a memory, a communication interface, and the like. For example, the medical image database 115 may include a communication interface configured to communicate (for example, receive data and transmit data) over the communication network 120. - As mentioned above, the
medical images 165 stored in the medical image database 115 may include a variety of classifications or types. For example, the medical images 165 may include anatomical images, such as a lateral chest radiograph, a PA chest radiograph, and the like. Alternatively or in addition, the medical images 165 may include textual images, such as a referral form, a technologist worksheet, a medical record request form, a billing form, a screening form, another type of administrative form, and the like. For example, a medical professional may capture a picture of a billing form by scanning the billing form. The scanned version of the billing form may be stored in the medical image database 115 as a medical image 165. Accordingly, in some embodiments, a memory of the medical image database 115 stores the medical images 165 and associated data (for example, reports, metadata, and the like). For example, the medical image database 115 may include a picture archiving and communication system (“PACS”), a radiology information system (“RIS”), an electronic medical record (“EMR”), a hospital information system (“HIS”), an image study ordering system, and the like. - A user may use the
user device 117 to access and view the medical images 165 and interact with the medical images 165. For example, the user may access the medical images 165 from the medical image database 115 (through a browser application or a dedicated application stored on the user device 117 that communicates with the server 105) and view the medical images 165 on the display device 160 associated with the user device 117. As noted above, the variety of image classifications included in an exam may interfere with the application of hanging protocols (for example, a set of rules), and ultimately, with the display of the exam to a reviewer. To solve this and other problems, the system 100 is configured to automatically determine a classification of one or more medical images 165 (for example, a first medical image, a second medical image, and the like). Based on the classification of the medical images 165, the methods and systems described herein display the medical images 165 to a user (for example, a reviewer). - For example,
FIG. 3 is a flowchart illustrating a method 200 for displaying an image (for example, a medical image 165) according to some embodiments. The method 200 is described here as being performed by the server 105 (the electronic processor 125 executing instructions). However, as noted above, the functionality performed by the server 105 (or a portion thereof) may be performed by other devices, including, for example, the user device 117 (via an electronic processor executing instructions). As illustrated in FIG. 3, the method 200 includes receiving, with the electronic processor 125, one or more medical images 165 from the medical image database 115 (at block 205). In some embodiments, the electronic processor 125 receives the medical images 165 via the communication interface 135 from the medical image database 115 over the communication network 120. As noted above, in some embodiments, one or more medical images 165 may be stored at additional or different databases, servers, devices, or a combination thereof. Accordingly, in some embodiments, the electronic processor 125 receives the medical images 165 from additional or different databases, servers, devices, or a combination thereof. The image received by the server 105 may be based on a request for a particular medical exam received from the user device 117. In some embodiments, the request for the medical exam also includes information regarding a user making the request. However, in other embodiments, the classification process performed by the server 105 as described above may be performed in response to other triggering events, including, for example, the generation and storage of a new medical exam in the medical image database 115. - After receiving the
medical images 165 from the medical image database 115 (at block 205), the electronic processor 125 automatically determines a classification of each of the medical images 165 (at block 210). As noted above, a classification of a medical image 165 may include, for example, an anatomical image or a textual image. Accordingly, the electronic processor 125 may determine that a first medical image is an anatomical image and a second medical image is a textual image, or vice versa. In some embodiments, the electronic processor 125 is further configured to determine a subcategory for a medical image. For example, in some embodiments, when the electronic processor 125 determines that the medical image 165 is a textual image, the electronic processor 125 further determines a subcategory for the medical image 165. The subcategory may include, for example, a referral form, a technologist worksheet, a medical record, a request form, a billing form, a screening form, another administrative or textual form, or a combination thereof. In some embodiments, the electronic processor 125 displays the medical images 165 as described below based on the classification and the subcategory of the medical images 165 determined to be textual images. - In some embodiments, the
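The two-level classification described above (a top-level category, plus a subcategory determined only for textual images) can be modeled with a small data structure. The following Python sketch is illustrative only; the class name, constants, and validation logic are assumptions, not part of the patent.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical category labels mirroring the classifications described above.
ANATOMICAL = "anatomical"
TEXTUAL = "textual"

# Subcategories the patent lists for textual images (names are illustrative).
TEXTUAL_SUBCATEGORIES = {
    "referral_form", "technologist_worksheet", "medical_record",
    "request_form", "billing_form", "screening_form",
}

@dataclass
class ImageClassification:
    image_id: str
    category: str                       # ANATOMICAL or TEXTUAL
    subcategory: Optional[str] = None   # determined only for textual images

    def __post_init__(self):
        # A subcategory is only meaningful when the image is textual.
        if self.category != TEXTUAL and self.subcategory is not None:
            raise ValueError("subcategory applies only to textual images")
        if self.subcategory is not None and self.subcategory not in TEXTUAL_SUBCATEGORIES:
            raise ValueError(f"unknown subcategory: {self.subcategory}")
```

A display step could then branch on `category` first and, for textual images, on `subcategory` as the description suggests.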
electronic processor 125 determines the classification (and optionally subcategory) of each of the medical images 165 using one or more classification models stored in the classification model database 150. The electronic processor 125 may access the classification models stored in the classification model database 150 to determine a classification of each of the medical images 165. For example, in some embodiments, the server 105 identifies one or more unique characteristics of the medical image 165 and uses the one or more unique characteristics of the medical image 165 to select a classification model from the classification models stored in the classification model database 150. As noted above, the learning engine 145 may develop a plurality of classification models, wherein each classification model is associated with similar unique characteristics of medical images 165 (for example, an object within a medical image 165, metadata associated with the medical images 165, another unique characteristic of the medical image 165, or a combination thereof). Accordingly, the server 105 may use the identified unique characteristics of the medical image 165 to select a classification model associated with medical images most similar to the identified unique characteristics. - After selecting the appropriate classification model, the
electronic processor 125 applies the selected classification model to the received medical images 165. As noted above, each classification model may take the medical image 165 (and, optionally, additional data) and output one or more classifications for each of the medical images 165. For example, in some embodiments, the electronic processor 125 applies the selected classification model to a received medical image 165 to determine a classification of the received medical image 165, such as an anatomical image or a textual image. - As illustrated in
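The model-selection-then-apply sequence described above (pick the classification model associated with images most similar to the identified unique characteristics, then run it) can be sketched as follows. This is a minimal illustration under assumed names: the characteristic strings, the shape of the model database, and the lambda classifiers are all hypothetical stand-ins for the learning-engine output.

```python
# Hypothetical sketch: each stored model is keyed by the set of image
# characteristics it is associated with; the model whose key overlaps
# most with the image's identified characteristics is selected.

def select_model(image_characteristics, model_database):
    """Return the model associated with images most similar to this one."""
    def overlap(key):
        return len(set(key) & set(image_characteristics))
    best_key = max(model_database, key=overlap)
    return model_database[best_key]

# Toy model database: keys are characteristic sets, values are classifiers
# (here trivial lambdas standing in for trained models).
model_database = {
    frozenset({"modality:CR", "has_burned_in_text"}): lambda image: "textual",
    frozenset({"modality:CR", "has_anatomy"}): lambda image: "anatomical",
}

# Select a model for an image with the given characteristics, then apply it.
model = select_model({"modality:CR", "has_burned_in_text"}, model_database)
classification = model(None)  # a real system would pass the pixel data
```

Similarity here is a simple characteristic-overlap count; a production system would likely use a richer distance measure.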
FIG. 3, optionally, the electronic processor 125 may also automatically determine a set of rules for the medical images 165 (at block 215). In some embodiments, the set of rules includes rules related to the automated presentation of the medical images 165 based on a user's preferences. For example, the set of rules may be a “hanging protocol” feature of a PACS. In some embodiments, the set of rules is configurable. The set of rules may define, for example, a display preference, a software application preference, a display device preference, a viewing environment preference, an image arrangement preference, an image comparison preference, another preference, or a combination thereof. - In some embodiments, the
electronic processor 125 determines the set of rules for the medical images 165 based on, for example, a user identification, a user role, a location of service, a patient demographic, a modality, an exam type, an exam group, another parameter, or a combination thereof. The set of rules may be stored in the memory 130 of the server 105, an external device, server, or database, and the like. In some embodiments, the electronic processor 125 receives (via the user device 117) a user selection of a set of rules for the medical images 165 (for example, with a request for a particular medical exam or separate from a request for an exam, such as part of configuring the system 100). Alternatively or in addition, the electronic processor 125 may use machine learning functions and techniques to develop the set of rules for the medical images 165, as similarly described above with respect to developing and applying the one or more classification models. - As illustrated in
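Rule determination keyed on parameters such as user role, modality, and exam type can be sketched as a most-specific-match lookup. All names and rule contents below are hypothetical; the patent does not prescribe this representation.

```python
# Hypothetical hanging-protocol table: each entry pairs a set of matching
# conditions (user role, modality, exam type, ...) with a rule set. The
# entry whose conditions all match the request context, and which matches
# the most conditions, wins.

RULE_SETS = [
    ({"user_role": "technologist"},
     {"require_acknowledgment": ["screening_form"]}),
    ({"user_role": "physician", "modality": "MG"},
     {"show_first": ["referral_form"],
      "textual_display": 1, "anatomical_display": 2}),
    ({},  # default rule set: empty conditions match any context
     {"textual_display": 1, "anatomical_display": 2}),
]

def resolve_rules(context):
    """Return the most specific rule set whose conditions all match."""
    matching = [(len(cond), rules) for cond, rules in RULE_SETS
                if all(context.get(k) == v for k, v in cond.items())]
    return max(matching, key=lambda pair: pair[0])[1]
```

With this layout, a physician reading a mammography exam would get the referral-form-first rule set, while any unmatched context falls through to the default.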
FIG. 3 , theelectronic processor 125 displays, via thedisplay device 160 of theuser device 117, a first display of the one or moremedical images 165 to a user based on the classification of each of the medical images 165 (at block 220). In some embodiments, the first display of themedical images 165 also based on the set of rules as described above. In other words, how theelectronic processor 125 displays themedical images 165 may be dependent on the classification of each of themedical images 165, the set of rules (for example, a hanging protocol) for themedical images 165, or a combination thereof. For example, theelectronic processor 125 may display a screening form to a technologist requiring acknowledgment from the technologist (based on the set of rules for a user having the role of a technologist). As another example, theelectronic processor 125 may display a referral form to a physician (based on the set of rules for a user having the role of a physician). As yet another example, theelectronic processor 125 may displaymedical images 165 classified as textual images on a first display device (for example, the first display device 10) andmedical images 165 classified as anatomical images on a second display device (for example, the second display device 15), as illustrated inFIG. 1 . In this example, theelectronic processor 125 displays themedical images 165 based on the classification of eachmedical image 165 as well as the set of rules (for example, textual images displayed on a first display device and anatomical images displayed on a second display device). - After displaying the first display of the
medical images 165 to the user (at block 220), the electronic processor 125 may optionally receive a user interaction with the first display of the medical images 165 (at block 225). The user interaction may be with one or more of the medical images 165 included in the first display. The electronic processor 125 may receive the user interaction via the human-machine interface 140 of the user device 117. In some embodiments, the user interaction is a modification to a classification of one or more of the medical images 165, a modification to the set of rules for the medical images 165, or a combination thereof. For example, a user of the user device 117 viewing the first display of medical images 165 may identify an error in the classification of one of the medical images 165. Using the human-machine interface 140 of the user device 117, the user may interact with that medical image 165, the first display, or a combination thereof to correct the classification of that medical image 165. As another example, the user of the user device 117 viewing the first display of the medical images 165 may desire to alter a display preference (for example, a rule included in the set of rules) for viewing the first display. Using the human-machine interface 140, the user may interact with the medical image 165, the first display, or a combination thereof to modify the display preference (for example, a rule included in the set of rules). - When the user interaction is a modification to a classification of one or more of the
medical images 165, the electronic processor 125 may optionally use the user interaction as feedback (for example, as training data for the learning engine 145). For example, the electronic processor 125 may provide the user interaction to the learning engine 145 to update or tune a previously generated classification model, develop a new classification model, or a combination thereof (for example, a modified classification model). When the user interaction is a modification to the set of rules for the medical images 165, the electronic processor 125 may use the user interaction to update one or more rules included in the set of rules. Accordingly, the user interaction may provide a closed feedback loop for the system 100. - After receiving a user interaction with the first display (at block 225), the
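The closed feedback loop described above has two branches: a classification correction becomes training data for the learning engine, and a rule modification updates the stored rule set. A minimal sketch, under assumed names (the patent does not specify this interface):

```python
# Hypothetical sketch of the feedback loop: classification corrections
# accumulate as labeled training data for later model tuning, while rule
# modifications take effect on the stored rule set immediately.

class FeedbackLoop:
    def __init__(self, rules):
        self.rules = dict(rules)
        self.training_data = []  # (image_id, corrected_label) pairs

    def correct_classification(self, image_id, corrected_label):
        # Recorded as training data the learning engine can use to update
        # or tune an existing classification model, or develop a new one.
        self.training_data.append((image_id, corrected_label))

    def modify_rule(self, rule_name, new_value):
        # Rule changes update the set of rules directly.
        self.rules[rule_name] = new_value
```

A subsequent (second) display would then be built from `self.rules` and from any model retrained on `self.training_data`.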
electronic processor 125 displays a second display of the medical images 165 (at block 230). In some embodiments, the second display of the medical images 165 is based on the user interaction received via the human-machine interface 140. Accordingly, the second display of the medical images 165 reflects any modifications or updates made by the electronic processor 125 in response to receiving the user interaction. The second display of the medical images 165 may be different from the first display. For example, the second display may be an updated version of the first display. Alternatively, in some embodiments, the second display of the medical images 165 is the same as the first display of the medical images 165. - In some embodiments, classifications may be stored with an image so that the image does not subsequently need to be classified. In some embodiments, the classification can be stored as part of metadata for the image. This metadata may be editable by a user, which allows a user to modify the automatic classification performed by the system. Also, in some embodiments, when metadata for an image is updated, the
system 100 as described herein may be configured to re-assess any previous classifications to dynamically adjust to such metadata changes. - Thus, embodiments described herein apply one or more classification models (which may be generated using machine learning) to automatically distinguish anatomical images from other types of images, such as textual images, so that the anatomical images can be displayed without interference. One or more rules may also be applied to determine the precise display, timing, and presentation of images based on the classifications, such as to whom a particular image is displayed, when, or in what sequence. Various features and advantages of the embodiments described herein are set forth in the following claims.
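The metadata behavior described above — store the classification with the image so it need not be re-classified, let a user edit it, and re-assess when other metadata changes — can be sketched as a cache with invalidation. The class and key names below are assumptions for illustration.

```python
# Hypothetical sketch: the classification is cached in the image's
# metadata; a change to any other metadata field invalidates the cache
# so the image is re-assessed on the next access.

class ImageRecord:
    def __init__(self, image_id, metadata=None):
        self.image_id = image_id
        self.metadata = dict(metadata or {})

    def classification(self, classifier):
        # Reuse the stored classification when present; otherwise
        # classify once and cache the result in the metadata.
        if "classification" not in self.metadata:
            self.metadata["classification"] = classifier(self)
        return self.metadata["classification"]

    def update_metadata(self, key, value):
        self.metadata[key] = value
        if key != "classification":
            # A metadata change may affect the classification: drop the
            # cached value so the image is dynamically re-assessed.
            self.metadata.pop("classification", None)
```

A direct user edit would simply call `update_metadata("classification", ...)`, which keeps the edited value rather than invalidating it.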
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/840,768 US10304564B1 (en) | 2017-12-13 | 2017-12-13 | Methods and systems for displaying an image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/840,768 US10304564B1 (en) | 2017-12-13 | 2017-12-13 | Methods and systems for displaying an image |
Publications (2)
Publication Number | Publication Date |
---|---|
US10304564B1 US10304564B1 (en) | 2019-05-28 |
US20190180861A1 true US20190180861A1 (en) | 2019-06-13 |
Family
ID=66636311
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/840,768 Active US10304564B1 (en) | 2017-12-13 | 2017-12-13 | Methods and systems for displaying an image |
Country Status (1)
Country | Link |
---|---|
US (1) | US10304564B1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6885896B2 (en) * | 2017-04-10 | 2021-06-16 | 富士フイルム株式会社 | Automatic layout device and automatic layout method and automatic layout program |
EP4134971A1 (en) | 2021-08-09 | 2023-02-15 | Ai Medical AG | Method and devices for supporting the observation of an abnormality in a body portion |
Family Cites Families (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8594410B2 (en) * | 2006-08-28 | 2013-11-26 | Definiens Ag | Context driven image mining to generate image-based biomarkers |
US7593967B2 (en) * | 2002-11-27 | 2009-09-22 | Amirsys, Inc. | Electronic clinical reference and education system and method of use |
US7343030B2 (en) * | 2003-08-05 | 2008-03-11 | Imquant, Inc. | Dynamic tumor treatment system |
US7783094B2 (en) * | 2005-06-02 | 2010-08-24 | The Medipattern Corporation | System and method of computer-aided detection |
US8014576B2 (en) * | 2005-11-23 | 2011-09-06 | The Medipattern Corporation | Method and system of computer-aided quantitative and qualitative analysis of medical images |
US9152759B2 (en) | 2006-07-31 | 2015-10-06 | General Electric Company | Key image note matching by image hanging protocols |
US8150175B2 (en) * | 2007-11-20 | 2012-04-03 | General Electric Company | Systems and methods for image handling and presentation |
US8165368B2 (en) | 2008-09-29 | 2012-04-24 | General Electric Company | Systems and methods for machine learning based hanging protocols |
US9002120B2 (en) * | 2008-10-03 | 2015-04-07 | Intellectual Ventures Fund 83 Llc | Interactive image selection method |
US20110093293A1 (en) * | 2009-10-16 | 2011-04-21 | Infosys Technologies Limited | Method and system for performing clinical data mining |
US8687860B2 (en) * | 2009-11-24 | 2014-04-01 | Penrad Technologies, Inc. | Mammography statistical diagnostic profiler and prediction system |
US9152760B2 (en) | 2011-11-23 | 2015-10-06 | General Electric Company | Smart 3D PACS workflow by learning |
US8923580B2 (en) | 2011-11-23 | 2014-12-30 | General Electric Company | Smart PACS workflow systems and methods driven by explicit learning from users |
US20140143710A1 (en) * | 2012-11-21 | 2014-05-22 | Qi Zhao | Systems and methods to capture and save criteria for changing a display configuration |
US20160335403A1 (en) | 2014-01-30 | 2016-11-17 | Koninklijke Philips N.V. | A context sensitive medical data entry system |
US20150254403A1 (en) * | 2014-03-09 | 2015-09-10 | Lucy LaPerna | Electronic Medical Record Interface |
US20170154167A1 (en) | 2014-05-13 | 2017-06-01 | Agfa Healthcare Inc. | A system and a related method for automatically selecting a hanging protocol for a medical study |
SG11201705768QA (en) * | 2015-01-16 | 2017-08-30 | Pricewaterhousecoopers Llp | Healthcare data interchange system and method |
JP2016151827A (en) * | 2015-02-16 | 2016-08-22 | キヤノン株式会社 | Information processing unit, information processing method, information processing system and program |
US10140421B1 (en) * | 2017-05-25 | 2018-11-27 | Enlitic, Inc. | Medical scan annotator system |
- 2017-12-13: US 15/840,768 filed (patent US10304564B1, status Active)
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190278869A1 (en) * | 2018-03-08 | 2019-09-12 | International Business Machines Corporation | Web-based medical image viewer with web database |
US10831854B2 (en) * | 2018-03-08 | 2020-11-10 | International Business Machines Corporation | Web-based medical image viewer with web database |
US11550869B2 (en) | 2018-03-08 | 2023-01-10 | Merative Us L.P. | Web-based medical image viewer with web database |
US20190362835A1 (en) * | 2018-05-23 | 2019-11-28 | Koninklijke Philips N.V. | System and method for generating textual descriptions from medical images |
US11056227B2 (en) * | 2018-05-23 | 2021-07-06 | Koninklijke Philips N.V. | System and method for generating textual descriptions from medical images |
US20200250826A1 (en) * | 2019-02-03 | 2020-08-06 | Nec Corporation Of America | Systems and methods for processing data extracted from frames captured from video signals |
US10818013B2 (en) * | 2019-02-03 | 2020-10-27 | Nec Corporation Of America | Systems and methods for processing data extracted from frames captured from video signals |
US11238592B2 (en) * | 2019-02-03 | 2022-02-01 | Nec Corporation Of America | Systems and methods for processing data extracted from frames captured from video signals |
CN110910991A (en) * | 2019-11-21 | 2020-03-24 | 张军 | Medical automatic image processing system |
Also Published As
Publication number | Publication date |
---|---|
US10304564B1 (en) | 2019-05-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10304564B1 (en) | Methods and systems for displaying an image | |
AU2020202337B2 (en) | Characterizing states of subject | |
US20240000314A1 (en) | Method for automating collection, association, and coordination of multiple medical data sources | |
US11342064B2 (en) | Triage of patient medical condition based on cognitive classification of medical images | |
US20190340753A1 (en) | Systems and methods for detecting an indication of a visual finding type in an anatomical image | |
US11227384B2 (en) | Methods and systems for determining a diagnostically unacceptable medical image | |
CN107729929B (en) | Method and device for acquiring information | |
US10916341B2 (en) | Automated report generation based on cognitive classification of medical images | |
US10993653B1 (en) | Machine learning based non-invasive diagnosis of thyroid disease | |
US20190189268A1 (en) | Differential diagnosis mechanisms based on cognitive evaluation of medical images and patient data | |
EP2608094A1 (en) | Medical apparatus and image displaying method using the same | |
US11935636B2 (en) | Dynamic medical summary | |
JP2017507386A (en) | Medical device tracking | |
US20190189266A1 (en) | Automated worklist prioritization of patient care based on cognitive classification of medical images | |
US20190189267A1 (en) | Automated medical resource reservation based on cognitive classification of medical images | |
WO2020118101A1 (en) | System and method for providing personalized health data | |
US20190189265A1 (en) | Automated medical case routing based on discrepancies between human and machine diagnoses | |
US20140297316A1 (en) | Method And Apparatus For Adaptive Prefetching Of Medical Data | |
JP2018142145A (en) | Remote image reading system, its control method, and program | |
US20220181007A1 (en) | Computerized systems for prediction of geographic atrophy progression using deep learning applied to clinical imaging | |
CN112069865A (en) | Method and system for reporting a request for review of a physical object | |
US20190043441A1 (en) | Automatically adjusting a display property of data to reduce impaired visual perception | |
US20230154592A1 (en) | Radiology peer review using artificial intelligence | |
US20230154612A1 (en) | Radiology peer review using artificial intelligence with review feedback | |
US20190180864A1 (en) | Method and system for selecting and arranging images with a montage template |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
AS | Assignment |
Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW Y Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:REICHER, MURRAY A.;REEL/FRAME:044393/0909 Effective date: 20171205 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
FEPP | Fee payment procedure |
Free format text: SURCHARGE FOR LATE PAYMENT, LARGE ENTITY (ORIGINAL EVENT CODE: M1554); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |