EP3791325A1 - Systems and methods for detecting an indication of a visual finding type in an anatomical image - Google Patents
- Publication number
- EP3791325A1 (application number EP19800738.7A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- neural network
- label
- visual finding
- visual
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/254—Fusion techniques of classification results, e.g. of results related to same input data
- G06F18/256—Fusion techniques of classification results, e.g. of results related to same input data of results relating to different input data, e.g. multimodal recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
Definitions
- the present invention in some embodiments thereof, relates to medical anatomical images and, more specifically, but not exclusively, to systems and methods for automated analysis of medical anatomical images.
- a system for computing a single-label neural network for detection of an indication of a single visual finding type in an anatomical image of a target individual, the single visual finding type denoting an acute medical condition for early and rapid treatment thereof, comprises: at least one hardware processor executing a code for: providing a multi-label training dataset including a plurality of anatomical images each associated with a label indicative of at least one visual finding type selected from a plurality of visual finding types, or indicative of no visual finding types, training a multi-label neural network for detection of the plurality of visual finding types in a target anatomical image according to the multi-label training dataset, creating a single-label training dataset including a plurality of anatomical images each associated with a label indicative of the single visual finding type selected from the plurality of visual finding types, or indicative of an absence of the single visual finding type, and training a single-label neural network for detection of the single visual finding type in a target anatomical image, by setting the trained multi-label neural network as an initial baseline of the single-label neural network, and at least one of fine-tuning and retraining the baseline according to the single-label training dataset.
- a system for detection of an indication of a single visual finding type in a target anatomical image of a target individual by a single-label neural network comprises: at least one hardware processor executing a code for: feeding a target anatomical image into a single-label neural network, and computing likelihood of an indication of the single visual finding type in the target anatomical image by the single-label neural network, wherein the single-label neural network is computed by at least one of fine-tuning and retraining a trained multi-label neural network according to a single-label training dataset of a plurality of anatomical images labeled with an indication of the visual finding type, wherein the multi-label neural network is trained to compute likelihood of each of a plurality of visual finding types based on a multi-label training dataset of a plurality of anatomical images labeled with the plurality of visual finding types.
- a method of computing a single-label neural network for detection of an indication of a single visual finding type in an anatomical image of a target individual, the single visual finding type denoting an acute medical condition for early and rapid treatment thereof, comprises: providing a multi-label training dataset including a plurality of anatomical images each associated with a label indicative of at least one visual finding type selected from a plurality of visual finding types, or indicative of no visual finding types, training a multi-label neural network for detection of the plurality of visual finding types in a target anatomical image according to the multi-label training dataset, creating a single-label training dataset including a plurality of anatomical images each associated with a label indicative of the single visual finding type selected from the plurality of visual finding types, or indicative of an absence of the single visual finding type, and training a single-label neural network for detection of the single visual finding type in a target anatomical image, by setting the trained multi-label neural network as an initial baseline of the single-label neural network, and at least one of fine-tuning and retraining the baseline according to the single-label training dataset.
- the accuracy of the trained single-label neural network for detection of the single visual finding type in the target anatomical image is higher than the accuracy of the multi-label neural network for detection of the single visual finding type in a target anatomical image, and higher than another single-label neural network trained only on the single-label training dataset using a standard un-trained neural network as the initial baseline, and higher than another single-label neural network trained on a multi-object neural network trained to detect non-medical objects in non-medical images.
- detection of the single visual finding type comprises computing, by the single-label neural network, a likelihood score indicative of a probability of the single visual finding type being depicted in the target anatomical image.
- the anatomical image and the single visual finding type are selected from the group consisting of: two dimensional (2D) AP and/or PA and/or lateral chest x-ray and pneumothorax including a small pneumothorax, 2D AP and/or PA chest x-ray and pneumomediastinum, and 2D abdominal x-ray and pneumoperitoneum.
- labels of the plurality of anatomical images of the multi-label training dataset are created based on an analysis that maps individual sentences of a plurality of sentences of a respective text based radiology report to a corresponding visual finding type of the plurality of visual finding types.
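The sentence-to-label mapping described above can be sketched as follows. This is a minimal illustrative example: the keyword map, the negation handling, and the function name are assumptions for demonstration, not the mapping model the patent actually uses.

```python
import re

# Hypothetical phrase-to-finding-type map; the real mapping covers
# the full set of visual finding types listed in the description.
FINDING_KEYWORDS = {
    "pneumothorax": "pneumothorax",
    "pleural effusion": "pleural effusion",
    "cardiomegaly": "cardiomegaly",
}

def labels_from_report(report_text):
    """Map each sentence of a free-text radiology report to zero or more
    visual finding type labels for the multi-label training dataset."""
    labels = set()
    for sentence in re.split(r"(?<=[.!?])\s+", report_text.lower()):
        for phrase, label in FINDING_KEYWORDS.items():
            # Skip negated mentions, e.g. "no pneumothorax"
            if phrase in sentence and not re.search(r"\bno\b|\bwithout\b", sentence):
                labels.add(label)
    return sorted(labels)
```

A report whose only positive sentence mentions a pneumothorax would thus yield the single label `pneumothorax`, while a negated mention ("no pleural effusion") contributes nothing.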
- the multi-label neural network is trained to identify about 20-50 different visual finding types.
- the plurality of visual finding types include members selected from the group consisting of: abnormal aorta, aortic calcification, artificial valve, atelectasis, bronchial wall thickening, cardiac pacer, cardiomegaly, central line, consolidation, costophrenic angle blunting, degenerative changes, elevated diaphragm, fracture, granuloma, hernia diaphragm, hilar prominence, hyperinflation, interstitial markings, kyphosis, mass, mediastinal widening, much bowel gas, nodule, orthopedic surgery, osteopenia, pleural effusion, pleural thickening, pneumothorax, pulmonary edema, rib fracture, scoliosis, soft tissue calcification, sternotomy wires, surgical clip noted, thickening of fissure, trachea deviation, transplant, tube, and vertebral height loss.
- training the single-label neural network comprises training a plurality of instances of the single-label neural network, wherein each instance has different neural network parameters, and further comprising: evaluating performance of each instance of the plurality of instances for detection of the indication of the single visual finding type, and creating an ensemble by selecting a combination of the instances according to a requirement of the evaluated performance, wherein the single-label neural network comprises the ensemble.
- the different neural network parameters of the plurality of instances of the single-label neural network are selected from the group consisting of: preprocessing image size, preprocessing input size, neural network architecture modification, at least one additional intermediate dense layer before a final output, preprocessing normalization type, and standard deviation normalization.
- training the multi label neural network comprises training a plurality of instances of the multi-label neural network, and selecting one of the instances having a lowest validation loss for the single visual finding, wherein training the single-label neural network comprises training the selected one instance using a checkpoint of network weights of the selected one instance.
- training the single label neural network comprises training a plurality of instances of the single-label neural network varying according to at least one network parameter, and further comprising: obtaining at least one of a target sensitivity and a target specificity, and a tolerance, computing a mini-AUC (area under curve) for a region under the receiver operating characteristic (ROC) curve computed for each instance of the plurality of instances of the single-label neural network, corresponding to the at least one of the target sensitivity and target specificity within the tolerance, and selecting at least one instance of the plurality of instances of the single-label neural network according to a requirement of the mini-AUC, for inclusion in an ensemble of the single-label neural network.
- weights of the baseline are set according to corresponding weights of non-last fully connected layers of the trained multi-label neural network.
- a prevalence of the anatomical images labeled with the single visual finding type of the single-label training dataset is statistically significantly higher than a prevalence of the anatomical images labeled with the single visual finding type of the multi-label training dataset, which denotes the in-the-wild prevalence of the single visual finding type in practice.
- the plurality of anatomical images of the multi-label training dataset are clustered into three clusters, comprising: a single visual finding type cluster including anatomical images depicting at least the single visual finding type, a general positive finding cluster including anatomical images depicting at least one of the plurality of visual finding types excluding the single visual finding type, and a negative finding cluster including anatomical images depicting none of the plurality of visual finding types, wherein the single-label training dataset is created by randomly sampling one image from each of the clusters in succession.
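The three-cluster sampling scheme above can be sketched as follows; the function name, the without-replacement sampling, and the round count are illustrative assumptions rather than details fixed by the patent.

```python
import random

def sample_single_label_dataset(single_finding, general_positive, negative,
                                n_rounds, seed=0):
    """Build a single-label training set by sampling one image from each of
    the three clusters in succession: the single-visual-finding-type cluster,
    the general positive finding cluster, and the negative finding cluster."""
    rng = random.Random(seed)
    clusters = [list(single_finding), list(general_positive), list(negative)]
    dataset = []
    for _ in range(n_rounds):
        for cluster in clusters:
            if cluster:
                # Remove a random image so each image is sampled at most once.
                dataset.append(cluster.pop(rng.randrange(len(cluster))))
    return dataset
```

This interleaving raises the prevalence of the single visual finding type in the resulting dataset well above its in-the-wild prevalence, as the description notes.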
- the system further comprises code for and/or the method further comprises at least one of: diagnosing the acute medical condition and treating the patient for the acute medical condition.
- the feeding, and the computing are iterated for each of a plurality of target anatomical images, and further comprising: generating instructions for creating a triage list for manual review by a human user of respective target anatomical images computed as likely including the indication of the visual finding type.
- the visual finding type denotes an acute medical condition requiring urgent treatment, wherein a time delay in diagnosis and treatment of the acute medical condition leads to increased risk of morbidity for the patient.
- the computed likelihood denotes a confidence score indicative of probability of the presence of the visual finding type in the anatomical image.
- the instructions are for creating the triage list according to priority for review by the human reviewer, ranked by decreasing likelihood of the indication of the visual finding type based on the confidence score.
- the system further comprises code for and/or the method further comprises receiving a plurality of target anatomical images from a medical imaging storage server, feeding each one of the plurality of target anatomical images into a visual filter neural network for outputting a classification category indicative of a target body region depicted at a target sensor orientation and a rotation relative to a baseline defined by a single-label neural network, or another classification category indicative of at least one of a non-target body region and a non-target sensor orientation, rejecting a sub-set of the plurality of target anatomical images classified into the another classification category, to obtain a remaining sub-set of the plurality of target anatomical images, rotating to the baseline the remaining sub-set of the plurality of target anatomical images classified as rotated relative to the baseline, and feeding each one of the remaining sub-set of the plurality of target anatomical images into the single-label neural network for computing likelihood of an indication of the single visual finding type in the respective target anatomical image.
- the system further comprises code for and/or the method further comprises identifying pixels for the target anatomical image having outlier pixel intensity values denoting an injection of content, and adjusting the outlier pixel intensity values of the identified pixels to values computed as a function of non-outlier pixel intensity values, prior to the feeding the target anatomical image into the single-label neural network.
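The outlier-pixel adjustment can be sketched as a percentile-based clamp, where intensities outside the bulk of the pixel population (e.g., burned-in annotations injected into the x-ray) are replaced by bounds computed from the non-outlier pixels. The percentile thresholds here are illustrative assumptions, not values specified by the patent.

```python
import numpy as np

def suppress_outlier_pixels(image, low_pct=1.0, high_pct=99.0):
    """Clamp outlier pixel intensities to percentile bounds computed as a
    function of the non-outlier intensity values, before feeding the image
    into the single-label neural network."""
    lo, hi = np.percentile(image, [low_pct, high_pct])
    return np.clip(image, lo, hi)
```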
- FIG. 1 is a flowchart of a method for detection of an indication of a single visual finding type in a target anatomical image of a target individual by a single-label neural network trained from a baseline of a multi-label neural network that includes multiple visual finding types including the selected single visual finding type, in accordance with some embodiments of the present invention.
- FIG. 2 is a block diagram of a system for training a single-label neural network for detection of a single visual finding type from a baseline of a multi-label neural network, and/or for analyzing anatomical images using the single-label neural network, optionally to create a priority list, in accordance with some embodiments of the present invention.
- FIG. 3 is a dataflow diagram depicting exemplary dataflow for detection of an indication of a single visual finding type in a target anatomical image of a target individual by a single-label neural network trained from a baseline of a multi-label neural network, and optionally creating a priority worklist, in accordance with some embodiments of the present invention.
- FIG. 4 is a flowchart of a process for training the single-label neural network from the multi-label neural network, in accordance with some embodiments of the present invention.
- FIG. 5 is a table and graphs summarizing the experimental evaluation for comparison of instances of the single-label neural network based on the mini-AUC in comparison to the standard AUC process, in accordance with some embodiments of the present invention.
- the present invention in some embodiments thereof, relates to medical anatomical images and, more specifically, but not exclusively, to systems and methods for automated analysis of medical anatomical images.
- An aspect of some embodiments of the present invention relates to systems, methods, an apparatus, and/or code instructions (i.e., stored on a data storage device and executable by one or more hardware processor(s)) for computing a single-label training neural network for detection of an indication of a single visual finding in an anatomical image of a target individual.
- the single visual finding denotes an acute medical condition for early and rapid treatment thereof, for example, pneumothorax, pneumoperitoneum, pneumomediastinum, and fracture.
- the single-label neural network is trained in two steps. First, a multi-label neural network is trained for detection of multiple visual finding types in a target anatomical image.
- the multi-label neural network is trained according to a multi-label training dataset that includes multiple anatomical images each associated with a label indicative of one or more visual finding types, or indicative of no visual findings type. Then, the single-label neural network is trained for detection of a selected single visual finding type in a target anatomical image. The single visual finding type is selected from the multiple visual finding types. The single-label neural network is trained according to a single label training dataset storing multiple anatomical images each associated with a label indicative of the single visual finding type, or indicative of an absence of the single visual finding type.
- the single-label neural network is trained by setting the trained multi-label neural network as an initial baseline of the single-label neural network, and fine-tuning and/or re-training the baseline according to the single-label training dataset. For example, setting the values of the weights of the single-label neural network according to the weights of the trained multi-label neural network, and adjusting the values of the weights according to the single-label training dataset.
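The baseline-initialization step can be sketched as follows. The dict-of-arrays weight representation, the layer names, and the random-head initialization are illustrative assumptions; the description only specifies that weights of the non-last fully connected layers are carried over from the trained multi-label network.

```python
import numpy as np

def init_single_label_from_multi_label(multi_label_weights, num_outputs=1, seed=0):
    """Initialize a single-label network from a trained multi-label network:
    copy every layer except the final classification layer, which is replaced
    by a freshly initialized head with a single output. The result is then
    fine-tuned and/or re-trained on the single-label training dataset."""
    rng = np.random.default_rng(seed)
    layer_names = list(multi_label_weights)  # insertion-ordered layer names
    final = layer_names[-1]
    single = {name: w.copy() for name, w in multi_label_weights.items()
              if name != final}
    # Replace the multi-label output layer with a single-output head.
    in_dim = multi_label_weights[final].shape[0]
    single[final] = rng.normal(0.0, 0.01, size=(in_dim, num_outputs))
    return single
```

In a real framework (e.g., a convolutional network) the same idea amounts to loading the multi-label checkpoint and swapping the final dense layer before continuing training.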
- an ensemble of single-label neural networks is trained from the baseline multi-label neural network.
- a target sensitivity and/or target specificity are obtained, for example, manually entered by a user.
- the target sensitivity and/or target specificity may be selected, for example, based on clinical requirement for detecting the selected visual finding type. For example, a certain visual finding may be indicative of a detrimental medical condition if missed by the radiologist.
- the sensitivity may be set to be high in such a case.
- correct diagnosis of the visual finding may be necessary to prevent unnecessary treatment due to an incorrect diagnosis. In such a case, the specificity may be set to be high.
- the single-label neural network may refer to the ensemble of instances of the single-label neural network.
- An aspect of some embodiments of the present invention relates to systems, methods, an apparatus, and/or code instructions (i.e., stored on a data storage device and executable by one or more hardware processor(s)) for detecting an indication of a single visual finding type in a target anatomical image of a target individual.
- the single visual finding type denotes an acute medical condition for early and rapid treatment thereof, for example, pneumothorax, pneumomediastinum, pneumoperitoneum, and fracture.
- the target anatomical image is fed into a single-label neural network that computes likelihood of the single visual finding being present in the target anatomical image.
- the single-label neural network is computed by fine-tuning and/or retraining a trained multi-label neural network according to a single-label training dataset of anatomical images labeled with an indication of the selected visual finding type.
- the multi-label neural network is trained to compute likelihood of each of multiple visual finding types based on a multi-label training dataset of anatomical images labeled with respective visual findings.
- multiple anatomical images are analyzed by being fed into the single-label neural network.
- the multiple anatomical images may be stored in a centralized anatomical imaging storage server, for example, a picture archiving and communication system (PACS) server, optionally according to a medical data storage format, for example, DICOM®.
- a triage list of the analyzed anatomical images is created, for manual review by a user, for example, a radiologist.
- the triage list includes anatomical images for which a likelihood of depicting the selected visual finding type is computed, optionally ranked according to a computed probability.
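The triage ranking can be sketched as follows; the function name and the working-point threshold are illustrative assumptions, not values fixed by the description.

```python
def build_triage_list(scored_images, threshold=0.5):
    """Given (image_id, confidence_score) pairs produced by the single-label
    neural network, keep the images computed as likely depicting the visual
    finding type and rank them by decreasing confidence score, so the most
    urgent studies are presented to the radiologist first."""
    flagged = [(image_id, score) for image_id, score in scored_images
               if score >= threshold]
    return sorted(flagged, key=lambda item: item[1], reverse=True)
```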
- the process of training the single-label neural network for detection of the single visual finding type provides relatively higher accuracy (e.g., sensitivity and/or specificity and/or precision) over other neural network architectures trained to detect the single visual finding type in an anatomical image.
- fine visual findings which may be easily missed by radiologists, and/or difficult to identify by radiologists.
- a standard un-trained neural network that is trained on a single-label training dataset for detecting the single visual finding type.
- a standard multi-object neural network that detects a plurality of different objects, none of which include visual findings of anatomical images (e.g., ImageNet), that is fine-tuned and/or re-trained to detect the single visual finding type.
- the multi-label neural network alone (created and/or trained as described herein) in terms of detecting the single visual finding type.
- detecting the visual finding type in the medical image is a more challenging task in comparison to classifying non-medical objects appearing in non-medical images (e.g., house, dog, name of person depicted in image).
- the visual finding type occupies a relatively small region of the anatomical image, and is generally a fine feature, making it a challenge for neural networks to extract sufficient data for accurate classification of the fine feature.
- non-medical objects in non-medical images may occupy a relatively large region of the non-medical image, and since the entire object is classified rather than a finding in the object, the neural network may rely on a much larger number of features extracted from the image in order to classify the non-medical object.
- the multi-label neural network described herein is different than standard neural networks trained to detect multiple different objects in images (e.g., ImageNet) for multiple reasons: (i) the multi-label neural network described herein is designed to process anatomical images such as x-ray images, which may have a bit depth larger than the displayed depth (e.g., 10-14 bits vs. 8), in contrast to standard neural networks that are designed to process environmental images based on visible light and not anatomical images; (ii) the multi-label neural network described herein is designed to identify multiple different visual finding types in the same context (e.g., AP chest x-ray), in contrast to standard neural networks that identify different objects in different contexts; (iii) the multi-label neural network described herein is designed to identify multiple different visual finding types, each of which may appear at different anatomical locations (e.g., different parts of the lung) and may appear differently (e.g., depending on size, process of evolution), in contrast to standard neural networks that identify objects that are similar to one another.
- the visual finding may be an acute finding, which is not normally present, and representing a medical problem.
- the acute finding may progress or remain stable, but in either case it may be indicative of a situation in which the clinical state of the patient is worsening.
- the acute finding may be indicative of the need for urgent medical treatment. Delay in treatment of the acute finding leads to increases in complications for the patient.
- the visual finding may be a fine feature, which may be easily missed by a radiologist.
- Examples of such acute, fine, easily missed visual findings include: pneumothorax in a chest x-ray, pneumomediastinum in a chest x-ray, pneumoperitoneum in an abdominal x-ray, fracture in a limb x-ray, and detection of acute appendicitis in an US of the appendix.
- At least some of the systems, methods, apparatus, and/or code instructions described herein improve the technical field of automated analysis of anatomical images to identify likelihood of the presence of a visual finding in a medical image, optionally a fine visual finding, optionally representing an acute medical condition requiring urgent diagnosis and treatment, which may easily be missed by a radiologist.
- To identify such visual findings in anatomical images requires a classifier with high accuracy, which is not provided by any standard classifier.
- Such standard classifiers use an off-the-shelf classifier (e.g., neural network), and a training dataset of labeled anatomical images. Such standard classifiers are trained to detect a single visual finding.
- the improvement provided by at least some of the systems, methods, apparatus, and/or code instructions described herein includes an increase in accuracy of the automated detection process, for example, in comparison to accuracy achieved by standard automated detection processes.
- the increase in accuracy is obtained at least by the process of training a multi-label neural network using a multi-label training dataset to detect multiple different visual finding types in a target anatomical image, and then training a single-label neural network to detect a single visual finding type (i.e., selected from the multiple visual finding types which the multi-label neural network is trained to detect), by setting the trained multi-label neural network as a baseline neural network, optionally with one or more adjustments of neural network parameters, and fine-tuning and/or re-training the baseline neural network using a single-label training dataset.
- the improvement provided by at least some of the systems, methods, apparatus, and/or code instructions described herein may include a reduction in the amount of time for alerting a user (e.g., treating physician) to the presence of a visual finding type in an anatomical image for rapid diagnosis and/or treatment thereof.
- At least some of the systems, methods, apparatus, and/or code instructions described herein improve the medical process of diagnosis and/or treatment of acute medical conditions in a patient, for example, within an emergency room setting.
- At least some of the systems, methods, apparatus, and/or code instructions described herein provide a triage system that identifies likelihood of anatomical images (e.g., chest x-rays) including a visual finding indicating an acute medical condition requiring urgent treatment, for example, pneumothorax.
- the medical images having identified visual findings are triaged for priority viewing by a healthcare professional (e.g., radiologist, emergency room physician), for example, by ranking according to a priority score, for example, probability of the respective image having the visual finding.
- images likely having pneumothorax visual findings are prioritized, optionally according to computed probability of having the pneumothorax visual finding.
- the triage system enables rapid diagnosis of pneumothorax, which leads to rapid treatment of the pneumothorax, saving the patient from complication of delayed treatment of pneumothorax and/or missing the pneumothorax entirely.
- the triage system is enabled, at least due to the trained single-label neural network described herein that computes the likelihood of a single visual finding type being depicted in the target anatomical image.
- At least some of the systems, methods, apparatus, and/or code instructions described herein improve neural network technology, by improving the process of selecting an ensemble from multiple instances of a trained neural network, where each instance varies by neural network parameter(s) (e.g., input image size, normalization, mean, and architecture variations, as described herein).
- the ensemble is selected from instances of the single-label neural network that were trained using the two-step process described herein, i.e., fine-tuned and/or re-trained using the trained multi-label neural network as baseline.
- Each instance is a variation of the single-label neural network in terms of one or more neural network parameters (as described herein).
- each instance is trained to perform the same task of determining likelihood of the single visual finding type being depicted in the target anatomical image, and since each trained instance of the single-label neural network varies in terms of neural network parameters, the performance of each instance varies.
- the ensemble is selected to identify the combination of instances that provide the overall best performance in terms of determining likelihood of the single visual finding type being depicted in the target anatomical image.
- a standard AUC metric measures the entire area under the ROC for computing a metric indicative of performance of a certain trained neural network.
- the standard process using AUC provides a general overall performance metric, which does not necessarily reflect desired target sensitivity and/or target specificity.
- a certain trained neural network may have excellent overall performance, but does not perform sufficiently well (and/or has lower performance) at the target sensitivity and/or target specificity.
- another trained neural network may have lower overall performance, but has excellent performance at the target sensitivity and/or target specificity. Measuring the entire area using standard AUC metrics is less informative.
- the selection process enabled by the mini-AUC code described here is based on a more focused area of the ROC defined by the target sensitivity and/or target specificity.
- the area under the graph for the defined region is measured.
- the area under the graph for the defined region is used to select the members of the ensemble rather than the entire area as done using standard techniques.
- the mini-AUC process is used to select the members of the ensemble based on a target sensitivity and/or target specificity within a tolerance requirement.
- the working point, and/or threshold (for determining whether the respective image is positive or negative for depicting the desired single visual finding type) are selected according to having at least a minimal value of the target sensitivity and/or according to a highest value of the target specificity.
- the minimum target sensitivity may be set as 90% with a tolerance of 2%.
- the corresponding maximum specificity may be identified.
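The mini-AUC computation can be sketched as follows: the ROC is computed from labels and confidence scores, and the area is measured only over the region where sensitivity (TPR) meets the target within the tolerance. The function names and the numpy-based implementation are illustrative assumptions.

```python
import numpy as np

def roc_points(labels, scores):
    """Sweep a decision threshold over the scores to obtain (FPR, TPR) pairs."""
    labels = np.asarray(labels)
    order = np.argsort(-np.asarray(scores))  # descending score order
    labels = labels[order]
    tpr = np.cumsum(labels) / max(labels.sum(), 1)
    fpr = np.cumsum(1 - labels) / max((1 - labels).sum(), 1)
    return np.concatenate(([0.0], fpr)), np.concatenate(([0.0], tpr))

def mini_auc(labels, scores, target_sensitivity, tolerance):
    """Area under the ROC restricted to the region where sensitivity (TPR)
    is at least target_sensitivity - tolerance. Candidate instances of the
    single-label neural network can then be compared at the clinically
    relevant working point rather than over the entire curve."""
    fpr, tpr = roc_points(labels, scores)
    mask = tpr >= (target_sensitivity - tolerance)
    x, y = fpr[mask], tpr[mask]
    if x.size < 2:
        return 0.0
    return float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2.0))  # trapezoid rule
```

For example, with a minimum target sensitivity of 90% and a tolerance of 2%, only the portion of each instance's ROC with TPR of at least 0.88 contributes to its mini-AUC score when selecting ensemble members.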
- the present invention may be a system, a method, and/or a computer program product.
- the computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
- the computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device.
- the computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- a non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, and any suitable combination of the foregoing.
- a computer readable storage medium is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
- Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
- the network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.
- a network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
- Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
- the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
- These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
- the computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
- each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
- the functions noted in the block may occur out of the order noted in the figures.
- two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
- FIG. 1 is a flowchart of a method for detection of an indication of a single visual finding type in a target anatomical image of a target individual by a single-label neural network trained from a baseline of a multi-label neural network that includes multiple visual finding types including the selected single visual finding type, in accordance with some embodiments of the present invention.
- FIG. 2 is a block diagram of a system 200 for training a single-label neural network 222A for detection of a single visual finding type from a baseline of a multi-label neural network 222C that includes multiple visual finding types including the selected single visual finding type, and/or for analyzing anatomical images using the single-label neural network, optionally to create a priority list 222B, in accordance with some embodiments of the present invention.
- FIG. 3 is a dataflow diagram depicting exemplary dataflow for detection of an indication of a single visual finding type in a target anatomical image of a target individual by a single-label neural network trained from a baseline of a multi-label neural network that includes multiple visual finding types including the selected single visual finding type, and optionally creating a priority worklist, in accordance with some embodiments of the present invention.
- FIG. 4 is a flowchart of a process for training the single-label neural network from the multi-label neural network, in accordance with some embodiments of the present invention.
- System 200 may implement the acts of the method described with reference to FIG. 1 and/or FIG. 3 and/or FIG. 4, optionally by a hardware processor(s) 202 of a computing device 204 executing code instructions stored in a memory 206.
- an exemplary implementation of an x-ray triage system is now described to help understand system 200.
- many chest x-rays of different patients are captured by imaging device 212 and stored in a PACS server 214.
- Computing device 204 computes a likelihood of each chest x-ray depicting a single visual finding type denoting pneumothorax by a trained single-label neural network 222A.
- Single-label neural network 222A is computed from multi-label neural network 222C using a respective multi-label training dataset and a single-label training dataset, as described herein.
- the performance of single-label neural network in terms of target sensitivity and/or target specificity may be obtained by mini-AUC code, as described herein.
- each chest x-ray may be processed by visual filter neural network code 206C for exclusion of irrelevant images (e.g., non-chest x-rays, and/or non-x-ray images, and/or non-AP/PA images).
- the chest x-ray images (before or after filtering) may be further processed for removal of outlier pixel intensity values and/or adjusting pixel intensity values by executing pixel adjustment code 206E. Additional details of removing outlier pixel intensity values and/or adjusting pixel intensity values are described with reference to the co-filed Application having Attorney Docket No. 76406.
- the system provides a triage of the anatomical images, by generating a priority worklist 222B.
- the worklist 222B is generated by ranking the chest x-rays according to a priority score computed based on the likelihood. The higher the probability that a certain chest x-ray has a visual finding indicating pneumothorax, the higher the ranking on the worklist.
- a healthcare practitioner (e.g., radiologist, ER physician) reviews the chest x-rays according to worklist 222B.
- the healthcare practitioner is directed to the most urgent chest x-rays most likely to have a visual finding indicative of pneumothorax, reducing the time to diagnose and treat the patient for the pneumothorax in comparison to standard systems that do not provide the triage feature. Patients determined to have pneumothorax may be treated by a physician to remove the excess air.
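The triage ranking above amounts to sorting studies by the likelihood output of single-label neural network 222A. A minimal sketch, in which the `Study` type and its fields are illustrative rather than taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class Study:
    study_id: str      # illustrative identifier, e.g., a DICOM study UID
    likelihood: float  # likelihood of pneumothorax from the single-label network

def build_worklist(studies):
    """Rank studies in decreasing order of likelihood: the more likely a
    chest x-ray depicts pneumothorax, the higher it appears on the worklist."""
    return sorted(studies, key=lambda s: s.likelihood, reverse=True)
```

A healthcare practitioner working top-down through the returned list therefore reviews the most urgent (highest likelihood) studies first.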
- Computing device 204 may be implemented as, for example, a client terminal, a server, a virtual server, a radiology workstation, a virtual machine, a computing cloud, a mobile device, a desktop computer, a thin client, a Smartphone, a Tablet computer, a laptop computer, a wearable computer, glasses computer, and a watch computer.
- Computing device 204 may include an advanced visualization workstation that is sometimes an add-on to a radiology workstation and/or other devices for presenting indications of the visual finding type to the radiologist.
- Computing device 204 may include locally stored software that performs one or more of the acts described with reference to FIG. 1 and/or FIG. 3 and/or FIG. 4, and/or may act as one or more servers (e.g., network server, web server, a computing cloud, virtual server) that provide services (e.g., one or more of the acts described with reference to FIG. 1 and/or FIG. 3 and/or FIG. 4) to one or more client terminals 208 over a network 210.
- Client terminal(s) 208 include, for example, a client terminal used by a user for viewing anatomical images, remotely located radiology workstations, a remote picture archiving and communication system (PACS) server, and a remote electronic medical record (EMR) server.
- Client terminal(s) 208 may be implemented as, for example, a radiology workstation, a desktop computer (e.g., running a PACS viewer application), a mobile device (e.g., laptop, smartphone, glasses, wearable device), and nurse station server.
- the training of the single-label neural network and multi-label neural network, and the application of the trained single-label neural network to anatomical images to compute the likelihood of visual finding types, may be implemented by the same computing device 204 and/or by different computing devices 204. For example, one computing device 204 trains the multi-label neural network and the single-label neural network, and transmits the trained single-label neural network to a server device 204.
- Computing device 204 receives 2D images, and/or 2D slices (optionally extracted from 3D imaging data) captured by an anatomical imaging device(s) 212, for example, an x-ray machine, a magnetic resonance imaging (MRI) device, a computed tomography (CT) machine, and/or an ultrasound machine.
- Anatomical images captured by imaging machine 212 may be stored in an image repository 214, for example, a storage server (e.g., PACS server), a computing cloud, virtual memory, and a hard disk.
- the anatomical images stored by image repository 214 may include images of patients optionally associated with text based radiology reports. Training images 216 are created based on the captured anatomical images and text based radiology reports, as described herein.
- Training images 216 may include (and/or be used to create) the multi-label training dataset for training the multi-label neural network, and/or single-label training dataset for training the single-label neural network, as described herein.
- training images 216 may be stored by a server 218, accessible by computing device 204 over network 210, for example, a publicly available training dataset, and/or a customized training dataset created for training the multi-label neural network and/or the single-label neural network, as described herein.
- Anatomical images captured by imaging machine(s) 212 depict internal anatomical features and/or anatomical structures within the body of the target patient.
- Exemplary anatomical images include 2D x-ray images captured by an x-ray machine.
- Exemplary x-ray anatomical images include: AP and PA views of the chest, abdominal x-rays, and x-rays of limbs. Selected views of the x-ray images may be defined as the best view for detecting the visual finding type.
- Computing device 204 may receive the anatomical images for computation of the likelihood of depicting the visual finding type, and/or receive training images 216 (e.g., single and/or multi label training dataset, or create the single and/or multi label training datasets from the training images), from imaging device 212 and/or image repository 214 using one or more imaging interfaces 220, for example, a wire connection (e.g., physical port), a wireless connection (e.g., antenna), a local bus, a port for connection of a data storage device, a network interface card, other physical interface implementations, and/or virtual interfaces (e.g., software interface, virtual private network (VPN) connection, application programming interface (API), software development kit (SDK)).
- Hardware processor(s) 202 may be implemented, for example, as a central processing unit(s) (CPU), a graphics processing unit(s) (GPU), field programmable gate array(s) (FPGA), digital signal processor(s) (DSP), and application specific integrated circuit(s) (ASIC).
- processors 202 may include one or more processors (homogenous or heterogeneous), which may be arranged for parallel processing, as clusters and/or as one or more multi core processing units.
- Memory 206 stores code instructions for execution by hardware processor(s) 202, for example, a random access memory (RAM), read-only memory (ROM), and/or a storage device, for example, non-volatile memory, magnetic media, semiconductor memory devices, hard drive, removable storage, and optical media (e.g., DVD, CD-ROM).
- memory 206 may store image processing code 206A that implements one or more acts and/or features of the method described with reference to FIG. 1 and/or FIG. 3, and/or training code 206B that executes one or more acts of the method described with reference to FIG. 4.
- client terminal(s) may locally store and/or execute image processing code 206A, visual filter neural network 206C, and/or code instructions of trained single-label neural network 222A and/or code of multi-label neural network 222C and/or priority list 222B and/or mini-AUC code 206D and/or pixel adjustment code 206E.
- Computing device 204 may include a data storage device 222 for storing data, for example, code instructions of trained single-label neural network 222A and/or code of multi-label neural network 222C (as described herein), priority list 222B (generated as described herein), visual filter neural network 206C, mini-AUC code 206D, and/or training images 216, and/or text based radiology reports (for creating the multi-label training dataset and/or single-label training dataset, as described herein).
- Data storage device 222 may be implemented as, for example, a memory, a local hard-drive, a removable storage device, an optical disk, a storage device, and/or as a remote server and/or computing cloud (e.g., accessed over network 210).
- code instructions of trained single-label neural network 222A, code of multi-label neural network 222C, visual filter neural network 206C, training images 216, priority list 222B, mini-AUC code 206D and/or pixel adjustment code 206E and/or text based radiology reports may be stored in data storage device 222, with executing portions loaded into memory 206 for execution by processor(s) 202.
- priority list 222B is provided to image server 214, for example, for instructing the priority presentation of images stored by image server 214.
- computing device 204 provides instructions for image server 214 to generate priority list 222B.
- Computing device 204 may include data interface 224, optionally a network interface, for connecting to network 210, for example, one or more of, a network interface card, a wireless interface to connect to a wireless network, a physical interface for connecting to a cable for network connectivity, a virtual interface implemented in software, network communication software providing higher layers of network connectivity, and/or other implementations.
- Computing device 204 may access one or more remote servers 218 using network 210, for example, to download updated training images 216 and/or to download an updated version of image processing code, training code, visual filter neural network code, the trained single-label neural network, and/or trained multi-label neural network.
- imaging interface 220 and data interface 224 may be implemented as a single interface (e.g., network interface, single software interface), and/or as two independent interfaces such as software interfaces (e.g., as application programming interfaces (API), network ports) and/or hardware interfaces (e.g., two network interfaces), and/or combination (e.g., single network interface, and two software interfaces, two virtual interfaces on a common physical interface, virtual networks on a common network port).
- the term/component imaging interface 220 may sometimes be interchanged with the term data interface 224.
- Computing device 204 may communicate using network 210 (or another communication channel, such as through a direct link (e.g., cable, wireless) and/or indirect link (e.g., via an intermediary computing device such as a server, and/or via a storage device) with one or more of:
- Client terminal(s) 208, for example, when computing device 204 acts as a server that computes the likelihood of the visual finding in anatomical images and provides the image storage server with the computed likelihood for determining a priority score of the respective anatomical image for creating the priority list, where the highest ranked anatomical images are viewed on a display of the client terminal 208.
- server 218 is implemented as image server 214, for example, a PACS server.
- Server 218 may store new anatomical images as they are captured, and/or may store the training dataset.
- Server 214 may store and/or generate priority list 222B.
- server 218 is in communication with image server 214 and computing device 204.
- Server 218 may coordinate between image server 214 and computing device 204, for example, transmitting newly received anatomical images from server 218 to computing device 204 for computation of likelihood of having a visual finding (by single-label neural network 222A as described herein), and transmitting an indication of the computed likelihood from computing device 204 to server 218.
- Server 218 may compute priority scores and/or rank the anatomical images according to the computed likelihood for computing the priority list.
- Server 218 may send a list of priority ranked anatomical images and/or the priority list to image server 214, optionally for presentation to a healthcare provider on the display of the client terminal.
- Client terminal 208 may access the anatomical images of the priority list via server 218, which obtains the images from image server 214.
- one or more of the described functions of server 218 are performed by computing device 204 and/or image server 214.
- Anatomical image repository 214 that stores anatomical images and/or imaging device 212 that outputs the anatomical images.
- Computing device 204 includes or is in communication with a user interface 226 that includes a mechanism designed for a user to enter data (e.g., patient data) and/or view the indications of identified visual findings.
- exemplary user interfaces 226 include, for example, one or more of, a touchscreen, a display, a keyboard, a mouse, and voice activated software using speakers and microphone.
- FIG. 3 is a schematic 300 depicting exemplary dataflow for detection of an indication of a single visual finding type in a target anatomical image of a target individual by a single-label neural network trained from a baseline of a multi-label neural network, in accordance with some embodiments of the present invention.
- a data storage server 314 for example, a PACS server, provides anatomical images 350 to an Imaging Analytics process 352.
- Data storage server 314 may correspond to image repository and/or image server 214 described with reference to FIG. 2.
- Anatomical images 350 may be DICOM® studies, for example, chest x-ray images, abdominal x-rays, limb x-rays, and CT scans (i.e., of chest, abdomen and/or limbs).
- Imaging Analytics process 352 may be implemented as, for example, a server, a process executing on data storage server 314, corresponding to server 218 of FIG. 2, and/or corresponding to computing device 204.
- Imaging analytics process 352 provides anatomical images 350 to HealthPNX process 304.
- HealthPNX process 304 may correspond to computing device 204 of FIG. 2, and/or be implemented as a process executing on PACS server 314 and/or on the image analytics server.
- HealthPNX process 304 computes the likelihood of each anatomical image (e.g., x-ray) depicting a visual finding type, for example, indicative of pneumothorax. The likelihood of the visual finding type is computed by the trained single-label neural network, as described herein.
- each x-ray is first processed by the visual filter neural network (as described herein) for exclusion of irrelevant images, prior to computation by the single-label neural network.
- HealthPNX process 304 sends an indication of the computed likelihood of visual finding 354 to imaging analytics process 352, for example, formatted in JSON.
- Imaging analytics process 352 may reformat the indication of the computed likelihood of visual finding into another protocol 356 (e.g., HL7) for providing to PACS 314.
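The patent does not specify the JSON payload schema exchanged between HealthPNX process 304 and imaging analytics process 352; the field names below are purely illustrative assumptions:

```python
import json

# hypothetical result message -- field names are illustrative, not part of
# the patent; the downstream HL7 reformatting step is likewise schema-specific
result = {
    "finding": "pneumothorax",
    "likelihood": 0.87,           # output of the single-label neural network
    "above_working_point": True,  # likelihood exceeds the selected threshold
}
message = json.dumps(result)
```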
- the anatomical images are arranged into a worklist 358 (corresponding to priority list 222B described with reference to FIG. 2) according to the computed likelihood of visual finding type, for example, ranked in decreasing order according to a ranking score computed based on the computed likelihood of visual finding.
- the anatomical images are accessed for manual review according to worklist 358 by a healthcare provider (e.g., hospital worker, radiologist, clinician) via a client terminal 308.
- the anatomical images are triaged for review by the healthcare provider according to the most urgent cases, most likely to include the visual finding, for example, pneumothorax, enabling rapid diagnosis and treatment of the acute cases.
- Patients diagnosed with pneumothorax may be rapidly treated, preventing or reducing complications and/or morbidity resulting from a delay in diagnosis and a delay in treatment.
- the single-label neural network(s) is trained and/or provided.
- the trained single-label neural network is fed a target anatomical image, and outputs an indication of likelihood (e.g., absolute indication thereof, and/or probability value) of the single visual finding type being depicted in the target anatomical image.
- the single-label neural network is computed by fine-tuning and/or retraining a trained multi-label neural network according to a single-label training dataset of anatomical images labeled with an indication of the visual finding type.
- the multi-label neural network is trained to compute likelihood (e.g., absolute indication thereof, and/or probability value) of each of multiple visual finding types based on a multi-label training dataset of anatomical images labeled with the multiple visual finding types.
- the accuracy of the trained single-label neural network for detection of the single visual finding type in the target anatomical image may be higher than the accuracy of the multi-label neural network for detection of the single visual finding type in a target anatomical image, and/or may be higher than another single-label neural network trained only on the single-label training dataset using a standard un-trained neural network as the initial baseline, and/or may be higher than another single-label neural network trained on a multi-object neural network trained to detect non-medical objects in non-medical images (e.g., ImageNet).
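The fine-tuning baseline idea can be reduced to a toy sketch: keep the pretrained shared weights of the multi-label network as initialization, discard its multi-output head, and attach a fresh single-output head that is then retrained on the single-label dataset. All shapes, names, and the two-layer architecture below are assumptions of this sketch; the actual networks are deep convolutional models:

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-in for the pretrained multi-label network: shared feature weights
# plus a head with one output per visual finding type (14 here, arbitrarily)
backbone = rng.standard_normal((16, 8))    # reused as initialization
multi_head = rng.standard_normal((8, 14))  # discarded when fine-tuning

# fine-tuning for one selected finding type: keep the backbone, attach a
# fresh single-output head, then retrain on the single-label dataset
single_head = rng.standard_normal((8, 1))

def predict_single(x):
    h = np.maximum(x @ backbone, 0.0)  # shared feature extraction (ReLU)
    z = h @ single_head
    return 1.0 / (1.0 + np.exp(-z))    # likelihood of the single finding
```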
- Each single-label neural network may be trained and/or provided. Each single-label neural network may be trained to detect a respective unique visual finding type when fed the same anatomical image outputted by a target anatomical imaging modality depicting a target body region at a target sensor viewing angle. For example, one single-label neural network detects pneumothorax in AP and/or PA and/or lateral chest x-rays, and another single-label neural network detects pneumomediastinum in the AP and/or PA chest x-rays.
- each single-label neural network may be trained to detect a unique visual finding type according to a certain anatomical image outputted by a certain anatomical imaging modality depicting a certain body region at a certain sensor viewing angle. For example, one single-label neural network detects pneumothorax in AP and/or PA and/or lateral chest x-rays, and another single-label neural network detects pneumoperitoneum in supine abdominal x-rays, and/or AP/PA chest x-rays.
- one or more visual filter neural networks may exclude inappropriate anatomical images from being fed into the respective single-label neural network and/or may select appropriate anatomical image for feeding into the respective single-label neural network.
- Exemplary types of anatomical images fed into the single-label neural network and resulting output of likelihood of the single visual finding type being depicted in the respective image include: two dimensional (2D) AP and/or PA and/or lateral chest x-ray for detecting pneumothorax including a small pneumothorax, 2D AP and/or PA chest x-ray for detecting pneumomediastinum, 2D abdominal x-ray at a side-view for detecting pneumoperitoneum, one or more x-rays of bones and/or joints captured at one or more sensor orientations to detect fracture, and ultrasound (US) images of the appendix for detecting acute appendicitis.
- one or more anatomical images are received, for example, from a PACS server, an EMR server, from the anatomical imaging device, and/or from a storage device (e.g., portable storage medium, storage server).
- the images may be obtained one at a time, for example, as the anatomical images are captured and stored, and/or may be obtained as a batch, for example, all images captured in the last 15 minutes.
- the images may be captured from different anatomical imaging modality machines, and/or captured at different sensor orientations.
- Exemplary anatomical imaging devices include an x-ray machine that captures a two dimensional anatomical image.
- the anatomical images may be, for example, 2D images (e.g., x-ray, ultrasound) and/or 2D slices of 3D images (e.g., of CT and/or MRI scans).
- Anatomical images may be stored as single images, a series of multiple independent images, and/or set of slices (e.g., 2D slices of a 3D volume image).
- each one of the anatomical images is fed into the neural network(s), as described herein.
- the anatomical images are fed into one or more visual filter neural network(s)
- each visual filter neural network excludes anatomical images that are inappropriate for the corresponding target single-label neural network, in terms of images that are captured by another imaging modality and/or images depicting another body part and/or images captured by another sensor orientation angle.
- non-chest x-rays are excluded, and other sensor orientations (i.e., non-AP and non-PA) are excluded.
- each single-label neural network has its own corresponding visual filter neural network, trained according to the image requirements of the corresponding single-label neural network.
- a single visual filter neural network is trained for multiple different single label neural networks, for example, outputting a label for the fed image indicative of anatomical imaging modality and/or body type and/or sensor orientation.
- the labels may be used to feed the images to the relevant single-label neural networks.
- the visual filter neural network may detect rotated anatomical images, optionally an amount of rotation relative to baseline (e.g., 90 degrees, 180 degrees, 270 degrees).
- the rotated anatomical images may be rotated back to baseline.
- the target anatomical image(s) are fed into the visual filter neural network for outputting a classification category indicative of a target body region depicted at a target sensor orientation and a rotation relative to a baseline defined by the single-label neural network (e.g., AP/PA chest x-ray), or another classification category indicative of at least one of a non-target body region and a non-target sensor orientation (e.g., non-chest, non-AP/PA, non-x-ray).
- a sub-set of the target anatomical images classified into the other classification category are rejected, to obtain a remaining sub-set of the target anatomical images.
- the remaining sub-set of the target anatomical images classified as rotated relative to the baseline are rotated to the baseline, to create a set of images which are all at baseline.
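The filter-and-rotate flow described above can be sketched as follows. Here `visual_filter` and `rotate_back` are hypothetical callables standing in for the trained visual filter neural network and an image-rotation routine, and the category name is illustrative:

```python
def apply_visual_filter(images, visual_filter, rotate_back):
    """Filter and normalize candidate images before the single-label network.

    `visual_filter` is a hypothetical callable returning a classification
    category (e.g., "AP_PA_CHEST" or "OTHER") and a rotation in degrees
    relative to baseline (0, 90, 180, or 270).
    """
    accepted = []
    for image in images:
        category, rotation = visual_filter(image)
        if category != "AP_PA_CHEST":
            continue  # reject non-target body region / orientation / modality
        if rotation != 0:
            image = rotate_back(image, rotation)  # restore baseline orientation
        accepted.append(image)
    return accepted
```

The remaining sub-set returned by the sketch is the set of baseline images that would be fed into the single-label neural network.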
- images are processed to remove outlier pixel intensity values and/or adjust the pixel intensity values.
- the outlier pixel intensity values may denote injected content, for example, metadata text overlaid on the real image, for example, the name of the patient, and an indication of the side of the patient (e.g., L for left side) and/or additional data of the study (e.g., AP chest x-ray).
- the injected content is removed and/or adjusted by adjusting the intensity of the pixels thereof.
- pixels for the target anatomical image having outlier pixel intensity values denoting an injection of content are identified.
- the outlier pixel intensity values of the identified pixels are adjusted to values computed as a function of non-outlier pixel intensity values.
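One possible sketch of the outlier-adjustment step, assuming outliers are intensities above a high percentile and that they are replaced with the median of the non-outlier intensities (both are illustrative assumptions; the exact function is not specified here):

```python
import numpy as np

def suppress_injected_content(image, outlier_percentile=99.5):
    """Replace outlier pixel intensities (e.g., burned-in text overlays)
    with a function of the non-outlier intensities (here, their median).

    The percentile threshold is an illustrative assumption.
    """
    threshold = np.percentile(image, outlier_percentile)
    outliers = image > threshold
    adjusted = image.copy()
    adjusted[outliers] = np.median(image[~outliers])
    return adjusted
```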
- the processing of the image may be performed prior to being fed into the visual filter neural network and/or for the images that passed through the visual filter neural network (i.e., non-rejected). Additional details of performing the removal of the outlier pixel intensity values and/or adjustment of pixel intensity values are described with reference to co-filed Application having Attorney Docket No. 76406.
- the target anatomical image(s) are fed into the single-label neural network.
- the anatomical image(s) fed into the single-label neural network include the anatomical images selected by the visual filter and/or exclude anatomical images rejected by the visual filter, and/or include anatomical images rotated to baseline based on output of the visual filter.
- the anatomical image(s) are processed to remove injected content and/or adjust pixel intensity value of the pixels denoting injected content and/or other pixels (i.e., not denoting injected content).
- the target anatomical image(s) is fed into each one of the neural networks of the ensemble.
- multiple instances of the target anatomical image(s) are created, by preprocessing the target anatomical image(s) according to input requirements of the respective neural network of the ensemble.
- Input requirements of the neural networks of the ensemble may differ according to neural network parameters described herein, for example, image size (i.e., scaling the image to the required image size), center-crop (e.g., the center area of the image, given by the input size of the corresponding neural network of the ensemble, is extracted), mean normalization (e.g., the mean of the pre-cropped image is subtracted), and standard deviation normalization (e.g., division by the standard deviation of the pre-cropped image).
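The per-instance preprocessing (scale, center-crop, mean/STD normalization) might look like the following sketch; the nearest-neighbour `resize` helper is a stand-in for a real image resampler:

```python
import numpy as np

def preprocess_for_instance(image, image_size, input_size,
                            mean_norm=True, std_norm=True):
    """Scale to image_size, center-crop to input_size, then optionally
    subtract the mean and divide by the standard deviation of the
    pre-cropped (scaled) image."""
    def resize(img, size):
        # naive nearest-neighbour resampling, to keep the sketch dependency-free
        rows = (np.arange(size) * img.shape[0] / size).astype(int)
        cols = (np.arange(size) * img.shape[1] / size).astype(int)
        return img[np.ix_(rows, cols)]

    scaled = resize(np.asarray(image, dtype=float), image_size)
    offset = (image_size - input_size) // 2
    crop = scaled[offset:offset + input_size, offset:offset + input_size]
    if mean_norm:
        crop = crop - scaled.mean()   # mean of the pre-cropped image
    if std_norm:
        crop = crop / scaled.std()    # standard deviation of the pre-cropped image
    return crop
```

Each ensemble member would call this with its own `image_size`, `input_size`, and normalization flags.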
- a likelihood of an indication of the single visual finding type being depicted in the target anatomical image is outputted and/or computed by the single-label neural network.
- the single visual finding type denotes an acute medical condition for early and rapid treatment thereof, for example, pneumothorax, pneumomediastinum, pneumoperitoneum, and fracture.
- Such visual findings tend to be fine features and therefore difficult to identify on x-ray, especially in early stages and/or when the radiologist is looking for other multiple visual findings.
- the visual finding types are indicative of an acute medical condition that is progressive, requiring early and rapid treatment to reduce risk of complications. A delay in treatment may result in patient morbidity and/or mortality that may otherwise have been prevented or reduced by early treatment.
- the likelihood is represented as an absolute value, optionally a single indication or a binary indication, for example, present, or not present.
- the computed likelihood denotes a confidence score indicative of probability of the visual finding type being depicted in the anatomical image.
- instructions for creating a triage list may be generated, and/or the triage list may be generated.
- the triage list includes anatomical images determined as likely to depict the single visual finding type.
- the triage list is ranked by decreasing likelihood of the indication of the visual finding type based on the confidence score. For example, images having a higher computed probability score of depicting the visual finding type are ranked higher in the list than other images having lower probability scores.
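A minimal sketch of building the ranked triage list from per-image confidence scores; the threshold value and helper name are illustrative assumptions:

```python
def build_triage_list(scored_images, threshold=0.5):
    """Rank images likely depicting the visual finding type by decreasing
    confidence score.

    `scored_images` maps image identifiers to the likelihood computed by
    the single-label neural network; images below the threshold are not
    placed on the triage list.
    """
    positives = [(img, s) for img, s in scored_images.items() if s >= threshold]
    return sorted(positives, key=lambda pair: pair[1], reverse=True)
```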
- The terms priority list and triage list are used interchangeably herein.
- the triage list is for manual review by a human user (e.g., radiologist, emergency room physician, surgeon) of respective target anatomical images computed as likely depicting the indication of the visual finding type.
- a human user e.g., radiologist, emergency room physician, surgeon
- the priority list may be created by the computing device, and provided to image server, and/or the client terminal.
- the computing device provides instructions for creating the priority list to the image server, and the image server creates the priority list.
- the list may be viewed by the user (e.g., within the PACS viewer) optionally for manual selection of images for viewing, and/or may define automatic sequential loading of images for viewing by the user (e.g., within the PACS viewer).
- the user may manually view images in the priority list, optionally according to the ranking.
- the acute medical condition may be diagnosed in the patient.
- the visual finding type is a sign of the acute medical condition.
- the patient may be treated for the acute medical condition.
- the patient may be diagnosed and/or treated, for example, by insertion of a needle or chest tube to remove the excess air.
- the multi-label training dataset includes multiple anatomical images each associated with a label indicative of one or more defined visual finding types, or indicative of no visual finding types (i.e., explicit indication, or implicit indication by lack of a label of visual finding type(s)).
- the images stored in the multi-label training dataset have been processed by one or more visual filter neural networks, for exclusion of irrelevant images (e.g., of a different body region and/or different sensor orientation and/or different imaging modality) and/or for rotating rotated images back to baseline.
- irrelevant images e.g., of a different body region and/or different sensor orientation and/or different imaging modality
- images may be preprocessed to increase the number of training images and/or variety of the training images, for example, with the following additional augmentations: random horizontal flip, random crop, random rotation, and random zoom.
- Exemplary visual finding types defined for AP and/or PA and/or lateral chest x-rays include: abnormal aorta, aortic calcification, artificial valve, atelectasis, bronchial wall thickening, cardiac pacer, cardiomegaly, central line, consolidation, costophrenic angle blunting, degenerative changes, elevated diaphragm, fracture, granuloma, hernia diaphragm, hilar prominence, hyperinflation, interstitial markings, kyphosis, mass, mediastinal widening, much bowel gas, nodule, orthopedic surgery, osteopenia, pleural effusion, pleural thickening, pneumothorax, pulmonary edema, rib fracture, scoliosis, soft tissue calcification, sternotomy wires, surgical clip noted, thickening of fissure, trachea deviation, transplant, tube, and vertebral height loss.
- the multi-label training dataset stores images of a same body region, captured by a same imaging modality type, having a same sensor orientation.
- for example, AP and/or PA chest x-rays. As used herein, AP and PA may be considered as having the same sensor orientation.
- Multiple multi-label training datasets may be created, for example, for different body parts and/or different imaging modality types and/or different sensor orientations.
- labels of the anatomical images of the multi-label training dataset are created based on an analysis that maps individual sentences of a respective text based radiology report to a corresponding visual finding type of multiple defined visual finding types.
- the text based radiology report includes a description of the radiological reading of the images, for example, typed by radiologist, or transcribed from a verbal dictation provided by the radiologist.
- the sets of anatomical images and associated radiology report may be obtained, for example, from a PACS server, and/or EMR records of the sample individuals.
- a respective tag is created for each set of anatomical images of each sample individual.
- Each tag includes one or more visual findings depicted in one or both of the images.
- the tags may be implemented as, for example, metadata tags, electronic labels, and/or pointers to entries in a dataset (e.g., an array where each element denotes a distinct visual finding).
- the tags may be created according to an analysis that maps individual sentences of each respective radiology report to corresponding indications of distinct visual findings types depicted in the anatomical images associated with the respective radiology report. An individual sentence is mapped to one of the distinct visual finding types.
- a set of distinct visual findings types may be created according to an analysis of the sentences (optionally all sentences) of the radiology reports of the images.
- the indications of distinct visual finding types are based on visual finding types that are identified by radiologists. However, since different radiologists may use different sentences and/or different terms to refer to the same visual finding, multiple different sentences may map to the same distinct visual finding.
- the individual sentences from the radiology reports of the sample individuals are clustered into a relatively small number of distinct visual finding types, for example, about 10 visual finding types, or about 20, or about 25, or about 30, or about 40, or about 50, or about 100.
- the number of distinct visual finding types is small in comparison to the number of distinct sentences, which may number, for example, about 500,000, or about 1 million sentences.
- Each cluster denotes one of the distinct visual findings. All sentences within the respective cluster are indicative of the same respective distinct visual finding.
- the clustering may be performed, for example, manually by users, and/or based on supervised and/or unsupervised machine learning methods that are designed to create clusters.
- Clustering may be performed according to one or more of the following:
- the algorithm(s) in (2) may be rule-based (e.g., for each finding, a human writes a formula, and if the sentence satisfies this formula, the sentence is mapped to a positive indication of the finding).
- the algorithm(s) in (2) may learn the formula automatically (i.e., a machine learning algorithm) given a sample of manually annotated sentences (such as the ones in (1)).
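A rule-based sentence-to-finding mapping of the kind described above could be sketched as follows; the regular-expression formulas in `RULES` are illustrative stand-ins for the hand-written formulas, not the actual rules used:

```python
import re

def map_sentence(sentence, rules):
    """Map an individual report sentence to a visual finding type using
    per-finding rule formulas: a pattern that must match and a negation
    pattern that must not match.
    """
    text = sentence.lower()
    for finding, (pattern, negation) in rules.items():
        if re.search(pattern, text) and not re.search(negation, text):
            return finding
    return None  # neutral / unmapped sentence

# Hypothetical example rules for two findings.
RULES = {
    "pneumothorax": (r"\bpneumothorax\b", r"\bno\b|\bwithout\b|\bruled out\b"),
    "pleural effusion": (r"\bpleural effusion\b", r"\bno\b|\bwithout\b"),
}
```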
- One or more training datasets may be created for training one or more multi-label neural networks.
- Each training dataset includes sets of anatomical images and associated tags.
- the training datasets may be sorted according to the target multi-label neural network being trained, for example, according to body portion being imaged, according to image modality, and/or according to sensor orientation.
- the training dataset is created by mapping a sub-set of sentences of the text based radiology reports (optionally one or more sentences from each report) of the sample individuals that are indicative of positive findings (i.e., a visual finding type which may be abnormal) to one of the indications of visual finding types.
- the negative sentences are either ignored, or mapped to negative labels for the mentioned visual finding types in the sentence.
- the neutral sentences are just ignored, as they convey no indicative information.
- the ambiguous sentences may lead to the removal of the associated set of images from the training set.
- another sub-set of sentences denotes negative findings (e.g., normal findings, or lack of abnormal findings).
- neutral data (i.e., does not indicate a positive or negative finding).
- ambiguous data (e.g., unclear whether the data indicates a positive or negative finding).
- the multi-label neural network may be trained on both sub-sets of sentences and associated anatomical images, where one sub-set trains the multi-label neural network to identify the visual finding types, and/or the other sub-set trains the multi-label neural network to avoid false positives by incorrectly designating negative finding as visual finding types.
- a fully covered training dataset is created according to a sub-set of the text based radiology reports of the sample individuals (i.e., some reports are excluded from the training dataset).
- For each respective text based radiology report included in the sub-set, each one of the sentences of the respective text based radiology report is mapped to one of: one of the indications of visual finding types (i.e., denoting a positive finding from the findings supported by the model), a negative finding, and neutral data.
- the multi-label neural network may be trained according to the fully covered training dataset and associated anatomical images.
- an any hit training dataset is created according to a sub-set of the text based radiology reports of the sample individuals (i.e., some reports are excluded from the training dataset). For each respective text based radiology report included in the sub-set, at least one of the sentences of the respective text based radiology report is mapped to one of the indications of visual finding types. Sentences mapping to negative findings and/or neutral data are ignored.
- the multi-label neural network may be trained according to the any hit training dataset and associated anatomical images.
- a prevalence of the anatomical images labeled with the single visual finding type stored in the multi-label training dataset is statistically significantly higher than a prevalence of the anatomical images labeled with the single visual finding type stored in the storage server and/or prevalence of patients with pneumothorax in practice (e.g., prevalence in the emergency room, prevalence in anatomical images, prevalence in the general population).
- the multi-label training dataset may include 5828 images depicting pneumothorax, and 2,043,625 images not depicting pneumothorax, which may be representative of the wild prevalence of pneumothorax in patients undergoing chest x-ray imaging.
- the prevalence of the images depicting pneumothorax in the multi-label training dataset is set to be much higher, for example, about 33% of all images, or about 50%, or about 25-50%, or other values.
- the higher prevalence may increase the accuracy of the trained multi-label neural network in detecting the visual finding type indicative of pneumothorax.
- the higher prevalence of images depicting pneumothorax may be obtained, for example, by arranging the training images in 3 cyclic queues: "positive", "normal", and "abnormal", optionally each set at 1/3 of the total images, or other distributions may be used.
- To add an image to a training batch, a queue is drawn uniformly at random, and then the next image for training the multi-label neural network is obtained from that queue. Hence each batch contains, on average, 33% positive images.
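The cyclic-queue sampling can be sketched as below; `make_batch_sampler` is a hypothetical helper name:

```python
import itertools
import random

def make_batch_sampler(positive, normal, abnormal, seed=0):
    """Build a sampler over 3 cyclic queues ("positive", "normal",
    "abnormal"). Each call draws a queue uniformly at random and returns
    the next image from that queue, so batches contain ~33% positives
    on average regardless of the wild prevalence.
    """
    rng = random.Random(seed)
    queues = [itertools.cycle(positive),
              itertools.cycle(normal),
              itertools.cycle(abnormal)]

    def next_image():
        return next(rng.choice(queues))

    return next_image
```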
- one or more multi-label neural networks are trained for detection of one or more of the visual finding types in a target anatomical image, according to the multi-label training dataset.
- the multi-label neural network is trained to identify about 20-50 different visual finding types.
- the multi-label neural network is trained from scratch from a template neural network.
- the template neural network may be an off-the-shelf publicly available neural network, for example, DenseNet121, ResNet152, Inception-v4, and Inception-v3.
- the multi-label neural network is trained using a categorical cross-entropy loss function.
- the validation loss of the single visual finding type is monitored during the training of the multi-label neural network (which is trained to detect a large number of visual finding types, for example, about 40 types) to determine when the validation loss of the single visual finding type has stabilized.
- the validation loss of the multi-label neural network for the selected single visual finding type is monitored to detect a checkpoint of the neural network that had the lowest validation loss.
- Multiple instances of the multi-label neural network may be trained, each varying in terms of one or more neural network parameters.
- the instance of the multi-label neural network that obtained the lowest validation loss for the selected single visual finding type is designated for use for training the single-label neural network, as described herein.
- the single-label neural network is trained to directly minimize the loss for the selected single visual finding type, as described herein.
- the stabilizing denotes that the ability of the multi-label neural network to detect the presence of the single visual finding type in a target anatomical image has peaked.
- the baseline neural network for training of the single-label neural network to detect the single visual finding type is set according to a checkpoint of network weights of the multi-label neural network when stabilization of the validation loss is determined.
- One of the instances may be selected according to a validation loss of the selected single visual finding type.
- When the single-label neural network is trained, the single-label neural network is initialized with a checkpoint of network weights of the selected instance of the multi-label neural network.
- a single-label training dataset is provided and/or created.
- the single-label training dataset includes anatomical images each associated with a label indicative of the selected single visual finding type, or alternatively a label indicative of an absence of the single visual finding type.
- Multiple single-label training datasets may be provided and/or created, for example, for each distinct visual finding type and/or distinct body region depicted in the anatomical image and/or distinct sensor orientation and/or distinct imaging modality.
- the single visual finding type is selected from the multiple visual findings types which are used to create the multi-label training dataset.
- the single-label training dataset may be created from the images of the multi-label training dataset.
- a prevalence of the anatomical images labeled with the single visual finding type stored in the single-label training dataset is statistically significantly higher than a prevalence of the anatomical images labeled with the single visual finding type stored in the multi-label training dataset and denoting a wild prevalence of the single visual finding type in practice (e.g., in the population of patients, in the ER, in anatomical images).
- the multi-label training dataset may include 5828 images depicting pneumothorax, and 2,043,625 images not depicting pneumothorax, which may be representative of the wild prevalence of pneumothorax in patients undergoing chest x-ray imaging.
- the prevalence of the images depicting pneumothorax in the single-label training dataset is set to be much higher, for example, about 33% of all images, or about 50%, or about 25-50%, or other values.
- the higher prevalence may increase the accuracy of the trained single-label neural network in detecting the visual finding type indicative of pneumothorax.
- the higher prevalence of images depicting pneumothorax may be obtained, for example, by arranging the training images in 3 cyclic queues: "positive", "normal", and "abnormal", optionally each set at 1/3 of the total images, or other distributions may be used. To add an image to a training batch, a queue is drawn uniformly at random, and then the next image for training the single-label neural network is obtained from that queue. Hence each batch contains, on average, 33% positive images.
- the anatomical images of the multi-label training dataset are clustered into three clusters for creating the single-label training dataset having higher prevalence of images depicting the single visual finding type.
- Exemplary clusters include: a single visual finding type cluster including anatomical images depicting at least the selected single visual finding type (e.g., pneumothorax and another possible finding), a general positive finding cluster including anatomical images depicting at least one of the plurality of visual finding types excluding the selected single visual finding type (e.g. no pneumothorax but one or more other findings), and a negative finding cluster including anatomical images depicting none of the plurality of visual finding types (e.g., no findings).
- Images may be picked at random from one of the clusters in succession (e.g., from cluster 1, then 2, then 3, then 1 again, and repeated) for insertion into the single-label training dataset, resulting in a pneumothorax prevalence of about 33% in the single label training dataset.
- a single-label neural network is trained for detection of the single visual finding type in a target anatomical image. It is noted that multiple single-label neural networks may be trained, each one trained independently.
- single-label neural network may refer to the ensemble of instances of the single-label neural network.
- the (e.g., each) single-label neural network is trained by setting the trained multi-label neural network as an initial baseline neural network of the single-label neural network.
- the weights from the other corresponding layers of the multi-label neural network were used to initialize the baseline neural network before training.
- the baseline neural network may be an architecture adaptation of the multi-label neural network. Different architectures may be used, and/or the same architecture with variation in one or more neural network parameters.
- the weights of the baseline neural network may be set according to the weights of the trained multi-label neural network.
- the baseline neural network is fine-tuned and/or re-trained according to the single-label training dataset.
- An exemplary loss function for training the single-label neural network is a binary cross-entropy loss, mathematically represented as: loss = -(z*log(y) + (1-z)*log(1-y)),
- where y denotes the output of the neural network (e.g., between 0 and 1) and z denotes the ground truth (either 0 or 1).
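The binary cross-entropy loss for a single prediction can be computed as in this sketch (the `eps` clipping is a standard numerical guard against log(0), not part of the description above):

```python
import math

def binary_cross_entropy(y, z, eps=1e-7):
    """Binary cross-entropy for a single prediction.

    y is the network output in (0, 1); z is the ground truth (0 or 1).
    """
    y = min(max(y, eps), 1.0 - eps)  # clip to avoid log(0)
    return -(z * math.log(y) + (1 - z) * math.log(1 - y))
```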
- the single-label neural network is trained to compute a likelihood score indicative of a probability of the single visual finding type being depicted in the target anatomical image.
- multiple instances of the single-label neural network are trained using the same baseline and the same single-label training dataset.
- the instances are created by varying one or more neural network parameters, for example, preprocessing image size, preprocessing input size (i.e., scaling the image to the required input image size), neural network architecture modification, center-crop (e.g., the center area of the image, given by the input size of the corresponding neural network of the ensemble, is extracted), additional intermediate dense layer(s) before a final output, preprocessing normalization type (e.g., the mean of the pre-cropped image is subtracted), and standard deviation normalization (e.g., division by the standard deviation of the pre-cropped image).
- Training the instances of the single-label neural network is different than training different single-label neural networks. All the instances are trained to detect the same visual finding type for the same body region of the same anatomical imaging modality for the same sensor orientation.
- the different single-label neural networks may differ at least in the detected visual finding type.
- the performance of each instance is evaluated for detection of the indication of the single visual finding type.
- a combination of the instances may be selected according to a requirement of the evaluated performance.
- the combination of instances may be referred to as an ensemble.
- a set of rules may define how the final indication of likelihood of the visual finding type being present in the target anatomical image is computed from the ensemble, for example, an average of the members of the ensemble, majority vote, and maximum value.
- the number of selected instances is 2, 3, 4, 5, 6, 7, 8, or greater.
- each one of the instances computes a score indicative of likelihood of the anatomical image depicting the single visual finding type.
- the scores are aggregated to compute an aggregated score, for example, by averaging, majority vote, or other methods.
- When the aggregated score is above a predefined threshold, the image is designated as positive for including the visual finding type.
- the selection of the threshold may be performed using mini-AUC, as described herein.
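Aggregating the per-instance scores by the exemplary rules (average, majority vote, maximum) might be sketched as:

```python
def ensemble_decision(scores, threshold, rule="average"):
    """Combine per-instance likelihood scores into a final positive/negative
    call. The rule names mirror the exemplary options (average, majority
    vote, maximum); the threshold would be selected separately, e.g., via
    the mini-AUC procedure.
    """
    if rule == "average":
        aggregated = sum(scores) / len(scores)
    elif rule == "max":
        aggregated = max(scores)
    elif rule == "majority":
        # fraction of instances individually voting positive
        aggregated = sum(s >= threshold for s in scores) / len(scores)
        return aggregated > 0.5
    else:
        raise ValueError(rule)
    return aggregated >= threshold
```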
- the instances of the single-label neural network included in the ensemble are selected according to a target sensitivity and/or a target specificity.
- the target sensitivity and/or a target specificity may be provided, for example, manually entered by a user, and/or automatically computed by code.
- the definition of the target sensitivity and/or a target specificity provides for customization of the performance ability of the single-label neural network, for example, for different hospitals, different radiologists, and/or according to clinical requirements.
- a mini-AUC (area under curve) is computed for the ROC (receiver operating characteristic) curve.
- the mini-AUC may be computed as the area under the ROC curve bounded by the target sensitivity, within a tolerance requirement.
- the tolerance requirement may be provided by the user, automatically defined, and/or provided as a predefined system parameter value.
- the target sensitivity may denote a lower limit.
- the specificity may be selected as the maximal specificity within the mini-AUC.
- One or more instances are selected according to a requirement of the mini-AUC, for inclusion in the ensemble.
- a sub-region of the ROC defined by the target sensitivity, optionally with a tolerance is computed.
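One way to compute a mini-AUC bounded by a target sensitivity, sketched under the assumption that the bounded sub-region of the ROC curve is integrated by the trapezoidal rule over the false-positive rate:

```python
def mini_auc(roc_points, target_sensitivity, tolerance=0.02):
    """Area under the portion of the ROC curve whose sensitivity (TPR)
    lies at or above target_sensitivity - tolerance.

    `roc_points` is a list of (fpr, tpr) pairs sorted by increasing fpr;
    the tolerance handling here is an illustrative assumption.
    """
    region = [(f, t) for f, t in roc_points
              if t >= target_sensitivity - tolerance]
    area = 0.0
    for (f0, t0), (f1, t1) in zip(region, region[1:]):
        area += (f1 - f0) * (t0 + t1) / 2.0  # trapezoidal rule
    return area
```

Each candidate instance (or combination of instances) would be scored by this bounded area instead of the full AUC.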
- the ensemble of instances is selected from a pool of multiple trained neural networks by performing an exhaustive search over all combinations of trained neural networks (e.g., according to a predefined number of neural networks and/or testing combinations with different numbers of neural networks) according to the mini-AUC.
- the performance of the instances may be computed based on the mini-AUC rather than the full AUC as in traditional methods.
- the instances with highest performance based on the mini-AUC are selected for inclusion in the ensemble.
- the process of computing multiple instances of a neural network, each varying by one or more neural network parameters, and selecting one or more of the trained instances according to a mini-AUC computed for the ROC curve computed for each trained instance corresponding to the target sensitivity and/or target specificity may be performed for any neural network, and is not necessarily limited to the single-label and multi-label neural networks described herein.
- the trained (optionally selected instances of the) single-label neural network is provided.
- the provided trained single-label neural network may include an ensemble of instances that differ by combination of neural network parameters. Multiple different trained single-label neural networks may be provided.
- Inventors performed a computational evaluation according to the systems and/or methods and/or apparatus and/or code instructions described herein, based on the features and/or system components discussed with reference to FIGs. 1-4.
- Inventors performed a computational evaluation for training the single-label neural network for detecting an indication of pneumothorax in PA (posterior-anterior) and/or AP (anterior- posterior) two dimensional (2D) chest x-rays.
- the single-label neural network includes an ensemble of four neural network models, denoted A, B, C, and D, which are instances of the same single-label neural network, with different adjustments of neural network parameters, as detailed below.
- the pre-trained model (i.e., the multi-label neural network) was trained from scratch using images from the same training set used to train single-label neural networks A, B, C, and D, and in the same setting, except instead of having a single output (i.e., pneumothorax), it was trained to predict the existence of 57 common x-ray findings (including pneumothorax), using the categorical cross-entropy loss function.
- the weights from the other layers were used to initialize the other ensemble models before training.
- the labels for the 57 common x-ray findings were obtained by a sentence-based analysis of the textual reports of the original 2 million studies, as described herein.
- the report of that study contained a sentence that was manually reviewed by an expert and found to indicate the presence of the certain finding, as described herein.
- Neural networks A, B, C, and D were fine-tuned from the same pre-trained model based on the multi-label neural network described herein. An ensemble of four neural networks was created, denoted herein as A, B, C, and D, where each member of the ensemble differed in one or more neural network parameters.
- Neural networks A, B, and C have the same setting: (i) preprocessing image size: 330x330; (ii) preprocessing input size: 299x299; (iii) preprocessing normalization: mean normalization and STD normalization; (iv) architecture: DenseNet121 as implemented by Keras, where the output layer has a single unit.
- Model D has the following setting: (i) preprocessing image size: 450x450; (ii) preprocessing input size: 400x400; (iii) preprocessing normalization: mean normalization.
- Neural Networks A, B, C, and D were trained using the same training set of 5,828 positive (i.e., containing pneumothorax) and 2,043,625 negative (i.e., no pneumothorax) chest x-ray images, with PA and/or AP views.
- the same training framework was used for all neural networks.
- 1,446,413 images were "normal" (i.e., no findings at all)
- the other 597,212 images were "abnormal" (i.e., at least one abnormal finding other than pneumothorax).
- the training images were arranged in 3 cyclic queues: "positive", "normal", and "abnormal". To add an image to the training batch, a queue was drawn uniformly at random, and then the next image was obtained from that queue. Hence each batch contained, on average, 33% positive images.
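The queue-based batch construction described above can be sketched in Python (a minimal illustration; the file names and the `build_batch` helper are hypothetical, not part of the patent):

```python
import random
from itertools import cycle

def build_batch(queues, batch_size=16, rng=random):
    """Draw each batch image by first picking one of the cyclic queues
    uniformly at random, then taking that queue's next item."""
    return [next(queues[rng.choice(list(queues))]) for _ in range(batch_size)]

# Illustrative queues (hypothetical file names); each queue is cyclic,
# so the small positive pool is revisited indefinitely during training.
queues = {
    "positive": cycle(["pos_0.png", "pos_1.png"]),
    "normal":   cycle(["norm_0.png", "norm_1.png", "norm_2.png"]),
    "abnormal": cycle(["abn_0.png", "abn_1.png"]),
}
batch = build_batch(queues, batch_size=16)
# Each queue is chosen with probability 1/3, so on average one third
# of each batch comes from the "positive" queue.
```

Because the queue (not the image) is drawn uniformly, the effective positive rate in a batch is about 33% regardless of the extreme class imbalance of the underlying training set.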
- the loss function used for training was the binary cross-entropy loss, mathematically represented as: loss = -(z*log(y) + (1 - z)*log(1 - y))
- y denotes the output of the neural network (e.g., between 0 and 1) and z denotes the ground truth (either 0 or 1).
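The binary cross-entropy loss with the y and z defined above can be checked numerically; a minimal stdlib sketch (the function name is illustrative):

```python
from math import log

def binary_cross_entropy(y, z):
    """loss = -(z*log(y) + (1 - z)*log(1 - y)), where y is the network
    output in (0, 1) and z is the ground-truth label (0 or 1)."""
    return -(z * log(y) + (1 - z) * log(1 - y))

# A confident correct prediction yields a small loss ...
low = binary_cross_entropy(0.99, 1)
# ... while a confident wrong prediction yields a large one.
high = binary_cross_entropy(0.99, 0)
```

The loss penalizes confident mistakes steeply, which matters when positives (pneumothorax) are rare relative to negatives.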
- the training was done on a single 1080Ti GPU.
- the built-in Keras 2.1.3 implementation of DenseNet121 over TensorFlow 1.4 was used.
- the Adam optimizer with Keras default parameters and a batch size of 16 were used.
- An epoch was defined as 150 batches.
- the starting learning rate was 0.001, which was multiplied by 0.75 when the validation loss did not improve for 30 epochs.
- Training was performed for 2000 epochs.
- the model with the lowest loss on the validation set was selected.
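The plateau-based learning-rate rule above (start at 0.001, multiply by 0.75 after 30 epochs without validation improvement) behaves like Keras's ReduceLROnPlateau callback; a minimal stdlib sketch that replays a validation-loss history (the helper name and replay framing are assumptions for illustration):

```python
def schedule_lr(val_losses, lr0=0.001, factor=0.75, patience=30):
    """Replay a per-epoch validation-loss history and return the final
    learning rate: multiply by `factor` whenever the loss has not
    improved for `patience` consecutive epochs."""
    lr, best, wait = lr0, float("inf"), 0
    for loss in val_losses:
        if loss < best:
            best, wait = loss, 0      # new best: reset the patience counter
        else:
            wait += 1
            if wait >= patience:      # plateau long enough: decay the rate
                lr *= factor
                wait = 0
    return lr

# One improving epoch followed by 30 flat epochs -> one reduction.
lr = schedule_lr([1.0] + [1.0] * 30)
```

In the actual training, the same rule would be applied online per epoch rather than replayed after the fact.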
- Each x-ray image was preprocessed according to the model specifications, with the following additional augmentations: random horizontal flip, random crop (instead of center-crop), random rotation (up to ±9 degrees), and random zoom (up to ±10%).
- the held-out validation set used in the training process (i.e., to select the best model and to determine the threshold) consisted of 532 images (218 positives and 314 negatives). Each image was seen by three expert radiologists. The majority of their opinions (i.e., "No pneumothorax", "There is pneumothorax", "Can't see and can't rule out") was used as the ground-truth (cases without positive or negative majority were removed).
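The majority-vote ground-truth rule described above can be sketched as follows (a minimal Python illustration; the helper name and the use of `None` for dropped cases are assumptions):

```python
from collections import Counter

# The three possible radiologist opinions per image.
NEG, POS, UNSURE = ("No pneumothorax", "There is pneumothorax",
                    "Can't see and can't rule out")

def majority_ground_truth(opinions):
    """Return the majority label if one opinion holds at least 2 of the
    3 votes and is a definite positive or negative; otherwise None,
    meaning the case is removed from the validation set."""
    label, votes = Counter(opinions).most_common(1)[0]
    if votes >= 2 and label in (NEG, POS):
        return label
    return None

kept = majority_ground_truth([POS, POS, UNSURE])     # definite majority
dropped = majority_ground_truth([POS, NEG, UNSURE])  # no majority: removed
```

Note that a 2-of-3 majority of "Can't see and can't rule out" also drops the case, since it is neither a positive nor a negative majority.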
- Inventors performed another computational evaluation of an ensemble of single-label neural networks, created based on at least some implementations of the systems, and/or methods described herein.
- An engineered validation dataset that includes 532 anatomical images was prepared.
- the validation dataset that was used to quantify the performance of the trained single-label neural networks (i.e., for selection for inclusion in the ensemble) was enriched with cases that are difficult to detect (e.g., difficult for a human to detect and/or difficult for a neural network to detect, for example, small pneumothorax), and/or the prevalence of pneumothorax in the validation dataset was set higher than the prevalence of pneumothorax in anatomical images and/or the prevalence in emergency room patients and/or the prevalence in the general population.
- the purpose of the validation dataset was to measure and calibrate the process for detection of pneumothorax by the ensemble on more challenging cases, while giving the challenging cases a higher prevalence than the prevalence measured in the field.
- the evaluation was also performed to validate that the process for detection of pneumothorax also successfully handles those rare cases.
- the validation dataset was engineered to have a diverse cover of a few different subcategories.
- the validation dataset included 218 positives and 314 negatives, and covered the following subcategories:
- Data sources: 3; Modality: CR, DX; Manufacturer: 5; View: AP, PA; Gender: female, male; Age [y]: 18 to 40, 40 to 60, above 60; Co-findings; Size: small, big.
- Inventors performed yet another computational evaluation of the ensemble of single-label neural networks, created based on at least some implementations of the systems, and/or methods described herein.
- a wild validation dataset that included 1215 images was created.
- the purpose of the wild validation dataset was to measure the performance of the ensemble of single-label neural networks, created based on at least some implementations of the systems, and/or methods described herein, on negatives that originate from the general population, while giving them a prevalence that resembles the prevalence measured in the field.
- the evaluation was performed to validate that detection of pneumothorax is successful in the usual cases.
- the wild validation dataset had the same positives as the validation dataset described above.
- the wild validation dataset included 218 positives and 997 negatives.
- the single-label neural network detects about 80.9% of small pneumothorax in anatomical images. It is noted that small pneumothorax are difficult to detect.
- One prior art method that attempted to detect pneumothorax with a simple neural network excluded small pneumothorax due to the lack of technical ability to detect them.
- Inventors performed an evaluation to compare selection of instances of the single-label neural network (e.g., for inclusion in an ensemble) based on the mini-AUC (as described herein) in comparison to the standard AUC process.
- the mini-AUC is more relevant for selection of instances than the standard AUC process, since selection based on the mini-AUC may outperform selection based on the standard AUC at the target operating point.
- the AUC was 95.1% and the mini-AUC (target FPR 85%, tolerance +/-2%) was 93.8%, while for a second instance the AUC was 95.3% and the mini-AUC was 93.6%.
- the mini-AUC for the first instance was greater than the mini-AUC for the second instance by 0.2%, while the AUC for the first instance was lower than the AUC for the second instance by 0.2%.
- the actual specificity of the first instance was better by 0.9% at the produced target.
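The mini-AUC is not fully specified in this excerpt; a plausible stdlib sketch computes the AUC restricted to a narrow FPR window (e.g., a target with ±2% tolerance, per the figures above) and normalizes by the window width. The function names `roc_points` and `partial_auc` are hypothetical, and this is an interpretation, not the patented computation:

```python
def roc_points(scores, labels):
    """(fpr, tpr) operating points obtained by sweeping the decision
    threshold down through the sorted scores (labels are 0/1)."""
    pos = sum(labels)
    neg = len(labels) - pos
    pts = [(0.0, 0.0)]
    tp = fp = 0
    for score, y in sorted(zip(scores, labels), reverse=True):
        if y:
            tp += 1
        else:
            fp += 1
        pts.append((fp / neg, tp / pos))
    return pts

def partial_auc(scores, labels, fpr_lo, fpr_hi):
    """Trapezoidal area under the ROC curve restricted to the FPR window
    [fpr_lo, fpr_hi], normalized by the window width so that a perfect
    classifier scores 1.0 on the window."""
    pts = roc_points(scores, labels)
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        lo, hi = max(x0, fpr_lo), min(x1, fpr_hi)
        if hi > lo:  # segment overlaps the window; interpolate tpr linearly
            def tpr(x):
                return y0 if x1 == x0 else y0 + (y1 - y0) * (x - x0) / (x1 - x0)
            area += (hi - lo) * (tpr(lo) + tpr(hi)) / 2
    return area / (fpr_hi - fpr_lo)

# A perfectly separating classifier scores 1.0 on any window.
scores, labels = [0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]
full = partial_auc(scores, labels, 0.0, 1.0)
mini = partial_auc(scores, labels, 0.05, 0.15)
```

Restricting the area to the operating window explains how two instances can have nearly identical standard AUCs yet differ meaningfully in mini-AUC, as in the comparison above.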
- FIG. 5 presents a table and graphs summarizing the experimental evaluation for comparison of instances of the single-label neural network based on the mini-AUC in comparison to the standard AUC process, in accordance with some embodiments of the present invention.
- composition or method may include additional ingredients and/or steps, but only if the additional ingredients and/or steps do not materially alter the basic and novel characteristics of the claimed composition or method.
- a compound or “at least one compound” may include a plurality of compounds, including mixtures thereof.
- range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
- Image Processing (AREA)
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/972,912 US10706545B2 (en) | 2018-05-07 | 2018-05-07 | Systems and methods for analysis of anatomical images |
US16/269,633 US10949968B2 (en) | 2018-05-07 | 2019-02-07 | Systems and methods for detecting an indication of a visual finding type in an anatomical image |
US16/269,619 US10891731B2 (en) | 2018-05-07 | 2019-02-07 | Systems and methods for pre-processing anatomical images for feeding into a classification neural network |
PCT/IB2019/053724 WO2019215604A1 (en) | 2018-05-07 | 2019-05-07 | Systems and methods for detecting an indication of a visual finding type in an anatomical image |
Publications (2)
Publication Number | Publication Date |
---|---|
EP3791325A1 true EP3791325A1 (en) | 2021-03-17 |
EP3791325A4 EP3791325A4 (en) | 2022-04-13 |
Family
ID=66448425
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP19800865.8A Withdrawn EP3791310A4 (en) | 2018-05-07 | 2019-05-07 | Systems and methods for pre-processing anatomical images for feeding into a classification neural network |
EP19800738.7A Withdrawn EP3791325A4 (en) | 2018-05-07 | 2019-05-07 | Systems and methods for detecting an indication of a visual finding type in an anatomical image |
EP19173136.3A Pending EP3567525A1 (en) | 2018-05-07 | 2019-05-07 | Systems and methods for analysis of anatomical images each captured at a unique orientation |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP19800865.8A Withdrawn EP3791310A4 (en) | 2018-05-07 | 2019-05-07 | Systems and methods for pre-processing anatomical images for feeding into a classification neural network |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP19173136.3A Pending EP3567525A1 (en) | 2018-05-07 | 2019-05-07 | Systems and methods for analysis of anatomical images each captured at a unique orientation |
Country Status (4)
Country | Link |
---|---|
EP (3) | EP3791310A4 (en) |
JP (1) | JP2019195627A (en) |
DE (1) | DE202019005911U1 (en) |
WO (3) | WO2019215606A1 (en) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10891731B2 (en) | 2018-05-07 | 2021-01-12 | Zebra Medical Vision Ltd. | Systems and methods for pre-processing anatomical images for feeding into a classification neural network |
US10706545B2 (en) | 2018-05-07 | 2020-07-07 | Zebra Medical Vision Ltd. | Systems and methods for analysis of anatomical images |
US10949968B2 (en) | 2018-05-07 | 2021-03-16 | Zebra Medical Vision Ltd. | Systems and methods for detecting an indication of a visual finding type in an anatomical image |
WO2019239155A1 (en) * | 2018-06-14 | 2019-12-19 | Kheiron Medical Technologies Ltd | Second reader suggestion |
CN111126454B (en) * | 2019-12-05 | 2024-03-26 | 东软集团股份有限公司 | Image processing method, device, storage medium and electronic equipment |
JP6737491B1 (en) * | 2020-01-09 | 2020-08-12 | 株式会社アドイン研究所 | Diagnostic device, diagnostic system and program using AI |
KR102405314B1 (en) * | 2020-06-05 | 2022-06-07 | 주식회사 래디센 | Method and system for real-time automatic X-ray image reading based on artificial intelligence |
EP4161391A4 (en) * | 2020-06-09 | 2024-06-19 | Annalise-AI Pty Ltd | Systems and methods for automated analysis of medical images |
US11487651B2 (en) | 2020-07-06 | 2022-11-01 | Fujifilm Medical Systems U.S.A., Inc. | Systems and methods for quantifying the effectiveness of software at displaying a digital record |
CN112101162B (en) * | 2020-09-04 | 2024-03-26 | 沈阳东软智能医疗科技研究院有限公司 | Image recognition model generation method and device, storage medium and electronic equipment |
KR102226743B1 (en) * | 2020-09-15 | 2021-03-12 | 주식회사 딥노이드 | Apparatus for quantitatively measuring pneumothorax in chest radiographic images based on a learning model and method therefor |
EP4315162A1 (en) | 2021-04-01 | 2024-02-07 | Bayer Aktiengesellschaft | Reinforced attention |
CN113764077B (en) * | 2021-07-27 | 2024-04-19 | 上海思路迪生物医学科技有限公司 | Pathological image processing method and device, electronic equipment and storage medium |
CN113806538B (en) * | 2021-09-17 | 2023-08-22 | 平安银行股份有限公司 | Label extraction model training method, device, equipment and storage medium |
WO2023056261A1 (en) * | 2021-09-30 | 2023-04-06 | Microport Orthopedics Holdings Inc. | Systems and methods of using photogrammetry for intraoperatively aligning surgical elements |
KR102671359B1 (en) * | 2022-02-18 | 2024-05-30 | 건양대학교 산학협력단 | Scoliosis early screening system using chest X-ray image |
WO2024036374A1 (en) * | 2022-08-17 | 2024-02-22 | Annalise-Ai Pty Ltd | Methods and systems for automated analysis of medical images |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5857030A (en) * | 1995-08-18 | 1999-01-05 | Eastman Kodak Company | Automated method and system for digital image processing of radiologic images utilizing artificial neural networks |
US7519207B2 (en) * | 2004-11-19 | 2009-04-14 | Carestream Health, Inc. | Detection and correction method for radiograph orientation |
US7574028B2 (en) * | 2004-11-23 | 2009-08-11 | Carestream Health, Inc. | Method for recognizing projection views of radiographs |
US8923580B2 (en) * | 2011-11-23 | 2014-12-30 | General Electric Company | Smart PACS workflow systems and methods driven by explicit learning from users |
US20170221204A1 (en) * | 2016-01-28 | 2017-08-03 | Siemens Medical Solutions Usa, Inc. | Overlay Of Findings On Image Data |
CN109690554B (en) * | 2016-07-21 | 2023-12-05 | 西门子保健有限责任公司 | Method and system for artificial intelligence based medical image segmentation |
US10445462B2 (en) * | 2016-10-12 | 2019-10-15 | Terarecon, Inc. | System and method for medical image interpretation |
-
2019
- 2019-05-07 EP EP19800865.8A patent/EP3791310A4/en not_active Withdrawn
- 2019-05-07 WO PCT/IB2019/053726 patent/WO2019215606A1/en unknown
- 2019-05-07 JP JP2019087284A patent/JP2019195627A/en active Pending
- 2019-05-07 WO PCT/IB2019/053724 patent/WO2019215604A1/en unknown
- 2019-05-07 EP EP19800738.7A patent/EP3791325A4/en not_active Withdrawn
- 2019-05-07 DE DE202019005911.3U patent/DE202019005911U1/en active Active
- 2019-05-07 EP EP19173136.3A patent/EP3567525A1/en active Pending
- 2019-05-07 WO PCT/IB2019/053725 patent/WO2019215605A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2019215605A1 (en) | 2019-11-14 |
EP3791310A4 (en) | 2022-03-30 |
EP3567525A1 (en) | 2019-11-13 |
EP3791310A1 (en) | 2021-03-17 |
DE202019005911U1 (en) | 2023-04-19 |
EP3791325A4 (en) | 2022-04-13 |
WO2019215606A1 (en) | 2019-11-14 |
JP2019195627A (en) | 2019-11-14 |
WO2019215604A1 (en) | 2019-11-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10949968B2 (en) | Systems and methods for detecting an indication of a visual finding type in an anatomical image | |
EP3791325A1 (en) | Systems and methods for detecting an indication of a visual finding type in an anatomical image | |
US10891731B2 (en) | Systems and methods for pre-processing anatomical images for feeding into a classification neural network | |
US10706545B2 (en) | Systems and methods for analysis of anatomical images | |
US11380432B2 (en) | Systems and methods for improved analysis and generation of medical imaging reports | |
US10311566B2 (en) | Methods and systems for automatically determining image characteristics serving as a basis for a diagnosis associated with an image study type | |
JP5952835B2 (en) | Imaging protocol updates and / or recommenders | |
US11646119B2 (en) | Systems and methods for automated analysis of medical images | |
US20180365834A1 (en) | Learning data generation support apparatus, learning data generation support method, and learning data generation support program | |
WO2017164204A1 (en) | Method and apparatus for extracting diagnosis object from medical document | |
EP3939003B1 (en) | Systems and methods for assessing a likelihood of cteph and identifying characteristics indicative thereof | |
US10839299B2 (en) | Non-leading computer aided detection of features of interest in imagery | |
US20230099284A1 (en) | System and method for prognosis management based on medical information of patient | |
US20240029251A1 (en) | Medical image analysis apparatus, medical image analysis method, and medical image analysis program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20201207 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
REG | Reference to a national code |
Ref country code: HK Ref legal event code: DE Ref document number: 40040488 Country of ref document: HK |
|
DAV | Request for validation of the european patent (deleted) | ||
DAX | Request for extension of the european patent (deleted) | ||
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R079 Free format text: PREVIOUS MAIN CLASS: G06N0003020000 Ipc: G06K0009620000 |
|
A4 | Supplementary search report drawn up and despatched |
Effective date: 20220311 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: G06K 9/62 20060101AFI20220305BHEP |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20221011 |