US12340509B2 - Systems and methods for automated digital image content extraction and analysis - Google Patents
- Publication number
- US12340509B2 (Application No. US 17/191,963)
- Authority
- US
- United States
- Prior art keywords
- feature
- image
- patient
- image analysis
- analysis system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
- G06T7/0014—Biomedical image inspection using an image reference approach
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2137—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on criteria of topology preservation, e.g. multidimensional scaling or self-organising maps
- G06F18/21375—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on criteria of topology preservation, e.g. multidimensional scaling or self-organising maps involving differential geometry, e.g. embedding of pattern manifold
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/60—Rotation of whole images or parts thereof
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/337—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/7715—Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H40/00—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
- G16H40/60—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
- G16H40/67—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10116—X-ray image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30008—Bone
- G06T2207/30012—Spine; Backbone
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30041—Eye; Retina; Ophthalmic
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30061—Lung
- G06T2207/30064—Lung nodule
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30068—Mammography; Breast
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30088—Skin; Dermal
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Definitions
- Exchanging information relating to certain topics is facilitated by the inclusion of images as support for (or in place of) textual descriptions of information to be exchanged.
- Images may be particularly helpful in providing complete descriptions of certain medical conditions through the use of photographs or other images generated through applicable imaging techniques (e.g., X-ray, Magnetic Resonance Imaging (MRI), CT scan, and/or the like).
- Generated images have historically been unsuitable for automated review via computer-implemented systems, such as for automated diagnoses of medical conditions reflected within those images. Accordingly, a need exists for systems and methods configured for automated review of images to establish objective image content and/or to perform automated analysis of the content of images.
- Various embodiments are directed to computing systems and methods configured to apply objective image content analysis processes so as to determine the presence (or absence) of objective image characteristics.
- In various embodiments, an image analysis system may receive data indicative of an intended analytical process to be applied to a particular image, and the image analysis system retrieves appropriate image analysis rules and models for performing the intended analytical process.
- In certain embodiments, the image analysis system identifies reference features within the image for analysis and utilizes those reference features to establish a scale and/or a point of reference to determine absolute or relative characteristics of various features identified within the image. Those characteristics may be compared against applicable objective criteria that may be utilized to perform an automated diagnosis and/or identification of characteristics of the image.
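The reference-feature scaling described above can be sketched in a few lines: a detected feature of known physical size yields a millimetres-per-pixel factor, which then converts pixel distances between other features into absolute measurements. This is a minimal illustration, not the patent's implementation; the bounding-box format and sizes below are assumptions.

```python
import math

def establish_scale(ref_bbox_px, ref_width_mm):
    """Millimetres per pixel, derived from a reference feature's bounding
    box (x0, y0, x1, y1) and its known physical width."""
    x0, y0, x1, y1 = ref_bbox_px
    width_px = abs(x1 - x0)
    if width_px == 0:
        raise ValueError("degenerate reference feature")
    return ref_width_mm / width_px

def measure_distance_mm(p1, p2, mm_per_px):
    """Absolute distance between two feature locations, in millimetres."""
    return math.dist(p1, p2) * mm_per_px

# Hypothetical 25 mm reference marker spanning 100 px:
scale = establish_scale((10, 10, 110, 60), ref_width_mm=25.0)
distance = measure_distance_mm((0, 0), (300, 400), scale)  # 500 px apart
```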
- The resulting diagnosis and/or identification of characteristics of the image may be utilized to select appropriate routing of the image (and/or an associated data file generated with the image).
- Appropriate reports may be made available via one or more tools based at least in part on the results of the image analysis.
- In certain embodiments, the one or more processors are further configured to: generate a visual scale overlay based at least in part on the absolute measurement scale; and generate a display comprising the at least one embedded image overlaid with the visual scale overlay.
- In certain embodiments, the visual scale overlay comprises a visual grid having grid lines a defined absolute distance apart.
- In certain embodiments, the visual scale overlay comprises one or more visual boundaries identifying boundaries of one or more of the plurality of included features.
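Generating a grid overlay whose lines sit a defined absolute distance apart reduces to converting that distance into a pixel spacing via the measurement scale. A minimal sketch of the position computation (the drawing step itself is omitted; function and parameter names are illustrative):

```python
def grid_line_positions(image_size_px, mm_per_px, spacing_mm):
    """Pixel coordinates of vertical (xs) and horizontal (ys) grid lines
    spaced a fixed absolute distance apart across the image."""
    w, h = image_size_px
    step_px = spacing_mm / mm_per_px  # absolute spacing in pixels
    xs = [round(i * step_px) for i in range(int(w // step_px) + 1)]
    ys = [round(i * step_px) for i in range(int(h // step_px) + 1)]
    return xs, ys

# 100x50 px image at 0.25 mm/px, grid every 10 mm -> lines every 40 px
xs, ys = grid_line_positions((100, 50), 0.25, 10)
```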
- In certain embodiments, the one or more processors are further configured to: determine a transmission destination based at least in part on the image characteristics for the image data; and transmit the image data to the transmission destination.
- Various embodiments are directed to a computer-implemented method for identifying characteristics of features present within an image, the computer-implemented method comprising: receiving, via one or more processors, image data comprising at least one embedded image; identifying, via the one or more processors, image analysis criteria stored within one or more non-transitory memory storage areas, wherein the image analysis criteria comprises at least one feature identification model and at least one scaling model; applying, via the one or more processors, the at least one feature identification model to the at least one embedded image to identify a plurality of included features represented within the at least one embedded image, wherein the plurality of included features comprises at least one reference feature; applying, via the one or more processors, the at least one scaling model, based at least in part on the at least one reference feature, to establish an absolute measurement scale for the at least one embedded image; measuring, via the absolute measurement scale, a distance between locations within the at least one embedded image and associated with at least two of the plurality of included features; and determining, based at least in part on the image analysis criteria and the distance between the at least two of the plurality of included features, image characteristics for the image data.
- In certain embodiments, the method further comprises generating a visual scale overlay based at least in part on the absolute measurement scale; and generating a display comprising the at least one embedded image overlaid with the visual scale overlay.
- In certain embodiments, the visual scale overlay comprises a visual grid having grid lines a defined absolute distance apart.
- In certain embodiments, the visual scale overlay comprises one or more visual boundaries identifying boundaries of one or more of the plurality of included features.
- In certain embodiments, the method further comprises determining a transmission destination based at least in part on the image characteristics for the image data; and transmitting the image data to the transmission destination.
- In certain embodiments, the image data is received from a user computing entity, and the one or more processors are further configured to transmit data comprising the image characteristics to the user computing entity.
- In certain embodiments, the method further comprises determining, based at least in part on application of the at least one feature identification model, whether at least one reference feature is present within the at least one embedded image; and upon determining that the at least one reference feature is not present within the at least one embedded image, transmitting an error message to the user computing entity.
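The reference-feature check and error-message path can be sketched as a simple validation over the detected feature set. The feature representation, label, and error payload shape are all illustrative assumptions:

```python
def validate_reference_feature(detected_features, required="reference-marker"):
    """Return an error payload when no reference feature was detected
    among the features identified in the embedded image; otherwise OK.
    Feature dicts and the payload format are hypothetical."""
    if not any(f.get("label") == required for f in detected_features):
        return {"status": "error",
                "message": f"no '{required}' feature found in embedded image"}
    return {"status": "ok"}
```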
- Certain embodiments are directed to a computer program product for identifying characteristics of features present within an image, the computer program product comprising at least one non-transitory computer-readable storage medium having computer-readable program code portions stored therein, the computer-readable program code portions configured to cause an executing processor to: receive image data comprising at least one embedded image; identify image analysis criteria stored within one or more non-transitory memory storage areas, wherein the image analysis criteria comprises at least one feature identification model and at least one scaling model; apply the at least one feature identification model to the at least one embedded image to identify a plurality of included features represented within the at least one embedded image, wherein the plurality of included features comprises at least one reference feature; apply the at least one scaling model, based at least in part on the at least one reference feature, to establish an absolute measurement scale for the at least one embedded image; measure, via the absolute measurement scale, a distance between locations within the at least one embedded image and associated with at least two of the plurality of included features; and determine, based at least in part on the image analysis criteria and the distance between the at least two of the plurality of included features, image characteristics for the image data.
- In certain embodiments, the computer program product further comprises one or more executable portions configured to: generate a visual scale overlay based at least in part on the absolute measurement scale; and generate a display comprising the at least one embedded image overlaid with the visual scale overlay.
- In certain embodiments, the visual scale overlay comprises a visual grid having grid lines a defined absolute distance apart.
- In certain embodiments, the visual scale overlay comprises one or more visual boundaries identifying boundaries of one or more of the plurality of included features.
- In certain embodiments, the computer program product further comprises one or more executable portions configured to: determine a transmission destination based at least in part on the image characteristics for the image data; and transmit the image data to the transmission destination.
- In certain embodiments, the image data is received from a user computing entity, and the one or more processors are further configured to transmit data comprising the image characteristics to the user computing entity.
- In certain embodiments, the computer program product comprises one or more executable portions configured to: determine, based at least in part on application of the at least one feature identification model, whether at least one reference feature is present within the at least one embedded image; and upon determining that the at least one reference feature is not present within the at least one embedded image, transmit an error message to the user computing entity.
- Certain embodiments are directed to an image analysis system configured for identifying characteristics of features present within an image, the image analysis system comprising: one or more non-transitory memory storage areas; and one or more processors collectively configured to: receive image data comprising at least one embedded image; identify image analysis criteria stored within the non-transitory memory storage areas, wherein the image analysis criteria comprises at least one feature identification model; apply the at least one feature identification model to the at least one embedded image to identify a plurality of included features represented within the embedded image; determine, based at least in part on the at least one feature identification model and the plurality of included features identified within the embedded images, feature characteristics of one or more of the plurality of included features; and determine, based at least in part on the image analysis criteria and the feature characteristics of the one or more of the plurality of included features, an image classification for the image data.
- In certain embodiments, the one or more processors are further configured to: determine an orientation of each of the plurality of included features; determine a relative positioning between at least two of the plurality of included features; and determine, based at least in part on the image analysis criteria and the relative positioning between the at least two of the plurality of included features, the image classification for the image data.
- In certain embodiments, determining a relative positioning between at least two of the plurality of included features comprises determining an angle of orientation between the at least two of the plurality of included features.
- In certain embodiments, determining a relative positioning between at least two of the plurality of included features comprises determining a relative distance between portions of each of the at least two of the plurality of included features, wherein the relative distance is a percentage of a size of one of the at least two of the plurality of included features.
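Both relative-positioning measures described above have direct geometric sketches: the angle of orientation between two feature centroids, and a distance expressed as a percentage of one feature's size. A minimal illustration, assuming pixel-coordinate centroids:

```python
import math

def orientation_angle_deg(centroid_a, centroid_b):
    """Angle (degrees, relative to horizontal) of the line joining two
    feature centroids."""
    dx = centroid_b[0] - centroid_a[0]
    dy = centroid_b[1] - centroid_a[1]
    return math.degrees(math.atan2(dy, dx))

def relative_distance_pct(point_a, point_b, feature_size_px):
    """Distance between two feature points, as a percentage of one
    feature's size (so the result is scale-independent)."""
    return 100.0 * math.dist(point_a, point_b) / feature_size_px
```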
- In certain embodiments, the at least one feature identification model comprises a supervised machine-learning feature identification model configured to: identify a plurality of features represented within the embedded image; and classify one or more of the plurality of features represented within the embedded image; and wherein determining an image classification for the image data comprises determining image characteristics based at least in part on a classification of the one or more of the plurality of features represented within the embedded image.
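As a deliberately simplified stand-in for the supervised classification step (a nearest-reference-pattern classifier, in the spirit of classification code G06F18/2413 above, rather than the patent's actual model), feature vectors extracted from the image can be labeled by their closest reference pattern. Vectors and labels below are illustrative:

```python
import math

def classify_feature(feature_vec, reference_patterns):
    """Assign the label of the nearest reference pattern (1-NN over
    Euclidean distance). reference_patterns is a list of
    (label, reference_vector) pairs."""
    best_label, best_dist = None, math.inf
    for label, ref_vec in reference_patterns:
        d = math.dist(feature_vec, ref_vec)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label
```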
- In certain embodiments, classifying the one or more of the plurality of features represented within the embedded image comprises classifying a skin lesion based at least in part on a convolutional neural network based feature identification model. Moreover, in various embodiments, classifying the one or more of the plurality of features represented within the embedded image comprises classifying a breast mass within a mammograph based at least in part on a convolutional neural network based feature identification model.
- In certain embodiments, classifying the one or more of the plurality of features represented within the embedded image comprises determining a ratio between measurements of one or more vertebrae. In other embodiments, classifying the one or more of the plurality of features represented within the embedded image comprises comparing a location of a pubic symphysis relative to a detected lower edge of a hanging abdominal panniculus. In certain embodiments, classifying the one or more of the plurality of features represented within the embedded image comprises detecting an indention within a patient's shoulder. In various embodiments, classifying the one or more of the plurality of features represented within the embedded image comprises detecting a symmetry of a detected nose.
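The vertebral-ratio criterion mentioned above can be sketched as a comparison of measured heights against a threshold. The specific ratio (anterior vs. posterior height) and the 0.8 threshold are illustrative assumptions, not values taken from the patent:

```python
def vertebra_height_ratio(anterior_px, posterior_px):
    """Anterior-to-posterior height ratio of a vertebra, from two
    pixel measurements taken on the same image."""
    return anterior_px / posterior_px

def flag_wedging(ratio, threshold=0.8):
    """Flag possible vertebral wedging/thinning when the ratio falls
    below a hypothetical threshold."""
    return ratio < threshold
```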
- Certain embodiments are directed to a computer-implemented method for identifying characteristics of features present within an image, the computer-implemented method comprising: receiving, via one or more processors, image data comprising at least one embedded image; identifying, via the one or more processors, image analysis criteria stored within the non-transitory memory storage areas, wherein the image analysis criteria comprises at least one feature identification model; applying, via the one or more processors, the at least one feature identification model to the at least one embedded image to identify a plurality of included features represented within the embedded image; determining, based at least in part on the at least one feature identification model and the plurality of included features identified within the embedded images, feature characteristics of one or more of the plurality of included features; and determining, based at least in part on the image analysis criteria and the feature characteristics of the one or more of the plurality of included features, an image classification for the image data.
- In certain embodiments, the method further comprises determining an orientation of each of the plurality of included features; determining a relative positioning between at least two of the plurality of included features; and determining, based at least in part on the image analysis criteria and the relative positioning between the at least two of the plurality of included features, the image classification for the image data.
- In certain embodiments, determining a relative positioning between at least two of the plurality of included features comprises determining an angle of orientation between the at least two of the plurality of included features.
- In certain embodiments, determining a relative positioning between at least two of the plurality of included features comprises determining a relative distance between portions of each of the at least two of the plurality of included features, wherein the relative distance is a percentage of a size of one of the at least two of the plurality of included features.
- In certain embodiments, the at least one feature identification model comprises a supervised machine-learning feature identification model configured to: identify a plurality of features represented within the embedded image; and classify one or more of the plurality of features represented within the embedded image; and wherein determining an image classification for the image data comprises determining image characteristics based at least in part on a classification of the one or more of the plurality of features represented within the embedded image.
- In certain embodiments, classifying the one or more of the plurality of features represented within the embedded image comprises classifying a skin lesion based at least in part on a convolutional neural network based feature identification model. In certain embodiments, classifying the one or more of the plurality of features represented within the embedded image comprises classifying a breast mass within a mammograph based at least in part on a convolutional neural network based feature identification model. In certain embodiments, classifying the one or more of the plurality of features represented within the embedded image comprises determining a ratio between measurements of one or more vertebrae.
- In other embodiments, classifying the one or more of the plurality of features represented within the embedded image comprises comparing a location of a pubic symphysis relative to a detected lower edge of a hanging abdominal panniculus. In certain embodiments, classifying the one or more of the plurality of features represented within the embedded image comprises detecting an indention within a patient's shoulder. In various embodiments, classifying the one or more of the plurality of features represented within the embedded image comprises detecting a symmetry of a detected nose.
- Certain embodiments are directed to a computer program product for identifying characteristics of features present within an image, the computer program product comprising at least one non-transitory computer-readable storage medium having computer-readable program code portions stored therein, the computer-readable program code portions configured to cause an executing processor to: receive image data comprising at least one embedded image; identify image analysis criteria stored within one or more non-transitory memory storage areas, wherein the image analysis criteria comprises at least one feature identification model; apply the at least one feature identification model to the at least one embedded image to identify a plurality of included features represented within the embedded image; determine, based at least in part on the at least one feature identification model and the plurality of included features identified within the embedded images, feature characteristics of one or more of the plurality of included features; and determine, based at least in part on the image analysis criteria and the feature characteristics of the one or more of the plurality of included features, an image classification for the image data.
- In certain embodiments, the executable portions are further configured to cause an executing processor to: determine an orientation of each of the plurality of included features; determine a relative positioning between at least two of the plurality of included features; and determine, based at least in part on the image analysis criteria and the relative positioning between the at least two of the plurality of included features, the image classification for the image data.
- In certain embodiments, determining a relative positioning between at least two of the plurality of included features comprises determining an angle of orientation between the at least two of the plurality of included features.
- In certain embodiments, determining a relative positioning between at least two of the plurality of included features comprises determining a relative distance between portions of each of the at least two of the plurality of included features, wherein the relative distance is a percentage of a size of one of the at least two of the plurality of included features.
- In certain embodiments, the at least one feature identification model comprises a supervised machine-learning feature identification model configured to: identify a plurality of features represented within the embedded image; and classify one or more of the plurality of features represented within the embedded image; and wherein determining an image classification for the image data comprises determining image characteristics based at least in part on a classification of the one or more of the plurality of features represented within the embedded image.
- In certain embodiments, classifying the one or more of the plurality of features represented within the embedded image comprises classifying a skin lesion based at least in part on a convolutional neural network based feature identification model. In certain embodiments, classifying the one or more of the plurality of features represented within the embedded image comprises classifying a breast mass within a mammograph based at least in part on a convolutional neural network based feature identification model.
- FIG. 1 is a diagram of a system that can be used in conjunction with various embodiments of the present invention
- FIG. 2 is a schematic of an image analysis system in accordance with certain embodiments of the present invention.
- FIG. 3 is a schematic of a user computing entity in accordance with certain embodiments of the present invention.
- FIG. 4 is a flow diagram illustrating operation of processing images according to certain embodiments.
- FIGS. 5 A- 5 B are example images extracted from a source data file in accordance with certain embodiments.
- FIGS. 6 A- 6 C are example images illustrating detection of feature orientation within such images in accordance with certain embodiments
- FIGS. 7 A- 7 B are example images illustrating application of at least one quality control filter in accordance with certain embodiments.
- FIGS. 8 A- 8 B are example images illustrating application of another quality control filter in accordance with certain embodiments.
- FIGS. 9 A- 9 B are example images illustrating application of a content-orientation based filter in accordance with certain embodiments.
- FIGS. 10 A- 10 B are example images illustrating detection of specific features within an image in accordance with certain embodiments.
- FIGS. 11 A- 13 E are example images illustrating various processes for detecting sub-features in accordance with certain embodiments
- FIG. 16 is an example image illustrating application of sub-feature detection models in accordance with certain embodiments.
- FIGS. 17 - 18 B are example images illustrating example overlays that may be applied to an image within an image analysis tool in accordance with certain embodiments.
- FIGS. 19 A- 19 E are example images illustrating processes for extraction and analysis of sinusitis images in accordance with certain embodiments.
- FIGS. 20 A- 21 C are example images illustrating processes for extraction and analysis of scoliosis images in accordance with certain embodiments.
- FIG. 22 is an example image relating to a skin lesion analysis in accordance with certain embodiments.
- FIGS. 23 - 23 B are example images relating to a lung nodule analysis in accordance with certain embodiments.
- FIG. 24 is an example flowchart illustrating analysis of a mammogram in accordance with certain embodiments.
- FIG. 25 illustrates example processes for detecting nasal deformities in accordance with certain embodiments.
- FIGS. 26 A- 26 C illustrate example outputs of detections of nasal deformities in accordance with certain embodiments.
- FIG. 27 illustrates an example mid-sagittal X-ray spinal view for classifying images according to vertebrae thinning in accordance with certain embodiments.
- FIGS. 28 A- 28 B illustrate example mid-sagittal X-ray spinal views for classifying images according to disk thinning in accordance with certain embodiments.
- FIG. 29 illustrates another example mid-sagittal X-ray spinal view for classifying images according to spinal canal restrictions in accordance with certain embodiments.
- FIG. 30 illustrates torso image views in accordance with certain embodiments.
- FIGS. 31 A- 32 B illustrate example frontal torso images indicating filters for classifying images relating to breast surgery according to certain embodiments.
- FIGS. 33 - 34 B illustrate example facial images for classifying images relating to rhinoplasty requirements according to certain embodiments.
- Embodiments of the present invention may be implemented in various ways, including as computer program products that comprise articles of manufacture.
- Such computer program products may include one or more software components including, for example, software objects, methods, data structures, and/or the like.
- A software component may be coded in any of a variety of programming languages.
- An illustrative programming language may be a lower-level programming language such as an assembly language associated with a particular hardware architecture and/or operating system platform.
- A software component comprising assembly language instructions may require conversion into executable machine code by an assembler prior to execution by the hardware architecture and/or platform.
- Another example programming language may be a higher-level programming language that may be portable across multiple architectures.
- A software component comprising higher-level programming language instructions may require conversion to an intermediate representation by an interpreter or a compiler prior to execution.
- Example programming languages include, but are not limited to, a macro language, a shell or command language, a job control language, a script language, a database query or search language, and/or a report writing language.
- A software component comprising instructions in one of the foregoing examples of programming languages may be executed directly by an operating system or other software component without having to be first transformed into another form.
- A software component may be stored as a file or other data storage construct.
- Software components of a similar type or functionally related may be stored together such as, for example, in a particular directory, folder, or library.
- Software components may be static (e.g., pre-established or fixed) or dynamic (e.g., created or modified at the time of execution).
- A computer program product may include a non-transitory computer-readable storage medium storing applications, programs, program modules, scripts, source code, program code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like (also referred to herein as executable instructions, instructions for execution, computer program products, program code, and/or similar terms used herein interchangeably).
- Such non-transitory computer-readable storage media include all computer-readable media (including volatile and non-volatile media).
- A non-volatile computer-readable storage medium may include a floppy disk, flexible disk, hard disk, solid-state storage (SSS) (e.g., a solid state drive (SSD), solid state card (SSC), solid state module (SSM), or enterprise flash drive), magnetic tape, or any other non-transitory magnetic medium, and/or the like.
- A non-volatile computer-readable storage medium may also include a punch card, paper tape, optical mark sheet (or any other physical medium with patterns of holes or other optically recognizable indicia), compact disc read only memory (CD-ROM), compact disc-rewritable (CD-RW), digital versatile disc (DVD), Blu-ray disc (BD), any other non-transitory optical medium, and/or the like.
- Such a non-volatile computer-readable storage medium may also include read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory (e.g., Serial, NAND, NOR, and/or the like), multimedia memory cards (MMC), secure digital (SD) memory cards, SmartMedia cards, CompactFlash (CF) cards, Memory Sticks, and/or the like.
- A non-volatile computer-readable storage medium may also include conductive-bridging random access memory (CBRAM), phase-change random access memory (PRAM), ferroelectric random-access memory (FeRAM), non-volatile random-access memory (NVRAM), magneto resistive random-access memory (MRAM), resistive random-access memory (RRAM), Silicon-Oxide-Nitride-Oxide-Silicon memory (SONOS), floating junction gate random access memory (FJG RAM), Millipede memory, racetrack memory, and/or the like.
- A volatile computer-readable storage medium may include random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), fast page mode dynamic random access memory (FPM DRAM), extended data-out dynamic random access memory (EDO DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), double data rate type two synchronous dynamic random access memory (DDR2 SDRAM), double data rate type three synchronous dynamic random access memory (DDR3 SDRAM), Rambus dynamic random access memory (RDRAM), Twin Transistor RAM (TTRAM), Thyristor RAM (T-RAM), Zero-capacitor RAM (Z-RAM), Rambus in-line memory module (RIMM), dual in-line memory module (DIMM), single in-line memory module (SIMM), video random access memory (VRAM), cache memory (including various levels), flash memory, register memory, and/or the like.
- Embodiments of the present invention may also be implemented as methods, apparatus, systems, computing devices, computing entities, and/or the like.
- Embodiments of the present invention may take the form of a data structure, apparatus, system, computing device, computing entity, and/or the like executing instructions stored on a computer-readable storage medium to perform certain steps or operations.
- Embodiments of the present invention may also take the form of an entirely hardware embodiment, an entirely computer program product embodiment, and/or an embodiment that comprises a combination of computer program products and hardware performing certain steps or operations.
- FIG. 1 provides an illustration of a system 100 that can be used in conjunction with various embodiments of the present invention.
- The system 100 may comprise one or more image analysis systems 65 , one or more user computing entities 30 (e.g., which may encompass handheld computing devices, laptop computing devices, desktop computing devices, one or more Internet of Things (IoT) devices, and/or the like), one or more networks 135 , and/or the like.
- Each of the components of the system may be in electronic communication with, for example, one another over the same or different wireless or wired networks 135 including, for example, a wired or wireless Personal Area Network (PAN), Local Area Network (LAN), Metropolitan Area Network (MAN), Wide Area Network (WAN), and/or the like.
- Although FIG. 1 illustrates certain system entities as separate, standalone entities, the various embodiments are not limited to this particular architecture.
- FIG. 2 provides a schematic of an image analysis system 65 according to one embodiment of the present invention.
- The terms computing entity, entity, device, system, and/or similar words used herein interchangeably may refer to, for example, one or more computers, computing entities, desktop computers, mobile phones, tablets, phablets, notebooks, laptops, distributed systems, items/devices, terminals, servers or server networks, blades, gateways, switches, processing devices, processing entities, set-top boxes, relays, routers, network access points, base stations, the like, and/or any combination of devices or entities adapted to perform the functions, operations, and/or processes described herein.
- The image analysis system 65 may further include or be in communication with non-volatile media (also referred to as non-volatile storage, memory, memory storage, memory circuitry, and/or similar terms used herein interchangeably).
- Non-volatile storage or memory may include one or more non-volatile storage or memory media 206 as described above, such as hard disks, ROM, PROM, EPROM, EEPROM, flash memory, MMCs, SD memory cards, Memory Sticks, CBRAM, PRAM, FeRAM, RRAM, SONOS, racetrack memory, and/or the like.
- Memory media 206 may also be embodied as a data storage device or devices, as a separate database server or servers, or as a combination of data storage devices and separate database servers. Further, in some embodiments, memory media 206 may be embodied as a distributed repository such that some of the stored information/data is stored centrally in a location within the system and other information/data is stored in one or more remote locations. Alternatively, in some embodiments, the distributed repository may be distributed over a plurality of remote storage locations only.
- An example of the embodiments contemplated herein would include a cloud data storage system maintained by a third-party provider and where some or all of the information/data required for the operation of the system may be stored. As a person of ordinary skill in the art would recognize, the information/data required for the operation of the system may also be partially stored in the cloud data storage system and partially stored in a locally maintained data storage system.
- Memory media 206 may include information/data accessed and stored by the system to facilitate the operations of the system. More specifically, memory media 206 may encompass one or more data stores configured to store information/data usable in certain embodiments.
- Data storage repositories may comprise stored data of one or more models utilized for identifying features and/or classifications of various images (e.g., training data utilized by one or more machine-learning models). Data stored within such data repositories may be utilized during operation of various embodiments as discussed herein.
- The image analysis system 65 may further include or be in communication with volatile media (also referred to as volatile storage, memory, memory storage, memory circuitry, and/or similar terms used herein interchangeably).
- Volatile storage or memory may also include one or more volatile storage or memory media 207 as described above, such as RAM, DRAM, SRAM, FPM DRAM, EDO DRAM, SDRAM, DDR SDRAM, DDR2 SDRAM, DDR3 SDRAM, RDRAM, RIMM, DIMM, SIMM, VRAM, cache memory, register memory, and/or the like.
- The volatile storage or memory media may be used to store at least portions of the databases, database instances, database management system entities, data, applications, programs, program modules, scripts, source code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like being executed by, for example, the processing element 205 .
- The databases, database instances, database management system entities, data, applications, programs, program modules, scripts, source code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like may be used to control certain aspects of the operation of the image analysis system 65 with the assistance of the processing element 205 and operating system.
- The image analysis system 65 may also include one or more network and/or communications interfaces 208 for communicating with various computing entities (e.g., user computing entities 30 ), such as by communicating data, content, information, and/or similar terms used herein interchangeably that can be transmitted, received, operated on, processed, displayed, stored, and/or the like.
- The image analysis system 65 may communicate with computing entities or communication interfaces of other computing entities, user computing entities 30 , and/or the like.
- The image analysis system 65 may access various data assets.
- Such communication may be executed using a wired data transmission protocol, such as fiber distributed data interface (FDDI), digital subscriber line (DSL), Ethernet, asynchronous transfer mode (ATM), frame relay, data over cable service interface specification (DOCSIS), or any other wired transmission protocol.
- The image analysis system 65 may be configured to communicate via wireless external communication networks using any of a variety of protocols, such as general packet radio service (GPRS), Universal Mobile Telecommunications System (UMTS), Code Division Multiple Access 2000 (CDMA2000), CDMA2000 1X (1xRTT), Wideband Code Division Multiple Access (WCDMA), Global System for Mobile Communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), Long Term Evolution (LTE), Evolved Universal Terrestrial Radio Access Network (E-UTRAN), Evolution-Data Optimized (EVDO), High Speed Packet Access (HSPA), High-Speed Downlink Packet Access (HSDPA), IEEE 802.11 (Wi-Fi), Wi-Fi Direct, 802.16 (WiMAX), ultra-wideband (UWB), infrared (IR) protocols, near field communication (NFC) protocols, Wibree, Bluetooth protocols, wireless universal serial bus (USB) protocols, and/or any other wireless protocol.
- The image analysis system 65 may use such protocols and standards to communicate using Border Gateway Protocol (BGP), Dynamic Host Configuration Protocol (DHCP), Domain Name System (DNS), File Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP), HTTP over TLS/SSL (HTTPS), Internet Message Access Protocol (IMAP), Network Time Protocol (NTP), Simple Mail Transfer Protocol (SMTP), Telnet, Transport Layer Security (TLS), Secure Sockets Layer (SSL), Internet Protocol (IP), Transmission Control Protocol (TCP), User Datagram Protocol (UDP), Datagram Congestion Control Protocol (DCCP), Stream Control Transmission Protocol (SCTP), Hypertext Markup Language (HTML), and/or the like.
- One or more of the image analysis system's components may be located remotely from other image analysis system 65 components, such as in a distributed system. Furthermore, one or more of the components may be aggregated and additional components performing functions described herein may be included in the image analysis system 65 . Thus, the image analysis system 65 can be adapted to accommodate a variety of needs and circumstances.
- The signals provided to and received from the transmitter 304 and the receiver 306 , respectively, may include signaling information/data in accordance with an air interface standard of applicable wireless systems to communicate with various entities, such as an image analysis system 65 , another user computing entity 30 , and/or the like.
- The user computing entity 30 may be capable of operating with one or more air interface standards, communication protocols, modulation types, and access types. More particularly, the user computing entity 30 may operate in accordance with any of a number of wireless communication standards and protocols.
- The user computing entity 30 can communicate with various other entities using concepts such as Unstructured Supplementary Service data (USSD), Short Message Service (SMS), Multimedia Messaging Service (MMS), Dual-Tone Multi-Frequency Signaling (DTMF), and/or Subscriber Identity Module Dialer (SIM dialer).
- The user computing entity 30 can also download changes, add-ons, and updates, for instance, to its firmware, software (e.g., including executable instructions, applications, program modules), and operating system.
- The user computing entity 30 may include location determining aspects, devices, modules, functionalities, and/or similar words used herein interchangeably.
- The user computing entity 30 may include outdoor positioning aspects, such as a location module adapted to acquire, for example, latitude, longitude, altitude, geocode, course, direction, heading, speed, UTC, date, and/or various other information/data.
- The location module can acquire data, sometimes known as ephemeris data, by identifying the number of satellites in view and the relative positions of those satellites.
- The satellites may be a variety of different satellites, including LEO satellite systems, DOD satellite systems, the European Union Galileo positioning systems, the Chinese Compass navigation systems, Indian Regional Navigational satellite systems, and/or the like.
- The location information/data may be determined by triangulating the position in connection with a variety of other systems, including cellular towers, Wi-Fi access points, and/or the like.
- The user computing entity 30 may include indoor positioning aspects, such as a location module adapted to acquire, for example, latitude, longitude, altitude, geocode, course, direction, heading, speed, time, date, and/or various other information/data.
- Indoor aspects may use various position or location technologies including RFID tags, indoor beacons or transmitters, Wi-Fi access points, cellular towers, nearby computing devices (e.g., smartphones, laptops), and/or the like.
- Such technologies may include iBeacons, Gimbal proximity beacons, BLE transmitters, Near Field Communication (NFC) transmitters, and/or the like. These indoor positioning aspects can be used in a variety of settings to determine the location of someone or something to within inches or centimeters.
- The user computing entity 30 may also comprise one or more user input/output interfaces (e.g., a display 316 and/or speaker/speaker driver coupled to a processing element 308 and a touch screen, keyboard, mouse, and/or microphone coupled to a processing element 308 ).
- The user output interface may be configured to provide an application, browser, user interface, dashboard, webpage, and/or similar words used herein interchangeably executing on and/or accessible via the user computing entity 30 to cause display or audible presentation of information/data and for user interaction therewith via one or more user input interfaces.
- The user output interface may be updated dynamically from communication with the image analysis system 65 .
- The user input interface can comprise any of a number of devices allowing the user computing entity 30 to receive information/data, such as a keypad 318 (hard or soft), a touch display, voice/speech or motion interfaces, scanners, readers, or other input device.
- The keypad 318 can include (or cause display of) the conventional numeric (0-9) and related keys (#, *), and other keys used for operating the user computing entity 30 and may include a full set of alphabetic keys or set of keys that may be activated to provide a full set of alphanumeric keys.
- The user input interface can be used, for example, to activate or deactivate certain functions, such as screen savers and/or sleep modes. Through such inputs the user computing entity 30 can collect information/data, user interaction/input, and/or the like.
- The user computing entity 30 can also include volatile storage or memory 322 and/or non-volatile storage or memory 324 , which can be embedded and/or may be removable.
- The non-volatile memory may be ROM, PROM, EPROM, EEPROM, flash memory, MMCs, SD memory cards, Memory Sticks, CBRAM, PRAM, FeRAM, RRAM, SONOS, racetrack memory, and/or the like.
- The volatile memory may be RAM, DRAM, SRAM, FPM DRAM, EDO DRAM, SDRAM, DDR SDRAM, DDR2 SDRAM, DDR3 SDRAM, RDRAM, RIMM, DIMM, SIMM, VRAM, cache memory, register memory, and/or the like.
- The volatile and non-volatile storage or memory can store databases, database instances, database management system entities, data, applications, programs, program modules, scripts, source code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like to implement the functions of the user computing entity 30 .
- The networks 135 may include, but are not limited to, any one or a combination of different types of suitable communications networks such as, for example, cable networks, public networks (e.g., the Internet), private networks (e.g., frame-relay networks), wireless networks, cellular networks, telephone networks (e.g., a public switched telephone network), or any other suitable private and/or public networks.
- The networks 135 may have any suitable communication range associated therewith and may include, for example, global networks (e.g., the Internet), MANs, WANs, LANs, or PANs.
- The networks 135 may include any type of medium over which network traffic may be carried including, but not limited to, coaxial cable, twisted-pair wire, optical fiber, a hybrid fiber coaxial (HFC) medium, microwave terrestrial transceivers, radio frequency communication mediums, satellite communication mediums, or any combination thereof, as well as a variety of network devices and computing platforms provided by network providers or other entities.
- Certain embodiments are configured to execute various image analysis processes to determine various characteristics of the substantive contents of an image, and to execute one or more processes selected based at least in part on the determined contents of the images.
- Image analysis has historically required at least some amount of manual review to make objective and subjective determinations regarding the contents of those images. Particularly when reviewing images of multiple subjects having distinguishing features (e.g., photographing humans), structured rule-based analyses of those images may be difficult to automate in light of the distinguishing features present within each image. Moreover, the subject matter of individual images may have different perspectives (e.g., slight variations in orientation of an imaging device versus the subject matter of the image), which may impact the ability to perform objective comparisons between the subject matter of an individual image and corresponding analytical requirements. Such differences in image data contents inherently impede the use of automated systems and methods for analyzing contents of images.
- Various embodiments utilize a structured series of analytical modules, each utilizing rule-based and/or machine-learning based models, to identify relevant features within an image and to establish appropriate scales and/or reference points (e.g., based at least in part on the identified relevant features within the image), so as to enable objective determinations of various image characteristics.
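The structured series of analytical modules can be sketched as a simple pipeline in which each module either transforms an image record or rejects it. The module names and record format below are illustrative assumptions, not the patented implementation:

```python
# Hypothetical sketch of a structured module pipeline: each module either
# returns an updated image record or None to reject the image. Module
# names and the record format are illustrative assumptions.

def run_pipeline(record, modules):
    """Apply each analytical module in order; stop if one rejects."""
    for module in modules:
        record = module(record)
        if record is None:  # module flagged the image as unsuitable
            return None
    return record

# Example modules (placeholders for the rule-based/ML models in the text).
def check_color_profile(record):
    return record if record.get("color_ok") else None

def detect_features(record):
    record["features"] = ["eye", "nose"]  # stand-in for a detection model
    return record
```

Rejected images simply drop out of the pipeline, which mirrors the flagging/discarding behavior described for unsuitable image data files.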
- The image preprocessing methodologies comprise processes and/or corresponding modules for receiving a plurality of source data files.
- These source data files may be received from any of a variety of data sources, such as directly from individual user computing entities 30 , from external systems (e.g., electronic health record systems operating to store medical notes received from individual care providers relating to individual patients, and/or the like).
- The image analysis system 65 may utilize thresholds to differentiate between white, black, and other colors (e.g., those pixels having color values above a white threshold may be considered white; those pixels having color values below a black threshold may be considered black; and/or the like). It should be understood that other values for distinguishing between white, black, and other pixels may be utilized.
- The image analysis system 65 may be configured to generate a color profile for the image data file, indicating an overall percentage of white pixels, black pixels, and/or other pixels within the image data file. The image analysis system 65 may then compare the color profile for the image data file against one or more thresholds to identify those image data files comprising embedded images of suitable size for further analysis.
- The image analysis system 65 may determine whether the image profile for an image data file indicates that the image data file comprises a percentage of black pixels greater than a threshold percentage (e.g., 75%) and/or a percentage of white pixels greater than a threshold percentage (e.g., 75%) to determine whether the image data file contains embedded images of suitable quality for further analysis. Further to the above example, if the image data file comprises more black pixels than the threshold amount or if the image data file comprises more white pixels than the threshold amount, the image data file may be determined to be unsuitable for further analysis. The image analysis system 65 may then flag those image data files as irrelevant for further analysis (e.g., by updating metadata associated with those image data files with a relevant flag; by discarding those image data files from the data storage directory; and/or the like).
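As a rough sketch, the histogram color analysis described above might be implemented as follows; the per-channel white/black cutoffs are illustrative assumptions, while the 75% ceiling mirrors the example threshold in the text:

```python
# Hedged sketch of the histogram color analysis. The channel cutoffs
# (200 and 55) are illustrative assumptions; the 0.75 ceiling follows the
# 75% example threshold in the text.

WHITE_THRESHOLD = 200   # pixels with all channels above this count as white
BLACK_THRESHOLD = 55    # pixels with all channels below this count as black
MAX_FRACTION = 0.75     # reject files dominated by white or black pixels

def color_profile(pixels):
    """Return the fractions of white, black, and other pixels.

    ``pixels`` is an iterable of (r, g, b) tuples.
    """
    white = black = other = 0
    for r, g, b in pixels:
        if min(r, g, b) > WHITE_THRESHOLD:
            white += 1
        elif max(r, g, b) < BLACK_THRESHOLD:
            black += 1
        else:
            other += 1
    total = white + black + other
    return white / total, black / total, other / total

def suitable_for_analysis(pixels):
    """An image data file is flagged when white or black pixels dominate."""
    white, black, _ = color_profile(pixels)
    return white <= MAX_FRACTION and black <= MAX_FRACTION
```

A file such as the one in FIG. 5A, dominated by white page background, would fail this check, while a file like FIG. 5B would pass.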
- FIGS. 5 A- 5 B illustrate differences between an image data file having an embedded image unsuitable for further analysis ( FIG. 5 A ) and an image data file having an embedded image suitable for further analysis ( FIG. 5 B ). Because the image data file of FIG. 5 A has a high percentage of white pixels and a relatively small color photograph, the histogram color analysis determines that the number of white pixels exceeds a threshold value indicating the image data file is unsuitable for further analysis. By contrast, the image data file of FIG. 5 B does not have a high percentage of white pixels or black pixels, as determined by a histogram color analysis, indicating that the image data file includes a color photograph of sufficient size for further analysis.
- The image pre-processing steps may additionally determine whether identified embedded images are of a sufficient size for further analysis.
- The size of each embedded image may thus be identified (e.g., via image feature edge detection processes to identify the boundaries of each image), and the determined size of each image may be compared against a minimum image size (e.g., 400 pixels by 400 pixels). Those images having a size less than the threshold size may be discarded or otherwise excluded from further analysis.
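The size filter can be sketched as below, assuming an upstream edge-detection step supplies a bounding box for each embedded image; the 400-by-400-pixel minimum follows the example in the text:

```python
# Sketch of the minimum-size filter. Bounding boxes are assumed to come
# from an upstream edge-detection step; the 400x400 minimum follows the
# example threshold in the text.

MIN_WIDTH = 400
MIN_HEIGHT = 400

def large_enough(bounding_box):
    """bounding_box is (left, top, right, bottom) in pixel coordinates."""
    left, top, right, bottom = bounding_box
    return (right - left) >= MIN_WIDTH and (bottom - top) >= MIN_HEIGHT

def filter_embedded_images(boxes):
    """Keep only embedded images that satisfy the minimum size."""
    return [box for box in boxes if large_enough(box)]
```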
- The image pre-processing steps may additionally comprise detecting various features within an image to determine the orientation of the image.
- The image pre-processing steps may additionally comprise rotating the image to satisfy applicable image analysis criteria.
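The orientation-correction step can be sketched as rotating the pixel grid in 90-degree increments until a detected orientation satisfies the applicable criteria; the orientation detector is a placeholder for the feature-based detection described above:

```python
# Hedged sketch of the rotation step: rotate in 90-degree increments until
# the detected orientation matches the required one. The orientation
# detector is a stand-in for the feature-based detection models in the text.

def rotate_90_cw(grid):
    """Rotate a row-major pixel grid 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def orient_image(grid, detect_orientation, required):
    """Rotate up to three times until the detected orientation matches."""
    for _ in range(4):
        if detect_orientation(grid) == required:
            return grid
        grid = rotate_90_cw(grid)
    return None  # orientation requirement could not be satisfied
```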
- The image pre-processing steps may additionally comprise one or more additional quality-control processes, such as ensuring the image color profile is adequate for further analysis (e.g., determining whether a photograph is full-color or grey-scale, such as reflected in FIGS. 7 A- 7 B ), ensuring the images are photographs (or other appropriate imaging-device-generated images, rather than illustrations as reflected in FIGS. 8 A- 8 B ) for further analysis, and/or the like.
- The image preprocessing process may additionally comprise certain steps for identifying features within the image, and for ensuring that those features deemed necessary for a relevant image analysis process are present within the image and oriented properly within the image to enable an objective determination of characteristics of the image in accordance with a relevant image analysis model.
- The image preprocessing steps may comprise image analysis criteria indicating that a frontal-facial image is required for further analysis, and thus detecting (through machine-learning models) that the contents of an image do not satisfy applicable image analysis criteria may result in a determination that the image is inappropriate for further analysis.
- Certain of these processes may be performed as a part of the image analysis process as discussed herein.
- Certain additional preprocessing steps as discussed herein may be performed during or after image analysis processes as discussed herein.
- FIG. 4 provides a flowchart illustrating various steps involved in image analysis in accordance with certain embodiments.
- Image analysis may relate to analysis of specific features present within an image, and thus the process may begin with a determination of an appropriate analysis to be completed for a particular image, as indicated at Block 401 . It should be understood that the determination of an appropriate image analysis may be identified during image pre-processing in accordance with certain embodiments.
- the image analysis process to be utilized for certain embodiments may comprise image analysis criteria, which may specify particular features to be identified and/or to be utilized for measurements (relative or absolute) during image analysis.
- the image analysis criteria may additionally comprise and/or may specify particular models for use in identifying features within an image, for classifying features within an image, and/or the like.
- a determination of an appropriate image analysis may comprise specifying whether a blepharoptosis analysis, a sinusitis analysis, a scoliosis analysis, a skin lesion analysis, a vertebrae thinning analysis, a vertebroplasty analysis, a spinal canal analysis, a panniculectomy analysis, an orthognathic analysis, a breast surgery analysis, a rhinoplasty analysis, and/or the like is a most appropriate analysis for a particular image.
- image analyses are merely examples, and other analyses may be utilized for certain embodiments.
- the identification of a relevant image analysis process may comprise receipt of user input identifying a selection of an image analysis process. In other embodiments, the identification of a relevant image analysis process may proceed automatically by the image analysis system 65 , for example, by reviewing data provided together with the images (e.g., as a part of the source data file(s) provided and including the embedded images for analysis). It should be understood that any of a variety of methodologies may be provided for identifying a relevant image analysis process for performing the image analysis.
- the image analysis system 65 receives the images for further analysis in accordance with the identified image analysis process. It should be understood that the receipt of images may begin with preprocessing steps (including image selection), as discussed briefly above, such that only those images deemed sufficiently relevant and of sufficient quality for performing the image analysis process may be received and/or otherwise utilized for additional analysis. In yet other embodiments, the image analysis system 65 may receive a plurality of images which may be received together with corresponding priority scores for those images, thereby enabling a sequential processing methodology of those images, such that the images deemed to have a highest or best priority score are reviewed first, and lower-priority images are only reviewed if the higher-priority images do not yield a successful image analysis.
- the image analysis system 65 is configured to detect individual features relevant for the image analysis that are present within the image. Accordingly, utilizing the image analysis criteria received previously, the image analysis system 65 identifies individual features within the image that are relevant for further analysis.
- the image analysis criteria may indicate (e.g., within a list, a matrix, a reference table, and/or the like), specific features within an image to be identified, and individual entries within the feature listing may be associated with specific models (e.g., machine-learning based models) utilized for identifying those features within an image.
- the machine-learning model may comprise a convolutional neural network (CNN) configured for identifying individual features of an image.
- the machine-learning model may be trained utilizing supervised training data in accordance with certain embodiments, utilizing labels within training data images identifying specific features therein.
- the patient's nose, eyes, irises, and eyelids may be detected within the image.
- the image analysis system 65 may further assign image-based characteristics to features detected within an image.
- the image-based characteristics may be indicative of a relative location of a detected feature within an image.
- the image analysis system 65 may determine a relative location of the detected feature within the image. The location may be relative, such as by detecting multiple features having a similar classification and assigning relative locations to each detected feature (e.g., a feature identified as the most-left feature, such as by pixel-location, may be assigned a “left” designation).
- the location may be relative to the image itself, such as by assigning a “left” designation to a feature indicated as being to the left of a center-point of the image (e.g., by comparing a pixel-based location of the detected feature relative to a total number of pixels within the width of the image).
- Analogous configurations may be utilized for “right,” “top,” “bottom” and/or other location-based designations of features within an image.
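Both designation strategies described above can be sketched together. This is a hedged illustration (feature names and coordinates are hypothetical): the first approach labels a feature relative to other features of the same classification by pixel location, while the second compares each feature against the center-point of the image itself.

```python
def left_right_labels(features, image_width):
    """Assign 'left'/'right' designations in the two ways described above.

    features: dict mapping a feature name to the x pixel coordinate of
    its center. Returns (relative_labels, center_point_labels).
    """
    # Relative to other features of the same classification: the feature
    # with the smallest x coordinate is designated "left".
    ordered = sorted(features, key=lambda name: features[name])
    relative = {ordered[0]: "left", ordered[-1]: "right"}

    # Relative to the image itself: compare the pixel-based location
    # against the center-point of the image width.
    center = {name: ("left" if x < image_width / 2 else "right")
              for name, x in features.items()}
    return relative, center

# Two detected eyes in a 640-pixel-wide image (hypothetical coordinates).
rel, cen = left_right_labels({"eye_a": 120, "eye_b": 420}, image_width=640)
```

Analogous logic on the y axis would yield "top"/"bottom" designations.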
- the image analysis system 65 may be configured to identify a percentage value corresponding to a position of a sub-feature of a feature (e.g., indicating a percentage-closed that an eyelid is, such as 30% closed, 80% closed, 100% closed, and/or the like).
- Identifying the status of features may be performed utilizing a machine-learning model configured for comparing the image as a whole, or a portion of an image (e.g., a portion of an image containing a detected feature) against a training set providing example images of corresponding features in various states. For example, an initial determination of a state of a particular feature may be determined utilizing a binary classification model (e.g., a convolutional neural network) trained to classify identified features into one of two states.
- the binary classification model may be utilized to classify the patient's eye as “open” or “closed.”
- the image analysis system 65 may utilize cropped partial images placed around a detected feature, such as an individual detected eye within the image (and removing the surrounding portion of the image, including other portions of the patient's face) as input to the binary classification, such that a classification may be assigned to each individual detected feature.
- the image analysis system 65 may store data indicative of the resulting classification assigned to the feature within a data file corresponding with the image.
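The per-feature cropping step described above can be sketched as follows; the classifier itself is omitted here, and the bounding-box coordinates are hypothetical. The point of the crop is that each detected feature (e.g., an individual eye) is classified in isolation, with the surrounding portion of the image removed.

```python
def crop_feature(image, bbox):
    """Crop a rectangular patch around a detected feature so that the
    patch, rather than the whole image, can be fed to a binary
    classification model.

    image: 2-D list of rows of pixel values.
    bbox: (top, left, bottom, right), with bottom/right exclusive.
    """
    top, left, bottom, right = bbox
    return [row[left:right] for row in image[top:bottom]]

# A toy 4x4 "image"; crop the 2x2 region containing a detected feature.
img = [[0, 0, 0, 0],
       [0, 1, 2, 0],
       [0, 3, 4, 0],
       [0, 0, 0, 0]]
patch = crop_feature(img, (1, 1, 3, 3))
```

Each resulting patch would then receive its own classification (e.g., "open" or "closed"), stored against the corresponding feature.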
- the classification model may be configured to assign one of more than two possible states to a detected feature.
- the possible states may be indicated as “open,” “closed,” or “almost closed.”
- the feature classification determined for one or more features within an image may be utilized for determining an appropriate routing of the image for further review. For example, an “almost closed” determination may be utilized to route the image to a human reviewer, because the state of the patient's eye may not be suitable for automated measurements of various features as discussed in greater detail herein.
- the image analysis system 65 is configured to utilize one or more machine-learning models to identify a plurality of sub-features within the identified feature, thereby enabling measurements (e.g., absolute measurements based on detected reference scales or relative measurements between sub-features).
- Additional feature states may be detected in certain embodiments, which may utilize additional binary (or other discrete) classifications to provide a multi-stage classification detection system.
- Other, non-discrete classifications may be utilized in certain embodiments, such as detecting a relative position/orientation of a sub-feature.
- a feature state classification may comprise an eye gaze determination model, which may be utilized to detect the relative direction of a patient's eye gaze relative to a camera. This non-discrete classification system may be utilized together with discrete state assignment criteria, such as to assign an image classification as to whether the patient is/is not gazing directly at the camera utilized to generate the image.
- the image analysis system 65 identifies and tags features and/or sub-features as reference points within the image to enable further analysis (as indicated at Block 405 of FIG. 4 ), such as absolute-measurement-based analysis, relative-measurement-based analysis, feature classification-based analysis, and/or the like. Certain features and/or sub-features may be identified as a part of the feature-state classification methodologies as discussed above, and accordingly the image analysis system 65 may simply tag these identified features for further analysis.
- the image analysis system 65 may be configured to perform relative-measurement based image analysis for particular features identified within an image.
- the relative-measurement-based image analysis may comprise determining relative positions of multiple features, sub-features, and/or the like within an image, determining characteristics of a particular feature based at least in part on a matched template corresponding with the feature, and/or the like.
- the relative location of detected features within an image may be determined, such as by determining a relative orientation of the features (e.g., whether the features are aligned relative to one another, a percentage of misalignment/alignment relative to one another, a relative orientation (e.g., in degrees, radians, and/or the like) of features), and/or the like.
- edges of each feature may be detected, as discussed above.
- the detected edges of each feature may then be utilized to compare the detected features against one or more templates (e.g., through template matching processes discussed above) that may be correlated with feature characteristics, such as a feature orientation (e.g., defining a top of the feature, defining a front of the feature, and/or the like).
- one or more reference planes may be associated with the feature within the image (that may be utilized for determining a relative orientation to other features, that may be utilized to determine a degree of alignment/misalignment relative to other features, and/or the like).
- the image analysis system 65 may be configured to detect an angle between reference planes corresponding with each of the detected features.
- the detected angle may be compared against angle-based criteria, such as angular thresholds that may be utilized for classifying the image (e.g., as reflected within Block 407 of FIG. 4 ).
- upon determining that the detected angle is above an angle threshold, the image may be assigned a first classification, and upon determining that the detected angle is below the angle threshold, the image may be assigned a second classification.
- the overall image classification may be utilized to determine appropriate further processing steps for the image, such as appropriate image routing for further analysis (e.g., to an appropriate human based on automated detections of images), to initiate payment based on documentation including the overall image, to update records corresponding with the image (e.g., update patient records to reflect a diagnosis determined based on the image analysis), and/or the like.
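The angle-based classification described above can be sketched as follows, assuming each reference plane is represented as a 2-D line through two (x, y) points; the threshold value and example coordinates are hypothetical placeholders.

```python
import math

def plane_angle_deg(p1, p2, q1, q2):
    """Angle in degrees between two reference planes, each given as a
    2-D line through two (x, y) points; orientation-only, so the result
    is in [0, 90]."""
    a1 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    a2 = math.atan2(q2[1] - q1[1], q2[0] - q1[0])
    diff = abs(a1 - a2) % math.pi
    return math.degrees(min(diff, math.pi - diff))

def classify_by_angle(angle, threshold):
    """First classification above the threshold, second below it."""
    return "first" if angle > threshold else "second"

# Two reference planes: one horizontal, one tilted (~21.8 degrees).
angle = plane_angle_deg((0, 0), (10, 0), (0, 0), (10, 4))
label = classify_by_angle(angle, threshold=10.0)
```

The resulting classification could then drive the routing and record-update steps discussed above.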
- the image analysis system 65 may be configured to detect a percentage of each of two or more features that are overlapping (e.g., by identifying a number of pixels between corresponding reference planes of each feature that are overlapping within at least one direction (e.g., a vertical direction) relative to a total number of pixels between reference planes corresponding with each detected feature).
- the detected percentage of alignment/misalignment may be utilized for classifying the image (e.g., as reflected within Block 407 of FIG. 4 ).
- upon determining that the percentage of alignment is above an alignment threshold, the image may be assigned a first classification, and upon determining that the percentage of alignment is below the alignment threshold, the image may be assigned a second classification.
- the overall image classification may be utilized to determine appropriate further processing steps for the image, such as appropriate image routing for further analysis (e.g., to an appropriate human based on automated detections of images), to initiate payment based on documentation including the overall image, to update records corresponding with the image (e.g., update patient records to reflect a diagnosis determined based on the image analysis), and/or the like.
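A pixel-count-based overlap percentage of the kind described above can be sketched for the one-dimensional (vertical) case. This is a simplified stand-in: each feature is reduced to the (top, bottom) pixel span between its reference planes, and the overlap is taken relative to the larger span.

```python
def vertical_alignment_pct(span_a, span_b):
    """Percentage of vertical overlap between two features, each given
    as a (top, bottom) pixel span between its reference planes."""
    overlap = min(span_a[1], span_b[1]) - max(span_a[0], span_b[0])
    overlap = max(overlap, 0)  # disjoint spans overlap by zero pixels
    total = max(span_a[1] - span_a[0], span_b[1] - span_b[0])
    return 100.0 * overlap / total

# Two 100-pixel spans offset by 50 pixels overlap by 50%.
pct = vertical_alignment_pct((100, 200), (150, 250))
```

Comparing `pct` against an alignment threshold would yield the first or second classification discussed above.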
- an image analysis process may be utilized for detecting relative sizes of detected features (or between various portions of a detected feature) and/or sub-features within an image for providing an image classification.
- two or more features (or sub-features) may be detected within an image, wherein a first feature is located entirely within a second feature.
- in such embodiments, the relative size of the first feature within the second feature (e.g., the percentage of a region within the second feature that is attributable to the first feature) may be determined.
- a fill percentage threshold may be assigned to the image, and the detected fill percentage (e.g., the percentage of the second feature that is filled by the first feature) may be utilized for classifying the image.
- upon determining that the fill percentage is above a fill percentage threshold, the image may be assigned a first classification, and upon determining that the fill percentage is below the fill percentage threshold, the image may be assigned a second classification.
- the overall image classification may be utilized to determine appropriate further processing steps for the image, such as appropriate image routing for further analysis (e.g., to an appropriate human based on automated detections of images), to initiate payment based on documentation including the overall image, to update records corresponding with the image (e.g., update patient records to reflect a diagnosis determined based on the image analysis), and/or the like.
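The fill-percentage classification described above reduces to a simple ratio once the pixel areas of the two features are known. In this hedged sketch, the areas and threshold are hypothetical placeholders (e.g., an inner feature occupying part of a containing region):

```python
def fill_percentage(inner_area, outer_area):
    """Percentage of the second (outer) feature that is filled by the
    first (inner) feature, where the inner feature lies entirely
    within the outer feature."""
    return 100.0 * inner_area / outer_area

def classify_by_fill(pct, threshold):
    """First classification above the threshold, second below it."""
    return "first" if pct > threshold else "second"

# e.g. an inner feature of 1,800 px within an outer feature of 4,000 px.
pct = fill_percentage(1800, 4000)
label = classify_by_fill(pct, threshold=30.0)
```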
- an image analysis process may be utilized for detecting relative sizes of detected features within an image for providing an image classification at least in part by determining a size ratio between two or more detected features (or between various portions of a single detected feature).
- two or more features may be detected within an image, wherein a first feature does not overlap with a second feature (i.e., the first feature and the second feature are discrete features identifiable within the image).
- the ratio of the size of the first feature to the size of the second feature (e.g., by overall area, by length, by width, and/or the like) may be utilized for classifying the image.
- the image analysis process may utilize a reference ratio for classifying the image, such that determined ratios of sizes between the first feature and the second feature that significantly deviate from the reference ratio may receive a first classification, whereas images reflecting a ratio between the first feature and the second feature that at least substantially matches the reference ratio may receive a second classification.
- determining whether the detected ratio between the first feature and the second feature deviates significantly from the reference ratio may be determined based at least in part on a threshold (e.g., based at least in part on a standard deviation of a sample data set relative to the reference ratio) and/or may be determined via a machine-learning model.
- an image of an underside of a patient's nose may be utilized to determine a ratio of sizes between a patient's nasal openings.
- a determination that the size ratio between the patient's nasal openings is significantly different than 1:1 may be indicative of a nasal deformity, such as a nasal cavity collapse that may require a surgical rhinoplasty to correct.
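The reference-ratio comparison described above can be sketched with a fixed deviation tolerance. The tolerance here is an arbitrary placeholder (the source notes the threshold may instead be derived from a standard deviation of a sample data set or via a machine-learning model), and the nasal-opening areas are hypothetical:

```python
def ratio_deviates(size_a, size_b, reference_ratio=1.0, tolerance=0.2):
    """Return True if the ratio of the two feature sizes deviates
    significantly (beyond the tolerance) from the reference ratio."""
    ratio = size_a / size_b
    return abs(ratio - reference_ratio) > tolerance

# Nasal openings of 320 vs. 180 px^2: ratio ~1.78, far from 1:1,
# so the deviation may be indicative of a nasal deformity.
asymmetric = ratio_deviates(320, 180)
symmetric = ratio_deviates(200, 195)
```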
- relative-measurement-based image analysis may utilize assigned classifications to certain features (or sub-features) for classifying the overall image.
- an identified feature within an image may be compared against one or more templates each having corresponding image classifications associated therewith. Based on the template identified as having a highest match score, the image classification corresponding with the template may be applied to the image.
- certain embodiments may be configured to identify interruptions within a feature (e.g., to identify an indention within a surface of a feature) and/or to determine a relative depth of an interruption within a surface of a feature.
- the image analysis process may utilize a relative depth criteria for identifying whether a particular image satisfies a particular criteria.
- an image analysis process may identify indentions within a patient's shoulders caused by bra straps, as a proxy for determining whether a patient's breasts are sufficiently heavy to justify breast surgery.
- the image analysis process may generate a classification to be assigned to an image that may be indicative of whether a claim for breast surgery is likely to be approved.
- the image analysis system 65 is configured to provide one or more image analysis tools that may be accessible to one or more user computing entities 30 (e.g., via a web-browser based access, via a software program installed on the user computing entities, and/or the like).
- the image analysis tools may be configured for generating and/or otherwise providing a graphical user interface providing user-readable data regarding the results of the image analysis discussed above.
- the image analysis tool may be configured for providing a series of interactive user interfaces that may be provided to a user computing entity 30 and may be configured to enable the user to provide various user input.
- a first user interface may be configured to enable a user to upload a file/document having one or more embedded images therein.
- the image analysis tool may then provide the uploaded files/documents to one or more preprocessing and/or image analysis systems 65 as discussed herein.
- the image analysis tool may then display one or more additional user interfaces to the user computing entity 30 , such as a user interface indicating that the image analysis is being performed, and/or one or more user interfaces indicating the results of the image analysis, including annotations provided on the image at relevant portions of the image to provide additional indications to the user of the results of the image analysis.
- the generated annotations may comprise graphical markers of identified features (e.g., a line illustrating a detected edge of a feature or sub-feature, a point indicating a point of reference for measurements, a boundary following a detected edge of a feature or sub-feature, a line indicating an estimated location of feature, a line indicating an estimated centerline of a feature, and/or the like).
- the image analysis tool may be configured to selectively overlay one or more reference features, such as a reference grid having constant-sized grid blocks such as that shown in FIG. 17 . These constant-sized grid blocks may be sized based at least in part on a detected reference size, thereby enabling a user to self-measure distances between various features within the image.
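Sizing the constant-sized grid blocks from a detected reference scale, as described above, amounts to converting a block size in millimetres into a pixel spacing. A minimal sketch (the scale value is hypothetical, e.g., one derived from a detected iris diameter):

```python
def grid_line_positions(image_extent_px, mm_per_pixel, grid_mm=1.0):
    """Pixel positions of reference grid lines along one axis, with
    constant-sized blocks of grid_mm per block, sized from the detected
    reference scale so a user can self-measure distances."""
    spacing_px = grid_mm / mm_per_pixel
    positions, x = [], 0.0
    while x <= image_extent_px:
        positions.append(round(x))
        x += spacing_px
    return positions

# At 0.125 mm/px, 1 mm grid blocks are 8 px apart.
lines = grid_line_positions(32, 0.125)
```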
- the various annotations may be selectively applied to an image, such as in accordance with user-input selecting one or more annotations to be applied to the image.
- the image analysis tool may be provided with a user interface comprising one or more interactive elements enabling a user to select one or more annotations to be applied to a displayed image.
- a user may manipulate annotations already present within an image, and/or the user may provide annotations or other indications of approval of use of images and/or attestation of various provided annotations.
- the image analysis tool may additionally comprise one or more features enabling users to generate custom reports, such as comprising images having overlaid annotations as discussed above, that may be output and provided to the user computing entity 30 .
- the annotated images may comprise images having overlaid shapes corresponding to identified features (e.g., illustrating an identified iris, an identified eyelid, and an identified measurement distance), and may be provided in various cropped image sizes, such as illustrating a pair of eyes as shown in FIG. 18 A or a single eye, as shown in FIG. 18 B .
- the generated reports may be provided to the user computing entity 30 for local storage thereon. In other embodiments, the generated reports may be provided to the user computing entity 30 for printing a physical report. It should be understood that these and/or other features for providing reports generated via the image analysis tool may be utilized in certain embodiments.
- the image analysis system 65 is configured to route images (and/or the documents/reports/files) to one of a plurality of routing locations based on an image classification assigned during the above-mentioned image analysis. For example, a document/report/file having at least one image embedded therein and having a first classification may be routed to a first location (e.g., transmitted to a first user computing entity 30 ) and a document/report/file having at least one image embedded therein and having a second classification may be routed to a second location (e.g., transmitted to a second user computing entity 30 ).
- the image analysis system 65 may be utilized for routing images (and corresponding reports/documents) to appropriate healthcare review staff based on an automated determination that a claim for which the document/report is generated has been indicated as meeting the criteria for approval of coverage or indicated as not meeting the criteria for approval of coverage (e.g., through an initial automated approval process).
- the automated determination of whether a claim has been indicated as meeting the criteria for approval of coverage may be accomplished based on a classification assigned to at least one image embedded within a report/document corresponding with a claim.
- the claim including embedded images analyzed as discussed above may be routed to a nurse for a final determination of whether the claim has met the criteria for approval of coverage, while those claims indicated as not meeting the applicable criteria for approval of coverage based on the initial automated approval process may be routed to a licensed physician for final determination that the claim has not met the applicable criteria for approval of coverage.
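The routing rule described above is a straightforward dispatch on the automated coverage determination. A minimal sketch (the reviewer labels stand in for transmissions to the corresponding user computing entities 30):

```python
def route_claim(meets_criteria):
    """Route a claim based on the initial automated approval process:
    claims indicated as meeting the criteria for approval of coverage
    go to a nurse for final approval; claims indicated as not meeting
    the criteria go to a licensed physician for final determination."""
    return "nurse" if meets_criteria else "physician"

reviewer = route_claim(True)
```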
- An image analysis system 65 may be configured to automatically diagnose various medical conditions of a patient that may be diagnosed based on images generated for the patient. Such images may comprise photographs for certain conditions, MRI-images for certain conditions, X-ray images for certain conditions, and/or other image types that may be useful for diagnosing other medical conditions. As just one example, the image analysis system 65 may be configured for diagnosing Blepharoptosis utilizing photographs of a patient's face. Specifically, the image analysis system 65 may be configured to detect the location of a patient's iris within each eye, and to measure a distance between a center of the patient's iris and the lower edge of the patient's upper eye-lid while the patient's eyes are fully open.
- upon determining that the image satisfies a Blepharoptosis diagnosis criteria, such as a determination that the measured distance is less than a Blepharoptosis diagnosis threshold (e.g., an applicable Blepharoptosis diagnosis threshold selected from a plurality of available Blepharoptosis diagnosis thresholds based at least in part on data associated with an analyzed image, such as patient-specific data indicative of a patient location, a patient age, and/or the like), the image analysis system 65 is configured to generate data indicating a positive Blepharoptosis diagnosis.
- the image analysis system 65 is configured to generate data indicative of a negative Blepharoptosis diagnosis.
- the image analysis system 65 is configured to route the image (and/or any report/documentation containing the image) to an appropriate user computing entity 30 for determination of whether the claim has met the appropriate criteria for approval relating to payment of treatment for Blepharoptosis.
- images to be analyzed for a Blepharoptosis diagnosis may be extracted from reports and/or other documentation uploaded by a care provider so as to automatically determine a diagnosis based at least in part on the images included in the report, and to thereby determine the accuracy of the report.
- Those reports and/or other documentation may comprise one or more images, text, and/or other documentation content, and accordingly the image analysis system 65 (alone or in combination with a separate image pre-processor) may extract images from the documentation, identify whether one or more of the extracted images are of sufficient quality for further analysis, and pre-process the images to facilitate further analysis (e.g., by resizing images, by sharpening images, by rotating images, and/or the like), such as to enable application of machine-learning based models for identifying particular features and sub-features within the images. As discussed herein, such preprocessing may additionally comprise determining whether the patient's face is properly oriented within the image, determining whether the image has an appropriate color profile, and/or the like.
- the image analysis system 65 is configured to determine whether the patient's eyes are open, closed, or almost closed. Each of these discrete states may be identified by comparing the images against training data tagged with identical classification tags, thereby enabling the image analysis system 65 to determine whether the images of the patient's eyes most closely match the training data images of open eyes, closed eyes, or almost closed eyes.
- the image analysis system 65 additionally identifies the patient's iris in each eye for further analysis. To identify irises in a facial image, the image analysis system 65 utilizes an iris template repository comprising a plurality of iris templates of varying shapes that may be utilized to match to shapes detected within a facial image to indicate the location of the patient's iris, as well as a center-point of the patient's iris, even when the patient's iris is partially covered by the patient's eyelid.
- detection of an iris may be performed utilizing the cropped images of individual eyes of the patient, and applying one or more edge detection methodologies (e.g., within a grayscale image, within a masked image, within a color-intensity-mapped image, and/or the like).
- the image analysis system 65 utilizes circle-identification models to identify circular shapes within the cropped eye images, generates a binary mask of the circular shape identified within the cropped eye image, and identifies a closest matching iris template to the binary-masked image to determine an iris template most appropriate for the cropped eye image.
- the image analysis system 65 is configured to convert a red-channel within the original image to binary (e.g., black and white), and the resulting binary image may be utilized for comparison with templates (e.g., utilizing a convolutional neural network to compare the binary mask against templates to identify a closest-matching template to be utilized for further analysis).
- By matching an iris template with each eye within the facial image, the image analysis system 65 identifies boundaries of the patient's irises within the facial image. Moreover, the image analysis system 65 stores data indicating a patient's iris is to be estimated to have a 10 mm diameter. Thus, by identifying the edges of the patient's irises within the image, the image analysis system 65 is configured to determine a scale to be utilized for absolute measurements within the image, by determining the number of pixels across the diameter of the patient's iris, and correlating the determined pixel-based distance across the patient's iris with a 10 mm measurement, thereby enabling a determination of the absolute measurement distance to be correlated with each pixel within the image.
- the image analysis system 65 is configured to determine a center-point of the patient's irises (for each eye), even though a portion of each iris may be covered within the image by the patient's eyelid. For example, the center point of each eye may be identified geometrically, based on identified locations of edges of an identified iris.
- Based on the determined center point of each iris, and the lower edge of each eyelid, the image analysis system 65 performs an absolute measurement of the distance between the center point of each iris and the corresponding lower edge of the upper eyelid (referred to as the MRD-1 measurement; an example MRD-1 measurement is visible within FIG. 18 B ).
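The iris-derived scale and the MRD-1 measurement described above can be sketched as follows, using the stored 10 mm iris-diameter estimate; the pixel coordinates in the example are hypothetical.

```python
IRIS_DIAMETER_MM = 10.0  # stored estimate of an iris diameter, per the text

def mm_per_pixel(iris_diameter_px):
    """Absolute measurement scale derived from the number of pixels
    across the detected iris diameter."""
    return IRIS_DIAMETER_MM / iris_diameter_px

def mrd1_mm(iris_center_y, upper_lid_edge_y, iris_diameter_px):
    """MRD-1: distance from the iris center point to the lower edge of
    the upper eyelid, converted from pixels to millimetres."""
    pixels = abs(iris_center_y - upper_lid_edge_y)
    return pixels * mm_per_pixel(iris_diameter_px)

# An iris spanning 80 px yields 0.125 mm per pixel; a 20 px
# center-to-lid distance then corresponds to an MRD-1 of 2.5 mm.
scale = mm_per_pixel(80)
mrd1 = mrd1_mm(300, 280, 80)
```

The resulting MRD-1 value would then be compared against the stored Blepharoptosis diagnosis distance threshold discussed below.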
- the image analysis system may be additionally configured to measure an MRD-2 distance, as the distance between the center of the iris and a detected upper edge of a lower eyelid, which may be detected in a manner similar to that discussed herein for detection of the lower edge of the upper eyelid.
- the image analysis system 65 may store a Blepharoptosis diagnosis distance threshold that may be utilized for comparison to determine whether the image is indicative of a positive or a negative Blepharoptosis diagnosis.
- upon determining that the measured MRD-1 distance satisfies the stored threshold, the image analysis system 65 may be configured to automatically provide a positive Blepharoptosis diagnosis; otherwise, the image analysis system 65 is configured to automatically provide a negative Blepharoptosis diagnosis.
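A minimal sketch of this threshold comparison follows. The 2.0 mm threshold is a placeholder assumption (the text does not state a value), and the direction of comparison (MRD-1 below the threshold yielding a positive diagnosis) is likewise an assumption.

```python
# Hypothetical stored threshold; the text does not state a value, and the
# direction of comparison (below threshold => positive) is assumed.
BLEPHAROPTOSIS_THRESHOLD_MM = 2.0

def blepharoptosis_diagnosis(mrd1_mm: float) -> str:
    """Return "positive" when the measured MRD-1 distance falls below the threshold."""
    return "positive" if mrd1_mm < BLEPHAROPTOSIS_THRESHOLD_MM else "negative"
```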
- the image analysis system 65 is configured to route the image (and the containing report and/or document) to an appropriate user computing entity 30 corresponding to an approver (e.g., a nurse or a physician).
- the image analysis system 65 may be configured to route the image (and the containing report and/or document) to a first user computing entity 30 upon detecting a positive Blepharoptosis diagnosis, and to route the image (and the containing report and/or document) to a second user computing entity 30 upon detecting a negative Blepharoptosis diagnosis.
- the image (and the containing report and/or document) may be annotated automatically by the image analysis system 65 to facilitate human review.
- the annotations may comprise an annotated MRD-1 distance, a grid having blocks of a measured size based on the detected iris size (e.g., 1 mm grid blocks), an overlay of the detected iris, an overlay of the detected lower edge of the upper eyelids, and/or the like.
- those annotations may be selectively added and/or reviewed via user input to appropriate interactive elements of an image analysis tool to further facilitate human review.
- An example annotated figure is shown in FIG. 17 .
- the image analysis system 65 may be configured for executing one of a plurality of image analysis processes, which may be selected based at least in part on user input, based at least in part on detected features within an image, based at least in part on detected text within a report/document containing the image, and/or the like.
- the image analysis system 65 may be configured for automatically detecting sinus opacification (the absence of air within the defined bone contours of the sinus cavity, which may be indicative of sinusitis) based on CT images, utilizing an image analysis process as reflected within the images of FIGS. 19 A- 19 E .
- the image analysis system 65 may first extract images provided for analysis and determine whether the images are of sufficient quality and content to enable a finding of sinus opacification. Such images may be extracted from documents/reports or other data source files, as reflected at FIG. 19 A . However, because the finding is based on a CT image, rather than a photograph, the image analysis system 65 may be configured for implementing a different pre-processing and image quality check process than the Blepharoptosis diagnosis process discussed above.
- the image analysis system 65 may be configured to determine whether the CT image is of a sufficient size (e.g., in pixels) to enable an accurate determination of a finding of sinus opacification. Moreover, as discussed above, the image analysis system 65 may perform one or more image editing processes, such as image rotation, image sharpening, and/or the like to enable a more accurate determination of a finding of sinus opacification.
- the image analysis system 65 may utilize one or more machine-learning based models for determining whether the content of the extracted image contains images of a patient's sinuses with sufficient clarity and orientation so as to enable an accurate assessment of whether the patient is suffering from sinus opacification, which may be indicative of sinusitis.
- the image analysis system 65 may utilize a convolutional neural network to compare the contents of the extracted images with an image training set to determine whether the extracted image matches images within the training set that are deemed sufficient to enable a finding of sinus opacification.
- the image analysis system 65 applies a machine-learning-based (e.g., convolutional neural network) sinus detection model to identify the sinus cavities within the image.
- the sinus detection models may be configured to identify portions of the image having a known sinus cavity-shape, to identify relationships between various expected cavity shapes reflected within the image to identify a likely location of a sinus cavity within the image, and/or the like.
- the image analysis system 65 may then identify boundaries of each detected sinus cavity, for example, utilizing edge detection methodologies for identifying changes in image color intensity around an area identified as encompassing a sinus cavity.
- the image analysis system 65 is configured to generate an overlay (e.g., a color overlay, an image mask, and/or the like) having a shape, size, and location corresponding with the identified location of each sinus cavity within the image.
- the location, size, and boundaries of each sinus cavity may be stored (e.g., in metadata associated with the image) for further analysis.
- the image analysis system 65 may provide a bounding box surrounding the identified sinus cavities, such that the images may be cropped for further analysis.
- the image analysis system 65 identifies tissue linings within the sinus cavity as reflected in FIG. 19 D .
- the tissue linings may be identified as having a different color intensity within the image at a location determined to be within each identified sinus cavity.
- the image analysis system 65 utilizes a sinusitis prediction engine to determine whether the identified sinus cavities and the identified tissue within each sinus cavity are collectively indicative of a sinus opacification (which may be indicative of a sinusitis condition).
- the sinusitis prediction engine may utilize a relative-measurement analysis configuration for determining whether sinus opacification finding criteria are satisfied.
- the image analysis system 65 may store data indicative of a size (e.g., based on number of pixels, pixel-based area measurements, and/or the like) of the sinus cavities.
- the image analysis system 65 may similarly determine and store data indicative of a size (e.g., as a pixel-based measurement) of the area encompassed within identified boundaries of the tissue within the sinus cavity.
- the sinusitis prediction engine may encompass one or more machine-learning based (e.g., convolutional neural network based) models for determining whether a particular image is indicative of sinus opacification.
- the sinusitis prediction engine may utilize a training data set (e.g., a labeled training data set) to determine a relevant percentage of coverage of the sinus cavity by the sinus tissue that is indicative of a finding of sinus opacification. The sinusitis prediction engine may then calculate a sinus coverage percentage for the embedded image under analysis to determine whether the embedded image satisfies the relevant sinus opacification finding criteria, which may be provided in the form of a threshold sinus coverage percentage.
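The relative-measurement criterion above reduces to a simple area ratio between the stored pixel-based measurements. A sketch, with the 50% threshold as a placeholder assumption for the value a trained model would establish:

```python
def sinus_coverage_percentage(cavity_area_px: int, tissue_area_px: int) -> float:
    """Percentage of the identified sinus cavity area occupied by identified
    tissue, using the stored pixel-based area measurements."""
    if cavity_area_px <= 0:
        raise ValueError("cavity area must be positive")
    return 100.0 * tissue_area_px / cavity_area_px

def sinus_opacification_finding(cavity_area_px: int, tissue_area_px: int,
                                threshold_pct: float = 50.0) -> bool:
    """Positive finding when coverage meets the threshold percentage
    (50% is a placeholder, not a value from the text)."""
    return sinus_coverage_percentage(cavity_area_px, tissue_area_px) >= threshold_pct
```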
- Other sinus opacification finding criteria may be implemented in certain embodiments, such as a percentage of a boundary of a sinus cavity in contact with identified sinus tissue, and/or the like.
- the sinus opacification criteria may be implemented as relative-measurement-based analysis based on multiple identified features or sub-features (e.g., the sinus cavities and corresponding sinus tissue).
- Upon determining whether the embedded image satisfies sinus opacification finding criteria, the image analysis system 65 assigns diagnosis data to be associated with the embedded image (and/or the documents/reports comprising the embedded image). The image analysis system 65 may thus generate and transmit data indicative of the resulting automatically generated diagnosis to the user computing entity 30 providing the embedded image, and may provide the document/report containing the embedded image, together with the embedded image (which may be annotated, such as with a visual overlay of the sinus cavity and sinus tissue, as well as a visual indication of the percentage of the sinus cavity occupied by the sinus tissue, as reflected at FIG. 19 E ).
- the image analysis system 65 may route the embedded image (e.g., together with the document/report containing the embedded image) to an appropriate user computing entity 30 for final approval of the determined diagnosis.
- the embedded image may be routed to a first user computing entity 30 (e.g., operated by a nurse) for approval, and upon determining a negative finding of sinus opacification, the embedded image (e.g., together with the document/report containing the embedded image) may be routed to a second user computing entity 30 (e.g., operated by a physician) for evaluation.
- the image analysis system 65 may provide one or more image analysis tools, such as a user interface having a plurality of user input elements configured to enable a user to selectively apply one or more overlays to the embedded image that are indicative of the results of the sinus opacification analysis.
- the image analysis system 65 may be configured to generate a visual overlay that may be selectively applied or removed from the embedded image to indicate the size, shape, and location of each of the identified sinus cavities and the identified sinus tissue.
- the image analysis tool may automatically calculate a percentage of each sinus cavity occupied by the sinus tissue, and display the calculated percentage within the embedded image.
- the image analysis system 65 may be configured for executing one of a plurality of image analysis processes, which may be selected based at least in part on user input, based at least in part on detected features within an image, based at least in part on detected text within a report/document containing the image, and/or the like.
- the image analysis system 65 may be configured for diagnosing scoliosis based on X-ray images.
- the image analysis system 65 may first extract images provided for analysis and determine whether the images are of sufficient quality and content to enable a diagnosis of scoliosis. However, because the diagnosis is based on an X-ray image, rather than a photograph or a CT image, the image analysis system 65 may be configured for implementing a different pre-processing and image quality check process than the Blepharoptosis diagnosis and sinus opacification processes discussed above.
- the image analysis system 65 may be configured to determine whether the X-ray image is of a sufficient size (e.g., in pixels) to enable an accurate determination of a scoliosis diagnosis. Moreover, as discussed above, the image analysis system 65 may perform one or more image editing processes, such as image rotation, image sharpening, and/or the like to enable a more accurate determination of a scoliosis diagnosis.
- the image analysis system 65 may utilize one or more machine-learning based models for determining whether the content of the extracted image contains images of a patient's spine with sufficient clarity and orientation so as to enable an accurate assessment of whether the patient is suffering from scoliosis.
- the image analysis system 65 may utilize a convolutional neural network to compare the contents of the extracted images with an image training set to determine whether the extracted image matches images within the training set that are deemed sufficient to enable a diagnosis of scoliosis.
- the image analysis system 65 applies a machine-learning based (e.g., convolutional neural network) spinal detection model to identify the spine within the image.
- the spinal detection model may be configured to identify portions of the image having a known spinal shape, to identify known relationships between various expected shapes represented within the X-ray image (e.g., where identified ribs connect with a larger elongated component—the spine—within the image), and/or the like.
- the spinal shape may be identified as a region having a color intensity deviation of less than a defined threshold percentage.
- the acceptable color intensity deviation may be identified by an appropriate machine-learning based model trained via a training data set in which the spine is identified in each image within the training data set, thereby enabling the machine-learning model to identify color intensity deviations within the indicated boundaries of an identified spine within each image of the training data set, so as to establish a maximum color deviation likely to be indicative of a continuous spine.
- the maximum color deviation may accommodate differences in color intensity in the X-ray image due at least in part to the presence of cartilage disks between individual bone vertebrae of the spine.
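The continuity check described above can be sketched as a bounded intensity-deviation test. The 15% maximum deviation below is a placeholder for the value the trained model would establish; as the text notes, that learned maximum must accommodate the moderate deviations introduced by cartilage disks between vertebrae.

```python
def is_continuous_spine(region_intensities: list, max_deviation_pct: float = 15.0) -> bool:
    """Treat a candidate region as a continuous spine when no sampled intensity
    deviates from the region mean by more than the maximum deviation percentage
    (15% is a placeholder for a trained value)."""
    mean = sum(region_intensities) / len(region_intensities)
    worst = max(abs(v - mean) for v in region_intensities)
    return (100.0 * worst / mean) <= max_deviation_pct
```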
- the image analysis system 65 may then identify individual vertebrae within the boundaries of the identified spine.
- the image analysis system 65 may utilize the above-noted differences in color intensity due to the presence of cartilage between individual vertebrae to identify the individual boundaries of each vertebra. Any of a variety of methodologies may be utilized for identifying individual vertebrae, and such image analysis processes may be performed within the boundaries set surrounding the spine identified within the image. In other embodiments, however, the image analysis system 65 may expand the boundaries corresponding with the spine to accommodate potential errors in locating the boundaries of the spine, and to accommodate the possibility that small portions of individual vertebrae may extend beyond the identified overall spinal boundary within the image.
- the image analysis system 65 may identify boundaries between adjacent vertebrae and cartilage portions, so as to identify the boundaries of each individual vertebra. Certain embodiments may execute such processes without further image manipulation. However, in other embodiments, the image analysis system 65 may generate one or more binary image masks based on detected intensity differences within regions of the image within the boundaries of the spine. The binary masks may thus emphasize the boundaries of each vertebra, which may then be utilized to identify the location of each vertebra boundary. Moreover, in certain embodiments, the image analysis system 65 may additionally utilize a machine-learning based vertebrae detection model to label each identified vertebra.
- the vertebrae detection model may utilize a labeled training data set in which each of a plurality of vertebrae within each image is individually labeled to uniquely identify each vertebra visible within the image.
- the image analysis system 65 may then utilize this training data to review the embedded image provided for analysis to individually identify each vertebra.
- Criteria utilized for identifying and labeling each vertebra may comprise relative location-based criteria (e.g., relative location within the overall image, relative location compared with other identified vertebrae, relative location compared with other identified features of the image, such as the location of individual ribs, and/or the like).
- the image analysis system 65 may execute a relative measurement based image analysis process to determine whether the image is indicative of a scoliosis diagnosis. For example, an angle is measured between lines extending perpendicular to specific vertebrae located at the top and bottom of the spine, as shown in FIG. 20 C . This angle, referred to as the Cobb angle, is determined by drawing a first line extending perpendicular to a central orientation line of the top-most identified vertebra within the image and drawing a second line extending perpendicular to a central orientation line of the bottom-most identified vertebra within the image. The Cobb angle is then measured at the intersection point between the first line and the second line. Because the determination of an angle between components is not dependent on the absolute size of any individual feature (e.g., a vertebra), the image analysis system 65 need not determine absolute sizes of features identified within the image.
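Because the angle between two lines equals the angle between their perpendiculars, the Cobb angle can be computed directly from the orientation angles of the top-most and bottom-most vertebrae. A sketch (angles in degrees; the function name and argument convention are illustrative assumptions):

```python
def cobb_angle_deg(top_vertebra_angle_deg: float,
                   bottom_vertebra_angle_deg: float) -> float:
    """Angle at the intersection of the two perpendicular lines. Because
    perpendiculars preserve the angle between lines, this equals the angle
    between the central orientation lines of the two vertebrae."""
    diff = abs(top_vertebra_angle_deg - bottom_vertebra_angle_deg) % 180.0
    return min(diff, 180.0 - diff)  # fold into the [0, 90] degree range
```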
- the image analysis system 65 may provide one or more image analysis tools, such as a user interface having a plurality of user input elements configured to enable a user to selectively apply one or more overlays to the embedded image that are indicative of the results of the scoliosis diagnosis analysis.
- the image analysis system 65 may be configured to generate a visual overlay that may be selectively applied to or removed from the embedded image to indicate the size, shape, and location of the identified spine; the size, shape, and location of each of the identified vertebrae; the generated first line (extending from the top-most identified vertebra); the generated second line (extending from the bottom-most identified vertebra); the Cobb angle; and/or the like.
- Example overlays are shown in FIGS. 21 A- 21 C .
- the image analysis system 65 may be configured to determine whether the image is of a sufficient size (e.g., in pixels) to enable an accurate determination of a skin lesion diagnosis. Moreover, as discussed above, the image analysis system 65 may perform one or more image editing processes, such as image rotation, image sharpening, and/or the like to enable a more accurate determination of a skin lesion diagnosis.
- the image analysis system 65 may not perform certain image editing processes, such as image rotation processes, as such image editing processes may not increase the accuracy of skin lesion diagnosis, and such image editing processes may not provide other benefits to users viewing the images (e.g., providing a point of reference for the generated images).
- the image analysis system 65 may determine an appropriate routing for the image (e.g., together with the document/report containing the embedded image) based at least in part on the skin lesion diagnosis and/or the relevance score assigned to the skin lesion diagnosis. For example, upon determining the relevance score exceeds a routing threshold value, the image (e.g., together with the document/report containing the embedded image) may be routed to a first computing entity 30 (e.g., corresponding with a nurse) for final approval.
- Otherwise, upon determining the relevance score does not exceed the routing threshold value, the image (e.g., together with the document/report containing the embedded image) may be routed to a second user computing entity 30 (e.g., corresponding with a physician) for evaluation.
- the image analysis system 65 may provide one or more image analysis tools, such as a user interface providing additional data regarding the skin lesion diagnosis.
- the image analysis tools may provide a visual indication of relevance scores for each of a plurality of skin lesion types, thereby providing an indication of the skin lesion type diagnosed for the image of the skin lesion.
- the image analysis tools may additionally enable the user to view additional representative images from the image training data to visually compare the image under analysis against images of various skin lesion types.
- FIG. 22 provides an example user interface of an image analysis tool showing an image under analysis together with indications of relevance scores for various skin lesion types reflected within a training data set.
- the image analysis system 65 may be configured for executing one of a plurality of image analysis processes, which may be selected based at least in part on user input, based at least in part on detected features within an image, based at least in part on detected text within a report/document containing the image, and/or the like.
- the image analysis system 65 may be configured for diagnosing lung nodules based on CT images.
- the image analysis system 65 may first extract images provided for analysis, and may determine whether the images are of sufficient quality and content to enable a diagnosis of a lung nodule. Because the identification and diagnosis of lung nodules is based on a CT image, the image analysis system 65 may be configured for implementing pre-processing and image quality check processes similar to that discussed above in reference to sinusitis.
- the image analysis system 65 may be configured to determine whether the CT image is of a sufficient size (e.g., in pixels) to enable an accurate determination of a lung nodule identification and diagnosis. Moreover, as discussed above, the image analysis system 65 may perform one or more image editing processes, such as image rotation, image sharpening, and/or the like to enable a more accurate determination of a lung nodule diagnosis.
- the image analysis system 65 may utilize one or more machine-learning based models for determining whether the content of the extracted image contains images of a patient's lungs with sufficient clarity and orientation so as to enable an accurate assessment of whether the patient's lungs have any nodules thereon, and whether those lung nodules are cancerous.
- the image analysis system 65 may utilize a convolutional neural network to compare the contents of the extracted images with an image training set (example images of which are shown in FIG. 23 A ) to determine whether an extracted image such as that shown in FIG. 23 B matches images within the training set that are deemed sufficient to enable the identification and diagnosis of lung nodules.
- the image analysis system 65 utilizes a machine-learning based (e.g., convolutional neural network) lung nodule detection model to identify lung nodules within an image and to determine whether the detected lung nodules are cancerous.
- the image analysis system 65 of certain embodiments may be configured to implement a single-stage convolutional neural network configuration for both identifying lung nodules and classifying those lung nodules as cancerous or benign (e.g., identifying cancerous lung nodules separately from identifying benign lung nodules), or the image analysis system 65 of certain embodiments may be configured to implement a multi-stage convolutional neural network configuration that first identifies lung nodules, and then distinguishes between those lung nodules deemed cancerous and those lung nodules deemed benign.
- the convolutional neural network may utilize a training data set comprising a plurality of CT images of patient lungs comprising data identifying lung nodules therein (including data identifying the location of boundaries of those lung nodules), or indicating that no lung nodules are present within the image.
- the training data set may additionally comprise diagnosis data for each identified lung nodule, indicating whether the identified lung nodule is cancerous or benign.
- the image analysis system 65 is configured for utilizing the training data to facilitate the identification of lung nodules within CT images provided as a part of a document/report for diagnosis of lung nodules therein.
- the image analysis system 65 is configured for executing a multi-stage convolutional neural network-based analysis of patient lung images. As a first stage, the image analysis system 65 is configured to identify and localize lung nodules within an image. Identification of nodules may comprise steps for generating candidate files, each corresponding with an individual portion of an image that may be determined to comprise a lung nodule. These candidate files may be generated by first reviewing an image to identify regions of interest, such as by detecting differences in color intensity within the image. To avoid false positives and thus decrease the number of candidate files, only those differences in color intensity corresponding with shapes determined to correspond with lung nodules may be flagged as potential candidates.
- regions having a distinct color intensity that is distinguishable from surrounding regions, but having sharp corners or other shape characteristics that are generally not associated with lung nodules may be excluded from generation of candidate files.
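One common way to implement such a shape exclusion is a circularity test (4πA/P², which is 1.0 for a circle and falls toward 0 for jagged or elongated outlines). This is an illustrative assumption rather than the method specified in the text, and the 0.6 cutoff is a placeholder:

```python
import math

def passes_nodule_shape_check(area_px: float, perimeter_px: float,
                              min_circularity: float = 0.6) -> bool:
    """Exclude candidate regions whose outlines are too jagged or elongated
    to be nodules; circularity = 4*pi*A / P**2 (1.0 for a perfect circle)."""
    if perimeter_px <= 0:
        return False
    circularity = 4.0 * math.pi * area_px / (perimeter_px ** 2)
    return circularity >= min_circularity
```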
- identification of candidate files indicative of potential nodules may additionally comprise three-dimensional considerations of regions within images that may be indicative of lung nodules.
- For example, the three-dimensional considerations may be based at least in part on a plurality of images, each image corresponding with a different imaging plane of the patient's lungs.
- Identification of candidate files may proceed by identifying regions of changed color intensity within images and/or the identification of candidate files may additionally comprise execution of candidate nodule machine-learning based models (e.g., convolutional neural networks utilizing the above-mentioned training data for identifying and localizing regions of interest within individual images that may be reflective of a candidate nodule).
- utilizing a candidate nodule machine-learning based model may comprise determining whether any individual regions within an image are determined to be similar to identified nodules within the training data set.
- the image analysis system 65 executes a nodule classification model (e.g., a machine-learning based model, such as a convolutional neural network) to classify each candidate file as being reflective of a benign nodule, a cancerous nodule, a null nodule (reflecting a determination that the candidate is not a nodule at all), and/or the like.
- the classification model may be configured to apply one of a plurality of discrete classifications to each candidate file.
- classifying each candidate file may comprise detecting a closest match between various characteristics of a particular nodule relative to labeled nodules reflected within the training data set, as reflected within the image.
- classifying each candidate file may comprise analyzing characteristics of the nodule such as the contours of the boundaries of a particular nodule (e.g., in two-dimensions within a single image or in three-dimensions across multiple images), the color intensity profile of the nodule within the one or more images (e.g., based at least in part on the average color intensity of the nodule, the location of variations in color intensity, and/or the like), the relative location of the nodule in comparison with the overall lung represented within the one or more images, and/or the like.
- the image analysis system 65 utilizes a convolutional neural network to generate a vector representation of the candidate file (e.g., a vector representation of each nodule reflected within the image).
- This vector representation of the candidate file may be compared against vector representations of each of a plurality of nodules reflected within the training data set so as to determine a vector distance between the candidate file and each of a plurality of nodules reflected within the training data.
- the training data may have corresponding labels, such that the image analysis system 65 may be configured to determine an average distance between the nodule reflected within the candidate file and a plurality of nodules having a common classification.
- the image analysis system is configured to determine a closest average vector distance between the candidate file and a particular classified nodule type, as reflected within the training data.
- the image analysis system 65 assigns relevance scores to each possible nodule classification based at least in part on the average distance between the candidate file and training data having an associated nodule classification.
- the image analysis system 65 may be configured to assign relevance scores for each of the plurality of available classifications, and to select a highest relevance score as an automatically identified diagnosis of the nodule reflected within the candidate file (and the image(s) embedded within the document/report provided for analysis).
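The average-vector-distance scoring described above might be sketched as follows. The inverse-distance score transform and all names are assumptions for illustration; the text specifies only that relevance scores are based at least in part on average vector distances per classification, and that the highest score is selected.

```python
def classify_candidate(candidate_vec, labeled_training):
    """labeled_training maps a classification label to a list of training vectors.
    The relevance score here is a decreasing function of the average Euclidean
    distance (inverse transform assumed); the highest-scoring label is selected
    as the automatically identified diagnosis."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    scores = {}
    for label, vectors in labeled_training.items():
        avg_distance = sum(dist(candidate_vec, v) for v in vectors) / len(vectors)
        scores[label] = 1.0 / (1.0 + avg_distance)
    best_label = max(scores, key=scores.get)
    return best_label, scores
```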
- the image analysis system 65 may determine an appropriate routing for the image(s) (e.g., together with the document/report containing the embedded image(s)) based at least in part on the lung nodule diagnosis and/or the relevance score assigned to the lung nodule diagnosis. For example, upon determining the relevance score exceeds a routing threshold value, the image(s) (e.g., together with the document/report containing the embedded image(s)) may be routed to a first computing entity 30 (e.g., corresponding with a nurse) for final approval.
- the image analysis system 65 may provide one or more image analysis tools, such as a user interface providing additional data regarding the lung nodule diagnosis.
- the image analysis tools may provide a visual indication of relevance scores for each of a plurality of lung nodule types, thereby providing an indication of the lung nodule type diagnosed for the image of the lung nodule.
- the image analysis tools may additionally enable the user to view additional representative images from the image training data to visually compare the image under analysis against images of various lung nodule types.
- FIGS. 23 A- 23 B illustrate example aspects of an example user interface of an image analysis tool showing an image under analysis together with various example training data images of various lung nodules as reflected within a training data set.
- the image analysis system 65 may be configured for executing one of a plurality of image analysis processes, which may be selected based at least in part on user input, based at least in part on detected features within an image, based at least in part on detected text within a report/document containing the image, and/or the like.
- the image analysis system 65 may be configured for detecting and diagnosing masses visible within an input mammograph.
- the image analysis system 65 may be configured to execute mammograph analysis based at least in part on two-dimensional images and/or three-dimensional images (e.g., image sets).
- mammograph analysis may be performed utilizing relative-based measurements, such as for establishing a relative size of a detected mass within an image, or absolute-based measurements, such as for establishing an absolute size of a detected mass within an image, such as utilizing metadata indicative of an image scale within an analyzed image.
- FIG. 24 provides an example flowchart illustrating steps associated with analysis of a mammograph in accordance with certain embodiments.
- the image analysis system 65 may first extract images provided for analysis, as reflected at Block 2401 .
- the images may be extracted from an input file, such as a document/report generated at least in part by a healthcare provider, which may contain both textual descriptions of the healthcare provider's assessment of a particular patient's condition and images (e.g., mammographs).
- the input files may comprise a plurality of images, and accordingly the image analysis system 65 is configured to determine whether the images are of sufficient size (e.g., in pixels), quality, and content to enable a diagnosis of a breast mass.
- the image analysis system 65 may be configured for implementing pre-processing and image quality check processes (as reflected at Block 2402 ) such as those discussed above in reference to sinusitis or other non-photograph related image preprocessing configurations discussed herein. Those images that fail the relevant quality control checks are rejected, as indicated at Block 2403 , and the image analysis system 65 may be configured to transmit a notification to the user computing entity 30 that originally provided the input files, indicating that one or more images included within the provided input file are not suitable for automated image analysis. It should be understood that such notifications may be generated only upon determining that all images within a relevant input file failed the relevant quality control check; otherwise, the process may continue upon selecting (at Block 2404 ) at least one image that satisfies applicable criteria for analysis.
- the preprocessing steps may additionally comprise an image selection step, as reflected at Block 2404 .
- the image analysis system 65 may be configured to generate quality scores, relevance scores, and/or the like for each image in those instances in which an input document comprises a plurality of images therein. Such scores may be based at least in part on detected image quality, detected image size, detected image content (e.g., utilizing detected features as a part of a criteria for generating a score), and/or the like. The scores assigned to each image may be generated based at least in part on application of a machine-learning based scoring model as discussed herein. As indicated at Block 2404 , one or more images are ultimately selected for further analysis (e.g., the highest scored images may be selected for further analysis).
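The score-and-select step described above can be sketched as follows. This is an illustrative simplification, not the patented scoring model: the score names, the linear weighting, and all values are assumptions.

```python
# Sketch: rank candidate images by a weighted combination of quality,
# size, and content scores, then select the highest-scored image(s).
# The weights and score fields are hypothetical.

def composite_score(image_meta, weights=(0.4, 0.3, 0.3)):
    """image_meta: dict with 'quality', 'size', and 'content' scores in [0, 1]."""
    wq, ws, wc = weights
    return (wq * image_meta["quality"]
            + ws * image_meta["size"]
            + wc * image_meta["content"])

def select_images(candidates, top_n=1):
    """Return the top_n highest-scored images from a list of metadata dicts."""
    ranked = sorted(candidates, key=composite_score, reverse=True)
    return ranked[:top_n]

images = [
    {"id": "img_a", "quality": 0.9, "size": 0.8, "content": 0.7},
    {"id": "img_b", "quality": 0.6, "size": 0.9, "content": 0.95},
    {"id": "img_c", "quality": 0.3, "size": 0.4, "content": 0.2},
]
best = select_images(images)[0]
```

In practice the per-image scores would come from the machine-learning based scoring model referenced above rather than being supplied directly.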
- the image analysis system 65 executes one or more feature detection models for detecting individual features reflected within the image.
- Such features may comprise an overall breast shape (e.g., based on differences in color intensity within the image), one or more dense portions within the image, and/or the like.
- the image analysis system 65 identifies one or more features as a mass subject to further analysis, as reflected at Block 2406 .
- the image analysis system 65 may be configured to identify one or more features identified within an image as a mass candidate when initially identifying such features. Each of those mass candidates may be isolated for further image based analysis, such as by cropping the containing image around the detected mass candidate, such that further analysis and ultimate determinations of a candidate as being a mass (as indicated at Block 2406 ) may proceed utilizing the smaller, cropped images encompassing the mass candidate.
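The cropping of a containing image around a detected mass candidate can be sketched as a bounding-box crop with a small context margin. The margin value and the list-of-rows image representation are illustrative assumptions:

```python
def crop_candidate(image, bbox, margin=10):
    """Crop a 2-D image (list of pixel rows) around a candidate bounding box.

    bbox is (row_min, row_max, col_min, col_max); margin pixels of
    surrounding context are retained, clamped to the image borders.
    """
    r0 = max(0, bbox[0] - margin)
    r1 = min(len(image), bbox[1] + margin)
    c0 = max(0, bbox[2] - margin)
    c1 = min(len(image[0]), bbox[3] + margin)
    return [row[c0:c1] for row in image[r0:r1]]

image = [[0] * 100 for _ in range(100)]  # placeholder 100x100 grayscale grid
crop = crop_candidate(image, (30, 40, 50, 60), margin=5)
```

The smaller cropped grid is what would then be passed to the downstream mass-determination model.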
- a determination of a particular mass candidate as a mass subject to further analysis may comprise applying a machine-learning based (e.g., convolutional neural network) model trained using a training data set of a plurality of images having corresponding labels indicating whether the image includes a mass.
- Such processes may proceed in a manner analogous to that for detecting lung nodules, as discussed above, utilizing an appropriate training data set for identifying breast masses.
- the image analysis system 65 is configured for determining a typology for all those masses detected as noted above.
- the image analysis system 65 may utilize one or more classification machine-learning models, such as convolutional neural networks, for assigning a classification to individual images, selected from a defined set of available classifications.
- the image analysis system 65 may be configured to distinguish between benign cysts and tumors, and to assign corresponding classifications to images based on the determined contents thereof.
- the image analysis system 65 may be additionally configured to assign one or more sub-classifications (e.g., assigning a tumor type sub-classification) through application of the classification model.
- the image analysis system 65 is configured to perform one or more mass measurement processes for detecting a size of a mass identified within an image.
- the mass measurement processes may comprise one or more absolute-measurement processes such as those discussed above (and utilizing metadata indicative of an image scale embedded within one or more images), after utilizing the detection of a feature having a known size for generation of a scale to be utilized for performing absolute measurements within the image.
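A minimal sketch of the absolute-measurement idea follows: a per-pixel scale is derived from a detected reference feature of known physical size (or read from embedded scale metadata), then applied to a pixel measurement. The reference sizes are hypothetical values:

```python
def spacing_from_known_feature(feature_px, known_mm):
    """Derive a per-pixel scale from a detected feature whose physical
    size is known (e.g., a fiducial marker of known dimensions)."""
    return known_mm / feature_px

def absolute_length_mm(length_px, pixel_spacing_mm):
    """Convert a measured pixel length to millimetres using the image scale."""
    return length_px * pixel_spacing_mm

# Hypothetical: a 50 mm reference feature spans 200 px, so 0.25 mm/px.
scale = spacing_from_known_feature(feature_px=200, known_mm=50.0)
mass_mm = absolute_length_mm(80, scale)  # an 80 px mass measures 20.0 mm
```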
- the mass measurement processes may comprise one or more relative-measurement processes such as those discussed above for establishing a relative size of a detected mass within an image.
- the size of a mass may be identified as a percentage of an image (e.g., based on number of pixels identified as associated with the detected mass relative to the total number of pixels within an image encompassing a breast region).
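The pixel-fraction measurement described above can be sketched directly; the tiny 0/1 masks here stand in for real segmentation output:

```python
def relative_mass_size(mass_mask, region_mask):
    """Fraction of the breast-region pixels occupied by the detected mass;
    both masks are same-shaped grids of 0/1 values."""
    mass_px = sum(sum(row) for row in mass_mask)
    region_px = sum(sum(row) for row in region_mask)
    return mass_px / region_px

region = [[1, 1, 1, 1], [1, 1, 1, 1]]  # 8 px encompassing the breast region
mass = [[0, 1, 1, 0], [0, 0, 0, 0]]    # 2 px identified as the detected mass
ratio = relative_mass_size(mass, region)  # 0.25, i.e. 25% of the region
```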
- the image analysis system 65 is configured to perform one or more relative measurement processes for detecting a relative size and/or location of a mass identified within an image.
- the mass measurement processes may comprise detection of a boundary assigned to each mass, and boundaries assigned to one or more additional detected features within the image, thereby enabling a determination of the positions of the detected masses relative to other detected features within the image.
- the image analysis system 65 is configured to generate a final diagnostic profile and classification for any detected masses within an image.
- the diagnostic profile may comprise patient-specific diagnostic indications, which may be based at least in part on supplemental health data received by the image analysis system 65 , as indicated at Block 2412 (e.g., data indicative of a family history of breast cancer, a personal history of surgery or trauma in the region of the identified lesion, a personal history of infection such as fever and/or localized redness, and/or the like).
- the image analysis system 65 may be configured to determine an appropriate routing for the image(s) (e.g., together with the document/report containing the embedded image(s)) based at least in part on the final diagnostic profile and classification as indicated at Block 2411 . For example, upon determining that the final diagnostic profile indicates that the mass is benign but further care is warranted, the image analysis system 65 may route the image (and containing document/report) to a user computing entity 30 associated with a healthcare provider.
- the image analysis system 65 may transmit the image (and the containing document/report) to a user computing entity 30 associated with a specialist and/or a user computing entity 30 associated with a user providing ancillary support.
- the image analysis system 65 may route the image (and the containing document/report) to an Electronic Health Record (EHR) system.
- the image analysis system 65 may provide one or more image analysis tools, such as a user interface providing additional data regarding a determined diagnosis accompanying a mammograph.
- the image analysis tools may provide a visual indication of relevance scores for each of a plurality of mass types (e.g., a benign cyst, a cancerous tumor, and/or the like), thereby providing an indication of mass type assigned to a detected mass within an image.
- the image analysis tools may additionally comprise one or more interactive user interface elements enabling a user to selectively apply one or more visual overlays to an analyzed image. Such visual overlays may comprise a visual outline of a detected mass, or a scaled grid having grid squares of a defined size to provide a visual indication of an absolute measurement corresponding with a detected mass.
- the image analysis tools may be configured to enable a user to selectively display one or more example images from the training data set to provide a visual comparison with images providing examples of known mass types that may be utilized by a user for confirmation of a determined mass type classification.
- the image analysis system 65 may first extract images provided for analysis, and determine whether the images are of sufficient quality and content to enable a finding of structural deficiencies of the nose. Such images may be extracted from documents/reports or other data source files, as reflected at FIG. 25 .
- the nasal analysis engine may be configured to generate a threshold nasal passage area ratio that may be utilized as an image analysis criterion to distinguish newly received images indicating a level of nasal structural deficiency for which a rhinoplasty procedure is a medical necessity from those images that do not indicate such a level of nasal structural deficiency.
- a threshold nasal passage area ratio may identify a relative difference between areas of the left nasal passage and the right nasal passage, rather than a defined minimum ratio value.
- the threshold nasal passage area ratio may identify a relative difference in area of 200% between nasal passages, and such a relative difference may be satisfied by a ratio of 2:1, 1:2, 1:0.5, or 0.5:1.
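The direction-agnostic ratio check described above (a 200% relative difference, satisfied by 2:1 or 1:2) can be sketched as follows; the function name and the areas used in the example are illustrative:

```python
def exceeds_relative_difference(left_area, right_area, ratio_threshold=2.0):
    """True when the larger nasal passage area is at least ratio_threshold
    times the smaller one, i.e. a 2:1 disparity in either direction
    (equivalently 1:2, 1:0.5, or 0.5:1)."""
    larger, smaller = max(left_area, right_area), min(left_area, right_area)
    return larger / smaller >= ratio_threshold

# Hypothetical passage areas in pixels:
print(exceeds_relative_difference(300, 150))  # 2:1 disparity satisfies
print(exceeds_relative_difference(150, 300))  # 1:2 disparity also satisfies
print(exceeds_relative_difference(110, 100))  # near-symmetric, does not
```

Taking the max/min first is what makes the threshold symmetric, matching the point that the relative difference may be satisfied by a ratio in either direction.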
- the embedded image may be routed to a first user computing entity 30 (e.g., operated by a nurse) for approval, and upon determining that none of the vertebrae visible within the X-ray image demonstrate significant thinning, the embedded image (e.g., together with the document/report containing the embedded image) may be routed to a second user computing entity 30 (e.g., operated by a physician) for approval.
- the image analysis system 65 may transmit the generated diagnosis data to the user computing entity 30 providing the embedded image, and may provide the document/report containing the embedded image, together with the embedded image (which may be annotated, such as with a visual overlay of the identified disks, the boundaries of vertebrae and disks utilized for calculating area ratios, and/or the like).
- FIG. 29 illustrates yet another example analysis of a patient's spinal X-ray that may be performed by the image analysis system 65 .
- the analysis illustrated in FIG. 29 may utilize images analogous to those discussed in reference to FIGS. 27 - 28 B (e.g., a mid-sagittal view).
- FIG. 29 specifically focuses on detection of a constriction of a spinal canal extending adjacent to the spinal vertebrae for classification of features within the embedded image.
- the image analysis system 65 identifies boundaries of the spinal canal within the image. Once the image analysis system 65 determines that an image is sufficient for classifying the spinal canal by measuring various widths of the spinal canal, the image analysis system 65 may identify certain boundaries and/or other measurements to be utilized for determining a width of the spinal canal, as well as for assigning location labels for the calculated spinal canal widths. Accordingly, the image analysis system 65 may utilize one or more machine-learning based models (e.g., classification models) for identifying individual boundaries of each vertebrae and the spinal canal. Certain embodiments may execute processes for identifying individual vertebrae and for identifying the spinal canal without further image manipulation.
- the image analysis system 65 may generate one or more binary image masks based on detected intensity differences within regions of the image within identified boundaries of the spine.
- the binary masks may thus emphasize the boundaries of each vertebrae and/or the boundaries of the spinal canal, which may then be utilized to identify the location of each vertebrae boundary and the spinal canal boundaries.
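The binary-mask generation described above can be sketched as a simple intensity threshold; the threshold value and the tiny grayscale grid are assumptions standing in for a real X-ray:

```python
def binary_mask(image, threshold):
    """Generate a binary image mask from per-pixel intensities: pixels at
    or above the threshold map to 1, emphasising high-contrast structures
    such as vertebra and spinal canal boundaries."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

gray = [[12, 200, 30],
        [180, 190, 20]]
mask = binary_mask(gray, 128)  # [[0, 1, 0], [1, 1, 0]]
```

Production systems would typically derive the threshold adaptively from the detected intensity differences within the identified spine region rather than fixing it.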
- the image analysis system 65 may additionally utilize a machine-learning based vertebrae detection model to label each vertebrae (so as to enable a labelling of locations where the width of the spinal canal is measured).
- the image analysis system 65 may identify boundaries of each vertebrae, thereby enabling a determination of a centerline of each vertebrae such that measurements of the spinal canal may be taken at a location at least substantially aligned with the locations of centerlines of each vertebrae.
- boundaries of each vertebrae may be utilized for estimating centerlines of disks between vertebrae (based on a central location between adjacent bottom and top edges of adjacent vertebrae), such that measurements of the spinal canal may be taken at a location at least substantially aligned with the locations of centerlines of each disk.
- the image analysis system 65 is configured to draw a line across the width of the spinal canal at each of these identified locations (the centerlines of specific vertebrae and the centerlines of specific disks), and to provide a label for each of these lines corresponding to the vertebrae or disk location where the line is drawn.
- the lines are drawn parallel to and aligned with the determined centerlines of the vertebrae and disks, and the lines are drawn such that opposite ends of each line are located at the detected edges of the spinal canal.
- the image analysis system 65 calculates a length (in pixels) of each generated line, and the image analysis system 65 generates two length ratios for each drawn line (with the exception of the top-most line and the bottom-most line, for which only a single ratio is calculated).
- for example, for the line labeled L1 (corresponding to a location adjacent the L1 vertebrae), a single length ratio is calculated against the line labeled L1_L2, the line immediately below the L1 line that is aligned with the disk located between the L1 and L2 vertebrae; no other ratios are calculated for the L1 line since no other lines are adjacent the L1 line.
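The adjacent-line ratio computation can be sketched as below. The line labels follow the L1/L1_L2 convention described above, while the pixel lengths are hypothetical:

```python
def adjacent_length_ratios(lines):
    """lines: (label, length_px) pairs ordered top to bottom along the
    spinal canal. Interior lines receive two ratios (against the line
    above and the line below); the top-most and bottom-most lines each
    receive a single ratio."""
    ratios = {}
    for i, (label, length) in enumerate(lines):
        ratios[label] = {}
        if i > 0:
            above_label, above_len = lines[i - 1]
            ratios[label][above_label] = length / above_len
        if i + 1 < len(lines):
            below_label, below_len = lines[i + 1]
            ratios[label][below_label] = length / below_len
    return ratios

# Hypothetical canal widths (px) at vertebra and disk centerlines:
canal = [("L1", 40), ("L1_L2", 38), ("L2", 21), ("L2_L3", 36)]
r = adjacent_length_ratios(canal)
```

Note that `r["L1"]` contains only one ratio (against L1_L2), while interior lines such as `r["L1_L2"]` contain two, mirroring the top-most/bottom-most exception stated above.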
- the image analysis system 65 is configured to detect changes in width of the spinal canal at various locations along the length of the spinal canal.
- the image analysis system 65 may be configured to utilize an established length ratio threshold for determining whether spinal canal width changes at one or more locations along the length of the spinal canal are indicative of constriction of the spinal canal. It should be understood that a single established length ratio threshold may be utilized along the entirety of the length of the spinal canal, or a plurality of established length ratio thresholds may be utilized, with each established length ratio threshold being applicable to a particular location along the length of the spinal canal.
- upon determining whether any of the calculated length ratios for the spinal canal locations visible within the X-ray image satisfy an established length ratio threshold (e.g., being above the established threshold or being below the established threshold, as specified with respect to a particular threshold), the image analysis system 65 assigns diagnosis data to be associated with the embedded image (and/or the documents/reports comprising the embedded image). The image analysis system 65 may thus generate and transmit data indicative of the classification of the spinal canal and the resulting automatically generated length ratios for each line across the width of the spinal canal visible within the X-ray image, as well as an indication of which line (if any) satisfies an applicable length ratio threshold. Such diagnosis data may be utilized, for example, for approval of procedures to address spinal canal constriction for the patient.
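The threshold comparison described above can be sketched as a flagging pass over the per-line ratios; the 0.6 lower threshold and the example ratio values are assumptions, not values taken from the patent:

```python
def flag_constrictions(ratios, lower_threshold=0.6):
    """ratios: {line_label: {adjacent_label: width_ratio}}. A line is
    flagged when its width relative to any adjacent line falls below the
    threshold, suggesting a localized narrowing of the spinal canal."""
    return [label for label, adjacent in ratios.items()
            if any(v < lower_threshold for v in adjacent.values())]

# Hypothetical per-line ratios against adjacent lines:
example = {
    "L1":    {"L1_L2": 1.05},
    "L1_L2": {"L1": 0.95, "L2": 1.90},
    "L2":    {"L1_L2": 0.53, "L2_L3": 0.58},
    "L2_L3": {"L2": 1.71},
}
flagged = flag_constrictions(example)  # only L2 narrows sharply
```

As the description notes, a single threshold may apply along the whole canal (as here), or a per-location threshold table could be substituted.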
- the image analysis system 65 may transmit the generated diagnosis data to the user computing entity 30 providing the embedded image, and may provide the document/report containing the embedded image, together with the embedded image (which may be annotated, such as with a visual overlay illustrating the locations where the widths of the spinal canal are measured).
- the image analysis system 65 may route the embedded image (e.g., together with the document/report containing the embedded image) to an appropriate user computing entity 30 for final approval of the determined designation of constriction of the spinal canal.
- the embedded image may be routed to a first user computing entity (e.g., operated by a nurse) for approval, and upon determining that none of the locations along the length of the spinal canal demonstrate constriction, the embedded image (e.g., together with the document/report containing the embedded image) may be routed to a second user computing entity (e.g., operated by a physician) for approval.
- the image analysis system 65 may be configured for executing one of a plurality of image analysis processes, which may be selected based at least in part on user input, based at least in part on detected features within an image, based at least in part on detected text within a report/document containing the image, and/or the like.
- the image analysis system 65 is configured for analyzing photographs of a patient's torso for estimation of the location of the patient's pubic symphysis and a location of a lower edge of a patient's hanging abdominal panniculus.
- the image analysis system 65 may first extract images provided for analysis, and determine whether the images are of sufficient quality and content to enable a determination of an estimated location of the patient's pubic symphysis (e.g., based on a detected widest part of the patient's hips) and a determination of a lower edge of a patient's hanging panniculus.
- the image analysis system 65 may be configured to implement a pre-processing and image quality check process analogous to that discussed in reference to other photograph-based examples discussed above.
- the image analysis system 65 may be configured to determine whether the photograph is of a sufficient size (e.g., in pixels) to enable an accurate detection of the location of a patient's pubic symphysis and hanging abdominal panniculus. Moreover, as discussed above, the image analysis system 65 may perform one or more image editing processes, such as image rotation, image sharpening, and/or the like to enable a more accurate detection of the patient's pubic symphysis and hanging abdominal panniculus. However, it should be understood that certain image analysis processes, including feature detection processes as discussed herein, may be orientation agnostic, so as to be able to perform appropriate image analysis regardless of the orientation of the extracted image.
- the image analysis system 65 may utilize one or more machine-learning based models (e.g., classification models) for determining whether the content of the extracted image contains images of a patient's torso with sufficient clarity, content, and orientation so as to enable an accurate assessment of an estimated location of the patient's pubic symphysis and hanging abdominal panniculus.
- the image analysis system 65 may utilize a convolutional neural network to compare the contents of the extracted images with an image training set to determine whether the extracted images match images within the training set that are deemed sufficient to enable a desired analysis of the patient's torso.
- the classification model may be configured to determine whether a sufficient proportion of the patient's torso is shown in the photograph, such as extending between at least the patient's shoulders/neck to the patient's knees or lower thighs. By identifying certain features and relationships between those features, the image analysis system 65 is configured to determine whether the image is of a proper content and orientation to enable further analysis.
- moreover, as indicated in the figures, the image analysis system 65 may look for a frontal view and a side view of the patient's torso.
- the image analysis system 65 may utilize one or more machine-learning based models to implement relative measurement techniques to compare the patient's torso positioning (e.g., slouched, straight, bent, arched, and/or the like) within the frontal and side-views to determine whether the patient has at least substantially the same posture in both frontal and side-view images.
- the image analysis system 65 identifies and overlays a posture detection line in the side-view image.
- FIG. 30 illustrates an example graphical output resulting from an example analysis of the patient's torso.
- the image analysis system 65 may utilize a machine-learning based model for identifying and classifying specific features within the image, such as a widest point of the patient's hips, a lower edge of a patient's hanging abdominal panniculus, and/or the like. Any of a variety of methodologies may be utilized for identifying boundaries of the various identified features. Certain embodiments may execute processes for identifying boundaries of the identified features without further image manipulation. In other embodiments, the image analysis system 65 may utilize image manipulation (e.g., changing image contrast) to emphasize feature boundaries, so as to identify precise locations of boundaries of specific features.
- the image analysis system 65 identifies the boundaries of the patient's torso and measures a horizontal distance across the patient's torso between identified boundaries of the patient's torso so as to determine a widest point of a patient's hips.
- the image analysis system 65 is further configured to utilize a machine-learning based model to approximate a boundary between the patient's hips and the patient's abdomen so as to avoid false positive determinations of a widest point of the patient's hips in the event the widest point of the patient's torso is located within the patient's abdomen, shoulders, or elsewhere outside of the patient's hips.
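The hip-width search described above can be sketched as a row scan over a torso mask, restricted to rows the model has classified as the hip region. The tiny mask and the row classification here are hypothetical stand-ins for real segmentation output:

```python
def widest_hip_row(torso_mask, hip_rows):
    """torso_mask: 0/1 grid of torso pixels; hip_rows: row indices
    classified as the hip region (restricting the search avoids false
    positives at the abdomen or shoulders). Returns (row_index, width_px)."""
    best = (None, 0)
    for r in hip_rows:
        cols = [c for c, v in enumerate(torso_mask[r]) if v]
        width = cols[-1] - cols[0] + 1 if cols else 0
        if width > best[1]:
            best = (r, width)
    return best

mask = [
    [0, 1, 1, 0, 0],  # row 0: shoulders
    [1, 1, 1, 1, 1],  # row 1: abdomen (widest overall, but excluded)
    [0, 1, 1, 1, 0],  # row 2: hips
    [0, 1, 1, 0, 0],  # row 3: hips
]
row, width = widest_hip_row(mask, hip_rows=[2, 3])  # picks row 2, width 3
```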
- the image analysis system 65 is configured to generate an annotated line at the detected widest point of the patient's hips (as shown with the red dashed horizontal line of FIG. 30 ) to reflect the results of the machine-learning based identification of this point.
- This widest point of the patient's hips is used clinically to approximate the location of the patient's pubic symphysis, without additional imaging processes required to generate additional images of the patient's torso.
- the image analysis system 65 is further configured to identify a lower boundary of a patient's hanging abdominal panniculus. Similar to the discussion above regarding the identification of other features within the image, the image analysis system 65 may be configured to utilize differences in contrast within the image and/or other differences in color intensity within the image, as well as machine-learning based modelling, to determine an approximate location and shape of a hanging abdominal panniculus. Upon detecting the lower boundary of the hanging abdominal panniculus, the image analysis system 65 is configured to identify a lowest point of the detected lower boundary of the hanging abdominal panniculus, and to generate and overlay a horizontal line tangent to this detected lowest point of the hanging abdominal panniculus (shown as a green line in the annotated view of FIG. 30 ).
- upon detecting a location of the pubic symphysis and a location of the lowest point of the hanging abdominal panniculus, the image analysis system 65 implements a relative-measurement based hanging abdominal panniculus criteria to classify the hanging abdominal panniculus by comparing the detected location of the pubic symphysis and the location of the lowest point of the hanging abdominal panniculus. Specifically, the relative-measurement based hanging abdominal panniculus criteria determines which of the lower edge of the hanging abdominal panniculus or the estimated location of the pubic symphysis is higher on the patient's body.
- upon determining that the lowest point of the hanging abdominal panniculus is below the pubic symphysis, the image analysis system 65 is configured to generate positive diagnosis data for the hanging abdominal panniculus. Upon determining that the lowest point of the hanging abdominal panniculus is above the pubic symphysis, the image analysis system 65 is configured to generate negative diagnosis data for the hanging abdominal panniculus.
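The positive/negative decision above reduces to comparing the vertical positions of the two detected lines. A minimal sketch, assuming image row indices as the coordinate system and hypothetical row values:

```python
def panniculus_diagnosis(symphysis_row, panniculus_lowest_row):
    """Image row indices increase downward, so a larger row index is lower
    on the body. Positive when the panniculus's lowest point hangs below
    the estimated pubic symphysis line; negative otherwise."""
    return "positive" if panniculus_lowest_row > symphysis_row else "negative"

# Hypothetical detected rows within the photograph:
result_low = panniculus_diagnosis(symphysis_row=400, panniculus_lowest_row=430)
result_high = panniculus_diagnosis(symphysis_row=400, panniculus_lowest_row=380)
```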
- the diagnosis data is assigned to be associated with the embedded image (and/or the documents/reports comprising the embedded image). The image analysis system 65 may thus generate and transmit data indicative of the resulting automatically generated diagnosis data, as well as the annotated images. Such diagnosis data may be utilized, for example, for preapproving a claim for appropriate medical treatment to address the hanging abdominal panniculus.
- the image analysis system 65 may transmit the generated diagnosis data to the user computing entity 30 providing the embedded image, and may provide the document/report containing the embedded image, together with the embedded image (which may be annotated, such as with a visual overlay of the posture detection line, the hanging abdominal panniculus line, and the pubic symphysis line).
- the image analysis system 65 may route the embedded image (e.g., together with the document/report containing the embedded image) to an appropriate user computing entity 30 for final approval of a payment claim associated with the submitted document/report.
- the embedded image may be routed to a first user computing entity 30 (e.g., operated by a nurse) for approval, and upon determining that the hanging abdominal panniculus is above the pubic symphysis, the embedded image (e.g., together with the document/report containing the embedded image) may be routed to a second user computing entity 30 (e.g., operated by a physician) for approval.
- the image analysis system 65 may be configured for executing one of a plurality of image analysis processes, which may be selected based at least in part on user input, based at least in part on detected features within an image, based at least in part on detected text within a report/document containing the image, and/or the like.
- the image analysis system 65 is configured for analyzing photographs of a patient's upper torso for estimation of whether the patient's bra straps have created an indention within the patient's shoulders, which may be utilized as a proxy for a determination that the patient's breasts are sufficiently heavy to justify breast surgery.
- the image analysis system 65 may first extract images provided for analysis, and determine whether the images are of sufficient quality and content to enable a determination of whether the patient's shoulders are indented from the patient's bra straps.
- the image analysis system 65 may be configured to implement a pre-processing and image quality check process analogous to that discussed in reference to other photograph-based examples discussed above.
- the image analysis system 65 may be configured to determine whether the photograph is of a sufficient size (e.g., in pixels) to enable an accurate assessment of whether the patient's shoulders are indented from the patient's bra strap. Moreover, as discussed above, the image analysis system 65 may perform one or more image editing processes, such as image rotation, image sharpening, and/or the like to enable a more accurate assessment of the patient's shoulders to determine whether the patient's shoulders are indented. However, it should be understood that certain image analysis processes, including feature detection processes as discussed herein, may be orientation agnostic, so as to be able to perform appropriate image analysis regardless of the orientation of the extracted image.
- FIGS. 31 A- 31 B illustrate examples of image pre-processing filters that may be applied to extracted images to ensure that images are of sufficient quality and content to enable a determination of whether the patient's shoulders are indented. It should be noted that certain embodiments may utilize any detected features within a patient's breasts (not fully shown in FIGS. 31 A- 31 B ) visible within uncensored images to assist in determining whether the images are of sufficient size, quality, and/or content for classification and further analysis. Specifically, FIG. 31 A illustrates an example image deemed unacceptable for further analysis, and such images are discarded as a result of the image preprocessing processes.
- with reference to FIG. 31 B, the image analysis system 65 applies a relative-measurement based machine learning model to classify the embedded image by detecting indentions in the patient's shoulders, such as by detecting sudden and short changes in slope in a detected edge of the patient's shoulders.
- the image analysis system 65 is configured to utilize a binary determination of whether indentions are identified or not.
- the image analysis system 65 is configured to utilize a threshold-based determination of characteristics of those indentions, such as by determining whether indentions are present and, if so, whether they are shallow or deep.
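The "sudden and short changes in slope" heuristic mentioned above can be sketched over a 1-D shoulder-edge profile. The slope and run-length thresholds, and the sample profile, are assumptions for illustration, not values from the patent:

```python
def detect_indentions(edge_rows, slope_jump=3, max_run=5):
    """edge_rows: vertical position of the detected shoulder edge sampled
    at successive horizontal pixels. A short run of steep discrete-slope
    values is treated as a candidate strap indention; returns a list of
    (start, end) index pairs into the slope sequence."""
    slopes = [edge_rows[i + 1] - edge_rows[i] for i in range(len(edge_rows) - 1)]
    found, run_start = [], None
    for i, s in enumerate(slopes + [0]):  # trailing 0 closes any open run
        if abs(s) >= slope_jump:
            if run_start is None:
                run_start = i
        elif run_start is not None:
            if i - run_start <= max_run:  # only short runs count
                found.append((run_start, i))
            run_start = None
    return found

# gently sloping shoulder with one sharp, narrow dip (hypothetical profile)
profile = [10, 10, 11, 11, 15, 19, 15, 11, 11, 12, 12]
dips = detect_indentions(profile)
```

A shallow/deep sub-classification, as described above, could then threshold the depth of each flagged dip relative to the surrounding edge.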
- the image analysis system 65 may utilize one or more machine-learning based models (e.g., classification models) for determining whether the content of the extracted image contains images of the patient's upper torso with sufficient clarity, content, and orientation so as to enable an accurate assessment of characteristics of the patient's shoulders.
Abstract
Description
Claims (22)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US17/191,963 US12340509B2 (en) | 2020-03-19 | 2021-03-04 | Systems and methods for automated digital image content extraction and analysis |
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202062991686P | 2020-03-19 | 2020-03-19 | |
| US17/191,921 US11869189B2 (en) | 2020-03-19 | 2021-03-04 | Systems and methods for automated digital image content extraction and analysis |
| US17/191,963 US12340509B2 (en) | 2020-03-19 | 2021-03-04 | Systems and methods for automated digital image content extraction and analysis |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/191,921 Continuation US11869189B2 (en) | 2020-03-19 | 2021-03-04 | Systems and methods for automated digital image content extraction and analysis |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20210295504A1 US20210295504A1 (en) | 2021-09-23 |
| US12340509B2 true US12340509B2 (en) | 2025-06-24 |
Family
ID=77748292
Family Applications (4)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/191,963 Active 2042-09-01 US12340509B2 (en) | 2020-03-19 | 2021-03-04 | Systems and methods for automated digital image content extraction and analysis |
| US17/191,868 Active 2043-06-03 US12131475B2 (en) | 2020-03-19 | 2021-03-04 | Systems and methods for automated digital image selection and pre-processing for automated content analysis |
| US17/191,921 Active 2042-04-13 US11869189B2 (en) | 2020-03-19 | 2021-03-04 | Systems and methods for automated digital image content extraction and analysis |
| US18/889,983 Pending US20250037282A1 (en) | 2020-03-19 | 2024-09-19 | Systems and methods for automated digital image selection and pre-processing for automated content analysis |
Family Applications After (3)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/191,868 Active 2043-06-03 US12131475B2 (en) | 2020-03-19 | 2021-03-04 | Systems and methods for automated digital image selection and pre-processing for automated content analysis |
| US17/191,921 Active 2042-04-13 US11869189B2 (en) | 2020-03-19 | 2021-03-04 | Systems and methods for automated digital image content extraction and analysis |
| US18/889,983 Pending US20250037282A1 (en) | 2020-03-19 | 2024-09-19 | Systems and methods for automated digital image selection and pre-processing for automated content analysis |
Country Status (1)
| Country | Link |
|---|---|
| US (4) | US12340509B2 (en) |
Families Citing this family (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP4078603A1 (en) * | 2019-12-18 | 2022-10-26 | Koninklijke Philips N.V. | A co-training framework to mutually improve concept extraction from clinical notes and medical image classification |
| US20210241019A1 (en) * | 2020-01-31 | 2021-08-05 | Salesforce.Com, Inc. | Machine learning photographic metadata |
| CN113763242A (en) * | 2021-05-17 | 2021-12-07 | Tencent Technology (Shenzhen) Co., Ltd. | Image processing method, apparatus, and computer-readable storage medium |
| US12482124B2 (en) * | 2021-06-14 | 2025-11-25 | National Jewish Health | Systems and methods of volumetrically assessing structures of skeletal cavities |
| WO2023220580A1 (en) * | 2022-05-09 | 2023-11-16 | Mofaip, Llc | Morphologic mapping and analysis on anatomic distributions for skin tone and diagnosis categorization |
| US20230214996A1 (en) * | 2021-12-30 | 2023-07-06 | National Yang Ming Chiao Tung University | Eyes measurement system, method and computer-readable medium thereof |
| ES2976657B2 (en) * | 2022-12-21 | 2025-05-26 | Skilled Skin Sl | Procedure for control and comparative support related to dermatological lesions |
| US20240211634A1 (en) * | 2022-12-27 | 2024-06-27 | Fasoo Co., Ltd. | Method for managing image based on de-identification, apparatus for the same, computer program for the same, and recording medium storing computer program thereof |
| CN117274361A (en) * | 2023-08-18 | 2023-12-22 | 软通动力信息技术(集团)股份有限公司 | Material surface area measurement method and device, electronic equipment and medium |
Citations (22)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US4688780A (en) * | 1986-03-31 | 1987-08-25 | Siemens Gammasonics, Inc. | Patient support |
| US8548828B1 (en) | 2012-05-09 | 2013-10-01 | DermTap | Method, process and system for disease management using machine learning process and electronic media |
| US20140064583A1 (en) * | 2012-08-30 | 2014-03-06 | The Regents Of The University Of Michigan | Analytic Morphomics: High Speed Medical Image Automated Analysis Method |
| US20140088989A1 (en) | 2012-09-27 | 2014-03-27 | Balaji Krishnapuram | Rapid Learning Community for Predictive Models of Medical Knowledge |
| US8898798B2 (en) | 2010-09-01 | 2014-11-25 | Apixio, Inc. | Systems and methods for medical information analysis with deidentification and reidentification |
| US8954339B2 (en) | 2007-12-21 | 2015-02-10 | Koninklijke Philips N.V. | Detection of errors in the inference engine of a clinical decision support system |
| US20170330320A1 (en) | 2016-05-13 | 2017-11-16 | National Jewish Health | Systems and methods for automatic detection and quantification of pathology using dynamic feature classification |
| US20180040122A1 (en) * | 2008-01-02 | 2018-02-08 | Bio-Tree Systems, Inc. | Methods Of Obtaining Geometry From Images |
| US20180060512A1 (en) | 2016-08-29 | 2018-03-01 | Jeffrey Sorenson | System and method for medical imaging informatics peer review system |
| US20180342060A1 (en) * | 2017-05-25 | 2018-11-29 | Enlitic, Inc. | Medical scan image analysis system |
| US10242443B2 (en) | 2016-11-23 | 2019-03-26 | General Electric Company | Deep learning medical systems and methods for medical procedures |
| US20190138693A1 (en) | 2017-11-09 | 2019-05-09 | General Electric Company | Methods and apparatus for self-learning clinical decision support |
| US20190392950A1 (en) | 2018-06-21 | 2019-12-26 | Mark Conroy | Procedure assessment engine |
| US20200004561A1 (en) | 2018-06-28 | 2020-01-02 | Radiology Partners, Inc. | User interface for determining real-time changes to content entered into the user interface to provide to a classifier program and rules engine to generate results for the content |
| US20200082507A1 (en) | 2018-09-10 | 2020-03-12 | University Of Florida Research Foundation, Inc. | Neural network evolution using expedited genetic algorithm for medical image denoising |
| US20200085546A1 (en) | 2018-09-14 | 2020-03-19 | Align Technology, Inc. | Machine learning scoring system and methods for tooth position assessment |
| US20200258615A1 (en) * | 2017-10-05 | 2020-08-13 | Koninklijke Philips N.V. | Image feature annotation in diagnostic imaging |
| US20210090694A1 (en) | 2019-09-19 | 2021-03-25 | Tempus Labs | Data based cancer research and treatment systems and methods |
| US20210174503A1 (en) * | 2019-12-06 | 2021-06-10 | Raylytic GmbH | Method, system and storage medium with a program for the automatic analysis of medical image data |
| US20220019771A1 (en) * | 2019-04-19 | 2022-01-20 | Fujitsu Limited | Image processing device, image processing method, and storage medium |
| US20220039774A1 (en) * | 2019-02-23 | 2022-02-10 | Guangzhou Lian-Med Technology Co., Ltd. | Fetal head direction measuring device and method |
| US20220284609A1 (en) * | 2019-08-28 | 2022-09-08 | Hover Inc. | Image analysis |
- 2021
- 2021-03-04 US US17/191,963 patent/US12340509B2/en active Active
- 2021-03-04 US US17/191,868 patent/US12131475B2/en active Active
- 2021-03-04 US US17/191,921 patent/US11869189B2/en active Active
- 2024
- 2024-09-19 US US18/889,983 patent/US20250037282A1/en active Pending
Non-Patent Citations (12)
| Title |
|---|
| "Eyelid Drooping—Blepharoptosis," RSIP Vision—Custom Medtech Imaging Algorithms, [article, online], (8 pages). [Retrieved from the Internet Jun. 7, 2021] <URL: https://www.rsipvision.com/eyelid-drooping-blepharoptosis/>. |
| "Pannus Is Not the Same Thing as Panniculus," Bariatric Pal, Apr. 22, 2014, (7 pages), [article, online]. [Retrieved from the Internet Jun. 7, 2021] <URL: https://www.bariatricpal.com/topic/305193-pannus-is-not-the-same-thing-as-panniculus/>. |
| "sovaSage—Reinventing Sleep Therapy," [online], (2 pages). [Retrieved from the Internet Jun. 7, 2021] <URL: https://www.sovasage.com/solution/>. |
| Borojeni, Azadeh A.T. et al. "Normative Ranges of Nasal Airflow Variables in Healthy Adults," International Journal of Computer Assisted Radiology and Surgery, Jan. 2020, vol. 15, No. 1, pp. 87-98. doi: 10.1007/s11548-019-02023-y. Epub: Jul. 2, 2019, PMID: 31267334; PMCID: PMC6939154. |
| Cress, C. Ray. "Panniculus-Pannus," JAMA the Journal of the American Medical Association, vol. 226, No. 3, p. 353, Oct. 15, 1973. doi: 10.1001/jama.1973.03230030065024. |
| Froomkin, A. Michael et al. "When Als Outperform Doctors: Confronting the Challenges of a Tort-Induced Over-Reliance on Machine Learning," University of Miami School of Law Institutional Repository, vol. 61 Ariz. L. Rev. 33, Feb. 20, 2019, (68 pages). |
| Non-Final Rejection Mailed on Feb. 23, 2024 for U.S. Appl. No. 17/191,868, 9 page(s). |
| Notice of Allowance and Fees Due (PTOL-85) Mailed on Aug. 22, 2023 for U.S. Appl. No. 17/191,921, 11 page(s). |
| Notice of Allowance and Fees Due (PTOL-85) Mailed on Jun. 24, 2024 for U.S. Appl. No. 17/191,868, 10 page(s). |
| Notice of Allowance and Fees Due (PTOL-85) Mailed on Nov. 8, 2023 for U.S. Appl. No. 17/191,921, 2 page(s). |
| Notice of Allowance and Fees Due (PTOL-85) Mailed on Sep. 25, 2023 for U.S. Appl. No. 17/191,921, 2 page(s). |
Also Published As
| Publication number | Publication date |
|---|---|
| US20210295503A1 (en) | 2021-09-23 |
| US11869189B2 (en) | 2024-01-09 |
| US20250037282A1 (en) | 2025-01-30 |
| US20210295504A1 (en) | 2021-09-23 |
| US20210295551A1 (en) | 2021-09-23 |
| US12131475B2 (en) | 2024-10-29 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US12340509B2 (en) | | Systems and methods for automated digital image content extraction and analysis |
| US11295446B2 (en) | | Method and system for computer-aided triage |
| US11854200B2 (en) | | Skin abnormality monitoring systems and methods |
| CN112699869A (en) | | Rib fracture auxiliary detection method based on deep learning and image identification method |
| CN109272002B (en) | | Classification method and device for gauge tablets |
| EP3722996A2 (en) | | Systems and methods for processing 3d anatomical volumes based on localization of 2d slices thereof |
| CN112241961B (en) | | Chest X-ray assisted diagnosis method and system based on deep convolutional neural network |
| KR102739821B1 (en) | | Apparatus, method and computer program for analyzing musculoskeletal medical image using classification and segmentation |
| CN112381762A (en) | | CT rib fracture auxiliary diagnosis system based on deep learning algorithm |
| CN119515827A (en) | | Adenoid recognition method and device based on image instance segmentation |
| US7352888B2 (en) | | Method for computer recognition of projection views and orientation of chest radiographs |
| CN112991289B (en) | | Processing methods and devices for image standard sections |
| CN116385756B (en) | | Medical image recognition method and related device based on enhanced annotation and deep learning |
| Zhou et al. | | Computerized image analysis: texture-field orientation method for pectoral muscle identification on MLO-view mammograms |
| EP4679447A1 (en) | | Human-artificial intelligence collaborative platform for oral cancer lesion diagnosis |
| Yuan et al. | | Automatic measurement of fetal abdomen subcutaneous soft tissue thickness from ultrasound image based on a U-shaped attention network with morphological method |
| CN120390940A (en) | | Defining the location of markers in medical images |
| HK40031988A (en) | | Systems and methods for processing 3d anatomical volumes based on localization of 2d slices thereof |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AS | Assignment |
Owner name: UNITEDHEALTH GROUP INCORPORATED, MINNESOTA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:AMUNDSON, RUSSELL H.;BHARGAVA, SAURABH;SINGH, RAMA KRISHNA;AND OTHERS;SIGNING DATES FROM 20210225 TO 20210302;REEL/FRAME:055493/0208 |
|
| FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
| FEPP | Fee payment procedure |
Free format text: PETITION RELATED TO MAINTENANCE FEES GRANTED (ORIGINAL EVENT CODE: PTGR); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |