WO2020038771A1 - Salient visual relevancy of feature assessments by machine learning models

Salient visual relevancy of feature assessments by machine learning models

Info

Publication number
WO2020038771A1
Authority
WO
WIPO (PCT)
Prior art keywords
salient
image
volumetric
feature
medical
Prior art date
Application number
PCT/EP2019/071700
Other languages
English (en)
Inventor
Grzegorz Andrzej TOPOREK
Ze HE
Original Assignee
Koninklijke Philips N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips N.V.
Publication of WO2020038771A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G06N 3/045 Combinations of networks
    • G06N 3/048 Activation functions
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G06N 3/10 Interfaces, programming languages or software development kits, e.g. for simulating neural networks
    • G06N 3/105 Shells for specifying net layout
    • G06N 20/00 Machine learning
    • G06N 20/10 Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G06N 20/20 Ensemble learning
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T 2219/008 Cut plane or projection plane definition

Definitions

  • Various embodiments described in the present disclosure relate to systems, machines, controllers and methods incorporating one or more machine learning models for rendering a feature assessment of a medical imaging of an anatomical region or an anatomical organ. More particularly, but not exclusively, various embodiments relate to providing a salient visual relevancy of the feature assessment by the machine learning model(s) of the medical imaging of the anatomical region/organ.
  • Deep learning-based artificial intelligence (AI) technologies have been shown to provide promising results in medical imaging-based diagnosis (e.g., deep learning diagnosis of an ultrasound (US) imaging, an X-ray imaging, a computed tomography (CT) imaging, a magnetic resonance imaging (MRI), a positron emission tomography (PET) imaging, a single photon emission computed tomography (SPECT) imaging and a diffuse optical tomography (DOT) imaging).
  • 3D ConvNets are often used for computer-assisted diagnosis based on volumetric medical images (e.g., 3D US, 3D X-Ray, CT, MRI, PET, SPECT and DOT). Understanding the decision-making process of 3D ConvNets is critical for the well-trained clinician to confirm and accept the proposed prediction. Additionally, visualization of 3D localization maps (heatmaps) is challenging mostly because such maps from a last layer or an intermediate layer of a 3D ConvNet are (1) multidimensional (4D tensors), (2) low resolution as compared to the inputted volumetric medical image and (3) difficult to interpret using simple rendering techniques.
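For illustration only, the following is a minimal numpy/scipy sketch of the resolution problem noted above: a hypothetical 4D activation tensor from the last convolutional layer is collapsed into a single 3D localization map and trilinearly upsampled to the input resolution. All shapes, weights and names here are assumptions, not the implementation of the present disclosure.

```python
import numpy as np
from scipy.ndimage import zoom

# Hypothetical filter outputs of the last convolutional layer of a 3D ConvNet:
# a 4D tensor of C feature channels over a coarse (D, H, W) grid.
activations = np.random.rand(64, 8, 8, 8).astype(np.float32)  # (C, D, H, W)
weights = np.random.rand(64).astype(np.float32)  # e.g., from a relevancy mapping

# Collapse the channel axis into a single, rectified 3D localization map.
heatmap = np.maximum(np.tensordot(weights, activations, axes=1), 0.0)

# Trilinearly (order-1) upsample the coarse 8x8x8 map to the resolution of
# the inputted volumetric medical image so the two can be displayed together.
input_shape = (128, 128, 128)
factors = [t / s for t, s in zip(input_shape, heatmap.shape)]
heatmap_full = zoom(heatmap, factors, order=1)     # (128, 128, 128)
```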
  • One set of embodiments of the present disclosure is directed to salient visualization of feature assessments of medical imaging.
  • More particularly, this aspect of the present disclosure provides various embodiments of medical imaging display engines providing user interpretability of a feature assessment by one or more machine learning models of a medical imaging of a body, human or animal.
  • Further, this aspect of the present disclosure provides various embodiments of medical imaging display engines providing a feature relevancy reslicing of a volumetric salient image of the feature assessment by the machine learning model(s) of the medical imaging of the body.
  • a first embodiment of this set of embodiments is a salient medical imaging controller employing a memory storing an artificial intelligence engine and a medical image display engine, wherein the artificial intelligence engine includes one or more machine learning models.
  • the salient medical imaging controller further employs one or more processor(s) in communication with the memory with the processor(s) configured to (1) apply the machine learning model(s) to medical image data representative of one or more features of a volumetric medical image to render a feature assessment of the volumetric medical image, (2) generate, via the medical image display engine, a volumetric salient image illustrative of the feature assessment of the volumetric medical image by the at least one machine learning model, and (3) reslice, via the medical image display engine, a planar salient image from the volumetric salient image based on a relevancy of each feature to the feature assessment of the volumetric medical image by the machine learning model(s).
  • a second embodiment of this set of embodiments is a non-transitory machine-readable storage medium encoded with instructions for execution by one or more processors.
  • the non-transitory machine-readable storage medium stores instructions to (1) apply an artificial intelligence engine including one or more machine learning models to medical image data representative of one or more features of a volumetric medical image to render a feature assessment of the volumetric medical image, (2) generate, via a medical image display engine, a volumetric salient image illustrative of the feature assessment of the volumetric medical image by the at least one machine learning model, and (3) reslice, via the medical image display engine, a planar salient image from the volumetric salient image based on a relevancy of each feature to the feature assessment of the volumetric medical image by the machine learning model(s).
  • a third embodiment of this set of embodiments is a salient medical imaging method involving (1) an application of an artificial intelligence engine including one or more machine learning models to medical image data representative of one or more features of a volumetric medical image to render a feature assessment of the volumetric medical image, (2) a generation, via a medical image display engine, of a volumetric salient image illustrative of the feature assessment of the volumetric medical image by the at least one machine learning model, and (3) a reslicing, via the medical image display engine, of a planar salient image from the volumetric salient image based on a relevancy of each feature to the feature assessment of the volumetric medical image by the machine learning model(s).
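By way of a non-limiting sketch, the three recited operations may be pictured as the following Python skeleton. The model, the saliency computation and the slice-selection rule are stand-ins (a mean-based score, an absolute-deviation relevancy and a summed-relevancy argmax), not the claimed implementations.

```python
import numpy as np

def assess(volume):
    """Stand-in for the machine learning model(s): render a feature
    assessment (here a single hypothetical risk score) for a volume."""
    return float(volume.mean())

def volumetric_saliency(volume):
    """Stand-in saliency step: a volumetric salient image assigning one
    relevancy value per voxel (absolute deviation from the mean)."""
    return np.abs(volume - volume.mean())

def reslice_most_relevant(salient, axis=0):
    """Reslice a planar salient image from the volumetric salient image:
    choose the slice whose summed relevancy is highest along `axis`."""
    other = tuple(i for i in range(salient.ndim) if i != axis)
    index = int(np.argmax(salient.sum(axis=other)))
    return index, np.take(salient, index, axis=axis)

volume = np.random.rand(64, 64, 64)              # hypothetical volumetric image
assessment = assess(volume)                      # (1) feature assessment
salient_3d = volumetric_saliency(volume)         # (2) volumetric salient image
index, salient_2d = reslice_most_relevant(salient_3d)  # (3) relevancy reslicing
print(assessment, index, salient_2d.shape)       # e.g. 0.49 31 (64, 64)
```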
  • feature broadly encompasses any type of object identifiable and/or characterizable within a medical image by one or more trained machine learning model(s) including, but not limited to, anatomical objects (e.g., vessels, organs, etc.), foreign objects (e.g., procedural tools/instruments and implanted devices) and image artifacts (e.g., noise, grating lobes);
  • feature assessment broadly encompasses any type of prediction or classification of an identification and/or characterization of the feature
  • the term "salient image" broadly encompasses an image illustrative of a feature assessment. Examples of a salient image include, but are not limited to, a heatmap, a feature segmentation and an activation diagram;
  • the term "salient visualization” broadly encompasses a display of one or more salient images
  • controller broadly encompasses all structural configurations, as understood in the art of the present disclosure and hereinafter conceived, of a main circuit board or an integrated circuit for controlling an application of various principles of the present disclosure as subsequently described in the present disclosure.
  • the structural configuration of the controller may include, but is not limited to, processor(s), non-transitory machine-readable storage medium(s), an operating system, application module(s), peripheral device controller(s), slot(s) and port(s); and
  • [0017] (7) "data" may be embodied in all forms of a detectable physical quantity or impulse (e.g., voltage, current, magnetic field strength, impedance, color) as understood in the art of the present disclosure and as exemplarily described in the present disclosure for transmitting information and/or instructions in support of applying various principles of the present disclosure as subsequently described in the present disclosure. Data communication encompassed by the present disclosure may involve any combination of such detectable physical quantities or impulses.
  • FIG. 1 illustrates exemplary embodiments of salient medical imaging method in accordance with the principles of the present disclosure
  • FIG. 2A illustrates exemplary embodiments of an artificial intelligence engine and medical image display engine in accordance with the principles of the present disclosure
  • FIG. 2B illustrates a feature assessment method in accordance with the principles of the present disclosure
  • FIG. 2C illustrates a salient visualization method in accordance with the principles of the present disclosure
  • FIG. 3A illustrates an exemplary embodiment of a two-dimensional (2D) convolutional neural network (CNN) in accordance with the principles of the present disclosure
  • FIG. 3B illustrates an exemplary embodiment of a two-dimensional (2D) convolutional neural network/support vector machine (CNN/SVM) in accordance with the principles of the present disclosure
  • FIG. 3C illustrates an exemplary embodiment of a 3D CNN in accordance with the principles of the present disclosure
  • FIG. 3D illustrates an exemplary embodiment of a 3D CNN/SVM in accordance with the principles of the present disclosure
  • FIG. 3E illustrates an exemplary embodiment of a series network of 2D CNNs in accordance with the principles of the present disclosure
  • FIG. 3F illustrates an exemplary embodiment of a series network of 3D CNNs in accordance with the principles of the present disclosure
  • FIG. 3G illustrates a first exemplary embodiment of a parallel network of 2D CNNs in accordance with the principles of the present disclosure
  • FIG. 3H illustrates a first exemplary embodiment of a parallel network of 3D CNNs in accordance with the principles of the present disclosure
  • FIG. 3I illustrates a second exemplary embodiment of a parallel network of 2D CNNs in accordance with the principles of the present disclosure
  • FIG. 3J illustrates a second exemplary embodiment of a parallel network of 3D CNNs in accordance with the principles of the present disclosure
  • FIG. 4A illustrates an exemplary 2D planar CT image and an exemplary generation of a 2D planar heatmap corresponding to the 2D planar CT image as known in the art of the present disclosure
  • FIG. 4B illustrates an exemplary rendering of a 3D volumetric CT image and an exemplary generation of 3D volumetric heatmap corresponding to the 3D volumetric CT image in accordance with the principles of the present disclosure
  • FIGS. 5A-5F illustrate an exemplary reslicing of a 3D volumetric salient image at a first principal plane in accordance with the principles of the present disclosure
  • FIG. 6A illustrates an exemplary feature segmentation in accordance with the principles of the present disclosure
  • FIG. 6B illustrates an exemplary reslicing of the 3D volumetric CT image and the 3D volumetric heatmap of FIG. 4B at a first principal plane in accordance with the principles of the present disclosure
  • FIG. 6C illustrates an exemplary reslicing of the 3D volumetric CT image and the 3D volumetric heatmap of FIG. 4B at a second principal plane in accordance with the principles of the present disclosure
  • FIG. 6D illustrates an exemplary reslicing of the 3D volumetric CT image and the 3D volumetric heatmap of FIG. 4B at a third principal plane in accordance with the principles of the present disclosure
  • FIG. 7 illustrates an exemplary embodiment of an activation diagram in accordance with the principles of the present disclosure
  • FIG. 8 illustrates an exemplary embodiment of clinical reporting in accordance with the principles of the present disclosure
  • FIG. 9A illustrates an exemplary embodiment of a graphical user interface in accordance with the principles of the present disclosure
  • FIG. 9B illustrates a salient manipulation method in accordance with the principles of the present disclosure
  • FIG. 10A illustrates an exemplary embodiment of an image masking graphical user interface (GUI) in accordance with the principles of the present disclosure
  • FIG. 10B illustrates an exemplary embodiment of a data revising graphical user interface (GUI) in accordance with the principles of the present disclosure
  • FIG. 10C illustrates an exemplary embodiment of a relevancy threshold graphical user interface (GUI) in accordance with the principles of the present disclosure
  • FIG. 10D illustrates an exemplary embodiment of an alternative assessment graphical user interface (GUI) in accordance with the principles of the present disclosure
  • FIG. 11 illustrates an exemplary embodiment of a salient medical imaging controller in accordance with the principles of the present disclosure
  • FIG. 12A illustrates a first exemplary embodiment of a salient medical imaging system in accordance with the principles of the present disclosure
  • FIG. 12B illustrates a second exemplary embodiment of a salient medical imaging system in accordance with the principles of the present disclosure.
  • a machine learning model may be any type of predicting machine learning model or any type of classifying machine learning model known in the art of the present disclosure including, but not limited to, a deep neural network (e.g., a convolutional neural network, a recurrent neural network, etc.) and a supervised learning machine (e.g., a linear or non-linear support vector machine, a boosting classifier, etc.).
  • FIG. 1 teaches various embodiments of a salient visualization method of the present disclosure. From the description of FIG. 1, those having ordinary skill in the art of the present disclosure will appreciate how to apply the present disclosure for making and using numerous and various additional embodiments of salient visualization methods of the present disclosure.
  • a salient medical imaging method 20 of the present disclosure implements a feature assessment stage S22, a salient visualization stage S24 and a salient manipulation stage S26.
  • Feature assessment stage S22 involves an application of an artificial intelligence engine including one or more machine learning models to a medical imaging 30 of a body, human or animal, to render a feature assessment 31 of the medical imaging 30.
  • medical imaging 30 encompasses a visualization of a portion or an entirety of a body for use in numerous and various procedures including, but not limited to, clinical diagnostic procedures, interventions/surgical procedures and patient monitoring.
  • Examples of medical imaging 30 include, but are not limited to, two-dimensional (2D) medical imaging and three-dimensional (3D) medical imaging of an anatomical region/organ represented by raw imaging data generated by a medical imaging machine (e.g., 2D/3D US, 2D/3D X-Ray, CT, MRI, PET, SPECT and DOT) or represented by a medical image of the anatomical region/organ reconstructed from the raw imaging data.
  • feature assessment 31 encompasses a prediction or a classification of one or more features illustrated within the medical imaging of the body in terms of an identification and/or a characterization of the feature(s). More particularly, a feature encompasses any type of object identifiable and/or characterizable within a medical image by one or more trained machine learning models including, but not limited to, anatomical objects (e.g., vessels, organs, etc.), foreign objects (e.g., procedural tools/instruments and implanted devices) and image artifacts (e.g., noise, grating lobes).
  • Examples of a classification of a feature illustrated within the medical imaging 30 include, but are not limited to, classifying (1) anatomy (e.g., grey or white matter of the brain), (2) foreign objects (e.g., a guidewire, an implanted stent or valve), (3) lesion malignancy (e.g., a skin lesion being benign or malignant), (4) a certain disease (e.g., positive or negative for diabetic retinopathy), (5) a multi-categorical lesion classification (e.g., normal, glioblastoma, sarcoma and brain metastatic tumor) and (6) variances within a particular tissue (e.g., stenosis in a vessel, blood pooling in a tissue/organ and tissue stiffness variances).
  • Examples of a prediction of a feature illustrated within the medical imaging 30 include, but are not limited to, predicting (1) whether a mass is non-cancerous (a negative diagnosis) or cancerous (a positive diagnosis), such as, for example, risk scores for a lung nodule and (2) patient outcomes, such as, for example, a high chance and a low chance of a five-year disease-specific survival, for instance, of a patient diagnosed with colorectal cancer.
  • salient visualization stage S24 involves a display of one or more medical images 32 and salient images 33.
  • a medical image 32 encompasses any type of image acquired by a medical imaging machine (e.g., a 2D/3D ultrasound machine, a 2D/3D X-ray machine, a CT machine, a MRI machine, a PET machine, a SPECT machine and a DOT machine).
  • Examples of a medical image 32 include, but are not limited to, (1) an image frame or a video as acquired by a medical imaging machine, (2) an image segmentation of feature(s) illustrated within the medical imaging 30 and (3) an image overlay of an image frame/video acquired by one type of medical imaging machine onto an image frame/video acquired by a different type of medical imaging machine (e.g., an overlay of a 2D ultrasound image onto a 2D/3D X-ray image).
  • a medical image 32 may be displayed as a planar view or a volumetric view of the medical imaging 30.
  • a salient image 33 encompasses an illustration of feature assessment 31.
  • Examples of a salient image 33 include, but are not limited to, a heatmap, a feature segmentation and an activation diagram.
  • a heatmap encompasses a planar or a volumetric graphical representation of feature assessment 31 utilizing a color scheme (e.g., grayscale and visible spectrum) to highlight any relevancy differential between features to the feature assessment 31 of medical imaging 30.
  • a feature segmentation encompasses a planar or a volumetric partitioning of feature(s) of a particular minimal relevancy to feature assessment 31 of medical imaging 30 whereby the feature(s) may be displayed individually or highlighted within a displayed medical image 32.
  • an activation diagram encompasses a planar view of an n x m matrix of outputs of the machine learning model(s) (e.g., filtered outputs of neural networks) arranged in order of relevancy (e.g., left to right or top to bottom), n > 1 and m > 1.
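A minimal sketch of such an activation diagram, assuming hypothetical filter outputs and using mean activation as the relevancy ordering:

```python
import numpy as np

def activation_diagram(filter_outputs, n=4, m=8):
    """Tile the n*m most relevant filter outputs into one image, ordered
    left-to-right and top-to-bottom by mean activation (the relevancy proxy)."""
    relevance = filter_outputs.mean(axis=(1, 2))      # one score per filter
    order = np.argsort(relevance)[::-1][: n * m]      # most relevant first
    h, w = filter_outputs.shape[1:]
    grid = np.zeros((n * h, m * w), dtype=filter_outputs.dtype)
    for rank, k in enumerate(order):
        row, col = divmod(rank, m)
        grid[row * h:(row + 1) * h, col * w:(col + 1) * w] = filter_outputs[k]
    return grid

maps = np.random.rand(64, 16, 16)    # hypothetical filtered outputs of a layer
diagram = activation_diagram(maps)   # a 4 x 8 diagram of 16x16 maps
```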
  • the present disclosure provides for a relevancy reslicing and a user specified reslicing of a volumetric salient image 33.
  • a relevancy reslicing of a volumetric salient image encompasses a reslicing of a planar salient image from the volumetric salient image based on a relevancy level of each feature to a feature assessment 31 of the volumetric medical image by the machine learning model(s). More particularly, a salient voxel of a volumetric salient image has a relevancy level exceeding a relevancy threshold, and a plurality of salient voxels will define a salient view of the assessed features of medical imaging 30.
  • a center point and an orientation of the resliced planar salient image within the volumetric salient image is derived from an intersection of a resliced salient plane with the volumetric salient image that includes one or more salient voxels.
  • the center point and the orientation of the resliced planar salient image within the volumetric salient image is derived from an intersection of a resliced salient plane with the volumetric salient image that includes a highest spatial distribution of salient voxels, a highest summation of salient voxels, or a highest average of salient voxels relative to a coordinate plane of the volumetric salient image.
  • the location and the orientation of the resliced planar salient image within the volumetric salient image is derived from an intersection of a resliced salient plane with the volumetric salient image based on a coordinate plane of the volumetric salient image (i.e., the XY plane, the XZ plane or the YZ plane).
  • the location and the orientation of the resliced planar salient image within the volumetric salient image is derived from an intersection of a resliced salient plane with the volumetric salient image that includes a highest spatial distribution of salient voxels, a highest summation of salient voxels, or a highest average of salient voxels relative to the coordinate plane of the volumetric salient image.
  • a location and an orientation of the resliced planar salient image within the volumetric salient image is derived from an intersection of a resliced salient plane with the volumetric salient image based on a centroid of salient voxels. More particularly, the location and the orientation of the resliced planar salient image within the volumetric salient image is derived from an intersection of a resliced salient plane with the volumetric salient image that includes a highest spatial distribution of salient voxels, a highest summation of salient voxels, or a highest average of salient voxels relative to the centroid of the salient voxels.
  • a user specified reslicing of a volumetric salient image encompasses a reslicing of a planar salient image from the volumetric salient image based on a user specified center point or orientation of the planar salient image that includes one or more salient voxels. More particularly, the location and the orientation of the resliced planar salient image within the volumetric salient image is derived from an intersection of a resliced salient plane with the volumetric salient image that includes a highest spatial distribution of salient voxels, a highest summation of salient voxels, or a highest average of salient voxels based on the user specified center point or orientation of the planar salient image.
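A minimal numpy sketch of two of the reslicing alternatives above: selecting the coordinate-plane slice intersecting the highest summation of salient voxels, and placing the plane through the centroid of the salient voxels. The threshold and heatmap are hypothetical, and only axis-aligned planes are handled here.

```python
import numpy as np

def relevancy_reslice(salient_volume, threshold, axis=2):
    """Coordinate-plane reslicing: pick the slice (parallel to the XY plane
    for axis=2) intersecting the highest summation of salient voxels, i.e.
    voxels whose relevancy level exceeds the relevancy threshold."""
    salient_voxels = salient_volume > threshold
    other = tuple(i for i in range(3) if i != axis)
    index = int(np.argmax(salient_voxels.sum(axis=other)))
    return index, np.take(salient_volume, index, axis=axis)

def centroid_reslice(salient_volume, threshold, axis=2):
    """Centroid-based reslicing: place the plane through the centroid of the
    salient voxels along the chosen axis (volume midpoint if none exceed)."""
    coords = np.argwhere(salient_volume > threshold)
    if coords.size == 0:
        index = salient_volume.shape[axis] // 2
    else:
        index = int(round(coords[:, axis].mean()))
    return index, np.take(salient_volume, index, axis=axis)

heatmap = np.random.rand(32, 32, 32)     # hypothetical volumetric salient image
i_sum, plane_sum = relevancy_reslice(heatmap, threshold=0.9)
i_cen, plane_cen = centroid_reslice(heatmap, threshold=0.9)
```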
  • salient manipulation stage S26 involves a user interaction with the artificial intelligence engine, via a graphical user interface (GUI), to manipulate a salient visualization of the feature assessment 31 of medical imaging 30.
  • the salient visualization of feature assessment 31 of medical imaging 30 encompasses a static or an interactive display of salient image(s) 33 and optionally of medical images 32 in a manner supporting the underlying procedure, and the manipulation of the salient visualization of medical imaging 30 encompasses a medical imaging interaction 34 with the artificial intelligence engine to mask or revise medical imaging 30, or a feature assessment interaction 35 to vary relevancy thresholds for the features of medical imaging 30 or to hypothesize alternative feature assessments 31 of medical imaging 30.
  • a displayed GUI enables a clinician to interact with a planar medical image or a volumetric medical image directly and see how the artificial intelligence engine responds.
  • a clinician may analyze salient image(s) 33 to ascertain if one or more particular area(s) of a planar medical image or a volumetric medical image is(are) irrelevant to a current feature assessment 31.
  • the clinician may mask the irrelevant area(s) and view the salient image(s) to see the impact such masking has on the feature assessment 31 of medical imaging 30.
  • a displayed GUI enables a clinician to test the influence of each input into the artificial intelligence engine on feature assessment 31 of medical imaging 30 as visualized by salient image(s) 33.
  • Examples of a data revision include, but are not limited to, (1) enabling/disabling a medical imaging input or a combination of medical imaging inputs of the artificial intelligence engine, (2) increasing/decreasing the pixel/voxel intensity value of a medical imaging input or a combination of medical imaging inputs of the artificial intelligence engine, (3) enabling/disabling an auxiliary information input or a combination of auxiliary information inputs of the artificial intelligence engine, or (4) altering an auxiliary information input or a combination of auxiliary information inputs of the artificial intelligence engine.
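A toy sketch of the masking interaction described above, with a stand-in model: a clinician-selected region is zeroed out and the feature assessment is re-rendered so the impact of the masking can be inspected. The model, volume and region are all hypothetical placeholders.

```python
import numpy as np

def masked_assessment(model, volume, mask):
    """Zero out a clinician-selected region (data masking) and re-render the
    feature assessment so the impact of the masking can be inspected."""
    baseline = model(volume)
    masked = volume.copy()
    masked[mask] = 0.0
    return baseline, model(masked)

model = lambda v: float(v.mean())          # stand-in for the AI engine
volume = np.random.rand(64, 64, 64)        # hypothetical volumetric image
mask = np.zeros(volume.shape, dtype=bool)
mask[20:40, 20:40, 20:40] = True           # region judged irrelevant by the clinician
before, after = masked_assessment(model, volume, mask)
print(before, after)                       # change reveals the region's influence
```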
  • a displayed GUI enables a clinician setting of a relevance level of the pixels of a planar medical image to the feature assessment 31 of the planar medical image or a clinician setting of a relevance level of the voxels of a volumetric medical image to the feature assessment 31 of the volumetric medical image.
  • the insight is that not all pixels of a planar medical image or all voxels of a volumetric medical image are equally relevant to the feature assessment 31, and clinicians typically focus on the feature(s) of the image they think are most suspicious.
  • the relevancy level of pixels of a planar medical image or voxels of a volumetric medical image may be set by a clinician to a value so that a feature of the image will be more distinctively highlighted within the salient image(s) 33 if this feature reaches that level of relevance.
  • a clinician may hypothesize between different predictions or different classifications of the feature assessment 31 of medical imaging 30 (e.g., a prediction/classification x and a prediction/classification y). For example, if the clinician is viewing the features that are most relevant to prediction/classification x of medical imaging 30 as visualized by salient image(s) 33 illustrative of prediction/classification x, then the clinician may select prediction/classification y via the GUI to see the features that are most relevant to prediction/classification y of medical imaging 30 as visualized by salient image(s) 33 illustrative of prediction/classification y of medical imaging 30.
  • medical imaging interaction 34 and feature assessment interaction 35 may be utilized individually or in combination.
  • Flowchart 20 will continually conditionally loop through stages S22-S26 until an associated procedure or a related phase of the associated procedure is terminated.
  • FIGS. 2A-2C teach various embodiments of an artificial intelligence engine and a medical image display engine of the present disclosure. From the description of FIGS. 2A-2C, those having ordinary skill in the art of the present disclosure will appreciate how to apply the present disclosure for making and using numerous and various additional embodiments of artificial intelligence engines and medical image display engines of the present disclosure.
  • artificial intelligence engine 40 renders a feature assessment 31 of medical imaging 30 as previously described for stage S22 of FIG. 1.
  • artificial intelligence engine 40 includes a data pre-processor 41, one or more machine learning models 42 and a salient image manager 43 for implementation of a feature assessment method represented by a flowchart 140 of FIG. 2B.
  • data pre-processor 41 processes planar medical imaging data 30a or volumetric medical imaging data 30b during a stage S142 of flowchart 140 by inputting data 30a/30b into the machine learning model(s) 42 to thereby render a feature assessment 31a during a stage S144 of flowchart 140.
  • data pre-processor 41 may further process auxiliary imaging information 30c from a medical imaging machine (e.g., a 2D/3D ultrasound machine, a 2D/3D X-ray machine, a CT machine, a MRI machine, a PET machine, a SPECT machine and a DOT machine) related to planar medical imaging data 30a or volumetric medical imaging data 30b (e.g., a radius and/or a sphericity of an anatomical organ segmented from reconstructed medical image 132a) for inputting information 30c into the machine learning model(s) 42 to thereby render the feature assessment 31a during stage S144.
  • auxiliary imaging information 30c may further include information related to the patient in terms of age, gender and health history.
  • Examples of processing medical image data 30a/30b and auxiliary imaging information 30c by data pre-processor 41 include, but are not limited to, an encoding, an embedding or a temporal sampling of the medical image data 30a/30b and auxiliary imaging information 30c.
  • data pre-processor 41 may be omitted from AI engine 40 if such pre-processing is unnecessary or previously performed prior to transmission of medical image data 30a/30b and optional auxiliary imaging information 30c to AI engine 40.
  • a machine learning model 42 is a classifying machine learning model or a predicting machine learning model including, but not limited to, (1) a deep neural network (e.g., a convolutional neural network, a recurrent neural network, etc.) and (2) a supervised learning machine (e.g., a linear or nonlinear support vector machine, a boosting classifier, etc.).
  • a machine learning model 42 may be trained to render feature assessment 31a of medical imaging data 30a/30b for any type of medical procedure, any type of medical imaging machine or any type of patient, or alternatively may be trained to render a feature assessment of medical imaging data 30a/30b for one or more particular medical procedures or one or more particular types of medical imaging machines, whereby the machine learning model 42 may be further trained for one or more particular types of patients.
  • AI engine 40 may include a single machine learning model 42 or a plurality of similar and/or dissimilar machine learning models 42 including, but not limited to, (1) a distributed arrangement of individual machine learning models 42, (2) a network of two or more similar and/or dissimilar deep neural networks or (3) a network of two or more similar and/or dissimilar supervised learning machines.
  • AI engine 40 inputs the (pre-processed) medical imaging data 30a/30b to render a feature assessment 31a of the medical imaging data 30a/30b during a stage S144 of flowchart 140 as previously described for feature assessment stage S22 of FIG. 1.
  • upon an initial execution of stage S144, during a stage S146 of flowchart 140, salient image manager 43 generates salient image data 36a representative of the relevance of each feature of medical imaging data 30a/30b to feature assessment 31a.
  • salient image data 36a is representative of the relevance of each feature of masked/revised medical imaging data 30a/30b to the new feature assessment 31a.
  • salient image data 36a provides a scaling of each pixel represented by planar medical imaging data 30a and each voxel of volumetric medical imaging data 30b from least relevant pixel(s)/voxel(s) (e.g., 0) to most relevant pixel(s)/voxel(s) (e.g., 169).
  • the scaling enables salient image generator 53 to generate a heatmap or a feature map.
  • alternatively, salient image data 36a provides filter outputs ranging from least relevant filter output(s) to most relevant filter output(s).
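A small sketch of such a scaling, mapping raw relevance values onto the 0 (least relevant) to 169 (most relevant) importance scale used in the heatmap examples later in this disclosure; the raw map here is hypothetical.

```python
import numpy as np

def to_importance_scale(relevance, top=169):
    """Rescale a raw relevancy map onto the integer importance scale
    (0 = least relevant pixel/voxel, 169 = most relevant)."""
    lo, hi = float(relevance.min()), float(relevance.max())
    if hi == lo:                       # degenerate map: uniform relevance
        return np.zeros(relevance.shape, dtype=np.uint8)
    return np.rint((relevance - lo) / (hi - lo) * top).astype(np.uint8)

raw = np.random.randn(64, 64, 64)      # hypothetical raw relevancy values
importance = to_importance_scale(raw)  # integers in 0..169
```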
  • a deep neural network encompasses multiple layers of activation neurons (i.e., feature elements) interconnected and trained for feature extraction and transformation to thereby calculate complex mappings between (pre-processed) medical imaging data 30a/30b (e.g., 2D pixels or 3D voxels)/auxiliary information 30c (if applicable) and a prediction or a classification via one or more output neurons (i.e., prediction elements).
  • Each activation neuron applies a nonlinear activation function to a weighted linear combination of inputs (e.g., 2D pixels, 3D voxels, an upstream activation neuron output or a downstream activation neuron output).
  • the parameters of importance to a deep neural network are the structure of the connective neuron network, the nonlinear activation functions and weights of the activation neurons. Also of importance is a capability to execute any type of relevancy mapping for ascertaining an explanation of the prediction or the classification whereby such relevancy mapping may be used to generate salient image data 36a of the present disclosure (e.g., back propagation, guided back propagation, deconvolution, class activation mapping, gradient class activation mapping, etc.). For example, detected class activations may be projected back through the network to the input pixel/voxel space to detect which parts of the input medical image 30a/30b were most relevant to the prediction or classification.
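As one concrete, non-limiting instance of such a relevancy mapping, the sketch below applies gradient class activation mapping to a toy 3D network in PyTorch: the class activation is projected back to the last convolutional layer, channel gradients are pooled into weights, and the weighted channel sum is rectified and upsampled. The network, shapes and class labels are placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Tiny3DNet(nn.Module):
    """Toy 3D ConvNet stand-in; the mapping below works for any trained net."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2))
        self.head = nn.Linear(16, 2)   # e.g., negative/positive classes

    def forward(self, x):
        acts = self.features(x)                     # (B, 16, D/4, H/4, W/4)
        return self.head(acts.mean(dim=(2, 3, 4))), acts

net = Tiny3DNet().eval()
volume = torch.randn(1, 1, 32, 32, 32)             # hypothetical 3D input
logits, acts = net(volume)
score = logits[0, int(logits.argmax())]            # class activation to explain

# Project the class activation back toward the input space: pool the gradients
# of the score w.r.t. the last-layer activations into per-channel weights,
# then form the rectified weighted sum of channels (Grad-CAM).
grads, = torch.autograd.grad(score, acts)
weights = grads.mean(dim=(2, 3, 4), keepdim=True)  # (1, 16, 1, 1, 1)
cam = F.relu((weights * acts).sum(dim=1, keepdim=True))

# Upsample the coarse 3D localization map to input resolution for display.
cam = F.interpolate(cam, size=volume.shape[2:], mode="trilinear",
                    align_corners=False)           # (1, 1, 32, 32, 32)
```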
  • salient image manager 43 executes the aforementioned relevancy mapping based on default feature relevancy thresholds for each activation to thereby generate the salient image data 36a for the rendered feature assessment 31a.
  • a support vector machine as known in the art of the present disclosure encompasses support vectors trained for delineating a hyperplane (i.e., a prediction element) for linearly or nonlinearly separating feature vectors (i.e., feature elements) of medical imaging data 30a/30b (e.g., 2D pixels or 3D voxels)/auxiliary information 30c (if applicable) into one of two classes.
  • linear support vector machines in particular learn a linear discriminant function whereby the weights of the linear discriminant function may be utilized to assign a score to each 2D pixel or 3D voxel to thereby support a rendering of feature assessment 31a.
  • salient image manager 43 may utilize the 2D pixel/3D voxel scoring in conjunction with a nearest neighbor algorithm relative to the default feature relevancy thresholds to thereby generate the salient image data 36a for the rendered feature assessment 31a.
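A toy sketch of the linear-SVM scoring just described: the learned discriminant assigns one weight per pixel, and the magnitude of each pixel's weighted contribution is taken as its relevancy score. The data and labels below are synthetic, and the scoring rule is a simplified stand-in.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16 * 16))            # 100 hypothetical 16x16 images
y = (X[:, :128].mean(axis=1) > 0).astype(int)  # synthetic two-class labels

svm = LinearSVC().fit(X, y)                    # learns one weight per pixel

# The magnitude of each pixel's weighted contribution to the discriminant
# function serves as that pixel's relevancy score for the salient image.
image = X[0]
pixel_scores = np.abs(svm.coef_[0] * image).reshape(16, 16)
```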
  • upon generation of salient image data 36a during stage S146, salient image manager 43 proceeds to a stage S148 of flowchart 140 whereby salient image manager 43 returns to stage S142 upon receipt of a data masking specification 34a or a data revising specification 34b, or to stage S146 upon receipt of a feature relevancy threshold specification 35a or an alternative feature assessment specification 35b.
  • salient image manager 43 directs data pre-processor 41 to perform a data masking of planar medical image data 30a or volumetric medical image data 30b as directed by data masking specification 34a as previously described for salient manipulation stage S26 of FIG. 1, or a data revision of planar medical image data 30a or volumetric medical image data 30b as directed by data revising specification 34b as previously described for salient manipulation stage S26 of FIG. 1. Subsequently, stage S144 is repeated to render a new feature assessment 31a based on the data masking or the data revision, and stage S146 is repeated to generate new salient image data 36a representative of the relevance of each feature of the masked/revised medical imaging data 30a/30b to the new feature assessment 31a.
  • salient image manager 43 generates salient image data 36a based on the feature relevancy thresholds specified by feature relevancy threshold specification 35a as previously described for salient manipulation stage S26 of FIG. 1 or the alternative feature assessment specified by alternative feature assessment specification 35b as previously described for salient manipulation stage S26 of FIG. 1.
  • FIGS. 3A-3J illustrate exemplary embodiments of trained convolutional neural networks (CNNs) for rendering a feature assessment of a prediction or a classification of planar medical image data 30a or volumetric medical image data 30b.
  • the convolutional neural networks employ one or more convolution/rectified linear unit (ReLU)/pooling layers and a fully connected layer to thereby output the feature assessment (e.g., a convolutional neural network (CNN) having four (4) convolutional layers trained to provide a particular type of prediction including, but not limited to, tumor classification, vessel segmentation, etc.).
  • a fully connected layer of a CNN is connected to or replaced with a support vector machine (CNN/SVM).
  • a CNN of the present disclosure is trained on 2D or 3D medical imaging of a body encompassing the entire range of prediction scores or the entire set of classifications. Additionally, a CNN of the present disclosure is structured to output salient image data for purposes of generating a planar salient image or a volumetric salient image as will be further explained in the present disclosure.
  • the salient image data is the filter outputs of the last convolutional layer of the CNN.
  • a CNN of the present disclosure may process medical imaging interaction data indicative of (1) a clinician masking of an area of a planar medical image or a volumetric medical image to see the impact of data masking on the feature assessment as previously described for stage S26 of FIG. 1, or (2) a clinician enabling/disabling and/or variance of image and auxiliary information inputs to the CNN to see the impact of data revision on the feature assessment as previously described for stage S26 of FIG. 1.
  • a CNN of the present disclosure may process feature assessment interaction data indicative of (1) a clinician setting of relevancy levels of the pixels of the planar medical imaging data or the voxels of the volumetric medical imaging data for generating salient image(s) as previously described for stage S26 of FIG. 1 or (2) a clinician selection between different hypotheses (e.g., x and y) as previously described for stage S26 of FIG. 1.
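For concreteness, here is a sketch of a CNN of the general shape described above (four convolution/ReLU/pooling blocks plus a fully connected layer), written in PyTorch and also returning the last-layer filter outputs as candidate salient image data. The layer sizes and class count are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class AssessmentCNN(nn.Module):
    """Four convolution/ReLU/pooling blocks plus a fully connected layer;
    also returns the last-layer filter outputs as candidate salient data."""
    def __init__(self, classes=2):
        super().__init__()
        chans = [1, 16, 32, 64, 64]
        blocks = []
        for cin, cout in zip(chans, chans[1:]):     # 4 conv/ReLU/pool blocks
            blocks += [nn.Conv2d(cin, cout, 3, padding=1),
                       nn.ReLU(), nn.MaxPool2d(2)]
        self.conv = nn.Sequential(*blocks)
        self.fc = nn.Linear(64, classes)            # fully connected layer

    def forward(self, x):
        filters = self.conv(x)                      # last-layer filter outputs
        return self.fc(filters.mean(dim=(2, 3))), filters

logits, filters = AssessmentCNN()(torch.randn(1, 1, 64, 64))
print(logits.shape, filters.shape)                  # (1, 2) (1, 64, 4, 4)
```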
  • a 2D CNN 242a is trained as known in the art of the present disclosure on a variety of classes of interpreted medical images of prior patients to render a feature assessment 231a of the medical imaging of a body represented by planar medical imaging data 30a. Additionally, 2D CNN 242a is operated via a relevancy mapping by salient image manager 43 to output 2D salient image data 236a for purposes of generating a planar salient image illustrative of feature assessment 231a (e.g., a 2D heatmap, a 2D feature map or an activation map). Further, 2D CNN 242a is operated via GUI(s) as previously described in the present disclosure in accordance with medical imaging interaction data 34c or feature assessment interaction data 35c.
  • a 2D CNN/SVM 242b is trained as known in the art of the present disclosure on a variety of classes of interpreted medical images of prior patients to render a feature assessment 231b of the medical imaging of a body represented by planar medical imaging data 30a.
  • 2D CNN/SVM 242b is operated via a relevancy mapping by salient image manager 43 to output 2D salient image data 236b for purposes of generating a planar salient image illustrative of feature assessment 231b (e.g., a 2D heatmap, a 2D feature map or an activation map). Further, 2D CNN/SVM 242b is operated via GUI(s) as previously described in the present disclosure in accordance with medical imaging interaction data 34c or feature assessment interaction data 35c.
  • a 3D CNN 242c is trained as known in the art of the present disclosure on a variety of classes of interpreted medical images of prior patients to render a feature assessment 231c of the medical imaging of a body represented by volumetric medical imaging data 30b.
  • 3D CNN 242c is operated via a relevancy mapping by salient image manager 43 to output 3D salient image data 236c for purposes of generating a volumetric salient image illustrative of feature assessment 231c (e.g., a 3D heatmap, a 3D feature map or an activation map).
  • 3D CNN 242c is operated via GUI(s) as previously described in the present disclosure in accordance with medical imaging interaction data 34d or feature assessment interaction data 35d.
  • a 3D CNN/SVM 242d is trained as known in the art of the present disclosure on a variety of classes of interpreted medical images of prior patients to render a feature assessment 231d of the medical imaging of a body represented by volumetric medical imaging data 30b. Additionally, 3D CNN/SVM 242d is operated via a relevancy mapping by salient image manager 43 to output 3D salient image data 236d for purposes of generating a volumetric salient image illustrative of feature assessment 231d (e.g., a 3D heatmap, a 3D feature map or an activation map).
  • 3D CNN/SVM 242d is operated via GUI(s) as previously described in the present disclosure in accordance with medical imaging interaction data 34d or feature assessment interaction data 35d.
  • the 2D CNN 242a of FIG. 3A serves as a secondary CNN providing feature assessment 231a and salient image data 236a as inputs to a 2D CNN 242e serving as a primary CNN.
  • 2D CNN 242e is trained as known in the art of the present disclosure on a variety of classes of interpreted medical images of prior patients and feature assessment 231a to render a feature assessment 231e of the medical imaging of a body as represented by planar medical imaging data 30a.
  • 2D CNN 242e is operated via a relevancy mapping by salient image manager 43 to output 2D salient image data 236e for purposes of generating a planar salient image illustrative of feature assessment 231e (e.g., a 2D heatmap, a 2D feature map or an activation map).
  • 2D CNN 242e is operated via GUI(s) as previously described in the present disclosure in accordance with medical imaging interaction data 34e or feature assessment interaction data 35e.
  • 3D CNN 242c serves as a secondary CNN providing feature assessment 231c and salient image data 236c as inputs to a 3D CNN 242f serving as a primary CNN.
  • 3D CNN 242f is trained as known in the art of the present disclosure on a variety of classes of interpreted medical images of prior patients and feature assessment 231c to render a feature assessment 231f of the medical imaging of a body represented by volumetric medical imaging data 30b.
  • 3D CNN 242f is operated via a relevancy mapping to output 3D salient image data 236f for purposes of generating a volumetric salient image illustrative of feature assessment 231f (e.g., a 3D heatmap, a 3D feature map or an activation map).
  • 3D CNN 242f is operated via GUI(s) as previously described in the present disclosure in accordance with medical imaging interaction data 34f or feature assessment interaction data 35f.
  • 2D CNN 242g is trained as known in the art of the present disclosure on a class of positively interpreted medical images of prior patients (e.g., images of cancerous patients, images of a presence of a surgical tool within an anatomical region, etc.) to render a preliminary feature assessment 231g of the medical imaging of a body represented by planar medical imaging data 30a.
  • 2D CNN 242g is operated via a relevancy mapping by salient image manager 43 to output 2D salient image data 236g for purposes of generating a planar salient image illustrative of feature assessment 231g (e.g., a 2D heatmap, a 2D feature map or an activation map).
  • 2D CNN 242g is operated via GUI(s) as previously described in the present disclosure in accordance with medical imaging interaction data 34g or feature assessment interaction data 35g.
  • 2D CNN 242h is trained as known in the art of the present disclosure on a class of negatively interpreted medical images of prior patients (e.g., images of non-cancerous patients, images of an absence of a surgical tool within an anatomical region, etc.) to render a preliminary feature assessment 231h of the medical imaging of a body represented by planar medical imaging data 30a. Additionally, 2D CNN 242h is operated via a relevancy mapping by salient image manager 43 to output 2D salient image data 236h for purposes of generating a planar salient image illustrative of feature assessment 231h (e.g., a 2D heatmap, a 2D feature map or an activation map). Further, 2D CNN 242h is operated via GUI(s) as previously described in the present disclosure in accordance with medical imaging interaction data 34h or feature assessment interaction data 35h.
  • a classifier/predictor 242i is structured to generate a final feature assessment 231i of the medical imaging of a body represented by planar medical imaging data 30a.
  • classifier/predictor 242i may implement any technique as known in the art of the present disclosure for combining preliminary feature assessment 231g and preliminary feature assessment 231h to render a final feature assessment 231i.
  • classifier/predictor 242i may average risk scores associated with preliminary feature assessment 231g and preliminary feature assessment 231h to thereby render final feature assessment 231i.
  • classifier/predictor 242i may implement a fully connected layer on behalf of networks 242g and 242h.
  • classifier/predictor 242i is further structured to output salient image data 236i for purposes of generating a planar salient image illustrative of feature assessment 231i (e.g., a 2D heatmap, a 2D feature map or an activation map).
  • classifier/predictor 242i may implement any technique as known in the art of the present disclosure for combining filter outputs of the convolutional layers of networks 242g and 242h to produce salient image data 236i.
  • classifier/predictor 242i may average the filter outputs of the last convolutional layers of networks 242g and 242h to produce salient image data 236i.
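A minimal sketch of this averaging combination for a parallel network: preliminary risk scores are averaged into a final assessment, and last-layer filter outputs are averaged into combined salient image data. The shapes and values below are hypothetical stand-ins for the outputs of a positively and a negatively trained network.

```python
import numpy as np

def combine_parallel(score_pos, score_neg, filters_pos, filters_neg):
    """Average the preliminary risk scores into a final feature assessment and
    average the last-layer filter outputs into combined salient image data."""
    final_score = 0.5 * (score_pos + score_neg)
    combined_filters = 0.5 * (filters_pos + filters_neg)   # same shapes assumed
    return final_score, combined_filters

# Hypothetical outputs of a positively and a negatively trained network.
score, salient_data = combine_parallel(0.82, 0.35,
                                       np.random.rand(64, 16, 16),
                                       np.random.rand(64, 16, 16))
```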
  • 3D CNN 242j is trained as known in the art of the present disclosure on a class of positively interpreted medical images of prior patients (e.g., images of cancerous patients, images of a presence of a surgical tool within an anatomical region, etc.) to render a preliminary feature assessment 231j of the medical imaging of a body represented by volumetric medical imaging data 30b.
  • 3D CNN 242j is operated via a relevancy mapping by salient image manager 43 to output 3D salient image data 236j for purposes of generating a volumetric salient image illustrative of feature assessment 231j (e.g., a 3D heatmap, a 3D feature map or an activation map).
  • 3D CNN 242j is operated via GUI(s) as previously described in the present disclosure in accordance with medical imaging interaction data 34j or feature assessment interaction data 35j.
  • 3D CNN 242k is trained as known in the art of the present disclosure on a class of negatively interpreted medical images of prior patients (e.g., images of non-cancerous patients, images of an absence of a surgical tool within an anatomical region, etc.) to render a preliminary feature assessment 231k of the medical imaging of a body represented by volumetric medical imaging data 30b.
  • Additionally, 3D CNN 242k is operated via a relevancy mapping by salient image manager 43 to output 3D salient image data 236k for purposes of generating a volumetric salient image illustrative of feature assessment 231k (e.g., a 3D heatmap, a 3D feature map or an activation map). Further, 3D CNN 242k is operated via GUI(s) as previously described in the present disclosure in accordance with medical imaging interaction data 34k or feature assessment interaction data 35k.
  • a classifier/predictor 242l is structured to generate a final feature assessment 231l of the medical imaging of a body represented by volumetric medical imaging data 30b.
  • classifier/predictor 242l may implement any technique as known in the art of the present disclosure for combining preliminary feature assessment 231j and preliminary feature assessment 231k to render a final feature assessment 231l.
  • classifier/predictor 242l may average risk scores associated with preliminary feature assessment 231j and preliminary feature assessment 231k to thereby render final feature assessment 231l.
  • classifier/predictor 242l may implement a fully connected layer on behalf of networks 242j and 242k.
  • classifier/predictor 242l is further structured to output salient image data 236l for purposes of generating a volumetric salient image (e.g., a 3D heatmap, a 3D feature map or an activation map).
  • classifier/predictor 242l may implement any technique as known in the art of the present disclosure for combining filter outputs of the convolutional layers of networks 242j and 242k to produce salient image data 236l.
  • classifier/predictor 242l may average the filter outputs of the last convolutional layers of networks 242j and 242k to produce salient image data 236l.
  • classifier/predictor 242m is structured to generate a final feature assessment 231m of the medical imaging of a body represented by planar medical imaging data 30a.
  • classifier/predictor 242m may implement any technique as known in the art of the present disclosure for combining a preliminary feature assessment 231a from 2D CNN 242a (FIG. 3A), preliminary feature assessment 231g from 2D CNN 242g (FIG. 3G) and preliminary feature assessment 231h from 2D CNN 242h (FIG. 3G) to render final feature assessment 231m.
  • classifier/predictor 242m may average risk scores associated with preliminary feature assessment 231a, preliminary feature assessment 231g and preliminary feature assessment 231h to thereby render final feature assessment 231m.
  • classifier/predictor 242m may implement a fully connected layer on behalf of networks 242a, 242g and 242h.
  • classifier/predictor 242m is further structured to output salient image data 236m for purposes of generating a planar salient image illustrative of feature assessment 231m (e.g., a 2D heatmap, a 2D feature map or an activation map).
  • classifier/predictor 242m may implement any technique as known in the art of the present disclosure for combining filter outputs of the convolutional layers of networks 242a, 242g and 242h to produce salient image data 236m.
  • classifier/predictor 242m may average the filter outputs of the last convolutional layers of networks 242a, 242g and 242h to produce salient image data 236m.
  • classifier/predictor 242n is structured to generate a final feature assessment 231n of the medical imaging of a body represented by volumetric medical imaging data 30b.
  • classifier/predictor 242n may implement any technique as known in the art of the present disclosure for combining a preliminary feature assessment 231c from 3D CNN 242c (FIG. 3C), preliminary feature assessment 231j from 3D CNN 242j (FIG. 3H) and preliminary feature assessment 231k from 3D CNN 242k (FIG. 3H) to render final feature assessment 231n.
  • classifier/predictor 242n may average risk scores associated with preliminary feature assessment 231c, preliminary feature assessment 231j and preliminary feature assessment 231k to thereby render final feature assessment 231n.
  • classifier/predictor 242n may implement a fully connected layer on behalf of networks 242c, 242j and 242k.
  • classifier/predictor 242n is further structured to output salient image data 236n for purposes of generating a volumetric salient image illustrative of feature assessment 231n (e.g., a 3D heatmap, a 3D feature map or an activation map).
  • classifier/predictor 242n may implement any technique as known in the art of the present disclosure for combining filter outputs of the convolutional layers of networks 242c, 242j and 242k to produce salient image data 236n.
  • classifier/predictor 242n may average the filter outputs of the last convolutional layers of networks 242c, 242j and 242k to produce salient image data 236n.
  • medical image display engine 50 controls a display of medical image(s) 32 and salient image(s) 33 as previously described for stage S24 of FIG. 1.
  • medical image display engine 50 includes an image viewer 51, a medical image generator 52, and a salient image generator 53 for implementation of a salient visualization method represented by a flowchart 150 of FIG. 2C.
  • planar medical imaging data 30a or volumetric medical imaging data 30b may be received by medical image display engine 50 in viewable form. Subsequent to such receipt, autonomously or via clinician activation, image viewer 51 proceeds to implement a display of medical images represented by planar medical imaging data 30a or volumetric medical imaging data 30b, and further provides an image navigation function (e.g., zoom in, zoom out, rotation, etc.) and an image annotation function for a clinician viewing the displayed medical images.
  • medical image generator 52 may, autonomously or via clinician activation, implement (1) a volume rendering of a series of planar medical images represented by planar medical imaging data 30a, (2) an image segmentation of a volumetric medical image represented by volumetric medical imaging data 30b or by a volume rendering, or (3) any other techniques known in the art for generating additional medical images.
  • planar medical imaging data 30a or volumetric medical imaging data 30b may alternatively be received by medical image display engine 50 in raw form. Subsequent to such receipt, medical image generator 52 may implement an image reconstruction technique on planar medical imaging data 30a or volumetric medical imaging data 30b. Subsequent to such image reconstruction, autonomously or via clinician activation, image viewer 51 proceeds to implement a display of the reconstructed medical images.
  • medical image generator 52 may, autonomously or via clinician activation, implement (1) a volume rendering of a series of reconstructed planar medical images, (2) an image segmentation of a volumetric medical image represented by a reconstructed volumetric medical image or by a volume rendering, or (3) any other techniques known in the art for generating additional medical images.
  • at a stage S154 of flowchart 150, feature assessment data 31a and salient image data 36a are received by medical image display engine 50.
  • image viewer 51 proceeds to display feature assessment data 31a in a textual format or a graphical format, and salient image generator 53 processes salient image data 36a to generate salient image(s) 33 for display by image viewer 51.
  • salient image data 36a is representative of the relevance of each feature of masked/revised medical imaging data 30a/30b to the new feature assessment 31a.
  • salient image data 36a provides a scaling of each pixel represented by planar medical imaging data 30a and each voxel of volumetric medical imaging data 40a from least relevant pixel(s)/voxel(s) (e.g., 0) to most relevant pixel(s)/voxel(s) (e.g., 169).
  • the scaling enables salient image generator 53 to generate a heatmap or a feature map.
  • salient image data 36a provides filter outputs ranging from least relevant filter output(s) to most relevant filter output(s).
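A minimal sketch of this least-to-most-relevant scaling, assuming a simple min-max normalization onto the 0 to 169 importance scale used in the figures below (the normalization choice is an assumption):

```python
import numpy as np

def to_importance_scale(salient_data, lo=0.0, hi=169.0):
    """Rescale raw relevance values so the least relevant pixel/voxel maps
    to lo and the most relevant maps to hi."""
    s = salient_data.astype(np.float64)
    span = s.max() - s.min()
    if span == 0:
        return np.full_like(s, lo)
    return lo + (s - s.min()) / span * (hi - lo)
```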
  • FIG. 4A shows an exemplary planar CT image 132a with associated intensity scale 137a ranging from 0 to 255, and a corresponding planar heatmap 133a with associated importance scale 137b ranging from 0 to 169.
  • each pixel of planar CT image 132a has been assigned an intensity value based on its relevancy level to the feature assessment 31a
  • an importance level of a pixel as displayed in planar heatmap 133a signifies the relevancy level of that pixel to feature assessment 31a
  • a pixel shown at a level of 0 within planar heatmap 133a signifies that this pixel has the least relevancy level to feature assessment 31a
  • a pixel shown at a level of 169 within planar heatmap 133a signifies that this pixel has the most relevancy level to feature assessment 31a.
  • FIG. 4B shows an exemplary volumetric CT image 132b with associated intensity scale 137a ranging from 0 to 255, and a corresponding volumetric heatmap 133b with associated importance scale 137b ranging from 0 to 169.
  • each voxel of volumetric CT image 132b has been assigned an intensity value based on its relevancy level to the feature assessment 31a
  • an importance level of a voxel as displayed in volumetric heatmap 133b signifies the relevancy level of that voxel to feature assessment 31a
  • a voxel shown at a level of 0 within volumetric heatmap 133b signifies that this voxel has the least relevancy level to feature assessment 31a
  • a voxel shown at a level of 169 within volumetric heatmap 133b signifies that this voxel has the most relevancy level to feature assessment 31a.
  • stage S154 further encompasses salient image generator 53 (FIG. 2A) defining reslicing salient planes intersecting with a volumetric heatmap to expose the most relevant or descriptive information for feature assessment 31a of the medical imaging 30, and image viewer 51 providing user interaction functions for manual exploration and navigation by a clinician within the volumetric space of the medical imaging 30 in view of relevance based guidance.
  • salient image generator 53 executes, autonomously or via a clinician activation, (1) a relevancy reslicing of a planar salient image from the volumetric salient image based on a relevancy level of each feature to a feature assessment 31 of the volumetric medical image by the machine learning model(s), or (2) a reslicing of a planar salient image from the volumetric salient image at a user specified center point or orientation.
  • stage S154 is premised on a salient voxel of a volumetric salient image having a relevancy level exceeding a relevancy threshold, whereby a plurality of salient voxels will define a salient view of the assessed features of volumetric medical imaging data 30b.
  • a center point and an orientation of the resliced planar salient image within the volumetric salient image are derived from an intersection of a resliced salient plane with the volumetric salient image that includes one or more salient voxels.
  • a relevancy reslicing involves an identification of salient voxels, a delineating of plane intersections including one or more salient voxels and then a selection of one of the plane intersections optimizing a view of salient voxels, such as, for example, a selection of the plane intersection having a highest spatial distribution of salient voxels, a highest summation of salient voxels, or a highest average of salient voxels (a minimal sketch follows this list).
  • a user specified reslicing involves an identification of salient voxels, a user specification of a center point or an orientation of a plane intersection, a delineating of plane intersections including one or more salient voxels and then a selection of one of the plane intersections optimizing a view of salient voxels, such as, for example, a selection of the plane intersection having a highest spatial distribution of salient voxels, a highest summation of salient voxels, or a highest average of salient voxels.
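A minimal sketch of the relevancy reslicing selection, assuming axis-aligned candidate YZ planes and the highest-summation criterion (a voxel count or a mean would realize the spatial-distribution or average criteria instead); all names are illustrative:

```python
import numpy as np

def best_yz_plane(salient_volume, threshold):
    """Identify salient voxels (relevancy above threshold), then pick the
    candidate YZ plane (a fixed x index) with the highest summation of
    salient relevancy."""
    salient = np.where(salient_volume > threshold, salient_volume, 0.0)
    per_plane_sum = salient.sum(axis=(1, 2))  # one score per x index
    return int(np.argmax(per_plane_sum))

# e.g., with the example relevancy threshold of 200:
# x = best_yz_plane(volumetric_salient_image, 200)
# resliced_plane = volumetric_salient_image[x, :, :]
```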
  • a volumetric salient image 230 includes a set 231 of rows of salient voxels having relevancy levels 232-235 of increasing intensity, with each relevancy level exceeding a relevancy threshold.
  • the relevancy threshold may be 200
  • relevancy level 232 may be 210
  • relevancy level 233 may be 225
  • relevancy level 234 may be 240
  • relevancy level 235 may be 255.
  • FIG. 5B illustrates an XY planar view of rows 231 of salient voxels, with the horizontal lines representing intersections of candidate reslice salient YZ planes with the volumetric salient image 230 that include one or more salient voxels and the vertical line representing an intersection of a candidate reslice salient XZ plane with the volumetric salient image 230 that includes one or more salient voxels.
  • reslice salient plane 236a or reslice salient plane 236b may be chosen in view of a highest spatial distribution of salient voxels, a highest summation of salient voxels, or a highest average of salient voxels.
  • FIG. 5C illustrates an XY planar view of rows 231 of salient voxels, with the horizontal lines representing intersections of candidate reslice salient YZ planes with the volumetric salient image 230 that include one or more salient voxels.
  • reslice salient plane 236c may be chosen in view of a highest spatial distribution of salient voxels, a highest summation of salient voxels, or a highest average of salient voxels.
  • FIG. 5D illustrates an XY planar view of rows 231 of salient voxels, with the black dot representing a centroid of the salient voxels and the lines representing intersections of candidate reslice salient YZ planes with the volumetric salient image 230 through the centroid that include one or more salient voxels.
  • reslice salient plane 236d may be chosen in view of a highest spatial distribution of salient voxels, a highest summation of salient voxels, or a highest average of salient voxels relative to the centroid.
  • the centroid may be calculated as an arithmetic mean of all the 3D points (p ∈ ℝ³) of the salient voxels.
  • FIG. 5E illustrates an XY planar view of rows 231 of salient voxels, with the black dot representing a user specified center point and the lines representing intersections of candidate reslice salient YZ planes with the volumetric salient image 230 through the user specified location that include one or more salient voxels.
  • reslice salient plane 236e may be chosen in view of a highest spatial distribution of salient voxels, a highest summation of salient voxels, or a highest average of salient voxels relative to the user specified center point.
  • FIG. 5F illustrates an XY planar view of rows 231 of salient voxels, with the black lines representing intersections of candidate reslice salient YZ planes with the volumetric salient image 230 at a user specified orientation that include one or more salient voxels.
  • reslice salient plane 236f may be chosen in view of a highest spatial distribution of salient voxels, a highest summation of salient voxels, or a highest average of salient voxels at the user specified orientation.
  • image viewer 51 may for example control a display of:
  • 3D volume renderings of a reconstructed medical image (e.g., 3D volumetric CT image 132b with associated intensity scale 137a) and of a corresponding salient visual volume image (e.g., 3D heatmap 133b with associated importance scale 137b).
  • a manual reslicing of a reconstructed medical image and of a corresponding salient visual volume image as defined by the user via a user input device (e.g., a touch screen, a mouse, a keyboard, augmented reality glasses, etc.).
  • a volume rendering is calculated using methods known in the art of the present disclosure, such as, for example, a ray casting algorithm, and shown in comparison to the input image.
  • a comparison to the input image may be achieved using side-by-side visualization or image blending.
  • an importance map may be resampled to the size of the input image using common interpolation methods.
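A minimal sketch of such resampling, assuming trilinear interpolation via SciPy (the choice of library and interpolation order is an assumption):

```python
from scipy import ndimage

def resample_importance_map(importance_map, target_shape):
    """Resample a (typically coarser) importance map to the input image
    size using order-1 (trilinear) interpolation."""
    factors = [t / s for t, s in zip(target_shape, importance_map.shape)]
    return ndimage.zoom(importance_map, factors, order=1)
```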
  • a reslicing plane orientation and position may be defined manually by the user via a GUI or automatically by salient image generator 53.
  • salient image generator 53 may position the reslicing plane in one of the standard axial, sagittal, and coronal views or according to principal planes along the highest variance (spread) in one of the most prominent salient features in the importance map (e.g., a salient feature 338 of 3D volumetric CT image 333 as shown in FIG. 6A). These most important salient feature(s) may be extracted by salient image generator 53 from the reconstructed medical image 132a in order to make the feature assessment. Principal planes are defined by a center point and a normal.
  • the center point is located at the centroid calculated from the locations of the most important salient feature(s) within the heatmap 133b (FIG. 4B).
  • the centroid may be calculated as an arithmetic mean of all the 3D points (p ∈ ℝ³) of the salient voxels. Normal vectors to the principal planes are defined by the three eigenvectors calculated from the salient voxels using Principal Component Analysis.
  • all reslicing planes may be either shown in a 3D context or as a 2D image, as well as compared to 3D volumetric CT image 132a that is also resliced at the same orientation.
  • voxels of salient feature 338 of 3D volumetric CT image 132a, as shown in FIG. 6A, are automatically extracted by medical imaging display engine 50 as one of the features with the highest values in the importance map that were prioritized by a 3D artificial intelligence engine in the decision-making process.
  • voxels of salient feature 338 are automatically extracted from the importance map using segmentation methods known in the art of the present disclosure, such as, for example, intensity-based thresholding or model-based segmentation.
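A minimal sketch of the intensity-based thresholding variant, returning the coordinates of the salient feature's voxels (the coordinate representation is an assumption):

```python
import numpy as np

def extract_salient_voxels(importance_map, threshold):
    """Return the (N, 3) coordinates of voxels whose importance exceeds
    the threshold, i.e., the extracted salient feature."""
    return np.argwhere(importance_map > threshold)
```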
  • FIG. 6B illustrates a reslicing 138a of 3D volumetric CT image 132a and a reslicing 139a of 3D salient visual image 133b at a first principal plane that is defined by a center point and a normal to salient feature 338 (FIG. 6A).
  • FIG. 6C illustrates a reslicing 138b of 3D volumetric CT image 132a and a reslicing 139b of 3D salient visual image 133b at a second principal plane that is defined by a center point and a normal to salient feature 338.
  • FIG. 6D illustrates a reslicing 138c of 3D volumetric CT image 132a and a reslicing 139c of 3D salient visual image 133b at a third principal plane to salient feature 338.
  • the center points of reslicings 139a-139c are located at a centroid calculated from the location of salient feature 338 within 3D salient visual image 133b.
  • the normal to the first principal plane is defined by the eigenvector with the second highest eigenvalue calculated from salient feature 338 using the Principal Component Analysis method known in the art of the present disclosure.
  • the normal to the second principal plane is defined by the eigenvector with the highest eigenvalue calculated from salient feature 338 using the Principal Component Analysis method known in the art of the present disclosure.
  • the normal to the third principal plane is defined by the eigenvector with the lowest eigenvalue calculated from salient feature 338 using the Principal Component Analysis method known in the art of the present disclosure.
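A minimal sketch of deriving the principal planes via Principal Component Analysis: the centroid is the arithmetic mean of the salient voxel locations and the candidate normals are the covariance eigenvectors. The mapping of the first/second/third planes to the second-highest, highest and lowest eigenvalues follows the description above; the sketch simply returns the normals sorted by descending eigenvalue, with all names illustrative:

```python
import numpy as np

def principal_planes(points):
    """points: (N, 3) locations of a salient feature's voxels. Returns the
    centroid and the three unit normals (rows), sorted by descending
    eigenvalue of the covariance matrix."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]        # descending
    return centroid, eigvecs[:, order].T
```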
  • a user may modify the position of the first principal plane, the second principal plane and the third principal plane (e.g., move the center point along its normal via a GUI).
  • salient image generator 53 may control a display of an activation diagram derived from feature elements assessed by AI engine 40.
  • salient image generator 53 may display each individual filter output in an activation diagram 433 of activation maps as shown in FIG. 7.
  • given a threshold of total weights (e.g., 90%), the activation maps are ranked by the top n activations associated with those top weights to thereby list the activation maps in descending order.
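A minimal sketch of this ranking, assuming a single vector of fully connected weights (one per filter) and a cumulative 90% absolute-weight threshold; the data layout is an assumption:

```python
import numpy as np

def top_activation_maps(weights, activations, weight_fraction=0.9):
    """Select the filters whose absolute weights account for weight_fraction
    of the total, then list their activation maps in descending weight order."""
    w = np.abs(np.asarray(weights, dtype=np.float64))
    order = np.argsort(w)[::-1]
    cum = np.cumsum(w[order]) / w.sum()
    top = order[: int(np.searchsorted(cum, weight_fraction)) + 1]
    return [(int(i), activations[i]) for i in top]
```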
  • a 3D salient visual image provides richer information on contours than a 2D salient visual image.
  • a user-specified grayscale pixel value visualizes a contour of an area of interest (e.g., a nodule), whereby the contour may be translated to clinical information that otherwise requires human annotation (e.g., an area of interest volume, a largest radius, the axis direction of the largest radius, the texture of the contour, etc.) to then generate text output.
  • a clinician reporting aspect of the present disclosure provides for automatic linking of generated text information 531 to images 30a, 133b and 132b.
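A minimal sketch of deriving such text output from a binary area-of-interest mask (the voxel spacing parameter and the report format are assumptions):

```python
import numpy as np

def describe_area_of_interest(mask, spacing=(1.0, 1.0, 1.0)):
    """From a binary mask of an area of interest (e.g., a nodule), derive
    clinical text that would otherwise need human annotation: volume and
    largest radius from the centroid. spacing is the voxel size in mm."""
    if not mask.any():
        return "no area of interest found"
    pts = np.argwhere(mask) * np.asarray(spacing)
    volume = int(mask.sum()) * float(np.prod(spacing))
    radii = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    return f"volume={volume:.1f} mm^3, largest radius={radii.max():.1f} mm"
```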
  • FIGS. 9A and 9B teach various embodiments of a graphical user interface of the present disclosure. From the description of FIGS. 9A and 9B, those having ordinary skill in the art of the present disclosure will appreciate how to apply the present disclosure for making and using numerous and various additional embodiments of graphical user interface of the present disclosure.
  • a graphical user interface 70 employs a GUI home tab 71, an image masking tab 72, a data revising tab 73, a relevancy threshold tab 74 and an alternative assessment tab 75 for executing a flowchart 170 as an implementation of salient manipulation stage S26 as previously described in connection with FIG. 1.
  • a stage S172 of flowchart 170 encompasses an activation of GUI 70 to thereby receive user input data via a user input device (e.g., a mouse, a keyboard, a touchscreen, augmented reality glasses, etc.).
  • GUI 70 processes through stages S174-S188 of flowchart 170 to thereby interact with AI engine 40 as user specified to manipulate a salient visualization of the feature assessment 31 of medical imaging 30.
  • GUI 70 ascertains if the input data is data masking data 34a received via image masking tab 72, whereby a clinician interacts with a planar medical image or a volumetric medical image directly and sees how artificial intelligence engine 40 responds.
  • the clinician may analyze salient image(s) 33 (FIG. 1) to ascertain if one or more particular area(s) of a planar medical image or a volumetric medical image is(are) irrelevant to a current feature assessment 31 (FIG. 1).
  • the clinician may mask the irrelevant area(s) and view the salient image(s) to see the impact such masking has on the feature assessment 31 of medical imaging 30.
  • FIG. 10A shows an embodiment 72a of an image masking GUI 72 providing for a clinician specification of image masking by data pre-processor 41 of AI engine 40 (FIG. 2A) to thereby see how AI engine 40 responds to a masking of one or more regions of a 2D medical planar image or a 3D medical volume image. If the clinician suspects the highlighted region of the 2D medical planar image or the 3D medical volume image is irrelevant, then the clinician may mask the irrelevant region and see the impact such masking has on prediction x or on a hypothesized prediction y. As shown in FIG. 10A, image masking GUI 72a of the present disclosure may concurrently display medical images 46a and 46b, whereby a clinician may utilize tool icons 273a, …
  • If GUI 70 ascertains during stage S174 that the input data is for data masking, then GUI 70 proceeds to a stage S176 of flowchart 170 to communicate data masking data 34a to AI engine 40, whereby AI engine 40 executes stages S142-S146 of flowchart 140 (FIG. 2B) as previously described herein and medical imaging display engine 50 executes stage S154 (FIG. 2C) as previously described herein.
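A minimal sketch of the masking round trip, where assess_fn stands in for AI engine 40 and is assumed to return a (feature assessment, salient image data) pair; the region selection via index slices is likewise an assumption:

```python
import numpy as np

def mask_and_reassess(image, region_slices, assess_fn, fill=0.0):
    """Mask a clinician-selected region of a 2D/3D image and re-run the
    assessment to observe the impact on the prediction and salient image."""
    masked = image.copy()
    masked[region_slices] = fill
    return assess_fn(masked)

# e.g., masking a cubic region of a volume:
# assessment, salient = mask_and_reassess(vol, np.s_[40:60, 40:60, 10:20], engine)
```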
  • If GUI 70 ascertains during stage S174 that the input data is not for data masking, then GUI 70 proceeds to a stage S178 of flowchart 170 to ascertain if the input data is data revising data 34b received via data revising tab 73, whereby a clinician may test the influence of each input into artificial intelligence engine 40 on feature assessment 31 of medical imaging 30 as visualized by salient image(s) 33.
  • Examples of a data revision include, but are not limited to, (1) enabling/disabling a medical imaging input or a combination of medical imaging inputs of the artificial intelligence engine, (2) increasing/decreasing the pixel/voxel intensity values of a medical imaging input or a combination of medical imaging inputs of the artificial intelligence engine, (3) enabling/disabling an auxiliary information input or a combination of auxiliary information inputs of the artificial intelligence engine, or (4) altering an auxiliary information input or a combination of auxiliary information inputs of the artificial intelligence engine.
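A minimal sketch of such revisions, assuming the engine inputs are held in a name-to-value mapping (the dict layout is an assumption for illustration):

```python
def revise_inputs(inputs, revisions):
    """Enable/disable or alter engine inputs before re-assessment.
    revisions maps an input name to None (disable) or a new value (alter)."""
    revised = dict(inputs)
    for name, value in revisions.items():
        if value is None:
            revised.pop(name, None)   # disable this input
        else:
            revised[name] = value     # alter this input
    return revised

# e.g., disable the sphericity input and raise image intensities by 10%:
# revised = revise_inputs(inputs, {"sphericity": None,
#                                  "image": inputs["image"] * 1.1})
```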
  • FIG. 10B shows an embodiment 73a of data revising GUI 73 providing for a clinician specification of the data input configuration of an AI engine 40 to test the influence of each input on the resulting feature assessment 31 of the 2D/3D medical imaging 30.
  • a data manipulation GUI of the present disclosure enables a clinician to switch on and off one or a combination of image elements (e.g., pixels or voxels) as well as vary the input values via sliders, toggle buttons, etc.
  • the clinician may visually assess the influence of each input on the prediction accuracy and therefore either eliminate in the future all irrelevant information or understand the importance of each input. More particularly, as shown in FIG. 10B, AI engine 40 may contain both main input image elements 370 (e.g., pixels or voxels) and several auxiliary information inputs (e.g., a nodule radius 371, a nodule sphericity 372 and an output from another machine learning model 373).
  • the main input may be either the 2D or 3D image.
  • auxiliary inputs may also include a set of words present in the image annotation, patient age or medical history, an output or combination of outputs from image processing or machine learning algorithms, as well as images from other modalities that are registered to the main input image.
  • a data revising GUI 73a of the present disclosure provides for the revision of inputs 370-373, such as, for example, via revised inputs 370'-373' having revisions symbolized by blacked or whitened inputs.
  • If GUI 70 ascertains during stage S178 that the input data is for data revising, then GUI 70 proceeds to a stage S180 of flowchart 170 to communicate data revising data 34b to AI engine 40, whereby AI engine 40 executes stages S142-S146 of flowchart 140 (FIG. 2B) as previously described herein and medical imaging display engine 50 executes stage S154 (FIG. 2C) as previously described herein.
  • If GUI 70 determines during stage S178 that the input data is not for data revising, then GUI 70 proceeds to a stage S182 of flowchart 170 to ascertain if the input data is feature relevancy threshold data 35a received via relevancy threshold tab 74, whereby a clinician sets a relevance level of the pixels of a planar medical image to the feature assessment 31 of the planar medical image or a relevance level of the voxels of a volumetric medical image to the feature assessment 31 of the volumetric medical image.
  • the insight is that not all pixels of a planar medical image or all voxels of a volumetric medical image are equally relevant to the feature assessment 31, and clinicians typically focus on feature(s) of the image they think are most suspicious.
  • the relevancy level of pixels of a planar medical image or voxels of a volumetric medical image may be set by a clinician to a value so that a feature of the image will be more distinctively highlighted within the salient image(s) 33 if this feature reaches that level of relevance.
  • FIG. 10C shows an embodiment 74a of relevancy threshold GUI 74 providing for a clinician specification of a configuration of an AI engine 40 (FIG. 1) as to the relevancy levels of pixels of a 2D medical imaging of an anatomical region/organ to a particular risk prediction/status classification or the voxels of a 3D medical imaging to a particular risk prediction/status classification.
  • the insight is that not all areas of a 2D medical planar image or of a 3D medical volume image are equally relevant to a specific diagnosis decision and clinicians typically focus on areas they think are most suspicious.
  • an image relevancy GUI of the present disclosure controls a display of a 2D reconstructed medical image or 3D medical volume image whereby the clinician may set the level of relevance of one or more regions to a value so that region(s) in the 2D medical planar image or in the 3D medical volume image will be highlighted if these region(s) reach that level of relevance. For example, as shown in FIG. 10C, image relevancy GUI 74a of the present disclosure may display a 2D heat map 44a (or alternatively a 3D heat map) from the feature assessment whereby a clinician may use an interface 475 to revise relevancy levels (r) of feature elements of AI engine 40 to control a display of a revised 2D heat map 44b (or alternatively a revised 3D heat map) focusing on areas deemed most relevant to the feature assessment. More particularly, the clinician may exemplarily specify that any intensity below 135.2 be the same color (e.g., blue) and any intensity above 135.2 be the same color (e.g., red), or the clinician may exemplarily specify that any intensity below 67.6 be the same color (e.g., blue) and all intensities above 135.2 follow a linear change in color as shown. A minimal sketch of such a piecewise color transfer function follows.
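The sketch below assumes blue below the lower cut, red above the upper cut, and a linear ramp in between; the RGB encoding and cut values are taken from the example above but the function itself is an illustrative assumption:

```python
import numpy as np

def relevancy_colormap(heatmap, low_cut=67.6, high_cut=135.2):
    """Map heatmap intensities to RGB: below low_cut -> blue, above
    high_cut -> red, linear blend in between."""
    t = np.clip((heatmap - low_cut) / (high_cut - low_cut), 0.0, 1.0)
    blue = np.array([0.0, 0.0, 255.0])
    red = np.array([255.0, 0.0, 0.0])
    return (1.0 - t)[..., None] * blue + t[..., None] * red
```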
  • If GUI 70 ascertains during stage S182 that the input data is for a relevancy threshold specification, then GUI 70 proceeds to a stage S184 of flowchart 170 to communicate relevancy threshold data 35a to AI engine 40, whereby AI engine 40 executes stages S144 and S146 of flowchart 140 (FIG. 2B) as previously described herein and medical imaging display engine 50 executes stage S154 (FIG. 2C) as previously described herein.
  • If GUI 70 determines during stage S182 that the input data is not for a relevancy threshold specification, then GUI 70 proceeds to a stage S186 of flowchart 170 to ascertain if the input data is alternative feature assessment data 35b received via alternative assessment tab 75, whereby a clinician may hypothesize between different predictions or different classifications of the feature assessment 31 of medical imaging 30 (e.g., prediction/classification x and prediction/classification y). If the artificial intelligence engine renders prediction/classification x of medical imaging 30 as visualized by salient image(s) 33 illustrative of prediction/classification x, then the clinician may select prediction/classification y via the GUI to see the features that are most relevant to prediction/classification y of medical imaging 30 as visualized by salient image(s) 33 illustrative of prediction/classification y.
  • FIG. 10D shows an embodiment 75a of alternative assessment GUI 75 providing for user selection of an alternative feature assessment, whereby if AI engine 40 renders an initial feature assessment, then the clinician may see the areas that are most relevant to an alternative feature assessment.
  • alternative assessment GUI 75a displays a planar heat map 575a (or alternatively a 3D heat map) of the initial feature assessment 131a.
  • alternative assessment GUI 75a enables a clinician to select one of alternative feature assessments 131b-131d to display a revised 2D heat map 575b illustrative of the image features relevant to the selected alternative feature assessment.
  • the initial feature assessment 131a may be sarcoma, and 2D heat map 575a illustrates the sarcoma assessment, whereby the clinician may select normal (131b), glioblastoma (131c) or brain metastatic tumor (131d) to thereby see the image features relevant to the selected alternative feature assessment.
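A minimal sketch of producing a salient map for a selected alternative class, in the spirit of class activation mapping (the CAM-style formulation is an assumed stand-in for the engine's internal method, and all names are illustrative):

```python
import numpy as np

def saliency_for_class(last_conv, class_weights, class_index):
    """Weight the last-convolutional-layer filter outputs (..., n_filters)
    by the selected class's weights and sum over filters, yielding a
    spatial salient map for that class."""
    w = np.asarray(class_weights)[class_index]  # shape: (n_filters,)
    return np.dot(last_conv, w)

# e.g., selecting glioblastoma (hypothetical index 2) instead of sarcoma:
# alt_map = saliency_for_class(last_conv, fc_weights, class_index=2)
```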
  • If GUI 70 ascertains during stage S186 that the input data is for an alternative assessment specification, then GUI 70 proceeds to a stage S188 of flowchart 170 to communicate alternative feature assessment data 35b to AI engine 40, whereby AI engine 40 executes stages S144 and S146 of flowchart 140 (FIG. 2B) as previously described herein and medical imaging display engine 50 executes stage S154 (FIG. 2C) as previously described herein.
  • If GUI 70 ascertains during stage S186 that the input data is not for an alternative assessment specification, then GUI 70 returns to stage S172.
  • FIG. 11 teaches various embodiments of a salient medical imaging controller of the present disclosure, and FIG. 12 teaches various embodiments of a salient visualization system of the present disclosure. From the description of FIGS. 11 and 12, those having ordinary skill in the art of the present disclosure will appreciate how to apply the present disclosure for making and using numerous and various additional embodiments of a salient medical imaging controller of the present disclosure.
  • a salient medical imaging controller of the present disclosure may be embodied as hardware/circuitry/software/firmware for implementation of a salient medical imaging method of the present disclosure as previously described herein. Further in practice, a salient medical imaging controller may be customized and installed in a server, workstation, etc. or programmed on a general purpose computer.
  • a salient medical imaging controller 80 includes a processor 81, a memory 82, a user interface 83, a network interface 84, and a storage 85 interconnected via one or more system bus(es) 86.
  • in practice, the actual organization of the components 81-85 of controller 80 may be more complex than illustrated.
  • the processor 81 may be any hardware device capable of executing instructions stored in memory or storage or otherwise processing data.
  • the processor 81 may include a microprocessor, field programmable gate array (FPGA), application-specific integrated circuit (ASIC), or other similar devices.
  • FPGA field programmable gate array
  • ASIC application-specific integrated circuit
  • the memory 82 may include various memories such as, for example, L1, L2, or L3 cache or system memory.
  • the memory 82 may include static random access memory (SRAM), dynamic RAM (DRAM), flash memory, read only memory (ROM), or other similar memory devices.
  • the user interface 83 may include one or more devices for enabling communication with a user such as an administrator.
  • the user interface 83 may include a display, a mouse, and a keyboard for receiving user commands.
  • the user interface 83 may include a command line interface or graphical user interface that may be presented to a remote terminal via the network interface 84.
  • the network interface 84 may include one or more devices for enabling communication with other hardware devices.
  • the network interface 84 may include a network interface card (NIC) configured to communicate according to the Ethernet protocol.
  • the network interface 84 may implement a TCP/IP stack for communication according to the TCP/IP protocols.
  • Various alternative or additional hardware or configurations for the network interface will be apparent.
  • the storage 85 may include one or more machine -readable storage media such as read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash-memory devices, or similar storage media.
  • the storage 85 may store instructions for execution by the processor 81 or data upon which the processor 81 may operate.
  • the storage 85 may store a base operating system (not shown) for controlling various basic operations of the hardware.
  • storage 85 may store control modules 87 in the form of an AI engine 40a as an embodiment of AI engine 40 (FIG. 2A), a medical image display engine 50a as an embodiment of medical image display engine 50 (FIG. 2A) and graphical user interface(s) 70a as an embodiment of GUI 70 (FIG. 8A).
  • salient medical imaging controller 80 may be installed/programmed within an application server 100 accessible by a plurality of clients (e.g., a client 101 and a client 102 as shown) and/or installed/programmed within a workstation 103 employing a monitor 104, a keyboard 105 and a computer 106.
  • salient medical imaging controller 80 inputs medical imaging data 30, planar or volumetric, from medical imaging data sources 90 during a training phase and an assessment phase.
  • medical imaging data sources 90 may include any number and types of medical imaging machines (e.g., an MRI machine 91, a CT machine 93, an X-ray machine 95 and an ultrasound machine 97 as shown) and may further include database management/file servers (e.g., an MRI database management server 92, a CT server 94, an X-ray database management server 96 and an ultrasound database management server 98 as shown).
  • application server 100 or workstation 103 may be directly connected or network-connected to a medical imaging data source 90 to thereby input medical imaging data 30 for salient medical imaging controller 80.
  • a medical imaging data source 90 and application server 100 or workstation 103 may be directly integrated whereby the salient medical imaging controller 80 has direct access to medical imaging data 30.
  • FIG. 12B illustrates an alternative embodiment of salient medical imaging controller 80 segregated into an assessment controller 80a for implementing AI engine 40a and GUIs 70a, and a display controller 80b for medical imaging display engine 50a.
  • assessment controller 80a and display controller 80b may be installed/programmed within application server 100 or workstation 103, or alternatively may be distributively installed/programmed between application server 100 and workstation 103.
  • From FIGS. 1-12, those having ordinary skill in the art will appreciate the many benefits of the present disclosure including, but not limited to, a visualization by a clinician of a feature assessment by machine learning model(s) of a medical imaging of a body, human or animal, that facilitates a clear and concise understanding by the clinician of the feature assessment.
  • the memory may also be considered to constitute a "storage device" and the storage may be considered a "memory."
  • the memory and storage may both be considered to be "non-transitory machine-readable media."
  • the term "non-transitory" will be understood to exclude transitory signals but to include all forms of storage, including both volatile and non-volatile memories.
  • the various components may be duplicated in various embodiments.
  • the processor may include multiple microprocessors that are configured to independently execute the methods described in the present disclosure or are configured to perform steps or subroutines of the methods described in the present disclosure such that the multiple processors cooperate to achieve the functionality described in the present disclosure.
  • the various hardware components may belong to separate physical systems.
  • the processor may include a first processor in a first server and a second processor in a second server.
  • various exemplary embodiments may be implemented as instructions stored on a machine-readable storage medium, which may be read and executed by at least one processor to perform the operations described in detail herein.
  • a machine-readable storage medium may include any mechanism for storing information in a form readable by a machine, such as a personal or laptop computer, a server, or other computing device.
  • a machine-readable storage medium may include read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash-memory devices, and similar storage media.

Abstract

According to various embodiments, the present disclosure provides a salient medical imaging controller (80) employing an artificial intelligence engine (40) and a medical image display engine (50). In operation, the artificial intelligence engine (40) includes one or more machine learning models (42) trained to render a feature assessment of a volumetric medical image. The medical image display engine (50) generates a volumetric salient image representative of the feature assessment of the volumetric medical image and reslices a salient image from the volumetric salient image based on a relevance of each feature to the feature assessment of the volumetric medical image by the machine learning model(s) (42).
PCT/EP2019/071700 2018-08-21 2019-08-13 Salient visual relevance of feature assessments by machine learning models WO2020038771A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862720266P 2018-08-21 2018-08-21
US62/720,266 2018-08-21

Publications (1)

Publication Number Publication Date
WO2020038771A1 true WO2020038771A1 (fr) 2020-02-27

Family

ID=67620490

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2019/071700 WO2020038771A1 (fr) 2018-08-21 2019-08-13 Salient visual relevance of feature assessments by machine learning models

Country Status (1)

Country Link
WO (1) WO2020038771A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023158834A1 (fr) * 2022-02-18 2023-08-24 The Johns Hopkins University Systems and methods for detection and localization of foreign body objects

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140254922A1 (en) * 2013-03-11 2014-09-11 Microsoft Corporation Salient Object Detection in Images via Saliency

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140254922A1 (en) * 2013-03-11 2014-09-11 Microsoft Corporation Salient Object Detection in Images via Saliency

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WU GUORONG ET AL: "Scalable High-Performance Image Registration Framework by Unsupervised Deep Feature Representations Learning", IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, IEEE SERVICE CENTER, PISCATAWAY, NJ, USA, vol. 63, no. 7, 1 July 2016 (2016-07-01), pages 1505 - 1516, XP011614639, ISSN: 0018-9294, [retrieved on 20160621], DOI: 10.1109/TBME.2015.2496253 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023158834A1 (fr) * 2022-02-18 2023-08-24 The Johns Hopkins University Systems and methods for detection and localization of foreign body objects

Similar Documents

Publication Publication Date Title
US20210327563A1 (en) Salient visual explanations of feature assessments by machine learning models
Han et al. Synthesizing diverse lung nodules wherever massively: 3D multi-conditional GAN-based CT image augmentation for object detection
EP3399501B1 (fr) Apprentissage machine de renforcement profond multi-échelles pour segmentation n-dimensionnel en imagerie médicale
CN108701370B (zh) 基于机器学习的基于内容的医学成像渲染
CN110807755B (zh) 使用定位器图像进行平面选择
EP3791316A1 (fr) Localisation et classification d'anomalies dans des images médicales
CN110249367B (zh) 用于实时渲染复杂数据的系统和方法
US20210319879A1 (en) Method and system for computer aided detection of abnormalities in image data
KR20190105460A (ko) 의료 진단 리포트 생성 장치 및 방법
CN112529834A (zh) 病理图像模式在3d图像数据中的空间分布
CN114282588A (zh) 提供分类解释和生成函数
WO2020038771A1 (fr) Salient visual relevance of feature assessments by machine learning models
EP3989172A1 (fr) Procédé à utiliser pour générer une visualisation informatique de données d'images médicales 3d
CN112541882A (zh) 医学体积渲染中的隐式表面着色
Bornik et al. Interactive editing of segmented volumetric datasets in a hybrid 2D/3D virtual environment
Ma et al. Visualization of medical volume data based on improved k-means clustering and segmentation rules
Kumar A method of segmentation in 3d medical image for selection of region of interest (ROI)
US20230419602A1 (en) Rendering and displaying a 3d representation of an anatomical structure
EP4266251A1 (fr) Apprentissage de représentation d'organes à risque et volumes de tumeur globale pour la prédiction de réponse à un traitement
Paulo et al. 3D Reconstruction from CT Images Using Free Software Tools
van der Heijden et al. GENERATION OF LUNG CT IMAGES USING SEMANTIC LAYOUTS
Al-Rei Automated 3D Visualization of Brain Cancer
Müller et al. nnOOD: A Framework for Benchmarking Self-supervised Anomaly Localisation Methods
Hombecka et al. Enhancing Vascular Analysis with Distance Visualizations: An Overview and Implementation
Li Visual Analytics and Interactive Machine Learning for Human Brain Data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19753080

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19753080

Country of ref document: EP

Kind code of ref document: A1