WO2019210353A1 - Medical virtual reality and mixed reality collaboration platform - Google Patents

Medical virtual reality and mixed reality collaboration platform

Info

Publication number
WO2019210353A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
data
dimensional
image data
representation
Prior art date
Application number
PCT/AU2019/050380
Other languages
French (fr)
Inventor
Arthur Chi Teong ONG
Jason Jit Sun TAN
Thomas William MORRELL
Original Assignee
MedVR Pty Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2018901434A external-priority patent/AU2018901434A0/en
Application filed by MedVR Pty Ltd filed Critical MedVR Pty Ltd
Priority to AU2019262082A priority Critical patent/AU2019262082A1/en
Priority to CN201980028755.3A priority patent/CN112424870A/en
Publication of WO2019210353A1 publication Critical patent/WO2019210353A1/en

Classifications

    • A – HUMAN NECESSITIES
        • A61 – MEDICAL OR VETERINARY SCIENCE; HYGIENE
            • A61B – DIAGNOSIS; SURGERY; IDENTIFICATION
                • A61B 5/00 – Measuring for diagnostic purposes; Identification of persons
                    • A61B 5/05 – Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
                • A61B 6/00 – Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
                    • A61B 6/46 – with special arrangements for interfacing with the operator or the patient
                        • A61B 6/461 – Displaying means of special interest
                            • A61B 6/466 – Displaying means adapted to display 3D data
    • G – PHYSICS
        • G06 – COMPUTING; CALCULATING OR COUNTING
            • G06F – ELECTRIC DIGITAL DATA PROCESSING
                • G06F 21/00 – Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
                    • G06F 21/30 – Authentication, i.e. establishing the identity or authorisation of security principals
            • G06T – IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 15/00 – 3D [Three Dimensional] image rendering
                • G06T 19/00 – Manipulating 3D models or images for computer graphics
                    • G06T 19/003 – Navigation within 3D models or images
                    • G06T 19/006 – Mixed reality
                    • G06T 19/20 – Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
                • G06T 2210/00 – Indexing scheme for image generation or computer graphics
                    • G06T 2210/41 – Medical
                • G06T 2219/00 – Indexing scheme for manipulating 3D models or images for computer graphics
                    • G06T 2219/004 – Annotating, labelling
                    • G06T 2219/024 – Multi-user, collaborative environment
        • G16 – INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
            • G16H – HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
                • G16H 30/00 – ICT specially adapted for the handling or processing of medical images
                    • G16H 30/20 – for handling medical images, e.g. DICOM, HL7 or PACS
                    • G16H 30/40 – for processing medical images, e.g. editing
                • G16H 50/00 – ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
                    • G16H 50/20 – for computer-aided diagnosis, e.g. based on medical expert systems
                    • G16H 50/50 – for simulation or modelling of medical disorders

Definitions

  • This invention relates to a medical virtual reality and mixed reality collaboration platform. More specifically, this invention relates to a method of collaborating on medical imaging data and to an associated collaboration system.
  • a method of collaborating on medical image data including the steps of:
  • DICOM Digital Imaging and Communications in Medicine
  • the method may include permitting a user to annotate the data.
  • Annotation allows users to selectively isolate areas of interest in the scan data and mark them for future reference using toolsets.
  • the annotated areas of interest can be sent to the collaboration platform and shared with users either as images or as a video recording of the screen, in which the user's voice is recorded.
  • the annotated areas of interest may be sent to the collaboration platform automatically.
  • the data may be passed through an analysis system which segments the image data and begins to separate the image data into its respective parts, combining with the existing toolsets to allow further refinement of the segmentation.
  • Permitting multiple users securely to access the 3D representation of the data may include permitting users to view the 3D data by means of Virtual Reality (VR) and Mixed Reality (MR) goggles.
  • Permitting multiple users securely to access the 3D representation of the data may include receiving authentication information from a user, such as a predefined username and password, which are uniquely associated with the user, before allowing a user to access the 3D representation of the data.
  • the step of receiving authentication information protects the privacy rights of the patients to whom the two-dimensional image data files relate.
  • Permitting multiple users securely to access the 3D representation of the data may include permitting a user to view the data on a two dimensional display screen, such as a computer display, a handheld tablet or a mobile telephone.
  • the method may include hosting a conference call between users. Permitting multiple users securely to access the 3D representation of the data may then include permitting multiple users to access the data in real time, thereby permitting the users to collaborate on the data via a virtual conference call.
  • a medical practitioner may then discuss the data with a patient or with other colleagues.
  • the method may further include the step of analysing the 3D representation with graphical tools.
  • Analysing the 3D representation with graphical tools may include options to:
  • measure a portion of the image, in which a measurement line or lines are drawn as a ruler, displaying to a user the real-life dimensions of the line or lines, with the ruler shown alongside the rest of the image on a 2D screen or in a 3D VR environment;
  • Analysing the 3D representation with graphical tools may include adjusting the brightness of the image via a viewing window level.
  • Analysing the 3D representation with graphical tools may include adjusting the contrast via a viewing window width.
  • the method may include implementing a single-step conversion of the data, which refers to a single-click interaction from the user to bring DICOM data into a virtual reality environment ready to be viewed and annotated, and to enable use of the collaboration platform. It separates the data into its individual series to be presented inside the VR space.
  • the method of collaborating on medical image data may include implementing Machine Learning from medical imaging.
  • Machine Learning from medical imaging may include displaying hotspots for the user to investigate from certain preset values.
  • the Machine Learning may then include identifying more hotspots as more medical specialists use the medical imaging data.
  • Machine Learning may include presenting medical imaging with presets and settings aligned to the medical speciality; for example, for a liver surgeon the image will load with the correct windowing values, contrast and brightness.
  • a collaboration system which includes:
  • a data modelling processor operable to receive the two-dimensional image data files from the input interface and to compile the series of two-dimensional image data files to a three dimensional (3D) representation of the data;
  • a database connected to the data modelling processor, on which any one, or both of the series of two-dimensional (2D) image data files and three dimensional (3D) representations of the data is stored;
  • an output interface for presenting any one, or both of the series of two-dimensional (2D) image data files and three dimensional (3D) representations of the data securely to a remote user.
  • the collaboration system may include a display device in the form of any one of a mobile telephone, a tablet, a laptop computer, a desktop computer, a pair of virtual reality goggles, or the like, operable to display any one, or both of the series of two-dimensional (2D) image data files and three dimensional (3D) representations of the data.
  • the display device may be connectable to the output interface via a private network, or a public network, such as the Internet.
  • the database may be connected to the data modelling processor via a private network or a public network, such as the Internet.
  • FIG. 1 shows a functional block diagram of a collaboration system in accordance with one aspect of the invention
  • Figure 2 shows a method of collaborating on medical image data
  • Figure 3 shows certain aspects of the method of Figure 2 in more detail
  • Figures 4 and 5 show three dimensional (3D) representations of the data taken from an MRI scanner forming part of the collaboration system of Figure 1.
  • reference numeral 10 is used throughout this specification to indicate, generally, a collaboration system.
  • the collaboration system includes an input interface 12 into which a series of two-dimensional image data files in Digital Imaging and Communications in Medicine (DICOM) format are receivable, the series of image data files representing successive two dimensional scans of a body area.
  • a data modelling processor 14 is connected to the input interface 12 and is operable to compile the series of two-dimensional image data files to a three dimensional (3D) representation of the data.
  • a database 16 is connected to the data modelling processor 14 and on the database 16 any one, or both of the series of two-dimensional (2D) image data files and three dimensional (3D) representations of the data is stored.
  • An output interface 18 is connected to the data modelling processor 14 for presenting any one, or both of the series of two-dimensional (2D) image data files and three dimensional (3D) representations of the data securely to a remote user (not shown) using display devices 20.
  • the collaboration system includes display devices in the form of a tablet 20.1, a desktop computer 20.2, a laptop computer 20.3, a mobile telephone 20.4 and two pairs of Virtual Reality (VR) goggles 20.5, 20.6.
  • the tablet 20.1, the desktop computer 20.2, the laptop computer 20.3 and the mobile telephone 20.4 are operable to display two-dimensional (2D) image data files.
  • the two Virtual Reality (VR) goggles 20.5, 20.6 are operable to display three dimensional (3D) representations of the data.
  • the display devices 20 are connectable to the output interface via the Internet 24.
  • the collaboration system 10 includes a second database 22, connected to the data modelling processor 14 via the Internet 24.
  • the collaboration system 10 provides a method of collaborating on medical image data.
  • the method includes receiving a series of two-dimensional image data files in Digital Imaging and Communications in Medicine (DICOM) format onto the data modelling processor 14.
  • the series of data files are received from the data sources 26, 28 via the Internet 24 and the input interface 12.
  • the method then entails on the data modelling processor 14 converting the series of two-dimensional image data files to a three dimensional (3D) representation of the data, such as the images shown in Figures 3 and 4.
  • the method may include implementing a single-step conversion of the data, which refers to a single-click interaction from the user to bring DICOM data into a virtual reality environment ready to be viewed and annotated, and to enable use of the collaboration platform. It separates the data into its individual series to be presented inside the VR space.
  • the two-dimensional (2D) image data files and the three dimensional (3D) representation of the data are stored on the database 16.
  • the data is also stored on the remote database 22.
  • the method further includes permitting multiple users of the display devices 20 securely to access the 2D and 3D representations of the data.
  • one user can be a medical practitioner who was responsible for taking the images on the data sources 26, 28 and another user can be a patient viewing the 3D images on a set of VR goggles 20.5.
  • a further user can be a second specialist medical practitioner, who views the 3D images on another display device, such as the laptop 20.3.
  • the display devices 20 may include a voice interface by means of which the various users can communicate with each other.
  • the method of collaboration then entails that the users of the various display devices can view and consult on the 2D or 3D images in real time.
  • the method of collaboration may include the step of analysing the 3D representation by means of graphical tools.
  • Analysing the 3D representation may include options to translate the image; to rotate the image, in which the 3D image is rotated on a 2D screen or in a 3D VR environment; to intersect the image, in which the 3D image is intersected to show certain portions of the 3D image more clearly on a 2D screen or in a 3D VR environment; to measure a portion of the image, in which a measurement line or lines are drawn as a ruler, displaying to a user the real-life dimensions of the line or lines, with the ruler shown alongside the rest of the image on a 2D screen or in a 3D VR environment; to draw an overlaid image on a portion of the image, displaying the image and overlaid image on a 2D screen or in a 3D VR environment; to produce an overlaid mark on a portion of the image, displaying the mark on a 2D screen or in a 3D VR environment; and to record a video of any of the above annotations, together with manipulation of the image in 2D or 3D, on the server.
  • FIG. 2 shows a flow diagram of the method of collaboration in accordance with one aspect of the invention.
  • the flow diagram initiates at 52, where the 2D files originating from the data sources 26, 28 are scanned. If a valid directory containing the relevant folders does not exist, as tested at 54, execution is directed back to 52; alternatively execution proceeds to 56.
  • a check is performed to determine whether the series of two-dimensional (2D) image data files (files) has previously been handled by the collaboration system; if it has, execution is directed to 58, and if not, to 60.
  • the files are arranged for the conversion of the series of two-dimensional image data files to a three dimensional (3D) representation of the data to begin.
  • the conversion of the files takes place. Execution then proceeds to 64.
  • if execution was directed to 58, a test is performed to determine whether patient information is attached in accordance with the Digital Imaging and Communications in Medicine (DICOM) format. If the patient data has been attached, execution is directed to 66; alternatively execution is directed to 64. At 64, header files are read and the information is extracted for use in the conversion process; execution then proceeds to 68. At 68 the information is fed into machine learning algorithms to assist in data placement and in diagnosis. Execution is then directed back to 54.
  • a test is performed to confirm that the data has been anonymized; if it has, execution is directed to 70, and if it has not, to 72.
  • a check is performed to see if the data has previously been opened. If it has, the previously opened data settings are used at 74 to place data and to establish the evolving user interface (UI) system; if it has not, the normal user setting is used at 76, with the machine learning from other data groups with similar speciality characteristics.
  • UI evolving user interface
  • the data is loaded into the data modelling processor 14. Execution is directed from 74 and 76 to 78, where a viewing platform is initialized to permit multiple users securely to access the 3D representation of the data.
  • a user is presented with various operating functions of the data modelling processor 14.
  • the user interface presents a user with options to set up the user interface at 84, to use data manipulation tools at 86 and subsequently the tools are enabled based on the user's medical speciality at 88.
  • the data isolation tools are presented and subsequently the windowing values are set up at 92, based on data used by other users of similar data groups.
  • Setting up the windowing values at 92 may include analysing the 3D representation with graphical tools which further includes adjusting the brightness of the image via the window level. Analysing the 3D representation with graphical tools may then further include adjusting the contrast via the window width.
  • a user can select the option to record a sequence, to capture a selection or to record a dictation at 94. Further from the operating functions presented at 80, a user can select the option to enable a virtual reality platform at 96 that provides the options for users to collaborate at 98 and/or to present a VR multi-user experience at 100.
  • Figure 3 illustrates certain aspects of the process in more detail and illustrates certain additional steps beyond the method of collaboration shown in Figure 2. Instances where the steps correspond with the steps in Figure 2 are not described again; the additional steps are described below.
  • Figures 4 and 5 show three dimensional (3D) representations of the data taken from an MRI scanner forming part of the collaboration system of Figure 1. From the images it is clear how the two dimensional data is represented in three dimensions. Certain features of the scan can then be better illustrated, highlighted and annotated.
  • the Applicant is of the opinion that the invention provides a useful method of collaborating on medical data and a useful collaboration system.
  • Optional embodiments of the present invention may also be said to broadly consist in the parts, elements and features referred to or indicated herein, individually or collectively, in any or all combinations of two or more of the parts, elements or features, and wherein specific integers are mentioned herein which have known equivalents in the art to which the invention relates, such known equivalents are deemed to be incorporated herein as if individually set forth.
  • well-known processes, well-known device structures, and well-known technologies are not described in detail, as such will be readily understood by the skilled addressee.

Abstract

Provided is a collaboration system (10) for collaborating on medical image data. The system (10) includes an input interface (12) into which a series of two-dimensional image data files in Digital Imaging and Communications in Medicine (DICOM) format are receivable, the series of image data files representing successive two dimensional scans of a body area. Also included is a data modelling processor (14) operable to receive the two-dimensional image data files from the input interface (12) and to compile the series of two-dimensional image data files to a three dimensional (3D) representation of the data. Further included is a database (16) connected to the data modelling processor (14), on which any one, or both of the series of two-dimensional (2D) image data files and three dimensional (3D) representations of the data is stored, and an output interface (18) for presenting any one, or both of the series of two-dimensional (2D) image data files and three dimensional (3D) representations of the data securely to a remote user using display devices (20).

Description

MEDICAL VIRTUAL REALITY AND MIXED REALITY
COLLABORATION PLATFORM
TECHNICAL FIELD
[0001] This invention relates to a medical virtual reality and mixed reality collaboration platform. More specifically, this invention relates to a method of collaborating on medical imaging data and to an associated collaboration system.
BACKGROUND
[0002] The following discussion of the background art is intended to facilitate an understanding of the present invention only. The discussion is not an acknowledgement or admission that any of the material referred to is or was part of the common general knowledge as at the priority date of the application.
[0003] The inventors are aware of Virtual Reality (VR) viewers used in the medical field. However, such viewers are often restricted for viewing by a single user and such viewers often require specialized knowledge to use and to interpret images created thereby.
[0004] It is an object of the present invention to address some of the shortcomings of existing three dimensional (3D) Virtual Reality (VR) viewers of which the Applicant is aware.
SUMMARY OF THE INVENTION
[0005] According to one aspect of the invention, there is provided a method of collaborating on medical image data, the method including the steps of:
receiving a series of two-dimensional image data files in Digital Imaging and Communications in Medicine (DICOM) format;
converting the series of two-dimensional image data files to a three dimensional (3D) representation of the medical image data;
storing the 3D representation on a database; and
permitting multiple users securely to access the 3D representation of the medical image data.
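The conversion step above, compiling a series of 2D slices into a 3D representation, can be sketched as follows. This is a minimal illustration rather than the patented implementation: it assumes the slice pixel arrays and their positions have already been read from the DICOM files (for example with a library such as pydicom).

```python
import numpy as np


def slices_to_volume(slices):
    """Compile an ordered series of 2D slices into a 3D volume.

    `slices` is a list of (z_position_mm, 2D pixel array) pairs, as
    read from a DICOM series. Slices are sorted by position along the
    scan axis before stacking, so the first volume axis follows the
    direction of the successive scans of the body area.
    """
    ordered = sorted(slices, key=lambda pair: pair[0])
    return np.stack([pixels for _, pixels in ordered])
```

The resulting array can then be handed to a volume or surface renderer for display in the VR environment.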
[0006] The method may include permitting a user to annotate the data. Annotation allows users to selectively isolate areas of interest in the scan data and mark them for future reference using toolsets. The annotated areas of interest can be sent to the collaboration platform and shared with users either as images or as a video recording of the screen, in which the user's voice is recorded. The annotated areas of interest may be sent to the collaboration platform automatically.
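A minimal sketch of such an annotation record follows; the dict-based store and its field names are illustrative assumptions, not the platform's actual toolset:

```python
import numpy as np


def annotate_region(annotations, name, mask, note):
    """Mark an area of interest in the scan data for future reference.

    `mask` is a boolean array, the same shape as the scan volume, that
    selectively isolates the region; `annotations` is a plain dict used
    here as a stand-in for the platform's annotation store. The entry
    could later be shared with other users on the collaboration platform.
    """
    annotations[name] = {
        "mask": mask,
        "note": note,
        "voxel_count": int(mask.sum()),  # size of the isolated region
    }
    return annotations
```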
[0007] The data may be passed through an analysis system which segments the image data and begins to separate the image data into its respective parts, combining with the existing toolsets to allow further refinement of the segmentation.
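The patent does not specify the analysis system's algorithm; as an illustrative assumption, a first-pass segmentation could split the volume by intensity range, with the parts then refined by the existing toolsets:

```python
import numpy as np


def segment_volume(volume, ranges):
    """Begin separating the image data into its respective parts.

    `ranges` maps a part name to an (inclusive low, exclusive high)
    intensity range; for CT data these could be approximate Hounsfield
    unit ranges. Returns one boolean mask per part as a first-pass
    segmentation that a user can then refine.
    """
    return {
        name: (volume >= lo) & (volume < hi)
        for name, (lo, hi) in ranges.items()
    }
```

For example, `segment_volume(vol, {"soft_tissue": (-100, 300), "bone": (300, 3000)})` would yield one mask per tissue class.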
[0008] Permitting multiple users securely to access the 3D representation of the data may include permitting users to view the 3D data by means of Virtual Reality (VR) and Mixed Reality (MR) goggles.
[0009] Permitting multiple users securely to access the 3D representation of the data may include receiving authentication information from a user, such as a predefined username and password, which are uniquely associated with the user, before allowing a user to access the 3D representation of the data. Advantageously, the step of receiving authentication information protects the privacy rights of the patients to whom the two-dimensional image data files relate.
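The authentication step can be sketched with standard salted-hash credential checking; this is a generic illustration using Python's standard library, not the platform's actual security arrangement:

```python
import hashlib
import hmac
import os


def hash_password(password, salt=None):
    """Derive a salted hash for storage; the raw password is never kept."""
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest


def may_access_3d(stored, username, password):
    """Grant access to the 3D representation only on valid credentials.

    `stored` maps each username to its (salt, digest) pair, standing in
    for the platform's store of credentials uniquely associated with
    each user.
    """
    if username not in stored:
        return False
    salt, digest = stored[username]
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(digest, candidate)
```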
[0010] Permitting multiple users securely to access the 3D representation of the data may include permitting a user to view the data on a two dimensional display screen, such as a computer display, a handheld tablet or a mobile telephone.
[0011] The method may include hosting a conference call between users. Permitting multiple users securely to access the 3D representation of the data may then include permitting multiple users to access the data in real time, thereby permitting the users to collaborate on the data via a virtual conference call. Advantageously, a medical practitioner may then discuss the data with a patient or with other colleagues.
[0012] The method may further include the step of analysing the 3D representation with graphical tools.
[0013] Analysing the 3D representation with graphical tools may include options to:
translate the image, in which the 3D image is moved into position in a 3D VR environment;
rotate the image, in which the 3D image is rotated on a 2D screen or in a 3D VR environment;
intersect the image, in which the 3D image is intersected to show certain portions of the 3D image more clearly on a 2D screen or in a 3D VR environment;
measure a portion of the image, in which a measurement line or lines are drawn as a ruler, displaying to a user the real-life dimensions of the line or lines, with the ruler shown alongside the rest of the image on a 2D screen or in a 3D VR environment;
draw an overlaid image on a portion of the image, displaying the image and overlaid image on a 2D screen or in a 3D VR environment;
produce an overlaid mark on a portion of the image, displaying the mark on a 2D screen or in a 3D VR environment;
record a video of any of the above annotations, together with manipulation of the image in 2D or 3D, on the server;
take an image snapshot of the 2D representation of the 3D image and store the snapshot on the server;
adjust the contrast of the 2D or 3D image;
adjust the brightness of the 2D or 3D image;
adjust the opacity of certain individual elements in the 2D or 3D image; and/or
adjust the way a user sees particular structures of the 3D image by manipulating the CT numbers, to change the appearance of the picture to highlight particular structures.
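The measurement option above, displaying the real-life dimensions of a drawn line, reduces to a physical-distance calculation. This sketch assumes the voxel spacing (slice thickness and pixel spacing in millimetres) has been taken from the DICOM headers:

```python
import math


def measure_mm(p1, p2, spacing):
    """Real-life length of a measurement line between two voxels.

    `p1` and `p2` are (slice, row, column) voxel indices and `spacing`
    gives the corresponding physical step sizes in millimetres. The
    returned Euclidean distance is what the on-screen ruler displays.
    """
    return math.sqrt(
        sum(((a - b) * s) ** 2 for a, b, s in zip(p1, p2, spacing))
    )
```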
[0014] Analysing the 3D representation with graphical tools may include adjusting the brightness of the image via a viewing window level.
[0015] Analysing the 3D representation with graphical tools may include adjusting the contrast via a viewing window width.
[0016] The method may include implementing a single-step conversion of the data, which refers to a single-click interaction from the user to bring DICOM data into a virtual reality environment ready to be viewed and annotated, and to enable use of the collaboration platform. It separates the data into its individual series to be presented inside the VR space.
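The window level and window width adjustments described above amount to a linear remapping of raw intensities for display; a minimal sketch (the 0..255 output range is an assumption for an 8-bit display):

```python
import numpy as np


def apply_window(image, level, width):
    """Map raw intensities to display values via window level and width.

    The level sets the brightness (the centre of the visible intensity
    range) and the width sets the contrast (the extent of that range).
    Intensities outside the window clip to black or white; the result
    is scaled to 0..255 for an 8-bit display.
    """
    lo = level - width / 2.0
    hi = level + width / 2.0
    clipped = np.clip(image.astype(float), lo, hi)
    return ((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)
```

Narrowing the width exaggerates contrast within the window; raising the level darkens the displayed image.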
[0017] The method of collaborating on medical image data may include implementing Machine Learning from medical imaging. Machine Learning from medical imaging may include displaying hotspots for the user to investigate from certain preset values. The Machine Learning may then include identifying more hotspots as more medical specialists use the medical imaging data.
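The patent does not detail the hotspot model; as a stand-in for illustration, a first approximation could simply flag voxels whose values fall in a preset range of interest:

```python
import numpy as np


def find_hotspots(volume, preset_lo, preset_hi):
    """Flag voxel coordinates whose values fall in a preset range.

    Returns the (slice, row, column) indices to display as hotspots
    for the user to investigate; the platform's machine learning would
    refine such candidates as more medical specialists use the data.
    """
    mask = (volume >= preset_lo) & (volume <= preset_hi)
    return np.argwhere(mask)
```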
[0018] Machine Learning may include presenting medical imaging with presets and settings aligned to the medical speciality; for example, for a liver surgeon the image will load with the correct windowing values, contrast and brightness.
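The speciality-aligned presets might take the shape of the following lookup. The numeric values are illustrative examples only, not clinical recommendations, and the learned per-speciality values described above would replace them:

```python
# Hypothetical speciality presets: window level/width pairs so that,
# e.g., a liver surgeon's data loads with liver-appropriate windowing.
SPECIALITY_PRESETS = {
    "liver": {"window_level": 60, "window_width": 160},
    "lung": {"window_level": -600, "window_width": 1500},
    "bone": {"window_level": 400, "window_width": 1800},
}


def presets_for(speciality):
    """Return windowing values aligned to the user's medical speciality.

    Unknown specialities fall back to a neutral default setting.
    """
    default = {"window_level": 40, "window_width": 400}
    return SPECIALITY_PRESETS.get(speciality, default)
```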
[0019] According to another aspect of the invention there is provided a collaboration system, which includes:
an input interface into which a series of two-dimensional image data files in Digital Imaging and Communications in Medicine (DICOM) format are receivable, the series of image data files representing successive two dimensional scans of a body area;
a data modelling processor operable to receive the two-dimensional image data files from the input interface and to compile the series of two-dimensional image data files to a three dimensional (3D) representation of the data;
a database, connected to the data modelling processor, on which any one, or both of the series of two-dimensional (2D) image data files and three dimensional (3D) representations of the data is stored; and
an output interface for presenting any one, or both of the series of two-dimensional (2D) image data files and three dimensional (3D) representations of the data securely to a remote user.
[0020] The collaboration system may include a display device in the form of any one of a mobile telephone, a tablet, a laptop computer, a desktop computer, a pair of virtual reality goggles, or the like, operable to display any one, or both of the series of two-dimensional (2D) image data files and three dimensional (3D) representations of the data. The display device may be connectable to the output interface via a private network, or a public network, such as the Internet.
[0021] The database may be connected to the data modelling processor via a private network or a public network, such as the Internet.
BRIEF DESCRIPTION OF THE DRAWINGS
The description will be made with reference to the accompanying drawings in which:
Figure 1 shows a functional block diagram of a collaboration system in accordance with one aspect of the invention;
Figure 2 shows a method of collaborating on medical image data;
Figure 3 shows certain aspects of the method of Figure 2 in more detail; and
Figures 4 and 5 show three dimensional (3D) representations of the data taken from an MRI scanner forming part of the collaboration system of Figure 1.
DETAILED DESCRIPTION OF EMBODIMENTS
[0022] Further features of the present invention are more fully described in the following description of several non-limiting embodiments thereof. This description is included solely for the purposes of exemplifying the present invention to the skilled addressee. It should not be understood as a restriction on the broad summary, disclosure or description of the invention as set out above. In the figures, incorporated to illustrate features of the example embodiment or embodiments, like reference numerals are used to identify like parts throughout.
[0023] With reference to the Figures, reference numeral 10 is used throughout this specification to indicate, generally, a collaboration system. The collaboration system includes an input interface 12 into which a series of two-dimensional image data files in Digital Imaging and Communications in Medicine (DICOM) format are receivable, the series of image data files representing successive two dimensional scans of a body area.
[0024] A data modelling processor 14 is connected to the input interface 12 and is operable to compile the series of two-dimensional image data files to a three dimensional (3D) representation of the data.
[0025] A database 16 is connected to the data modelling processor 14, and on the database 16 any one, or both of the series of two-dimensional (2D) image data files and three dimensional (3D) representations of the data is stored.
[0026] An output interface 18 is connected to the data modelling processor 14 for presenting any one, or both of the series of two-dimensional (2D) image data files and three dimensional (3D) representations of the data securely to a remote user (not shown) using display devices 20.
[0027] In this instance the collaboration system includes display devices in the form of a tablet 20.1, a desktop computer 20.2, a laptop computer 20.3, a mobile telephone 20.4 and two pairs of Virtual Reality (VR) goggles 20.5, 20.6. The tablet 20.1, the desktop computer 20.2, the laptop computer 20.3 and the mobile telephone 20.4 are operable to display two-dimensional (2D) image data files. The two pairs of Virtual Reality (VR) goggles 20.5, 20.6 are operable to display three dimensional (3D) representations of the data.
[0028] The display devices 20 are connectable to the output interface via the Internet 24. In this example, the collaboration system 10 includes a second database 22, connected to the data modelling processor 14 via the Internet 24.
[0029] Two dimensional data sources in the form of a Computed Tomography (CT) scanner 26 and a Magnetic Resonance Imaging (MRI) scanner 28 are connected to the input interface 12 via the Internet 24.
[0030] In use, the collaboration system 10 provides a method of collaborating on medical image data. The method includes receiving a series of two-dimensional image data files in Digital Imaging and Communications in Medicine (DICOM) format onto the data modelling processor 14. The series of data files is received from the data sources 26, 28 via the Internet 24 and the input interface 12.
[0031] The method then entails, on the data modelling processor 14, converting the series of two-dimensional image data files to a three dimensional (3D) representation of the data, such as the images shown in Figures 4 and 5.
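As an illustration of this conversion step, the compilation of a 2D series into a volume can be sketched as below. This is a minimal sketch assuming the pixel arrays and slice positions have already been extracted from the DICOM headers (for example with a library such as pydicom); the function and variable names are illustrative only and do not form part of the platform:

```python
import numpy as np

def compile_volume(slices, positions):
    """Compile a series of 2D slice arrays into a 3D volume,
    ordered by their scan position along the patient axis."""
    if len(slices) != len(positions):
        raise ValueError("one position is required per slice")
    order = np.argsort(positions)  # sort slices by position
    return np.stack([slices[i] for i in order], axis=0)

# Hypothetical 3-slice series, received out of order
slices = [np.full((4, 4), v, dtype=np.int16) for v in (20, 0, 10)]
volume = compile_volume(slices, positions=[2.0, 0.0, 1.0])
print(volume.shape)  # (3, 4, 4)
```

The resulting 3D array is what a renderer would then present as the three dimensional (3D) representation of the data.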
[0032] The method may include implementing a single step conversion of the data, which refers to a single click interaction from the user in order to bring DICOM data into a virtual reality environment ready to be viewed and annotated, and to enable use of the collaboration platform. The conversion sorts the data into its individual series to be presented inside the VR space.
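The sorting of files into their individual series can be sketched as follows, assuming the relevant header fields (SeriesInstanceUID and InstanceNumber) have already been read from each file; the record format and names are illustrative:

```python
from collections import defaultdict

def group_by_series(file_records):
    """Group DICOM file records into their individual series,
    each ordered by instance number, ready for presentation."""
    series = defaultdict(list)
    for filename, series_uid, instance_number in file_records:
        series[series_uid].append((instance_number, filename))
    # strip the sort keys, keeping filenames in slice order
    return {uid: [f for _, f in sorted(files)]
            for uid, files in series.items()}

# Hypothetical records: (filename, SeriesInstanceUID, InstanceNumber)
records = [
    ("im3.dcm", "1.2.840.1", 3),
    ("im1.dcm", "1.2.840.1", 1),
    ("im2.dcm", "1.2.840.1", 2),
    ("loc1.dcm", "1.2.840.2", 1),
]
grouped = group_by_series(records)
print(grouped["1.2.840.1"])  # ['im1.dcm', 'im2.dcm', 'im3.dcm']
```

Each resulting group corresponds to one series as it would be presented inside the VR space.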
[0033] The two-dimensional (2D) image data files and the three dimensional (3D) representation of the data are stored on the database 16. As a backup for the data, or as an alternative to the database 16, the data is also stored on the remote database 22.
[0034] The method further includes permitting multiple users of the display devices 20 securely to access the 2D and 3D representations of the data. Typically, one user can be a medical practitioner who was responsible for taking the images on the data sources 26, 28 and another user can be a patient viewing the 3D images on a set of VR goggles 20.5. A further user can be a second specialist medical practitioner, who views the 3D images on another display device, such as the laptop 20.3. The display devices 20 may include a voice interface by means of which the various users can communicate with each other.
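One possible shape for this secure-access step is a keyed-hash token uniquely associated with each user, sketched below with the Python standard library. This is a generic illustration, not the platform's actual authentication scheme, and all names are hypothetical:

```python
import hashlib
import hmac

def access_token(user_id: str, secret: bytes) -> str:
    """Derive an access token uniquely associated with a user,
    so only authenticated users can open a shared representation."""
    return hmac.new(secret, user_id.encode(), hashlib.sha256).hexdigest()

def verify_token(user_id: str, token: str, secret: bytes) -> bool:
    # constant-time comparison avoids leaking token contents
    return hmac.compare_digest(token, access_token(user_id, secret))

secret = b"hypothetical-server-secret"
token = access_token("dr.smith", secret)
print(verify_token("dr.smith", token, secret))  # True
print(verify_token("dr.jones", token, secret))  # False
```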
[0035] The method of collaboration then entails that the users of the various display devices can view and consult on the 2D or 3D images in real time.
[0036] The method of collaboration may include the step of analysing the 3D representation by means of graphical tools. Analysing the 3D representation may include options to:
translate the image, in which the 3D image is moved into position in a 3D VR environment;
rotate the image, in which the 3D image is rotated on a 2D screen or in a 3D VR environment;
intersect the image, in which the 3D image is intersected to show certain portions of the 3D image more clearly on a 2D screen or in a 3D VR environment;
measure a portion of the image, in which a measurement line or lines are drawn as a ruler, to display to a user the real-life dimensions of the line or lines, and to display the ruler with the rest of the image on a 2D screen or in a 3D VR environment;
draw an overlaid image on a portion of the image, and to display the image and overlaid image on a 2D screen or in a 3D VR environment;
produce an overlaid mark on a portion of the image, and to display the mark on a 2D screen or in a 3D VR environment;
record a video of any of the above annotations together with manipulation of the image in 2D or 3D on the server;
take an image snapshot of the 2D representation of the image and store the snapshot on the server;
adjust the contrast of the 2D or 3D image;
adjust the brightness of the 2D or 3D image; and
adjust the opacity of certain individual elements in the 2D or 3D image.
[0037] Figure 2 shows a flow diagram of the method of collaboration in accordance with one aspect of the invention. The flow diagram initiates at 52, where the 2D files originating from the data sources 26, 28 are scanned. If a valid directory which contains the relevant folders does not exist, as tested at 54, execution is directed back to 52; otherwise execution is directed to 56. At 56 a check is performed to determine whether the series of two-dimensional (2D) image data files have previously been handled by the collaboration system; if they have, execution is directed to 58, and if not, execution is directed to 60.
At 60 the files are arranged so that the conversion of the series of two-dimensional image data files to a three dimensional (3D) representation of the data can begin. At 62 the conversion of the files takes place. Execution then proceeds to 64.
[0038] If execution was directed to 58, a test is performed to determine whether patient information is attached in accordance with the Digital Imaging and Communications in Medicine (DICOM) format. If the patient data has been attached, execution is directed to 66; otherwise execution is directed to 64. At 64, header files are read and the information is extracted for use in the conversion process; execution then proceeds to 68. At 68 the information is passed to machine learning algorithms to assist in data placement and in diagnosis. Execution is then directed back to 54.
[0039] At 66 a test is performed to confirm that the data has been anonymized; if it has, execution is directed to 70, and if it has not, execution is directed to 72. At 72 a check is performed to determine whether the data has previously been opened. If it has, the previously opened data settings are used at 74 to place the data and to establish the evolving user interface (UI) system; if it has not, the normal user setting is used at 76, with machine learning from other data groups with similar speciality characteristics.
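The anonymization check at 66 might be sketched as below, over a small, illustrative subset of patient-identifying DICOM fields; a production check would cover many more tags, and the header is assumed to have been read already (e.g. with pydicom):

```python
# Patient-identifying DICOM tags (illustrative subset only)
PATIENT_TAGS = ("PatientName", "PatientID", "PatientBirthDate")

def is_anonymized(header: dict) -> bool:
    """Confirm patient-identifying header fields are absent or
    blank before the data is released to the collaboration platform."""
    return all(not header.get(tag) for tag in PATIENT_TAGS)

anonymous_scan = {"Modality": "MR", "SeriesInstanceUID": "1.2.3",
                  "PatientName": ""}
identified_scan = {"Modality": "CT", "PatientName": "DOE^JOHN"}
print(is_anonymized(anonymous_scan))   # True
print(is_anonymized(identified_scan))  # False
```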
[0040] At 70 the data is loaded into the data modelling processor 14. Execution is directed from 74 and 76 to 78, where a viewing platform is initialized to permit multiple users securely to access the 3D representation of the data. At 80 a user is presented with various operating functions of the data modelling processor 14.
[0041] At 82 the user interface presents a user with options to set up the user interface at 84 and to use data manipulation tools at 86, and subsequently the tools are enabled based on the user's medical speciality at 88. At 90 the data isolation tools are presented, and subsequently the windowing values are set up at 92 based on data used by other users of similar data groups.
[0042] Setting up the windowing values at 92 may include analysing the 3D representation with graphical tools which further includes adjusting the brightness of the image via the window level. Analysing the 3D representation with graphical tools may then further include adjusting the contrast via the window width.
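The windowing adjustment can be sketched as a simple level/width mapping: the window level shifts brightness and the window width sets contrast. The function below is an illustrative implementation of the standard windowing formula, not the platform's code:

```python
import numpy as np

def apply_window(image, level, width):
    """Map raw scan intensities to display grey levels through a
    viewing window: values outside the window clip to black/white,
    values inside it are spread over the full 0..1 display range."""
    lo = level - width / 2.0
    hi = level + width / 2.0
    return (np.clip(image, lo, hi) - lo) / (hi - lo)

# Hypothetical intensity values, windowed with level 50 and width 100,
# so the range 0..100 is spread over the display range 0..1
values = np.array([-200.0, 0.0, 50.0, 100.0, 300.0])
print(apply_window(values, level=50, width=100).tolist())
# [0.0, 0.0, 0.5, 1.0, 1.0]
```

Narrowing the width increases contrast within the window; raising the level darkens the displayed image.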
[0043] From the operating functions presented at 80, a user can select the option to record a sequence, to capture a selection or to record a dictation at 94. Further from the operating functions presented at 80, a user can select the option to enable a virtual reality platform at 96 that provides the options for users to collaborate at 98 and/or to present a VR multi-user experience at 100.
[0044] Following the option to collaborate at 98, a global specialist platform is enabled at 102, individual patient sessions are enabled at 104, and the collaboration is presented visually in 2D and 3D at 106.
[0045] Following the selection of a VR multi-user experience at 100, the interactions between medical practitioners are grouped into Doctor to Doctor, Doctor to patient and Doctor to team at 108, and at 110 the multi-user experience is extended to Educator to students, student to student and individual study by students.
[0046] Figure 3 illustrates certain aspects of the process in more detail, together with certain additional aspects beyond the method of collaboration shown in Figure 2. Instances where the steps correspond with the steps in Figure 2 are not described again; the additional steps are described below.
[0047] Referring to Figure 3, as shown in broken line, once the files have been arranged at 60 for the conversion of the series of two-dimensional image data files to a three dimensional (3D) representation of the data, and once the conversion has taken place at 62, a test is performed at 61 to determine whether the data conversion was adequate. If the data conversion was not adequate, the system reverts to default settings and converts the data again at 63. If the data conversion was adequate, any conversions that did not meet a certain criterion or criteria are removed at 65, and the required file structure is constructed and the files are validated at 67.
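The adequacy test at 61 and the validation at 67 are not specified in detail; one minimal, hypothetical criterion is dimensional consistency across the converted slices, sketched below. The function name and thresholds are illustrative:

```python
def adequate_conversion(slice_shapes, min_slices=2):
    """Test whether a converted series is adequate: there must be
    enough slices, and every slice must share the same 2D shape.
    Inadequate series would be re-converted with default settings
    or removed from the file structure."""
    if len(slice_shapes) < min_slices:
        return False
    return all(shape == slice_shapes[0] for shape in slice_shapes)

print(adequate_conversion([(512, 512)] * 40))         # True
print(adequate_conversion([(512, 512), (256, 256)]))  # False
```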
[0048] In addition to the method shown in Figure 2, as can be seen in Figure 3, after the check performed at 72 to determine whether the data has previously been opened, and after the previously opened data settings are used at 74 or the normal user setting is used at 76 as described above, a step is taken at 75 in which machine learning is implemented to assist the data setup for display and the patient records are read into the platform. Execution then continues at 78.
[0049] Figures 4 and 5 show three dimensional (3D) representations of the data taken from an MRI scanner forming part of the collaboration system of Figure 1. From the images it is clear how the two dimensional data is represented in three dimensions. Certain features of the scan can then be better illustrated, highlighted and annotated.
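The real-life measurement tool described in paragraph [0036] can be sketched as below, assuming the voxel spacing has been taken from the scan's DICOM fields (PixelSpacing and SliceThickness); the function and values are illustrative:

```python
import numpy as np

def measure_mm(p1, p2, spacing):
    """Real-life length, in millimetres, of a measurement line
    drawn between two voxel coordinates, scaled by the scanner's
    voxel spacing (slice thickness, row spacing, column spacing)."""
    delta = (np.asarray(p2) - np.asarray(p1)) * np.asarray(spacing)
    return float(np.linalg.norm(delta))

# Hypothetical line spanning 30 columns of a scan with 0.5 mm pixels
print(measure_mm((0, 0, 0), (0, 0, 30), spacing=(2.0, 0.5, 0.5)))
# 15.0
```

The returned length is what the ruler annotation would display alongside the drawn line on a 2D screen or in a 3D VR environment.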
[0050] The Applicant is of the opinion that the invention provides a useful method of collaborating on medical data and a useful collaboration system.
[0051] Optional embodiments of the present invention may also be said to broadly consist in the parts, elements and features referred to or indicated herein, individually or collectively, in any or all combinations of two or more of the parts, elements or features, and wherein specific integers are mentioned herein which have known equivalents in the art to which the invention relates, such known equivalents are deemed to be incorporated herein as if individually set forth. In the example embodiments, well-known processes, well-known device structures, and well-known technologies are not described in detail, as such will be readily understood by the skilled addressee.
[0052] The use of the terms "a", "an", "said", "the", and/or similar referents in the context of describing various embodiments (especially in the context of the claimed subject matter) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The terms "comprising", "having", "including", and "containing" are to be construed as open-ended terms (i.e., meaning "including, but not limited to") unless otherwise noted. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. No language in the specification should be construed as indicating any non-claimed subject matter as essential to the practice of the claimed subject matter.
[0053] It is to be appreciated that reference to "one example" or "an example" of the invention, or similar exemplary language (e.g., "such as") herein, is not made in an exclusive sense. Various substantially and specifically practical and useful exemplary embodiments of the claimed subject matter are described herein, textually and/or graphically, for carrying out the claimed subject matter.
[0054] Accordingly, one example may exemplify certain aspects of the invention, whilst other aspects are exemplified in a different example. These examples are intended to assist the skilled person in performing the invention and are not intended to limit the overall scope of the invention in any way unless the context clearly indicates otherwise. Variations (e.g. modifications and/or enhancements) of one or more embodiments described herein might become apparent to those of ordinary skill in the art upon reading this application. The inventor(s) expects skilled artisans to employ such variations as appropriate, and the inventor(s) intends for the claimed subject matter to be practiced other than as specifically described herein.
[0055] Any method steps, processes, and operations described herein are not to be construed as necessarily requiring their performance in the particular order discussed or illustrated, unless specifically identified as an order of performance. It is also to be understood that additional or alternative steps may be employed.

Claims

1. A method of collaborating on medical image data, the method including the steps of:
receiving a series of two-dimensional image data files in Digital Imaging and Communications in Medicine (DICOM) format;
converting the series of two-dimensional image data files to a three dimensional (3D) representation of the medical image data;
storing the 3D representation on a database; and
permitting multiple users securely to access the 3D representation of the medical image data.
2. The method of claim 1, which includes the step of permitting a user to annotate the 3D representation of the medical image data.
3. The method of claim 2, wherein annotation allows users to selectively isolate areas of interest in the image data files and mark them for future reference.
4. The method of claim 3, wherein the annotated isolated areas of interest are sent to a secure online collaboration platform and shared with users either by images or a video recording of a screen.
5. The method of claim 4, wherein the annotated isolated areas of interest are sent to the collaboration platform automatically.
6. The method of any one of claims 2 to 5, wherein the user's voice is recorded during annotation of said data.
7. The method of any one of claims 1 to 5, wherein the image data is passed through an analysis system which segments the image data and begins to separate the image data into its respective parts, combining with existing toolsets to allow further refinement on the segmentation.
8. The method of any one of the preceding claims, wherein the step of permitting multiple users securely to access the 3D representation of the data includes permitting users to view the 3D data by means of Virtual Reality (VR) and Mixed Reality (MR) goggles.
9. The method of claim 8, wherein the step of permitting multiple users securely to access the 3D representation of the data includes receiving authentication information from a user which is uniquely associated with the user, before allowing the user to access the 3D representation of the data.
10. The method of claim 9, wherein the step of receiving authentication information protects the privacy rights of the patients to which the two-dimensional image data files relate.
11. The method of any one of claims 8 to 10, wherein permitting multiple users securely to access the 3D representation of the data includes permitting a user to view the data on a two dimensional display screen, such as a computer display, a handheld tablet, or a mobile telephone.
12. The method of any one of the preceding claims, which includes hosting a conference call between users.
13. The method of claim 12, which includes the step of permitting multiple users securely to access the 3D representation of the data by permitting multiple users to access the data in real time, thereby permitting the users to collaborate on the data via a virtual conference call.
14. The method of claim 13, which includes a step of a medical practitioner discussing the data with a patient or with other colleagues .
15. The method of any one of the preceding claims, wherein the method further includes the step of analysing the 3D representation with graphical tools.
16. The method of claim 15, wherein the step of analysing the 3D representation with graphical tools includes any one or more options to:
translate the image, in which the 3D image is moved into position in a 3D VR environment;
rotate the image, in which the 3D image is rotated on 2D screen or in a 3D VR environment;
intersect the image, in which the 3D image is intersected to show certain portions of the 3D image more clearly on a 2D screen or in a 3D VR environment;
measure a portion of the image, in which a measurement line or lines are drawn as a ruler, to display to a user the real-life dimensions of the line or lines and to display the ruler with the rest of the image on a 2D screen or in a 3D VR environment;
draw an overlaid image on a portion of the image, to display the image and overlaid image on a 2D screen or in a 3D VR environment;
produce an overlaid mark on a portion of the image, to display the mark on a 2D screen or in a 3D VR environment;
record a video of any of the above annotations together with manipulation of the image in 2D or 3D on the server;
take an image snapshot of the 2D representation of the 3D image and store the snapshot on the server;
adjust the contrast of the 2D or 3D image;
adjust the brightness of the 2D or 3D image;
adjust the opacity of certain individual elements in the 2D or 3D image; and/or
adjust the way a user sees particular structures of the 3D image by manipulating the CT numbers, to change the appearance of the picture to highlight particular structures.
17. The method of claim 16, wherein analysing the 3D representation with graphical tools includes adjusting the brightness of the image via a viewing window level.
18. The method of claim 16 or claim 17, wherein analysing the 3D representation with graphical tools may include adjusting the contrast of the image via a viewing window width.
19. The method of any one of the preceding claims, wherein collaborating on medical image data includes implementing Machine Learning from medical imaging.
20. The method of claim 19, wherein Machine Learning from medical imaging includes displaying hotspots for the user to investigate from certain preset values.
21. The method of claim 20, wherein the Machine Learning includes identifying more hotspots as more medical specialists use the medical image data.
22. The method of any one of claims 19 to 21, wherein Machine Learning includes presenting the 3D medical image data with presets and settings aligned to the medical speciality.
23. A collaboration system for collaborating on medical image data, said system including:
an input interface into which a series of two-dimensional image data files in Digital Imaging and Communications in Medicine (DICOM) format are receivable, the series of image data files representing successive two dimensional scans of a body area;
a data modelling processor operable to receive the two- dimensional image data files from the input interface and to compile the series of two-dimensional image data files to a three dimensional (3D) representation of the data;
a database, connected to the data modelling processor, on which any one, or both of the series of two-dimensional (2D) image data files and three dimensional (3D) representations of the data is stored;
an output interface for presenting any one, or both of the series of two-dimensional (2D) image data files and three dimensional (3D) representations of the data securely to a remote user.
24. The system of claim 23, which includes implementing a single step conversion of the data, which refers to a single click interaction from the user in order to bring DICOM data into a virtual reality environment ready to be viewed, annotated and enable use of the collaboration platform.
25. The system of claim 23 or claim 24, wherein the data is sorted into its individual series to be presented inside the VR space.
26. The system of any one of claims 23 to 25, wherein the collaboration system includes a display device in the form of any one of a mobile telephone, a tablet, a laptop computer, a desktop computer, a pair of virtual reality goggles, such display device operable to display any one, or both of the series of two-dimensional (2D) image data files and three dimensional (3D) representations of the data.
27. The system of claim 26, wherein the display device is connectable to the output interface via a private network, or a public network, including the Internet.
28. The system of claim 27, wherein the database is connected to the data modelling processor via a private network or a public network, including the Internet.
PCT/AU2019/050380 2018-04-30 2019-04-29 Medical virtual reality and mixed reality collaboration platform WO2019210353A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU2019262082A AU2019262082A1 (en) 2018-04-30 2019-04-29 Medical virtual reality and mixed reality collaboration platform
CN201980028755.3A CN112424870A (en) 2018-04-30 2019-04-29 Medical virtual reality and mixed reality collaboration platform

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2018901434 2018-04-30
AU2018901434A AU2018901434A0 (en) 2018-04-30 Medical virtual reality and mixed reality collaboration platform

Publications (1)

Publication Number Publication Date
WO2019210353A1 true WO2019210353A1 (en) 2019-11-07

Family

ID=68386911

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2019/050380 WO2019210353A1 (en) 2018-04-30 2019-04-29 Medical virtual reality and mixed reality collaboration platform

Country Status (3)

Country Link
CN (1) CN112424870A (en)
AU (2) AU2019101736A4 (en)
WO (1) WO2019210353A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11750794B2 (en) 2015-03-24 2023-09-05 Augmedics Ltd. Combining video-based and optic-based augmented reality in a near eye display
US11766296B2 (en) 2018-11-26 2023-09-26 Augmedics Ltd. Tracking system for image-guided surgery
US11801115B2 (en) 2019-12-22 2023-10-31 Augmedics Ltd. Mirroring in image guided surgery
US11896445B2 (en) 2021-07-07 2024-02-13 Augmedics Ltd. Iliac pin and adapter

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013188850A1 (en) * 2012-06-14 2013-12-19 Trice Imaging, Inc. Systems and devices for encrypting, converting and interacting with medical images
US20140176661A1 (en) * 2012-12-21 2014-06-26 G. Anthony Reina System and method for surgical telementoring and training with virtualized telestration and haptic holograms, including metadata tagging, encapsulation and saving multi-modal streaming medical imagery together with multi-dimensional [4-d] virtual mesh and multi-sensory annotation in standard file formats used for digital imaging and communications in medicine (dicom)
US20150347682A1 (en) * 2011-10-04 2015-12-03 Quantant Technology Inc. Remote cloud based medical image sharing and rendering semi-automated or fully automated, network and/or web-based, 3d and/or 4d imaging of anatomy for training, rehearsing and/or conducting medical procedures, using multiple standard x-ray and/or other imaging projections, without a need for special hardware and/or systems and/or pre-processing/analysis of a captured image data
US9503681B1 (en) * 2015-05-29 2016-11-22 Purdue Research Foundation Simulated transparent display with augmented reality for remote collaboration
US20170053437A1 (en) * 2016-06-06 2017-02-23 Jian Ye Method and apparatus for positioning navigation in a human body by means of augmented reality based upon a real-time feedback
WO2017165301A1 (en) * 2016-03-21 2017-09-28 Washington University Virtual reality or augmented reality visualization of 3d medical images
WO2018069736A1 (en) * 2016-10-14 2018-04-19 Axial Medical Printing Limited A method for generating a 3d physical model of a patient specific anatomic feature from 2d medical images

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CUKOVIC, S. ET AL.: "Marker Based vs. Natural Feature Tracking Augmented Reality Visualization of the 3D Foot Phantom", January 2015 (2015-01-01), XP055648764, Retrieved from the Internet <URL:https://www.researchgate.net/profile/Sasa_Cukovic2/publication/278668320_Marker_Based_vs_Natural_Feature_Tracking_Augmented_Reality_Visualization_of_the_3D_Foot_Phantom/links/5581de5c08ae6cf036c16fcb/Marker-Based-vs-Natural-Feature-Tracking-Augmented-Reality-Visualization-of-the-3D-Foot-Phantom.pdf> [retrieved on 20190613] *

Also Published As

Publication number Publication date
AU2019262082A1 (en) 2020-06-04
AU2019101736A4 (en) 2020-07-09
CN112424870A (en) 2021-02-26
AU2019262082A2 (en) 2020-06-25

Similar Documents

Publication Publication Date Title
AU2019101736A4 (en) Medical virtual reality and mixed reality collaboration platform
Hanna et al. Augmented reality technology using Microsoft HoloLens in anatomic pathology
US10229497B2 (en) Integration of medical software and advanced image processing
US20090204437A1 (en) System and method for improving diagnoses of medical image reading
US20090182577A1 (en) Automated information management process
JP2007141245A (en) Real-time interactive completely transparent collaboration within pacs for planning and consultation
CN101657819A (en) Hanging protocol display system and method
Huang Pacs-based multimedia imaging informatics: Basic principles and applications
US7756326B2 (en) Method for forming and distributing a composite file including a dental image and associated diagnosis
US11627944B2 (en) Ultrasound case builder system and method
JP6711676B2 (en) Medical report creating apparatus and control method thereof, medical report creating system, and program
JP2015176456A (en) Image processor and program
JP2006314626A (en) Apparatus, method, program and system for transmitting medical image
US20050002483A1 (en) Apparatus and method for radiological image interpretation using different time zones
Lebre et al. Collaborative framework for a whole-slide image viewer
JP2015100424A (en) Information processing device, and information processing method
RU2757711C2 (en) Certificates of medical visual display for mobile devices
US7961935B2 (en) Method for forming and distributing a composite file including a dental image and associated diagnosis
GB2459128A (en) An Apparatus and a Method for Facilitating Patient Referrals
JP6825606B2 (en) Information processing device and information processing method
CN103324823A (en) Radiogram interpretation report creation assistance device
Lebre et al. Pathobox: the collaborative tele-pathology platform with access management
JP6399712B1 (en) Program and browsing system
US11922668B2 (en) Asynchronous region-of-interest adjudication for medical images
JP5760551B2 (en) Medical image display apparatus and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19796276; Country of ref document: EP; Kind code of ref document: A1)
DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
ENP Entry into the national phase (Ref document number: 2019262082; Country of ref document: AU; Date of ref document: 20190429; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19796276; Country of ref document: EP; Kind code of ref document: A1)