MEDICAL VIRTUAL REALITY AND MIXED REALITY
COLLABORATION PLATFORM
TECHNICAL FIELD
[0001] This invention relates to a medical virtual reality and mixed reality collaboration platform. More specifically, this invention relates to a method of collaborating on medical imaging data and to an associated collaboration system.
BACKGROUND
[0002] The following discussion of the background art is intended to facilitate an understanding of the present invention only. The discussion is not an acknowledgement or admission that any of the material referred to is or was part of the common general knowledge as at the priority date of the application.
[0003] The inventors are aware of Virtual Reality (VR) viewers used in the medical field. However, such viewers are often restricted to viewing by a single user, and such viewers often require specialized knowledge to use and to interpret images created thereby.
[0004] It is an object of the present invention to address some of the shortcomings of existing three dimensional (3D) Virtual Reality (VR) viewers of which the Applicant is aware.
SUMMARY OF THE INVENTION
[0005] According to one aspect of the invention, there is provided a method of collaborating on medical image data, the method including the steps of:
receiving a series of two-dimensional image data files in Digital Imaging and Communications in Medicine (DICOM) format;
converting the series of two-dimensional image data files to a three dimensional (3D) representation of the medical image data;
storing the 3D representation on a database; and
permitting multiple users securely to access the 3D representation of the medical image data.
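The four steps above can be sketched as a minimal data path. This is an illustrative sketch only; the class, method and field names (`CollaborationStore`, `ingest`, `slice_location`) are assumptions introduced for illustration and do not form part of the claimed system.

```python
class CollaborationStore:
    """Minimal sketch of the claimed method's data path: receive a DICOM
    slice series, compile it into a 3D stack, store it, and gate access
    by user identity. All names here are illustrative assumptions."""

    def __init__(self):
        self._volumes = {}          # study id -> ordered slice stack
        self._authorised = set()    # usernames permitted to view

    def ingest(self, study_id, slice_files):
        # Compile the 2D series into a 3D representation: order the
        # slices by their position along the scan axis and stack them.
        ordered = sorted(slice_files, key=lambda s: s["slice_location"])
        self._volumes[study_id] = [s["pixels"] for s in ordered]

    def grant(self, username):
        # Record that a user is permitted to access the stored data.
        self._authorised.add(username)

    def fetch(self, study_id, username):
        # Secure access step: only authorised users may view the volume.
        if username not in self._authorised:
            raise PermissionError("user not permitted to access this study")
        return self._volumes[study_id]
```

In this sketch the "database" is an in-memory dictionary; the claimed system would store the volume on a database and present it via the output interface.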
[0006] The method may include permitting a user to annotate the data. Annotation allows users to selectively isolate areas of interest in the scan data and mark them for future reference using toolsets. The annotated areas of interest can be sent to the collaboration platform and shared with users either by images or a video recording of the screen, of which the user's voice is recorded. The annotated areas of interest may be sent to the collaboration platform automatically.
[0007] The data may be passed through an analysis system which segments the image data and begins to separate the image data into its respective parts, combining with the existing toolsets to allow further refinement of the segmentation.
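The segmentation stage can be sketched at its simplest as intensity thresholding. A real analysis system would use far richer models; thresholding is used here only as an illustrative assumption, and the function name is hypothetical.

```python
def segment_by_threshold(slice_pixels, lower, upper):
    """Label each pixel of a 2D slice as belonging to a structure when
    its intensity falls inside [lower, upper]. This stands in for the
    analysis system's segmentation step; the existing toolsets would
    then refine the resulting mask."""
    return [[lower <= px <= upper for px in row] for row in slice_pixels]
```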
[0008] Permitting multiple users securely to access the 3D representation of the data may include permitting users to view the 3D data by means of Virtual Reality (VR) and Mixed Reality (MR) goggles.
[0009] Permitting multiple users securely to access the 3D representation of the data may include receiving authentication information from a user, such as a predefined username and password, which are uniquely associated with the user, before allowing a user to access the 3D representation of the data. Advantageously, the step of receiving authentication information protects the privacy rights of the patients to which the two-dimensional image data files relate.
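The username-and-password authentication step can be sketched as follows, using salted password hashing from the Python standard library. The class name, iteration count and storage layout are assumptions made for illustration, not part of the specification.

```python
import hashlib
import hmac
import os


def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 with SHA-256; the iteration count is illustrative only.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)


class AccessGate:
    """Minimal sketch of the authentication step that guards access to
    the 3D representation of the data."""

    def __init__(self):
        self._users = {}  # username -> (salt, password hash)

    def register(self, username, password):
        salt = os.urandom(16)
        self._users[username] = (salt, hash_password(password, salt))

    def authenticate(self, username, password) -> bool:
        # Constant-time comparison avoids leaking information via timing.
        if username not in self._users:
            return False
        salt, stored = self._users[username]
        return hmac.compare_digest(stored, hash_password(password, salt))
```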
[0010] Permitting multiple users securely to access the 3D representation of the data may include permitting a user to view the data on a two dimensional display screen, such as a computer display, a handheld tablet or a mobile telephone.
[0011] The method may include hosting a conference call between users. Permitting multiple users securely to access the 3D representation of the data may then include permitting multiple users to access the data in real time, thereby permitting the users to collaborate on the data via a virtual conference call. Advantageously, a medical practitioner may then discuss the data with a patient or with other colleagues.
[0012] The method may further include the step of analysing the 3D representation with graphical tools.
[0013] Analysing the 3D representation with graphical tools may include options to:
translate the image, in which the 3D image is moved into position in a 3D VR environment;
rotate the image, in which the 3D image is rotated on a 2D screen or in a 3D VR environment;
intersect the image, in which the 3D image is intersected to show certain portions of the 3D image more clearly on a 2D screen or in a 3D VR environment;
measure a portion of the image, in which a measurement line or lines are drawn as a ruler, to display to a user the real-life dimensions of the line or lines and to display the ruler with the rest of the image on a 2D screen or in a 3D VR environment;
draw an overlaid image on a portion of the image, to display the image and overlaid image on a 2D screen or in a 3D VR environment;
produce an overlaid mark on a portion of the image, to display the mark on a 2D screen or in a 3D VR environment;
record a video of any of the above annotations together with manipulation of the image in 2D or 3D on the server;
take an image snapshot of the 2D representation of the 3D image and store the snapshot on the server;
adjust the contrast of the 2D or 3D image;
adjust the brightness of the 2D or 3D image;
adjust the opacity of certain individual elements in the 2D or 3D image; and/or
adjust the way a user sees particular structures of the 3D image by manipulating the CT numbers, to change the appearance of the picture to highlight particular structures.
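The measurement option above converts a line drawn between voxel positions into real-life dimensions using the scan's voxel spacing. A minimal sketch, assuming the spacing is known in millimetres per axis (the function name and example spacings are illustrative):

```python
import math


def measure_mm(p0, p1, spacing):
    """Real-life length (mm) of a measurement line drawn between two
    voxel indices p0 and p1, given the scan's voxel spacing in mm per
    axis. Scales each axis difference by its spacing, then takes the
    Euclidean norm."""
    return math.sqrt(sum(((a - b) * s) ** 2
                         for a, b, s in zip(p0, p1, spacing)))
```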
[0014] Analysing the 3D representation with graphical tools may include adjusting the brightness of the image via a viewing window level.
[0015] Analysing the 3D representation with graphical tools may include adjusting the contrast via a viewing window width.
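The window level/width adjustment in the two paragraphs above is the standard CT windowing transform: the level sets the centre of the displayed intensity range (brightness) and the width sets its extent (contrast). A minimal sketch mapping a CT number to an 8-bit display value; the function name and the 0-255 output range are assumptions for illustration:

```python
def apply_window(ct_number, level, width):
    """Map a CT number to a 0-255 display value. Values below the
    window are clamped to black, values above it to white, and values
    inside the window are scaled linearly across the display range."""
    lower = level - width / 2.0
    upper = level + width / 2.0
    if ct_number <= lower:
        return 0
    if ct_number >= upper:
        return 255
    return round((ct_number - lower) / width * 255)
```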
[0016] The method may include implementing a single step conversion of the data, which refers to a single click interaction from the user in order to bring DICOM data into a virtual reality environment ready to be viewed and annotated and to enable use of the collaboration platform. It sorts the data into its individual series to be presented inside the VR space.
[0017] The method of collaborating on medical image data may include implementing Machine Learning from medical imaging. Machine Learning from medical imaging may include displaying hotspots, derived from certain preset values, for the user to investigate. The Machine Learning may then include identifying more hotspots as more medical specialists use the medical imaging data.
[0018] Machine Learning may include presenting medical imaging with presets and settings aligned to the medical speciality; for example, for a liver surgeon, the image will load with the correct windowing values, contrast and brightness.
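Loading speciality-aligned presets can be sketched as a simple lookup with a fallback. The table below is an assumption for illustration: the keys and window values are commonly cited CT display presets, not values taken from the specification.

```python
# Illustrative preset table keyed by medical speciality; the specific
# window values below are assumptions, not prescribed by the invention.
SPECIALITY_PRESETS = {
    "liver": {"window_level": 60, "window_width": 160},
    "lung": {"window_level": -600, "window_width": 1500},
    "brain": {"window_level": 40, "window_width": 80},
}

DEFAULT_PRESET = {"window_level": 40, "window_width": 400}


def presets_for(speciality):
    """Return the display presets to load for a user's speciality,
    falling back to a generic soft-tissue preset when no
    speciality-specific entry exists."""
    return SPECIALITY_PRESETS.get(speciality, DEFAULT_PRESET)
```

In the claimed system this table would be learned and refined from how specialists in similar data groups actually configure their views, rather than hard-coded.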
[0019] According to another aspect of the invention there is provided a collaboration system, which includes:
an input interface into which a series of two-dimensional image data files in Digital Imaging and Communications in Medicine (DICOM) format are receivable, the series of image data files representing successive two dimensional scans of a body area;
a data modelling processor operable to receive the two-dimensional image data files from the input interface and to compile the series of two-dimensional image data files to a three dimensional (3D) representation of the data;
a database, connected to the data modelling processor, on which any one, or both of the series of two-dimensional (2D) image data files and three dimensional (3D) representations of the data is stored; and
an output interface for presenting any one, or both of the series of two-dimensional (2D) image data files and three dimensional (3D) representations of the data securely to a remote user.
[0020] The collaboration system may include a display device in the form of any one of a mobile telephone, a tablet, a laptop computer, a desktop computer, a pair of virtual reality goggles, or the like, operable to display any one, or both of the series of two-dimensional (2D) image data files and three dimensional (3D) representations of the data. The display device may be connectable to the output interface via a private network, or a public network, such as the Internet.
[0021] The database may be connected to the data modelling processor via a private network or a public network, such as the Internet.
BRIEF DESCRIPTION OF THE DRAWINGS
The description will be made with reference to the accompanying drawings in which:
Figure 1 shows a functional block diagram of a collaboration system in accordance with one aspect of the invention;
Figure 2 shows a method of collaborating on medical image data;
Figure 3 shows certain aspects of the method of Figure 2 in more detail; and
Figures 4 and 5 show three dimensional (3D) representations of the data taken from an MRI scanner forming part of the collaboration system of Figure 1.
DETAILED DESCRIPTION OF EMBODIMENTS
[0022] Further features of the present invention are more fully described in the following description of several non-limiting embodiments thereof. This description is included solely for the purposes of exemplifying the present invention to the skilled addressee. It should not be understood as a restriction on the broad summary, disclosure or description of the invention as set out above. In the figures, incorporated to illustrate features of the example embodiment or embodiments, like reference numerals are used to identify like parts throughout.
[0023] With reference to the Figures, reference numeral 10 is used throughout this specification to indicate, generally, a collaboration system. The collaboration system includes an input interface 12 into which a series of two-dimensional image data files in Digital Imaging and Communications in Medicine (DICOM) format are receivable, the series of image data files representing successive two dimensional scans of a body area.
[0024] A data modelling processor 14 is connected to the input interface 12 and is operable to compile the series of two-dimensional image data files to a three dimensional (3D) representation of the data.
[0025] A database 16 is connected to the data modelling processor 14 and on the database 16 any one, or both of the series of two-dimensional (2D) image data files and three dimensional (3D) representations of the data is stored.
[0026] An output interface 18 is connected to the data modelling processor 14 for presenting any one, or both of the series of two-dimensional (2D) image data files and three dimensional (3D) representations of the data securely to a remote user (not shown) using display devices 20.
[0027] In this instance the collaboration system includes display devices in the form of a tablet 20.1, a desktop computer 20.2, a laptop computer 20.3, a mobile telephone 20.4 and two pairs of Virtual Reality (VR) goggles 20.5, 20.6. The tablet 20.1, the desktop computer 20.2, the laptop computer 20.3 and the mobile telephone 20.4 are operable to display two-dimensional (2D) image data files. The two Virtual Reality (VR) goggles 20.5, 20.6 are operable to display three dimensional (3D) representations of the data.
[0028] The display devices 20 are connectable to the output interface via the Internet 24. In this example, the collaboration system 10 includes a second database 22, connected to the data modelling processor 14 via the Internet 24.
[0029] Two dimensional data sources in the form of a Computer Tomography (CT) scanner 26 and a Magnetic Resonance Imaging (MRI) scanner 28 are connected to the input interface 12 via the Internet 24.
[0030] In use, the collaboration system 10 provides a method of collaborating on medical image data. The method includes receiving a series of two-dimensional image data files in Digital Imaging and Communications in Medicine (DICOM) format onto the data modelling processor 14. The series of data files are received from the data sources 26, 28 via the Internet 24 and the input interface 12.
[0031] The method then entails on the data modelling processor 14 converting the series of two-dimensional image data files to a three dimensional (3D) representation of the data, such as the images shown in Figures 4 and 5.
[0032] The method may include implementing a single step conversion of the data, which refers to a single click interaction from the user in order to bring DICOM data into a virtual reality environment ready to be viewed and annotated and to enable use of the collaboration platform. It sorts the data into its individual series to be presented inside the VR space.
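The first stage of this single step conversion, sorting a flat set of DICOM files into their individual series, can be sketched as below. Plain dictionaries stand in for parsed DICOM headers, and the key names (`series_uid`, `instance_number`) are assumptions for illustration; in DICOM these correspond to the Series Instance UID and Instance Number attributes.

```python
from collections import defaultdict


def group_into_series(dicom_headers):
    """Sort a flat list of parsed DICOM headers into their individual
    series so each can be presented separately in the VR space, with
    the slices of each series in acquisition order."""
    series = defaultdict(list)
    for header in dicom_headers:
        series[header["series_uid"]].append(header)
    for files in series.values():
        files.sort(key=lambda h: h["instance_number"])
    return dict(series)
```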
[0033] The two-dimensional (2D) image data files and the three dimensional (3D) representation of the data are stored on the database 16. As a backup for the data, or as an alternative to the database 16, the data is also stored on the remote database 22.
[0034] The method further includes permitting multiple users of the display devices 20, securely to access the 2D and 3D representations of the data. Typically one user can be a medical practitioner who was responsible for taking the images on the data sources 26, 28 and another user can be a patient viewing the 3D images on a set of VR goggles 20.5. A further
user can be a second specialist medical practitioner, who views the 3D images on another display device, such as the laptop 20.3. The display devices 20 may include a voice interface by means of which the various users can communicate with each other.
[0035] The method of collaboration then entails that the users of the various display devices can view and consult on the 2D or 3D images in real time.
[0036] The method of collaboration may include the step of analysing the 3D representation by means of graphical tools. Analysing the 3D representation may include options to translate the image; to rotate the image, in which the 3D image is rotated on a 2D screen or in a 3D VR environment; to intersect the image, in which the 3D image is intersected to show certain portions of the 3D image more clearly on a 2D screen or in a 3D VR environment; to measure a portion of the image, in which a measurement line or lines are drawn as a ruler, to display to a user the real-life dimensions of the line or lines and to display the ruler with the rest of the image on a 2D screen or in a 3D VR environment; to draw an overlaid image on a portion of the image, to display the image and overlaid image on a 2D screen or in a 3D VR environment; to produce an overlaid mark on a portion of the image, to display the mark on a 2D screen or in a 3D VR environment; to record a video of any of the above annotations together with manipulation of the image in 2D or 3D on the server; to take an image snapshot of the 2D representation of the image and store the snapshot on the server; to adjust the contrast of the 2D or 3D image; to adjust the brightness of the 2D or 3D image; and to adjust the opacity of certain individual elements in the 2D or 3D image.
[0037] Figure 2 shows a flow diagram of the method of collaboration in accordance with one aspect of the invention. The flow diagram initiates at 52, where the 2D files originating from the data sources 26, 28 are scanned. If a valid directory which contains the relevant folders does not exist, as tested at 54, execution is directed back to 52; alternatively execution is directed to 56. At 56 a check is performed to determine whether the series of two-dimensional (2D) image data files (files) has previously been handled by the collaboration system; if it has, execution is directed to 58, and if not, execution is directed to 60. At 60 the files are arranged for the conversion of the series of two-dimensional image data files to a three dimensional (3D) representation of the data to begin. At 62 the conversion of the files takes place. Execution then proceeds to 64.
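The initial scan-and-validate stage at 52 and 54 can be sketched as a directory walk. The function name and the `.dcm` extension check are assumptions for illustration; in practice DICOM files are often identified by their header rather than their extension.

```python
import os


def scan_for_dicom(root):
    """Sketch of the directory scan at the start of the Figure 2 flow:
    walk the given root and collect candidate DICOM files. An empty
    result corresponds to the 'no valid directory' branch at 54, after
    which the caller would return to the scan step at 52."""
    if not os.path.isdir(root):
        return []  # no valid directory: caller re-enters the scan step
    return [os.path.join(dirpath, name)
            for dirpath, _, names in os.walk(root)
            for name in names if name.lower().endswith(".dcm")]
```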
[0038] If execution was directed to 58, then a test is performed to determine if patient information is attached in accordance with the Digital Imaging and Communications in Medicine (DICOM) format. If the patient data has been attached, execution is directed to 66; alternatively execution is directed to 64. At 64, header files are read and the information is extracted to be used in the conversion process; execution then proceeds to 68. At 68 the information is passed to machine learning algorithms to assist in data placement and in diagnosis. Execution is then directed back to 54.
[0039] At 66 a test is performed to confirm that the data has been anonymized; if it has, execution is directed to 70, and if it has not been anonymized, execution is directed to 72. At 72 a check is performed to see if the data has previously been opened. If it has, the previously opened data settings are used at 74 to place data and to establish the evolving user interface (UI) system; if it has not, the normal user setting is used at 76 with the machine learning from other data groups with similar speciality characteristics.
[0040] At 70 the data is loaded into the data modelling processor 14. Execution is directed from 74 and 76 to 78, where a viewing platform is initialized to permit multiple users securely to access the 3D representation of the data. At 80 a user is presented with various operating functions of the data modelling processor 14.
[0041] At 82 the user interface presents a user with options to set up the user interface at 84, to use data manipulation tools at 86 and subsequently the tools are enabled based on the user's medical speciality at 88. At 90 the data isolation tools are presented and subsequently the windowing values are set up at 92 based on data used by other users of similar data groups.
[0042] Setting up the windowing values at 92 may include analysing the 3D representation with graphical tools which further includes adjusting the brightness of the image via the window level. Analysing the 3D representation with graphical tools may then further include adjusting the contrast via the window width.
[0043] From the operating functions presented at 80, a user can select the option to record a sequence, to capture a selection or to record a dictation at 94. Further from the operating functions presented at 80, a user can select the option to enable a virtual reality platform at 96 that provides
the options for users to collaborate at 98 and/or to present a VR multi-user experience at 100.
[0044] Following the option to collaborate at 98, a global specialist platform is enabled at 102, individual patient sessions are enabled at 104 and the collaboration is presented visually in 2D and 3D at 106.
[0045] Following the selection of a VR multi-user experience at 100, the interactions between medical practitioners are grouped into Doctor to Doctor, Doctor to patient and Doctor to team at 108, and at 110 the multi-user experience is extended to Educator to students, student to student and to individual study by students.
[0046] Figure 3 illustrates certain aspects of the process in more detail and illustrates certain additional aspects beyond the method of collaboration shown in Figure 2. Instances where the steps correspond with the steps in Figure 2 are not described again; the additional steps are described below.
[0047] Referring to Figure 3, as shown in broken line, once the files have been arranged at 60 for the conversion of the series of two-dimensional image data files to a three dimensional (3D) representation of the data and once the conversion has taken place at 62, a test is performed at 61 to determine if the data conversion was adequate. If the data conversion was not adequate, the system reverts to default settings and converts the data again at 63. If the data conversion was adequate, any conversions that did not meet a certain criterion or criteria are removed at 65, and the required file structure is constructed and the files are validated at 67.
[0048] In addition to the method shown in Figure 2, as can be seen in Figure 3, after the check is performed at 72 to determine if the data has previously been opened, the previously opened data settings are used at 74 to place data and to establish the evolving user interface (UI) system if it has; if it has not, the normal user setting is used at 76 with the machine learning from other data groups with similar speciality characteristics. Then at 75 machine learning is implemented to assist the data setup for display and the patient records are read into the platform. Execution then continues at 78.
[0049] Figures 4 and 5 show three dimensional (3D) representations of the data taken from an MRI scanner forming part of the collaboration system of Figure 1. From the images it is clear how the two dimensional data is represented in three dimensions. Certain features of the scan can then be better illustrated, highlighted and annotated.
[0050] The Applicant is of the opinion that the invention provides a useful method of collaborating on medical data and a useful collaboration system.
[0051] Optional embodiments of the present invention may also be said to broadly consist in the parts, elements and features referred to or indicated herein, individually or collectively, in any or all combinations of two or more of the parts, elements or features, and wherein specific integers are mentioned herein which have known equivalents in the art to which the invention relates, such known equivalents are deemed
to be incorporated herein as if individually set forth. In the example embodiments, well-known processes, well-known device structures, and well-known technologies are not described in detail, as such will be readily understood by the skilled addressee .
[0052] The use of the terms "a", "an", "said", "the", and/or similar referents in the context of describing various embodiments (especially in the context of the claimed subject matter) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The terms "comprising," "having," "including," and "containing" are to be construed as open-ended terms (i.e., meaning "including, but not limited to,") unless otherwise noted. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. No language in the specification should be construed as indicating any non-claimed subject matter as essential to the practice of the claimed subject matter.
[0053] It is to be appreciated that reference to "one example" or "an example" of the invention, or similar exemplary language (e.g., "such as") herein, is not made in an exclusive sense. Various substantially and specifically practical and useful exemplary embodiments of the claimed subject matter are described herein, textually and/or graphically, for carrying out the claimed subject matter.
[0054] Accordingly, one example may exemplify certain aspects of the invention, whilst other aspects are exemplified in a different example. These examples are intended to assist the skilled person in performing the invention and are not intended to limit the overall scope of the invention in any way unless the context clearly indicates otherwise. Variations (e.g. modifications and/or enhancements) of one or more embodiments described herein might become apparent to those of ordinary skill in the art upon reading this application. The inventor(s) expects skilled artisans to employ such variations as appropriate, and the inventor(s) intends for the claimed subject matter to be practiced other than as specifically described herein.
[0055] Any method steps, processes, and operations described herein are not to be construed as necessarily requiring their performance in the particular order discussed or illustrated, unless specifically identified as an order of performance. It is also to be understood that additional or alternative steps may be employed.