WO2019173677A1 - Systems and methods for diagnostic applications of augmented reality - Google Patents

Systems and methods for diagnostic applications of augmented reality

Info

Publication number
WO2019173677A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
virtual patient
processor
augmented reality
display
Prior art date
Application number
PCT/US2019/021291
Other languages
French (fr)
Inventor
Tarik Erol James AWAD
Mark Christian
Franco DI MISCIO
Matthew David SCOTT
Daniel TONKS
Joanna WEBB
Original Assignee
Pearson Education, Inc.
Priority date
Filing date
Publication date
Application filed by Pearson Education, Inc.
Publication of WO2019173677A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/74 Details of notification to user or communication with user or patient; user input means
    • A61B 5/742 Details of notification to user or communication with user or patient; user input means using visual displays
    • A61B 5/743 Displaying an image simultaneously with additional graphical information, e.g. symbols, charts, function plots
    • A61B 5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B 5/1118 Determining activity level
    • A61B 5/1126 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique
    • A61B 5/1128 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique using image analysis
    • A61B 90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B 2090/364 Correlation of different images or relation of image positions in respect to the body
    • A61B 2090/365 Correlation of different images or relation of image positions in respect to the body augmented reality, i.e. correlating a live optical image with another image
    • A61B 5/0002 Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network
    • A61B 5/0015 Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network characterised by features of the telemetry system
    • A61B 5/002 Monitoring the patient using a local or closed circuit, e.g. in a room or building
    • A61B 5/01 Measuring temperature of body parts; Diagnostic temperature sensing, e.g. for malignant or inflamed tissue
    • A61B 5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B 5/021 Measuring pressure in heart or blood vessels
    • A61B 5/024 Detecting, measuring or recording pulse rate or heart rate
    • A61B 5/08 Detecting, measuring or recording devices for evaluating the respiratory organs
    • A61B 5/0816 Measuring devices for examining respiratory frequency
    • A61B 5/145 Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue
    • A61B 5/14542 Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue for measuring blood gases
    • A61B 5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B 5/316 Modalities, i.e. specific diagnostic methods
    • A61B 5/318 Heart-related electrical modalities, e.g. electrocardiography [ECG]

Definitions

  • Mixed reality or augmented reality display devices, such as head-mounted display devices, may be used in a variety of real-world environments and contexts. Such devices provide a view of a physical environment that is augmented by a virtual environment including virtual images, such as two-dimensional virtual objects and three-dimensional holographic objects, and/or other virtual reality information. Such devices may also include various sensors for collecting data from the surrounding environment.
  • An augmented reality device may display virtual images that are interspersed with real-world physical objects to create a mixed reality environment.
  • a user of the device may desire to interact with a virtual or physical object using the mixed reality device.
  • conventional augmented reality devices may not provide sufficiently interactive and specialized mixed reality environments.
  • FIG. 1 illustrates a system level block diagram for an augmented reality device, in accordance with an embodiment.
  • FIG. 2 illustrates a system level block diagram for a system that includes networked augmented reality devices, in accordance with an embodiment.
  • FIG. 4 shows an illustrative augmented reality scene in which a menu is presented from which different scenarios may be selected, in accordance with an embodiment.
  • FIG. 5 shows an illustrative augmented reality scene in which a heads up display and a video control interface are presented, in accordance with an embodiment.
  • FIG. 6 shows an illustrative process flow that may be performed by an augmented reality device to start, progress through, and end a scenario, in accordance with an embodiment.
  • FIG. 7 shows an illustrative process flow that may be performed by an augmented reality device to progress through a selected scenario, in accordance with an embodiment.
  • FIG. 8 shows illustrative branched pathways for the automatic selection of virtual patient video sequences based on student interaction with the virtual patient, in accordance with an embodiment.
  • the AR device 100 may include a processor 102, a memory 104 that includes an AR component 110 and an operating system 106, input/output (I/O) devices 108, a network interface 112, cameras 120, a display device 130 and an accelerometer 140.
  • the processor 102 may retrieve and execute programming instructions stored in the memory 104.
  • the processor 102 may be a single CPU, multiple CPUs, a single CPU having multiple processing cores, GPUs having multiple execution paths, or any other applicable processing hardware and arrangement thereof.
  • the memory 104 may represent both random access memory (RAM) and non-volatile memory (e.g., the non-volatile memory of one or more hard disk drives, solid state drives, flash memory, etc.). In some embodiments, the memory 104 may be considered to include memory physically located elsewhere (e.g., a computer in electronic communication with the AR device 100 via network interface 112).
  • the operating system 106 included in memory 104 may control the execution of application programs on the AR device 100 (e.g., executed by processor 102).
  • the I/O devices 108 may include a variety of input and output devices, including displays, keyboards, touchscreens, buttons, switches, microphones, and any other applicable input or output device.
  • the network interface 112 may enable the AR device 100 to connect to and transmit and receive data over an electronic data communications network (e.g., via Ethernet, WiFi, Bluetooth, or any other applicable electronic data communications network).
  • the cameras 120 may include an outward facing camera that may detect movements within its field of view, such as gesture-based inputs or other movements performed by a user or by a person or physical object within the field of view.
  • the outward facing camera may also capture two-dimensional image information and depth information from a physical scene and physical objects within the physical scene.
  • the outward facing camera may include a depth camera, a visible light camera, an infrared light camera, and/or a position tracking camera.
  • the cameras 120 may also include one or more user-facing cameras, which may, among other applications, track eye movement of the user.
  • the AR component 110 could use such a user-facing camera of cameras 120 to determine which portion of the display device 130, and thereby which portion of the displayed AR scene, the user is looking at.
  • the accelerometer 140 is a device capable of measuring the physical (or proper) acceleration of the AR device 100.
  • the AR component 110 may use the accelerometer 140 to determine when the position of the AR device 100 is changing in order to track the location of the AR device 100.
  • the cameras 120 may capture video data of the physical scene, such that activity of the user may be monitored and analyzed to identify actions (e.g., gestures and other movements of the user) indicative of desired interactions between the user and virtual objects within the virtual scene of the AR scene shown on the AR device 100.
  • the AR component 110 is configured to adjust the depiction of the physical scene viewed on or through the display device 130 via the display of virtual objects of a virtual scene, thereby producing an AR scene.
  • These virtual objects have locations and orientations within the physical scene that are defined based on a calibration that may be performed by the user.
  • the collection of virtual components (e.g., the virtual objects, respective positions and orientations of the virtual objects, and optional state information for the virtual objects) in a given AR scene may be referred to herein as a "virtual scene."
  • the AR component 110 may cause one or more virtual objects of a virtual scene to be displayed at one or more defined (e.g., predefined or defined by a user's action) locations within the physical scene to generate an AR scene.
  • the AR component 110 could determine an area of three-dimensional space for a given virtual object to occupy.
  • the virtual object may be, for example, repositioned, rotated, resized, or otherwise manipulated by a user.
  • a database of virtual objects may be stored in the memory 104 (e.g., as 3D models).
  • the AR component 110 may retrieve the virtual object from this database in the memory 104 and may place the virtual object in the virtual scene.
  • the AR component 110 may be configured to initialize a single-user session, to host a network session to which other networked AR devices may connect, or to connect the AR device 100 to a network session hosted by another AR device or host server.
  • a "network session” refers to an instantiation of a virtual scene (e.g., instantiated at a host AR device or a host server), which may be loaded onto (e.g., streamed to) an AR device connected to the session (e.g., communicatively connected to a host of the session) to generate an AR scene that includes the virtual scene displayed over a physical scene.
  • Each of the network sessions 204 and 208 may be managed via one or more CMS databases 212, which may, for example, be implemented via a cloud-based system.
  • CMS databases 212 may, for example, maintain a list of open network sessions which, in the present example, includes the network sessions 204 and 208.
  • the list of open network sessions may further include, for a given network session (e.g., network session 204), identifying information (e.g., IP addresses, MAC addresses, device names, etc.) corresponding to networked AR devices connected in the given network session (e.g., AR devices 202).
  • a new network session may be added to the list of open network sessions maintained by the CMS databases 212 in response to a host AR device initiating the new network session.
  • An open network session may be removed from the list of open network sessions maintained by the CMS databases 212 in response to a host AR device for the open network session closing the open network session.
  • the CMS databases 212 may, for example, further include one or more databases of virtual objects that may be retrieved by an AR device connected to CMS databases 212 (e.g., using AR devices 202 or 206 connected to CMS databases 212 via networks 210) and loaded into the virtual scene running on the AR device or that may be downloaded to the memory of an AR device for future use in generating virtual scenes.
  • Frame 304 shows a virtual patient 302 that has been added to the AR scene, but that has not yet been placed.
  • a "virtual patient” may refer to a pre- recorded volumetric video representations of a standardized patient displaying symptoms of a disease or condition corresponding to a given clinical scenario.
  • the "virtual patient” may be a three-dimensional computer-generated animation representing a standardized patient displaying symptoms of a disease or condition corresponding to a given clinical scenario.
  • frame 306 may be what is shown on the display of the AR device immediately after the user performs a gesture to add the virtual patient 302 to the AR scene.
  • FIG. 4 shows an illustrative AR scene 400, as displayed on an AR device (e.g., AR device 100 of FIG. 1 ) during a session.
  • the AR scene 400 includes a menu 404, which is presented next to a virtual patient 402, shown to be seated in a physical chair of the physical scene.
  • Menu 404 allows a user to select from a variety of clinical scenarios when the user selects menu option 406.
  • the selectable clinical scenarios include an anaphylaxis scenario 408 and a COPD scenario 410.
  • the virtual patient 402 may begin a video sequence corresponding to the selected clinical scenario.
  • the user may also be presented with the opportunity to select different severities for a selected clinical scenario, with each level of severity having a different corresponding video sequence for the virtual patient 402.
  • the virtual patient 402 may be partially transparent (e.g., translucent) such that physical objects behind the virtual patient 402 may be seen through the virtual patient 402. This partial transparency may aid the user in distinguishing between which objects are virtual and which are physical.
  • FIG. 5 shows an illustrative AR scene 500, as displayed on an AR device (e.g., AR device 100 of FIG. 1 ) during a session.
  • the AR scene 500 includes a control interface 504 and a heads up display (HUD) 506, both of which are displayed at different respective locations next to a virtual patient 502.
  • the HUD may sometimes be referred to as a "virtual display."
  • the HUD 506 displays vital signs corresponding to the virtual patient 502. For example, these vital signs may include body temperature, heart rate, electrocardiogram (ECG) signal graphs, respiration rate, oxygen saturation, blood pressure, and other vital sign information.
  • the vital signs displayed on the HUD 506 may be animated and synced with the video of the virtual patient 502 so as to simulate the vital signs of a patient in a given clinical scenario having a given level of severity.
  • the virtual patient 502 may be partially transparent (e.g., translucent) such that physical objects behind the virtual patient 502 may be seen through the virtual patient 502.
  • the HUD 506 may similarly be partially transparent. This partial transparency may aid the user in distinguishing between which objects are virtual and which are physical.
  • the control interface 504 includes buttons that may be selected by a user for use in controlling the video sequence currently being expressed by the virtual patient 502.
  • a "video sequence" refers to a prerecorded, video loop of the virtual patient, with different video sequences for a given clinical scenario corresponding to different levels of severity or reactions to treatment for the disease or condition corresponding to the given clinical scenario.
  • a user may fast-forward, rewind, pause, play, or restart a video sequence of the virtual patient 502.
  • the user may be presented, via the control interface 504, with one or more options for navigating between one or more video sequences.
  • each navigable video sequence may correspond to a different level of severity for the clinical scenario being simulated by the virtual patient 502.
  • virtual patient 502 may progress through various video sequences.
  • navigation through these video sequences may be selected by a teacher based on the teacher's observation of a student's interaction with the virtual patient 502.
  • these video sequences may be selected automatically based on user interactions with the virtual patient 502 that are detected by the AR device, as will be explained in more detail, below.
  • the AR device may generate and display a prompt to the user (e.g., by executing instructions on a processor of the AR device) in the AR scene shown on the display of the AR device.
  • the prompt allows the user to select from a variety of displayed modes of operation for the AR device. These modes may include starting or joining a multi-user network session and starting a single-user session.
  • the prompt may be shown on a display of the AR device, and the mode may be selected by the user via the user's performance of a detectable gesture. For example, the user may perform an air tap gesture in a location of one of the modes displayed in the prompt in the AR scene shown on the display of the AR device.
  • the gesture may be performed within the field of view of one or more cameras of the AR device so that the gesture may be detected and interpreted by the processor of the AR device.
  • the method 600 progresses based on which mode was selected in step 604. If the mode of starting a multi-user network session was selected, the method 600 progresses to step 608. If the mode of beginning a single-user session was selected, the method 600 progresses to step 620.
  • the AR device may generate and display a prompt to the user (e.g., by executing instructions on a processor of the AR device) in the AR scene shown on the display of the AR device.
  • the prompt allows the user to select from the options of joining an existing network session or hosting a network session with the AR device of the user.
  • the prompt may be shown on a display of the AR device, and the option may be selected by the user via the user's performance of a detectable gesture.
  • the user may perform an air tap gesture in a location of one of the "join" and "host” options displayed in the prompt in the AR scene shown on the display of the AR device.
  • the gesture may be performed within the field of view of one or more cameras of the AR device so that the gesture may be detected and interpreted by the processor of the AR device.
  • At step 610, the method 600 progresses based on which option was selected in step 608. If the "host" option was selected, the method 600 progresses to step 614. If the "join" option was selected, the method 600 progresses to step 616.
  • At step 614, in response to the selection of the "host" option, the user's AR device initiates a network session as the host.
  • the AR device may generate and display a list of network sessions that are available to be joined by the AR device of the user. The user may then select a network session from the displayed list.
  • the AR device of the user joins the selected network session.
  • the AR device presents a list of selectable scenarios on the display of the AR device.
  • Each scenario may correspond to a different clinical scenario.
  • these clinical scenarios may include anaphylaxis, COPD, burns, stroke, domestic violence, chest pain, asthma, diabetes, and any other applicable clinical scenarios that may be presented for diagnostic training.
  • different levels of severity may be selected for each available clinical scenario included in the displayed list.
  • the AR device initiates a clinical scenario selected in step 620.
  • a virtual patient is generated and added to an AR scene at a user-selected location in the AR scene depicted on the display of the AR device.
  • the virtual patient may be placed via a networked AR device of another user (student or teacher) in the network session.
  • the virtual patient will be initiated in a video sequence corresponding to the selected clinical scenario and, optionally, the selected level of severity of the clinical scenario.
  • a HUD may also be added to the AR scene at a location adjacent to the virtual patient.
  • the HUD may be animated to show vital signs of the virtual patient corresponding to the selected clinical scenario and, optionally, the selected level of severity of the clinical scenario.
  • the animation of the HUD may be synced with the video sequence of the virtual patient.
  • the AR device progresses through the scenario while collecting analytics data corresponding to the interactions of one or more students with the virtual patient.
  • various video sequences of the virtual patient may be progressed through, each corresponding to different levels of disease, symptoms thereof, reactions to appropriate treatment/mistreatment, etc. and each being initiated in response to detected student interactions or observations.
  • Collected analytics data for a given student may, for example, include information pertaining to how long it takes the student to accurately identify symptoms of a disease and the disease itself, incorrect attempts of the student at identifying symptoms or diseases, how long the student spends looking at various portions of the patient's body (e.g., portions of the patient's body that are relevant vs. not relevant to the clinical scenario), and other applicable information.
  • At step 626, the scenario may be ended once the AR device (or another AR device in the network session) detects that a student has successfully identified and/or treated the disease or condition of the virtual patient. Alternatively, the scenario may be ended in response to a detected user command to end the scenario.
  • At step 630, the method 600 progresses based on whether the option of starting a new scenario or the option of closing the application was selected in step 628. If the option of starting a new scenario was selected, the method 600 returns to step 620. If the option of closing the application was selected, the method 600 proceeds to step 632 and the application is closed.
  • FIG. 7 shows an illustrative process flow for a method 700 for progressing through a clinical scenario in an AR environment displayed on an AR device (e.g., the AR device 100 of FIG. 1 ).
  • the method 700 may be performed, at least in part, by executing, with a processor (e.g., processor 102 shown in FIG. 1 ), instructions (e.g., part of the augmented reality component 110 shown in FIG. 1 ) stored in memory (e.g., the memory 104 shown in FIG. 1 ) of the AR device.
  • the method 700 may, for example, be performed during steps 622-626 of method 600 of FIG. 6.
  • the AR device starts a clinical scenario.
  • the clinical scenario may be selected by a user of the AR device, or may be selected by the user (e.g., a student or a teacher) of another AR device connected to the same network session as the AR device.
  • the clinical scenario may, for example, correspond to one of a multitude of possible clinical scenarios, including anaphylaxis, COPD, burns, stroke, domestic violence, chest pain, asthma, diabetes, and any other applicable clinical scenarios that may be presented for diagnostic training. In some embodiments, different levels of severity may be selected for the clinical scenario.
  • the AR scene displayed by the AR device may be calibrated. The locations and orientations of virtual objects in the virtual scene (e.g., the virtual patient and the HUD) will be defined based on this calibration.
  • a virtual patient is generated and added to an AR scene at a user- selected location in the AR scene depicted on the display of the AR device.
  • the virtual patient may be placed via another, networked AR device in the same network session as the AR device on which method 700 is being performed.
  • the virtual patient will be initiated in a video sequence corresponding to the selected clinical scenario and, optionally, a selected level of severity of the clinical scenario.
  • interactions between one or more students participating in the session and the virtual patient may be monitored and analytics data corresponding to these interactions may be collected and saved to the memory of the AR device or, optionally, to the memory of a remote CMS database (e.g., CMS databases 212 of FIG. 2).
  • this analytics data may include information pertaining to how long it takes the student to accurately identify symptoms of a disease and the disease itself, incorrect attempts of the student at identifying symptoms or diseases, how long the student spends looking at various portions of the patient's body (e.g., portions of the patient's body that are relevant vs. not relevant to the clinical scenario), keywords spoken by the student related to the virtual patient's condition (e.g., captured as audio data by a microphone of the AR device, such as a microphone included in the I/O devices 108 of FIG. 1), or any other applicable information.
  • the video sequence of the virtual patient and the animation of the HUD may progress to new video and animation sequences based on a pre-defined branching path stored in memory. If the AR device detects that a student has said a predefined key word or phrase, that a student has interacted with the virtual patient in a predefined way, or that a predefined amount of time has elapsed since the start of the present video sequence of the virtual patient and animation sequence of the HUD, then the AR device may update the video sequence for the virtual patient and the animation sequence for the HUD according to the pre-defined branching path.
  • the video sequence of the virtual patient and animation sequence of the HUD may be progressed manually by the AR device of a teacher (e.g., a doctor or nurse) based on the teacher's observations of the interactions and observations made by the student or students with the virtual patient.
  • the AR device may detect (e.g., based on audio data captured by a microphone of the AR device, such as a microphone included in the I/O devices 108 of FIG. 1 ) that a student has said the words "anaphylaxis" and "epinephrine” when observing a virtual patient having a video sequence corresponding to the expression of the symptoms of anaphylaxis during the clinical scenario of diagnosing and treating a patient having an allergic reaction.
  • the AR device may detect that the student has accurately diagnosed and suggested treatment for the condition of the virtual patient.
  • the AR device may update the video of the virtual patient to show a video sequence corresponding to a lessening in the severity of the virtual patient's symptoms of anaphylaxis, or a complete abatement of these symptoms.
  • the AR device may detect that a student has said the words "asthma” and "inhaler” when observing a virtual patient having a video sequence corresponding to the expression of the symptoms of anaphylaxis during the clinical scenario of diagnosing and treating a patient having an allergic reaction.
  • the AR device may detect that the student has misdiagnosed the virtual patient and has recommended a treatment that may not be immediately effective or may be detrimental.
  • the AR device may update the video of the virtual patient to show a video sequence corresponding to an increase in the severity of the virtual patient's symptoms of anaphylaxis.
  • incorrect diagnoses may instead correspond to no change in the virtual patient's condition, and the video sequence of the virtual patient may remain unchanged.
  • the AR device may determine that a predetermined amount of time has passed without any student correctly suggesting a correct diagnosis or treatment for the virtual patient.
  • the AR device may update the video of the virtual patient to show a video sequence corresponding to an increase in the severity of the virtual patient's symptoms. This response may be appropriate for clinical scenarios such as anaphylaxis for which not providing a correct diagnosis and treatment quickly allows the condition to worsen.
  • the AR device determines whether the virtual patient was successfully diagnosed and/or treated (e.g., depending on the circumstances of the particular clinical scenario). If a correct diagnosis and/or treatment has been proposed, the method 700 may proceed to step 716 at which the scenario may end. Otherwise, the method 700 may return to step 710 at which students may continue to attempt to determine an appropriate diagnosis and treatment for the virtual patient. In this way, method 700 may recursively update the video sequence of the virtual patient as students interact with the virtual patient through the progression of the clinical scenario. In alternative embodiments, the decision of whether to end the scenario may be made manually via the AR device of a teacher/facilitator.
  • FIG. 8 shows an example of branched pathways 800 that may be defined in memory of an AR device (e.g., AR device 100 of FIG. 1 ) that is displaying a virtual patient in a selected clinical scenario (e.g., via the execution of methods 600 and/or 700 of FIGS. 6 and 7).
  • the branched pathways 800 may be defined in a remote CMS database (e.g., CMS databases 212 of FIG. 2) with which the AR device is in electronic communication.
  • Branched pathways 800 include multiple nodes, each corresponding to a different video sequence for the virtual patient. Each video sequence corresponds to the virtual patient exhibiting a different level of severity or reaction to treatment for the disease or condition of a particular clinical scenario. (An illustrative sketch of such a branched-pathway progression is given at the end of this list.)
  • the nodes may be progressed through based on detected interactions and observations made by students regarding the virtual patient. These students may include the user of the AR device and/or the users of other AR devices connected to the same network session as the AR device of the present embodiment.
  • Positive actions correspond to actions that may be made by a student, which would, in the real world, improve the condition of a patient in a clinical scenario of the type being simulated on the AR device.
  • Negative actions correspond to actions that may be made by a student which would, in the real world, deteriorate the condition of a patient (e.g., such that symptoms displayed by the patient are shown to be more severe) in a clinical scenario of the type being simulated by the AR device.
  • the AR device may move to a new node of the branched pathways corresponding to a new video sequence depicting either an improvement or a deterioration of the virtual patient's condition compared to that of the present node. For example, at node 802, if the AR device detects that a student has performed a positive action, the AR device may proceed to node 808 at which the video of the virtual patient will be updated to a new video sequence to depict an improvement in the virtual patient's condition.
  • the AR device may proceed to node 804 at which the video of the virtual patient will be updated to a new video sequence that depicts a deterioration in the virtual patient's condition.
  • a student may correct their initial negative action, causing the AR device to return to node 802 and causing a return to the initial video sequence depicting the virtual patient's initial condition.
  • a student may continue performing negative actions, causing the AR device to proceed to node 806 at which the video of the virtual patient will be updated to a new video sequence depicting a further worsening of the condition of the virtual patient.
  • a student may perform a negative action that causes the AR device to return to node 802, or may perform a different type of negative action that causes the AR device to proceed to node 810 at which the video of the virtual patient may be updated to a new video sequence depicting a worsening of the virtual patient's condition in a way that may differ from the video sequence of node 802.
  • the video sequence at node 802 may show a virtual patient exhibiting symptoms of a constricted airway.
  • a student may successfully recommend treatment that improves the constriction of the virtual patient's airway, causing these symptoms to be exhibited less severely by the virtual patient in the video sequence of node 808.
  • the student may then suggest an inappropriate treatment for the virtual patient which, while not causing the virtual patient's airway to constrict further, may cause a new or secondary symptom to be exhibited by the virtual patient as part of the video sequence of node 810.
  • the AR device may proceed to node 812 at which the video of the virtual patient may be updated to a new video sequence corresponding to the alleviation of the virtual patient's symptoms, indicating that the patient has been successfully/appropriately diagnosed and treated. It is here that the clinical scenario would end.
  • an augmented reality device may include a display, a memory, a camera, a microphone and a processor.
  • the display may be configured to display an augmented reality scene that includes a physical scene that is overlaid with a virtual scene.
  • the display may be periodically refreshed to show changes to the virtual scene.
  • the camera may be configured to capture video data.
  • the microphone may be configured to capture audio data.
  • the processor may be configured to execute instructions for initiating a clinical scenario by adding a virtual patient to the virtual scene at a selected location of the physical scene, monitoring first activity of a user, saving analytics data to the memory based on the first activity, and updating a first video sequence of the virtual patient shown on the display based on the first activity.
  • the virtual patient may display symptoms corresponding to the clinical scenario.
  • the processor may be further configured to execute instructions for monitoring second activity of the user indicative of first desired interactions between the user and the virtual patient based on first video data captured by the camera and/or first audio data captured by the microphone, determining, based on the first video data and/or the first audio data, that at least one of the first desired interactions corresponds to a predefined positive action corresponding to an appropriate treatment of the virtual patient based on the clinical scenario, and updating the first video sequence of the virtual patient shown on the display such that the symptoms displayed by the virtual patient are improved or eliminated.
  • the processor may be further configured to execute instructions for monitoring third activity of the user indicative of second desired interactions between the user and the virtual patient based on second video data captured by the camera and/or second audio data captured by the microphone, determining, based on the second video data and/or the second audio data, that at least one of the second desired interactions corresponds to a predefined negative action corresponding to an inappropriate treatment of the virtual patient based on the clinical scenario, and updating the video sequence of the virtual patient shown on the display such that the symptoms displayed by the virtual patient are more severe.
  • the processor may be further configured to execute instructions for, prior to initiating the clinical scenario, displaying, via the display, a list of clinical scenarios, detecting, based on video data captured by the camera, selection of the clinical scenario from the list of clinical scenarios, and calibrating the augmented reality scene.
  • the processor may be further configured to execute instructions for adding a virtual display to the virtual scene shown on the display at a location in the physical scene adjacent to the virtual patient, the virtual display depicting vital signs for the virtual patient, and updating a second video sequence of the virtual display based on the first activity.
  • the processor may be further configured to execute instructions for determining, based on video data captured by the camera and/or audio data captured by the microphone, that the virtual patient has been successfully diagnosed and/or treated based on the first activity and based on the clinical scenario, and ending the clinical scenario.
  • the first video sequence may include a prerecorded loop of volumetric video of a live person.
  • a method may include steps of, with a processor of an augmented reality device, causing an augmented reality scene to be displayed by a display of the augmented reality device, the augmented reality scene comprising a physical scene overlaid with a virtual scene, periodically refreshing the display to show updates to the virtual scene, with the processor, initiating a clinical scenario by adding a virtual patient to the virtual scene at a selected location of the physical scene, with the processor, monitoring first activity of a user of the augmented reality device, with the processor, saving analytics data to a memory of the augmented reality device based on the first activity, and, with the processor, updating a first video sequence of the virtual patient shown on the display based on the first activity.
  • the virtual patient may display symptoms corresponding to the clinical scenario.
  • the method may further include steps of, with the processor, monitoring second activity of the user indicative of first desired interactions between the user and the virtual patient based on first video data captured by a camera of the augmented reality device and/or first audio data captured by a microphone of the augmented reality device, with the processor, determining, based on the first video data, that at least one of the first desired interactions corresponds to a predefined positive action corresponding to an appropriate treatment of the virtual patient based on the clinical scenario, and, with the processor, updating the first video sequence of the virtual patient shown on the display such that the symptoms displayed by the virtual patient are improved or eliminated.
  • the method may further include steps of, with the processor, monitoring third activity of the user indicative of second desired interactions between the user and the virtual patient based on second video data captured by the camera and/or second audio data captured by the microphone, with the processor, determining, based on the second video data and/or the second audio data, that at least one of the second desired interactions corresponds to a predefined negative action corresponding to an inappropriate treatment of the virtual patient based on the clinical scenario, and, with the processor, updating the first video sequence of the virtual patient shown on the display such that the symptoms displayed by the virtual patient are more severe.
  • the method may further include steps for, with the processor, prior to initiating the clinical scenario, displaying, via the display, a list of clinical scenarios, with the processor, based on video data captured by a camera of the augmented reality device, detecting selection of the clinical scenario from the list of clinical scenarios, and, with the processor, calibrating the augmented reality scene.
  • the method may further include steps for, with the processor, adding a virtual display to the virtual scene shown on the display at a location in the physical scene adjacent to the virtual patient, the virtual display depicting vital signs for the virtual patient, and, with the processor, updating a second video sequence of the virtual display based on the first activity.
  • the vital signs may include one or more of body temperature, heart rate, electrocardiogram signal graphs, respiration rate, oxygen saturation, and blood pressure.
  • the method may further include, with the processor, determining, based on video data captured by a camera of the augmented reality device, that the virtual patient has been successfully diagnosed and/or treated based on the first activity and based on the clinical scenario, and, with the processor, ending the clinical scenario.
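
The branching progression of virtual patient video sequences described above (FIGS. 7 and 8) behaves like a small state machine: each node selects a video sequence (and a matching HUD animation), and detected positive or negative student actions move the scenario between nodes. The following is a minimal, purely illustrative Python sketch of that idea; the node names, video file names, and keyword sets are assumptions loosely modeled on the anaphylaxis example, not an implementation disclosed in the patent.

```python
# Hypothetical branched pathway: node -> (video sequence, transitions by detected action).
ANAPHYLAXIS_PATHWAY = {
    "initial":   ("constricted_airway.mp4", {"positive": "improved", "negative": "worse"}),
    "improved":  ("breathing_easier.mp4",   {"positive": "resolved", "negative": "secondary"}),
    "worse":     ("severe_reaction.mp4",    {"positive": "initial",  "negative": "critical"}),
    "secondary": ("new_symptom.mp4",        {"positive": "resolved", "negative": "worse"}),
    "critical":  ("unresponsive.mp4",       {}),
    "resolved":  ("symptoms_abated.mp4",    {}),   # the scenario ends once this node is reached
}

POSITIVE_KEYWORDS = {"anaphylaxis", "epinephrine"}   # correct diagnosis / appropriate treatment
NEGATIVE_KEYWORDS = {"asthma", "inhaler"}            # misdiagnosis / inappropriate treatment

def classify_action(spoken_words):
    """Map words detected from microphone audio to a positive or negative action, if any."""
    words = set(spoken_words)
    if words & POSITIVE_KEYWORDS:
        return "positive"
    if words & NEGATIVE_KEYWORDS:
        return "negative"
    return None

def advance(node, spoken_words):
    """Return the next node, and therefore the next video sequence to display."""
    _video, transitions = ANAPHYLAXIS_PATHWAY[node]
    action = classify_action(spoken_words)
    return transitions.get(action, node)   # unrecognised activity leaves the sequence unchanged

# Example: a student correctly names the condition and the treatment.
current = advance("initial", ["this", "looks", "like", "anaphylaxis", "give", "epinephrine"])
assert current == "improved"
```

In a fuller system the same transitions could also be driven by detected gestures, by elapsed time without a correct diagnosis, or by a teacher manually selecting the next sequence, as the description notes.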

Abstract

Systems and methods are provided for diagnostic applications of augmented reality. A diagnostic scenario may be selected by a user to be executed using an augmented reality device. The augmented reality scene may be calibrated using the augmented reality device. A virtual patient may be added to the virtual scene at a selected location corresponding to the physical scene. A virtual display may be added to the virtual scene, showing vital signs corresponding to the virtual patient. The virtual patient and virtual display may exhibit characteristics corresponding to an illness, injury, or disorder defined in the diagnostic scenario. The student's interactions with the virtual patient may be monitored and saved. Based on these interactions, the condition of the virtual patient may be updated by the augmented reality device to be shown to improve or deteriorate.

Description

SYSTEMS AND METHODS FOR DIAGNOSTIC APPLICATIONS OF AUGMENTED
REALITY
CROSS-REFERENCE TO RELATED APPLICATIONS
[001] This application claims priority to U.S. Provisional Application No. 62/640,782 filed March 9, 2018.
BACKGROUND
[002] Mixed reality or augmented reality display devices, such as head-mounted display devices, may be used in a variety of real-world environments and contexts. Such devices provide a view of a physical environment that is augmented by a virtual environment including virtual images, such as two-dimensional virtual objects and three-dimensional holographic objects, and/or other virtual reality information. Such devices may also include various sensors for collecting data from the surrounding environment.
[003] An augmented reality device may display virtual images that are interspersed with real-world physical objects to create a mixed reality environment. A user of the device may desire to interact with a virtual or physical object using the mixed reality device. However, conventional augmented reality devices may not provide sufficiently interactive and specialized mixed reality environments.
[004] Additionally, conventional methods of training medical professionals generally involve the use of "standardized patients" played by actors who pretend to be suffering from various illnesses in order to provide the medical professionals with training on a variety of clinical scenarios. However, using live actors for this purpose can be costly and the number of students that can be effectively trained in-person using a single actor may be limited. Additionally, the imitation of clinical scenarios using live actors may be costly, may require training institutions to be sufficiently well-equipped, and may not allow for the standardization of training quality across different training sessions. It is within this context that embodiments of the present invention arise.
BRIEF DESCRIPTION OF THE DRAWINGS
[005] FIG. 1 illustrates a system level block diagram for an augmented reality device, in accordance with an embodiment.
[006] FIG. 2 illustrates a system level block diagram for a system that includes networked augmented reality devices, in accordance with an embodiment.
[007] FIG. 3 shows illustrative placement of a virtual patient in an augmented reality scene, in accordance with an embodiment.
[008] FIG. 4 shows an illustrative augmented reality scene in which a menu is presented from which different scenarios may be selected, in accordance with an embodiment.
[009] FIG. 5 shows an illustrative augmented reality scene in which a heads up display and a video control interface are presented, in accordance with an embodiment.
[0010] FIG. 6 shows an illustrative process flow that may be performed by an augmented reality device to start, progress through, and end a scenario, in accordance with an embodiment.
[0011] FIG. 7 shows an illustrative process flow that may be performed by an augmented reality device to progress through a selected scenario, in accordance with an embodiment.
[0012] FIG. 8 shows illustrative branched pathways for the automatic selection of virtual patient video sequences based on student interaction with the virtual patient, in accordance with an embodiment.
DETAILED DESCRIPTION
[0013] The present invention will now be discussed in detail with regard to the attached drawing figures that were briefly described above. In the following description, numerous specific details are set forth illustrating the Applicant’s best mode for practicing the invention and enabling one of ordinary skill in the art to make and use the invention. It will be obvious, however, to one skilled in the art that the present invention may be practiced without many of these specific details. In other instances, well-known machines, structures, and method steps have not been described in particular detail in order to avoid unnecessarily obscuring the present invention. Unless otherwise indicated, like parts and method steps are referred to with like reference numerals.
[0014] As described above, conventional methods of training medical professionals may involve the use of in-person live actors to imitate a clinical scenario. However, due to the cost and limited availability of imitating clinical scenarios in such a way, this method may not be practical or widely available. Embodiments of the present invention provide methods and apparatus for simulating clinical scenarios using augmented reality (AR) devices. For example, AR devices described herein may display pre-recorded volumetric video representations of standardized patients exhibiting symptoms for a variety of selectable clinical scenarios, overlaid over a display of a real-world, physical scene. In this way, traditional limitations regarding the number of students able to participate in and the cost of simulated clinical scenarios using standardized patients may be reduced or eliminated via the application of AR technology. Additionally, using different actors to train different students may result in varied results due to the differences in quality between actors and their performances. By instead using a pre-recorded video of an actor in AR simulated clinical scenarios, consistent training quality may be ensured. For example, by using the same actor and performance for all students trained with a given AR simulated clinical scenario, training quality is standardized for all students. Additionally, by using pre-recorded video rather than a live performance, it can be ensured that the quality of the actor's performance meets acceptable standards before it is used in an AR simulated clinical scenario to train students.
[0015] FIG. 1 is a block diagram depicting functional components of an illustrative AR device 100 that may, for example, display a view of a physical scene that is augmented through the integration of a virtual scene that includes virtual images, such as two-dimensional (2D) and three-dimensional (3D) virtual objects. While shown as a block diagram here, the AR device 100 may be a smartphone, tablet device, head mounted display (HMD) device, smart glasses, or any other applicable portable electronic device as may be readily understood by one of ordinary skill. For examples in which the AR device 100 is an HMD device or smart glasses, the AR device 100 may be a holographic computer built into a headset having one or more semitransparent holographic lenses by which virtual objects may be displayed overlapping a physical scene (e.g., via the projection of light onto the lenses) so that a user perceives an augmented reality environment. For examples in which the AR device 100 is a smartphone or a tablet device, the AR device 100 may overlay captured video images of a physical scene with virtual objects to generate an augmented reality scene that may then be presented to a user via an electronic display. The AR device 100 may allow a user to see, hear, and interact with virtual objects displayed within a real-world environment such as a classroom, living room, or office space.
[0016] The AR device 100 may include a processor 102, a memory 104 that includes an AR component 110 and an operating system 106, input/output (I/O) devices 108, a network interface 112, cameras 120, a display device 130 and an accelerometer 140. Generally, the processor 102 may retrieve and execute programming instructions stored in the memory 104. The processor 102 may be a single CPU, multiple CPUs, a single CPU having multiple processing cores, GPUs having multiple execution paths, or any other applicable processing hardware and arrangement thereof. The memory 104 may represent both random access memory (RAM) and non-volatile memory (e.g., the non-volatile memory of one or more hard disk drives, solid state drives, flash memory, etc.). In some embodiments, the memory 104 may be considered to include memory physically located elsewhere (e.g., a computer in electronic communication with the AR device 100 via network interface 112). The operating system 106 included in memory 104 may control the execution of application programs on the AR device 100 (e.g., executed by processor 102). The I/O devices 108 may include a variety of input and output devices, including displays, keyboards, touchscreens, buttons, switches, microphones, and any other applicable input or output device. The network interface 112 may enable the AR device 100 to connect to and transmit and receive data over an electronic data communications network (e.g., via Ethernet, WiFi, Bluetooth, or any other applicable electronic data communications network).
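For illustration only, the component layout described above could be modeled roughly as follows. This is a hypothetical Python sketch; the class and attribute names are assumptions, and the patent does not specify any particular implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ARComponent:
    """Application logic that overlays a virtual scene onto the physical scene."""
    virtual_object_models: dict = field(default_factory=dict)  # 3D models held in memory

@dataclass
class ARDevice:
    """Rough analogue of the AR device 100 of FIG. 1."""
    processor: str                  # single CPU, multi-core CPU, GPUs, etc.
    memory_gb: float                # RAM plus non-volatile (possibly remote) storage
    operating_system: str
    ar_component: ARComponent
    io_devices: list                # displays, keyboards, touchscreens, microphones, ...
    network_interface: str          # Ethernet, WiFi, Bluetooth, ...
    cameras: list                   # outward-facing depth/RGB/IR cameras, user-facing eye tracker
    display: str
    has_accelerometer: bool = True
```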
[0017] The cameras 120 may include an outward facing camera that may detect movements within its field of view, such as gesture-based inputs or other movements performed by a user or by a person or physical object within the field of view. The outward facing camera may also capture two-dimensional image information and depth information from a physical scene and physical objects within the physical scene. For example, the outward facing camera may include a depth camera, a visible light camera, an infrared light camera, and/or a position tracking camera. The cameras 120 may also include one or more user-facing cameras, which may, among other applications, track eye movement of the user. For example, the AR component 110 could use such a user-facing camera of cameras 120 to determine which portion of the display device 130, and thereby which portion of the displayed AR scene, the user is looking at. Generally, the accelerometer 140 is a device capable of measuring the physical (or proper) acceleration of the AR device 100. For example, the AR component 110 may use the accelerometer 140 to determine when the position of the AR device 100 is changing in order to track the location of the AR device 100. The cameras 120 may capture video data of the physical scene, such that activity of the user may be monitored and analyzed to identify actions (e.g., gestures and other movements of the user) indicative of desired interactions between the user and virtual objects within the virtual scene of the AR scene shown on the AR device 100.
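As one purely illustrative example of how eye-tracking data from a user-facing camera could feed the analytics discussed later (for instance, how long a student spends looking at different portions of the virtual patient's body), the following sketch accumulates per-region gaze durations. The region labels and API are assumptions, not part of the patent.

```python
import time
from collections import defaultdict

class GazeAnalytics:
    """Accumulate how long the user looks at each region of the displayed AR scene."""

    def __init__(self):
        self.durations = defaultdict(float)   # seconds spent looking at each region
        self._last_region = None
        self._last_time = None

    def update(self, region, now=None):
        """Call once per frame with the region currently gazed at (from eye tracking)."""
        now = time.monotonic() if now is None else now
        if self._last_region is not None:
            self.durations[self._last_region] += now - self._last_time
        self._last_region, self._last_time = region, now

# Hypothetical usage with made-up region labels:
gaze = GazeAnalytics()
gaze.update("patient_face", now=0.0)
gaze.update("patient_airway", now=2.5)   # 2.5 s were spent looking at the face
gaze.update("hud", now=4.0)              # 1.5 s were spent looking at the airway
```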
[0018] Generally, the AR component 110 is configured to adjust the depiction of the physical scene viewed on or through the display device 130 via the display of virtual objects of a virtual scene, thereby producing an AR scene. These virtual objects have locations and orientations within the physical scene that are defined based on a calibration that may be performed by the user. The collection of virtual components (e.g., the virtual objects, respective positions and orientations of the virtual objects, and optional state information for the virtual objects) in a given AR scene may be referred to herein as a "virtual scene." For instance, the AR component 110 may cause one or more virtual objects of a virtual scene to be displayed at one or more defined (e.g., predefined or defined by a user's action) locations within the physical scene to generate an AR scene. As the AR scene displayed via the display device 130 represents a three-dimensional space, the AR component 110 could determine an area of three-dimensional space for a given virtual object to occupy. Once a virtual object has been placed, the virtual object may be, for example, repositioned, rotated, resized, or otherwise manipulated by a user. A database of virtual objects may be stored in the memory 104 (e.g., as 3D models). When a user inputs a command (e.g., using one or more detectable gestures) to add (e.g., import) a virtual object to a virtual scene, the AR component 110 may retrieve the virtual object from this database in the memory 104 and may place the virtual object in the virtual scene. In some embodiments, other information about each virtual object (e.g., color, the present frame of the virtual object in a video or animation sequence, etc.) may be defined in memory. As virtual objects are added or removed from a virtual scene or otherwise changed within the virtual scene, the display 130 may be periodically refreshed to show these changes.
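A minimal sketch of the virtual-scene bookkeeping described above, in which objects are retrieved from an in-memory model database, placed at calibrated positions within the physical scene, and later manipulated. All names and asset files here are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical in-memory database of 3D models (stand-ins for real assets).
MODEL_DATABASE = {"virtual_patient": "patient_volumetric.glb", "hud_panel": "hud.glb"}

@dataclass
class VirtualObject:
    name: str
    position: tuple                              # location within the physical scene
    rotation: tuple = (0.0, 0.0, 0.0)
    scale: float = 1.0
    state: dict = field(default_factory=dict)    # e.g. current frame of a video/animation sequence

class VirtualScene:
    """The collection of virtual objects overlaid on the physical scene."""

    def __init__(self):
        self.objects = {}

    def place(self, name, position):
        """Retrieve a model from the database and place it in the scene."""
        if name not in MODEL_DATABASE:
            raise KeyError(f"no 3D model named {name!r}")
        obj = VirtualObject(name=name, position=position)
        self.objects[name] = obj
        return obj

    def manipulate(self, name, position=None, rotation=None, scale=None):
        """Reposition, rotate, or resize a placed object; the next display refresh shows the change."""
        obj = self.objects[name]
        obj.position = position if position is not None else obj.position
        obj.rotation = rotation if rotation is not None else obj.rotation
        obj.scale = scale if scale is not None else obj.scale
```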
[0019] In some embodiments, the AR component 110 may be configured to initialize a single-user session, to host a network session to which other networked AR devices may connect, or to connect the AR device 100 to a network session hosted by another AR device or host server. As used herein, a "network session" refers to an instantiation of a virtual scene (e.g., instantiated at a host AR device or a host server), which may be loaded onto (e.g., streamed to) an AR device connected to the session (e.g., communicatively connected to a host of the session) to generate an AR scene that includes the virtual scene displayed over a physical scene. As used herein, a "single-user session" refers to an instantiation of a virtual scene on a single AR device that is not accessible by other AR devices that may otherwise be in communication with the single AR device over, for example, an electronic data communications network. Network sessions may be accessed by AR device 100 via the network interface 112 and an electronic data communications network. As an example, a network session may be established by a separate host AR device or host server and may be joined by the AR device 100 and/or by additional networked AR devices. Alternatively, the AR device 100 may be configured to act as a host and may establish its own network session that is available for other AR devices to join via the electronic data communications network. The virtual scene of a network session may be loaded onto the AR device 100 such that the display 130 shows the same virtual scene as the other networked AR devices that have joined the network session.
[0020] FIG. 2 shows a network architecture 200 which may enable multiple network sessions to be established between networked AR devices. Two network sessions 204 and 208 may be established between AR devices 202-1 through 202-N and AR devices 206-1 through 206-M, respectively, where N represents the number of networked AR devices in the network session 204 and where M represents the number of networked AR devices in the network session 208. The networked AR devices of each network session may communicate via one or more electronic data communications networks 210, which may include one or more of local area networks (LANs), wide area networks (WANs), the internet, or any other applicable network. For example, the network session 204 may be established by the AR device 202-1, which may act as a host device in establishing network session 204.
[0021] Each of the network sessions 204 and 208 may be managed via one or more CMS databases 212, which may, for example, be implemented via a cloud-based system. CMS databases 212 may, for example, maintain a list of open network sessions which, in the present example, includes the network sessions 204 and 208. The list of open network sessions may further include, for a given network session (e.g., network session 204), identifying information (e.g., IP addresses, MAC addresses, device names, etc.) corresponding to networked AR devices connected in the given network session (e.g., AR devices 202). A new network session may be added to the list of open network sessions maintained by the CMS databases 212 in response to a host AR device initiating the new network session. An open network session may be removed from the list of open network sessions maintained by the CMS databases 212 in response to a host AR device for the open network session closing the open network session. The CMS databases 212 may, for example, further include one or more databases of virtual objects that may be retrieved by an AR device connected to CMS databases 212 (e.g., using AR devices 202 or 206 connected to CMS databases 212 via networks 210) and loaded into the virtual scene running on the AR device or that may be downloaded to the memory of an AR device for future use in generating virtual scenes.
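By way of illustration and not limitation, the open-session bookkeeping described above may be sketched as a simple registry (a hypothetical Python sketch; the class and method names are invented for illustration and are not drawn from any actual CMS implementation): sessions are added when a host opens them, devices are recorded as they join, and sessions are removed when their host closes them.

    class SessionRegistry:
        """Hypothetical stand-in for the open-session list kept by the CMS databases 212."""

        def __init__(self):
            self.open_sessions = {}  # session identifier -> set of connected device identifiers

        def open_session(self, session_id, host_device_id):
            # Called when a host AR device initiates a new network session.
            self.open_sessions[session_id] = {host_device_id}

        def join_session(self, session_id, device_id):
            # Called when a networked AR device joins an existing network session.
            self.open_sessions[session_id].add(device_id)

        def close_session(self, session_id):
            # Called when the host closes the session; it is removed from the list.
            self.open_sessions.pop(session_id, None)

        def list_open_sessions(self):
            return list(self.open_sessions.keys())

In this sketch, opening session 204 from device 202-1 and then joining from device 202-2 would be expressed as registry.open_session("204", "202-1") followed by registry.join_session("204", "202-2").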
[0022] Turning now to FIG. 3, a sequence of frames 304-308 is shown, depicting the placement of a virtual object (in this case, a hologram depicting video of a virtual patient) into an AR scene using an AR device. Each of the frames 304-308 depicts a different AR scene that may be shown via the display of an AR device (e.g., the display 130 of AR device 100 of FIG. 1).
[0023] Frame 304 shows a virtual patient 302 that has been added to the AR scene, but that has not yet been placed. As used herein, a "virtual patient" may refer to a pre-recorded volumetric video representation of a standardized patient displaying symptoms of a disease or condition corresponding to a given clinical scenario. Alternatively, the "virtual patient" may be a three-dimensional computer-generated animation representing a standardized patient displaying symptoms of a disease or condition corresponding to a given clinical scenario. For example, frame 304 may be what is shown on the display of the AR device immediately after the user performs a gesture to add the virtual patient 302 to the AR scene. The virtual patient 302 is shown here in frame 304 to have a partially transparent wireframe form that may help a user to accurately place the virtual patient 302 in a desired location. This wireframe view of the virtual patient 302 is meant to be illustrative and not limiting. Additionally, a partially transparent box may be shown around the wireframe of the virtual patient 302, which may help a user in understanding the dimensions of the virtual patient 302. If desired, other views of the virtual patient 302 may be displayed during placement of the virtual patient 302 (e.g., a view of the virtual patient as it will appear once placed).
[0024] Frame 306 shows the virtual patient 302 having been moved to the location desired by the user, but not yet having been placed by the user. A virtual plane 310 may be displayed in the AR scene when the virtual patient 302 is moved within a predefined distance of a physical object on which the virtual patient 302 may be placed (e.g., when the virtual patient 302 is moved within the predefined distance of the chair shown in the AR scene). The virtual plane 310 may assist the user in identifying the distance, depth, and/or orientation of the virtual patient 302 during placement of the virtual patient 302. In some embodiments, the color of the virtual plane 310 may indicate whether the virtual patient 302 is in a valid location for placement (e.g., with green corresponding to a valid location and red corresponding to an invalid location).
[0025] Frame 308 shows the virtual patient 302 after placement in the physical chair in the AR scene. Upon placement by the user, the virtual patient 302 may transition from a wireframe view to a solid view and, in some embodiments, videos (e.g., corresponding to captured volumetric video of a live person) that may be performed by the virtual patient 302 may begin.
[0026] FIG. 4 shows an illustrative AR scene 400, as displayed on an AR device (e.g., AR device 100 of FIG. 1 ) during a session. The AR scene 400 includes a menu 404, which is presented next to a virtual patient 402, shown to be seated in a physical chair of the physical scene. Menu 404 allows a user to select from a variety of clinical scenarios when the user selects menu option 406. In the present example, the selectable clinical scenarios include an anaphylaxis scenario 408 and a COPD scenario 410. Upon the user's selection of one of the clinical scenarios 408 or 410, the virtual patient 402 may begin a video sequence corresponding to the selected clinical scenario. In some embodiments, the user may also be presented with the opportunity to select different severities for a selected clinical scenario, with each level of severity having a different corresponding video sequence for the virtual patient 402. As shown, the virtual patient 402 may be partially transparent (e.g., translucent) such that physical objects behind the virtual patient 402 may be seen through the virtual patient 402. This partial transparency may aid the user in distinguishing between which objects are virtual and which are physical.
[0027] FIG. 5 shows an illustrative AR scene 500, as displayed on an AR device (e.g., AR device 100 of FIG. 1) during a session. The AR scene 500 includes a control interface 504 and a heads up display (HUD) 506, both of which are displayed at different respective locations next to a virtual patient 502. The HUD may sometimes be referred to as a "virtual display." The HUD 506 displays vital signs corresponding to the virtual patient 502. For example, these vital signs may include body temperature, heart rate, electrocardiogram (ECG) signal graphs, respiration rate, oxygen saturation, blood pressure, and other vital sign information. The vital signs displayed on the HUD 506 may be animated and synced with the video of the virtual patient 502 so as to simulate the vital signs of a patient in a given clinical scenario having a given level of severity. As shown, the virtual patient 502 may be partially transparent (e.g., translucent) such that physical objects behind the virtual patient 502 may be seen through the virtual patient 502. The HUD 506 may similarly be partially transparent. This partial transparency may aid the user in distinguishing between which objects are virtual and which are physical.
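By way of illustration and not limitation, the syncing of the HUD's vital-sign animation with the virtual patient's video may be sketched as a lookup keyed to the video playback time (a hypothetical Python sketch; the keyframe times and vital-sign values below are invented for illustration only):

    # Hypothetical keyframes tying HUD vital signs to the video sequence timeline (seconds).
    VITAL_SIGN_KEYFRAMES = {
        0.0:  {"heart_rate": 88,  "resp_rate": 18, "spo2": 97, "bp": "124/82"},
        15.0: {"heart_rate": 104, "resp_rate": 24, "spo2": 93, "bp": "110/70"},
        30.0: {"heart_rate": 122, "resp_rate": 30, "spo2": 89, "bp": "96/60"},
    }

    def vitals_at(playback_time):
        """Return the most recent keyframe at or before the video playback time."""
        times = sorted(t for t in VITAL_SIGN_KEYFRAMES if t <= playback_time)
        return VITAL_SIGN_KEYFRAMES[times[-1]] if times else None

Because the HUD reads from the same playback clock as the video of the virtual patient, a call such as vitals_at(20.0) while the video is 20 seconds into its sequence returns the 15-second keyframe, keeping the displayed vital signs in step with the symptoms being expressed.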
[0028] The control interface 504 includes buttons that may be selected by a user for use in controlling the video sequence currently being expressed by the virtual patient 502. As used in the context of the virtual patient, a "video sequence" refers to a prerecorded video loop of the virtual patient, with different video sequences for a given clinical scenario corresponding to different levels of severity or reactions to treatment for the disease or condition corresponding to the given clinical scenario. For example, a user may fast-forward, rewind, pause, play, or restart a video sequence of the virtual patient 502. In some scenarios (e.g., when the user is signed in as a teacher, rather than as a student) the user may be presented, via the control interface 504, with one or more options for navigating between one or more video sequences. For example, each navigable video sequence may correspond to a different level of severity for the clinical scenario being simulated by the virtual patient 502. As a user interacts with the virtual patient 502, virtual patient 502 may progress through various video sequences. In some embodiments, navigation through these video sequences may be selected by a teacher based on the teacher's observation of a student's interaction with the virtual patient 502. In other embodiments, these video sequences may be selected automatically based on user interactions with the virtual patient 502 that are detected by the AR device, as will be explained in more detail below. As used herein, a student's "interaction" with the virtual patient 502, or any other virtual object of the AR scene, may be defined as activity of the student that is indicative of a desired interaction with the virtual patient 502 or other virtual object (e.g., as a student cannot necessarily physically interact with virtual objects). Activity of the student or user that is detected by the AR device may generally be considered to have been detected based on video data produced by one or more cameras of the AR device (e.g., cameras 120 of FIG. 1).
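By way of illustration and not limitation, the playback and navigation controls described above may be sketched as follows (a hypothetical Python sketch; the class, method, and role names are illustrative only and are not drawn from control interface 504):

    class VideoSequenceController:
        """Hypothetical sketch of playback controls for a virtual patient's video sequences."""

        def __init__(self, sequences, role="student"):
            self.sequences = sequences            # e.g., {"mild": "seq_a", "moderate": "seq_b", "severe": "seq_c"}
            self.current = next(iter(sequences))  # start on the first listed sequence
            self.position = 0.0                   # playback position in seconds
            self.playing = False
            self.role = role

        def play(self):
            self.playing = True

        def pause(self):
            self.playing = False

        def restart(self):
            self.position = 0.0

        def navigate(self, severity):
            # In this sketch, only a user signed in as a teacher may jump between sequences.
            if self.role != "teacher":
                raise PermissionError("only a teacher may navigate between video sequences")
            self.current = severity
            self.position = 0.0

In such a sketch, a student's controller would expose only the playback controls, while the navigate method models the teacher-only option of jumping directly to the video sequence for a different level of severity.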
[0029] FIG. 6 shows an illustrative process flow for a method 600 for executing an application for simulating a clinical scenario with an AR device (e.g., the AR device 100 of FIG. 1 ). For example, the method 600 may be performed, at least in part, by executing, with a processor (e.g., processor 102 shown in FIG. 1 ), instructions (e.g., part of the augmented reality component 110 shown in FIG. 1 ) stored in memory (e.g., the memory 104 shown in FIG. 1 ) of the AR device.
[0030] At step 602, an application may be started on the AR device. The application may be, for example, a software application stored in the memory of the AR device, which includes a database of 2D and/or 3D virtual objects (e.g., video sequences for a virtual patient, HUDs showing vital signs, menus, control interfaces, virtual medical diagnostics equipment, etc.) designed for use with the application. In some embodiments, additional virtual objects may be downloaded to the memory of the AR device or otherwise accessed from one or more remote databases communicatively coupled to the AR device via an electronic data communications network.
[0031] At step 604, the AR device may generate and display a prompt to the user (e.g., by executing instructions on a processor of the AR device) in the AR scene shown on the display of the AR device. The prompt allows the user to select from a variety of displayed modes of operation for the AR device. These modes may include starting or joining a multi-user network session and starting a single-user session. The prompt may be shown on a display of the AR device, and the mode may be selected by the user via the user's performance of a detectable gesture. For example, the user may perform an air tap gesture in a location of one of the modes displayed in the prompt in the AR scene shown on the display of the AR device. The gesture may be performed within the field of view of one or more cameras of the AR device so that the gesture may be detected and interpreted by the processor of the AR device.
[0032] At step 606, the method 600 progresses based on which mode was selected in step 604. If the mode of starting a multi-user network session was selected, the method 600 progresses to step 608. If the mode of beginning a single-user session was selected, the method 600 progresses to step 620.
[0033] At step 608, in response to the selection of the mode of starting a multi-user network session, the AR device may generate and display a prompt to the user (e.g., by executing instructions on a processor of the AR device) in the AR scene shown on the display of the AR device. The prompt allows the user to select from the options of joining an existing network session or hosting a network session with the AR device of the user. The prompt may be shown on a display of the AR device, and the option may be selected by the user via the user's performance of a detectable gesture. For example, the user may perform an air tap gesture in a location of one of the "join" and "host" options displayed in the prompt in the AR scene shown on the display of the AR device. The gesture may be performed within the field of view of one or more cameras of the AR device so that the gesture may be detected and interpreted by the processor of the AR device.
[0034] At step 610, the method 600 progresses based on which option was selected in step 608. If the "host" option was selected, the method 600 progresses to step 614. If the "join" option was selected, the method 600 progresses to step 616.
[0035] At step 614, in response to the selection of the "host" option, the user's AR device initiates a network session as the host.
[0036] At step 616, in response to the selection of the "join" option, the AR device may generate and display a list of network sessions that are available to be joined by the AR device of the user. The user may then select a network session from the displayed list.
[0037] At step 618, the AR device of the user joins the selected network session.
[0038] At step 620, the AR device presents a list of selectable scenarios on the display of the AR device. Each scenario may correspond to a different clinical scenario. For example, these clinical scenarios may include anaphylaxis, COPD, burns, stroke, domestic violence, chest pain, asthma, diabetes, and any other applicable clinical scenarios that may be presented for diagnostic training. In some embodiments, different levels of severity may be selected for each available clinical scenario included in the displayed list.
[0039] At step 622, the AR device initiates a clinical scenario selected in step 620. For example, a virtual patient is generated and added to an AR scene at a user-selected location in the AR scene depicted on the display of the AR device. In some embodiments, the virtual patient may be placed via a networked AR device of another user (student or teacher) in the network session. The virtual patient will be initiated in an animation video sequence corresponding to the selected clinical scenario and, optionally, the selected level of severity of the clinical scenario. A HUD may also be added to the AR scene at a location adjacent to the virtual patient. The HUD may be animated to show vital signs of the virtual patient corresponding to the selected clinical scenario and, optionally, the selected level of severity of the clinical scenario. The animation of the HUD may be synced with the video sequence of the virtual patient.
[0040] At step 624, the AR device progresses through the scenario while collecting analytics data corresponding to the interactions of one or more students with the virtual patient. For example, various video sequences of the virtual patient may be progressed through, each corresponding to different levels of disease, symptoms thereof, reactions to appropriate treatment/mistreatment, etc., and each being initiated in response to detected student interactions or observations. Collected analytics data for a given student may, for example, include information pertaining to how long it takes the student to accurately identify symptoms of a disease and the disease itself, incorrect attempts of the student at identifying symptoms or diseases, how long the student spends looking at various portions of the patient's body (e.g., portions of the patient's body that are relevant vs. those that are irrelevant to the disease or condition), keywords related to the virtual patient's condition, or any other applicable information. Detailed examples of this clinical scenario progression are described below in connection with FIGS. 7 and 8.
[0041] At step 626, the scenario may be ended once the AR device (or another AR device in the network session) detects that a student has successfully identified and/or treated the disease or condition of the virtual patient. Alternatively, the scenario may be ended in response to a detected user command to end the scenario.
[0042] At step 628, after ending the scenario, the AR device may present a prompt on the display providing the user the options of starting a new scenario or closing the application. In some embodiments, this prompt may only be displayed on the AR device of the teacher.
[0043] At step 630, the method 600 progresses based on whether the option of starting a new scenario or the option of closing the application was selected in step 628. If the option of starting a new scenario was selected, the method 600 returns to step 620. If the option of closing the application was selected, the method 600 proceeds to step 632 and the application is closed.
[0044] FIG. 7 shows an illustrative process flow for a method 700 for progressing through a clinical scenario in an AR environment displayed on an AR device (e.g., the AR device 100 of FIG. 1 ). For example, the method 700 may be performed, at least in part, by executing, with a processor (e.g., processor 102 shown in FIG. 1 ), instructions (e.g., part of the augmented reality component 110 shown in FIG. 1 ) stored in memory (e.g., the memory 104 shown in FIG. 1 ) of the AR device. The method 700 may, for example, be performed during steps 622-626 of method 600 of FIG. 6.
[0045] At step 702, the AR device starts a clinical scenario. As described previously, the clinical scenario may be selected by a user of the AR device, or may be selected by the user (e.g., a student or a teacher) of another AR device connected to the same network session as the AR device. The clinical scenario may, for example, correspond to one of a multitude of possible clinical scenarios, including anaphylaxis, COPD, burns, stroke, domestic violence, chest pain, asthma, diabetes, and any other applicable clinical scenarios that may be presented for diagnostic training. In some embodiments, different levels of severity may be selected for the clinical scenario.
[0046] At step 704, the AR scene displayed by the AR device may be calibrated. The locations and orientations of virtual objects in the virtual scene (e.g., the virtual patient and the HUD) will be defined based on this calibration.
[0047] At step 706, a virtual patient is generated and added to an AR scene at a user-selected location in the AR scene depicted on the display of the AR device. In some embodiments, the virtual patient may be placed via another networked AR device in the same network session as the AR device on which method 700 is being performed. The virtual patient will be initiated in a video sequence corresponding to the selected clinical scenario and, optionally, a selected level of severity of the clinical scenario.
[0048] At step 708, a HUD may be added to the AR scene at a location adjacent to the virtual patient. The HUD may be animated to show vital signs of the virtual patient corresponding to the selected clinical scenario and, optionally, the selected level of severity of the clinical scenario. The animation of the vital signs displayed on the HUD may be synced with the video sequence of the virtual patient.
[0049] At step 710, interactions between one or more students participating in the session and the virtual patient may be monitored and analytics data corresponding to these interactions may be collected and saved to the memory of the AR device or, optionally, to the memory of a remote CMS database (e.g., CMS databases 212 of FIG. 2). For example, this analytics data may include information pertaining to how long it takes the student to accurately identify symptoms of a disease and the disease itself, incorrect attempts of the student at identifying symptoms or diseases, how long the student spends looking at various portions of the patient's body (e.g., portions of the patient's body that are relevant vs. those that are irrelevant to the disease or condition), keywords spoken by the student related to the virtual patient's condition (e.g., captured as audio data by a microphone of the AR device, such as a microphone included in the I/O devices 108 of FIG. 1 ), or any other applicable information.
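By way of illustration and not limitation, the analytics data described above may be sketched as a per-student record of the following form (a hypothetical Python sketch; the field and method names are invented for illustration and do not limit what analytics data may be collected):

    from dataclasses import dataclass, field
    from typing import Optional
    import time

    @dataclass
    class StudentAnalytics:
        """Hypothetical per-student analytics record of the kind collected at step 710."""
        student_id: str
        scenario: str
        started_at: float = field(default_factory=time.time)
        incorrect_attempts: list = field(default_factory=list)      # wrong symptom/disease guesses
        gaze_seconds_by_region: dict = field(default_factory=dict)  # body region -> seconds looked at
        keywords_spoken: list = field(default_factory=list)         # keywords captured via the microphone
        time_to_correct_diagnosis: Optional[float] = None

        def record_gaze(self, region, seconds):
            self.gaze_seconds_by_region[region] = (
                self.gaze_seconds_by_region.get(region, 0.0) + seconds)

        def record_diagnosis(self, diagnosis, correct):
            if correct:
                self.time_to_correct_diagnosis = time.time() - self.started_at
            else:
                self.incorrect_attempts.append(diagnosis)

Such a record could be held in the memory of the AR device during the session and, optionally, written out to a remote CMS database when the scenario ends.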
[0050] At step 712, based on the monitored student interactions with the virtual patient, the video sequence of the virtual patient and the animation of the HUD may progress to new video and animation sequences based on a pre-defined branching path stored in memory. If the AR device detects that a student has said a predefined key word or phrase, that a student has interacted with the virtual patient in a predefined way, or that a predefined amount of time has elapsed since the start of the present video sequence of the virtual patient and animation sequence of the HUD, then the AR device may update the video sequence for the virtual patient and the animation sequence for the HUD according to the pre-defined branching path. Alternatively, the video sequence of the virtual patient and animation sequence of the HUD may be progressed manually by the AR device of a teacher (e.g., a doctor or nurse) based on the teacher's observations of the interactions and observations made by the student or students with the virtual patient.
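By way of illustration and not limitation, the step-712 update rule may be sketched as follows (a hypothetical Python sketch; the keyword sets, timeout, and branch labels are invented for the anaphylaxis example and are not limiting): a detected positive keyword follows the "improve" branch of the pre-defined branching path, while a detected negative keyword or an elapsed timeout follows the "worsen" branch.

    # Hypothetical trigger sets for the anaphylaxis example.
    POSITIVE_KEYWORDS = {"anaphylaxis", "epinephrine"}
    NEGATIVE_KEYWORDS = {"asthma", "inhaler"}
    TIMEOUT_SECONDS = 120  # illustrative only

    def next_node(current_node, spoken_words, seconds_in_node, branching_path):
        """Pick the next node of a pre-defined branching path based on detected activity."""
        words = {w.lower() for w in spoken_words}
        if words & POSITIVE_KEYWORDS:
            return branching_path[current_node].get("improve", current_node)
        if words & NEGATIVE_KEYWORDS or seconds_in_node > TIMEOUT_SECONDS:
            return branching_path[current_node].get("worsen", current_node)
        return current_node  # no trigger detected; keep looping the current video sequence

A teacher-driven session could bypass such a rule entirely, with the next node selected manually from the teacher's AR device.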
[0051] For example, the AR device may detect (e.g., based on audio data captured by a microphone of the AR device, such as a microphone included in the I/O devices 108 of FIG. 1 ) that a student has said the words "anaphylaxis" and "epinephrine" when observing a virtual patient having a video sequence corresponding to the expression of the symptoms of anaphylaxis during the clinical scenario of diagnosing and treating a patient having an allergic reaction. In other words, the AR device may detect that the student has accurately diagnosed and suggested treatment for the condition of the virtual patient. In response, the AR device may update the video of the virtual patient to show a video sequence corresponding to a lessening in the severity of the virtual patient's symptoms of anaphylaxis, or a complete abatement of these symptoms.
[0052] As another example, the AR device may detect that a student has said the words "asthma" and "inhaler" when observing a virtual patient having a video sequence corresponding to the expression of the symptoms of anaphylaxis during the clinical scenario of diagnosing and treating a patient having an allergic reaction. In other words, the AR device may detect that the student has misdiagnosed the virtual patient and has recommended a treatment that may not be immediately effective or may be detrimental. In response, the AR device may update the video of the virtual patient to show a video sequence corresponding to an increase in the severity of the virtual patient's symptoms of anaphylaxis. In some embodiments, for clinical scenarios that are less time-constrained than anaphylaxis, incorrect diagnoses may instead correspond to no change in the virtual patient's condition, and the video sequence of the virtual patient may remain unchanged.
[0053] As another example, the AR device may determine that a predetermined amount of time has passed without any student suggesting a correct diagnosis or treatment for the virtual patient. In response, the AR device may update the video of the virtual patient to show a video sequence corresponding to an increase in the severity of the virtual patient's symptoms. This response may be appropriate for clinical scenarios such as anaphylaxis, for which failure to provide a correct diagnosis and treatment quickly allows the condition to worsen.
[0054] At step 714, the AR device determines whether the virtual patient was successfully diagnosed and/or treated (e.g., depending on the circumstances of the particular clinical scenario). If a correct diagnosis and/or treatment has been proposed, the method 700 may proceed to step 716 at which the scenario may end. Otherwise, the method 700 may return to step 710 at which students may continue to attempt to determine an appropriate diagnosis and treatment for the virtual patient. In this way, method 700 may recursively update the video sequence of the virtual patient as students interact with the virtual patient through the progression of the clinical scenario. In alternative embodiments, the decision of whether to end the scenario may be made manually via the AR device of a teacher/facilitator.
[0055] FIG. 8 shows an example of branched pathways 800 that may be defined in memory of an AR device (e.g., AR device 100 of FIG. 1 ) that is displaying a virtual patient in a selected clinical scenario (e.g., via the execution of methods 600 and/or 700 of FIGS. 6 and 7). Alternatively, the branched pathways 800 may be defined in a remote CMS database (e.g., CMS databases 212 of FIG. 2) with which the AR device is in electronic communication. Branched pathways 800 includes multiple nodes, each corresponding to a different video sequence for the virtual patient. Each video sequence corresponds to the virtual patient exhibiting a different level of severity or reaction to treatment for the disease or condition of a particular clinical scenario. The nodes may be progressed through based on detected interactions and observations made by students regarding the virtual patient. These students may include the user of the AR device and/or the users of other AR devices connected to the same network session as the AR device of the present embodiment.
[0056] At node 802, at the start of the clinical scenario being executed by the AR device, a first video sequence may be played for the virtual patient in the AR scene displayed by the AR device. This first video sequence may correspond to an initial level of severity of the disease or condition that may be selected by a user of the AR device before beginning the clinical scenario, or may be preset. As the AR device monitors interactions and observations of the students regarding the virtual patient, the video sequence portrayed by the virtual patient may be updated based on certain detected actions. For example, the AR device (or the remote CMS database) may maintain separate lists (e.g., look-up tables (LUTs)) of predefined positive actions and negative actions. Positive actions correspond to actions that may be made by a student, which would, in the real world, improve the condition of a patient in a clinical scenario of the type being simulated on the AR device. Negative actions correspond to actions that may be made by a student which would, in the real world, deteriorate the condition of a patient (e.g., such that symptoms displayed by the patient are shown to be more severe) in a clinical scenario of the type being simulated by the AR device.
[0057] When the AR device determines that a positive action or a negative action has been performed by a student, the AR device may move to a new node of the branched pathways corresponding to a new video sequence depicting either an improvement or a deterioration of the virtual patient's condition compared to that of the present node. For example, at node 802, if the AR device detects that a student has performed a positive action, the AR device may proceed to node 808 at which the video of the virtual patient will be updated to a new video sequence to depict an improvement in the virtual patient's condition. Alternatively, if the AR device detects that a student has performed a negative action at node 802, the AR device may proceed to node 804 at which the video of the virtual patient will be updated to a new video sequence that depicts a deterioration in the virtual patient's condition.
[0058] At node 804, a student may correct their initial negative action, causing the AR device to return to node 802 and causing a return to the initial video sequence depicting the virtual patient's initial condition. Alternatively, a student may continue performing negative actions, causing the AR device to proceed to node 806 at which the video of the virtual patient will be updated to a new video sequence depicting a further worsening of the condition of the virtual patient.
[0059] At node 808, a student may perform a negative action that causes the AR device to return to node 802, or may perform a different type of negative action that causes the AR device to proceed to node 810 at which the video of the virtual patient may be updated to a new video sequence depicting a worsening of the virtual patient's condition in a way that may differ from the video sequence of node 802. For example, the video sequence at node 802 may show a virtual patient exhibiting symptoms of a constricted airway. A student may successfully recommend treatment that improves the constriction of the virtual patient's airway, causing these symptoms to be exhibited less severely by the virtual patient in the video sequence of node 808. However, the student may then suggest an inappropriate treatment for the virtual patient which, while not causing the virtual patient's airway to constrict further, may cause a new or secondary symptom to be exhibited by the virtual patient as part of the video sequence of node 810.
[0060] If, at node 808, the AR device detects that a student has performed a positive action, the AR device may proceed to node 812 at which the video of the virtual patient may be updated to a new video sequence corresponding to the alleviation of the virtual patient's symptoms, indicating that the patient has been successfully/appropriately diagnosed and treated. It is here that the clinical scenario would end.
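By way of illustration and not limitation, the branched pathways 800 and the positive/negative action look-ups described above may be encoded as a simple transition table (a hypothetical Python sketch; the video labels and the "negative_secondary" branch label, which stands in for the second type of negative action described for node 808, are invented for illustration):

    # Hypothetical encoding of branched pathways 800; only transitions described in the
    # text are included, so nodes 806 and 810 have no onward branches in this sketch.
    BRANCHED_PATHWAYS = {
        802: {"video": "initial_condition",  "positive": 808, "negative": 804},
        804: {"video": "deteriorated",       "positive": 802, "negative": 806},
        806: {"video": "further_worsened"},
        808: {"video": "improved",           "positive": 812, "negative": 802,
              "negative_secondary": 810},
        810: {"video": "secondary_symptom"},
        812: {"video": "symptoms_alleviated"},  # reaching this node ends the clinical scenario
    }

    def advance(node, action):
        """Follow the branch, if any, for a detected student action at the given node."""
        return BRANCHED_PATHWAYS[node].get(action, node)

For example, advance(802, "positive") returns 808, and advance(808, "positive") returns 812, at which point the clinical scenario would end.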
[0061] In an example embodiment, an augmented reality device may include a display, a memory, a camera, a microphone and a processor. The display may be configured to display an augmented reality scene that includes a physical scene that is overlaid with a virtual scene. The display may be periodically refreshed to show changes to the virtual scene. The camera may be configured to capture video data. The microphone may be configured to capture audio data. The processor may be configured to execute instructions for initiating a clinical scenario by adding a virtual patient to the virtual scene at a selected location of the physical scene, monitoring first activity of a user, saving analytics data to the memory based on the first activity, and updating a first video sequence of the virtual patient shown on the display based on the first activity. The virtual patient may display symptoms corresponding to the clinical scenario.
[0062] In some embodiments, the processor may be further configured to execute instructions for monitoring second activity of the user indicative of first desired interactions between the user and the virtual patient based on first video data captured by the camera and/or first audio data captured by the microphone, determining, based on the first video data and/or the first audio data, that at least one of the first desired interactions corresponds to a predefined positive action corresponding to an appropriate treatment of the virtual patient based on the clinical scenario, and updating the first video sequence of the virtual patient shown on the display such that the symptoms displayed by the virtual patient are improved or eliminated.
[0063] In some embodiments, the processor may be further configured to execute instructions for monitoring third activity of the user indicative of second desired interactions between the user and the virtual patient based on second video data captured by the camera and/or second audio data captured by the microphone, determining, based on the second video data and/or the second audio data, that at least one of the second desired interactions corresponds to a predefined negative action corresponding to an inappropriate treatment of the virtual patient based on the clinical scenario, and updating the video sequence of the virtual patient shown on the display such that the symptoms displayed by the virtual patient are more severe.
[0064] In some embodiments, the processor may be further configured to execute instructions for, prior to initiating the clinical scenario, displaying, via the display, a list of clinical scenarios, detecting, based on video data captured by the camera, selection of the clinical scenario from the list of clinical scenarios, and calibrating the augmented reality scene.
[0065] In some embodiments, the processor may be further configured to execute instructions for adding a virtual display to the virtual scene shown on the display at a location in the physical scene adjacent to the virtual patient, the virtual display depicting vital signs for the virtual patient, and updating a second video sequence of the virtual display based on the first activity.
[0066] In some embodiments, the vital signs may include one or more of body temperature, heart rate, electrocardiogram signal graphs, respiration rate, oxygen saturation, and blood pressure.
[0067] In some embodiments, the processor may be further configured to execute instructions for determining, based on video data captured by the camera and/or audio data captured by the microphone, that the virtual patient has been successfully diagnosed and/or treated based on the first activity and based on the clinical scenario, and ending the clinical scenario.
[0068] In some embodiments, the first video sequence may include a prerecorded loop of volumetric video of a live person.
[0069] In an example embodiment, a method may include steps of, with a processor of an augmented reality device, causing an augmented reality scene to be displayed by a display of the augmented reality device, the augmented reality scene comprising a physical scene overlaid with a virtual scene, periodically refreshing the display to show updates to the virtual scene, with the processor, initiating a clinical scenario by adding a virtual patient to the virtual scene at a selected location of the physical scene, with the processor, monitoring first activity of a user of the augmented reality device, with the processor, saving analytics data to a memory of the augmented reality device based on the first activity, and, with the processor, updating a first video sequence of the virtual patient shown on the display based on the first activity. The virtual patient may display symptoms corresponding to the clinical scenario.
[0070] In some embodiments, the method may further include steps of, with the processor, monitoring second activity of the user indicative of first desired interactions between the user and the virtual patient based on first video data captured by a camera of the augmented reality device and/or first audio data captured by a microphone of the augmented reality device, with the processor, determining, based on the first video data and/or the first audio data, that at least one of the first desired interactions corresponds to a predefined positive action corresponding to an appropriate treatment of the virtual patient based on the clinical scenario, and, with the processor, updating the first video sequence of the virtual patient shown on the display such that the symptoms displayed by the virtual patient are improved or eliminated.
[0071] In some embodiments, the method may further include steps of, with the processor, monitoring third activity of the user indicative of second desired interactions between the user and the virtual patient based on second video data captured by the camera and/or second audio data captured by the microphone, with the processor, determining, based on the second video data and/or the second audio data, that at least one of the second desired interactions corresponds to a predefined negative action corresponding to an inappropriate treatment of the virtual patient based on the clinical scenario, and, with the processor, updating the first video sequence of the virtual patient shown on the display such that the symptoms displayed by the virtual patient are more severe.
[0072] In some embodiments, the method may further include steps for, with the processor, prior to initiating the clinical scenario, displaying, via the display, a list of clinical scenarios, with the processor, based on video data captured by a camera of the augmented reality device, detecting selection of the clinical scenario from the list of clinical scenarios, and, with the processor, calibrating the augmented reality scene.
[0073] In some embodiments, the method may further include steps for, with the processor, adding a virtual display to the virtual scene shown on the display at a location in the physical scene adjacent to the virtual patient, the virtual display depicting vital signs for the virtual patient, and, with the processor, updating a second video sequence of the virtual display based on the first activity.
[0074] In some embodiments, the vital signs may include one or more of body temperature, heart rate, electrocardiogram signal graphs, respiration rate, oxygen saturation, and blood pressure.
[0075] In some embodiments, the method may further include, with the processor, determining, based on video data captured by a camera of the augmented reality device, that the virtual patient has been successfully diagnosed and/or treated based on the first activity and based on the clinical scenario, and, with the processor, ending the clinical scenario.
[0076] While the systems and methods described herein are presented in the context of clinical scenarios involving virtual patients, it should be noted that the described training environment and methods of progressing through animation sequences for a virtual human based on detected user actions may be applied to scenarios that are not necessarily medical in nature. For example, the systems and methods of the present invention may be applied to legal, veterinary, sales, or other scenarios so that a user may interact with a virtual person in a corresponding AR environment (e.g., for the purpose of training the user).
[0077] Other embodiments and uses of the above inventions will be apparent to those having ordinary skill in the art upon consideration of the specification and practice of the invention disclosed herein. The specification and examples given should be considered exemplary only, and it is contemplated that the appended claims will cover any other such embodiments or modifications as fall within the true scope of the invention.

Claims

1. An augmented reality device (100) comprising:
a display (130) configured to display an augmented reality scene (400, 500) comprising a physical scene that is overlaid with a virtual scene, the display being periodically refreshed to show changes to the virtual scene;
a memory (104);
a camera (120) configured to capture video data;
a microphone (108) configured to capture audio data; and
a processor (102) configured to execute instructions for:
initiating a clinical scenario by adding a virtual patient (302, 402, 502) to the virtual scene at a selected location of the physical scene, wherein the virtual patient displays symptoms corresponding to the clinical scenario (622);
monitoring first activity of a user of the augmented reality device (624, 710);
saving analytics data to the memory based on the first activity; and
updating a first video sequence of the virtual patient shown on the display based on the first activity (712).
2. The augmented reality device of claim 1, wherein the processor is further configured to execute instructions for:
monitoring second activity of the user indicative of first desired interactions between the user and the virtual patient based on first video data captured by the camera and/or first audio data captured by the microphone (710);
determining, based on the first video data and/or the first audio data, that at least one of the first desired interactions corresponds to a predefined positive action
corresponding to an appropriate treatment of the virtual patient based on the clinical scenario; and
updating the first video sequence of the virtual patient shown on the display such that the symptoms displayed by the virtual patient are improved or eliminated (712, 808, 812).
3. The augmented reality device of any of claims 1 or 2, wherein the processor is further configured to execute instructions for:
monitoring third activity of the user indicative of second desired interactions between the user and the virtual patient based on second video data captured by the camera and/or second audio data captured by the microphone (710);
determining, based on the second video data and/or second audio data, that at least one of the second desired interactions corresponds to a predefined negative action corresponding to an inappropriate treatment of the virtual patient based on the clinical scenario; and
updating the video sequence of the virtual patient shown on the display such that the symptoms displayed by the virtual patient are more severe (712, 804, 806, 810).
4. The augmented reality device of claim 1, wherein the processor is further configured to execute instructions for:
prior to initiating the clinical scenario, displaying, via the display, a list (404) of clinical scenarios (408, 410) (620);
detecting, based on video data captured by the camera, selection of the clinical scenario from the list of clinical scenarios; and
calibrating the augmented reality scene (704).
5. The augmented reality device of claim 1, wherein the processor is further configured to execute instructions for:
adding a virtual display (506) to the virtual scene shown on the display at a location in the physical scene adjacent to the virtual patient, the virtual display depicting vital signs for the virtual patient (708); and
updating a second video sequence of the virtual display based on the first activity (712).
6. The augmented reality device of claim 5, wherein the vital signs comprise one or more of: body temperature, heart rate, electrocardiogram signal graphs, respiration rate, oxygen saturation, and blood pressure.
7. The augmented reality device of claim 1, wherein the processor is further configured to execute instructions for:
determining, based on video data captured by the camera and/or audio data captured by the microphone, that the virtual patient has been successfully diagnosed and/or treated based on the first activity and based on the clinical scenario (714, 812); and
ending the clinical scenario (626, 716).
8. The augmented reality device of claim 1, wherein the first video sequence comprises a prerecorded loop of volumetric video of a live person.
9. A method (600, 700) comprising:
with a processor (102) of an augmented reality device (100), causing an augmented reality scene (400, 500) to be displayed by a display (130) of the augmented reality device, the augmented reality scene comprising a physical scene overlaid with a virtual scene;
periodically refreshing the display to show updates to the virtual scene;
with the processor, initiating a clinical scenario (408, 410) by adding a virtual patient (302, 402, 502) to the virtual scene at a selected location of the physical scene, wherein the virtual patient displays symptoms corresponding to the clinical scenario (622);
with the processor, monitoring first activity of a user of the augmented reality device (624, 710);
with the processor, saving analytics data to a memory (104) of the augmented reality device based on the first activity (710); and
with the processor, updating a first video sequence of the virtual patient shown on the display based on the first activity (712).
10. The method of claim 9, further comprising:
with the processor, monitoring second activity of the user indicative of first desired interactions between the user and the virtual patient based on first video data captured by a camera (120) of the augmented reality device and/or first audio data captured by a microphone (108) of the augmented reality device (710);
with the processor, determining, based on the first video data and/or first audio data, that at least one of the first desired interactions corresponds to a predefined positive action corresponding to an appropriate treatment of the virtual patient based on the clinical scenario; and
with the processor, updating the first video sequence of the virtual patient shown on the display such that the symptoms displayed by the virtual patient are improved or eliminated (712, 808, 812).
11. The method of any of claims 9 or 10, further comprising:
with the processor, monitoring third activity of the user indicative of second desired interactions between the user and the virtual patient based on second video data captured by the camera and/or second audio data captured by the
microphone (710);
with the processor, determining, based on the second video data and/or the second audio data, that at least one of the second desired interactions corresponds to a predefined negative action corresponding to an inappropriate treatment of the virtual patient based on the clinical scenario; and
with the processor, updating the first video sequence of the virtual patient shown on the display such that the symptoms displayed by the virtual patient are more severe (712, 804, 806, 810).
12. The method of claim 9, further comprising:
with the processor, prior to initiating the clinical scenario, displaying, via the display, a list (404) of clinical scenarios (408, 410) (620);
with the processor, based on video data captured by a camera (120) of the augmented reality device, detecting selection of the clinical scenario from the list of clinical scenarios; and
with the processor, calibrating the augmented reality scene (704).
13. The method of claim 9, further comprising:
with the processor, adding a virtual display (506) to the virtual scene shown on the display at a location in the physical scene adjacent to the virtual patient, the virtual display depicting vital signs for the virtual patient (708); and
with the processor, updating a second video sequence of the virtual display based on the first activity (712).
14. The method of claim 13, wherein the vital signs comprise one or more of: body temperature, heart rate, electrocardiogram signal graphs, respiration rate, oxygen saturation, and blood pressure.
15. The method of claim 9, further comprising:
with the processor, determining, based on video data captured by a camera (120) of the augmented reality device, that the virtual patient has been successfully diagnosed and/or treated based on the first activity and based on the clinical scenario (714, 812); and
with the processor, ending the clinical scenario (626, 716).
PCT/US2019/021291 2018-03-09 2019-03-08 Systems and methods for diagnostic applications of augmented reality WO2019173677A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862640782P 2018-03-09 2018-03-09
US62/640,782 2018-03-09

Publications (1)

Publication Number Publication Date
WO2019173677A1 true WO2019173677A1 (en) 2019-09-12

Family

ID=67846314

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/021291 WO2019173677A1 (en) 2018-03-09 2019-03-08 Systems and methods for diagnostic applications of augmented reality

Country Status (1)

Country Link
WO (1) WO2019173677A1 (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6747672B1 (en) * 1999-11-01 2004-06-08 Medical Learning Company, Inc. Virtual patient hot spots
US20150287330A1 (en) * 2006-07-12 2015-10-08 Medical Cyberworlds, Inc. Computerized medical training system
US20130252216A1 (en) * 2012-03-20 2013-09-26 Microsoft Corporation Monitoring physical therapy via image sensor
US20150282796A1 (en) * 2012-09-17 2015-10-08 DePuy Synthes Products, Inc. Systems And Methods For Surgical And Interventional Planning, Support, Post-Operative Follow-Up, And, Functional Recovery Tracking
US20150306340A1 (en) * 2014-03-06 2015-10-29 Virtual Realty Medical Applications, Inc. Virtual reality medical application system
WO2016040376A1 (en) * 2014-09-08 2016-03-17 Simx, Llc Augmented reality simulator for professional and educational training
US20170188976A1 (en) * 2015-09-09 2017-07-06 WellBrain, Inc. System and methods for serving a custom meditation program to a patient

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210053126A (en) * 2019-10-30 2021-05-11 주식회사 뉴베이스 Method and apparatus for providing training for treating emergency patients
WO2021085912A3 (en) * 2019-10-30 2021-07-01 주식회사 뉴베이스 Method and apparatus for providing treatment training for emergency patient
KR102328572B1 (en) * 2019-10-30 2021-11-19 주식회사 뉴베이스 Method and apparatus for providing training for treating emergency patients
EP4053825A4 (en) * 2019-10-30 2022-12-21 Newbase Inc. Method and apparatus for providing treatment training for emergency patient
US11615712B2 (en) 2019-10-30 2023-03-28 Newbase Inc. Method and apparatus for providing training for treating emergency patients
US11915613B2 (en) 2019-10-30 2024-02-27 Newbase Inc. Method and apparatus for providing training for treating emergency patients
US11074730B1 (en) 2020-01-23 2021-07-27 Netapp, Inc. Augmented reality diagnostic tool for data center nodes
US11610348B2 (en) 2020-01-23 2023-03-21 Netapp, Inc. Augmented reality diagnostic tool for data center nodes

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19764882

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19764882

Country of ref document: EP

Kind code of ref document: A1