WO2023069021A2 - Augmented reality system and method for cardiopulmonary resuscitation training - Google Patents

Augmented reality system and method for cardiopulmonary resuscitation training

Info

Publication number
WO2023069021A2
Authority
WO
WIPO (PCT)
Prior art keywords
manikin
computer device
training
user
projector
Prior art date
Application number
PCT/SG2022/050745
Other languages
French (fr)
Other versions
WO2023069021A3 (en)
Inventor
Xiaoshan WANG
Soh Khim Ong
Yeh Ching Andrew NEE
Original Assignee
National University Of Singapore
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University Of Singapore filed Critical National University Of Singapore
Publication of WO2023069021A2 publication Critical patent/WO2023069021A2/en
Publication of WO2023069021A3 publication Critical patent/WO2023069021A3/en

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B23/00 - Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes
    • G09B23/28 - Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for medicine
    • G09B23/288 - Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for medicine for artificial respiration or heart massage

Definitions

  • the present disclosure generally relates to an augmented reality system and method for cardiopulmonary resuscitation training.
  • an augmented reality system for cardiopulmonary resuscitation training.
  • the system comprises: a camera configured for capturing a visual feed of a real-world environment comprising a manikin positioned within a field of view of the camera; a projector configured for projecting onto the manikin positioned within a field of view of the projector; and a local computer device communicative with the camera and projector.
  • the local computer device is configured for: aligning the fields of view of the camera and the projector to the manikin; receiving the visual feed from the camera; and sending, to the projector, virtual content generated based on the visual feed to facilitate the cardiopulmonary resuscitation training for a user.
  • the projector is configured for projecting the virtual content as augmented reality content onto the manikin in the real-world environment, the augmented reality content viewable by the user performing the cardiopulmonary resuscitation training on the manikin.
  • a computer-implemented augmented reality method for cardiopulmonary resuscitation training comprises: aligning, using a local computer device, fields of view of a camera and a projector to a manikin in a real-world environment; capturing, by the camera, a visual feed of the real-world environment comprising the manikin positioned within the field of view of the camera; receiving, by the local computer device, the visual feed from the camera; sending virtual content from the local computer device to the projector, the virtual content generated based on the visual feed to facilitate the cardiopulmonary resuscitation training for a user; and projecting, by the projector, the virtual content as augmented reality content onto the manikin in the real-world environment and positioned within the field of view of the projector, the augmented reality content viewable by the user performing the cardiopulmonary resuscitation training on the manikin.
  • a manikin for cardiopulmonary resuscitation training comprises: a head comprising an inertial measurement unit for measuring movement of the head to detect opening of an airway in the head; and a torso that is resiliently compressible, the torso comprising a distance sensor for measuring compression of the torso.
  • the inertial measurement unit and distance sensor are communicative with a local computer device to process the measurements for the cardiopulmonary resuscitation training.
  • Figures 1A and 1B are illustrations of an augmented reality system for cardiopulmonary resuscitation training.
  • Figure 2 is a flowchart illustration of an augmented reality method for cardiopulmonary resuscitation training.
  • Figures 3A to 3D are illustrations of a software for cardiopulmonary resuscitation training.
  • Figures 4A to 4D are further illustrations of the software for cardiopulmonary resuscitation training.
  • Figure 5 is an illustration of a user interface of the software for cardiopulmonary resuscitation training.
  • Figures 6A and 6B are illustrations of opening an airway during cardiopulmonary resuscitation training.
  • Figures 7A to 7C are illustrations of performing chest compressions during cardiopulmonary resuscitation training.
  • Figure 8 is an illustration of simulating use of an automated external defibrillator during cardiopulmonary resuscitation training.
  • Figure 9 is an illustration of results from the cardiopulmonary resuscitation training.
  • depiction of a given element or consideration or use of a particular element number in a particular figure or a reference thereto in corresponding descriptive material can encompass the same, an equivalent, or an analogous element or element number identified in another figure or descriptive material associated therewith.
  • References to “an embodiment / example”, “another embodiment / example”, “some embodiments / examples”, “some other embodiments / examples”, and so on, indicate that the embodiment(s) / example(s) so described may include a particular feature, structure, characteristic, property, element, or limitation, but that not every embodiment / example necessarily includes that particular feature, structure, characteristic, property, element or limitation.
  • repeated use of the phrase “in an embodiment / example” or “in another embodiment / example” does not necessarily refer to the same embodiment / example.
  • the terms “a” and “an” are defined as one or more than one.
  • the use of “/” in a figure or associated text is understood to mean “and/or” unless otherwise indicated.
  • the term “set” is defined as a non-empty finite organization of elements that mathematically exhibits a cardinality of at least one (e.g. a set as defined herein can correspond to a unit, singlet, or single-element set, or a multiple-element set), in accordance with known mathematical definitions.
  • the recitation of a particular numerical value or value range herein is understood to include or be a recitation of an approximate numerical value or value range.
  • an augmented reality system 100 for cardiopulmonary resuscitation (CPR) training.
  • augmented reality allows a user to view real-world scenes superimposed with virtual objects, such as graphics and text, that are displayed to users.
  • the augmented reality system 100 can provide users with additional information elements, such as where to position their hands, to guide them in the CPR training which is normally performed on a dummy or manikin 110.
  • the augmented reality system 100 includes a camera 120, a projector 130, and a local computer device 140 communicative with the camera 120 and projector 130.
  • the manikin 110 is positioned within a field of view of the camera 120 and a field of view of the projector 130.
  • the camera 120 can be any device that is capable of recording visual content including images and video.
  • the projector 130 can be any device that is capable of projecting digital content, such as images and video, onto a surface.
  • the local computer device 140 can be a desktop device, mobile device, tablet device, laptop computer, or any other electronic device which may have processors, central processing units, or controllers.
  • the local computer device 140 includes suitable user input and display devices for users to operate. Additionally, the local computer device 140 may include one or more microcontrollers or microprocessors, such as the WeMos D1 R2 and the Arduino Uno.
  • the augmented reality system 100 may further include a remote computer device 150 that is communicative with the local computer device 140, such as across the communication network 160.
  • the remote computer device 150 can be a desktop device, mobile device, tablet device, laptop computer, or any other electronic device. Additionally, the remote computer device 150 includes suitable user input and display devices for a trainer or instructor to remotely conduct the CPR training for users.
  • the communication network 160 is a medium or environment through which content, notifications, and/or messages are communicated among various components. Suitable security protocols, such as encryption protocols, may be implemented in the communication network 160 for secure communications among the components.
  • Some non-limiting examples of the communication network 160 include a virtual private network (VPN), wireless fidelity (Wi-Fi) network, light fidelity (Li-Fi) network, local area network (LAN), wide area network (WAN), metropolitan area network (MAN), satellite network, Internet, fibre optic network, coaxial cable network, infrared (IR) network, radio frequency (RF) network, and any combination thereof.
  • Various components in the communication network 160 may connect to the communication network 160 in accordance with various wired and wireless communication protocols, such as Transmission Control Protocol / Internet Protocol (TCP/IP), User Datagram Protocol (UDP), 2nd to 5th Generation (2G to 5G) communication protocols, Long Term Evolution (LTE) communication protocols, and any combination thereof.
  • Each component connected to the communication network 160 includes a data communication or transceiver module to communicate and transmit / receive data over the communication network 160.
  • Some non-limiting examples of a transceiver module include an antenna module, a radio frequency transceiver module, a wireless transceiver module, a Bluetooth transceiver module, an Ethernet port, a Universal Serial Bus (USB) port, or any other module / component / device configured for transmitting and receiving data.
  • the manikin 110 is positioned within a field of view of the camera 120 and a field of view of the projector 130.
  • the camera 120 and projector 130 may be supported on suitable structures, such as tripods, to stabilize them.
  • the camera 120 is configured for capturing a visual feed of a real-world environment including the manikin 110 positioned within the field of view of the camera 120.
  • the camera 120 captures a video feed of a physical scene in the real-world environment and the manikin 110 is inside the physical scene.
  • the projector 130 is configured for projecting onto the manikin 110 positioned within a field of view of the projector 130.
  • the projector 130 projects some information elements, such as graphics or text, onto the manikin 110.
  • the local computer device 140 is configured for aligning the field of view of the camera 120 and the field of view of the projector 130 to the manikin 110. For example, the centres of the fields of view are aligned to a region of the manikin 110, such as the head 112. The scenes in the respective fields of view can be aligned and processed by the local computer device 140 so that virtual content can be generated based on the visual feed captured by the camera 120. Additionally, augmented reality content derived from the generated virtual content can be accurately projected by the projector 130 onto the manikin 110 so that the augmented reality content can be viewed by the user working on the manikin 110.
  • the local computer device 140 is further configured for receiving the visual feed from the camera 120.
  • the local computer device 140 is further configured for sending, to the projector 130, virtual content generated based on the visual feed to facilitate the cardiopulmonary resuscitation training for the user.
  • Virtual content can be defined as information that exists digitally and can be communicated across a computer network, such as the communications network 160.
  • the projector 130 is further configured for projecting the virtual content as the augmented reality content onto the manikin 110 in the real-world environment.
  • the augmented reality content is a projection of the corresponding virtual content onto real physical objects so that the information can be perceived in the real-world environment.
  • the augmented reality content allows the virtual content to be presented in various formats, such as but not limited to text, graphics, and sounds.
  • the augmented reality content is viewable by the user performing the CPR training on the manikin 110.
  • the virtual content includes information for projection as the augmented reality content onto the manikin 110 to guide a user in the CPR training.
  • the augmented reality content is within the field of view of the user as the user works on the manikin 110.
  • the user can make use of the augmented reality content to guide his/her CPR training practice without the assistance of the trainer or instructor.
  • the virtual content is generated by the local computer device 140 and includes information such as a set of instructions or training tasks to guide the user in the CPR training.
  • the training tasks include instructions to open an airway and to perform chest compressions on the manikin 110.
  • the user can practise the CPR training with the assistance of the trainer using the remote computer device 150.
  • the trainer can use the remote computer device 150 to generate the virtual content based on the visual feed received on the remote computer device 150.
  • the virtual content may include ad hoc instructions or annotations to guide the user. This allows the trainer to guide the user in correcting mistakes made during the CPR training.
  • the visual feed includes a digital or virtual representation of the manikin 110.
  • the virtual content may include text and/or graphics generated on a digital or virtual representation of the manikin 110 in the visual feed.
  • the local computer device 140 sends the visual feed to the remote computer device 150 so that the trainer can see the digital representation of the manikin 110, i.e. its digital twin, on the remote computer device 150.
  • the trainer generates the virtual content based on the visual feed sent to the remote computer device 150 and the local computer device 140 receives the virtual content from the remote computer device 150.
  • the virtual content includes virtual annotations made by the trainer based on the digital representation of the manikin 110.
  • the projector 130 projects the virtual content as the augmented reality content onto the manikin 110 in the physical scene for the user to see.
  • the augmented reality content includes augmented annotations derived from the virtual annotations.
  • the alignment of the fields of view of the camera 120 and the projector 130 allows for the augmented reality content to be projected on to the same location of the manikin 110 as that of the virtual content on the digital representation of the manikin 110.
  • the method 200 includes a step 202 of aligning, using the local computer device 140, the fields of view of the camera 120 and the projector 130 to the manikin 110 in the real-world environment.
  • the method 200 includes a step 204 of capturing, by the camera 120, a visual feed of the real-world environment including the manikin 110 positioned within the field of view of the camera 120.
  • the method 200 includes a step 206 of receiving, by the local computer device 140, the visual feed from the camera 120.
  • the method 200 includes a step 208 of sending virtual content from the local computer device 140 to the projector 130, the virtual content generated based on the visual feed to facilitate the CPR training for a user.
  • the method 200 includes a step 210 of projecting, by the projector 130, the virtual content as augmented reality content onto the manikin 110 in the real-world environment and positioned within the field of view of the projector 130, the augmented reality content viewable by the user performing the CPR training on the manikin 110.
  • the method 200 may include steps of sending the visual feed from the local computer device 140 to the remote computer device 150, generating, by the remote computer device 150, the virtual content based on the visual feed, and receiving, by the local computer device 140, the virtual content from the remote computer device 150.
  • a CPR training software application is executed on the local computer device 140 to perform the CPR training for users or trainees.
  • Figure 3A illustrates the main page of the training software. A user is required to log in, or register if he/she is a new user, before beginning the CPR training.
  • MAMP software and suitable PHP scripts may be used to install a local server environment on the local computer device 140 and to manage a database 142, such as a MySQL database, that stores user information such as a user identifier (e.g. an integer), username, password, and training score.
  • the user creates a new username and password during registration as shown in Figure 3B; the PHP scripts then check whether the username has already been used. If there is no existing identical username, the registration is successful. This ensures each unique username is associated with a unique user.
  • the password may be encrypted using suitable encryption protocols, such as the salt-hash method. The purpose of encryption is to protect private data so that developers cannot directly know the password. Some requirements may be defined for the username and password. For example, the username must have more than 2 characters and the password must have more than 8 characters.
  • the user can proceed to log in to the training software by entering his/her username and password, as shown in Figure 3C.
  • the database 142 searches for the entered username first. If the username exists and is unique, it means that the user is registered. The database 142 then compares the hash string of the entered password with the hash string in the user information. If both hash strings match, the user login would be successful. As shown in Figure 3D, the main page changes in response to the successful login.
  • the user can proceed with the CPR training by selecting the “Play Game” button. Users may not be allowed to select the button without logging in.
  • the rescuer needs to complete some precautionary checks before performing CPR on the patient. Firstly, upon encountering the patient, the rescuer needs to confirm whether the surrounding environment is safe. This includes determining whether the site environment is suitable for CPR, such as whether there are safety hazards and whether the site environment can allow the patient to lie down. As shown in Figure 4A, the training software displays a preparatory page to train the user to have an awareness of checking and confirming the real-world environment.
  • the user is requested to select the gender of the patient, which would be the manikin 110 for the CPR training, as shown in the preparatory page in Figure 4B.
  • the preparatory page changes into one of two pages as shown in Figures 4C and 4D, respectively, both of which present similar information to the user.
  • the gender selection is to train the user that there is almost no difference between the rescue methods when rescuing adult men and adult women.
  • a survey showed that when the patient is female, more people may feel embarrassed and delay the emergency rescue, especially if the patient is female and the rescuer is male.
  • users would become aware of the existence of this problem, and this would reduce people’s hesitation when faced with female patients needing CPR.
  • After the user has completed the gender selection, the user is reminded that the trainer or administrator can assess the user during the CPR training.
  • the user may be requested to share the screen of the local computer device 140 with the remote computer device 150, so that the remote computer device 150 can receive the visual feed for the trainer to see.
  • Once the CPR training begins, the user only needs to interact with the manikin 110 for the CPR training and this interaction will be captured by the camera 120 and sent as the visual feed to the remote computer device 150.
  • the projector 130 is arranged to project virtual content as augmented reality content onto the manikin 110 and/or the floor where the manikin 110 is.
  • the virtual content may include a user interface 300 that is projected as an augmented user interface 350 on the floor, as shown in Figures 5 and 6A.
  • the manikin 110 and the augmented user interface 350 would be within the field of view of the user’s eyes, and the user does not have to frequently turn his/her head and shift attention to a display screen of the local computer device 140. This allows the user to interact directly with the manikin 110 and view information from the user interface 300 on the augmented user interface 350, thereby helping the user to focus on the CPR training.
  • the user interface 300 includes the training tasks to guide the user in the CPR training.
  • the training tasks may be defined by boxes 301-306 that represent the different steps of rescuing a patient.
  • the boxes 301-306 tell the user which CPR steps have been completed and which steps need to be performed next.
  • the rescuer should check that the patient is in the supine position and check the patient for responsiveness (such as by questioning, shouting, and/or shaking the patient), as represented by boxes 301 and 302 respectively.
  • One of the main steps of CPR is to open the airway of the patient, as represented by box 303.
  • the rescuer should check the patient’s breathing and give rescue breaths if needed, as represented by box 304.
  • the standard action is to gently place one hand on the patient’s forehead, gently tilt the head back, and lift the patient’s chin with two fingers of the other hand, so that the base of the tongue leaves the lower position of the throat.
  • the training tasks include instructions to move the head 112 of the manikin 110 to open the airway in the head 112.
  • the head 112 of the manikin 110 includes an inertial measurement unit 114 for measuring movement of the head 112 to detect opening of the airway inside the head 112, as shown in Figures 6A and 6B.
  • the inertial measurement unit 114 is communicative with the local computer device 140 to process measurements for the CPR training.
  • the inertial measurement unit 114 may include an accelerometer such as MPU6050 to measure the acceleration of the head relative to its initial position.
  • the accelerometer is capable of detecting acceleration in three directions at the same time.
  • measurement data in one direction can be used to determine whether the airway has been opened.
  • the user is assessed to have successfully opened the airway if the measurement value from the accelerometer is larger than a predefined threshold value.
  • the predefined threshold value is 1.0.
  • when the airway is closed, the measurement value ranges around 0.7 to 0.9.
  • when the airway is opened, the measurement value ranges around 1.0 to 1.3.
  • the predefined threshold value of 1.0 thus allows for the two different states of the airway to be distinguished.
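  • As an illustrative sketch only, this airway-opening check reduces to a simple threshold test on the local computer device 140; the 1.0 threshold and the 0.7-0.9 / 1.0-1.3 ranges come from the text above, while how the MPU6050 value reaches the device (here, an iterable of text lines) and the helper names are assumptions for illustration.

```python
# Illustrative sketch of the airway-opening check described above (assumed data path).
AIRWAY_OPEN_THRESHOLD = 1.0   # closed head position reads ~0.7-0.9, tilted-back ~1.0-1.3

def airway_opened(axis_reading: float) -> bool:
    """Return True when the single-axis accelerometer value exceeds the threshold."""
    return axis_reading > AIRWAY_OPEN_THRESHOLD

def monitor(lines) -> bool:
    """Consume single-axis readings (one value per line) until the head tilt is detected."""
    for line in lines:
        try:
            value = float(line.strip())
        except ValueError:
            continue                     # ignore malformed readings
        if airway_opened(value):
            return True                  # training task "open the airway" completed
    return False

# Example with recorded values: closed, closed, then the head is tilted back.
print(monitor(["0.82", "0.88", "1.15"]))   # -> True
```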
  • Box 305 represents another main step of CPR of performing external cardiac compression.
  • the training tasks include instructions to perform chest compressions on the manikin 110. Before starting the chest compressions, a preparation time of 4 seconds is provided, as shown by the user interface element 308.
  • the user interface element 308 may count down the preparation time by displaying “3”, “2”, “1”, and “GO”, each lasting for one second. This preparation time helps the user to prepare for the correct posture to perform the chest compressions.
  • the total duration of each compression session is 30 seconds, as shown in the timer 310.
  • the CPR training may continue with additional compression sessions after suitable pauses, such as to account for giving rescue breaths.
  • the torso 116 of the manikin 110 is resiliently compressible for compressing the torso 116, as shown in Figure 7A.
  • the torso 116 includes a distance sensor for measuring compression of the torso 116, wherein the distance sensor is communicative with the local computer device 140 to process measurements for the CPR training.
  • the distance sensor may be disposed on the torso 116 or embedded in the material of the torso 116.
  • the torso 116 includes a chest cavity, wherein the torso 116 is resiliently compressible for compressing the chest cavity.
  • the distance sensor may be disposed in the chest cavity for measuring compression of the chest cavity.
  • the distance sensor, such as the VL53L0X time-of-flight sensor, can continuously measure the depth of compression as the user performs external compression on the torso 116, and the measurement data may be shown on the user interface 300.
  • the user interface 300 has a health bar 312 that would drop at a constant rate when there is no external compression. When the distance sensor measures that a compression reaches a predefined compression depth, the health bar 312 will increase. If the compression does not reach the predefined compression depth, the health bar 312 will not increase. Thus, the health bar 312 can be used to reflect intuitively to the user the quality of the external compression, and the user can adjust his/her compression actions accordingly.
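  • The health bar behaviour described above can be sketched as follows; the decay rate, the gain per adequate compression, and the 5 cm target depth are placeholder values chosen for illustration and are not figures taken from the disclosure.

```python
# Illustrative sketch of the health-bar logic (placeholder rates and target depth).
class HealthBar:
    def __init__(self, decay_per_sec=2.0, gain_per_good_compression=3.0,
                 target_depth_cm=5.0):
        self.value = 100.0
        self.decay_per_sec = decay_per_sec
        self.gain = gain_per_good_compression
        self.target_depth_cm = target_depth_cm

    def tick(self, dt_sec):
        """Constant decay while no adequate compression is registered."""
        self.value = max(0.0, self.value - self.decay_per_sec * dt_sec)

    def register_compression(self, depth_cm):
        """Increase only when the measured depth reaches the predefined compression depth."""
        if depth_cm >= self.target_depth_cm:
            self.value = min(100.0, self.value + self.gain)

bar = HealthBar()
bar.tick(1.0)                  # one second with no compression: the bar drops
bar.register_compression(5.5)  # adequate depth: the bar rises
bar.register_compression(3.0)  # too shallow: no change
print(bar.value)               # -> 100.0 (capped at the maximum in this toy example)
```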
  • the user interface 300 has a user interface element 314 that shows the compression depth and frequency or rate of compressions.
  • the user interface 300 has an instrument panel 316 that shows whether the compressions are performed correctly.
  • as the torso 116 is compressed, the pointer 318 of the instrument panel 316 rotates clockwise.
  • the pointer 318 rotates counterclockwise as the torso 116 returns to its initial uncompressed state.
  • when the torso 116 has fully recoiled, the pointer 318 points to its initial position. If the pointer 318 has not returned to its initial position but the user has already started the next compression, it means that the quality of the compression does not meet the CPR standard procedures.
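  • Compression depth, rate, and the full-recoil check described above could be derived from the stream of distance readings along the lines of the sketch below; the baseline distance, thresholds, and sampling period are assumptions, as the actual signal processing is not detailed in the text.

```python
# Illustrative sketch: extract compression depth, rate, and full-recoil flags from
# time-of-flight distance readings (smaller distance = deeper compression).
def analyse_compressions(distances_mm, baseline_mm, sample_period_s,
                         min_depth_mm=50.0, start_depth_mm=10.0,
                         recoil_tolerance_mm=5.0):
    """Return (rate_per_min, list of [peak_depth_mm, fully_recoiled_before_next])."""
    compressions = []
    start_times = []
    in_compression = False
    peak = 0.0
    for i, d in enumerate(distances_mm):
        t = i * sample_period_s
        depth = baseline_mm - d
        if not in_compression and depth >= start_depth_mm:
            in_compression, peak = True, depth        # downstroke begins
            start_times.append(t)
        elif in_compression:
            peak = max(peak, depth)
            if depth < start_depth_mm:                # chest coming back up
                in_compression = False
                compressions.append([peak, False])
        if compressions and not in_compression and depth <= recoil_tolerance_mm:
            compressions[-1][1] = True                # chest returned close to the baseline
    if len(start_times) >= 2:
        rate_per_min = 60.0 * (len(start_times) - 1) / (start_times[-1] - start_times[0])
    else:
        rate_per_min = 0.0
    return rate_per_min, compressions

# Hypothetical 10 Hz readings: two compressions at roughly 100 per minute, where the
# chest is not fully released between them (incomplete recoil after the first).
readings = [200, 195, 160, 145, 150, 170, 192, 193, 150, 140, 160, 197, 199]
rate, per_compression = analyse_compressions(readings, baseline_mm=200, sample_period_s=0.1)
print(round(rate), per_compression)   # -> 100 [[55, False], [60, True]]
```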
  • the augmented reality content including the augmented user interface 350 is projected on the floor so that the manikin 110 and augmented reality content are within the user’s field of view. This helps the user to focus on the CPR training and perform higher-quality chest compressions.
  • the camera 120 captures the real-world environment or physical scene, including the projected augmented reality content, and sends it to the remote computer device 150 for the trainer to see and to monitor the progress of the user. The trainer is thus able to watch the user undergoing CPR training via the camera 120.
  • objects in the real physical scene that are captured by the camera 120 are removed or masked out so that they will not be projected back onto the floor and they will not appear as the augmented reality content. For example, objects like the manikin 110 and sensors such as the inertial measurement unit 114, which can be seen in the real physical scene, are not projected onto the floor.
  • the local computer device 140 is configured for generating the virtual content based on the visual feed.
  • the virtual content includes a virtual heart 320 which may be part of the user interface 300, as shown in Figure 5.
  • the virtual heart 320 is projected as an augmented heart 352 on the manikin 110, as shown in Figure 7A.
  • the augmented heart 352 may be animated to inform the user of his/her progression. For example, when half the duration of a compression session has passed, i.e. 15 seconds, the augmented heart 352 animates to show a beating heart.
  • This beating heart animation can give the user some encouragement as the user may perceive that the chest compressions and rescue efforts are working, so that he/she can still complete high-quality chest compressions as much as possible in the next session.
  • the virtual content includes virtual annotations and these annotations are projected as augmented annotations 354 as shown in Figure 7B.
  • the virtual annotations may be generated by the local computer device 140 or made by the trainer on the remote computer device 150 based on the digital representation of the manikin 110. More specifically, the trainer observes the user’s CPR training and provides feedback by making or drawing the annotations on the visual feed displayed on the remote computer device 150 to guide the user in the CPR training. For example, if the trainer finds that the user has made an error, such as the user’s action, gesture, or position is wrong, the trainer can provide a suggestion to the user by making virtual annotations on the view of the training scene shown on the remote computer device 150.
  • These virtual annotations are transmitted in real-time to the local computer device 140 and the projector 130 projects them as the augmented annotations 354 on the manikin 110 to guide the user in the CPR training.
  • a software application such as Unity is executed on the remote computer device 150 for the trainer to make the annotations.
  • the WebCamTexture class in Unity receives the visual feed captured by the camera 120, processes the visual feed, and renders a texture that is displayed on the remote computer device 150.
  • Prefab objects in Unity are used by the trainer to make the annotations on the texture.
  • Prefab objects are virtual objects, which can be static or animated, that have been developed and packaged. These prefab objects are portable and can be used for this CPR training project as well as for other projects.
  • when annotations are made using prefab objects on the texture, the prefab objects are shown on the texture and location data of the prefab objects is recorded in a database 152, such as a Firebase database.
  • the annotations and the location data are sent to the local computer device 140 and the projector 130 projects the augmented annotations 354 according to the location data.
  • the remote computer device 150 includes suitable user input devices such as a mouse or digital pen for the trainer to make the annotations. Whenever the mouse is clicked or held down, the prefab object will be placed at the position of the mouse cursor.
  • the prefab object may be in various shapes and sizes, such as round dots.
  • the prefab object is a dot and the augmented annotations 354 are projected as an array of dots 356.
  • the mouse coordinates depend on the pixel resolution of the display screen of the remote computer device 150.
  • the mouse coordinates are synchronized with the database 152 based on the pixel resolution of the display screen. This ensures that the mouse coordinates from the remote computer device 150 are mapped correctly to the local computer device 140 such that the augmented annotations 354 are projected onto the correct location.
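  • A minimal sketch of this coordinate synchronisation is shown below: annotation positions are stored as resolution-independent fractions so that a click on the trainer’s display maps onto the projector output. The function names and resolutions are illustrative placeholders; in the system described above the data is recorded through Unity in the Firebase database.

```python
# Illustrative sketch of resolution-independent annotation coordinates.
def normalise(mouse_x, mouse_y, remote_width, remote_height):
    """Convert remote-display pixel coordinates to resolution-independent fractions."""
    return mouse_x / remote_width, mouse_y / remote_height

def denormalise(u, v, projector_width, projector_height):
    """Convert stored fractions back to projector pixel coordinates on the local side."""
    return int(u * projector_width), int(v * projector_height)

# Trainer clicks at (960, 540) on a 1920x1080 display ...
u, v = normalise(960, 540, 1920, 1080)
# ... and the local computer device projects the dot at the matching projector pixel.
print(denormalise(u, v, 1280, 720))        # -> (640, 360)
```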
  • some of the augmented reality content is projected for a predefined duration.
  • the virtual content such as annotations generated on the remote computer device 150 is configured to expire after the predefined duration.
  • the augmented reality content such as the augmented annotations 354 derived from the virtual annotations disappear after the predefined duration.
  • the augmented annotations 354 are displayed to the user in real-time and should be immediately noticed by the user so that the user can make appropriate corrections to the CPR steps.
  • a predefined duration of around 5 seconds should be sufficient for the augmented annotations 354 to be noticed by the user.
  • the augmented annotations 354 do not need to be displayed permanently, and projection of the augmented annotations 354 is stopped after the predefined duration. This also allows the trainer to make new annotations to guide the user.
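  • The expiry behaviour can be sketched as follows: each augmented annotation is timestamped when received and dropped after the predefined duration (around 5 seconds in the description above); the data structure and function names are illustrative only.

```python
# Illustrative sketch of annotation expiry after a predefined lifetime.
import time

ANNOTATION_LIFETIME_S = 5.0

annotations = []                             # list of (timestamp, projector_x, projector_y)

def add_annotation(x, y):
    annotations.append((time.monotonic(), x, y))

def live_annotations():
    """Return only the annotations that have not yet expired."""
    now = time.monotonic()
    return [(x, y) for t, x, y in annotations if now - t < ANNOTATION_LIFETIME_S]
```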
  • the next main step of CPR is to simulate use of an automated external defibrillator (AED) on the manikin 110.
  • the AED is normally used on a patient if the patient is non-responsive after chest compressions have been performed on the patient. Additionally, box 306 tells the rescuer to call for emergency services if the patient is still non-responsive.
  • the torso 116 of the manikin 110 includes two touch sensors 118.
  • the touch sensors 118 are located at the positions for electrodes of the AED. More specifically, the touch sensors 118 are at positions where the AED electrodes are supposed to be placed on the patient, i.e. one on the right chest just below the collar bone and the other on the left chest just below and to the left of the left nipple.
  • the training tasks include instructions to activate the touch sensors 118 to simulate using the AED.
  • the user can touch the torso 116 with his/her fingers at the positions of the touch sensors 118.
  • the user can place two AED electrode pads on the torso 116 at the positions of the touch sensors 118.
  • the touch sensors 118 would detect the pressure signals caused by the placement of the user’s fingers or the AED electrode pads on the torso 116.
  • the CPR training is completed in response to the touch sensors 118 detecting the pressure signals.
  • the AED step helps the user to learn the correct usage of the AED, especially on the correct positions to place the AED electrode pads.
  • the touch sensors 118 are visibly disposed on the torso 116. This helps users who are fresh trainees to identify the correct positions for the AED electrode pads.
  • the touch sensors 118 are disposed in the torso 116 so that users cannot clearly see where the correct positions are.
  • the touch sensors 118 are disposed in the chest cavity or embedded in the material of the torso 116. This may be for more experienced users who should be familiar with the correct positions. The trainer may make use of annotations to guide the user on where to position the AED electrode pads.
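  • A minimal sketch of the AED step check described above: the step is treated as complete only when both touch sensors report a press. How the sensor states reach the local computer device 140 (here, a simple polling callable) is an assumption for illustration.

```python
# Illustrative sketch of completing the AED step when both touch sensors are activated.
import time

def aed_step_complete(right_chest_pressed: bool, left_side_pressed: bool) -> bool:
    """Both electrode positions must register a press to simulate correct pad placement."""
    return right_chest_pressed and left_side_pressed

def wait_for_aed_placement(read_sensors, poll_interval_s=0.05):
    """Poll a callable returning (right_pressed, left_pressed) until both are active."""
    while True:
        right, left = read_sensors()
        if aed_step_complete(right, left):
            return True
        time.sleep(poll_interval_s)
```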
  • the user’s results 400 are shown in Figure 9.
  • the user’s results 400 include a score 402 and a line chart 404.
  • the points on the line chart 404 are obtained by measuring the value of the health bar 312 at predefined intervals, such as every second.
  • the line chart 404 roughly reflects the quality of the external cardiac compression performed by the user.
  • the score 402 is calculated from the average of all points on the line chart 404.
  • the user’s results 400 further include feedback 406 for the user to learn and improve in the next training.
  • the feedback 406 may include data about the average rate of compressions, average compression depth, completion status, and number of annotations made by the trainer.
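  • The scoring described above can be sketched as follows: the health bar is sampled at a fixed interval to build the line chart 404, and the score 402 is the average of those samples; the field names and sample values below are illustrative placeholders.

```python
# Illustrative sketch of building the results 400 from sampled health-bar values.
def build_results(health_samples, avg_rate_per_min, avg_depth_mm,
                  completed, annotation_count):
    score = sum(health_samples) / len(health_samples) if health_samples else 0.0
    return {
        "score": round(score, 1),
        "line_chart": list(health_samples),        # one point per sampling interval
        "feedback": {
            "average_compression_rate_per_min": avg_rate_per_min,
            "average_compression_depth_mm": avg_depth_mm,
            "completed": completed,
            "trainer_annotations": annotation_count,
        },
    }

results = build_results([82.0, 85.5, 90.0, 88.0], 108, 52, True, 2)
print(results["score"])                            # -> 86.4 (average of the samples)
```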
  • the augmented reality system 100 and method 200 provide an improved way of training users to perform emergency CPR while giving users feedback during their CPR training.
  • Augmented reality is used to project data around the manikin 110 so that users can receive feedback about their CPR actions.
  • Real-time guidance can be provided by a remote trainer, such as by making annotations in the augmented reality projection, to help users correct any mistakes while they interact with the manikin 110.
  • the user does not need to wear or hold any devices, and the user can perform CPR on the manikin 110 directly with two free hands.
  • Real-life CPR is a two-hand operation and this factor is critical for the CPR to be successful.
  • the user does not need to wear any cumbersome and expensive head-mounted device that restricts his/her field of vision.
  • the user can have a full view of the manikin 110 and the surrounding environment so that the user can perform CPR properly. This helps the user to learn better and the user would be equipped with better CPR skills useful for rescuing patients and casualties in real life.
  • As the user is not tethered to any wired devices, as would be the case for a head-mounted device, bystanders can stay at the site environment to observe the CPR training.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Medical Informatics (AREA)
  • Medicinal Chemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Algebra (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Cardiology (AREA)
  • Mathematical Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure generally relates to an augmented reality system (100) and method (200) for cardiopulmonary resuscitation training. A camera (120) captures a visual feed of a real-world environment comprising a manikin (110). A computer device (140) sends to a projector (130) virtual content generated based on the visual feed. The projector (130) projects the virtual content as augmented reality content onto the manikin (110) to facilitate the cardiopulmonary resuscitation training.

Description

AUGMENTED REALITY SYSTEM AND METHOD FOR CARDIOPULMONARY RESUSCITATION TRAINING
Cross Reference to Related Application(s)
The present disclosure claims the benefit of Singapore Patent Application No. 10202111767U filed on 22 October 2021, which is incorporated in its entirety by reference herein.
Technical Field
The present disclosure generally relates to an augmented reality system and method for cardiopulmonary resuscitation training.
Background
Several research papers have been published about simulation software related to cardiopulmonary resuscitation (CPR) training to improve users' learning efficiency. Some examples include HoloCPR (Reference [1]), LISSA (Reference [2]), and VR-CPR (Reference [3]), all of which use virtual reality to guide CPR training.
Reference [1] - Johnson, J., Rodrigues, D., Gubbala, M., & Weibel, N. (2018). HoloCPR: Designing and Evaluating a Mixed Reality Interface for Time-Critical Emergencies. 67-76. 10.1145/3240925.3240984.
Reference [2] - Wattanasoontorn, V., Boada, I., & Sbert, M. (2013). LISSA: A Serious Game to learn Cardiopulmonary Resuscitation.
Reference [3] - Yang, C., Liu, S., Lin, C., & Liu, C. (2020). Immersive Virtual Reality-Based Cardiopulmonary Resuscitation Interactive Learning Support System. IEEE Access, 8, 120870-120880.
However, there are some challenges for users of existing software for CPR training. For example, users need to wear or hold electronic devices, such as virtual reality glasses or mobile devices, during the CPR training. Therefore, in order to address some of these challenges, there is a need to provide an improved system and method for CPR training.
Summary
According to a first aspect of the present disclosure, there is an augmented reality system for cardiopulmonary resuscitation training. The system comprises: a camera configured for capturing a visual feed of a real-world environment comprising a manikin positioned within a field of view of the camera; a projector configured for projecting onto the manikin positioned within a field of view of the projector; and a local computer device communicative with the camera and projector. The local computer device is configured for: aligning the fields of view of the camera and the projector to the manikin; receiving the visual feed from the camera; and sending, to the projector, virtual content generated based on the visual feed to facilitate the cardiopulmonary resuscitation training for a user. The projector is configured for projecting the virtual content as augmented reality content onto the manikin in the real-world environment, the augmented reality content viewable by the user performing the cardiopulmonary resuscitation training on the manikin.
According to a second aspect of the present disclosure, there is a computer-implemented augmented reality method for cardiopulmonary resuscitation training. The method comprises: aligning, using a local computer device, fields of view of a camera and a projector to a manikin in a real-world environment; capturing, by the camera, a visual feed of the real-world environment comprising the manikin positioned within the field of view of the camera; receiving, by the local computer device, the visual feed from the camera; sending virtual content from the local computer device to the projector, the virtual content generated based on the visual feed to facilitate the cardiopulmonary resuscitation training for a user; and projecting, by the projector, the virtual content as augmented reality content onto the manikin in the real-world environment and positioned within the field of view of the projector, the augmented reality content viewable by the user performing the cardiopulmonary resuscitation training on the manikin.
According to a third aspect of the present disclosure, there is a manikin for cardiopulmonary resuscitation training. The manikin comprises: a head comprising an inertial measurement unit for measuring movement of the head to detect opening of an airway in the head; and a torso that is resiliently compressible, the torso comprising a distance sensor for measuring compression of the torso. The inertial measurement unit and distance sensor are communicative with a local computer device to process the measurements for the cardiopulmonary resuscitation training.
An augmented reality system and method for cardiopulmonary resuscitation training is thus disclosed herein. Various features, aspects, and advantages of the present disclosure will become more apparent from the following detailed description of the embodiments of the present disclosure, by way of non-limiting examples only, along with the accompanying drawings.
Brief Description of the Drawings
Figures 1A and 1 B are illustrations of an augmented reality system for cardiopulmonary resuscitation training.
Figure 2 is a flowchart illustration of an augmented reality method for cardiopulmonary resuscitation training.
Figures 3A to 3D are illustrations of a software for cardiopulmonary resuscitation training.
Figures 4A to 4D are further illustrations of the software for cardiopulmonary resuscitation training.
Figure 5 is an illustration of a user interface of the software for cardiopulmonary resuscitation training.
Figures 6A and 6B are illustrations of opening an airway during cardiopulmonary resuscitation training.
Figures 7A to 7C are illustrations of performing chest compressions during cardiopulmonary resuscitation training.
Figure 8 is an illustration of simulating use of an automated external defibrillator during cardiopulmonary resuscitation training.
Figure 9 is an illustration of results from the cardiopulmonary resuscitation training.
Detailed Description
For purposes of brevity and clarity, descriptions of embodiments of the present disclosure are directed to an augmented reality system and method for cardiopulmonary resuscitation training, in accordance with the drawings. While aspects of the present disclosure will be described in conjunction with the embodiments provided herein, it will be understood that they are not intended to limit the present disclosure to these embodiments. On the contrary, the present disclosure is intended to cover alternatives, modifications and equivalents to the embodiments described herein, which are included within the scope of the present disclosure as defined by the appended claims. Furthermore, in the following detailed description, specific details are set forth in order to provide a thorough understanding of the present disclosure. However, it will be recognized by an individual having ordinary skill in the art, i.e. a skilled person, that the present disclosure may be practiced without specific details, and/or with multiple details arising from combinations of aspects of particular embodiments. In a number of instances, well-known systems, methods, procedures, and components have not been described in detail so as to not unnecessarily obscure aspects of the embodiments of the present disclosure.
In embodiments of the present disclosure, depiction of a given element or consideration or use of a particular element number in a particular figure or a reference thereto in corresponding descriptive material can encompass the same, an equivalent, or an analogous element or element number identified in another figure or descriptive material associated therewith. References to “an embodiment / example”, “another embodiment / example”, “some embodiments / examples”, “some other embodiments / examples”, and so on, indicate that the embodiment(s) / example(s) so described may include a particular feature, structure, characteristic, property, element, or limitation, but that not every embodiment / example necessarily includes that particular feature, structure, characteristic, property, element or limitation. Furthermore, repeated use of the phrase “in an embodiment / example” or “in another embodiment / example” does not necessarily refer to the same embodiment / example.
The terms “comprising”, “including”, “having”, and the like do not exclude the presence of other features / elements / steps than those listed in an embodiment. Recitation of certain features / elements / steps in mutually different embodiments does not indicate that a combination of these features / elements / steps cannot be used in an embodiment.
As used herein, the terms “a” and “an” are defined as one or more than one. The use of “/” in a figure or associated text is understood to mean “and/or” unless otherwise indicated. The term “set” is defined as a non-empty finite organization of elements that mathematically exhibits a cardinality of at least one (e.g. a set as defined herein can correspond to a unit, singlet, or single-element set, or a multiple-element set), in accordance with known mathematical definitions. The recitation of a particular numerical value or value range herein is understood to include or be a recitation of an approximate numerical value or value range.
In representative or exemplary embodiments of the present disclosure, there is an augmented reality system 100 for cardiopulmonary resuscitation (CPR) training. In general terms, augmented reality allows a user to view real-world scenes superimposed with virtual objects, such as graphics and text, that are displayed to users. In embodiments of the present disclosure, the augmented reality system 100 can provide users with additional information elements, such as where to position their hands, to guide them in the CPR training which is normally performed on a dummy or manikin 110. As shown in Figures 1A and 1B, the augmented reality system 100 includes a camera 120, a projector 130, and a local computer device 140 communicative with the camera 120 and projector 130. During CPR training, the manikin 110 is positioned within a field of view of the camera 120 and a field of view of the projector 130. The camera 120 can be any device that is capable of recording visual content including images and video. The projector 130 can be any device that is capable of projecting digital content, such as images and video, onto a surface. The local computer device 140 can be a desktop device, mobile device, tablet device, laptop computer, or any other electronic device which may have processors, central processing units, or controllers. The local computer device 140 includes suitable user input and display devices for users to operate. Additionally, the local computer device 140 may include one or more microcontrollers or microprocessors, such as the WeMos D1 R2 and the Arduino Uno.
The augmented reality system 100 may further include a remote computer device 150 that is communicative with the local computer device 140, such as across the communication network 160. Like the local computer device 140, the remote computer device 150 can be a desktop device, mobile device, tablet device, laptop computer, or any other electronic device. Additionally, the remote computer device 150 includes suitable user input and display devices for a trainer or instructor to remotely conduct the CPR training for users.
The communication network 160 is a medium or environment through which content, notifications, and/or messages are communicated among various components. Suitable security protocols, such as encryption protocols, may be implemented in the communication network 160 for secure communications among the components. Some non-limiting examples of the communication network 160 include a virtual private network (VPN), wireless fidelity (Wi-Fi) network, light fidelity (Li-Fi) network, local area network (LAN), wide area network (WAN), metropolitan area network (MAN), satellite network, Internet, fibre optic network, coaxial cable network, infrared (IR) network, radio frequency (RF) network, and any combination thereof. Various components in the communication network 160 may connect to the communication network 160 in accordance with various wired and wireless communication protocols, such as Transmission Control Protocol / Internet Protocol (TCP/IP), User Datagram Protocol (UDP), 2nd to 5th Generation (2G to 5G) communication protocols, Long Term Evolution (LTE) communication protocols, and any combination thereof. Each component connected to the communication network 160 includes a data communication or transceiver module to communicate and transmit / receive data over the communication network 160. Some non-limiting examples of a transceiver module include an antenna module, a radio frequency transceiver module, a wireless transceiver module, a Bluetooth transceiver module, an Ethernet port, a Universal Serial Bus (USB) port, or any other module / component / device configured for transmitting and receiving data.
During CPR training, the manikin 110 is positioned within a field of view of the camera 120 and a field of view of the projector 130. The camera 120 and projector 130 may be supported on suitable structures, such as tripods, to stabilize them. The camera 120 is configured for capturing a visual feed of a real-world environment including the manikin 110 positioned within the field of view of the camera 120. For example, the camera 120 captures a video feed of a physical scene in the real-world environment and the manikin 110 is inside the physical scene. The projector 130 is configured for projecting onto the manikin 110 positioned within a field of view of the projector 130. For example, the projector 130 projects some information elements, such as graphics or text, onto the manikin 110.
The local computer device 140 is configured for aligning the field of view of the camera 120 and the field of view of the projector 130 to the manikin 110. For example, the centres of the fields of view are aligned to a region of the manikin 110, such as the head 112. The scenes in the respective fields of view can be aligned and processed by the local computer device 140 so that virtual content can be generated based on the visual feed captured by the camera 120. Additionally, augmented reality content derived from the generated virtual content can be accurately projected by the projector 130 onto the manikin 110 so that the augmented reality content can be viewed by the user working on the manikin 110. The local computer device 140 is further configured for receiving the visual feed from the camera 120. The local computer device 140 is further configured for sending, to the projector 130, virtual content generated based on the visual feed to facilitate the cardiopulmonary resuscitation training for the user. Virtual content can be defined as information that exists digitally and can be communicated across a computer network, such as the communications network 160.
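By way of a minimal sketch, one possible way to perform this alignment is a planar homography between camera coordinates and projector coordinates, computed from a few corresponding reference points; the disclosure does not specify the calibration method, and the point values and helper names below are hypothetical placeholders.

```python
# Illustrative sketch only: camera-to-projector alignment via a planar homography,
# assuming four reference marks visible to both devices (values are placeholders).
import cv2
import numpy as np

# Pixel coordinates of four reference marks as seen in the camera feed
# (e.g. the corners of a projected calibration pattern around the manikin).
camera_pts = np.float32([[412, 208], [1310, 224], [1288, 842], [398, 830]])

# The same four marks in the projector's own pixel coordinates.
projector_pts = np.float32([[0, 0], [1280, 0], [1280, 720], [0, 720]])

# Homography that maps camera-space locations onto projector space.
H, _ = cv2.findHomography(camera_pts, projector_pts)

def to_projector(camera_space_overlay, size=(1280, 720)):
    """Warp virtual content drawn in camera coordinates into projector coordinates."""
    return cv2.warpPerspective(camera_space_overlay, H, size)

def project_point(x, y):
    """Map a single camera-space point (e.g. an annotation) into projector space."""
    src = np.float32([[[x, y]]])
    dst = cv2.perspectiveTransform(src, H)
    return tuple(dst[0, 0])
```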
The projector 130 is further configured for projecting the virtual content as the augmented reality content onto the manikin 110 in the real-world environment. The augmented reality content is a projection of the corresponding virtual content onto real physical objects so that the information can be perceived in the real-world environment. The augmented reality content allows the virtual content to be presented in various formats, such as but not limited to text, graphics, and sounds.
As the projector 130 is aligned to the manikin 110, the augmented reality content is viewable by the user performing the CPR training on the manikin 110. For example, the virtual content includes information for projection as the augmented reality content onto the manikin 110 to guide a user in the CPR training. The augmented reality content is within the field of view of the user as the user works on the manikin 110.
In some embodiments, the user can make use of the augmented reality content to guide his/her CPR training practice without the assistance of the trainer or instructor. The virtual content is generated by the local computer device 140 and includes information such as a set of instructions or training tasks to guide the user in the CPR training. For example, the training tasks include instructions to open an airway and to perform chest compressions on the manikin 110.
In some embodiments, the user can practise the CPR training with the assistance of the trainer using the remote computer device 150. For example, the trainer can use the remote computer device 150 to generate the virtual content based on the visual feed received on the remote computer device 150. The virtual content may include ad hoc instructions or annotations to guide the user. This allows the trainer to guide the user in correcting mistakes made during the CPR training. As the manikin 110 is positioned within the field of view of the camera 120, the visual feed includes a digital or virtual representation of the manikin 110. The virtual content may include text and/or graphics generated on a digital or virtual representation of the manikin 110 in the visual feed. For example, the local computer device 140 sends the visual feed to the remote computer device 150 so that the trainer can see the digital representation of the manikin 110, i.e. its digital twin, on the remote computer device 150. The trainer generates the virtual content based on the visual feed sent to the remote computer device 150 and the local computer device 140 receives the virtual content from the remote computer device 150. For example, the virtual content includes virtual annotations made by the trainer based on the digital representation of the manikin 110. The projector 130 then projects the virtual content as the augmented reality content onto the manikin 110 in the physical scene for the user to see. For example, the augmented reality content includes augmented annotations derived from the virtual annotations. The alignment of the fields of view of the camera 120 and the projector 130 allows for the augmented reality content to be projected on to the same location of the manikin 110 as that of the virtual content on the digital representation of the manikin 110.
In various embodiments of the present disclosure, with reference to Figure 2, there is a computer-implemented augmented reality method 200 for CPR training, such as using the augmented reality system 100. The method 200 includes a step 202 of aligning, using the local computer device 140, the fields of view of the camera 120 and the projector 130 to the manikin 110 in the real-world environment. The method 200 includes a step 204 of capturing, by the camera 120, a visual feed of the real-world environment including the manikin 110 positioned within the field of view of the camera 120. The method 200 includes a step 206 of receiving, by the local computer device 140, the visual feed from the camera 120. The method 200 includes a step 208 of sending virtual content from the local computer device 140 to the projector 130, the virtual content generated based on the visual feed to facilitate the CPR training for a user. The method 200 includes a step 210 of projecting, by the projector 130, the virtual content as augmented reality content onto the manikin 110 in the real-world environment and positioned within the field of view of the projector 130, the augmented reality content viewable by the user performing the CPR training on the manikin 110.
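A minimal sketch of the loop formed by steps 204 to 210 is shown below, assuming the projector 130 is driven as a fullscreen display output of the local computer device 140; the generate_virtual_content() helper is a hypothetical placeholder rather than the actual implementation.

```python
# Illustrative sketch of the capture-generate-project loop (steps 204 to 210).
import cv2
import numpy as np

def generate_virtual_content(frame):
    # Placeholder guidance graphic: a circle marking a hand position near the frame centre.
    overlay = np.zeros_like(frame)
    cv2.circle(overlay, (frame.shape[1] // 2, frame.shape[0] // 2), 40, (0, 0, 255), 3)
    return overlay

cap = cv2.VideoCapture(0)                                   # step 204: capture the visual feed
cv2.namedWindow("projector", cv2.WINDOW_NORMAL)             # window shown on the projector output
cv2.setWindowProperty("projector", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

while True:
    ok, frame = cap.read()                                  # step 206: receive the feed
    if not ok:
        break
    virtual = generate_virtual_content(frame)               # step 208: generate virtual content
    # step 210: project as augmented reality content; in practice the overlay would first be
    # warped with the camera-to-projector homography from the calibration sketch above.
    cv2.imshow("projector", virtual)
    if cv2.waitKey(1) & 0xFF == 27:                         # Esc exits the loop
        break

cap.release()
cv2.destroyAllWindows()
```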
In some embodiments, the method 200 may include steps of sending the visual feed from the local computer device 140 to the remote computer device 150, generating, by the remote computer device 150, the virtual content based on the visual feed, and receiving, by the local computer device 140, the virtual content from the remote computer device 150.
A CPR training software application is executed on the local computer device 140 to conduct the CPR training for users or trainees. Figure 3A illustrates the main page of the training software. A user is required to log in, or to register if he/she is a new user, before beginning the CPR training.
In some embodiments, MAMP software and suitable PHP scripts may be used to install a local server environment on the local computer device 140 and to manage a database 142, such as a MySQL database, that stores user information such as a user identifier (e.g. an integer), username, password, and training score. The user creates a new username and password during registration as shown in Figure 3B, and the PHP scripts then check whether the username has already been used. If there is no existing identical username, the registration is successful. This ensures that each unique username is associated with a unique user. The password may be encrypted using suitable encryption protocols, such as the salt-hash method. The purpose of encryption is to protect private data so that developers cannot directly know the password. Some requirements may be defined for the username and password. For example, the username must have more than 2 characters and the password must have more than 8 characters.
After registration, the user can proceed to login to the training software by entering his/her username and password, as shown in Figure 3C. The database 142 searches for the entered username first. If the username exists and is unique, it means that the user is registered. The database 142 then compares the hash string of the entered password with the hash string in the user information. If both hash strings match, the user login would be successful. As shown in Figure 3D, the main page changes in response to the successful login. The user can proceed with the CPR training by selecting the “Play Game” button. Users may not be allowed to select the button without logging in.
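By way of illustration only, the registration and login checks described above may be sketched as follows. This is a minimal Python sketch; the disclosed embodiment uses PHP scripts with a MySQL database, and the function names, the in-memory user table, and the choice of SHA-256 for the salt-hash scheme are assumptions made for the sketch rather than features of the disclosure.

```python
import hashlib
import secrets

# Hypothetical in-memory stand-in for the user table in the database 142.
users = {}  # username -> {"salt": ..., "pw_hash": ..., "score": ...}

def hash_password(password: str, salt: str) -> str:
    # Salt-hash scheme: only the salted hash is stored, never the plain password.
    return hashlib.sha256((salt + password).encode("utf-8")).hexdigest()

def register(username: str, password: str) -> bool:
    # Length requirements: username more than 2 characters, password more than 8.
    if len(username) <= 2 or len(password) <= 8:
        return False
    # Reject duplicate usernames so that each unique username maps to a unique user.
    if username in users:
        return False
    salt = secrets.token_hex(16)
    users[username] = {"salt": salt, "pw_hash": hash_password(password, salt), "score": 0}
    return True

def login(username: str, password: str) -> bool:
    # Look up the username first; an unknown username means the user is not registered.
    record = users.get(username)
    if record is None:
        return False
    # Compare the hash string of the entered password with the stored hash string.
    return hash_password(password, record["salt"]) == record["pw_hash"]

# Example usage of the hypothetical helpers.
assert register("trainee01", "not-a-real-pw1")
assert login("trainee01", "not-a-real-pw1")
assert not login("trainee01", "wrong-password")
```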
According to standard procedures for performing CPR in real life, when a rescuer (person giving CPR) encounters a patient (person needing CPR), the rescuer needs to complete some precautionary checks before performing CPR on the patient. Firstly, upon encountering the patient, the rescuer needs to confirm whether the surrounding environment is safe. This includes determining whether the site environment is suitable for CPR, such as whether there are safety hazards and whether the site environment can allow the patient to lie down. As shown in Figure 4A, the training software displays a preparatory page to train the user to have an awareness of checking and confirming the real-world environment.
After the user has completed the environment check, the user is requested to select the gender of the patient, which would be the manikin 110 for the CPR training, as shown in the preparatory page in Figure 4B. After the user selects the gender, the preparatory page changes into one of two pages as shown in Figures 4C and 4D, respectively, both of which present similar information to the user. The gender selection trains the user to recognise that there is almost no difference between the rescue methods for adult men and adult women. However, a survey showed that when the patient is female, more people may feel embarrassed and delay the emergency rescue, especially if the rescuer is male. Through these gender selection steps, users become aware of this problem, which reduces hesitation when faced with female patients needing CPR.
After the user has completed the gender selection, the user is reminded that the trainer or administrator can assess the user during the CPR training. The user may be requested to share the screen of the local computer device 140 with the remote computer device 150, so that the remote computer device 150 can receive the visual feed for the trainer to see. Once the CPR training begins, the user only needs to interact with the manikin 110 for the CPR training and this interaction will be captured by the camera 120 and sent as the visual feed to the remote computer device 150.
The projector 130 is arranged to project virtual content as augmented reality content onto the manikin 110 and/or the floor on which the manikin 110 is placed. The virtual content may include a user interface 300 that is projected as an augmented user interface 350 on the floor, as shown in Figures 5 and 6A. The manikin 110 and the augmented user interface 350 are both within the user's field of view, so the user does not have to frequently turn his/her head and shift attention to a display screen of the local computer device 140. This allows the user to interact directly with the manikin 110 and view information from the user interface 300 on the augmented user interface 350, thereby helping the user to focus on the CPR training.
The user interface 300, and correspondingly the augmented user interface 350, includes the training tasks to guide the user in the CPR training. The training tasks may be defined by boxes 301-306 that represent the different steps of rescuing a patient. The boxes 301-306 tell the user which CPR steps have been completed and which steps need to be performed next. In a real-life situation, the rescuer should check that the patient is in the supine position and check the patient for responsiveness (such as by questioning, shouting, and/or shaking the patient), as represented by boxes 301 and 302 respectively.
One of the main steps of CPR is to open the airway of the patient, as represented by box 303. After opening the patient's airway, the rescuer should check the patient's breathing and give rescue breaths if needed, as represented by box 304. The standard action is to gently place one hand on the patient's forehead, gently tilt the head back, and lift the patient's chin with two fingers of the other hand, so that the base of the tongue is lifted away from the back of the throat. The training tasks include instructions to move the head 112 of the manikin 110 to open the airway in the head 112. To train the user to open the airway, the head 112 of the manikin 110 includes an inertial measurement unit 114 for measuring movement of the head 112 to detect opening of the airway inside the head 112, as shown in Figures 6A and 6B. The inertial measurement unit 114 is communicative with the local computer device 140 to process measurements for the CPR training.
The inertial measurement unit 114 may include an accelerometer, such as the MPU6050, to measure the acceleration of the head relative to its initial position. The accelerometer is capable of detecting acceleration along three axes simultaneously. For the CPR training, the measurement data in one direction can be used to determine whether the airway has been opened. The user is assessed to have successfully opened the airway if the measurement value from the accelerometer is larger than a predefined threshold value. For example, the predefined threshold value is 1.0. When the airway has not been opened, the measurement value ranges around 0.7 to 0.9. When the airway has been opened, the measurement value ranges around 1.0 to 1.3. The predefined threshold value of 1.0 thus allows the two different states of the airway to be distinguished.
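By way of illustration only, this threshold test may be sketched as follows (Python). It assumes the single-axis accelerometer reading is already available as a floating-point value; the function name is illustrative and does not form part of the disclosure.

```python
AIRWAY_OPEN_THRESHOLD = 1.0  # predefined threshold separating the two airway states

def airway_opened(axis_reading: float, threshold: float = AIRWAY_OPEN_THRESHOLD) -> bool:
    # Readings of roughly 0.7 to 0.9 are observed while the airway is still closed and
    # roughly 1.0 to 1.3 once the head has been tilted back, so a single threshold
    # comparison distinguishes the two states.
    return axis_reading > threshold

# Example readings: head not tilted, then head tilted back.
print(airway_opened(0.85))  # False -> airway not yet opened
print(airway_opened(1.15))  # True  -> airway opened
```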
Box 305 represents another main step of CPR, performing external cardiac compression. The training tasks include instructions to perform chest compressions on the manikin 110. Before starting the chest compressions, a preparation time of 4 seconds is provided, as shown by the user interface element 308. The user interface element 308 may count down the preparation time by displaying “3”, “2”, “1”, and “GO”, each lasting for one second. This preparation time helps the user to prepare the correct posture for performing the chest compressions. The total duration of each compression session is 30 seconds, as shown in the timer 310. The compression session ends after 30 seconds. The CPR training may continue with additional compression sessions after suitable pauses, such as to account for giving rescue breaths.
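By way of illustration only, the session timing may be sketched as follows (Python); the display and sensor-polling hooks are hypothetical placeholders for the projected user interface and the torso sensor readout.

```python
import time

PREPARATION_STEPS = ["3", "2", "1", "GO"]  # each label is displayed for one second
SESSION_DURATION_S = 30                    # fixed length of one compression session

def run_compression_session(display=print, poll_sensors=lambda: None):
    # The countdown gives the user time to assume the correct compression posture.
    for step in PREPARATION_STEPS:
        display(step)
        time.sleep(1)
    # The compression session itself ends after a fixed 30 seconds.
    start = time.monotonic()
    while time.monotonic() - start < SESSION_DURATION_S:
        poll_sensors()   # placeholder for reading the distance sensor and updating the UI
        time.sleep(0.1)
    display("Session complete")
```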
To facilitate the external cardiac compression performed by the user on the manikin 110, the torso 116 of the manikin 110 is resiliently compressible, as shown in Figure 7A. The torso 116 includes a distance sensor for measuring compression of the torso 116, wherein the distance sensor is communicative with the local computer device 140 to process measurements for the CPR training. The distance sensor may be disposed on the torso 116 or embedded in the material of the torso 116. In some embodiments, the torso 116 includes a chest cavity, wherein the torso 116 is resiliently compressible for compressing the chest cavity. The distance sensor may be disposed in the chest cavity for measuring compression of the chest cavity.
The distance sensor, such as the VL53L0X time-of-flight sensor, can continuously measure the depth of compression as the user performs external compression on the torso 116, and the measurement data may be shown on the user interface 300. The user interface 300 has a health bar 312 that drops at a constant rate when there is no external compression. When the distance sensor measures that a compression reaches a predefined compression depth, the health bar 312 increases. If the compression does not reach the predefined compression depth, the health bar 312 does not increase. Thus, the health bar 312 intuitively reflects to the user the quality of the external compression, and the user can adjust his/her compression actions accordingly.
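By way of illustration only, the health-bar behaviour may be sketched as follows (Python). The decay rate, the gain per compression, and the 50 mm depth threshold are illustrative values chosen for the sketch and are not taken from the disclosure.

```python
class HealthBar:
    """Tracks compression quality as a value between 0 and 100 on the projected UI."""

    def __init__(self, decay_per_second: float = 2.0, gain_per_compression: float = 3.0):
        self.value = 100.0
        self.decay_per_second = decay_per_second          # constant drop when no compression occurs
        self.gain_per_compression = gain_per_compression  # increase for each sufficiently deep compression

    def tick(self, dt_seconds: float) -> None:
        # The bar drops at a constant rate regardless of user activity.
        self.value = max(0.0, self.value - self.decay_per_second * dt_seconds)

    def on_compression(self, depth_mm: float, required_depth_mm: float = 50.0) -> None:
        # Only compressions reaching the predefined depth replenish the bar.
        if depth_mm >= required_depth_mm:
            self.value = min(100.0, self.value + self.gain_per_compression)

# Example: one shallow and one sufficiently deep compression within one second.
bar = HealthBar()
bar.tick(1.0)                       # bar drops to 98.0
bar.on_compression(depth_mm=35.0)   # too shallow, no increase
bar.on_compression(depth_mm=55.0)   # reaches the predefined depth, bar rises (capped at 100.0)
print(bar.value)
```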
The user interface 300 has a user interface element 314 that shows the compression depth and the frequency or rate of compressions. The user interface 300 has an instrument panel 316 that shows whether the compressions are performed correctly. Each time the torso 116 is compressed, the pointer 318 of the instrument panel 316 rotates clockwise. When the compression force is released, the pointer 318 rotates counterclockwise as the torso 116 returns to its initial uncompressed state. When the torso 116 has returned to its initial uncompressed state, the pointer 318 points to its initial position. If the pointer 318 has not returned to its initial position but the user has already started the next compression, the quality of the compression does not meet the CPR standard procedures.
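By way of illustration only, the recoil check reflected by the pointer 318 may be expressed as a simple test (Python); the tolerance value and the names are illustrative assumptions. A compression that begins while the torso still shows residual compression indicates incomplete recoil.

```python
def full_recoil_achieved(residual_depth_mm: float, tolerance_mm: float = 2.0) -> bool:
    # residual_depth_mm is the compression depth measured by the distance sensor at the
    # moment the next downstroke begins; a value above the tolerance means the torso had
    # not returned to its initial uncompressed state, i.e. the recoil was incomplete.
    return residual_depth_mm <= tolerance_mm

# Example: 8 mm of residual compression at the start of the next push -> poor recoil.
print(full_recoil_achieved(8.0))  # False
print(full_recoil_achieved(0.5))  # True
```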
As described above, the augmented reality content including the augmented user interface 350 is projected on the floor so that the manikin 110 and augmented reality content are within the user’s field of view. This helps the user to focus on the CPR training and perform higher-quality chest compressions. The camera 120 captures the real-world environment or physical scene, including the projected augmented reality content, and sends it to the remote computer device 150 for the trainer to see and to monitor the progress of the user. The trainer is thus able to watch the user undergoing CPR training via the camera 120. To prevent cluttering, objects in the real physical scene that are captured by the camera 120 are removed or masked out so that they will not be projected back onto the floor and they will not appear as the augmented reality content. For example, objects like the manikin 110 and sensors such as the inertial measurement unit 114, which can be seen in the real physical scene, are not projected onto the floor.
In some embodiments, the local computer device 140 is configured for generating the virtual content based on the visual feed. For example, the virtual content includes a virtual heart 320 which may be part of the user interface 300, as shown in Figure 5. The virtual heart 320 is projected as an augmented heart 352 on the manikin 110, as shown in Figure 7A. The augmented heart 352 may be animated to inform the user of his/her progression. For example, when half the duration of a compression session has passed, i.e. 15 seconds, the augmented heart 352 animates to show a beating heart. This beating heart animation gives the user some encouragement, as the user may perceive that the chest compressions and rescue efforts are working, so that he/she can still complete high-quality chest compressions as much as possible in the next session.
In some embodiments, the virtual content includes virtual annotations and these annotations are projected as augmented annotations 354 as shown in Figure 7B. The virtual annotations may be generated by the local computer device 140 or made by the trainer on the remote computer device 150 based on the digital representation of the manikin 110. More specifically, the trainer observes the user’s CPR training and provides feedback by making or drawing the annotations on the visual feed displayed on the remote computer device 150 to guide the user in the CPR training. For example, if the trainer finds that the user has made an error, such as the user’s action, gesture, or position is wrong, the trainer can provide a suggestion to the user by making virtual annotations on the view of the training scene shown on the remote computer device 150. These virtual annotations are transmitted in real-time to the local computer device 140 and the projector 130 projects them as the augmented annotations 354 on the manikin 110 to guide the user in the CPR training.
In some embodiments, a software application such as Unity is executed on the remote computer device 150 for the trainer to make the annotations. The WebCamTexture class in Unity receives the visual feed captured by the camera 120, processes the visual feed, and renders a texture that is displayed on the remote computer device 150. Prefab objects in Unity are used by the trainer to make the annotations on the texture. Prefab objects are virtual objects, which can be static or animated, that have been developed and packaged. These prefab objects are portable and can be used for this CPR training project as well as for other projects. When annotations are made using prefab objects on the texture, the prefab objects are shown on the texture and location data of the prefab objects is recorded on a database 152, such as a Firebase database. The annotations and the location data are sent to the local computer device 140 and the projector 130 projects the augmented annotations 354 according to the location data.
It will be appreciated that the remote computer device 150 includes suitable user input devices such as a mouse or digital pen for the trainer to make the annotations. Whenever the mouse is clicked or held down, the prefab object will be placed at the position of the mouse cursor. The prefab object may be in various shapes and sizes, such as round dots. In one embodiment as shown in Figure 7C, the prefab object is a dot and the augmented annotations 354 are projected as an array of dots 356. It will be appreciated that the mouse coordinates depend on the pixel resolution of the display screen of the remote computer device 150. The mouse coordinates are synchronized with the database 152 based on the pixel resolution of the display screen. This ensures that the mouse coordinates from the remote computer device 150 are mapped correctly to the local computer device 140 such that the augmented annotations 354 are projected onto the correct location.
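By way of illustration only, the coordinate mapping may be sketched as follows (Python), assuming mouse positions are normalised against the remote display's pixel resolution before being synchronised and are rescaled to the projector's resolution on the local side; the helper names and the example resolutions are assumptions, not part of the disclosure.

```python
def normalise_annotation(x_px: float, y_px: float,
                         screen_w: int, screen_h: int) -> tuple[float, float]:
    # Convert the mouse position on the remote display into resolution-independent
    # coordinates in [0, 1] before synchronising them with the database.
    return x_px / screen_w, y_px / screen_h

def to_projector_pixels(u: float, v: float,
                        proj_w: int, proj_h: int) -> tuple[int, int]:
    # Map the normalised coordinates back to projector pixels on the local side so the
    # augmented annotation is projected at the corresponding location on the manikin.
    return round(u * proj_w), round(v * proj_h)

# Example: a click at (960, 540) on a 1920x1080 remote screen maps to the centre
# of a 1280x800 projected image.
u, v = normalise_annotation(960, 540, 1920, 1080)
print(to_projector_pixels(u, v, 1280, 800))  # (640, 400)
```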
In some embodiments, some of the augmented reality content is projected for a predefined duration. More specifically, the virtual content such as annotations generated on the remote computer device 150 is configured to expire after the predefined duration. The augmented reality content such as the augmented annotations 354 derived from the virtual annotations disappears after the predefined duration. The augmented annotations 354 are displayed to the user in real time and should be immediately noticed by the user so that the user can make appropriate corrections to the CPR steps. A predefined duration of around 5 seconds should be sufficient for the augmented annotations 354 to be noticed by the user. The augmented annotations 354 do not need to be displayed permanently, and projection of the augmented annotations 354 is stopped after the predefined duration. This also allows the trainer to make new annotations to guide the user.
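By way of illustration only, the expiry behaviour may be sketched as a timestamp check (Python); the annotation structure with a creation timestamp and normalised coordinates is a hypothetical representation for the sketch.

```python
import time

ANNOTATION_LIFETIME_S = 5.0  # predefined duration before a projected annotation disappears

def prune_expired(annotations, now=None):
    """Keep only annotations younger than the predefined lifetime.

    Each annotation is assumed to be a dict with a 'created_at' timestamp (seconds)
    and normalised 'u', 'v' coordinates for the projector mapping.
    """
    now = time.time() if now is None else now
    return [a for a in annotations if now - a["created_at"] < ANNOTATION_LIFETIME_S]

# Example: one fresh and one stale annotation; only the fresh one remains projected.
t0 = time.time()
annotations = [
    {"u": 0.4, "v": 0.6, "created_at": t0 - 1.0},
    {"u": 0.7, "v": 0.2, "created_at": t0 - 10.0},
]
print(len(prune_expired(annotations)))  # 1
```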
After the user has completed the external cardiac compression, the next main step of CPR is to simulate use of an automated external defibrillator (AED) on the manikin 110. The AED is normally used on a patient if the patient is non-responsive after chest compressions have been performed on the patient. Additionally, box 306 tells the rescuer to call for emergency services if the patient is still non-responsive.
As shown in Figure 8, the torso 116 of the manikin 110 includes two touch sensors 118. The touch sensors 118 are located at the positions for electrodes of the AED. More specifically, the touch sensors 118 are at positions where the AED electrodes are supposed to be placed on the patient, i.e. one on the right chest just below the collar bone and the other on the left chest just below and to the left of the left nipple.
The training tasks include instructions to activate the touch sensors 118 to simulate using the AED. For example, the user can touch the torso 116 with his/her fingers at the positions of the touch sensors 118. Alternatively, the user can place two AED electrode pads on the torso 116 at the positions of the touch sensors 118. The touch sensors 118 would detect the pressure signals caused by the placement of the user’s fingers or the AED electrode pads on the torso 116. The CPR training is completed in response to the touch sensors 118 detecting the pressure signals.
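By way of illustration only, the completion check for the AED step may be sketched as follows (Python), assuming the two touch sensors report boolean pressure states to the local computer device 140; the function and parameter names are illustrative.

```python
def aed_pads_placed(right_upper_sensor_pressed: bool, left_lower_sensor_pressed: bool) -> bool:
    # The AED step is complete only when both touch sensors detect pressure, i.e. a finger
    # or an electrode pad is placed at each of the two electrode positions on the torso.
    return right_upper_sensor_pressed and left_lower_sensor_pressed

# Example: only one pad placed -> step not yet complete.
print(aed_pads_placed(True, False))  # False
print(aed_pads_placed(True, True))   # True
```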
The AED step helps the user to learn the correct usage of the AED, especially on the correct positions to place the AED electrode pads. In one embodiment, the touch sensors 118 are visibly disposed on the torso 116. This helps users who are fresh trainees to identify the correct positions for the AED electrode pads. In another embodiment, the touch sensors 118 are disposed in the torso 116 so that users cannot clearly see where the correct positions are. For example, the touch sensors 118 are disposed in the chest cavity or embedded in the material of the torso 116. This may be for more experienced users who should be familiar with the correct positions. The trainer may make use of annotations to guide the user on where to position the AED electrode pads.
After completing the CPR training, the user’s results 400 are shown as in Figure 9. The user’s results 400 include a score 402 and a line chart 404. The points on the line chart 404 are obtained by measuring the value of the health bar 312 at predefined intervals, such as every second. The line chart 404 roughly reflects the quality of the external cardiac compression performed by the user. The score 402 is calculated from the average of all points on the line chart 404. The user’s results 400 further include feedback 406 for the user to learn and improve in the next training. The feedback 406 may include data about the average rate of compressions, average compression depth, completion status, and number of annotations made by the trainer.
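By way of illustration only, the result computation may be sketched as follows (Python), assuming the health-bar value is sampled at predefined intervals to give the line-chart points; the input structures and field names are assumptions made for the sketch.

```python
def compute_results(health_samples, compression_depths_mm, compression_times_s, annotation_count):
    """Summarise a training session.

    health_samples: health-bar values sampled at predefined intervals (e.g. every second),
    forming the points of the line chart. compression_depths_mm and compression_times_s
    are per-compression measurements; annotation_count is the number of trainer annotations.
    """
    # The score is the average of all points on the line chart.
    score = sum(health_samples) / len(health_samples) if health_samples else 0.0
    avg_depth = (sum(compression_depths_mm) / len(compression_depths_mm)
                 if compression_depths_mm else 0.0)
    # Average rate in compressions per minute, estimated from the span between the
    # first and last compression timestamps.
    if len(compression_times_s) > 1:
        span_min = (compression_times_s[-1] - compression_times_s[0]) / 60.0
        avg_rate = (len(compression_times_s) - 1) / span_min if span_min > 0 else 0.0
    else:
        avg_rate = 0.0
    return {
        "score": round(score, 1),
        "line_chart_points": health_samples,
        "average_depth_mm": round(avg_depth, 1),
        "average_rate_per_min": round(avg_rate, 1),
        "trainer_annotations": annotation_count,
    }

# Example with three sampled health values and three compressions half a second apart.
print(compute_results([90.0, 85.0, 95.0], [52.0, 48.0, 55.0], [0.0, 0.5, 1.0], 2))
```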
The augmented reality system 100 and method 200 provide an improved way of training users to perform emergency CPR while giving users feedback during their CPR training. Augmented reality is used to project data around the manikin 110 so that users can receive feedback about their CPR actions. Real-time guidance can be provided by a remote trainer, such as by making annotations in the augmented reality projection, to help users correct any mistakes while they interact with the manikin 110.
More particularly, the user does not need to wear or hold any devices, and the user can perform CPR on the manikin 110 directly with two free hands. Real-life CPR is a two-handed operation and this factor is critical for the CPR to be successful. For example, the user does not need to wear any cumbersome and expensive head-mounted device that restricts his/her field of vision. The user can have a full view of the manikin 110 and the surrounding environment so that the user can perform CPR properly. This helps the user to learn better, and the user would be equipped with better CPR skills useful for rescuing patients and casualties in real life. As the user is not tethered to any wired devices, as would be the case for a head-mounted device, bystanders can stay at the site environment to observe the CPR training.
In the foregoing detailed description, embodiments of the present disclosure in relation to an augmented reality system and method for CPR training are described with reference to the provided figures. The description of the various embodiments herein is not intended to call out or be limited only to specific or particular representations of the present disclosure, but merely to illustrate non-limiting examples of the present disclosure. The present disclosure serves to address at least one of the mentioned problems and issues associated with the prior art. Although only some embodiments of the present disclosure are disclosed herein, it will be apparent to a person having ordinary skill in the art in view of this disclosure that a variety of changes and/or modifications can be made to the disclosed embodiments without departing from the scope of the present disclosure. Therefore, the scope of the disclosure as well as the scope of the following claims is not limited to embodiments described herein.

Claims
1. An augmented reality system for cardiopulmonary resuscitation training, the system comprising: a camera configured for capturing a visual feed of a real-world environment comprising a manikin positioned within a field of view of the camera; a projector configured for projecting onto the manikin positioned within a field of view of the projector; and a local computer device communicative with the camera and projector, the local computer device configured for: aligning the fields of view of the camera and the projector to the manikin; receiving the visual feed from the camera; and sending, to the projector, virtual content generated based on the visual feed to facilitate the cardiopulmonary resuscitation training for a user, wherein the projector is configured for projecting the virtual content as augmented reality content onto the manikin in the real-world environment, the augmented reality content viewable by the user performing the cardiopulmonary resuscitation training on the manikin.
2. The system according to claim 1, wherein the local computer device is configured for: sending the visual feed to a remote computer device; and receiving the virtual content from the remote computer device, the virtual content generated by the remote computer device based on the visual feed sent to the remote computer device.
3. The system according to claim 2, wherein the virtual content comprises virtual annotations made by the remote computer device based on a virtual representation of the manikin in the visual feed.
4. The system according to claim 3, wherein the augmented reality content comprises augmented annotations derived from the virtual annotations, the augmented annotations to guide the user in the cardiopulmonary resuscitation training.
5. The system according to claim 4, wherein the augmented annotations disappear after a predefined duration.
6. The system according to any one of claims 1 to 5, wherein the virtual content comprises a set of training tasks to guide a user in the cardiopulmonary resuscitation training.
7. The system according to claim 6, wherein the training tasks comprise instructions to move a head of the manikin to open an airway in the head.
8. The system according to claim 6 or 7, wherein the training tasks comprise instructions to perform chest compressions on the manikin.
9. The system according to any one of claims 6 to 8, wherein the training tasks comprise instructions to activate touch sensors in the manikin to simulate using an automated external defibrillator.
10. A computer-implemented augmented reality method for cardiopulmonary resuscitation training, the method comprising: aligning, using a local computer device, fields of view of a camera and a projector to a manikin in a real-world environment; capturing, by the camera, a visual feed of the real-world environment comprising the manikin positioned within the field of view of the camera; receiving, by the local computer device, the visual feed from the camera; sending virtual content from the local computer device to the projector, the virtual content generated based on the visual feed to facilitate the cardiopulmonary resuscitation training for a user; and projecting, by the projector, the virtual content as augmented reality content onto the manikin in the real-world environment and positioned within the field of view of the projector, the augmented reality content viewable by the user performing the cardiopulmonary resuscitation training on the manikin.
11. The method according to claim 10, further comprising: sending the visual feed from the local computer device to a remote computer device; generating, by the remote computer device, the virtual content based on the visual feed; and receiving, by the local computer device, the virtual content from the remote computer device.
12. The method according to claim 11, wherein the virtual content comprises virtual annotations made by the remote computer device based on a virtual representation of the manikin in the visual feed.
13. The method according to claim 12, wherein the augmented reality content comprises augmented annotations derived from the virtual annotations, the augmented annotations to guide the user in the cardiopulmonary resuscitation training.
14. The method according to claim 13, further comprising stopping projection of the augmented annotations after a predefined duration.
15. The method according to any one of claims 10 to 14, wherein the virtual content comprises a set of training tasks to guide a user in the cardiopulmonary resuscitation training.
16. The method according to claim 15, wherein the training tasks comprise instructions to move a head of the manikin to open an airway in the head.
17. The method according to claim 15 or 16, wherein the training tasks comprise instructions to perform chest compressions on the manikin.
18. The method according to any one of claims 15 to 17, wherein the training tasks comprise instructions to activate touch sensors in the manikin to simulate using an automated external defibrillator.
19. A manikin for cardiopulmonary resuscitation training, the manikin comprising: a head comprising an inertial measurement unit for measuring movement of the head to detect opening of an airway in the head; and a torso that is resiliently compressible, the torso comprising a distance sensor for measuring compression of the torso, wherein the inertial measurement unit and distance sensor are communicative with a local computer device to process the measurements for the cardiopulmonary resuscitation training.
20. The manikin according to claim 19, wherein the torso comprises a chest cavity, the distance sensor being disposed in the chest cavity for measuring compression of the chest cavity.
21. The manikin according to claim 19 or 20, wherein the torso further comprises two touch sensors located at positions for electrodes of an automated external defibrillator.