
Augmented reality-based firefighter training system and method

Info

Publication number
WO2003015057A1
Authority
WO
WIPO (PCT)
Prior art keywords
firefighter
ruggedized
scba
extinguishing agent
nozzle
Application number
PCT/US2002/025065
Other languages
French (fr)
Inventor
John Franklin Ebersole
John Franklin Ebersole, Jr.
Todd Joseph Furlong
Mark Stanley Bastian
John Franklin Walker
Original Assignee
Information Decision Technologies Llc
Priority claimed from US09/927,043 (US7110013B2)
Priority claimed from US10/123,364 (US6822648B2)
Application filed by Information Decision Technologies LLC
Priority to EP02752732A (EP1423834A1)
Priority to CA002456858A (CA2456858A1)
Publication of WO2003015057A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/40 Hidden part removal
    • A HUMAN NECESSITIES
    • A62 LIFE-SAVING; FIRE-FIGHTING
    • A62B DEVICES, APPARATUS OR METHODS FOR LIFE-SAVING
    • A62B9/00 Component parts for respiratory or breathing apparatus
    • A62B9/006 Indicators or warning devices, e.g. of low pressure, contamination
    • A HUMAN NECESSITIES
    • A62 LIFE-SAVING; FIRE-FIGHTING
    • A62C FIRE-FIGHTING
    • A62C99/00 Subject matter not provided for in other groups of this subclass
    • A62C99/0081 Training methods or equipment for fire-fighting
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/017 Head mounted
    • G02B27/0176 Head mounted characterised by mechanical features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Definitions

  • Augmented reality is a hybrid of a virtual world and the physical world in which virtual stimuli (e.g. visual, acoustic, thermal, olfactory) are dynamically superimposed on sensory stimuli from the physical world.
  • the inventive ARBT system has significant potential to produce a training program that aims to increase skills in Tasks (1) to (4) and that is adaptable to essentially any fire department, large or small, whether on land, air, or sea.
  • Opportunities for Augmented Reality for Training: Augmented reality has emerged as a training tool and can be a medium for successful delivery of training.
  • the cost of an effective training program built around augmented reality-based systems arises primarily from considerations of the computational complexity and the number of senses required by the training exercises. Because of the value of training firefighters in Tasks (1) to (4) for any fire situation, and because the program emphasizes firefighter reactions to (vs. interactions with) fire and smoke, training scenarios can be precomputed.
  • PC technology is capable of generating virtual world stimuli - in real time.
  • the opportunity identified above - which has focused on reactions of firefighters to fire and smoke in training scenarios - is amenable to augmented reality.
  • In augmented reality, sensory stimuli from portions of a virtual world are superimposed on sensory stimuli from the real world. If we consider a continuous scale going from the physical world to completely virtual worlds, then hybrid situations are termed augmented reality.
  • the position on a reality scale is determined by the ratio of virtual world sensory information to real world information.
  • This invention creates a firefighter training solution that builds on the concept of an augmented physical world, known as augmented reality. Ideally, all training should take place in the real world. However, due to such factors as cost, safety, and environment, we have moved some or all of the hazards of the real world to the virtual world while maintaining the critical training parameters of the real world, e.g., we are superimposing virtual fire and smoke onto the real world.
  • Mitler (1991) divides fire models into two basic categories: deterministic and stochastic models. Deterministic models are further divided into zone models, field models, hybrid zone/field models, and network models. For purposes of practicality and space limitations, we limit the following discussions to deterministic models, specifically zone type fire models. Mitler goes on to prescribe that any good fire model must describe convective heat and mass transfer, radiative heat transfer, ignition, pyrolysis and the formation of soot. For our purposes, models of flame structure are also of importance.
  • This aspect of the invention comprises an orthogonal plane billboarding technique that allows textures with fuzzy edges to be used to convey the sense of a soft-edged 3D model in the space. This is done by creating three orthogonal (perpendicular) planes. A texture map is mapped onto each of these planes, consisting of profile views of the object of interest as silhouettes of how they look from each of these directions.
  • One application of this orthogonal plane billboard is the modeling of a human head and torso, which can be used to occlude fire, water, and smoke in the invention.
  • fidelity is increased by providing sufficient accuracy in real-time such that a computer can generate virtual images and mix them with the image data from the specially mounted camera in a way that the user sees the virtual and real images mixed in real time.
  • the head-mounted tracker allows a computer to synchronize the virtual and real images such that the user sees an image being updated correctly with his or her head.
  • the headphones further enhance the virtual/real experience by providing appropriate aural input.
  • Augmented Reality Equipment A description of augmented reality was presented above. Commercial off the shelf technologies exist with which to implement augmented reality applications. This includes helmet-mounted displays (HMDs), position tracking equipment, and live/virtual mixing of imagery.
  • FIG 3 illustrates the geometric particle representation associated with flames.
  • FIG 4 illustrates the three particle systems used to represent a fire.
  • FIG 5 illustrates the idea of two-layer smoke obscuration.
  • FIG 6 illustrates particle arrangement for a surface representation of a particle system.
  • FIG 7 illustrates a surface mesh for a surface representation of a particle system.
  • FIG 8 illustrates the technologies that combine to create an AR firefighter training system and method.
  • FIG 9 is a diagram indicating a nozzle, extinguishing agent stream, and fire and smoke plume.
  • FIG 10 is a diagram of a variant of FIG 9 in which the extinguishing agent stream is modeled with multiple cone layers to represent multiple velocities in the profile of the stream.
  • FIG 11 is a diagram of the three orthogonal planes that contain the three texture mapped images of a human head, useful in understanding the invention.
  • FIG 12 is a diagram of a human head and torso, which will be compared to graphical components in FIG 13; and
  • FIG 13 is a diagram of two sets of orthogonal planes, along with the joint between the two sets, for the human head and torso of FIG 12.
  • FIG 14 is a diagram of the main components of the HMD integrated with the SCBA.
  • FIG 16 is a diagram of a headphone attachment design.
  • FIG 17 is a diagram of a bumper that can be used to protect a mirror.
  • FIG 18 is an exploded view of a mirror mount design that places minimum pressure on the mirror to minimize distortion due to compression.
  • FIG 19 depicts a structure that can be used to protect a mirror from being bumped and to prevent the mirror mount from being hooked from underneath.
  • FIG 20 is a cross-sectional view of the structure of FIG 19.
  • FIGS 21-23 are top, side, and end views, respectively, of the structure in FIGS 19 and 20.
  • FIG 26 is a perspective view of a nozzle and all of the major instrumentation components involved in the preferred embodiment of the invention, except for the cover;
  • FIG 27 is the same as FIG 26, but with the cover in place;
  • FIG 28 is a top view of the fully assembled ruggedized nozzle of FIG 27;
  • FIG 29 is a front view of the fully assembled ruggedized nozzle of FIG 27.
  • FIG 30 schematically depicts the basic optical and tracking components of the preferred embodiment of the invention and one possible arrangement of them.
  • FIG 31 shows the same components with an overlay of the optical paths and relationships to the components.
  • FIG 32 is a dimensioned engineering drawing of the prism of FIG 30.
  • FIG 33 shows the components of FIG 30 in relation to the SCBA face mask and protective shell.
  • FIG 34 shows externally visible components, including the SCBA and protective shell.
  • FIG 1 is a block diagram indicating the hardware components of the augmented reality (AR) firefighter training system.
  • Imagery from a head-worn video camera 4 is mixed in video mixer 3 via a linear luminance key with computer-generated (CG) output that has been converted to NTSC using VGA-to-NTSC encoder 2.
  • the luminance key removes white portions of the computer-generated imagery and replaces them with the camera imagery.
  • Black computer graphics remain in the final image, and luminance values for the computer graphics in between white and black are blended appropriately with the camera imagery.
  • the final image is displayed to a user in head-mounted display (HMD) 5.
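The keying rule described in the preceding bullets can be sketched in a few lines. The sketch below is illustrative rather than the patent's implementation; the Rec. 601 luma weights and the helper names are assumptions.

```python
def luminance(rgb):
    """Rec. 601 luma of an 8-bit RGB pixel, scaled to 0.0-1.0."""
    r, g, b = rgb
    return (0.299 * r + 0.587 * g + 0.114 * b) / 255.0

def key_pixel(cg, camera):
    """Linear luminance key: white CG shows camera, black CG shows CG,
    intermediate luminances blend proportionately."""
    k = luminance(cg)
    return tuple(round(k * cam + (1.0 - k) * c)
                 for c, cam in zip(cg, camera))

# A mid-gray CG pixel blends roughly half CG, half camera:
print(key_pixel((128, 128, 128), (10, 200, 30)))
```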
  • This system requires significant hardware to accomplish the method of the invention.
  • the user interfaces with the system using an instrumented and ruggedized firefighter's SCBA and vari-nozzle. This equipment must be ruggedized since shock sensitive parts are mounted on it. Additionally, other equipment is used to mix the real and computer generated images and run the simulation.
  • Two preferred trackers for this invention are the INTERSENSE (InterSense, Inc., 73 Second Avenue, Burlington, MA 01803, USA) IS-900TM and IS-600 Mark 2 PlusTM.
  • the HMD 52 can be either see-through or non-see-through in this invention.
  • the preferred method of attachment to the SCBA mask is to cut out part or all of the transparent (viewing) portion of the mask to allow the bulk of the HMD to stick out, while placing the HMD optics close to the wearer's eyes. Holes drilled through the mask provide attachment points for the HMD.
  • the preferred HMD for this invention is the VIRTUAL RESEARCH (Virtual Research Systems, Inc., 3824 Vienna Drive, Aptos, California 95003) V6TM for a non-see-through method. (See FIG 14)
  • any SCBA mask 53 (FIG 14) can be used with this invention.
  • One preferred mask is the SCOTT (Scott Aviation, A Scott Technologies Company, Erie, Lancaster, NY 14086) AV2000TM.
  • This mask is an example of the state of the art for firefighting equipment, and the design of the mask has a hole near the wearer's mouth that allows easy breathing and speaking when a regulator is not attached.
  • the mask face seal, the "catcher's mask" style straps for attaching the mask, and the nose cup are features that are preserved.
  • the rest of the SCBA can be blacked out by using an opaque substance such as tape, foam, plastic, rubber, silicone, paint, or preferably a combination of plastic, silicone, and paint. This is done to ensure that the trainee doesn't see the un-augmented real world by using his or her un-augmented peripheral vision to see around AR smoke or other artificial (computer-generated virtual) obstacles.
  • the instrumentation that is protected in the instrumented SCBA consists of (1) the head mounted display (HMD) used to show an image to the user; (2) the camera used to acquire the image the user is looking at; (3) the InterSense InertiaCube used to measure the SCBA orientation; (4) two InterSense SoniDiscs used to measure the SCBA position; and (5) the prism used to shift the image in front of the user's eyes so that the image is in front of the camera. All of this equipment, except for the prism, has electrical connections that carry signals through a tether to a computer, which receives and processes these signals.
  • the eye 137 of the person wearing the SCBA looks through the optics 138 to see the image formed on the display element inside the electronics portion 139 of the HMD.
  • the image of the outside world is captured by camera 135, which looks through a prism 140 that has two reflective surfaces to bend the path of light to the camera 135.
  • the tracking components include the InertiaCube 136 and two SoniDiscs 141 which are positioned on either side of the camera, one going into the figure, and one coming out of the figure.
  • the InertiaCube 136 can be placed anywhere there is room on the structure, but the SoniDiscs 141 must be at the top of the structure in order to have access to the external tracking components.
  • FIG 31 shows detailed sketches of the light paths.
  • Upon entering the prism 150 at the polished transparent entrance point 153, the FOV 151 of the camera 155 is temporarily reduced due to the refractive index of the glass, preferably SFL6, as it has a very high index of refraction while maintaining a relatively low density.
  • This reduced FOV 151 is reflected first off mirrored surface 152 and then mirrored surface 148 before exiting through surface 149.
  • the FOV 151 is restored to its original size, and any aberrations due to the glass are eliminated since the FOV 151 enters and exits the prism perpendicular to surfaces 153 and 149.
  • the image captured by the camera 155 is effectively taken from the virtual eye-point 147, even though the real eye-point of the camera is at point 154.
  • the virtual eye-point 147 would ideally be at the same point as the user's eye-point 144. To make this happen, however, the optics would have to be bigger. It is preferred to use smaller optics that place the virtual eye-point 147 slightly forward of the user's eye-point 144. This arrangement tends to be acceptable to most users. Even though the virtual eye-point 147 isn't lined up exactly with the eye 144, the HMD (146 and 145) as well as the prism 150 are all co-located on the same optical axis 143, thereby minimizing the disorientation of the user.
  • If mirrors are used, there are several choices of materials. The lightest and cheapest are plastic mirrors. A step up in quality would be the use of glass mirrors, especially if they are front-surface mirrors (versus the typical back-surfaced mirrors used in households). The highest durability and quality can be achieved with metallic mirrors.
  • Metallic mirrors, preferably aluminum, can be manufactured to have tough, highly reflective surfaces. Metal mirrors can be made with built-in mounting points as well, yielding a mirror very well suited to the invention's needs for light weight, compact size, and durability.
  • the very simplest alternative method of camera arrangement would be to put the camera parallel to and directly above the optical axis of HMD 146 and 145. This method negates the need for a prism or mirrors, but it loses all of the benefit of the virtual eye-point on the optical axis of the HMD 146 and 145.
  • a hard plastic electronics enclosure or shell 170 attaches to the SCBA 174 preferably with bolts 173, providing a means for hiding from view and protecting from the elements all electronic equipment, except for the SoniDisc speakers 169, which must be exposed to the air to allow the separate tracking devices (not shown) to receive the ultrasonic chirps coming from the SoniDiscs.
  • the plastic shell 170 that surrounds all of the equipment should be made of a tough material, such as nylon, that can withstand the shock of being dropped, yet is slightly bendable, allowing for a little bit of inherent shock-mounting for the equipment.
  • the HMD 176, prism 171, camera 168, and/or tracking equipment 167 and 179 can be mounted to the SCBA 174 and plastic shell 170 with rubber mounting points (not shown).
  • the HMD, prism, camera, and/or tracking equipment can all be mounted together with a very rigid structure, for example a metallic frame (not shown). That rigid structure could then be mounted separately to the plastic shell, preferably with shock- absorbing mounts.
  • the signal wires (not shown) coming from the instrumentation 168, 176, 167, and 179 come out of the plastic shell 170 through a single hole 166 with built-in strain relief, ensuring that the cables cannot be pulled out of the plastic shell 170 through normal use, and also ensuring that pulling on the cables will not create unacceptable cable wear.
  • the cables coming out of the plastic shell can be attached to a single, specialized connector either mounted on the plastic cover at the exit point 166, or preferably attached to the belt of the wearer. From that connector, another single cable connects this one connector to the computer (not shown) and other equipment (not shown) by splitting the cable into its sub-connectors as needed by the various components.
  • This specialized connector provides for easy connect and disconnect of the equipment from the user.
  • the equipment is protected by a plastic cover which protects the overall assembly from both shock and penetration by foreign agents, such as water and dirt.
  • the InertiaCube and SoniDiscs are easily knocked out of calibration and are sensitive to shock; the protective cover and mounting provide a great deal of shock protection.
  • One potential issue with the HMD 176 is the build-up of heat, as the HMD gives off a substantial amount.
  • One method is to put vent holes (not shown) in the plastic cover 170, allowing direct access to cool air outside the cover 170; however, that can let in foreign contaminants.
  • the preferred method is to have one-way valves 172, 180 inside the SCBA mask 174. In this preferred method, as the user breathes in air, the air comes in through the mouthpiece as is normal, but then the air gets redirected by a one-way valve 175, up through a one-way valve 172 and thus into shell 170, then over the electronics, and away from the electronics through another one-way valve 180 before entering the user's airway. When exhaling, the moist, warm air gets a direct path to the outside via the one-way valve 175. This redirection of air can preferably be accomplished through the use of typical one-way rubber valves.
  • Additional hardware is attached to a firefighter's vari-nozzle 193 to allow control of a virtual water stream.
  • the nozzle used is an Elkhart vari-nozzle.
  • the instrumentation for the nozzle consists of (1) a potentiometer used to measure the nozzle fog pattern; (2) a potentiometer used to measure the nozzle bail angle; (3) an INTERSENSE (Burlington, MA) InertiaCube used to measure the nozzle orientation; and (4) two INTERSENSE (Burlington, MA) SoniDiscs used to measure the nozzle position. All of this equipment is connected by wiring that carries message signals through a tether to a computer and associated equipment (including an analog-to-digital converter) which receives and processes these signals.
  • the InertiaCube and SoniDiscs are equipment from the InterSense IS-600 line of tracking equipment. If the end user of the nozzle calls for tracking equipment other than the IS-600 line, the invention could readily be adapted to protect equipment from the IS-900 line from InterSense, or 3rd Tech's optical tracking equipment.
  • At least two potentiometers are mounted directly to the vari-nozzle 193.
  • the InertiaCube 194 and SoniDiscs 183 are attached to a rigid, yet floating hard plastic island 181 which holds these items firmly in place.
  • This island 181 is attached to a rigid hard plastic base 190 by two narrow, flexible posts 189.
  • the base 190 is rigidly attached to the nozzle.
  • A connection can be held much more securely with this method than with standard plugs. By using solder or strong screw-down terminals, the wire connections can be assured a quality connection.
  • a separate cable (not shown) connects this common mounting block 195 to the computer and associated equipment which receives data from the nozzle.
  • the specific wires that can be easily mounted in this way include (a) the leads from the SoniDiscs 183, and (b) the wires attached to the leads 188 of the potentiometer under the plate 187 and the potentiometer inside the pattern selector 192.
  • the cable connection to the InertiaCube 194 may not be suitable to wire separately in this fashion, since the InertiaCube signals may be sensitive to interference due to shielding concerns, though it should be possible to use a connector provided by InterSense.
  • the wires and/or cables are routed through a hole in the nozzle (not shown), and down the hose (not shown) to the end where they can come out and connect to the needed equipment. This method keeps the wires from being visible to the user.
  • the equipment is protected by a plastic cover 198, which protects the overall assembly from both shock and penetration by foreign agents, such as water and dirt.
  • the INTERSENSE (Burlington, MA) InertiaCube is sensitive to shock, especially from the action of the metal bail handle 184 hitting the metal stops at either extreme of its range of motion.
  • the InertiaCube and SoniDiscs are mounted on an island which is held up by two soft polymeric pillars. This provides a great deal of protection against shock from slamming the bail handle all the way forward very quickly.
  • This island is also surrounded by a thin layer of padding (not shown in the figures) located between the island 181, and the cover 198 to protect the island from horizontal shock.
  • This thin layer also provides further protection from penetration by foreign agents, and can be made such that an actual seal is made around the circumference of the island.
  • a small shoe made of soft material is used as a bumper 191 to prevent shock caused by setting the device down too rapidly or dropping it.
  • the shoe also provides wear resistance to the base part in the assembly.
  • the shoe is also a visual cue to the user that the device is to be set down using the shoe as a contact point.
  • the protective design uses an alternating yellow and black color scheme (not shown) to get the user's attention that this is not a standard part. Additionally, a sign attached to the cover (not shown) is used which indicates that the device is to be used for training purposes only.
  • One alternative to the display setup diagrammed in FIG 1 is the use of optical see-through AR.
  • camera 4 and video mixer 3 are absent, and HMD 5 is one that allows its wearer to see computer graphics overlaid on his/her direct view of the real world.
  • This embodiment is not currently preferred for fire fighting because current see-through technology does not allow black smoke to obscure a viewer's vision.
  • a second alternative to the display setup diagrammed in FIG 1 is capturing and overlaying the camera video signal in the computer, which removes the video mixer 3 from the system diagram.
  • This allows high-quality imagery to be produced because the alpha, or transparency, channel of the computer 1 graphics system may be used to specify the amount of blending between camera and CG imagery.
  • This embodiment is not currently preferred because the type of image blending described here requires additional delay of the video signal over the embodiment of FIG 1, which is undesirable in a fire fighting application because it reduces the level of responsiveness and interactivity of the system.
  • a third alternative to the display setup diagrammed in FIG 1 is producing two CG images and using one as an external key for luminance keying in a video mixer.
  • two VGA-to-NTSC encoders 2 are used to create two separate video signals from two separate windows created on the computer 1.
  • One window is an RGB image of the scene
  • a second window is a grayscale image representing the alpha channel.
  • the RGB image may be keyed with the camera image using the grayscale alpha signal as the keying image.
  • Such an embodiment allows controllable transparency with a minimum of real-world video delay.
  • FIG 1 diagrams the two 6 degree-of-freedom (6DOF) tracking stations 7 and 8 present in all embodiments of the system.
  • One tracking station 7 is attached to the HMD 5 and is used to measure a user's eye location and orientation in order to align the CG scene with the real world. In addition to matching the real-world and CG eye locations, the fields of view must be matched for proper registration.
  • the second tracking station 8 measures the location and orientation of a nozzle 9 that may be used to apply virtual extinguishing agents. Prediction of the 6DOF locations of 7 and 8 is done to account for system delays and allow correct alignment of real and virtual imagery. The amount of prediction is varied to allow for a varying CG frame rate.
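The pose prediction mentioned above can be illustrated with simple constant-velocity dead reckoning: each tracked pose is extrapolated forward by the expected latency, and the prediction interval follows the measured CG frame time. The constant-velocity model, the per-axis orientation handling, and every name and number below are assumptions rather than the patent's stated algorithm.

```python
def predict_pose(position, velocity, yaw, yaw_rate, latency_s):
    """Constant-velocity dead reckoning of one 6DOF tracking station."""
    predicted_position = tuple(p + v * latency_s
                               for p, v in zip(position, velocity))
    predicted_yaw = yaw + yaw_rate * latency_s   # each Euler axis alike
    return predicted_position, predicted_yaw

# The prediction interval is varied with the CG frame rate:
frame_time = 1.0 / 30.0      # seconds per rendered frame, measured each frame
tracker_delay = 0.015        # fixed transport/processing delay (assumed)
print(predict_pose(position=(0.0, 1.7, 0.0), velocity=(0.2, 0.0, 0.0),
                   yaw=0.0, yaw_rate=0.5,
                   latency_s=frame_time + tracker_delay))
```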
  • the system uses an InterSense IS-600 tracking system 6, and it also supports the InterSense IS-900 and Ascension Flock of Birds.
SOFTWARE
  • A method for real-time depiction of fire is diagrammed in FIGS 2-4.
  • a particle system is employed for each of the persistent flame, intermittent flame, and buoyant plume components of a fire, as diagrammed in FIG 4.
  • the particles representing persistent and intermittent flames are created graphically as depicted in FIG 3.
  • Four triangles make up a fire particle, with transparent vertices 12-15 at the edges and an opaque vertex 16 in the center. Smooth shading of the triangles interpolates vertex colors over the triangle surfaces.
  • the local Y axis 27 of a fire particle is aligned to the direction of particle velocity, and the particle is rotated about the local Y axis 27 to face the viewer, a technique known as "billboarding."
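One common way to realize this axis-constrained billboarding is to build a per-particle basis whose local Y follows the velocity and whose quad normal is the viewer direction with its component along Y removed. The sketch below assumes that construction and ignores the degenerate case of a viewer positioned exactly along the velocity axis.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def axis_billboard_basis(velocity, particle_pos, eye_pos):
    """Local axes for a particle quad whose Y axis follows the particle
    velocity and which spins about that axis to face the viewer."""
    y = normalize(velocity)                       # local Y = motion direction
    to_eye = tuple(e - p for e, p in zip(eye_pos, particle_pos))
    # Quad normal: the eye direction with its component along Y removed.
    along = dot(to_eye, y)
    n = normalize(tuple(t - along * yc for t, yc in zip(to_eye, y)))
    x = cross(y, n)                               # quad width direction
    return x, y, n

print(axis_billboard_basis((0.1, 1.0, 0.0), (0.0, 1.0, 0.0), (0.0, 1.5, 3.0)))
```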
  • a fire texture map is projected through both the persistent and intermittent flame particle systems and rotated about a vertical axis to give a horizontal swirling effect.
  • the flame base 17 is used as the particle emitter for the three particle systems, and buoyancy and drag forces are applied to each system to achieve acceleration in the persistent flame, near-constant velocity in the intermittent flame, and deceleration in the buoyant plume.
  • An external force representing wind or vent flow may also be applied to affect the behavior of the fire plume particles.
  • When flame particles are born, they are given a velocity directed towards the center of the fire and a life span inversely proportional to their initial distance from the flame center.
  • the emission rate of intermittent flame particles fluctuates sinusoidally at a rate determined by a correlation with the flame base area. Flame height may be controlled by appropriately specifying the life span of particles in the center portion of the flame.
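A hedged sketch of the emission and life-span rules just described. The sinusoidal fluctuation frequency here uses the well-known empirical puffing correlation f ≈ 1.5/sqrt(D) Hz for a fire base of diameter D, an assumption standing in for whatever correlation the patent intends, and the inverse life-span law is one simple reading of "inversely proportional".

```python
import math

def intermittent_emission_rate(base_rate, flame_base_area, t):
    """Sinusoidally fluctuating emission rate.  The pulsation frequency
    uses the empirical puffing correlation f ~ 1.5 / sqrt(D) Hz for a
    base of diameter D (an assumption, not the patent's formula)."""
    diameter = 2.0 * math.sqrt(flame_base_area / math.pi)
    freq = 1.5 / math.sqrt(diameter)
    return base_rate * (1.0 + 0.5 * math.sin(2.0 * math.pi * freq * t))

def particle_life_span(max_life, dist_from_center, base_radius):
    """Life span inversely proportional to the particle's initial
    distance from the flame center, so central particles rise highest."""
    return max_life / (1.0 + dist_from_center / base_radius)

print(intermittent_emission_rate(200.0, 0.5, t=0.3))   # particles per second
print(particle_life_span(1.2, 0.1, 0.4))               # seconds
```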
  • a number of graphical features contribute to the realistic appearance of the fire and smoke plume diagrammed in FIG 4.
  • Depth buffer writing is disabled when drawing the particles to allow blending without the need to order the drawing of the particles from back to front.
  • a light source is placed in the center of the flames, and its brightness fluctuates in unison with the emission rate of the intermittent flame particle system. The light color is based on the average color of the pixels in the fire texture map applied to the flame particles. Lighting is disabled when drawing the flame particles to allow them to be at full brightness, and lighting is enabled when drawing the smoke particles to allow the light source at the center of the flame to cast light on the smoke plume.
  • a billboarded, texture-mapped polygon with a texture that is a round shape fading from bright white in the center to transparent at the edges is placed in the center of the flame to simulate a glow.
  • the RGB color of the polygon is the same as the light source, and the alpha of the polygon is proportional to the density of smoke in the atmosphere. When smoke is dense, the glow polygon masks the individual particles, making the flames appear as a flickering glow through smoke.
  • the glow width and height is scaled accordingly with the flame dimensions.
  • FIG 5 describes the concept of two layers of smoke in a compartment.
  • smoke from the buoyant plume rises to the top of a room and spreads out into a layer, creating an upper layer 20 and a lower layer 21 with unique optical densities.
  • the lower layer has optical density k1, and the upper layer has optical density k2.
  • a polygonal model of the real room and contents is created.
  • the model is aligned to the corresponding real-world room using the tracking system of FIG 1.
  • the above equations are applied to modify the vertex colors to reflect smoke obscuration. Smooth shading interpolates between vertex colors so that per-pixel smoke calculations are not required. If the initial color, Ci, of the vertices is white, and the smoke color, Cs, is black, the correct amount of obscuration of the real world will be achieved using the luminance keying method described above.
  • the above equations can be applied to the alpha value of vertices of the room model.
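The obscuration equations themselves do not survive in this extract. A standard Beer-Lambert transmission form consistent with the surrounding text (per-layer optical densities k1 and k2, blending an initial vertex color Ci toward the smoke color Cs) would look like the sketch below; the exponential form, the z-up coordinate convention, and all names are assumptions.

```python
import math

def segment_lengths(eye, vertex, layer_height):
    """Split the eye-to-vertex segment into the portions below and above
    the smoke layer interface, a horizontal plane at layer_height (z up)."""
    total = math.dist(eye, vertex)
    z0, z1 = eye[2], vertex[2]
    if (z0 - layer_height) * (z1 - layer_height) >= 0:   # both on one side
        below = total if max(z0, z1) <= layer_height else 0.0
        return below, total - below
    t = (layer_height - z0) / (z1 - z0)                  # interface crossing
    below = total * (t if z0 < layer_height else 1.0 - t)
    return below, total - below

def obscured_color(ci, cs, eye, vertex, layer_height, k_lower, k_upper):
    """Blend initial vertex color ci toward smoke color cs using a
    Beer-Lambert transmission through the two layers."""
    d_low, d_up = segment_lengths(eye, vertex, layer_height)
    trans = math.exp(-k_lower * d_low - k_upper * d_up)
    return tuple(c * trans + s * (1.0 - trans) for c, s in zip(ci, cs))

# White vertex seen through a dense upper layer trends toward black smoke:
print(obscured_color((1.0, 1.0, 1.0), (0.0, 0.0, 0.0),
                     (0.0, 0.0, 1.0), (4.0, 0.0, 2.2),
                     layer_height=1.8, k_lower=0.1, k_upper=1.2))
```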
  • color values are generally specified using an integer range of 0 to 255 or a floating point range of 0 to 1.0.
  • this color specification does not take into account light sources such as windows to the outdoors, overhead fluorescent lights, or flames, which will shine through smoke more than non-luminous objects such as walls and furniture.
  • a luminance component was added to the color specification to affect how objects are seen through smoke.
  • the same polygonal model used for smoke obscuration is also used to allow real-world elements to occlude the view of virtual objects such as smoke and fire.
  • a fire plume behind a real desk that has been modeled is occluded by the polygonal model. In the combined AR view, it appears as if the real desk is occluding the view of the fire plume.
  • Graphical elements such as flame height, smoke layer height, upper layer optical density, and lower layer optical density may be given a basis in physics by allowing them to be controlled by a zone fire model.
  • a file reader developed for the system allows CFAST models to control the simulation.
  • CFAST, or Consolidated Fire And Smoke Transport, is a zone model developed by the National Institute of Standards and Technology (NIST) and used worldwide for compartment fire modeling.
  • Upper layer temperature calculated by CFAST is monitored by the simulation to predict the occurrence of flashover, or full room involvement in a fire. The word "flashover" is displayed to a trainee and the screen is turned red to indicate that this dangerous event in the development of a fire has occurred.
  • a key component in a fire fighting simulation is simulated behavior and appearance of an extinguishing agent.
  • water application from a vari-nozzle 9, 23, and 25 has been simulated using a particle system.
  • a surface representation of a particle system was devised. This representation allows very few particles to represent a water stream, as opposed to alternative methods that would require the entire volume of water to be filled with particles.
  • Behavior such as initial water particle velocity and hose stream range for different nozzle settings is assigned to a water particle system. Water particles are then constrained to emit in a ring pattern from the nozzle location each time the system is updated. This creates a series of rings of particles 22 as seen in FIG 6.
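A minimal sketch of the ring emission just described, assuming the nozzle points along +X so the ring need not be rotated into an arbitrary nozzle frame; the 15-degree spray half-angle and all names are illustrative.

```python
import math

def emit_ring(nozzle_pos, half_angle, speed, n_particles):
    """Emit one ring of water particles in a cone around the +X nozzle
    axis; phi walks around the ring."""
    ring = []
    for i in range(n_particles):
        phi = 2.0 * math.pi * i / n_particles
        axial = speed * math.cos(half_angle)          # along the nozzle axis
        radial = speed * math.sin(half_angle)         # outward spread
        vel = (axial, radial * math.cos(phi), radial * math.sin(phi))
        ring.append({"pos": list(nozzle_pos), "vel": vel})
    return ring

# One ring per update: successive rings form the surface of FIG 6, and
# connecting matching particles in adjacent rings gives the mesh of FIG 7.
ring = emit_ring((0.0, 0.0, 1.0), math.radians(15), 20.0, 16)
print(len(ring), ring[0]["vel"])
```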
  • collision detection with the polygonal room and contents model is employed.
  • a ray is created from a particle's current position and its previous position, and the ray is tested for intersection with room polygons to detect collisions.
  • the particle's velocity component normal to the surface is reversed and scaled according to an elasticity coefficient.
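The segment test and velocity reflection described in the last two bullets can be sketched against a single infinite plane (real room geometry would test triangles instead); the elasticity value and names are placeholders.

```python
def reflect_off_plane(prev_pos, cur_pos, velocity, plane_point,
                      plane_normal, elasticity=0.3):
    """If the particle's movement segment crossed the plane this update,
    reverse and scale the normal component of its velocity; otherwise
    return the state unchanged."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    d_prev = dot([p - q for p, q in zip(prev_pos, plane_point)], plane_normal)
    d_cur = dot([p - q for p, q in zip(cur_pos, plane_point)], plane_normal)
    if d_prev * d_cur > 0.0:                    # no crossing: no collision
        return cur_pos, velocity
    vn = dot(velocity, plane_normal)            # normal velocity component
    new_vel = [v - (1.0 + elasticity) * vn * n  # reversed and scaled
               for v, n in zip(velocity, plane_normal)]
    t = d_prev / (d_prev - d_cur)               # parametric hit point
    hit = tuple(p + t * (c - p) for p, c in zip(prev_pos, cur_pos))
    return hit, new_vel

pos, vel = reflect_off_plane((0.0, 0.0, 0.5), (0.0, 0.0, -0.1),
                             (0.0, 0.0, -3.0), (0.0, 0.0, 0.0),
                             (0.0, 0.0, 1.0))
print(pos, vel)   # particle placed at the floor, bounced at 0.3 elasticity
```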
  • the same collision method is applied to smoke particles when they collide with the ceiling of a room. Detection of collision may be accomplished in a number of ways.
  • the "brute force" approach involves testing every particle against every polygon.
  • a space partitioning scheme may be applied to the room polygons in a preprocessing stage to divide the room into smaller units.
  • Some space partitioning schemes include creation of a uniform 3- D grid, binary space partitioning (BSP), and octree space partitioning (OSP).
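Of the schemes listed, the uniform 3-D grid is the simplest to sketch: triangles are bucketed by cell in a preprocessing step, and a particle only tests triangles registered in its own cell. Binning by vertex rather than by bounding box is a simplification made for brevity and can miss large triangles; the cell size and names are assumptions.

```python
from collections import defaultdict

def build_uniform_grid(triangles, cell_size):
    """Bucket triangles into a uniform 3-D grid keyed by integer cell
    index, so collision queries touch only nearby triangles."""
    grid = defaultdict(list)
    for tri_id, tri in enumerate(triangles):
        cells = {tuple(int(c // cell_size) for c in v) for v in tri}
        for cell in cells:
            grid[cell].append(tri_id)
    return grid

def candidate_triangles(grid, point, cell_size):
    """Triangle ids sharing the query point's cell."""
    return grid.get(tuple(int(c // cell_size) for c in point), [])

tris = [((0, 0, 0), (1, 0, 0), (0, 1, 0)),     # near the origin
        ((5, 5, 0), (6, 5, 0), (5, 6, 0))]     # far corner of the room
grid = build_uniform_grid(tris, cell_size=2.0)
print(candidate_triangles(grid, (0.5, 0.5, 0.0), 2.0))   # -> [0]
```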
  • Water particles that collide with the surface on which the flame base is located are stored as particles that can potentially contribute to extinguishment.
  • the average age of these particles is used in conjunction with the nozzle angle to determine the average water density for the extinguishing particles.
  • Triangles are created using the particle locations as vertices. If a triangle is determined to be on top of the fire, then an extinguishment algorithm is applied to the fire.
  • Extinguishing a fire primarily involves reducing and increasing the flame height in a realistic manner. This is accomplished by managing three counters that are given initial values representing extinguish time, soak time, and reflash time. If intersection between water stream and flame base is detected, the extinguish time counter is decremented, and the flame height is proportionately decreased until both reach zero. If water is removed before the counter reaches zero, the counter is incremented until it reaches its initial value, which increments the flame height back to its original value. After flame height reaches zero, continued application of water decrements the soak counter until it reaches zero. If water is removed before the soak counter reaches zero, the reflash counter decrements to zero and the flames re-ignite and grow to their original height.
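The three-counter extinguishment logic described above maps naturally onto a small state update. The initial counter values, tick sizes, and the instantaneous re-ignition below are placeholders, not values from the patent.

```python
class FireState:
    """Counter-based extinguishment following the behavior described
    above; initial values and tick sizes are placeholders."""

    def __init__(self, extinguish=5.0, soak=3.0, reflash=2.0, height=2.0):
        self.extinguish_init, self.extinguish = extinguish, extinguish
        self.soak_init, self.soak = soak, soak
        self.reflash_init, self.reflash = reflash, reflash
        self.height_init, self.height = height, height

    def update(self, water_on_fire, dt):
        if self.extinguish > 0.0:                 # flames still burning
            if water_on_fire:                     # knock the fire down
                self.extinguish = max(0.0, self.extinguish - dt)
            else:                                 # flames recover
                self.extinguish = min(self.extinguish_init,
                                      self.extinguish + dt)
            # Flame height tracks the extinguish counter proportionately.
            self.height = (self.height_init *
                           self.extinguish / self.extinguish_init)
        elif self.soak > 0.0:                     # knocked down, not yet soaked
            if water_on_fire:
                self.soak = max(0.0, self.soak - dt)
            else:
                self.reflash = max(0.0, self.reflash - dt)
                if self.reflash == 0.0:           # re-ignition
                    self.extinguish = self.extinguish_init
                    self.soak = self.soak_init
                    self.reflash = self.reflash_init

fire = FireState()
for _ in range(100):                              # 10 s of steady water
    fire.update(water_on_fire=True, dt=0.1)
print(fire.height, fire.soak)                     # 0.0 0.0: extinguished
```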
  • FIG 9 represents the preferred embodiment of a real-time graphical simulation of an extinguishing agent 29 (e.g., water or foam) exiting a nozzle 28 in the vicinity of a fire and smoke plume 30.
  • each extinguishing agent, fire, or smoke particle will have a mass and a velocity associated with it.
  • a force on the fire and smoke particles can be calculated from the speed and direction of extinguishing agent particles.
  • a force on the fire and smoke particles can be calculated based on the change in velocity, using an equation of the form (other actual forms are envisioned, but they will mainly show similar characteristics) Δv = K × (Vagent - Vparticle) and F = Mass × Δv / Δt, where K is a factor that can be adjusted (from a nominal value of 1), Mass is the mass of the fire or smoke particle, and Δt is the time in between simulation updates.
  • the application of the calculated force simulates the visual effect of the extinguishing agent stream causing airflow that alters the motion of fire and smoke particles. Additionally, the calculations can be applied to smoke or other particles that are not part of a fire and smoke plume, such as extinguishing agent passing through ambient steam or ambient smoke particles in a room.
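Putting the reconstructed relations together, a per-particle force computation might look like the following. The difference-of-velocities form of Δv is an assumption drawn from the cone model described below, and all numbers are illustrative.

```python
def agent_force_on_particle(particle_vel, particle_mass, agent_vel,
                            dt, k=1.0):
    """F = Mass * dv / dt, with the imparted velocity change dv taken as
    the scaled difference between agent and particle velocities."""
    dv = [k * (a - p) for a, p in zip(agent_vel, particle_vel)]
    return [particle_mass * d / dt for d in dv]

# Smoke particle drifting upward, met by a fast horizontal water stream:
force = agent_force_on_particle(particle_vel=(0.0, 0.5, 0.0),
                                particle_mass=0.01,
                                agent_vel=(15.0, 0.0, 0.0),
                                dt=1.0 / 30.0,
                                k=0.4)
print(force)
```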
  • the invention extends to other aerosols, gases, and particulate matter, such as dust, chemical smoke, and fumes.
  • a further refinement for determining a force to apply to particles in the fire and smoke plume 35 would entail modeling the extinguishing agent in cones 32, 33, and 34 (which are affected by gravity and will droop) emitted from the nozzle 31, with the multiple concentric cones 32 and 33 used to apply varying force.
  • One embodiment that can produce the cones 32, 33, and 34 uses a system of rings (which may be modeled as a particle system) emitted from the nozzle; when connected, the rings form cones 32, 33, and 34.
  • fire and smoke particles 35 which are contained mostly inside the inner cone 34 of the extinguishing agent 32-34 can have one level of force applied, and fire and smoke particles 35 which are not contained within cone 34, but are contained within cones 33 or 32 can have a different, often smaller, force applied to them.
  • multiple levels of velocity from extinguishing agent and air entrainment can be easily simulated to apply multiple levels of force to the fire and smoke.
  • the additional cones 33 and 32 do not have to be drawn in the simulation, as they could be used strictly in determining the force to apply to the fire and smoke.
  • the force applied to a particle can be modeled as: (A) the extinguishing agent cone(s) 32, 33, 34 each having a velocity associated with them, (B) a difference in velocity between a particle 35 and the extinguishing agent cone(s) 32, 33, 34 can be calculated, (C) a force can be calculated that scales with that difference, and (D) the particles 35 will accelerate based on the force calculated, approaching the velocity of the extinguishing agent inside of the cone(s) 32, 33, 34.
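A sketch of steps (A)-(D) using straight concentric cones (the drooping, gravity-affected cones of FIG 10 would bend the axis per ring); the half-angles and per-cone velocity scales are illustrative assumptions.

```python
import math

def cone_level(particle_pos, nozzle_pos, nozzle_dir, half_angles):
    """Index of the innermost cone containing the particle (half_angles
    ordered inner to outer), or None if outside all of them.  nozzle_dir
    must be a unit vector."""
    rel = [p - n for p, n in zip(particle_pos, nozzle_pos)]
    dist = math.sqrt(sum(c * c for c in rel))
    if dist == 0.0:
        return 0                                    # at the nozzle tip
    along = sum(r * d for r, d in zip(rel, nozzle_dir))
    if along <= 0.0:
        return None                                 # behind the nozzle
    angle = math.acos(max(-1.0, min(1.0, along / dist)))
    for i, half in enumerate(half_angles):
        if angle <= half:
            return i
    return None

# Inner cone imparts the full agent velocity, outer cones a reduced share:
CONE_VELOCITY_SCALE = [1.0, 0.6, 0.3]
level = cone_level((2.0, 0.1, 0.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
                   [math.radians(5), math.radians(10), math.radians(15)])
print(level, CONE_VELOCITY_SCALE[level])            # innermost cone, scale 1.0
```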
  • the results of the simulated effects described above can be observed by drawing particles as computer-generated graphics primitives using real-time graphics software and hardware.
  • the invention is applicable to such areas as training simulations and computer games.
  • the texture map is faded between these two extremes (plane parallel versus perpendicular to the view plane) as the orientation changes, to accomplish a desirable graphical mixing result that matches the silhouette of the object (or human) while maintaining a softer border around the edge of the silhouette contained in the texture map.
  • the appearance of a soft ("fuzzy") border is made by fading to transparent the edges of the object silhouette in the texture map.
  • a series of three texture maps containing silhouettes 36, 42, and 37 are shown mapped onto each of three orthogonal planes 38, 40 and 41, respectively.
  • the texture maps may fade to transparent at their edges for a fuzzy appearance to the shape.
  • the orthogonal planes are each broken up into 4 quadrants defined by the intersection of the planes, and the 12 resulting quadrants are rendered from back to front for correct alpha blending in OpenGL of the texture maps and planes, with depth buffering enabled.
  • When a plane is perpendicular to the view plane of a virtual viewpoint looking at the object, the plane is rendered to be completely transparent.
  • a linear fade is used to completely fade the texture map to completely opaque when the plane is parallel to the view plane. This fade from opaque to transparent as the planes are turned relative to a viewer is responsible for the large part of the desirable fuzzy appearance to the shape.
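The opacity ramp just described reduces to a small function of the angle between a plane's normal and the view direction; the linear-in-angle form below is one reading of "linear fade", and the names are assumptions.

```python
import math

def plane_alpha(plane_normal, view_dir):
    """Opacity of one orthogonal billboard plane: 1.0 when the plane is
    parallel to the view plane (normal along the view direction), 0.0
    when perpendicular, fading linearly with the angle between them."""
    def unit(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)
    pn, vd = unit(plane_normal), unit(view_dir)
    cos_a = abs(sum(a * b for a, b in zip(pn, vd)))
    angle = math.acos(max(-1.0, min(1.0, cos_a)))   # 0 .. pi/2
    return 1.0 - angle / (math.pi / 2.0)

print(plane_alpha((0, 0, 1), (0, 0, -1)))   # parallel to view plane -> 1.0
print(plane_alpha((1, 0, 0), (0, 0, -1)))   # perpendicular -> 0.0
```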
  • the texture maps used to shade the three planes were created from a digital image of a person, then made into grayscale silhouettes, and so match the silhouette of a human user very well.
  • the edges 39 of the human silhouettes in the texture maps were blurred so that they would fade linearly from solid white (which represents the silhouette) to solid black (which represents non- silhouette portions of the texture map) to look better in an augmented reality situation where a little bit of a fuzzy edge is desirable. This fuzzy edge spans what is equivalently approximately 0.5 inches in real distance.
  • FIG 12 depicts a diagram of a human 44 wearing a head tracker 43 on a head mounted display.
  • the virtual representation of human 44 is shown used in the inventive technique in FIG 13.
  • Item 43 in FIG 12 is a motion tracker, in this case a six-degree-of-freedom motion tracker that measures the head location and orientation.
  • the position and orientation information from tracker 43 can be applied to orthogonal plane billboards.
  • an orthogonal plane billboard torso 47 can be created in approximately the correct place relative to the joint.
  • the torso in this instance may be designed to remain upright, only rotating about a vertical axis.
  • the head in this instance has full 6 degree-of-freedom motion capability based on the data coming from the tracker worn on the head of the user. This allows the head orthogonal plane billboard to be lined up correspondingly with the user's head.
  • the torso orthogonal plane billboard is attached to the pivot point 46 and is placed "hanging" straight down from that point, and has 4 degrees of freedom: three to control its position in space, and one controlling the horizontal orientation.
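A sketch of deriving the 4-DOF torso pose from the 6-DOF head tracker as described: a pivot below the head, a torso hanging straight down from the pivot, and a single rotational degree of freedom about the vertical axis. The fixed offsets and the choice to inherit the head yaw directly are placeholders.

```python
import math

def torso_pose_from_head(head_pos, head_yaw,
                         neck_offset=0.12, torso_drop=0.35):
    """4-DOF torso pose derived from the 6-DOF head tracker: a pivot
    just below the head, a torso center hanging straight down from the
    pivot, and yaw as the single rotational degree of freedom (y up)."""
    x, y, z = head_pos
    pivot = (x, y - neck_offset, z)          # joint between head and torso
    torso_center = (x, y - neck_offset - torso_drop, z)
    return {"pivot": pivot,
            "torso_center": torso_center,
            "torso_yaw": head_yaw}           # rotates only about vertical

print(torso_pose_from_head((0.0, 1.75, 0.0), head_yaw=math.radians(30)))
```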
  • the head and torso models, when lined up to a real human, occlude computer-generated graphics in a scene. If augmented reality video mixing is achieved with a luminance key to combine live and virtual images, white head and torso models will mask out a portion of a computer-generated image for replacement with a live video image of the real world.
  • This invention can be applied to any real world movable objects for which an occlusion model may be needed.
  • the above technique is also applicable to a movable non-person real world object. If the non-person object has no joints, then such an implementation is simpler since the complexity of coupling the separate head and torso models is avoided.
  • the technique for the single movable real-world physical object is functionally identical to the above method when only the head model is used.
  • 3-D audio allows sound volume to diminish with distance from a sound emitter, and it also works with stereo headphones to give directionality to sounds.
  • 3-D audio emitters are attached to the fire and the hose nozzle.
  • the fire sound volume is proportional to physical volume of the fire.
  • Appendix A contains settings for the parameters of particle systems used in the invention. These parameters are meant to be guidelines that give realistic behavior for the particles. Many of the parameters are changed within the program, but the preferred starting parameters for flames, smoke, steam, and water are listed in the appendix.
  • Flashover refers to the point in the evolution of a compartment fire in which the fire transitions from local burning to involvement of the entire compartment.
  • A zone-type fire model should provide sufficient accuracy for meeting our training objectives.
  • Zone models include the Consolidated Fire and Smoke Transport Model (CFAST) from NIST and the WPI fire model from Worcester Polytechnic Institute, among others.
  • the outputs of a zone-type fire model can be extended to achieve a visual representation of a compartment fire.
  • Task 1: Infrastructure for Real-Time Display of Fire. Task Summary.
  • the organization and structuring of information to be displayed is as important as actual display processing for real-time dynamical presentation of augmented environments.
  • As a firefighter moves through a scenario (using an augmented reality device), the location, extent, and density of fire and smoke change.
  • an approach is to examine the transfer of data to and from the hard disk, through system memory, to update display memory with the desired frequency.
  • Precomputation of the bulk of a firefighter training simulation implies that most of the operations involved in real-time presentation of sensory information revolve around data transfer.
  • Task 2: Visual Representation of Smoke and Fire. Task Summary.
  • the way in which sensory stimuli are presented in an ARBT scenario may or may not affect task performance by a student. It is essential to capture the aspects of the sensory representations of fire and smoke that affect student behavior in a training scenario without the computational encumbrance of those aspects that do not affect behavior.
  • For the purposes of providing sensory stimuli for firefighter training we need to know not only the spatial distribution and time evolution of temperature and hot gases in a compartment fire, but also the visible appearance of smoke and flame, along with sounds associated with a burning compartment, taken over time. There are three tiers of attributes of fire and smoke:
  • a zone-type fire model can be used to determine the location and extent of the smoke and flame. In addition to these quantities, the zone-type fire model also will yield aerosol densities in a given layer. Values for optical transmission through smoke can be calculated using a standard model such as found in the CFAST (Consolidated Fire and Smoke Transport) model, or in the EOSAEL (Electro-Optical Systems Atmospheric Effects Library) code.
  • the intermittent flame region in a fire oscillates with regularity, and the oscillations arise from instabilities at the boundary between the fire plume and the surrounding air.
  • the instabilities generate vortex structures in the flame which in turn rise through the flame resulting in observed oscillations.
  • the visual dynamics of flame can be modeled from empirical data such as is known in the art. Measures of Success: This task can be judged on the aesthetics of the visual appearance of the simulated fire and smoke. Ultimately, the visual appearance of fire and smoke should be evaluated relative to the efficacy of an ARBT system.
  • Task 3: Position Anchoring. Task Summary: Augmented reality techniques rely on superimposing information onto a physical scene. Superposition means that information is tied to objects or events in the scene. It is therefore necessary to compensate for movement by an observer in order to maintain the geometric relations between superimposed information and underlying physical structures in the scene.
  • Position sensors in the form of a head tracker can, in real-time, calculate changes in location caused by movement of a firefighter within a training scenario. Virtual objects will be adjusted accordingly to remain "fixed" to the physical world.
  • Appendix A fragments: representative particle-system "Middle Color" RGBA settings of 1.0, 1.0, 1.0, 0.85; 1.0, 1.0, 1.0, 1.0; and 1.0, 1.0, 1.0, 0.45.

Abstract

Method and apparatus are presented for an augmented reality-based firefighter training system. The system includes hardware for motion tracking, display, and vari-nozzle instrumentation. This hardware incorporates a display (5) with a firefighter's self contained breathing apparatus (SCBA). Additionally, this display has been instrumented for measuring head position and acquiring a view through a camera (4) from the user's perspective. Headphones in the SCBA also provide audio stimuli to the user. This instrumented SCBA has been ruggedized for protection against shock and undesirable environmental pollutants. The instrumented vari-nozzle (9) has been ruggedized so it is protected against both shock and undesirable environmental pollutant penetration through a hard cover and soft equipment mounts. Shock-sensitive equipment is seated on an island which is supported by one or more soft vertical posts. System software includes a real-time fire model, a layered smoke obscuration model, simulation of an extinguishing agent, real-time extinguishing agent interaction with airborne system particles, and an interface to a zone fire model. Physical modeling and graphical elements in the software combine to create realistic-looking fire, smoke, and extinguishing graphics. Extinguishing agent force computations are applied in a realistic manner to approximate interactions with airborne particles such as fire, smoke, and aerosols. Occlusion of virtual objects by real people or objects using orthogonal billboards, soft edges, and motion tracking data is described and implemented. The hardware and software components together contribute to a realistic, interactive training experience for firefighters.

Description

AUGMENTED REALITY-BASED FIREFIGHTER TRAINING SYSTEM AND METHOD
GOVERNMENT RIGHTS CLAUSE
This invention was made with Government support under Contract Numbers N-61339-01-C-1008 and N-61339-98-C-0036 awarded by the Naval Air Warfare Center Training Systems Division of Orlando, FL. The Government has certain rights in the invention.
FIELD OF THE INVENTION
This invention relates to training firefighters in an augmented reality (AR) simulation that includes creation of graphics depicting fire, smoke, and application of an extinguishing agent; extinguishing agent interaction with fire and airborne particles; applying computer generated extinguishing agent via an instrumented and ruggedized firefighter's nozzle; displaying the simulated phenomena anchored to real-world locations as seen through an instrumented and ruggedized head-worn display; and occlusion of virtual objects by a real person and/or other moveable objects.
COPYRIGHT INFORMATION
A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office records but otherwise reserves all copyright rights whatsoever.
BACKGROUND OF THE INVENTION
Current fire simulation for firefighter training is accomplished at facilities that use propane burners and extinguishing agent collectors to simulate the behavior of various types of fires. This approach presents numerous disadvantages, such as safety risks attributable to unintended reflash and explosion; environmental damage attributable to combustion byproducts; health risks to crews due to inhalable combustion byproducts; high operation costs attributable to fuel requirements; high maintenance costs to ensure system integrity and safety; and unrealistic fire simulations for some types of fires (all simulations appear as propane fires as opposed to oil, electrical, or paper; and simulated smoke is white instead of black).
A need exists for a new generation of fire fighting/damage control simulation system which does not use live fires. These systems must be capable of providing a high fidelity representation of the smoke and flames, as well as a realistic representation of the environment (to include fellow crew members). Augmented reality (AR) technology allows overlay of computer-generated graphics on a person's view of the real world. With AR, computer generated fire, smoke, and extinguishing agents can safely replace live fire training while still allowing trainees to view and interact with each other and the real-world environment. This training can be done most effectively by the firefighter using the actual equipment that would be used in a real world scenario. For example, the firefighter should wear a real SCBA and use a real firefighter's vari-nozzle to extinguish the fires. These items can be instrumented to measure their positions and orientations for use in the system. In addition, these items should be rugged so that they can withstand the rigorous treatment they would undergo in a real operational environment. This includes the need for protection against shock and penetration from contaminants, such as dirt and water. This allows safe, cost-effective training with greater realism than pure virtual reality (VR) simulations.
The majority of current generation fire fighting training systems use live, propane-based fires which are unsafe, particularly for use in contained areas such as onboard ships, and in real structures. In a training environment, the use of live propane-based fires presents safety, health and environmental risks.
The primary objective of this invention is the development of an augmented reality-based training (ARBT) system for fire fighting, with application to rescue and hazardous material mitigation. In fact, in any fire situation there are multiple goals, including:
• Search, rescue, and extrication
• Ingress into, and egress from, a structure
• Fire suppression
• Structure stabilization
• Team coordination - command & control
• Fire cause determination
In each of the goals, firefighters engage in a number of cognitive and physical tasks critical to the survival of both fire victims and firefighters, as well as to the timely suppression of a fire. Tasks that fall under this category are
(1) Navigation
(2) Situation awareness
(3) Decision making/problem solving
(4) Stress management
These tasks are undertaken, usually in concert with one another, to achieve the above goals. Training in these four tasks provides the foundation for a firefighter to combat any fire situation. An opportunity exists to develop an ARBT system which educates firefighters in these tasks in a safe and potentially less expensive environment, in almost any location.
It is important at this juncture to distinguish between the concept of reaction versus interaction with fire and smoke. By reaction we mean responses made by a firefighter to conditions caused by fire and smoke; in this situation he/she does not alter the evolution of the fire and smoke. By interaction we mean that the firefighter directly affects the evolution of the fire and smoke by such actions as fire suppression and ventilation. As stated above, Tasks (1) to (4) are applicable to any fire situation - reactive or interactive. Therefore, any significant improvement in developing training skills for Tasks (1) to (4) will result in a significantly skilled firefighter for both reactive and interactive scenarios.
To preserve the fidelity of an ARBT simulation in which interaction is to take place, flow from a nozzle should affect airflow in an environment and cause motion of aerosols and gases to be affected. To allow for real-time simulation, such an effect should have considerably less computational cost than a computational fluid dynamics (CFD) simulation. This aspect of the invention also applies to graphical components other than fire and smoke, including ambient steam, ambient smoke, visible fumes, invisible fumes, and other aerosols, gases, and particulate matter.
In order to improve realism, it is also necessary for real objects to occlude virtual objects. To do this, virtual models of such real objects must be created that correspond in space and time to the location of the real objects. One problem with traditional modeling techniques is that hard edges introduced by polygonal models can be distracting if there is misregistration in augmented reality due to tracking errors and lag.
SUMMARY OF THE INVENTION
An objective of this invention is to demonstrate the feasibility of augmented reality as the basis for an untethered, ARBT system to train firefighters. Two enabling technologies will be exploited: a flexible, wearable belt PC and an augmented reality head-mounted display (HMD). Specifically, the HMD can be integrated with a firefighter's SCBA. This augments the experience by allowing the user to wear a real firefighter's SCBA (Self-Contained Breathing Apparatus) while engaging in computer-enhanced fire training situations. Key aspects of this feature include (1) a real firefighter's SCBA, including catcher's mask-style straps, which gives the user the sensation of being in a real firefighting situation, (2) a head motion tracker, which is mounted on the SCBA to track the position and orientation of the firefighter's head, (3) a specially mounted camera and mirror or camera and prism configuration which takes a video image of what is directly in front of the user's eyes, (4) a video display screen, which is used to project the computer-enhanced image the firefighter will see in front of his eyes, and (5) specially mounted headphones which can be used to project appropriate sounds into the firefighter's ears. Furthermore, the user would carry a firefighter's vari-nozzle, instrumented and ruggedized for use.
Unlike traditional augmented reality systems in which an individual is tied to a large workstation by cables from head mounted displays and position trackers, the computer technology is worn by an individual, resulting in an untethered, augmented reality system.
Augmented reality is a hybrid of a virtual world and the physical world in which virtual stimuli (e.g. visual, acoustic, thermal, olfactory) are dynamically superimposed on sensory stimuli from the physical world.
This invention demonstrates a foundation for developing a prototype untethered ARBT system which will support the critical fire fighting tasks of (1) navigation, (2) situation awareness, (3) stress management, and (4) problem solving. The system and method of this invention is not only a low-cost training tool for fire academies and community fire departments, but also a test bed for evaluating future fire fighting technologies, such as decision aids, heads-up displays, and global positioning systems for the 21st century firefighter.
Accordingly, the primary opportunity for an ARBT system is the training of firefighters in the areas of Tasks (1) to (4) above for reactive scenarios.
Significance of the Opportunity. Overall Payoffs. The inventive ARBT system has significant potential to produce
• Increased safety
• Increased task performance
• Decreased workload
• Reduced operating costs
A training program that aims to increase skills in Tasks (1) to (4) is adaptable to essentially any fire department, large or small, whether on land, air, or sea. Opportunities for Augmented Reality for Training. Augmented reality has emerged as a training tool and can be a medium for successful delivery of training. The cost of an effective training program built around augmented reality-based systems arises primarily from considerations of the computational complexity and the number of senses required by the training exercises. Because of the value of training firefighters in Tasks (1) to (4) for any fire situation, and because the program emphasizes firefighter reactions to (vs. interactions with) fire and smoke, training scenarios can be precomputed.
As described elsewhere in this document, models exist which can predict the evolution of fire and smoke suitable for training applications. An opportunity exists to exercise these models off line to compute reactive fire fighting scenarios. These precomputations can lay out various fire-and-smoke induced phenomena which evolve dynamically in time and space and can produce multi-sensory stimuli to the firefighter in 3D space. (For example, if the firefighter stands up, he/she may find his/her visibility reduced due to smoke, whereas if he/she crawls, he/she can see more clearly.)
It has been demonstrated that PC technology is capable of generating virtual world stimuli in real time. We can then apply our augmented reality capabilities to the development of an augmented reality-based training system.
In summary, the opportunity identified above - which has focused on reactions of firefighters to fire and smoke in training scenarios - is amenable to augmented reality. Opportunities for Augmented Reality for Training. In augmented reality, sensory stimuli from portions of a virtual world are superimposed on sensory stimuli from the real world. If we consider a continuous scale going from the physical world to completely virtual worlds, then hybrid situations are termed augmented reality.
The position on a reality scale is determined by the ratio of virtual world sensory information to real world information. This invention creates a firefighter training solution that builds on the concept of an augmented physical world, known as augmented reality. Ideally, all training should take place in the real world. However, due to such factors as cost, safety, and environment, we have moved some or all of the hazards of the real world to the virtual world while maintaining the critical training parameters of the real world, e.g., we are superimposing virtual fire and smoke onto the real world.
For a fire example, consider the following. Suppose an office room fire were to be addressed using augmented reality. In this problem, a real room with real furniture is visible in real time through a head mounted display (HMD) with position tracker. Virtual fire and smoke due to virtual combustion of office furniture can be superimposed on the HMD view of the physical office without ever having to actually ignite a piece of real furniture.
The inventive approach allows the firefighter to both react and interact with the real world components and the virtual components of the augmented reality. Examples of potential real-world experiences to be offered by our approach are given below in Table 1-1. Clearly, simulation of training problems for firefighters can comprise both physical and virtual elements. In many instances augmented reality may be a superior approach when compared to completely virtual reality. For example, exercise simulators such as stationary bicycles, treadmills or stair climbing machines do not adequately capture either the physical perception or the distribution of workload on the musculoskeletal systems that would be produced by actually walking or crawling in the physical world. Additionally, a firefighter can see his/her fellow firefighters, not just a computer representation as in pure virtual reality. Opportunities for Self-Contained Augmented Reality. A low-cost, flexible, wearable belt PC technology may be used in augmented reality firefighter training. This technology, combined with augmented reality and precomputed fire scenarios to handle tasks (1) to (4) above for various physical locations, allows a firefighter to move untethered anywhere, anytime, inexpensively and safely. This will make training experiences significantly more realistic.
Background Review of Fire Simulation. Mitler (1991) divides fire models into two basic categories: deterministic and stochastic models. Deterministic models are further divided into zone models, field models, hybrid zone/field models, and network models. For purposes of practicality and space limitations, we limit the following discussions to deterministic models, specifically zone type fire models. Mitler goes on to prescribe that any good fire model must describe convective heat and mass transfer, radiative heat transfer, ignition, pyrolysis and the formation of soot. For our purposes, models of flame structure are also of importance.
Zone models are based on finite element analysis (FEA). In a zone model of a fire, a region is divided into a few control volumes - zones. The conditions within each volume are usually assumed to be approximately constant. In the study of compartment fires, two or more zones typically are used: an upper layer, a lower layer, and, optionally, the fire plume, the ceiling, and, if present, a vent. Zone models take the form of an initial value problem for a system of differential and algebraic equations. Limitations of zone models include ambiguity in the number and location of zones, doubt on the validity of empirical expressions used to describe processes within and between zones, and inapplicability of zones to structures with large area or complex internal configurations.
For many training applications, these limitations are not significant for purposes of this invention. Friedman (1992) performed a survey of fire and smoke models. Of 31 models of compartment fire, Friedman found 21 zone models and 10 field models. Most of the zone models can run on a PC, while most of the field models require more powerful computational resources.
Background Review of Virtual Reality-Based Training - and Potential for Augmented Reality-Based Training. Probably the core issue surrounding the development of any training system or program is the efficiency of the transfer of knowledge and skills back into the workplace. Individual development ultimately rests on the ability to adapt acquired skills to novel situations. This is referred to, by some, as a metaskill. The transference of skills and the building of metaskills are fundamental concepts against which virtual reality must be considered for its suitability as a basis for the delivery of training.
Experiential learning is based on the premise that people best learn new skills by successfully performing tasks requiring those skills. The application of virtual reality to the delivery of training builds on the promise of experiential learning to maximize the transfer of training into the task environment. Furthermore, virtual reality interfaces also hold the potential for being more motivating than traditional training delivery media by making the training experience itself more fun and interesting. Augmented reality retains these strengths while providing a real world experience for the firefighter.
When concerned with the transfer of skills from a virtual world to the real world, the issue of virtual world fidelity is often raised. Alessi (1988) examined the issue of simulator fidelity in both initial learning and transfer of learning and found that the impact of simulator fidelity increases with the level of expertise of the student. He goes on to recommend that fidelity increase along lines of instruction phases: presentation, guidance, practice, and assessment. Alessi's results are corroborated by Lintern et al. (1990) in their work on the transfer of training using flight simulators for initial training of landing skills. Most notably the authors found that feedback, related to correct performance of the landing task, resulted in increased transfer of training. They also found that transfer of training did not necessarily increase with increasing simulator fidelity. These results on fidelity are important in that they emphasize that simply creating a task environment in a virtual world without consideration of learning processes may not be sufficient to transfer skills to the physical world.
To increase the fidelity of the simulation, a method has been designed to allow a real person or movable physical object to occlude virtual objects in an augmented reality application using a single tracking data sensor for both human and non-human objects. This aspect of the invention comprises an orthogonal plane billboarding technique that allows textures with fuzzy edges to be used to convey the sense of a soft-edged 3D model in the space. This is done by creating three orthogonal (perpendicular) planes. A texture map is mapped onto each of these planes, consisting of profile views of the object of interest as silhouettes of how they look from each of these directions. One application for this orthogonal plane billboard is the modeling of a human head and torso which can be used to occlude fire, water, and smoke in the invention.
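By way of illustration only, the geometry of this orthogonal plane billboard can be sketched as follows (a minimal Python sketch; the function name, coordinate conventions, and head-sized dimensions are assumptions, not values taken from the invention):

    def orthogonal_billboard(center, half_w, half_h, half_d):
        # Three mutually perpendicular quads sharing a common center; each
        # would be texture-mapped with a soft-edged silhouette of the object
        # as seen along the corresponding axis.
        cx, cy, cz = center
        front = [(cx - half_w, cy - half_h, cz), (cx + half_w, cy - half_h, cz),
                 (cx + half_w, cy + half_h, cz), (cx - half_w, cy + half_h, cz)]
        side = [(cx, cy - half_h, cz - half_d), (cx, cy - half_h, cz + half_d),
                (cx, cy + half_h, cz + half_d), (cx, cy + half_h, cz - half_d)]
        top = [(cx - half_w, cy, cz - half_d), (cx + half_w, cy, cz - half_d),
               (cx + half_w, cy, cz + half_d), (cx - half_w, cy, cz + half_d)]
        return front, side, top

    # Example: a head-sized billboard anchored at a single tracked point.
    planes = orthogonal_billboard((0.0, 1.6, 0.0), 0.12, 0.15, 0.14)

At render time, each quad would be drawn with its silhouette texture and alpha blending, so that soft-edged occlusion, rather than a distracting hard polygon edge, is what appears if registration drifts.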
Furthermore, fidelity is increased by providing sufficient accuracy in real-time such that a computer can generate virtual images and mix them with the image data from the specially mounted camera in a way that the user sees the virtual and real images mixed in real time. The head-mounted tracker allows a computer to synchronize the virtual and real images such that the user sees an image being updated correctly with his or her head. The headphones further enhance the virtual/real experience by providing appropriate aural input. Review of Augmented Reality Equipment. A description of augmented reality was presented above. Commercial off-the-shelf technologies exist with which to implement augmented reality applications. This includes head-mounted displays (HMDs), position tracking equipment, and live/virtual mixing of imagery.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG 1 is a block diagram indicating the hardware components of an embodiment of the augmented reality (AR) firefighter training system, also useful in the method of the invention.
FIG 2 illustrates the geometric particle representation associated with smoke.
FIG 3 illustrates the geometric particle representation associated with flames.
FIG 4 illustrates the three particle systems used to represent a fire.
FIG 5 illustrates the idea of two-layer smoke obscuration.
FIG 6 illustrates particle arrangement for a surface representation of a particle system.
FIG 7 illustrates a surface mesh for a surface representation of a particle system.
FIG 8 illustrates the technologies that combine to create an AR firefighter training system, and method.
FIG 9 is a diagram indicating a nozzle, extinguishing agent stream, and fire and smoke plume.
FIG 10 is a diagram that is a variant of FIG 9, in which the extinguishing agent stream is modeled to have multiple cone layers to represent multiple velocities in the profile of the stream.
FIG 11 is a diagram of the three orthogonal planes that contain the three texture mapped images of a human head, useful in understanding the invention.
FIG 12 is a diagram of a human head and torso, which will be compared to graphical components in FIG 13; and
FIG 13 is a diagram of two sets of orthogonal planes, along with the joint between the two sets, for the human head and torso of FIG 12.
FIG 14 is a diagram of the main components of the HMD integrated with the SCBA.
FIG 15 is a diagram of a two-mirror layout for placing the camera viewpoint immediately in front of the wearer's eyes.
FIG 16 is a diagram of a headphone attachment design.
FIG 17 is a diagram of a bumper that can be used to protect a mirror.
FIG 18 is an exploded view of a mirror mount design that places minimum pressure on the mirror to minimize distortion due to compression.
FIG 19 depicts a structure that can be used to protect a mirror from being bumped and to prevent the mirror mount from being hooked from underneath.
FIG 20 is a cross-sectional view of the structure of FIG 19.
FIGS 21-23 are top, side, and end views, respectively, of the structure in FIGS 19 and 20.
FIGS 24 and 25 are top and side views, respectively, of a rugged mirror mount.
FIG 26 is a perspective view of a nozzle and all of the major instrumentation components involved in the preferred embodiment of the invention, except for the cover;
FIG 27 is the same as FIG 26, but with the cover in place;
FIG 28 is a top view of the fully assembled ruggedized nozzle of FIG 27; and
FIG 29 is a front view of the fully assembled ruggedized nozzle of FIG 27.
FIG 30 schematically depicts the basic optical and tracking components of the preferred embodiment of the invention and one possible arrangement of them.
FIG 31 shows the same components with an overlay of the optical paths and relationships to the components.
FIG 32 is a dimensioned engineering drawing of the prism of FIG 30.
FIG 33 shows the components of FIG 30 in relation to the SCBA face mask and protective shell.
FIG 34 shows externally visible components, including the SCBA and protective shell.
DETAILED DESCRIPTION OF THE INVENTION
FIG 1 is a block diagram indicating the hardware components of the augmented reality
(AR) firefighter training system, also useful for the method of the invention. Imagery from a head-worn video camera 4 is mixed in video mixer 3 via a linear luminance key with computer-generated (CG) output that has been converted to NTSC using VGA-to-NTSC encoder 2. The luminance key removes white portions of the computer-generated imagery and replaces them with the camera imagery. Black computer graphics remain in the final image, and luminance values for the computer graphics in between white and black are blended appropriately with the camera imagery. The final image is displayed to a user in head-mounted display (HMD) 5.
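The linear luminance key can be sketched per pixel as follows (a minimal Python sketch; in the system this step is performed by the video mixer hardware, and the key thresholds and NTSC luma weighting shown here are illustrative assumptions):

    def luminance(rgb):
        r, g, b = rgb
        return 0.299 * r + 0.587 * g + 0.114 * b  # NTSC luma weighting

    def luma_key(cg_pixel, camera_pixel, lo=16.0, hi=235.0):
        # White CG is replaced by camera video, black CG is kept, and
        # intermediate luminance values blend the two linearly.
        y = luminance(cg_pixel)
        t = min(max((y - lo) / (hi - lo), 0.0), 1.0)  # 0 keeps CG, 1 keeps camera
        return tuple(round((1 - t) * cg + t * cam)
                     for cg, cam in zip(cg_pixel, camera_pixel))

    # Example: mid-gray CG smoke partially obscures the camera pixel behind it.
    print(luma_key((128, 128, 128), (200, 180, 160)))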
The following subsections explain details relating to important aspects of the invention. The primary sections are hardware and software.
HARDWARE
This system requires significant hardware to accomplish the method of the invention. The user interfaces with the system using an instrumented and ruggedized firefighter's SCBA and vari-nozzle. This equipment must be ruggedized since shock-sensitive parts are mounted on it. Additionally, other equipment is used to mix the real and computer generated images and run the simulation.
SCBA Integrated Head Mounted Display
FIG 14 summarizes the main components of the HMD as integrated with an SCBA, including instrumentation required for tracking in an AR or VR environment. The major components of the integrated HMD are a video camera 51, a motion tracker 50, a head-mounted display (HMD) 52, headphones 48, and a self-contained breathing apparatus (SCBA) mask 53 with "catcher's mask" style straps 49 to hold the SCBA onto the wearer's head.
Camera
Any sufficiently lightweight video camera 51 can be used for the camera in the HMD. A PANASONIC® (Matsushita Electric Corporation of America, One Panasonic Way, Secaucus, NJ 07094) GP-KS162 micro ("lipstick" style) camera with a 7 mm lens (GP-LM7TA) is preferably used in the invention. Because the camera must be worn on the user's head, it should be lightweight and minimally obtrusive. The camera's field of view (FOV) must be close to that of the HMD for minimal distortion, i.e., to best map the image of the real world to the AR world; this also contributes to maximizing the user's perception of presence in the AR world. If the HMD is an optical see-through display, the camera is not required. However, the current preferred embodiment of the invention is a video-based AR display.
The ideal location of a camera for minimal offset from the wearer's eyes is inside the user's eyes. Of course, this is impractical, so the optical path may be folded using mirrored surfaces to allow the camera viewpoint to coincide with the wearer's eye location with the camera at some distance from the wearer's eyes. These surfaces can be provided using either mirrors or prisms with reflective coatings. In the ruggedized version of the instrumented SCBA, a prism is the preferred embodiment for image reflection. Using an even number of mirrored surfaces as in FIG 15 allows the camera images to be used as-is, and using an odd number of mirrored surfaces requires the camera image to be flipped before it is displayed to the wearer of the display.
Mounting of the mirrored surfaces is a challenge because at least one surface must be exposed, yet all surfaces must also be protected. FIG 17 shows a bumper design for protecting a mirror and whatever this mirror may bump against. FIG 18 shows a mirror mount design that both protects a mirror 57 and allows a minimal amount of pressure to be applied to this mirror by using a mechanism that sandwiches the mirror between parts 58 and 59, reducing the distortion that is created, especially if plastic mirrors are used. FIGS 19-23 show an angular frame structure 260 to shield the mirror mount 250 and prevent it from getting hooked from underneath. The structure 260 is mounted to the HMD housing 263 independent from the mirror mount 250, and it uses rubber mounts 262 at mounting points 261. FIGS 24 and 25 show the rugged mirror mount 250 with a raised area 252 at the front to prevent the mirror (not shown, but it would be resident at location 251) from getting bumped from that direction. The mirror mount 250 is mounted vertically through bolt holes 254, and then a bend 253 at a 45 degree angle is used to place the mirror at the correct 45 degree angle relative to the camera (not shown), which is looking directly down. A hole 255 is cut out of the mirror mount 250 to reduce weight.
For a stereoscopic embodiment of this aspect of the invention, two cameras are required. They should be spaced the same distance apart as the wearer's eyes and possibly have an interpupillary distance (IPD) adjustment. Each camera can be mounted similarly to the way a single camera mount is described above.
Motion Tracker
In FIG 14, motion tracking equipment 50 is used to provide real-time, 6 degree of freedom (DOF), position and orientation information about a tracked object. By attaching a motion tracker rigidly to the camera, the camera can be tracked. Knowledge of the camera field of view and its position and orientation allows a computer to overlay computer- generated images on the camera video that appear to be anchored to locations in 3-D space. Alternatively, attaching the tracker to a user's head allows the user's eye positions to be tracked, enabling the see-through embodiment of this technology. Any tracker that provides suitable 6 DOF measurements can be used as a part of this invention. Example technologies for motion tracking include magnetic, acousto-inertial, and optical. Two preferred trackers for this invention are the INTERSENSE (InterSense, Inc., 73 Second Avenue, Burlington, MA 01803, USA) IS-900™ and the INTERSENSE (InterSense, Inc., 73 Second Avenue, Burlington, MA 01803, USA) IS-600 Mark 2 Plus™.
HMD
The HMD 52, as stated above, can be either see-through or non-see-through in this invention. As most HMDs are too large to fit inside an SCBA mask, the preferred method of attachment to the SCBA mask is to cut out part or all of the transparent (viewing) portion of the mask to allow the bulk of the HMD to stick out, while placing the HMD optics close to the wearer's eyes. Holes drilled through the mask provide attachment points for the HMD. By drilling through the transparent portion of the mask, a generic solution can be achieved that does not depend upon the geometry of any particular SCBA mask or HMD. The preferred HMD for this invention is the VIRTUAL RESEARCH (Virtual Research Systems, Inc., 3824 Vienna Drive, Aptos, California 95003) V6™ for a non-see-through method. (See FIG 14)
Headphones
Headphones 48 must be attached to the SCBA if audio is part of the AR application. Two requirements for the headphones are that they should not block out real-world sounds, and they should not interfere with donning the mask or other firefighter equipment. To accomplish these purposes, a pair of headphones 55 (AIWA [AIWA AMERICA, INC., 800 Corporate Drive Mahwah, NJ 07430] HP-A091 Stereo Headphones™) rigidly mounted 54 to the SCBA mask at a distance from the wearer's ears can be used (see FIG 16). Additional strength can be added to the shafts that connect the headphones to the SCBA by means of hardened epoxy resin, which can be accomplished with JB WELD™ (JB Weld Company, P.O. Box 483, Sulphur Springs, TX 75483).
SCBA Mask
Any SCBA mask 53 (FIG 14) can be used with this invention. One preferred mask is the SCOTT (Scott Aviation, A Scott Technologies Company, Erie, Lancaster, NY 14086) AV2000™. This mask is an example of the state of the art for firefighting equipment, and the design of the mask has a hole near the wearer's mouth that allows easy breathing and speaking when a regulator is not attached. When performing the integration of the invention, the mask face seal, the "catcher's mask" style straps for attaching the mask, and the nose cup are features that are preserved. When integrating an AR display, it is also necessary to black out any view beyond the augmented reality portions of the user's field of view to ensure that the trainee's only view of the outside world is the AR view displayed to him/her by the HMD. In the case where an SCBA is used, the rest of the SCBA can be blacked out by using an opaque substance such as tape, foam, plastic, rubber, silicone, paint, or preferably a combination of plastic, silicone, and paint. This is done to ensure that the trainee doesn't see the un-augmented real world by using his or her un-augmented peripheral vision to see around AR smoke or other artificial (computer-generated virtual) obstacles.
SCBA Ruggedization
Since the SCBA is to be used to train for situations in which rough treatment is expected, the SCBA must be ruggedized against shock and contaminant penetration. The instrumentation that is protected in the instrumented SCBA consists of (1) the head mounted display (HMD) used to show an image to the user; (2) the camera used to acquire the image the user is looking at; (3) the InterSense InertiaCube used to measure the SCBA orientation; (4) two InterSense SoniDiscs used to measure the SCBA position; and (5) the prism used to shift the image in front of the user's eyes so that the image is in front of the camera. All of this equipment, except for the prism, has electrical connections that carry signals through a tether to a computer, which receives and processes these signals.
Layout of Components for Ruggedization
In FIG 30, the eye 137 of the person wearing the SCBA (not shown) looks through the optics 138 to see the image formed on the display element inside the electronics portion 139 of the HMD. Separately, the image of the outside world is captured by camera 135, which looks through a prism 140 that has two reflective surfaces to bend the path of light to the camera 135. The tracking components, in this case from InterSense, include the InertiaCube 136 and two SoniDiscs 141 which are positioned on either side of the camera, one going into the figure, and one coming out of the figure. The InertiaCube 136 can be placed anywhere there is room on the structure, but the SoniDiscs 141 must be at the top of the structure in order to have access to the external tracking components.
FIG 31 shows detailed sketches of the light paths. Upon entering the prism 150 at the polished transparent entrance point 153, the FOV 151 of the camera 155 is temporarily reduced due to the refractive index of the glass, preferably SFL6 as it has a very high index of refraction while maintaining a relatively low density. This reduced FOV 151 is reflected first off mirrored surface 152 and then mirrored surface 148 before exiting through surface 149. Upon exit, the FOV 151 is restored to its original size, and any aberrations due to the glass are eliminated since the FOV 151 enters and exits the prism perpendicular to surfaces 153 and 149. Using this layout, the image captured by the camera 155 is effectively taken from the virtual eye-point 147, even though the real eye-point of the camera is at point 154.
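As a worked check of this FOV reduction, Snell's law at the entrance face gives the narrowed half-angle inside the glass, which is restored on exit through the perpendicular exit face (a minimal Python sketch; the 40 degree FOV in air is an illustrative assumption, while the 1.8 refractive index for SFL6 is given below):

    import math

    n = 1.8                                   # refractive index of SFL6
    fov_air = 40.0                            # degrees in air, assumed
    half_glass = math.degrees(math.asin(math.sin(math.radians(fov_air / 2)) / n))
    print(f"FOV inside prism: {2 * half_glass:.1f} degrees")  # about 21.9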
In FIG 31 the virtual eye-point 147 would ideally be at the same point as the user's eye-point 144. To make this happen, however, the optics would have to be bigger. It is preferred to use smaller optics that place the virtual eye-point 147 slightly forward of the user's eye-point 144. This arrangement tends to be acceptable to most users. Even though the virtual eye-point 147 isn't lined up exactly with the eye 144, the HMD (146 and 145) as well as the prism 150 are all co-located on the same optical axis 143, thereby minimizing the disorientation of the user.
To achieve the same eye-point 147 without the use of the prism or mirrors would require placing the camera at the same location as the user's eye 144 as well as HMD 146 and 145. By folding the light path with the prism, the camera 155 can be placed above the other components, thereby achieving a relatively compact solution.
FIG 32 shows the detailed dimensions and descriptions of the preferred prism. Surfaces 157 and 159 are mirrored, and surfaces 160 and 158 are polished. All other surfaces are plain or preferably painted black. All units in the drawing are inches. The prism can be mounted to the SCBA by fixing mounting plates that define tapped holes on the two sides of the prism. The prism material is preferably SFL6, which has a refractive index of 1.8.
If two mirrors are used instead of the prism, then the utility of producing a right-side-up image is accomplished because of the two reflections. However, if the FOV 151 of the camera 155 is relatively large, then the mirrors required to fold over the FOV 151 would have to be very large in order to clear the case of the camera 155.
An alternate solution to the use of the prism is to place the camera 155 looking straight down using one mirror. So long as the image from the camera 155 can be mirror-flipped with video equipment, this arrangement would work, but it will most likely either place the camera 155 in an inconvenient location, or the mirror would have to be very large.
If mirrors are used, there are several choices for materials. The lightest and cheapest are plastic mirrors. A step up in quality would be the use of glass mirrors, especially if they are front-surface mirrors (versus the typical back-surfaced mirrors used in households). The highest durability and quality can be achieved with metallic mirrors. Metallic mirrors, preferably aluminum, can be manufactured that have tough, highly reflective surfaces. Metal mirrors can be made to have built-in mounting points as well, enabling a mirror very well suited for the needs of the invention for light weight, compact size, and durability. The very simplest alternative method of camera arrangement would be to put the camera parallel to and directly above the optical axis of HMD 146 and 145. This method negates the need for a prism or mirrors, but it loses all of the benefit of the virtual eye-point on the optical axis of the HMD 146 and 145.
Equipment Mounting and Connections
The HMD 176 (FIGS 33 and 34) is mounted directly to the SCBA 174. The InertiaCube 167 (FIG 34) and SoniDiscs 179 are attached rigidly to the camera 168 /prism 171 assembly (or mirrors if those are used), locking their positions together. By locking the position tracking equipment directly to the camera/prism assembly, one can ensure that the computer-generated imagery will correspond to the camera's position. A hard plastic electronics enclosure or shell 170 (FIG 33) attaches to the SCBA 174 preferably with bolts 173, providing a means for hiding from view and protecting from the elements all electronic equipment, except for the SoniDisc speakers 169, which must be exposed to the air to allow the separate tracking devices (not shown) to receive the ultrasonic chirps coming from the SoniDiscs. The plastic shell 170 that surrounds all of the equipment should be made of a tough material, such as nylon, that can withstand the shock of being dropped, yet is slightly bendable, allowing for a little bit of inherent shock-mounting for the equipment. If this is not sufficient, then the HMD 176, prism 171, camera 168, and/or tracking equipment 167 and 179 can be mounted to the SCBA 174 and plastic shell 170 with rubber mounting points (not shown). In this case the HMD, prism, camera, and/or tracking equipment can all be mounted together with a very rigid structure, for example a metallic frame (not shown). That rigid structure could then be mounted separately to the plastic shell, preferably with shock-absorbing mounts.
The signal wires (not shown) coming from the instrumentation 168, 176, 167, and 179 come out of the plastic shell 170 through a single hole 166 with built-in strain relief, ensuring that the cables cannot be pulled out of the plastic shell 170 through normal use, and also ensuring that pulling on the cables will not create unacceptable cable wear. To maximize ease of use and maintenance, the cables coming out of the plastic shell can be attached to a single, specialized connector either mounted on the plastic cover at the exit point 166, or preferably attached to the belt of the wearer. From that connector, another single cable connects this one connector to the computer (not shown) and other equipment (not shown) by splitting the cable into its sub-connectors as needed by the various components. The use of this specialized connector provides for easy connect and disconnect of the equipment from the user.
Equipment Shock and Contamination Protection
The equipment is protected by a plastic cover which protects the overall assembly from both shock and penetration by foreign agents, such as water and dirt. The InertiaCube and SoniDiscs are somewhat easy to uncalibrate and are sensitive to shock; the cover provides a great deal of shock protection for them.
One potential issue with the HMD 176 is the build-up of heat, as the HMD gives off a substantial amount of heat. One method is to put vent holes (not shown) in the plastic cover 170, allowing direct access to cool air outside the cover 170; however, that can allow in foreign contaminants. The preferred method is to have one-way valves 172, 180 inside the SCBA mask 174. In this preferred method, as the user breathes in air, the air comes in through the mouthpiece as is normal, but then the air gets redirected by a one-way valve 175, up through a one-way valve 172 and thus into shell 170, then over the electronics, and away from the electronics through another one-way valve 180 before entering the user's airway. When exhaling, the moist, warm air gets a direct path to the outside via the one-way valve 175. This redirection of air can preferably be accomplished through the use of typical one-way rubber valves.
Protection Against Misuse
To prevent the user from accidentally using the device in a real fire emergency, the protective design uses an obvious indicator to the user that the device is for training use only, and not for use in real emergencies. The preferred method uses an alternating yellow and black color scheme to get the user's attention that this is not a standard part. Additionally, a sign is used which indicates that the device is to be used for training purposes only.
Instrumented Nozzle
Additional hardware is attached to a firefighter's vari-nozzle 193 to allow control of a virtual water stream. The nozzle used is an Elkhart vari-nozzle. The instrumentation for the nozzle consists of (1) a potentiometer used to measure the nozzle fog pattern; (2) a potentiometer used to measure the nozzle bail angle; (3) an INTERSENSE (Burlington, MA) InertiaCube used to measure the nozzle orientation; and (4) two INTERSENSE (Burlington, MA) SoniDiscs used to measure the nozzle position. All of this equipment is connected by wiring that carries message signals through a tether to a computer and associated equipment (including an analog-to-digital converter) which receives and processes these signals. The InertiaCube and SoniDiscs are equipment from the InterSense IS-600 line of tracking equipment. If the end user of the nozzle calls for the use of tracking equipment other than the IS-600 line, the invention could readily be adapted to protect equipment from the IS-900 line from InterSense or 3rd Tech's optical tracking equipment.
Instrumented Nozzle Ruggedization
All of the described nozzle instrumentation is protected from shock and contaminant intrusion by a shell and island design. This shell and island, described below, would need to be modified slightly to hold different tracking equipment in place.
Equipment Mounting and Connections
In FIG 26, at least two potentiometers (not visible, but with one under potentiometer cover 187 to measure the position of bail 184 and one within the pattern selector 192 to measure its position) are mounted directly to the vari-nozzle 193. The InertiaCube 194 and SoniDiscs 183 are attached to a rigid, yet floating hard plastic island 181 which holds these items firmly in place. This island 181 is attached to a rigid hard plastic base 190 by two narrow, flexible posts 189. The base 190 is rigidly attached to the nozzle. A hard plastic cover 198 (FIG 27) attaches to the base 190, providing a means for hiding all electronic equipment from view, except for the speakers 182 (FIG 27) on top of the SoniDiscs 183, which must be exposed. This cover 198 (FIG 27) also constrains the floating island 181 from freely drifting laterally on the very flexible posts 189. The potentiometer underneath the plate 187, which measures the angle of the bail 184, is attached to the nozzle 193 by a soft polymeric coupling 185. The signal wires coming from certain instrumentation are attached to the mounting block 195 using screw-down connectors or soldering posts 186 integrated inside of the cover. The purpose of using this sort of connection is that the wires can be held much more securely with this method than with standard plugs. By using solder or strong screw-down terminals, the wire connections can be assured a quality connection. A separate cable (not shown) connects this common mounting block 195 to the computer and associated equipment which receives data from the nozzle. The specific wires that can be easily mounted in this way include (a) the leads from the SoniDiscs 183, and (b) the wires attached to the leads 188 of the potentiometer under the plate 187 and the potentiometer inside the pattern selector 192. The cable connection to the InertiaCube 194 may not be suitable to separately wire in this fashion since the InertiaCube signals may be sensitive to interference due to shielding concerns, though it should be possible to use a connector provided from InterSense. The wires and/or cables are routed through a hole in the nozzle (not shown), and down the hose (not shown) to the end where they can come out and connect to the needed equipment. This method keeps the wires from being visible to the user.
Equipment Shock and Contamination Protection
The equipment is protected by a plastic cover 198, which protects the overall assembly from both shock and penetration by foreign agents, such as water and dirt. The INTERSENSE (Burlington, MA) InertiaCube is sensitive to shock, especially the action of the metal bail handle 184 hitting the metal stops at either extreme of its range of motion. The InertiaCube and SoniDiscs are mounted on an island which is held up by two soft polymeric pillars. This provides a great deal of protection against shock from slamming the bail handle all the way forward very quickly. This island is also surrounded by a thin layer of padding (not shown in the figures) located between the island 181 and the cover 198 to protect the island from horizontal shock. This thin layer also provides further protection from penetration by foreign agents, and can be made such that an actual seal is made around the circumference of the island. A small shoe made of soft material is used as a bumper 191 to prevent shock caused by setting the device down too rapidly or dropping it. The shoe also provides wear resistance to the base part in the assembly. The shoe is also a visual cue to the user that the device is to be set down using the shoe as a contact point.
Protection Against Misuse
To prevent the user from accidentally using the device to spray water as in a real fire emergency, the protective design uses an alternating yellow and black color scheme (not shown) to get the user's attention that this is not a standard part. Additionally, a sign attached to the cover (not shown) is used which indicates that the device is to be used for training purposes only.
Alternate Hardware Embodiments
One alternative to the display setup diagrammed in FIG 1 is the use of optical see-through AR. In such an embodiment, camera 4 and video mixer 3 are absent, and HMD 5 is one that allows its wearer to see computer graphics overlaid on his/her direct view of the real world. This embodiment is not currently preferred for fire fighting because current see-through technology does not allow black smoke to obscure a viewer's vision.
A second alternative to the display setup diagrammed in FIG 1 is capturing and overlaying the camera video signal in the computer, which removes the video mixer 3 from the system diagram. This allows high-quality imagery to be produced because the alpha, or transparency, channel of the computer 1 graphics system may be used to specify the amount of blending between camera and CG imagery. This embodiment is not currently preferred because the type of image blending described here requires additional delay of the video signal over the embodiment of FIG 1, which is undesirable in a fire fighting application because it reduces the level of responsiveness and interactivity of the system.
A third alternative to the display setup diagrammed in FIG 1 is producing two CG images and using one as an external key for luminance keying in a video mixer. In this embodiment, two VGA-to-NTSC encoders 2 are used to create two separate video signals from two separate windows created on the computer 1. One window is an RGB image of the scene, and a second window is a grayscale image representing the alpha channel. The RGB image may be keyed with the camera image using the grayscale alpha signal as the keying image. Such an embodiment allows controllable transparency with a minimum of real-world video delay.
FIG 1 diagrams the two 6 degree-of-freedom (6DOF) tracking stations 7 and 8 present in all embodiments of the system. One tracking station 7 is attached to the HMD 5 and is used to measure a user's eye location and orientation in order to align the CG scene with the real world. In addition to matching the real-world and CG eye locations, the fields of view must be matched for proper registration. The second tracking station 8 measures the location and orientation of a nozzle 9 that may be used to apply virtual extinguishing agents. Prediction of the 6DOF locations of 7 and 8 is done to account for system delays and allow correct alignment of real and virtual imagery. The amount of prediction is varied to allow for a varying CG frame rate. The system uses an InterSense IS-600 tracking system 6, and it also supports the InterSense IS-900 and Ascension Flock of Birds.
SOFTWARE
A method for real-time depiction of fire is diagrammed in FIGS 2-4. A particle system is employed for each of the persistent flame, intermittent flame, and buoyant plume components of a fire, as diagrammed in FIG 4. The particles representing persistent and intermittent flames are created graphically as depicted in FIG 3. Four triangles make up a fire particle, with transparent vertices 12-15 at the edges and an opaque vertex 16 in the center. Smooth shading of the triangles interpolates vertex colors over the triangle surfaces. The local Y axis 27 of a fire particle is aligned to the direction of particle velocity, and the particle is rotated about the local Y axis 27 to face the viewer, a technique known as "billboarding." A fire texture map is projected through both the persistent and intermittent flame particle systems and rotated about a vertical axis to give a horizontal swirling effect.
Smoke particles, used to represent the buoyant plume portion of a flame, are created graphically as depicted in FIG 2. A texture map 11 representing a puff of smoke is applied to each particle 10, which consists of two triangles, and transparency of the texture-mapped particle masks the appearance of polygon edges. Smoke particles 10 are rotated about two axes to face the viewer, a technique known as "spherical billboarding."
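Both billboarding modes reduce to simple rotations of a particle quad toward the eye point, as in the following minimal Python sketch (a right-handed, Y-up coordinate convention and the function names are assumptions):

    import math

    def y_billboard_angle(particle_pos, eye_pos):
        # Rotation about the particle's local Y axis so the quad faces the
        # viewer; used for the velocity-aligned fire particles.
        dx = eye_pos[0] - particle_pos[0]
        dz = eye_pos[2] - particle_pos[2]
        return math.degrees(math.atan2(dx, dz))

    def spherical_billboard_angles(particle_pos, eye_pos):
        # Rotations about Y and then X so the quad fully faces the viewer;
        # used for the smoke particles.
        dx, dy, dz = (e - p for e, p in zip(eye_pos, particle_pos))
        yaw = math.degrees(math.atan2(dx, dz))
        pitch = math.degrees(math.atan2(dy, math.hypot(dx, dz)))
        return yaw, pitch

    print(y_billboard_angle((0.0, 1.0, 0.0), (2.0, 1.0, 2.0)))
    print(spherical_billboard_angles((0.0, 1.0, 0.0), (2.0, 2.0, 2.0)))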
The flame base 17 is used as the particle emitter for the three particle systems, and buoyancy and drag forces are applied to each system to achieve acceleration in the persistent flame, near-constant velocity in the intermittent flame, and deceleration in the buoyant plume. An external force representing wind or vent flow may also be applied to affect the behavior of the fire plume particles. When flame particles are born, they are given a velocity directed towards the center of the fire and a life span inversely proportional to their initial distance from the flame center. The emission rate of intermittent flame particles fluctuates sinusoidally at a rate determined by a correlation with the flame base area. Flame height may be controlled by appropriately specifying the life span of particles in the center portion of the flame.
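The per-particle dynamics just described can be sketched as follows (a minimal Python sketch; the buoyancy, drag, and fluctuation constants are illustrative assumptions, and tying the fluctuation frequency to the base area is one plausible form of the stated correlation):

    import math, random

    def update_particle(pos, vel, dt, buoyancy=3.0, drag=0.8, wind=(0.0, 0.0, 0.0)):
        # Buoyancy accelerates particles upward while drag opposes velocity,
        # so the plume accelerates, coasts, then decelerates as described.
        acc = (wind[0] - drag * vel[0],
               wind[1] - drag * vel[1] + buoyancy,
               wind[2] - drag * vel[2])
        vel = tuple(v + a * dt for v, a in zip(vel, acc))
        pos = tuple(p + v * dt for p, v in zip(pos, vel))
        return pos, vel

    def spawn_flame_particle(base_center, base_radius, base_life=1.0):
        # Born at a random base point, aimed toward the flame center, with a
        # life span inversely proportional to distance from the center.
        angle = random.uniform(0.0, 2.0 * math.pi)
        r = random.uniform(0.0, base_radius)
        pos = (base_center[0] + r * math.cos(angle), base_center[1],
               base_center[2] + r * math.sin(angle))
        vel = (-0.5 * r * math.cos(angle), 1.0, -0.5 * r * math.sin(angle))
        return pos, vel, base_life / (1.0 + r)

    def emission_rate(t, base_rate, base_area):
        # Sinusoidally fluctuating intermittent-flame emission.
        freq = 1.5 / math.sqrt(base_area)
        return base_rate * (1.0 + 0.5 * math.sin(2.0 * math.pi * freq * t))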
A number of graphical features contribute to the realistic appearance of the fire and smoke plume diagrammed in FIG 4. Depth buffer writing is disabled when drawing the particles to allow blending without the need to order the drawing of the particles from back to front. A light source is placed in the center of the flames, and its brightness fluctuates in unison with the emission rate of the intermittent flame particle system. The light color is based on the average color of the pixels in the fire texture map applied to the flame particles. Lighting is disabled when drawing the flame particles to allow them to be at full brightness, and lighting is enabled when drawing the smoke particles to allow the light source at the center of the flame to cast light on the smoke plume. A billboarded, texture-mapped polygon with a texture that is a round shape fading from bright white in the center to transparent at the edges is placed in the center of the flame to simulate a glow. The RGB color of the polygon is the same as the light source, and the alpha of the polygon is proportional to the density of smoke in the atmosphere. When smoke is dense, the glow polygon masks the individual particles, making the flames appear as a flickering glow through smoke. The glow width and height are scaled accordingly with the flame dimensions.
FIG 5 describes the concept of two layers of smoke in a compartment. In a compartment fire, smoke from the buoyant plume rises to the top of a room and spreads out into a layer, creating an upper layer 20 and a lower layer 21 with unique optical densities. The lower layer has optical density k1, and the upper layer has optical density k2. Transmittance through the layers from a point P 19 on the wall to a viewer's eye 18 is given by the equation T = e^(-(k1*d1 + k2*d2)), where d1 and d2 are the distances the line of sight travels through the lower and upper layers, respectively. The color of 19 as seen through smoke is given by C = T*Ci + (1 - T)*Cs, where Ci represents the color of 19 with no obscuration and Cs represents the smoke color.
To apply the concept of two-layer smoke in an AR system, a polygonal model of the real room and contents is created. The model is aligned to the corresponding real-world room using the system of FIG 1. As the model is drawn, the above equations are applied to modify the vertex colors to reflect smoke obscuration. Smooth shading interpolates between vertex colors so that per-pixel smoke calculations are not required. If the initial color, Ci, of the vertices is white, and the smoke color, Cs, is black, the correct amount of obscuration of the real world will be achieved using the luminance keying method described above. In the other video-based embodiments, the above equations can be applied to the alpha value of vertices of the room model.
In computer graphics, color values are generally specified using an integer range of 0 to 255 or a floating point range of 0 to 1.0. Using the obscuration approach described above of white objects that become obscured by black smoke, this color specification does not take into account light sources such as windows to the outdoors, overhead fluorescent lights, or flames, which will shine through smoke more than non-luminous objects such as walls and furniture. To account for this, a luminance component was added to the color specification to affect how objects are seen through smoke. Luminance values, L, range from 0 to 1.0 in this embodiment, and they alter the effective optical density as follows: k' = k(1 - L). This makes objects with higher luminance show through smoke more than non-luminous (L=0) objects.
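The resulting per-vertex obscuration computation, combining the two-layer transmittance with the luminance adjustment, can be sketched as follows (a minimal Python sketch; the path lengths through the layers would come from the eye-to-vertex ray, and all numeric values here are illustrative):

    import math

    def vertex_color_through_smoke(c_init, c_smoke, d1, d2, k1, k2, lum=0.0):
        # Effective densities are reduced for luminous surfaces: k' = k(1 - L).
        k1_eff = k1 * (1.0 - lum)
        k2_eff = k2 * (1.0 - lum)
        t = math.exp(-(k1_eff * d1 + k2_eff * d2))    # transmittance
        return tuple(t * ci + (1.0 - t) * cs for ci, cs in zip(c_init, c_smoke))

    # Example: a white wall vertex versus a luminous window vertex seen
    # through the same smoke (colors in the 0 to 1.0 floating point range).
    print(vertex_color_through_smoke((1, 1, 1), (0, 0, 0), 2.0, 1.0, 0.1, 0.8))
    print(vertex_color_through_smoke((1, 1, 1), (0, 0, 0), 2.0, 1.0, 0.1, 0.8, lum=0.7))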
One additional component to the layered smoke model is the addition of a smoke particle system, as depicted in FIG 2. A smoke particle system is placed in the upper, denser layer 20 to give movement to the otherwise static obscuration model. To determine the volume and optical density of the upper smoke layer, one method is to assign volume and density characteristics to the buoyant plume smoke particles. When a buoyant plume smoke particle fades after hitting the ceiling of a room, the volume and optical density of the particle can be added to the upper layer to change the height and optical density of the layer.
The same polygonal model used for smoke obscuration is also used to allow real-world elements to occlude the view of virtual objects such as smoke and fire. A fire plume behind a real desk that has been modeled is occluded by the polygonal model. In the combined AR view, it appears as if the real desk is occluding the view of the fire plume.
Graphical elements such as flame height, smoke layer height, upper layer optical density, and lower layer optical density may be given a basis in physics by allowing them to be controlled by a zone fire model. A file reader developed for the system allows CFAST models to control the simulation. CFAST, or consolidated fire and smoke transport, is a zone model developed by the National Institute of Standards and Technology (NIST) and used worldwide for compartment fire modeling. Upper layer temperature calculated by CFAST is monitored by the simulation to predict the occurrence of flashover, or full room involvement in a fire. The word "flashover" is displayed to a trainee and the screen is turned red to indicate that this dangerous event in the development of a fire has occurred.
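The flashover monitor can be sketched as follows (a minimal Python sketch; the 600 degree C criterion is a commonly cited rule of thumb used here as an assumption, since the invention states only that the CFAST upper layer temperature is monitored):

    FLASHOVER_TEMP_C = 600.0   # assumed upper-layer flashover criterion

    def check_flashover(upper_layer_temp_c):
        # Returns the warning to display; the screen is also turned red.
        if upper_layer_temp_c >= FLASHOVER_TEMP_C:
            return "flashover"
        return None

    # Example: upper layer temperatures at successive times in a simulation.
    for t, temp in [(30, 310.0), (60, 480.0), (90, 615.0)]:
        warning = check_flashover(temp)
        print(f"t={t}s upper layer {temp:.0f} C {warning or ''}")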
A key component in a fire fighting simulation is simulated behavior and appearance of an extinguishing agent. In this embodiment, water application from a vari-nozzle 9, 23, and 25 has been simulated using a particle system. To convincingly represent a water stream with minimal computation, a surface representation of a particle system was devised. This representation allows very few particles to represent a water stream, as opposed to alternative methods that would require the entire volume of water to be filled with particles. Behavior such as initial water particle velocity and hose stream range for different nozzle settings is assigned to a water particle system. Water particles are then constrained to emit in a ring pattern from the nozzle location each time the system is updated. This creates a series of rings of particles 22 as seen in FIG 6. The regular emission pattern and spacing of particles allows a polygon surface to easily be created using the particles as triangle vertices, as seen in the wireframe mesh 24 in FIG 7. The surface 24 is texture-mapped with a water texture, and the texture map is translated in the direction of flow at the speed of the flow. A second surface particle system that is wider than the first is given a more transparent texture map to soften the hard edge of the surface particle system representation. A third particle system using small billboards to represent water droplets is employed to simulate water splashing.
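The ring emission and surface stitching can be sketched as follows (a minimal Python sketch; the particle count per ring, the cone spread, and the choice of the X axis for the stream direction are assumptions):

    import math

    def emit_ring(nozzle_pos, axis_dist, radius, n=12):
        # One ring of n particles around the stream axis (taken along +X here).
        x0, y0, z0 = nozzle_pos
        return [(x0 + axis_dist,
                 y0 + radius * math.cos(2.0 * math.pi * i / n),
                 z0 + radius * math.sin(2.0 * math.pi * i / n)) for i in range(n)]

    def stitch(ring_a, ring_b, n=12):
        # Triangles between consecutive rings; ring_a and ring_b are the
        # starting indices of each ring in the flattened vertex list.
        tris = []
        for i in range(n):
            j = (i + 1) % n
            tris.append((ring_a + i, ring_b + i, ring_b + j))
            tris.append((ring_a + i, ring_b + j, ring_a + j))
        return tris

    # Example: three rings of a widening stream and the mesh between them.
    rings = [emit_ring((0.0, 1.0, 0.0), d, 0.02 + 0.1 * d) for d in (0.0, 0.25, 0.5)]
    verts = [p for ring in rings for p in ring]   # flattened vertex list
    triangles = stitch(0, 12) + stitch(12, 24)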
To add realism to the behavior of the water stream, collision detection with the polygonal room and contents model is employed. A ray is created from a particle's current position and its previous position, and the ray is tested for intersection with room polygons to detect collisions. When a collision between a water particle and room polygon is detected, the particle's velocity component normal to the surface is reversed and scaled according to an elasticity coefficient. The same collision method is applied to smoke particles when they collide with the ceiling of a room. Detection of collision may be accomplished in a number of ways. The "brute force" approach involves testing every particle against every polygon. For faster collision detection, a space partitioning scheme may be applied to the room polygons in a preprocessing stage to divide the room into smaller units. Particles within a given space are only tested for collision with polygons that are determined to be in that space in the preprocessing stage. Some space partitioning schemes include creation of a uniform 3-D grid, binary space partitioning (BSP), and octree space partitioning (OSP).
A simpler approach to collisions that is applicable in an empty rectangular room is the use of an axis-aligned bounding box. In such an implementation, particles are simply given minimum and maximum X, Y, and Z coordinates, and a collision is registered if the particle position meets or exceeds the specified boundaries.
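A minimal Python sketch of this bounding-box collision follows (the elasticity coefficient is an illustrative assumption):

    def aabb_collide(pos, vel, box_min, box_max, elasticity=0.3):
        # Clamp the particle to the box and reverse the velocity component
        # normal to the struck wall, scaled by the elasticity coefficient.
        pos, vel = list(pos), list(vel)
        for axis in range(3):
            if pos[axis] < box_min[axis]:
                pos[axis] = box_min[axis]
                vel[axis] = -vel[axis] * elasticity
            elif pos[axis] > box_max[axis]:
                pos[axis] = box_max[axis]
                vel[axis] = -vel[axis] * elasticity
        return tuple(pos), tuple(vel)

    # Example: a water particle striking the floor of an empty 4 m x 3 m room.
    print(aabb_collide((1.0, -0.02, 2.0), (3.0, -5.0, 0.0),
                       (0, 0, 0), (4, 3, 4)))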
To increase the realism of water application, steam is generated when water particles collide at or near the location of the fire. Steam particle emitters are placed at the collision locations and they are given an emittance rate that is scaled by the size of the fire and the inverse of the collision's distance from the fire. Steam particles are rendered as spherically billboarded, texture-mapped polygons similar to the smoke particles in FIG 2, but with a different texture map 11 and different particle behavior. In compartment fire fighting, steam is generated when a hose stream is aimed at the upper, hot gas layer. Steam particle systems may be placed in this layer to simulate this phenomenon. Steam emittance in the upper layer can be directly proportional to the temperature of the upper layer as calculated by CFAST.
To simulate extinguishment, a number of techniques are employed. Water particles that collide with the surface on which the flame base is located are stored as particles that can potentially contribute to extinguishment. The average age of these particles is used in conjunction with the nozzle angle to determine the average water density for the extinguishing particles. Triangles are created using the particle locations as vertices. If a triangle is determined to be on top of the fire, then an extinguishment algorithm is applied to the fire.
Extinguishing a fire primarily involves reducing and increasing the flame height in a realistic manner. This is accomplished by managing three counters that are given initial values representing extinguish time, soak time, and reflash time. If intersection between water stream and flame base is detected, the extinguish time counter is decremented, and the flame height is proportionately decreased until both reach zero. If water is removed before the counter reaches zero, the counter is incremented until it reaches its initial value, which increments the flame height back to its original value. After flame height reaches zero, continued application of water decrements the soak counter until it reaches zero. If water is removed before the soak counter reaches zero, the reflash counter decrements to zero and the flames re-ignite and grow to their original height. The rate at which the extinguish and soak counters are decremented can be scaled by the average water density for more realistic behavior. To allow more realistic extinguishing behavior, a flame base is divided into a 2-D grid of smaller areas. Each grid square is an emitter for three particle systems: persistent flames, intermittent flames, and buoyant plume. When flame particles are born in a grid square, they are given a velocity directed towards the center of the flame base and a life span inversely proportional to their initial distance from the flame center. This allows multiple flame particle systems to appear as a single fire. Each grid square has an independent flame height, extinguish counter, soak counter, and reflash counter. This allows portions of a flame to be extinguished while other portions continue to burn. This is especially useful for larger fires where the hose stream can only be directed at one part of the fire at a time.
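The counter logic can be sketched per grid square as follows. All time constants are assumed placeholders, and the behavior of the soak counter across a reflash is likewise an assumption, since the description does not specify it.

```python
class FlameCell:
    """One grid square of the flame base with its own extinguish, soak,
    and reflash counters, as described above."""
    def __init__(self, extinguish_t=5.0, soak_t=3.0, reflash_t=4.0,
                 full_height=1.0):
        self.extinguish_t0 = extinguish_t
        self.extinguish = extinguish_t
        self.soak = soak_t
        self.reflash_t0 = reflash_t
        self.reflash = reflash_t
        self.full_height = full_height

    @property
    def flame_height(self):
        # Flame height tracks the extinguish counter proportionately.
        return self.full_height * self.extinguish / self.extinguish_t0

    def update(self, dt, water_applied, water_density=1.0):
        if water_applied:
            rate = dt * water_density        # scaled by average water density
            if self.extinguish > 0.0:
                self.extinguish = max(0.0, self.extinguish - rate)
            else:
                self.soak = max(0.0, self.soak - rate)
            self.reflash = self.reflash_t0   # water holds off any reflash
        elif self.extinguish > 0.0:
            # Water removed before knockdown: flames grow back toward full.
            self.extinguish = min(self.extinguish_t0, self.extinguish + dt)
        elif self.soak > 0.0:
            # Knocked down but not yet soaked: count down toward reflash.
            self.reflash -= dt
            if self.reflash <= 0.0:          # re-ignite at original height
                self.extinguish = self.extinguish_t0
                self.reflash = self.reflash_t0
        # extinguish == 0 and soak == 0: fully extinguished; stays out.
```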
To further enhance the realism of the simulation, computation of approximate forces from the agent stream on airborne particles is also performed. FIG 9 represents the preferred embodiment of a real-time graphical simulation of an extinguishing agent 29 (e.g., water or foam) exiting a nozzle 28 in the vicinity of a fire and smoke plume 30. If the extinguishing agent, fire, and smoke plume are modeled as particle systems, each extinguishing agent, fire, or smoke particle will have a mass and a velocity associated with it. Using a scale factor based on distance, a force on the fire and smoke particles can be calculated from the speed and direction of extinguishing agent particles. To model the effect, an equation of the following form (other forms are envisioned, but they will mainly show similar characteristics):
PVout = PVin + (K × ExtAV) / R
can be made, where:
PVout is the velocity (3-D vector) of the smoke and fire particles after the force is applied
PVin is the velocity (3-D vector) of the smoke and fire particles before the force is applied
ExtAV is the velocity (3-D vector) of the extinguishing agent particles
R is the radial distance between the smoke and fire particles and the extinguishing agent particles
K is a factor that can be adjusted (from a nominal value of 1) for:
— Desired friction of the particle interactions
— The time in between simulation updates (will be referred to as Δt)
— Mass of the smoke, fire, and extinguishing agent particles
— Or other particular simulation characteristics where a value of 1 produces unrealistic results
A force on the fire and smoke particles can be calculated based on the change in velocity:
F = Mass × (PVout − PVin) / Δt
where:
F is the actual force (3-D vector) to be applied to the smoke and fire particles
Mass is the mass of the fire or smoke particles
Δt is the time in between simulation updates
Taking the above two equations, substituting and simplifying, the equation for the calculated force could be:
F = (Mass × K × ExtAV) / (R × Δt)
Other versions of this equation are envisioned, but they are expected to be of a similar nature, with the same inputs as this equation.
The application of the calculated force simulates the visual effect of the extinguishing agent stream causing airflow that alters the motion of fire and smoke particles. Additionally, the calculations can be applied to smoke or other particles that are not part of a fire and smoke plume, such as extinguishing agent passing through ambient steam or ambient smoke particles in a room. The invention extends to other aerosols, gases, and particulate matter, such as dust, chemical smoke, and fumes.
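A hedged sketch of these calculations as they might be applied per smoke or fire particle; the vector helpers, the K and Δt defaults, and the minimum-distance clamp are assumptions.

```python
import math

def vadd(a, b):   return tuple(x + y for x, y in zip(a, b))
def vsub(a, b):   return tuple(x - y for x, y in zip(a, b))
def vscale(a, s): return tuple(x * s for x in a)
def vlen(a):      return math.sqrt(sum(x * x for x in a))

def push_particle(smoke, agent_particles, K=1.0, dt=1.0 / 30.0, min_r=0.1):
    """Apply PVout = PVin + (K / R) * ExtAV against each nearby agent
    particle, then return the equivalent force F = Mass * (PVout - PVin) / dt.
    K defaults to the nominal value of 1 noted above."""
    v_out = smoke["vel"]
    for agent in agent_particles:
        r = max(vlen(vsub(agent["pos"], smoke["pos"])), min_r)
        v_out = vadd(v_out, vscale(agent["vel"], K / r))
    dv = vsub(v_out, smoke["vel"])
    smoke["vel"] = v_out
    return vscale(dv, smoke["mass"] / dt)
```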
As shown in FIG 10, a side view of the preferred embodiment, a further refinement for determining a force to apply to particles in the fire and smoke plume 35 would entail modeling the extinguishing agent 32-34 as cones 32, 33, and 34 (which are affected by gravity and will droop) emanating from the nozzle 31, where the multiple concentric cones 32 and 33 apply varying force. One embodiment that can produce the cones 32, 33, and 34 is a system of rings (which may be modeled as a particle system) emitted from the nozzle, which, when connected, form cones 32, 33, and 34. Those fire and smoke particles 35 which are contained mostly inside the inner cone 34 of the extinguishing agent 32-34 can have one level of force applied, and fire and smoke particles 35 which are not contained within cone 34, but are contained within cones 33 or 32, can have a different, often smaller, force applied to them. Thus, multiple levels of velocity from extinguishing agent and air entrainment can be easily simulated to apply multiple levels of force to the fire and smoke. The additional cones 33 and 32 (or more if additional levels of force are desired) do not have to be drawn in the simulation, as they can be used strictly in determining the force to apply to the fire and smoke.
In the case of modeling the extinguishing agent as concentric cones outlined above, the force applied to a particle can be modeled as: (A) the extinguishing agent cone(s) 32, 33, 34 each having a velocity associated with them, (B) a difference in velocity between a particle 35 and the extinguishing agent cone(s) 32, 33, 34 can be calculated, (C) a force can be calculated that scales with that difference, and (D) the particles 35 will accelerate based on the force calculated, approaching the velocity of the extinguishing agent inside of the cone(s) 32, 33, 34.
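Steps (A) through (D) might be sketched as follows, ignoring the gravity-induced droop of the cones for simplicity; the half-angles, gains, and scaling constant are assumed placeholders.

```python
import math

def in_cone(point, apex, axis, half_angle):
    """True if `point` lies inside an infinite cone from `apex` along the
    unit vector `axis` with the given half-angle in radians."""
    v = tuple(p - a for p, a in zip(point, apex))
    mag = math.sqrt(sum(x * x for x in v))
    if mag == 0.0:
        return True
    cos_to_axis = sum(x * y for x, y in zip(v, axis)) / mag
    return cos_to_axis >= math.cos(half_angle)

def cone_gain(point, nozzle, axis,
              cones=((0.08, 1.0), (0.15, 0.5), (0.25, 0.2))):
    """Gain of the innermost cone containing the point; `cones` pairs
    (half_angle, gain) ordered inner to outer. Zero outside all cones."""
    for half_angle, gain in cones:
        if in_cone(point, nozzle, axis, half_angle):
            return gain
    return 0.0

def cone_force(particle_vel, stream_vel, gain, k=1.0):
    """Step (C): a force that scales with the velocity difference, so the
    particle accelerates toward the agent velocity inside the cone."""
    return tuple(gain * k * (sv - pv)
                 for sv, pv in zip(stream_vel, particle_vel))
```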
The results of the simulated effects described above can be observed by drawing particles as computer-generated graphics primitives using real-time graphics software and hardware. The invention is applicable to such areas as training simulations and computer games.
To enhance the augmented experience, a technique has been implemented to occlude the computer-generated elements, such as fire, water, and smoke, when a desired object (such as a second user) is standing in the path from the user's viewpoint to the computer-generated elements. In the invention, texture maps representing the movable object (such as another user) are mapped onto three orthogonal planes. A tracking sensor is used to determine the actual position and orientation of the object, as appropriate. When the viewer's viewpoint is perpendicular to a plane, the texture map is opaque. When the viewpoint is parallel to the plane, the texture map is transparent. The texture map is faded between these two extremes as the orientation changes, to accomplish a desirable graphical mixing result that matches the silhouette of the object (or human) while maintaining a softer border around the edge of the silhouette contained in the texture map. The appearance of a soft ("fuzzy") border is made by fading to transparent the edges of the object silhouette in the texture map.
In the preferred embodiment, as shown in FIG 11, a series of three texture maps containing silhouettes 36, 42, and 37 (in this case of a human head, from the side, top, and front, respectively) are shown mapped onto each of three orthogonal planes 38, 40 and 41, respectively. The texture maps may fade to transparent at their edges for a fuzzy appearance to the shape. The orthogonal planes are each broken up into 4 quadrants defined by the intersection of the planes, and the 12 resulting quadrants are rendered from back to front for correct alpha blending in OpenGL of the texture maps and planes, with depth buffering enabled. When a plane is perpendicular to the view plane of a virtual viewpoint looking at the object, the plane is rendered completely transparent. A linear fade is used to bring the texture map to completely opaque when the plane is parallel to the view plane. This fade from opaque to transparent as the planes are turned relative to a viewer is responsible for a large part of the desirable fuzzy appearance of the shape. The texture maps used to shade the three planes were created from a digital image of a person, then made into grayscale silhouettes, and so match the silhouette of a human user very well. The edges 39 of the human silhouettes in the texture maps were blurred so that they would fade linearly from solid white (which represents the silhouette) to solid black (which represents non-silhouette portions of the texture map) to look better in an augmented reality situation where a little bit of a fuzzy edge is desirable. This fuzzy edge spans what is equivalently approximately 0.5 inches in real distance.
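A minimal sketch of the per-plane fade, assuming the fade is linear in the angle between the view direction and each plane normal; that reading of the linear fade is an assumption.

```python
import math

def plane_alpha(view_dir, plane_normal):
    """Opacity in [0, 1] for one billboard plane; both inputs are unit
    vectors. 1.0 when the plane faces the viewer, 0.0 when edge-on."""
    c = abs(sum(v * n for v, n in zip(view_dir, plane_normal)))
    angle = math.acos(min(1.0, c))           # 0 face-on, pi/2 edge-on
    return 1.0 - angle / (math.pi / 2.0)     # linear fade over 90 degrees

def billboard_alphas(view_dir, normals=((1, 0, 0), (0, 1, 0), (0, 0, 1))):
    """Alphas for the three orthogonal planes of one occlusion model,
    here given in the model's local frame."""
    return [plane_alpha(view_dir, n) for n in normals]
```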
FIG 12 depicts a human 44 wearing a head tracker 43 mounted on a head mounted display. The virtual representation of human 44, as used in the inventive technique, is shown in FIG 13. Item 43 in FIG 12 is a motion tracker, in this case a six-degree-of-freedom motion tracker that measures the head location and orientation. The position and orientation information from tracker 43 can be applied to orthogonal plane billboards.
If the object being depicted has a pivot point, such as pivot point 46 in FIG 13 for a human neck-torso joint, an orthogonal plane billboard torso 47 can be created in approximately the correct place relative to the joint. The torso in this instance may be designed to remain upright, only rotating about a vertical axis. The head in this instance has full 6 degree-of-freedom motion capability based on the data coming from the tracker worn on the head of the user. This allows the head orthogonal plane billboard to be lined up correspondingly with the user's head. The torso orthogonal plane billboard is attached to the pivot point 46 and is placed "hanging" straight down from that point, and has 4 degrees of freedom: three to control its position in space, and one controlling the horizontal orientation.
The head and torso models, when lined up to a real human, occlude computer-generated graphics in a scene. If augmented reality video mixing is achieved with a luminance key to combine live and virtual images, white head and torso models will mask out a portion of a computer-generated image for replacement with a live video image of the real world. This invention can be applied to any real world movable objects for which an occlusion model may be needed.
The above technique is also applicable to a movable non-person real world object. If the non-person object has no joints, then such an implementation is simpler since the complexity of coupling the separate head and torso models is avoided. The technique for the single movable real-world physical object is functionally identical to the above method when only the head model is used.
3-D audio allows sound volume to diminish with distance from a sound emitter, and it also works with stereo headphones to give directionality to sounds. 3-D audio emitters are attached to the fire and the hose nozzle. The fire sound volume is proportional to the physical volume of the fire.
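An illustrative sketch of this behavior; the attenuation curve and constants are assumptions rather than the system's actual audio parameters.

```python
import math

def audio_gain(listener_pos, emitter_pos, base_gain=1.0, ref_dist=1.0):
    """Inverse-distance attenuation, clamped at the reference distance so
    gain never exceeds base_gain."""
    d = math.dist(listener_pos, emitter_pos)
    return base_gain * ref_dist / max(d, ref_dist)

def fire_gain(fire_volume, gain_per_unit_volume=0.05):
    """Fire sound volume proportional to the fire's physical volume."""
    return min(1.0, gain_per_unit_volume * fire_volume)
```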
Appendix A contains settings for the parameters of particle systems used in the invention. These parameters are meant to be guidelines that give realistic behavior for the particles. Many of the parameters are changed within the program, but the preferred starting parameters for flames, smoke, steam, and water are listed in the appendix.
Approach to Untethered ARBT for Firefighters. The basic philosophy behind the objectives herein for developing an untethered ARBT system for firefighters follows from a systems-based approach to training system development. The essential steps in such an approach are:
• Determine training goals and functions
— Implement a development strategy
• Perform training needs analysis
— Assess training needs
— Collect and analyze task data
• Undertake training system development
— Write training objectives
— Construct criterion measures
— Construct evaluative measures
— Choose a delivery system
— Select and sequence content
— Select an instructional strategy
• Develop augmented reality firefighter training system software and hardware
— Develop/implement an accurate position tracking system
— Develop/implement capability for mixing real and virtual imagery
— Develop/implement capability for anchoring virtual objects in the real world
— Develop/implement models for occluding real objects by virtual objects and virtual objects by real objects
— Develop/implement technology to display augmented reality scenes to the firefighter
— Develop/implement models for fire, smoke, water, and steam
— Perform system integration of the above (See FIG 8)
• Establish training system validity
— Test & evaluate
The opportunity identified above amounts to an assessment of training needs of firefighters tempered by the realities of state-of-the-art technologies. The issues in this needs assessment include:
• Sensory representations of fire and smoke
• Real-time presentation of those sensory representations
• Modeling of fire spread
• Instructor authoring of fire training exercises
We consider pre-flashover compartment fires in an effort to demonstrate feasibility of our approach to a training system. Flashover refers to the point in the evolution of a compartment fire in which the fire transitions from local burning to involvement of the entire compartment.
One of the key elements of our approach is the precomputation of fire dynamics. We have elected to use a zone-type fire model. A zone-type fire model should provide sufficient accuracy for meeting our training objectives. There are a number of zone models available, including the Consolidated Fire and Smoke Transport Model (CFAST) from NIST and the WPI fire model from Worcester Polytechnic Institute, among others.
The outputs of a zone-type fire model can be extended to achieve a visual representation of a compartment fire.
Task 1. Infrastructure for Real-Time Display of Fire

Task Summary. The organization and structuring of information to be displayed is as important as actual display processing for real-time dynamical presentation of augmented environments. As a firefighter moves through a scenario (using an augmented reality device), the location, extent, and density of fire and smoke change. From a computational perspective, an approach is to examine the transfer of data to and from the hard disk, through system memory, to update display memory with the desired frequency.

Approach to This Task. Precomputation of the bulk of a firefighter training simulation implies that most of the operations involved in real-time presentation of sensory information revolve around data transfer. In order to identify bottlenecks and optimize information throughput, it is advantageous to analyze resource allocation in the context of some systems model, such as queuing theory. Given such an analysis, we may then implement the data structures and the memory management processes that form what we call the infrastructure for real-time presentation of sensory information.
Risks and Risk Management. The risks inherent in this task arise from unrecognized or unresolved bottlenecks remaining in our infrastructure for real-time presentation of sensory information. This risk is managed in our approach by thorough analysis of resource allocation requirements prior to commitment in software of any particular data management configuration. Furthermore, subsequent tasks build on this infrastructure and therefore continue the process of challenging and reinforcing our approach to an infrastructure for real-time presentation of sensory information.
Measures of Success. Completion of this task can be recognized by the existence of a fully implemented and tested data management system. The level of success achieved for this task can be directly measured in terms of data throughput relative to system requirements.
Task 2. Visual Representation of Smoke and Fire

Task Summary. The way in which sensory stimuli are presented in an ARBT scenario may or may not affect task performance by a student. It is essential to capture the aspects of the sensory representations of fire and smoke that affect student behavior in a training scenario without the computational encumbrance of those aspects that do not affect behavior. For the purposes of providing sensory stimuli for firefighter training, we need to know not only the spatial distribution and time evolution of temperature and hot gases in a compartment fire, but also the visible appearance of smoke and flame, along with sounds associated with a burning compartment, taken over time. There are three tiers of attributes of fire and smoke:
• First tier: location and extent
• Second tier: opacity, luminosity, and dynamics
• Third tier: illumination of other objects in the scene
Approach. Part of the rationale behind the problem identified above is the degree to which the time- and 3D-space-dependent elements of a desired scenario for a compartment fire can be precomputed. The visual representations of fire and smoke can be precomputed. In order to do so, and still retain real-time effects, the appearance of the fire and smoke from reasonable vantage points within the compartment would be determined. As a firefighter moves through a training simulation, the appropriate data need only be retrieved in real time to provide the necessary visual stimulation.
The emission of visible light from flame and the scattering and absorption of light by smoke are to be modeled. A zone-type fire model can be used to determine the location and extent of the smoke and flame. In addition to these quantities, the zone-type fire model also will yield aerosol densities in a given layer. Values for optical transmission through smoke can be calculated using a standard model such as found in the CFAST (Consolidated Fire and Smoke Transport) model, or in the EOSAEL (Electro-Optical Systems Atmospheric Effects Library) code.
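A minimal sketch, assuming a Beer-Lambert attenuation of the kind such codes implement; the default specific extinction coefficient is an assumed placeholder, not a value taken from CFAST or EOSAEL.

```python
import math

def transmittance(aerosol_density, path_length, specific_extinction=8.7):
    """T = exp(-k * c * L): the fraction of light surviving a path of
    length L through smoke of aerosol mass concentration c, for a
    specific extinction coefficient k."""
    return math.exp(-specific_extinction * aerosol_density * path_length)
```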
It is thought that the intermittent flame region in a fire oscillates with regularity, and that the oscillations arise from instabilities at the boundary between the fire plume and the surrounding air. The instabilities generate vortex structures in the flame which in turn rise through the flame, resulting in observed oscillations. For the purposes of this description, the visual dynamics of flame can be modeled from empirical data such as is known in the art.

Measures of Success. This task can be judged on the aesthetics of the visual appearance of the simulated fire and smoke. Ultimately, the visual appearance of fire and smoke should be evaluated relative to the efficacy of an ARBT system.
Task 3. Position Anchoring

Task Summary. Augmented reality techniques rely on superimposing information onto a physical scene. Superposition means that information is tied to objects or events in the scene. As such, it is necessary to compensate for movement by an observer in order to maintain the geometric relations between superimposed information and underlying physical structures in the scene.
Approach. Position sensors in the form of a head tracker can, in real-time, calculate changes in location caused by movement of a firefighter within a training scenario. Virtual objects will be adjusted accordingly to remain "fixed" to the physical world.
Risks and Risk Management. Rapid movements by an observer can cause superimposed information to lag behind the apparent motion of objects in the field of view. This lag may result in the feeling that the superimposed information is floating independent of the scene rather than remaining anchored to a specific position. In severe cases the lag in motion compensation may result in a form of simulator sickness which arises when conflicting motion information is received by the brain. In order to minimize this effect, we can again consider the complexity of the visual presentation of augmented information. (It may also be possible to essentially blank out the augmented information until observer movement stabilizes.)
Measures of Success. Anchoring virtual flame and smoke to a specified position in a real room with minimal motion lag signals the completion of this task.
Task 4. Authoring Tools

Task Summary. The implementation of any sort of authoring tool for instructors to create training scenarios is beyond the scope of this description. However, because we do envision the creation of a prototype authoring system, this task is devoted to the investigation of the issues and characteristics involved. An authoring system typically takes the form of a visual programming interface over a modular toolkit of fundamental processes. A training instructor can use an authoring tool to visually select and sequence modules to create the desired training course without ever having to resort to direct programming in some computer language such as C or FORTRAN.
Approach. Authoring tools do exist for construction of general, business-oriented, computer-based training. Examination of successful attempts can serve as an instructive guide to specification of an authoring system supporting ARBT for firefighters.

Risks and Risk Management. Although there is no risk, per se, inherent in this task, authoring any real-time system is problematic. An authoring system relies on the existence of independent modules that are executed through a central control facility. If the control module handles all data traffic, then bottlenecks may occur that would not necessarily exist in an optimized, real-time system.

Measures of Success. This task leads into development for instructors of an authoring system for an ARBT system for firefighters. The measure of success then lies in the coverage of issues pertaining to
• Authoring of real-time systems
• Commercially available authoring tools or systems
Task 5. ARBT Technology Demonstration

Task Summary. The previous tasks herein developed the pieces of an augmented reality fire simulation. It remains to pull everything together into a coherent demonstration to show the suitability of the selected technologies to the delivery of training to firefighters.

Approach. A scenario consisting of a real room and virtual fire is to be constructed, and a problem solving situation will be presented to prospective trainees.
Risks and Risk Management. The obvious risk is that the virtual fire and smoke training demonstration scenarios do not achieve adequate realism to an experienced firefighter.

Measures of Success. The real measure of success for this task lies in the realism perceived by a trainee. In order to judge the success of the demonstration, the users will evaluate the effectiveness of the simulation.
APPENDIX A
The descriptions in this Appendix contain parameters that may be used to describe the behavior of particle systems used to represent the following phenomena:
• Flames
• Smoke plume
• Smoke with random motion to be used in the upper layer
• Steam
• Water spray from a vari-nozzle
Default Fire Parameters
System Type: Faded, Directional Quads
Emitter Shape: Rectangular
Global Force Vector (lbf): 0.0, 1.00, 0.0
Particle Mass: 0.2 lbm
Mass Variance: 0.0
Yaw: 0.0 radian
Yaw Variance: 6.28 radian
Pitch: 0.0 radian
Pitch Variance: 0.05 radian
Initial Speed: 0.65 ft/s
Initial Speed Variance: 0.05 ft/s
Emission Rate: 750 particles/sec
Emission Rate Variance: 500 particles/sec
Life Span: 1.3 sec
Life Span Variance: 0.1 sec
Start Color (RGBA): 1.0, 1.0, 1.0, 0.0
Middle Color (RGBA): 1.0, 1.0, 1.0, 1.0
End Color (RGBA): 1.0, 1.0, 1.0, 0.0
Random Force: 0.45 lbf
Start Scale: 1.0
End Scale: 1.0
Default Smoke Parameters
System Type: Billboard
Emitter Shape: Rectangular
Global Force Vector (lbf): 0.0, 1.00, 0.0
Particle Mass: 0.2 lbm
Mass Variance: 0.05
Yaw: 0.0 radian
Yaw Variance: 6.28 radian
Pitch: 1.2 radian
Pitch Variance: 0.393 radian
Initial Speed: 0.2 ft/s
Initial Speed Variance: 0.0 ft/s
Emission Rate: 10.0 particles/sec
Emission Rate Variance: 0.25 particles/sec
Life Span: 3.25 sec
Life Span Variance: 0.25 sec
Start Color (RGBA): 1.0, 1.0, 1.0, 0.0
Middle Color (RGBA): 1.0, 1.0, 1.0, 0.85
End Color (RGBA): 1.0, 1.0, 1.0, 0.0
Random Force: 0.4 lbf
Start Scale: 0.105
End Scale: 4.2
Default Layer Smoke Parameters
System Type: Billboard
Emitter Shape: Rectangular
Global Force Vector (lbf): 0.0, 0.025, 0.0
Particle Mass: 0.2 lbm
Mass Variance: 0.05
Yaw: 0.0 radian
Yaw Variance: 6.28 radian
Pitch: 1.2 radian
Pitch Variance: 0.393 radian
Initial Speed: 0.0 ft/s
Initial Speed Variance: 0.0 ft/s
Emission Rate: 35.0 particles/sec
Emission Rate Variance: 5.0 particles/sec
Life Span: 4.0 sec
Life Span Variance: 0.5 sec
Start Color (RGBA): 1.0, 1.0, 1.0, 0.0
Middle Color (RGBA): 1.0, 1.0, 1.0, 1.0
End Color (RGBA): 1.0, 1.0, 1.0, 0.0
Random Force: 0.35 lbf
Start Scale: 1.75
End Scale: 3.75
Default Steam Parameters
System Type: Billboard
Emitter Shape: Rectangular
Global Force Vector (lbf): 0.0, 0.4, 0.0
Particle Mass: 0.2 lbm
Mass Variance: 0.05
Yaw: 0.0 radian
Yaw Variance: 6.28 radian
Pitch: 1.2 radian
Pitch Variance: 0.393 radian
Initial Speed: 0.6 ft/s
Initial Speed Variance: 0.0 ft/s
Emission Rate: 50.0 particles/sec
Emission Rate Variance: 10.0 particles/sec
Life Span: 2.5 sec
Life Span Variance: 0.5 sec
Start Color (RGBA): 1.0, 1.0, 1.0, 0.0
Middle Color (RGBA): 1.0, 1.0, 1.0, 0.45
End Color (RGBA): 1.0, 1.0, 1.0, 0.0
Random Force: 0.25 lbf
Start Scale: 0.7
End Scale: 2.8
Default Water Stream Parameters
System Type: Surface
Emitter Shape: Spherical
Global Force Vector (lbf): 0.0, -2.11, 0.0
Particle Mass: 0.0656 lbm
Mass Variance: 0.005
Initial Speed: 162.0 ft/s
Initial Speed Variance: 0.0 ft/s
Life Span: 4.5 sec
Life Span Variance: 0.4 sec
Start Color (RGBA): 0.65, 0.65, 1.0, 1.0
Middle Color (RGBA): 0.65, 0.65, 1.0, 0.5
End Color (RGBA): 0.75, 0.75, 1.0, 0.0
Random Force: 0.15 lbf
Although specific features of the invention are shown in some drawings and not others, this is for convenience only, as each feature may be combined with any or all of the other features in accordance with the invention.
Other embodiments will occur to those skilled in the art and are within the following claims.
What is claimed is:


1. A method of accomplishing an augmented reality firefighter training system for a user, comprising: providing a head-worn display unit; providing a real device that the user can operate to simulate applying extinguishing agent, to be carried by the user during firefighter training; providing motion tracking hardware, and attaching it both to the head-worn display unit and the extinguishing agent device; using the motion tracking hardware that is attached to the head worn unit to determine the location and direction of the viewpoint of the head-worn display unit; using the motion tracking hardware that is attached to the extinguishing agent device to determine the location and direction of the extinguishing agent device; determining the operating state of the extinguishing agent device; using a computer to generate graphical elements comprising simulated fire graphical elements, simulated multiple layer smoke obscuration graphical elements, and simulated application of an extinguishing agent, showing the extinguishing agent itself emanating directly from the extinguishing agent device, and showing the interaction and extinguishing of the agent with the fire and airborne particles in the system; rendering the generated graphical elements to correspond to the user's viewpoint; occluding graphical elements from the user's viewpoint which are blocked by selected persons or objects; and creating for the user a mixed view comprised of an actual view of the real world as it appears in front of the user, where graphical elements can be placed any place in the real world and remain anchored to that place in the real world regardless of the direction in which the user is looking, wherein the rendered graphical elements are superimposed on the actual view, to accomplish an augmented reality view of fire and smoke in the real world, and the application of extinguishing agent to the fire, and the effect of extinguishing agent on the fire.
2. A method of generating a three-dimensional fire and smoke plume for graphical display, comprising: employing a first particle system for lower persistent flames; employing a second particle system for upper intermittent flames; texture mapping both of said particle systems; giving flame particles an impetus towards the flame center to simulate air entrainment; and employing a third particle system for a buoyant smoke plume.
3. A method of simulating multiple layer obscuration from a viewpoint, comprising: displaying polygonal representations of objects; calculating transmittance along a vector from the viewpoint to the polygons; applying said transmittance, original polygon color, and obscuration color to recolor the polygons; and modifying said transmittance to represent light sources.
4. A method of simulating flow of an extinguishing agent from a fire hose or other agent application means for graphical display, comprising: employing at least one particle system to simulate physical behavior of an extinguishing agent stream; organizing particles into regularly spaced rings; using particle locations as polygon vertices to create a surface; texture mapping said surface; and moving said texture map in the direction of extinguishing agent flow to simulate extinguishing agent flow.
5. The method of claim 1 in which flame height and smoke layer parameters of the graphical elements are controlled by output from a zone fire model.
6. The method of claim 1 in which the simulated fire graphical elements are generated by: employing a first particle system for lower persistent flames; employing a second particle system for upper intermittent flames; texture mapping both of said particle systems; giving flame particles an impetus towards the flame center to simulate air entrainment; and employing a third particle system for a smoke plume.
7. The method of claim 1 in which the smoke obscuration graphical elements rendering is accomplished by: displaying polygonal representations of objects; calculating transmittance along a vector from the viewpoint to the polygons; applying said transmittance, original polygon color, and obscuration color to recolor the polygons; and modifying said transmittance to represent light sources.
8. The method of claim 1 in which the extinguishing agent is simulated using the following method: employing a particle system to simulate physical behavior of an extinguishing agent stream; organizing particles into regularly spaced rings; using particle locations as polygon vertices to create a surface; texture mapping said surface; and moving said texture map in the direction of extinguishing agent flow to simulate extinguishing agent flow.
9. The method of claim 1 in which real-world imagery is provided by a head-worn camera.
10. The method of claim 9 in which real-world imagery and the rendered graphical elements are mixed via a luminance key in a video mixer.
11. The method of claim 10 in which a separate image indicating the transparency of the rendered graphical elements is generated and used as an external key.
12. The method of claim 9 in which real-world imagery and the rendered graphical elements are mixed on a computer using a transparency (alpha) channel for blending.
13. The method of claim 1 in which real-world imagery and the rendered graphical elements are combined on an optical see-through head-worn display.
14. The method of claim 4 where a particle system is additionally used to represent water droplets.
15. The method of claim 2 in which particles are represented by billboarded, textured polygons with opaque centers and less opaque or transparent edges.
16. The method of claim 1 in which multiple trainees with similar apparatus can view and interact with the same fire scenario.
17. The method of claim 2 in which graphics display depth buffer writing is disabled to improve blending of particles without ordering particles from back to front.
18. The method of claim 2 further including placing an apparent light source at the flame location.
19. The method of claim 18, further including fluctuating the apparent light source synchronously with emissions of the intermittent flames.
20. The method of claim 2 in which the smoke particles are textured, spherically billboarded particles.
21. The method of claim 2 in which a drag force is used to create particle acceleration in the persistent flame, near-constant particle velocity in the intermittent flame, and deceleration in the smoke plume.
22. The method of claim 2 in which the flames have a base area, and the intermittent flames are displayed with intermittence that is based on the area of the flame base.
23. The method of claim 2 in which flames have a base area that is represented by a grid of smaller sub-areas.
24. The method of claim 23 in which height of flames emanating from each sub-area is separately controllable.
25. The method of claim 3 further including displaying a billboarded, texture-mapped polygon to simulate the diffuse glow of a light source through obscuration.
26. The method of claim 3 further including employing a smoke particle system for an upper smoke layer.
27. The method of claim 4 in which there are surfaces in the display, and wherein the intersection of a water stream with a surface generates steam particles.
28. The method of claim 4 in which intersection of the extinguishing agent stream with a flame base grid sub-area is determined, and used to reduce the height of the flames emanating from that sub-area.
29. The method of claim 24 in which intersection of the extinguishing agent stream with a flame base grid sub-area is determined, and used to reduce the height of the flames emanating from that sub-area.
30. The method of claim 29 in which flame height is reduced to zero in a specified extinguish time if extinguishing agent is applied constantly.
31. The method of claim 30 in which extinguishing agent must be applied for a specified soak time after flame height has been reduced to zero.
32. The method of claim 31 in which flames reflash and grow after a specified reflash time if extinguishing agent application is stopped before the specified amount of soak time.
33. The method of claim 4 in which there are polygonal models in the display, and wherein the extinguishing agent particles' collisions with a polygonal model are determined, and the direction of movement of the extinguishing agent particles is changed accordingly based on particle elasticity.
34. The method of claim 4 in which two rings of particles are emitted to create an outer and inner surface.
35. The method of claim 34 in which the outer surface is given a different, more transparent texture than the inner surface in order to mask the surface edges.
36. The method of claim 2 in which at least one of the height of the persistent flame region and the height of the intermittent flame region is controllable.
37. The method of claim 2 in which said texture map is projected through the flame particle systems and rotated about a vertical axis.
38. The method of claim 3 where layer heights and optical densities are separately controlled.
39. The method of claim 25 where the height and width of the glow polygon are related to the height and width of the light source.
40. The method of claim 3 where the light source intensity is specified with a luminosity color component.
41. The method of claim 2 where any or all particle systems are tested for collision with a geometric model and the direction of movement of the particles is changed accordingly based on particle elasticity.
42. The method of claim 1 further comprising creating a geometric representation of real-world objects and using the geometric representation to occlude the computer-generated graphical elements so as to allow them to appear appropriately behind or in front of real-world objects.
43. The method of claim 1 in which the rendering comprises matching the user's virtual eye location and field of view to the user's real eye location and field of view.
44. The method of claim 1 in which view frustum culling is applied to the computer-generated graphical elements to optimize graphics performance.
45. The method of claim 33 in which collision is detected by testing every particle with all polygons that they may collide with.
46. The method of claim 33 in which selected particles are tested for collision with selected polygons in order to improve speed.
47. The method of claim 46 in which space is partitioned into a 3-D grid in order to limit the number of polygons and particle pairings that are tested for collision.
48. The method of claim 46 in which space is partitioned using a binary space partition (BSP) tree approach in order to determine which particles to test for collision with which polygons.
49. The method of claim 46 in which space is partitioned using an octree space partition (OSP) tree approach in order to determine which particles to test for collision with which polygons.
50. The method of claim 33 in which collision is determined by setting axis-aligned minimum and maximum bounds for particle motion.
51. The method of claim 41 in which collision is detected by testing every particle with all polygons that they may collide with.
52. The method of claim 41 in which selected particles are tested for collision with selected polygons in order to improve speed.
53. The method of claim 52 in which space is partitioned into a 3-D grid in order to limit the number of polygons and particle pairings that are tested for collision.
54. The method of claim 52 in which space is partitioned using a binary space partition (BSP) tree approach in order to determine which particles to test for collision with which polygons.
55. The method of claim 52 in which space is partitioned using an octree space partition (OSP) tree approach in order to determine which particles to test for collision with which polygons.
56. The method of claim 41 in which collision is determined by setting axis-aligned minimum and maximum bounds for particle motion.
57. The method of claim 1 in which the device that the user can operate to simulate applying extinguishing agent is instrumented to allow adjustment of the flow rate and fog pattern of the computer-generated extinguishing agent.
58. The method of claim 2 in which smoke particles are given the properties of volume and optical density in order to facilitate accumulation of a smoke layer.
59. The method of claim 2 in which the motion of particles can be affected by an external force such as airflow through a vent or extinguishing agent flow from a nozzle.
60. The method of claim 5 in which the upper layer temperature is monitored and an indication of flashover is displayed when it is predicted to occur.
61. The method of claim 1 further comprising attaching audio to the location of objects in the mixed view.
62. The method of claim 61 in which the volume of the audio increases with fire size.
63. The method of claim 2 in which flame height is controlled by output from a zone fire model.
64. The method of claim 3 in which smoke layer parameters are controlled by output from a zone fire model.
65. The method of claim 1 further comprising periodically updating the prediction amount for motion tracking based on display frame rate in order to best anchor graphical elements in the real environment.
66. A method of calculating in real-time a force to apply to components in a computer simulation to simulate the effects that an extinguishing agent stream has on the airflow in the vicinity of the stream.
67. The method of claim 66 wherein the said simulation components are selected from the group of simulations consisting of fire plumes, smoke plumes, ambient steam, ambient smoke, fumes, other aerosols, gases, and particulate matter.
68. The method of claim 66 where the said computer simulation components are particle systems.
69. The method of claim 66 where a particle system is used to simulate the extinguishing agent.
70. The method of claim 66 where multiple layers of cones of extinguishing agent are used to simulate multiple levels of force to apply to the simulation components.
71. The method of claim 70 where the applied force varies from cone layer to cone layer in order to accelerate the components of the simulation towards the velocity of the extinguishing agent stream in the cone layer.
72. The method of claim 69 where the extinguishing agent particle mass, type, particle velocity, and distance from extinguishing agent particles to other particles are used to determine a force to apply to simulation components.
73. The method of claim 66 where the results of the calculations are used to graphically simulate the effects of the extinguishing agent stream on the components of the simulation.
74. A method of occluding virtual objects with a real-world human in augmented reality comprising:
Creating an orthogonal plane billboard with soft texture edges representing a user's head;
Creating an orthogonal plane billboard with soft texture edges representing a user's torso;
Positioning and orienting the head billboard using motion tracker data;
Positioning and orienting the torso billboard relative to the head billboard;
Displaying the head and torso billboards in real-time 3-D to correspond to the location of a real person; and
Mixing the resulting image with a live image of a real person.
75. A method of occluding virtual objects with a movable real-world physical object in augmented reality comprising:
Creating an orthogonal plane billboard with soft texture edges representing the object;
Positioning and orienting the object billboard using motion tracker data;
Displaying the object billboard in real-time 3-D to correspond to the location of the object; and
Mixing the resulting image with a live image of the real object.
76. The method of claim 75 in which the movable real-world physical object has a joint connecting two parts, one part of which is tracked and the other part of which is connected to the first part by a joint but which is not separately tracked.
77. The method of claim 1 in which said head-worn display unit (HMD) and a motion tracker are integrated into a self-contained breathing apparatus (SCBA) mask.
78. The method of claim 77, wherein the HMD is non-see-through, and further comprising a head-mounted camera.
79. The method of claim 77 in which the HMD is a see-through display.
80. The method of claim 78, further comprising one or two mirrors to set the camera viewpoint to more closely coincide with the wearer's eye position.
81. The method of claim 78, further comprising a head-mounted camera for generating a stereoscopic view.
82. The method of claim 81, further comprising one or two mirrors to set the camera viewpoints to more closely coincide with the wearer's eye positions.
83. The method of claim 77, further comprising headphones.
84. The method of claim 83, further comprising shafts to connect the headphones to the SCBA, and wherein the shafts are filled with epoxy or other means to strengthen the shafts.
85. The method of claim 80, further comprising a rubber bumper placed around the mirror or mirrors.
86. The method of claim 82, further comprising a rubber bumper placed around the mirror or mirrors.
87. The method of claim 80 wherein each mirror is placed in a mechanical clamp mount.
88. The method of claim 82 wherein each mirror is placed in a mechanical clamp mount.
89. The method of claim 80, further comprising a structure for protecting each mirror from being bumped or hooked.
90. The method of claim 82, further comprising a structure for protecting each mirror from being bumped or hooked.
91. The method of claim 80 wherein each mirror is mounted on a mounting plate.
92. The method of claim 82 wherein each mirror is mounted on a mounting plate.
93. The method of claim 77 wherein the non-augmented reality portion of the user's field of view is blocked with opaque materials from view by the user such that only augmented reality imagery is visible to the user.
94. The method of claim 93 wherein the said opaque materials are selected from the group of materials consisting of tape, foam, plastic, rubber, silicone, paint, and combinations of these materials.
95. The method of claim 1 in which the extinguishing agent device is ruggedized, and comprises: an instrumented firefighter's vari-nozzle with a pattern selector and bail; and a covering device which protects said nozzle and said instrumentation from shock and environmental hazards.
96. The ruggedized instrumented firefighter's vari-nozzle of claim 95 in which said instrumented nozzle is instrumented for measuring said nozzle pattern selector position.
97. The ruggedized instrumented firefighter's vari-nozzle of claim 95 in which said instrumented nozzle is instrumented for measuring said bail angle.
98. The ruggedized instrumented firefighter's vari-nozzle of claim 95 in which said instrumented nozzle instrumentation is for tracking or measuring the position and orientation of said nozzle.
99. The ruggedized instrumented firefighter's vari-nozzle of claim 95 in which said covering device holds said instrumentation such that said instrumentation sits on a floating island to provide shock protection.
100. The ruggedized instrumented firefighter's vari-nozzle of claim 99 in which said island is supported by at least one soft post.
101. The ruggedized instrumented firefighter's vari-nozzle of claim 99 in which said island is protected from lateral shock and motion by a soft barrier.
102. The ruggedized instrumented firefighter's vari-nozzle of claim 95 in which a soft bumper is placed on at least one corner of said covering device to protect said corner from shock.
103. The ruggedized instrumented firefighter's vari-nozzle of claim 95 in which said covering device has a color scheme that indicates that said nozzle is to be used for training purposes.
104. The ruggedized instrumented firefighter's vari-nozzle of claim 95 in which said covering device has signage which indicates that said nozzle is to be used for training purposes.
105. The ruggedized instrumented firefighter's vari-nozzle of claim 95 in which said instrumentation is electronically connected at one location by a single connector.
106. The ruggedized instrumented firefighter's vari-nozzle of claim 105 in which said single connector has soldering posts for connections.
107. The ruggedized instrumented firefighter's vari-nozzle of claim 105 in which said single connector has screw down connections.
108. The ruggedized instrumented firefighter's vari-nozzle of claim 105 in which said single connector has quick-release connections.
109. The ruggedized instrumented firefighter's vari-nozzle of claim 95 in which said covering device also provides protection against water, dirt, and contaminant penetration.
110. The ruggedized instrumented firefighter's vari-nozzle of claim 95 in which all instrumentation connecting wires exit the instrumented vari-nozzle through the nozzle-hose interface.
111. The ruggedized instrumented firefighter's vari-nozzle of claim 95 in which all wires and instrumentation, except those which must be exposed in order to function properly, are hidden from view by said covering device.
112. The ruggedized instrumented firefighter's vari-nozzle of claim 95 further comprising computer equipment not carried by the user and a wireless system to provide an electronic connection for the instrumentation to the computer equipment.
113. The ruggedized instrumented firefighter's vari-nozzle of claim 98 in which said tracking equipment comprises a microphone.
114. The ruggedized instrumented firefighter's vari-nozzle of claim 98 in which said tracking equipment comprises a speaker.
115. The ruggedized instrumented firefighter's vari-nozzle of claim 98 in which said tracking equipment comprises an optical receiver.
116. The method of claim 1 further comprising a ruggedized firefighter's Self-Contained Breathing Apparatus (SCBA) instrumented with electronic and passive equipment, comprising: an instrumented firefighter's SCBA; and a covering device which protects said SCBA and said instrumentation from shock and environmental hazards.
117. The ruggedized firefighter's SCBA of claim 116 in which the instrumentation comprises a Head Mounted Display (HMD).
118. The ruggedized firefighter's SCBA of claim 117 in which the HMD is a non-see-through HMD.
119. The ruggedized firefighter's SCBA of claim 117 in which the HMD is a see-through HMD.
120. The ruggedized firefighter's SCBA of claim 116 in which the instrumentation comprises tracking equipment that measures the position and orientation of the equipment mounted on the SCBA.
121. The ruggedized firefighter's SCBA of claim 116 in which the instrumentation comprises a camera.
122. The ruggedized firefighter's SCBA of claim 121 in which the camera is used to acquire an image of the real world in the user's field of view.
123. The ruggedized firefighter's SCBA of claim 122 further comprising means for shifting the image acquired by the camera.
124. The ruggedized firefighter's SCBA of claim 116 in which the instrumentation comprises an image shifter.
125. The ruggedized firefighter's SCBA of claim 124 in which the image shifter comprises a prism.
126. The ruggedized firefighter's SCBA of claim 124 in which the image shifter comprises a set of two mirrors.
127. The ruggedized firefighter's SCBA of claim 126 in which the mirrors are made of plastic.
128. The ruggedized firefighter's SCBA of claim 126 in which the mirrors are made of glass.
129. The ruggedized firefighter's SCBA of claim 126 in which the mirrors are made of metal.
130. The ruggedized firefighter's SCBA of claim 116 in which the covering has a color scheme that indicates that the SCBA is to be used for training purposes.
131. The ruggedized firefighter's SCBA of claim 116 in which the covering has signage which indicates that the SCBA is to be used for training purposes.
132. The ruggedized firefighter's SCBA of claim 116 in which the instrumentation is connected via a cable to the instrumentation's associated computer equipment at one location by a single connector.
133. The ruggedized firefighter's SCBA of claim 132 in which the single connector has soldering posts for connections.
134. The ruggedized firefighter's SCBA of claim 132 in which the single connector has screw down connections.
135. The ruggedized firefighter's SCBA of claim 132 in which the single connector has quick-release connections.
136. The ruggedized firefighter's SCBA of claim 116 in which the covering device also provides protection against water, dirt, and contaminant penetration.
137. The ruggedized firefighter's SCBA of claim 116 in which all wires and instrumentation are hidden from view by the cover and the SCBA.
138. The ruggedized firefighter's SCBA of claim 124 in which the image shifter comprises one mirror.
139. The ruggedized firefighter's SCBA of claim 138 in which the mirror is made of glass.
140. The ruggedized firefighter's SCBA of claim 138 in which the mirror is made of plastic.
141. The ruggedized firefighter's SCBA of claim 138 in which the mirror is made of metal.
142. The ruggedized firefighter's SCBA of claim 138 in which the image shifter further comprises an electronic means for reversing the image, to achieve the effect that two mirrors have, ensuring that the image is upright and not reversed.
143. The ruggedized firefighter's SCBA of claim 116 in which the covering device defines air holes to provide cooling of the components.
144. The ruggedized firefighter's SCBA of claim 116 further comprising means to use the breathing action of the user to draw air over the electronic equipment to provide cooling of the components.
145. The ruggedized firefighter's SCBA of claim 116 further comprising computer equipment not worn by the user, and a wireless system to provide an electronic connection for the equipment on the SCBA to the computer equipment.
146. The ruggedized firefighter's SCBA of claim 120 in which the tracking equipment comprises a microphone.
147. The ruggedized firefighter's SCBA of claim 120 in which the tracking equipment comprises a speaker.
148. The ruggedized firefighter's SCBA of claim 120 in which the tracking equipment comprises an optical receiver.
149. The ruggedized firefighter's SCBA of claim 122 in which the instrumentation defines two optical paths that are used to produce a stereo simulation to the user.
150. The ruggedized firefighter's SCBA of claim 149 wherein the instrumentation comprises two cameras and two prisms.
PCT/US2002/025065 2001-08-09 2002-08-07 Augmented reality-based firefighter training system and method WO2003015057A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP02752732A EP1423834A1 (en) 2001-08-09 2002-08-07 Augmented reality-based firefighter training system and method
CA002456858A CA2456858A1 (en) 2001-08-09 2002-08-07 Augmented reality-based firefighter training system and method

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US09/927,043 US7110013B2 (en) 2000-03-15 2001-08-09 Augmented reality display integrated with self-contained breathing apparatus
US09/927,043 2001-08-09
US10/123,364 2002-04-16
US10/123,364 US6822648B2 (en) 2001-04-17 2002-04-16 Method for occlusion of movable objects and people in augmented reality scenes

Publications (1)

Publication Number Publication Date
WO2003015057A1 true WO2003015057A1 (en) 2003-02-20

Family

ID=26821473

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2002/025065 WO2003015057A1 (en) 2001-08-09 2002-08-07 Augmented reality-based firefighter training system and method

Country Status (3)

Country Link
EP (1) EP1423834A1 (en)
CA (1) CA2456858A1 (en)
WO (1) WO2003015057A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109117533A (en) * 2018-07-27 2019-01-01 上海宝冶集团有限公司 Electronic workshop fire-fighting method based on BIM combination VR

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5059124A (en) * 1990-08-03 1991-10-22 Masahiro Tsujita Imitation apparatus for fire extinguishing training
US5823784A (en) * 1994-05-16 1998-10-20 Lane; Kerry S. Electric fire simulator
US5660549A (en) * 1995-01-23 1997-08-26 Flameco, Inc. Firefighter training simulator
US5920492A (en) * 1996-04-26 1999-07-06 Southwest Research Institute Display list generator for fire simulation system
US6129552A (en) * 1996-07-19 2000-10-10 Technique-Pedagogie-Securite Equipements Teaching installation for learning and practicing the use of fire-fighting equipment
US5984684A (en) * 1996-12-02 1999-11-16 Brostedt; Per-Arne Method and system for teaching physical skills

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1465115A3 (en) * 2003-03-14 2005-09-21 British Broadcasting Corporation Method and apparatus for generating a desired view of a scene from a selected viewpoint
EP1465115A2 (en) * 2003-03-14 2004-10-06 British Broadcasting Corporation Method and apparatus for generating a desired view of a scene from a selected viewpoint
EP1775556A1 (en) * 2005-10-13 2007-04-18 Honeywell International Inc. Synthetic vision final approach terrain fading
US7719483B2 (en) 2005-10-13 2010-05-18 Honeywell International Inc. Synthetic vision final approach terrain fading
US8135227B2 (en) 2007-04-02 2012-03-13 Esight Corp. Apparatus and method for augmenting sight
US11023092B2 (en) 2007-10-24 2021-06-01 Sococo, Inc. Shared virtual area communication environment based apparatus and methods
US10728144B2 (en) 2007-10-24 2020-07-28 Sococo, Inc. Routing virtual area based communications
US10659511B2 (en) 2007-10-24 2020-05-19 Sococo, Inc. Automated real-time data stream switching in a shared virtual area communication environment
US9618748B2 (en) 2008-04-02 2017-04-11 Esight Corp. Apparatus and method for a dynamic “region of interest” in a display system
US10366514B2 (en) 2008-04-05 2019-07-30 Sococo, Inc. Locating communicants in a multi-location virtual communications environment
US9390503B2 (en) 2010-03-08 2016-07-12 Empire Technology Development Llc Broadband passive tracking for augmented reality
US8610771B2 (en) 2010-03-08 2013-12-17 Empire Technology Development Llc Broadband passive tracking for augmented reality
AT509799B1 (en) * 2010-04-29 2013-01-15 Gerhard Gersthofer EXERCISE ARRANGEMENT WITH FIRE EXTINGUISHER FOR FIRE FIGHTING
US12041103B2 (en) 2011-03-03 2024-07-16 Sococo, Inc. Realtime communications and network browsing client
WO2013111145A1 (en) * 2011-12-14 2013-08-01 Virtual Logic Systems Private Ltd System and method of generating perspective corrected imagery for use in virtual combat training
US11088971B2 (en) 2012-02-24 2021-08-10 Sococo, Inc. Virtual area communications
US11951344B2 (en) 2016-04-19 2024-04-09 KFT Fire Trainer, LLC Fire simulator
US11020624B2 (en) 2016-04-19 2021-06-01 KFT Fire Trainer, LLC Fire simulator
US10948727B2 (en) 2016-05-25 2021-03-16 Sang Kyu MIN Virtual reality dual use mobile phone case
EP3466295A4 (en) * 2016-05-25 2020-03-11 Sang Kyu Min Virtual reality dual use mobile phone case
NO341406B1 (en) * 2016-07-07 2017-10-30 Real Training As Training system
WO2018009075A1 (en) * 2016-07-07 2018-01-11 Real Training As Training system
NO20161132A1 (en) * 2016-07-07 2017-10-30 Real Training As Training system
FR3064801A1 (en) * 2017-03-31 2018-10-05 Formation Conseil Securite FIRE EXTINGUISHING DEVICE MANIPULATION SIMULATOR
WO2020256965A1 (en) * 2019-06-19 2020-12-24 Carrier Corporation Augmented reality model-based fire extinguisher training platform
CN112930061A (en) * 2021-03-04 2021-06-08 贵州理工学院 VR-BOX experiences interactive installation
WO2022194696A1 (en) * 2021-03-14 2022-09-22 Ertc Center System for training in cbrn risks and threats
FR3120731A1 (en) * 2021-03-14 2022-09-16 Ertc Center CBRN RISKS AND THREATS TRAINING SYSTEM

Also Published As

Publication number Publication date
CA2456858A1 (en) 2003-02-20
EP1423834A1 (en) 2004-06-02

Similar Documents

Publication Publication Date Title
US6809743B2 (en) Method of generating three-dimensional fire and smoke plume for graphical display
EP1423834A1 (en) Augmented reality-based firefighter training system and method
Vince Introduction to virtual reality
Vince Essential virtual reality fast: how to understand the techniques and potential of virtual reality
US8195084B2 (en) Apparatus and method of simulating a somatosensory experience in space
CN102540464B (en) Head-mounted display device which provides surround video
US20020191004A1 (en) Method for visualization of hazards utilizing computer-generated three-dimensional representations
US20030210228A1 (en) Augmented reality situational awareness system and method
US7046214B2 (en) Method and system for accomplishing a scalable, multi-user, extended range, distributed, augmented reality environment
WO2007133209A1 (en) Advanced augmented reality system and method for firefighter and first responder training
US20060114171A1 (en) Windowed immersive environment for virtual reality simulators
JP2007229500A (en) Method and apparatus for immersion of user into virtual reality
AU2002366994A1 (en) Method and system to display both visible and invisible hazards and hazard information
Vichitvejpaisal et al. Firefighting simulation on virtual reality platform
Hatsushika et al. Underwater VR experience system for scuba training using underwater wired HMD
Wilson et al. Design of monocular head-mounted displays for increased indoor firefighting safety and efficiency
SE523098C2 (en) Milieu creation device for practising e.g. a sport includes stimuli generation with optical positioning system
Rodrigue et al. Mixed reality simulation with physical mobile display devices
AU2002355560A1 (en) Augmented reality-based firefighter training system and method
USRE45525E1 (en) Apparatus and method of simulating a somatosensory experience in space
Segura et al. Interaction and ergonomics issues in the development of a mixed reality construction machinery simulator for safety training
Schoor et al. Elbe Dom: 360 Degree Full Immersive Laser Projection System.
Sibert et al. Initial assessment of human performance using the gaiter interaction technique to control locomotion in fully immersive virtual environments
Silverman The Rule of 27s: A Comparative Analysis of 2D Screenspace and Virtual Reality Environment Design
Gupta et al. Training in virtual environments

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG UZ VN YU ZA ZM ZW

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BY BZ CA CH CN CO CR CU CZ DE DM DZ EC EE ES FI GB GD GE GH HR HU ID IL IN IS JP KE KG KP KR LC LK LR LS LT LU LV MA MD MG MN MW MX MZ NO NZ OM PH PL PT RU SD SE SG SI SK SL TJ TM TN TR TZ UA UG UZ VN YU ZA ZM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LU MC NL PT SE SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ UG ZM ZW AM AZ BY KG KZ RU TJ TM AT BE BG CH CY CZ DK EE ES FI FR GB GR IE IT LU MC PT SE SK TR BF BJ CF CG CI GA GN GQ GW ML MR NE SN TD TG

121 EP: The EPO has been informed by WIPO that EP was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2002355560

Country of ref document: AU

WWE Wipo information: entry into national phase

Ref document number: 2456858

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: 2002752732

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 2002752732

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP