US20230215070A1 - Facial activity detection for virtual reality systems and methods - Google Patents
- Publication number: US20230215070A1
- Application number: US 18/092,728
- Authority: US (United States)
- Prior art keywords: virtual reality, rider, facial, data, virtual
- Prior art date
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T 13/40 — 3D animation of characters, e.g. humans, animals or virtual beings
- G06T 13/205 — 3D animation driven by audio data
- G06T 19/006 — Mixed reality
- A63G 31/16 — Amusement arrangements creating illusions of travel
- G06V 20/20 — Scene-specific elements in augmented reality scenes
- G06V 40/165 — Face detection, localisation, normalisation using facial parts and geometric relationships
- G06V 40/171 — Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
- G06V 40/176 — Dynamic facial expression recognition
- G06V 40/20 — Movements or behaviour, e.g. gesture recognition
- G10L 2015/025 — Phonemes, fenemes or fenones being the recognition units
- G10L 2021/105 — Synthesis of the lips movements from speech, e.g. for talking heads
Definitions
- the present disclosure generally relates to virtual reality systems and, more particularly, to virtual reality (VR) systems that incorporate facial activity detection to facilitate providing a more immersive user experience.
- Amusement parks often contain attractions or experiences that use virtual reality systems to provide enjoyment and entertain guests of the amusement parks.
- the attractions may include themed environments established using display devices presenting media content (e.g., in the form of video, text, still imagery, motion graphics, or a combination thereof).
- it may be desirable to display media content with special visual effects to create a realistic and/or immersive viewing or playing experience for guests.
- attractions may be implemented and/or operated to present virtual reality content to guests.
- a virtual reality ride system includes a display to present virtual reality image content to a first rider, an audio sensor to capture audio data associated with a second rider, and an image sensor to capture image data associated with the second rider.
- the virtual reality ride system also includes at least one processor communicatively coupled to the display and configured to receive the audio data, the image data, or both.
- the at least one processor is also configured to generate a virtual avatar corresponding to the second rider, wherein the virtual avatar includes a set of facial features.
- the at least one processor is also configured to update the set of facial features based on the audio data, the image data, or both and instruct the display to present the virtual reality image content including the virtual avatar and the updated set of facial features.
- in an embodiment, a virtual reality device includes an audio sensor to capture audio data indicative of speech of a user and an image sensor to capture image data indicative of facial characteristics of the user.
- the virtual reality device also includes at least one processor communicatively coupled to the audio sensor and the image sensor.
- the at least one processor determines a set of facial characteristics based on the image data, determines a set of facial movements associated with the set of facial characteristics based on the audio data, and transmits the set of facial characteristics and the set of facial movements to a second virtual reality device, the second virtual reality device configured to display virtual reality image content based on the set of facial characteristics and the set of facial movements.
- a method includes receiving audio data, image data, or both, generating a virtual avatar based on the image data, the virtual avatar including a set of facial features, and determining a set of facial characteristics associated with the image data.
- the method also includes comparing the set of facial characteristics with a set of facial gesture profiles, each facial gesture profile of the set of facial gesture profiles including a corresponding set of stored facial characteristics.
- the method also includes selecting, based on the comparison, a facial gesture profile of the set of facial gesture profiles, animating the set of facial features based on the selected facial gesture profile, the audio data, or both, and presenting virtual reality image content including the virtual avatar and the animated set of facial features.
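The claimed method steps above can be sketched as a minimal, runnable pipeline. Every helper here is a hypothetical placeholder (the patent does not specify these algorithms); the profile selection simply counts matching feature values as a stand-in for the claimed comparison step.

```python
# Hypothetical sketch of the claimed method's step ordering; function names
# and feature keys are invented for illustration, not taken from the patent.

def generate_avatar(image_data):
    # Placeholder: an avatar is a dict of default facial features.
    return {"mouth": "neutral", "eyebrows": "neutral"}

def extract_characteristics(image_data):
    # Placeholder: pretend the image data already carries measured features.
    return image_data

def select_profile(characteristics, profiles):
    # Stand-in for the claimed comparison: pick the stored facial gesture
    # profile sharing the most feature values with the captured ones.
    def overlap(stored):
        return sum(1 for k, v in stored.items() if characteristics.get(k) == v)
    return max(profiles, key=lambda name: overlap(profiles[name]))

def method_step(image_data, profiles):
    # Receive image data -> generate avatar -> determine characteristics ->
    # compare/select a gesture profile -> animate the avatar's features.
    avatar = generate_avatar(image_data)
    characteristics = extract_characteristics(image_data)
    chosen = select_profile(characteristics, profiles)
    animated = dict(avatar)
    animated["gesture"] = chosen
    return animated
```

The returned avatar would then be composited into the presented virtual reality image content.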
- FIG. 1 is a block diagram of a virtual reality ride system including a virtual reality device, in accordance with an embodiment of the present disclosure
- FIG. 2 is an example of the virtual reality device of FIG. 1 , in accordance with an embodiment of the present disclosure
- FIG. 3 is an example of multiple virtual reality devices of FIG. 1 , in accordance with an embodiment of the present disclosure.
- FIG. 4 is a flow diagram of an example process for operating the virtual reality ride system of FIG. 1 , in accordance with an embodiment of the present disclosure.
- references to “one embodiment” or “an embodiment” of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features.
- “viseme” refers to a shape and/or configuration of facial features such as the mouth, lips, and/or tongue when making a corresponding sound.
- phoneme refers to a distinct unit of sound in spoken language that facilitates distinguishing between different spoken words.
- the present disclosure relates generally to virtual reality systems. More particularly, the present disclosure relates to virtual reality systems incorporating facial activity detection to facilitate providing a more immersive user experience.
- Amusement parks often contain attractions or experiences that use virtual reality systems to provide enjoyment and entertain guests of the amusement parks.
- the attractions may include any type of ride system that is designed to entertain a passenger, such as an attraction that includes a ride vehicle that travels along a path, an attraction that includes a room or theatre with stationary or moving seats for passengers to sit in while the passengers watch a video, an attraction that includes a pathway for guests to travel along, a room for guests to explore, or the like.
- the disclosed embodiments generally discuss virtual reality systems that are used for entertainment purposes, the disclosed embodiments may also apply to virtual reality systems that are used for any other suitable purpose.
- a rider on a virtual reality ride system may experience virtual reality image content that fails to resemble the surroundings (e.g., other riders, weather conditions, landscape, and so forth); when the virtual reality image content does not match the rider's expected view, the ride experience may be affected (e.g., reduced and/or degraded).
- the rider may expect to see the other rider depicted in the virtual reality display (e.g., head-mounted display).
- the virtual reality image content presented on the display may not match the other rider's physical characteristics, gestures, facial features, and so forth. As such, a mismatch between the rider's expected view and the virtual reality image content may affect the ride experience.
- a virtual reality ride system may generate virtual reality image content based at least in part on characteristics of other riders and/or guests and/or based at least in part on characteristics of physical (e.g., actual and/or real) movement of a ride vehicle, and thus, a rider carried by the vehicle.
- a virtual reality ride system may present virtual reality image content to a rider of a ride vehicle such that virtual reality image content is coordinated with physical (e.g., real and/or actual) characteristics of other riders.
- the virtual reality ride system may generate and display virtual reality image content that includes virtual avatars with similar facial characteristics (e.g., mouth, nose, eyes, and so forth), similar facial movement (e.g., open mouth, raised eyebrows, furrowed brow, and so forth), similar facial gestures (e.g., smile, frown, excitement, and so forth) and that results in visually perceived images occurring at approximately the same time and for approximately the same duration.
- virtual avatar refers to a graphical representation (e.g., a virtual representation) of a character (e.g., a rider of a virtual reality ride system and/or a guest of an amusement park attraction or experience) in a graphical environment (e.g., a virtual reality environment, a mixed reality environment, an augmented reality environment, and so forth).
- a virtual reality ride system may include one or more image sensors.
- a rider may view a display (e.g., a head-mounted display) that includes a camera facing the rider that is implemented and/or operated to sense (e.g., capture images of) physical characteristics of the rider, such as facial features, facial movement, facial gestures, movement of limbs, and so forth.
- a virtual reality ride system may coordinate presentation of virtual reality image content with physical characteristics of the rider to other riders and/or guests at approximately the same time as image data indicative of the physical characteristics is determined (e.g., sensed and/or captured).
- a rider on a virtual reality ride system may speak to another rider.
- an avatar of a rider speaking in the virtual reality image content may fail to resemble a speaking character; when the virtual reality image content does not match the rider's expected view, the ride experience may be affected (e.g., reduced and/or degraded).
- a virtual reality ride system may generate virtual reality image content based at least in part on captured speech of riders and/or guests.
- the virtual reality ride system may generate and provide audio content that corresponds to captured speech of riders and may generate and display virtual reality image content that includes virtual avatars with similar mouth movement to speaking riders that results in visually perceived images occurring at approximately the same time and for approximately the same duration.
- a virtual reality ride system may include one or more audio sensors (e.g., microphones).
- the virtual reality ride system may analyze the sensed speech to determine text based on the captured speech and/or to determine facial movements based on the captured speech and/or determined text.
- a virtual reality ride system may coordinate presentation of audio content and/or virtual reality image content with captured speech of riders at approximately the same time as the audio data indicative of the speech is sensed (e.g., captured and/or detected).
- visual stimuli are perceived by a human's visual system.
- changes in perceived visual stimuli over time may enable a human to detect motion (e.g., movement).
- for example, if a perceived visual stimulus is translated leftward over time, the human may perceive (e.g., determine and/or detect) that he/she is moving right relative to the perceived visual stimulus, or vice versa.
- similarly, if a perceived visual stimulus is translated upward over time, the human may perceive that he/she is moving downward relative to the perceived visual stimulus, or vice versa.
- Movement of a human may additionally or alternatively be perceived by the human's vestibular system (e.g., inner ear).
- movement of a human may be perceived by the human's vestibular system as well as by the human's visual system.
- a mismatch between the movement perceived by the human's vestibular system and the movement perceived by the human's visual system may result in the human experiencing motion sickness.
- a rider on a virtual reality ride system may experience motion sickness, which affects (e.g., reduces and/or degrades) the ride experience, when visually perceived movement does not match movement perceived by the rider's vestibular system.
- a ride vehicle may carry a rider through a ride environment of a virtual reality ride system and, thus, movement of the rider may be dependent at least in part on movement of the ride vehicle.
- a virtual reality ride system may coordinate virtual reality content with physical ride vehicle movement.
- the virtual reality ride system may display virtual reality image content that is expected to result in characteristics, such as magnitude, time, duration, and/or direction, of visually perceived movement matching corresponding characteristics of movement perceived by the rider's vestibular system.
- a virtual reality ride system may present virtual reality image content to a rider of a ride vehicle such that movement perceived from the virtual reality content is coordinated with physical (e.g., real and/or actual) movement of the ride vehicle.
- the virtual reality ride system may generate and display virtual reality image content that results in visually perceived movement occurring at approximately the same time, for approximately the same duration, and/or in approximately the same direction as the physical movement of the ride vehicle.
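One way to read the coordination described above is that the visually presented displacement should start at the same time, last the same duration, and point in the same direction as the physical vehicle movement. The sketch below assumes a single movement segment and linear interpolation; the `MovementSegment` fields and the interpolation scheme are illustrative assumptions, not the patent's method.

```python
from dataclasses import dataclass

@dataclass
class MovementSegment:
    """Hypothetical description of one physical vehicle movement:
    when it starts, how long it lasts, which way it points, how far it goes."""
    start_s: float
    duration_s: float
    direction: tuple   # unit vector, e.g. (1.0, 0.0, 0.0)
    magnitude_m: float

def camera_offset_at(t: float, segment: MovementSegment) -> tuple:
    # Linearly interpolate the visually presented displacement so it matches
    # the time, duration, direction, and magnitude of the physical movement.
    if t <= segment.start_s:
        frac = 0.0
    elif t >= segment.start_s + segment.duration_s:
        frac = 1.0
    else:
        frac = (t - segment.start_s) / segment.duration_s
    return tuple(d * segment.magnitude_m * frac for d in segment.direction)
```

A renderer could query `camera_offset_at` each frame so the perceived visual motion tracks the vestibular cue.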
- the virtual reality ride system may generate movement-coordinated virtual reality content by adapting (e.g., adjusting) default virtual reality content, for example, which corresponds with a default (e.g., stationary and/or planned) ride vehicle movement profile.
- a virtual reality ride system may include one or more sensors, such as a vehicle sensor, a rider (e.g., head-mounted display) sensor, and/or an environment sensor.
- a ride vehicle may include one or more vehicle sensors, such as a gyroscope and/or accelerometer, which are implemented and/or operated to sense (e.g., measure and/or determine) characteristics of ride vehicle movement, such as movement time, movement duration, movement direction (e.g., orientation), and/or movement magnitude (e.g., distance).
- a virtual reality ride system may coordinate presentation of virtual reality content with ride vehicle movement at least in part by presenting movement-coordinated virtual reality content at approximately the same time as sensor data indicative of occurrence of the ride vehicle movement is determined (e.g., sensed and/or measured).
- generation and/or presentation (e.g., display) of virtual reality content is generally non-instantaneous.
- reactively generating and/or presenting virtual reality content may result in presentation of virtual reality content being delayed relative to another rider's movement and/or corresponding ride vehicle movement.
- reactively generating and/or presenting virtual reality image content may result in the virtual reality image content being displayed after the other rider's movement and/or corresponding ride vehicle movement has already occurred, which, at least in some instances, may result in a reduced and/or degraded rider experience.
- a virtual reality ride system may predict characteristics, such as movement time, movement duration, movement direction, and/or movement magnitude, of the ride vehicle movement and/or riders in the ride vehicle over a prediction horizon (e.g., subsequent period of time).
- the virtual reality ride system may determine a predicted ride vehicle movement profile (e.g., trajectory) over the prediction horizon and/or a predicted rider movement profile (e.g., facial gesture, movement, and so forth) over the prediction horizon.
- the predicted rider movement profile may indicate that a corresponding rider raises their arms from a first time to a second (e.g., subsequent) time, smiles from the second time to a third (e.g., subsequent) time, laughs from the third time to a fourth (e.g., subsequent) time, and so forth.
- the predicted ride vehicle movement profile may indicate that a corresponding ride vehicle moves a first distance (e.g., magnitude) in a first direction from a first time to a second (e.g., subsequent) time, a second distance in a second direction from the second time to a third (e.g., subsequent) time, and so on.
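A predicted rider or vehicle movement profile of the kind described in the two bullets above can be represented as an ordered timeline of segments, each covering a time span with a predicted action or movement. This is a minimal sketch under that assumption; the tuple layout and example actions are illustrative.

```python
# A profile is a list of (start, end, payload) tuples ordered by start time;
# the payload stands in for a predicted facial gesture, movement, or motion.
def profile_entry_at(profile, t):
    """Return the predicted entry covering time t over the prediction
    horizon, or None if t falls outside every segment."""
    for start, end, payload in profile:
        if start <= t < end:
            return payload
    return None

# Example rider profile: raise arms, then smile, then laugh (times invented).
rider_profile = [
    (0.0, 2.0, "raise_arms"),
    (2.0, 3.5, "smile"),
    (3.5, 5.0, "laugh"),
]
```

Content generation could look up the predicted entry slightly ahead of real time to hide the non-instantaneous rendering latency the disclosure mentions.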
- the techniques described in the present disclosure may facilitate coordinating virtual reality image content based on physical characteristics of riders, the ride vehicle, and/or captured speech, which, at least in some instances, may facilitate improving the ride experience provided by the virtual reality ride system.
- FIG. 1 illustrates an example of a virtual reality ride system 100 including a virtual reality device 102 (e.g., head-mounted display device), any number of environment actuators 122 , and any number of ride vehicles 124 .
- the virtual reality ride system 100 may be used to provide visual effects to a display 112 during an amusement park attraction and/or experience.
- the virtual reality device 102 may be provided in the form of a computing device, such as a head-mounted display device, programmable logic controller (PLC), a personal computer, a laptop, a tablet, a mobile device (e.g., a smart phone), a server, or any other suitable computing device.
- the virtual reality device 102 may control operation of any number of image sensors 110 , any number of audio sensors 114 , and the display 112 and may process data received from the image sensors 110 , audio sensors 114 , environment actuators 122 , vehicle sensors 132 , and/or vehicle actuators 134 .
- the virtual reality device 102 may include the image sensors 110 , the display 112 , the audio sensors 114 , the speakers 116 , and an antenna 118 .
- An automation controller 104 may be coupled to the image sensors 110 , the audio sensors 114 , the display 112 , the antenna 118 , the environment actuators 122 , and/or the ride vehicles 124 by any suitable techniques for communicating data and control signals between the automation controller 104 , the components of the virtual reality device 102 , the environment actuators 122 , and/or the ride vehicles 124 , such as a wireless, optical, coaxial, or other suitable connection.
- the virtual reality device 102 may include a control system having multiple controllers, such as the automation controller 104 , each having at least one processor 106 and at least one memory 108 .
- the virtual reality device 102 may represent a unified hardware component or an assembly of separate components integrated through communicative coupling (e.g., wired or wireless communications). It should be noted that, in some embodiments, the virtual reality device 102 may include additional illustrated components of the virtual reality ride system 100 .
- the virtual reality device 102 may include the vehicle sensors 132 and/or a vehicle controller 126 and may be operable to communicate with additional virtual reality devices.
- the automation controller 104 may use information from the image sensors 110 , the audio sensors 114 , the environment actuators 122 , and/or the ride vehicles 124 to generate and/or update virtual reality image content and to control operation of the display 112 to present the virtual reality image content.
- the virtual reality device 102 may include communication features (e.g., the antenna 118 ) that facilitate communication with other devices (e.g., external sensors, additional virtual reality devices 102 ) to provide additional data for use by the virtual reality device 102 .
- the virtual reality device 102 may operate to communicate with external cameras and/or audio sensors to facilitate image data and/or audio data capture for an amusement park attraction or experience, guest interaction, and so forth.
- the memory 108 may include one or more tangible, non-transitory, computer-readable media that store instructions executable by the processor 106 (representing one or more processors) and/or data to be processed by the processor 106 .
- the memory 108 may include random access memory (RAM), read only memory (ROM), rewritable non-volatile memory, such as flash memory, hard drives, optical discs, and/or the like.
- the processor 106 may include one or more general purpose microprocessors, one or more application specific processors, one or more application specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), or any combination thereof.
- the memory 108 may store sensor data and/or information obtained via the image sensors 110 , the audio sensors 114 , the environment actuators 122 , and/or the ride vehicles 124 , virtual reality image content data generated, transmitted, and/or displayed via the display 112 , and/or algorithms utilized by the processor 106 to help control operations of components of the virtual reality ride system 100 based on the sensor data and/or virtual reality image content data. Additionally, the processor 106 may process the sensor data and/or information to generate virtual reality image content data for a virtual avatar for display on the display 112 or another display of another virtual reality device. In certain embodiments, the virtual reality device 102 may include additional elements not shown in FIG. 1 , such as additional data acquisition and processing controls, additional sensors and displays, user interfaces, and so forth.
- the image sensors 110 may be incorporated into the virtual reality device 102 and may be capable of capturing images and/or video of a rider 120 .
- the virtual reality device 102 may be a head-mounted display device worn on the head of the rider 120 and the image sensors 110 may capture any number of images of the rider 120 .
- the image sensors 110 may capture facial features (e.g., eyes, nose, mouth, lips, chin, eyebrows, ears, and so forth) of the rider 120 .
- the image sensors 110 may generate and/or may transmit image data corresponding to the captured images to the automation controller 104 .
- the image sensors 110 may include any number of cameras, such as any number of video cameras, any number of depth cameras capable of determining depth and distance to facial features and/or between facial features, any number of infrared cameras, any number of digital cameras, and so forth.
- the image sensors 110 may process the image data before transmission to the automation controller 104 .
- the image sensors 110 may transmit raw image data to the automation controller 104 .
- the image sensors 110 may be capable of tracking a gaze of the rider 120 .
- the image sensors 110 may determine a direction the rider 120 is looking.
- the memory 108 may store facial gesture profiles associated with a number of facial gestures.
- each facial gesture profile may correspond to a different facial gesture, such as smiling, blinking, frowning, yawning, and so forth.
- the automation controller 104 may compare the captured image data from the image sensors 110 to the stored facial gesture profiles and may determine that the captured image data is similar to (e.g., matches within a similarity threshold) a stored facial gesture profile.
- the automation controller 104 may compare a position, an orientation, a movement, and/or a shape of any number of facial features depicted in the image data to the stored facial gesture profiles.
- the automation controller 104 may determine a stored facial gesture profile that corresponds to captured images of the rider 120 .
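The threshold-based matching described above might look like the following sketch. The feature names, the mean-absolute-difference metric, and the threshold value are all assumptions made for illustration; the patent does not specify a particular similarity measure.

```python
def match_gesture(captured: dict, profiles: dict, threshold: float = 0.15):
    """Return the name of the stored facial gesture profile whose feature
    values are within `threshold` (mean absolute difference over shared
    features) of the captured values, or None if nothing is similar enough."""
    best_name, best_err = None, float("inf")
    for name, stored in profiles.items():
        keys = set(captured) & set(stored)
        if not keys:
            continue
        err = sum(abs(captured[k] - stored[k]) for k in keys) / len(keys)
        if err < best_err:
            best_name, best_err = name, err
    return best_name if best_err <= threshold else None
```

Returning None when no profile is close enough lets the controller fall back to directly animating the raw facial features instead of a canned gesture.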
- the display 112 may be capable of depicting image content (e.g., still image, video, visual effects) to be viewed by one or more riders 120 of the virtual reality ride system 100 and/or guests of an amusement park attraction and/or experience.
- the display 112 may be a head-mounted display and may be placed or worn on the head of a rider 120 and the display 112 may be placed in front of either one or both eyes of a rider 120 .
- the display 112 may be capable of depicting virtual reality image content including a virtual avatar (e.g., avatar) of other riders of the virtual reality ride system 100 and/or guests of the amusement park attraction and/or experience.
- the virtual reality image content may include more than one virtual avatar and may depict image content associated with the amusement park attraction and/or experience.
- an amusement park ride may appear to take place on horseback travelling through a forest, on a motorcycle travelling along the road, in a haunted house, and so forth.
- the audio sensors 114 may also be incorporated into the virtual reality device 102 and may be capable of capturing speech and/or sounds of the rider 120 .
- the audio sensors 114 may include microphones and may be positioned on the virtual reality device 102 adjacent or proximate to the mouth of the rider 120 wearing the virtual reality device 102.
- the audio sensors 114 may generate and/or may transmit audio data corresponding to the captured speech and/or sounds to the automation controller 104 .
- the audio sensors 114 may process the audio data before transmission to the automation controller 104 .
- the audio sensors 114 may transmit raw audio data to the automation controller 104 .
- the virtual reality device 102 may include any number of audio playback components, such as one or more speakers 116 , to playback audio content associated with the virtual reality experience.
- the speakers 116 may playback audio corresponding to sounds of a horse during a virtual horseback ride, sounds of a motorcycle during a virtual motorcycle ride, and so forth.
- the speakers 116 may playback audio content based on received audio data from other virtual reality devices 102 .
- virtual reality devices 102 worn by other riders of the virtual reality ride system 100 may capture audio data associated with the other riders (e.g., speech, sounds, and so forth) via audio sensors 114 , as described herein.
- the virtual reality devices 102 may transmit the captured audio data to any number of additional virtual reality devices 102 for playback of the captured audio data via the audio playback components.
- the automation controller 104 may generate and/or update virtual reality image content based on the audio data.
- the automation controller 104 may determine any number of phonemes associated with the audio data.
- the automation controller 104 may determine a sequence of phonemes based on captured speech of a rider 120 of the virtual reality ride system 100 .
- the sequence of phonemes may include an order of phonemes (e.g., first to last) corresponding to when the sounds were made by the rider 120 .
- the automation controller 104 may determine a corresponding sequence of visemes based on the sequence of phonemes and/or the audio data.
- the automation controller 104 may determine facial movements (e.g., position and/or shape of the mouth) based on the visemes and may alter the facial features of a virtual avatar corresponding to the rider 120 based on the visemes. Accordingly, the automation controller 104 may generate and/or update the virtual reality image content to display facial movements of the virtual avatar corresponding to the captured speech of the rider 120 .
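The phoneme-to-viseme step above can be illustrated with a small lookup table. This is a sketch under assumptions: the phoneme symbols (ARPAbet-style), the viseme labels, and the duplicate-collapsing rule are illustrative; production lip-sync systems map a full phoneme inventory (~40 phonemes) onto a smaller set of mouth shapes (~12-15 visemes).

```python
# Illustrative phoneme-to-viseme table; unlisted phonemes fall back to "neutral".
PHONEME_TO_VISEME = {
    "HH": "open", "AH": "open", "AA": "open",
    "P": "closed", "B": "closed", "M": "closed",
    "L": "tongue_up", "OW": "rounded", "UW": "rounded",
}

def visemes_for(phonemes):
    """Map an ordered phoneme sequence (first to last, matching when the
    rider made each sound) to the mouth shapes the avatar should display,
    collapsing consecutive duplicates so the rig is not re-posed needlessly."""
    out = []
    for p in phonemes:
        v = PHONEME_TO_VISEME.get(p, "neutral")
        if not out or out[-1] != v:
            out.append(v)
    return out
```

The resulting viseme sequence would then drive the mouth features of the virtual avatar in time with audio playback.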
- the automation controller 104 may analyze the audio data using natural language processing to determine text associated with corresponding captured speech of the rider 120 .
- the automation controller 104 may generate and/or update the virtual reality image content based on the determined text.
- the automation controller 104 may generate and/or animate a rigged model of a virtual avatar based on the determined text.
- the rigged model may include a number of movable features, such as facial features, and the automation controller 104 may animate the movable features based on the captured audio data and/or the determined text.
- the antenna 118 may transmit data to additional virtual reality devices 102 and/or receive data from the additional virtual reality devices 102 via, for example, a network or a direct connection.
- the antenna 118 may receive image data corresponding to images of other riders 120 and/or audio data corresponding to speech and/or sounds of the other riders 120 from additional virtual reality devices 102 .
- the antenna 118 may be communicatively coupled to the automation controller 104 and may transmit data received from other virtual reality devices 102 to the automation controller 104 for processing. Additionally, or alternatively, the antenna 118 may receive image data and/or audio data from the automation controller 104 and may transmit the image data and/or audio data to additional virtual reality devices 102 .
- the antenna 118 may be representative of any of various communication devices (e.g., wired or wireless transmitters and/or receivers).
- the virtual reality ride system 100 may be deployed at an amusement park, a theme park, a carnival, a fair, and/or the like. Additionally, in some embodiments, the virtual reality ride system 100 may be a roller coaster ride system, a lazy river ride system, a log flume ride system, a boat ride system, or the like. However, it should be appreciated that the depicted example is merely intended to be illustrative and not limiting.
- the virtual reality device 102 may be fully included in one or more ride vehicles 124 . Additionally, or alternatively, in other embodiments, any components of the virtual reality device 102 may be remote from the one or more ride vehicles 124 and/or the one or more riders 120 .
- a ride vehicle 124 may generally be implemented and/or operated to carry (e.g., support) one or more riders 120 (e.g., users) through the ride environment of the virtual reality ride system 100 . Accordingly, physical (e.g., actual and/or real) movement (e.g., motion) of a rider 120 in the ride environment may generally be dependent on physical movement of the ride vehicle 124 carrying the rider.
- the ride vehicle may include one or more vehicle actuators 134 .
- the vehicle actuators 134 may include pneumatics, hydraulics, an engine, a motor, and/or a brake that enables controlling movement speed of the ride vehicle 124 .
- the vehicle actuators 134 may include a steering wheel and/or a rudder that enables controlling movement direction of the ride vehicle 124 .
- the ride vehicle 124 may additionally or alternatively include one or more haptic vehicle actuators implemented and/or operated to present virtual reality tactile content.
- one or more environment actuators 122 may be implemented and/or operated to move the ride vehicle 124 .
- the environment actuators 122 may include pneumatics, hydraulics, an engine, a motor, and/or a brake to move the ride vehicle 124 through a ride environment.
- the ride vehicle 124 may also include one or more vehicle sensors 132 to detect (e.g., sense and/or measure) sensor data indicative of any number of movement characteristics of the ride vehicle 124 , such as orientation of the ride vehicle 124 , location of the ride vehicle 124 , movement profile of the ride vehicle 124 , speed of the ride vehicle 124 , acceleration (e.g., accelerating or decelerating) of the ride vehicle 124 , and so forth.
- the ride vehicle 124 may include an accelerometer and/or a gyroscope to detect speed, acceleration, and/or orientation of the ride vehicle 124 .
- the one or more vehicle sensors 132 may generate and/or transmit the sensor data to the vehicle controller 126 and/or the automation controller 104 .
- the vehicle controller 126 may receive the vehicle sensor data and may determine a current and/or past orientation of the ride vehicle 124 , a current and/or past location of the ride vehicle 124 , a current and/or past speed of the ride vehicle 124 , a current and/or past acceleration of the ride vehicle 124 , current and/or past movement characteristics of the ride vehicle 124 , and so forth.
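As a sketch of how a controller might reduce raw accelerometer samples to a movement characteristic such as speed, the following uses simple forward-Euler integration. The function name, the fixed sample interval, and the omission of gyroscope fusion and drift correction are all simplifying assumptions, not the document's method.

```python
def integrate_speed(accel_samples, dt, v0=0.0):
    """Estimate speed over time from accelerometer samples (m/s^2) taken
    at a fixed interval dt (s), starting from initial speed v0 (m/s).
    Returns the speed after each sample, including the initial value."""
    speeds = [v0]
    for a in accel_samples:
        # Forward-Euler step: new speed = old speed + acceleration * dt.
        speeds.append(speeds[-1] + a * dt)
    return speeds
```

A real vehicle controller would typically fuse accelerometer and gyroscope data and periodically re-anchor the estimate against known track positions to limit integration drift.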
- the vehicle controller 126 may transmit the movement characteristics associated with the ride vehicle 124 to the automation controller 104 .
- the vehicle controller 126 may generate and/or may transmit the vehicle sensor data to the automation controller 104 and the automation controller 104 may process the vehicle sensor data to determine the movement characteristics associated with the ride vehicle 124 based on the vehicle sensor data.
- the automation controller 104 may generate and/or update the virtual reality image content based on the movement characteristics associated with the ride vehicle 124 .
- the automation controller 104 may alter an orientation and/or a position of any number of virtual avatars (e.g., a virtual representation) corresponding to any number of riders of the virtual reality ride system 100 based on the movement characteristics.
- the automation controller 104 may determine that the ride vehicle 124 is decelerating. As such, the automation controller 104 may alter an orientation of a virtual avatar corresponding to a rider 120 to show the virtual avatar leaning forward due to the deceleration. Additionally, or alternatively, the automation controller 104 may generate and/or update facial poses and/or gestures based on the movement characteristics of the ride vehicle 124 .
- the automation controller 104 may determine predicted facial poses and/or gestures based on the movement characteristics of the ride vehicle 124 . For example, the automation controller 104 may predict a surprised face (e.g., raised eyebrows, open mouth) based on acceleration of the ride vehicle 124 and may alter the facial pose of the virtual avatar accordingly to display the surprised face.
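The predicted-pose rule above can be sketched as a simple threshold on longitudinal acceleration. The threshold value, the pose labels, and the rule itself are illustrative assumptions; an actual system could use a learned model or a richer set of movement characteristics.

```python
def predict_facial_pose(longitudinal_accel, threshold=4.0):
    """Predict a facial pose label from the ride vehicle's longitudinal
    acceleration (m/s^2). Thresholds and labels are illustrative only."""
    if longitudinal_accel >= threshold:
        return "surprised"   # e.g., raised eyebrows, open mouth
    if longitudinal_accel <= -threshold:
        return "braced"      # e.g., leaning forward under deceleration
    return "neutral"
```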
- the virtual reality device 102 may also include one or more sensors to detect (e.g., sense and/or measure) sensor data indicative of any number of movement characteristics of the rider 120 , such as orientation of the rider 120 , a location of the rider 120 , a pose of the rider 120 , speed of the rider 120 , acceleration of the rider 120 , and so forth.
- the virtual reality device 102 may include an accelerometer to detect the rider sensor data and may transmit the sensor data to the automation controller 104 .
- the automation controller 104 may receive the rider sensor data and may determine a current and/or past orientation of the rider 120 , a current and/or past location of the rider 120 , a current and/or past pose of the rider 120 , a current and/or past speed of the rider 120 , a current and/or past acceleration of the rider 120 , and so forth. Additionally, or alternatively, the automation controller 104 may generate and/or may transmit the rider sensor data and/or the determined movement characteristics to any number of additional virtual reality devices 102 associated with other riders of the virtual reality ride system 100 . Additionally, or alternatively, the virtual reality device 102 may receive rider sensor data for any number of riders 120 of the virtual reality ride system 100 .
- the automation controller 104 may generate and/or update the virtual reality image content based on the movement characteristics associated with the rider 120 .
- the automation controller 104 may alter an orientation and/or a position of the virtual avatar corresponding to a rider of the virtual reality ride system based on the movement characteristics. For example, the automation controller 104 may determine that the rider has turned their head. As such, the automation controller 104 may alter the orientation of the head of the virtual avatar corresponding to the rider to show the virtual avatar turned in the same direction.
- the automation controller 104 and/or the vehicle controller 126 may receive the vehicle sensor data and the rider sensor data and may determine relative movement characteristics of the rider 120 relative to the ride vehicle 124 .
- the automation controller 104 and/or the vehicle controller 126 may determine the orientation of the rider 120 relative to the ride vehicle 124 , the position of the rider 120 relative to the ride vehicle 124 , the speed of the rider 120 relative to the ride vehicle 124 , the acceleration of the rider 120 relative to the ride vehicle 124 , and/or vice versa.
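The relative movement determination above amounts to subtracting the vehicle's motion from the rider's. The following sketch assumes 2-D state dictionaries with `position`, `velocity`, and `heading` fields; those field names and the wrap-around heading arithmetic are illustrative, not from the specification.

```python
def relative_motion(rider, vehicle):
    """Compute the rider's motion relative to the ride vehicle by
    component-wise subtraction. Each state is a dict with 'position' and
    'velocity' as 2-D tuples and 'heading' in degrees (names assumed)."""
    return {
        "position": tuple(r - v for r, v in zip(rider["position"], vehicle["position"])),
        "velocity": tuple(r - v for r, v in zip(rider["velocity"], vehicle["velocity"])),
        # Heading difference wrapped into [0, 360).
        "heading": (rider["heading"] - vehicle["heading"]) % 360,
    }
```

Working in the vehicle's frame this way lets a virtual reality device animate an avatar's head turn without the ride vehicle's own motion contaminating the pose.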
- the virtual reality device 102 and/or the vehicle controller 126 may transmit the relative movement characteristics to any number of additional virtual reality devices 102 associated with other riders of the virtual reality ride system 100 .
- the automation controller 104 and/or the vehicle controller 126 may receive the vehicle sensor data indicative of a current and/or past movement profile of the ride vehicle 124 and may determine a predicted ride vehicle movement that is expected to occur during a subsequent time period.
- a “predicted ride vehicle movement profile” of the ride vehicle 124 describes movement characteristics of the ride vehicle 124 that are predicted (e.g., expected) to occur during a time period.
- the predicted ride vehicle movement profile may include one or more ride vehicle movement times, one or more ride vehicle movement durations, one or more predicted ride vehicle movement directions, one or more predicted ride vehicle movement magnitudes, and so forth.
- the one or more ride vehicle movement times may be indicative of a predicted start time and/or a predicted stop time of a specific movement of the ride vehicle 124 during the time period.
- the one or more ride vehicle movement durations may be indicative of one or more durations over which a specific movement of the ride vehicle 124 is predicted to occur during the time period.
- the one or more predicted ride vehicle movement directions may be indicative of a movement direction of the ride vehicle 124 during a corresponding ride vehicle movement duration in the time period.
- the one or more predicted ride vehicle movement magnitudes may be indicative of a movement magnitude (e.g., distance) of the ride vehicle 124 that is predicted to occur at a corresponding ride vehicle movement time and/or during a corresponding predicted ride vehicle movement duration.
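The predicted ride vehicle movement profile described above can be modeled as a list of records, one per predicted movement. This dataclass sketch is an assumption for illustration; the field names and units are not taken from the specification.

```python
from dataclasses import dataclass

@dataclass
class PredictedMovement:
    """One entry in a predicted ride vehicle movement profile.
    Field names and units are illustrative assumptions."""
    start_time: float   # predicted movement time (s into the ride)
    duration: float     # predicted movement duration (s)
    direction: tuple    # predicted movement direction (unit vector)
    magnitude: float    # predicted movement magnitude (m)

    @property
    def stop_time(self):
        """Predicted stop time derived from start time and duration."""
        return self.start_time + self.duration

# A hypothetical two-movement profile for a ride segment.
profile = [
    PredictedMovement(2.0, 1.5, (1.0, 0.0), 12.0),  # forward launch
    PredictedMovement(6.0, 0.5, (0.0, 1.0), 3.0),   # lateral swerve
]
```

A controller could walk such a profile ahead of real time to pre-position avatars and pre-render facial reactions before each predicted movement occurs.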
- the automation controller 104 may generate and/or update the virtual reality image content based on the predicted ride vehicle movement profile associated with the ride vehicle 124 . In certain embodiments, the automation controller 104 may alter a position and/or an orientation of any number of virtual avatars corresponding to any number of additional riders of the virtual reality ride system 100 . Additionally, or alternatively, the automation controller 104 may generate and/or update facial poses and/or facial gestures for any number of virtual avatars based on the predicted ride vehicle movement profile associated with the ride vehicle 124 .
- the ride vehicle 124 may include a control system having multiple controllers, such as vehicle controller 126 , each having at least one processor 128 and at least one memory 130 .
- the vehicle controller 126 may be provided in the form of a computing device, such as a programmable logic controller (PLC), a personal computer, a laptop, a tablet, a mobile device (e.g., a smart phone), a server, or any other suitable computing device.
- the vehicle controller 126 may control operation of any number of vehicle sensors 132 , any number of vehicle actuators 134 , and/or any number of environment actuators 122 and may process sensor data received from the vehicle sensors 132 , the vehicle actuators 134 , and/or the environment actuators 122 .
- the vehicle controller 126 may be coupled to the vehicle sensors 132 , the vehicle actuators 134 , and/or the environment actuators 122 by any suitable techniques for communicating data and control signals between the vehicle controller 126 , the components of the ride vehicles 124 , and/or the environment actuators 122 , such as a wireless, optical, coaxial, or other suitable connection.
- the vehicle controller 126 may represent a unified hardware component or an assembly of separate components integrated through communicative coupling (e.g., wired or wireless communications). It should be noted that, in some embodiments, the vehicle controller 126 may include additional illustrated components of the virtual reality ride system 100 . For example, the vehicle controller 126 may include the environment actuators 122 and may be operable to communicate with additional virtual reality devices 102 . With respect to functional aspects of the ride vehicle 124 , the vehicle controller 126 may use information from the environment actuators 122 , the vehicle sensors 132 , and/or the vehicle actuators 134 to generate and/or transmit vehicle sensor data and/or environment sensor data to one or more virtual reality devices 102 .
- the memory 130 may include one or more tangible, non-transitory, computer-readable media that store instructions executable by the processor 128 (representing one or more processors) and/or data to be processed by the processor 128 .
- the memory 130 may include random access memory (RAM), read only memory (ROM), rewritable non-volatile memory, such as flash memory, hard drives, optical discs, and/or the like.
- the processor 128 may include one or more general purpose microprocessors, one or more application specific processors, one or more application specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), or any combination thereof.
- the memory 130 may store vehicle sensor data and/or environment sensor data obtained via the environment actuators 122 , the vehicle sensors 132 , and/or the vehicle actuators 134 and/or algorithms utilized by the processor 128 to help control operations of components of the ride vehicles 124 based on the vehicle sensor data and/or environment sensor data. Additionally, the processor 128 may process the vehicle sensor data and/or environment sensor data. In certain embodiments, the ride vehicle 124 may include additional elements not shown in FIG. 1 , such as additional data acquisition and processing controls, additional sensors and displays, user interfaces, and so forth.
- the virtual reality ride system 100 may include any number of virtual reality devices 102 .
- each rider 120 may be provided with a corresponding virtual reality device 102 .
- Each virtual reality device 102 may capture image data and/or audio data associated with a corresponding rider 120 .
- the image sensors 110 may face or point towards a face of the corresponding rider 120 and may capture image data associated with facial characteristics and/or facial movements of the corresponding rider 120 .
- the audio sensors 114 may capture audio data corresponding to speech and/or sounds made by the corresponding rider 120 .
- image data and/or audio data may be captured before the rider 120 enters the ride vehicle 124 and/or before the ride starts.
- the rider 120 may enter a designated area, such as a photo booth, and any number of cameras may capture images and/or video of the rider 120 .
- the cameras may be positioned and/or operated to capture images and/or video of the rider 120 at different angles, at different distances, with different lighting, and so forth. Additionally, or alternatively, the cameras may be operated to capture images of different portions (e.g., head, face, arm, hand, and so forth) of the rider 120 .
- an electronic display may provide instructions or prompt the rider 120 to pose in different ways, such as standing, sitting, walking, and so forth, and the cameras may capture images and/or video of the different poses.
- the electronic display may also prompt the rider 120 to make different facial gestures, facial movements, or facial poses, such as smiling, frowning, raising eyebrows, yelling, shaking or nodding of the head, and so forth as the cameras capture images and/or video of the rider 120 .
- any number of virtual reality devices 102 may receive image data corresponding to the captured images and/or video from the cameras in the designated area.
- the automation controller 104 may receive the image data and may generate and/or update a virtual avatar based on the image data.
- the automation controller 104 may analyze and/or process the image data to determine physical characteristics of the rider 120 , such as a height, a hair color, an eye color, a position of facial features, and so forth.
- the automation controller 104 may generate and/or update the virtual avatar based on the determined physical characteristics.
- the image data may be processed and/or analyzed remotely from the virtual reality device 102 and the automation controller 104 may receive processed image data and/or physical characteristics associated with any number of riders 120 .
- the automation controller 104 may compare the image data to stored facial gesture profiles and may generate and/or update the virtual avatar based on a selected facial gesture profile.
- Each stored facial gesture profile may include a set of facial feature characteristics and a corresponding emotion and/or gesture.
- the image data may be indicative of a rider smiling with upturned lips, teeth showing, and/or raised eyebrows.
- the automation controller 104 may compare the facial features with the stored facial gesture profiles and may select the smiling facial gesture profile. Accordingly, the automation controller 104 may generate and/or update the virtual avatar to depict the virtual avatar smiling based on the selected facial gesture profile.
- the virtual avatar may include a rigged model of a corresponding rider.
- rigging refers to a skeletal animation technique in which a character model (e.g., a rigged model) is represented using a series of interconnected digital features (e.g., bones).
- the rigged model may include movable features, such as facial features, a head, an arm, a hand, a finger, and so forth.
- the automation controller 104 may update the rigged model based on physical characteristics of the corresponding rider.
- the automation controller 104 may update an orientation, a facial gesture, a facial movement, a facial pose, and so forth based on image data captured by the image sensors 110 of physical (e.g., real or actual) orientation, facial gestures, facial movements, facial poses, and so forth of the corresponding rider.
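The rig update described above can be sketched as applying a sparse set of joint changes to a named-joint dictionary. The joint names (`head_yaw`, `jaw_open`, `brow_raise`) and the dictionary representation are illustrative assumptions; real rigs use hierarchical bone transforms.

```python
# A minimal rig sketch: named joints, each with a scalar pose value
# (degrees for rotations, 0-1 for blend-style features). Names assumed.
rig = {"head_yaw": 0.0, "jaw_open": 0.0, "brow_raise": 0.0}

def apply_pose(rig, pose_updates):
    """Return a new rig with the given joint values applied, leaving
    joints not mentioned in the update (and the input rig) untouched."""
    updated = dict(rig)
    updated.update(pose_updates)
    return updated
```

For example, image data indicating the rider turned their head and opened their mouth would translate into a small `pose_updates` dict for only those joints, and the unchanged rig could be kept for interpolation between frames.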
- a first rider with a first virtual reality device may turn their head to look towards a second rider with a second virtual reality device.
- the second virtual reality device may capture image data of the second rider and may process and/or transmit the image data to the first virtual reality device.
- the first virtual reality device may receive the image data corresponding to the second rider and may generate and/or update virtual reality image content to display to the first rider.
- the first virtual reality device may generate and/or update a virtual avatar corresponding to the second rider.
- FIG. 2 illustrates an example embodiment of the virtual reality device 102 in FIG. 1 .
- the virtual reality device 102 may incorporate the image sensor 110 and the audio sensor 114 .
- the image sensor 110 may capture any number of images and/or video of the rider 120 .
- the image sensor 110 may capture images and/or video of the face, body, fingers, hands, and/or limbs of the rider 120 .
- the image sensor 110 may capture a viewing area 202 selected by a controller, such as the automation controller 104 .
- the automation controller 104 may generate and transmit control signals to the image sensor 110 to capture the viewing area 202 based on movement detected by the image sensor 110 .
- the viewing area 202 may include the face of the rider 120 and/or facial features (e.g., eyes, nose, mouth, and so forth) of the rider 120 .
- the image sensor 110 may generate and/or transmit image data associated with the viewing area 202 to the automation controller 104 for processing.
- the automation controller 104 may determine physical characteristics (e.g., size, position, color, and so forth) associated with the rider 120 based on the image data.
- the automation controller 104 may receive the image data and may determine contours, textures, and/or features of the rider's face.
- the automation controller 104 may determine the position of the eyes on the rider's face, the color of the rider's hair, and so forth.
- the automation controller 104 may generate virtual reality image content based on the image data.
- the automation controller 104 may generate and/or update a virtual avatar based on the determined physical characteristics.
- the audio sensor 114 may capture speech 204 and/or sounds made by the rider 120 .
- the audio sensor 114 may generate audio data based on the captured speech 204 and/or sounds and may transmit the audio data to the automation controller 104 .
- the automation controller 104 may receive the audio data and may determine text (e.g., words, phrases, sentences, and so forth) spoken by the rider 120 .
- the automation controller 104 may process the audio data using a natural language processing algorithm to generate text data.
- the automation controller 104 may generate virtual reality image content based on the audio data and/or the text data.
- the automation controller 104 may generate and/or update a virtual avatar based on the audio data and/or the text data.
- the automation controller 104 may determine and/or generate phonemes based on the audio data and may determine and/or generate visemes based on the audio data and/or the phonemes. Additionally, or alternatively, the automation controller 104 may generate text associated with the captured speech 204 based on the audio data. For example, the automation controller 104 may use natural language processing to determine text associated with captured speech and may generate visemes based on the determined text. The automation controller 104 may transmit the audio data, the phonemes, the text, and/or the visemes to any number of additional virtual reality devices to generate and/or update virtual reality image content corresponding to the rider 120 based on the captured speech 204 of the rider 120 .
- FIG. 3 illustrates an example embodiment of the virtual reality ride system 100 in FIG. 1 including a first virtual reality device 102 A worn by a first rider 120 A and a second virtual reality device 102 B worn by a second rider 120 B.
- the first virtual reality device 102 A may capture sensor data, audio data, and/or image data associated with the first rider 120 A, as described herein.
- the first virtual reality device 102 A may transmit the sensor data, the audio data, and/or the image data associated with the first rider 120 A to the second virtual reality device 102 B.
- the second virtual reality device 102 B may receive the sensor data, the audio data, and/or the image data and may generate and/or update virtual reality image content to be displayed to the second rider 120 B.
- the first rider 120 A may turn their head towards the second rider 120 B.
- the second virtual reality device 102 B may generate and/or update a virtual avatar corresponding to the first rider 120 A based on the sensor data indicating the first rider 120 A turning their head.
- the second rider 120 B may view the virtual avatar corresponding to the first rider 120 A turning their head.
- the second virtual reality device 102 B may generate and/or update the virtual reality image content based on image data captured by the first virtual reality device 102 A.
- image sensors 110 in the first virtual reality device 102 A may capture images indicative of facial movements, facial gestures, facial poses, and so forth made by the first rider 120 A.
- the first virtual reality device 102 A may transmit the image data corresponding to the captured images to the second virtual reality device 102 B.
- the second virtual reality device 102 B may generate and/or update the virtual avatar corresponding to the first rider 120 A based on the image data. For example, the first rider 120 A may smile, blink, move their eyes, and so forth.
- the second virtual reality device 102 B may generate and/or update the virtual avatar corresponding to the first rider 120 A based on the image data indicating facial movements of the first rider 120 A.
- the second rider 120 B may view the virtual avatar corresponding to the first rider 120 A blinking, smiling, moving their eyes, and so forth.
- the second virtual reality device 102 B may generate and/or update the virtual reality image content based on audio data captured by the first virtual reality device 102 A.
- audio sensors 114 in the first virtual reality device 102 A may capture audio indicative of speech made by the first rider 120 A.
- the first virtual reality device 102 A may transmit the audio data corresponding to the captured speech to the second virtual reality device 102 B.
- the second virtual reality device 102 B may generate and/or update the virtual avatar corresponding to the first rider 120 A based on the audio data.
- the second virtual reality device 102 B may perform natural language processing on the audio data to determine text corresponding to the audio data.
- the second virtual reality device 102 B may generate a sequence of phonemes and/or a sequence of visemes based on the audio data, the determined text, or a combination thereof. As such, the second virtual reality device 102 B may generate and/or update facial movements of the virtual avatar corresponding to the first rider 120 A based on the sequence of visemes. Additionally, or alternatively, the second virtual reality device 102 B may include one or more speakers to playback the audio data captured by the first virtual reality device 102 A. Accordingly, the second virtual reality device 102 B may display facial movements of the virtual avatar based on the audio data so the virtual avatar appears to be speaking during playback of the audio data.
- the first virtual reality device 102 A includes the automation controller 104 , the processor 106 , and the memory 108 . Additionally, or alternatively, the first virtual reality device 102 A may include any number of components, such as image sensors 110 , display 112 , audio sensors 114 , speakers 116 , antenna 118 , and so forth.
- the second virtual reality device 102 B may include the same components and/or similar components to the first virtual reality device 102 A.
- FIG. 4 illustrates a flowchart of a process 400 for operating the virtual reality ride system 100 of FIG. 1 , in accordance with an embodiment of the present disclosure. While the process is described as being performed by the automation controller 104 , it should be understood that the process 400 may be performed by any suitable device, such as the processor 106 , the vehicle controller 126 , and so forth, that may control and/or communicate with components of a virtual reality ride system. Furthermore, while the process 400 is described using steps in a specific sequence, it should be understood that the present disclosure contemplates that the described steps may be performed in different sequences than the sequence illustrated, and certain described steps may be skipped or not performed altogether. In some embodiments, the process 400 may be implemented by executing instructions stored in a tangible, non-transitory, computer-readable medium, such as the memory 108 , using any suitable processing circuitry, such as the processor 106 .
- a virtual reality device such as virtual reality device 102 in FIG. 1 may receive audio data, image data, rider sensor data, vehicle sensor data, or any combination thereof.
- the automation controller 104 may receive audio data captured by one or more audio sensors 114 of a separate virtual reality device, image data captured by one or more image sensors 110 of the separate virtual reality device, rider sensor data captured by one or more rider sensors, and/or vehicle sensor data captured by one or more vehicle sensors 132 .
- the virtual reality device 102 may receive environment sensor data associated with a ride environment.
- the automation controller 104 may generate and/or update virtual reality image content based on the image data. For example, the automation controller 104 may determine physical characteristics (e.g., hair color, facial movements, facial gestures, and so forth) of another rider of the virtual reality ride system 100 and may update and/or animate a virtual avatar corresponding to the other rider. Additionally, or alternatively, the automation controller 104 may generate and/or update facial features of the virtual avatar based on the image data. For example, the automation controller 104 may generate and/or update a position and/or a size of facial features (e.g., mouth, nose, eyes, and so forth) based on the image data.
- the automation controller may generate text data based on the audio data.
- the automation controller 104 may perform a natural language processing algorithm to determine text associated with captured speech for another rider of the virtual reality ride system.
- the automation controller 104 may determine a sequence of phonemes and/or a sequence of visemes associated with the captured speech.
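The phoneme-to-viseme step might look like the following sketch. The ARPAbet-style phoneme labels and the particular viseme groupings are simplified assumptions; production systems use larger, engine-specific mapping tables.

```python
# Illustrative sketch of mapping a phoneme sequence to a viseme sequence.
# Several phonemes share one mouth shape (viseme), so the map is many-to-one.
PHONEME_TO_VISEME = {
    "P": "bilabial", "B": "bilabial", "M": "bilabial",   # lips pressed together
    "F": "labiodental", "V": "labiodental",              # lower lip to teeth
    "AA": "open", "AE": "open", "AH": "open",            # open-mouth vowels
    "UW": "rounded", "OW": "rounded",                    # rounded lips
    "S": "teeth", "Z": "teeth",                          # teeth nearly closed
}


def phonemes_to_visemes(phonemes):
    """Convert an ordered phoneme sequence to an ordered viseme sequence.

    Adjacent identical visemes are collapsed so the avatar's mouth is not
    re-animated to the same shape twice in a row.
    """
    visemes = []
    for p in phonemes:
        v = PHONEME_TO_VISEME.get(p, "neutral")  # unknown sounds: neutral mouth
        if not visemes or visemes[-1] != v:
            visemes.append(v)
    return visemes


# "mob" -> M AA B: bilabial, open, bilabial
print(phonemes_to_visemes(["M", "AA", "B"]))
```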
- the automation controller 104 may process the audio data.
- the automation controller 104 may filter the audio data to remove background noise, may enhance an audio characteristic (e.g., volume) of the audio data, may alter a voice characteristic (e.g., pitch, tone, timbre, and so forth) associated with the captured speech, and so forth.
- the automation controller 104 may generate new audio data and/or update the audio data based on a theme of the virtual reality ride system 100 .
- the virtual reality ride system 100 may include an electronics or robotics theme and the automation controller 104 may generate new audio data and/or alter the audio data to produce a more robotic sounding speech based on the captured speech.
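One common way to produce a "robotic" voice effect of this kind is ring modulation. The sketch below is a minimal illustration under stated assumptions: the sample rate, the 30 Hz carrier, and the plain-Python sample loop are choices made for the example, not the disclosed implementation.

```python
# Minimal sketch of themed audio alteration: ring-modulating captured
# speech to give it a "robotic" character.
import math


def robotize(samples, sample_rate=16000, carrier_hz=30.0):
    """Multiply speech samples by a low-frequency sine carrier (ring modulation)."""
    out = []
    for n, s in enumerate(samples):
        carrier = math.sin(2.0 * math.pi * carrier_hz * n / sample_rate)
        out.append(s * carrier)
    return out


# A 440 Hz test tone stands in for captured speech.
tone = [math.sin(2.0 * math.pi * 440.0 * n / 16000.0) for n in range(1600)]
robotic = robotize(tone)
```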
- the automation controller 104 may generate and/or update the virtual reality image content based on the text data and/or the audio data.
- the automation controller 104 may adjust facial features of the virtual avatar based on the text data and/or the audio data.
- the automation controller 104 may adjust and/or animate the facial features of the virtual avatar based on the sequence of visemes.
- the virtual reality image content may depict movement of the facial features of the virtual avatar corresponding to the captured speech.
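Lining the avatar's mouth shapes up with the captured speech might be sketched as a keyframe schedule. The uniform per-viseme timing below is a simplifying assumption; real systems take per-viseme durations from the phoneme alignment.

```python
# Sketch of scheduling avatar mouth keyframes over the speech duration.

def viseme_keyframes(visemes, speech_duration_s):
    """Return (start_time_s, viseme) pairs spread evenly over the speech."""
    if not visemes:
        return []
    step = speech_duration_s / len(visemes)
    return [(round(i * step, 4), v) for i, v in enumerate(visemes)]


# Each mouth shape is held for roughly 0.2 s of the 0.6 s utterance.
frames = viseme_keyframes(["bilabial", "open", "bilabial"], 0.6)
print(frames)
```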
- the virtual reality device 102 may display the virtual reality image content including the virtual avatar.
- the automation controller 104 may instruct the display 112 to display the virtual reality image content and/or may instruct one or more speakers to playback the audio data.
- the rider of the virtual reality ride system 100 may hear playback of the captured speech and may view facial movements of the virtual avatar corresponding to the captured speech to provide a more realistic and/or immersive experience.
Abstract
In an embodiment, a virtual reality ride system includes a display to present virtual reality image content to a first rider, an audio sensor to capture audio data associated with a second rider, and an image sensor to capture image data associated with the second rider. The virtual reality ride system also includes at least one processor communicatively coupled to the display and configured to (i) receive the audio data, the image data, or both, (ii) generate a virtual avatar corresponding to the second rider, wherein the virtual avatar includes a set of facial features, (iii) update the set of facial features based on the audio data, the image data, or both, and (iv) instruct the display to present the virtual reality image content including the virtual avatar and the updated set of facial features.
Description
- This application claims priority from and the benefit of U.S. Provisional Application Ser. No. 63/296,363, entitled “FACIAL ACTIVITY DETECTION FOR VIRTUAL REALITY SYSTEMS AND METHODS”, filed Jan. 4, 2022, which is hereby incorporated by reference in its entirety for all purposes.
- This section is intended to introduce the reader to various aspects of art that may be related to various aspects of the present techniques, which are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
- The present disclosure generally relates to virtual reality systems and, more particularly, to virtual reality (VR) systems implemented and/or operated incorporating facial activity detection to facilitate providing a more immersive user experience.
- Amusement parks often contain attractions or experiences that use virtual reality systems to provide enjoyment and entertain guests of the amusement parks. For example, the attractions may include themed environments established using display devices presenting media content (e.g., in the form of video, text, still imagery, motion graphics, or a combination thereof). For some attractions, it may be desirable to display media content with special visual effects to create a realistic and/or immersive viewing or playing experience for guests. To facilitate providing a more realistic and/or immersive experience, attractions may be implemented and/or operated to present virtual reality content to guests.
- A summary of certain embodiments disclosed herein is set forth below. It should be understood that these aspects are presented merely to provide the reader with a brief summary of these certain embodiments and that these aspects are not intended to limit the scope of this disclosure. Indeed, this disclosure may encompass a variety of aspects that may not be set forth below.
- In an embodiment, a virtual reality ride system includes a display to present virtual reality image content to a first rider, an audio sensor to capture audio data associated with a second rider, and an image sensor to capture image data associated with the second rider. The virtual reality ride system also includes at least one processor communicatively coupled to the display and configured to receive the audio data, the image data, or both. The at least one processor is also configured to generate a virtual avatar corresponding to the second rider, wherein the virtual avatar includes a set of facial features. The at least one processor is also configured to update the set of facial features based on the audio data, the image data, or both and instruct the display to present the virtual reality image content including the virtual avatar and the updated set of facial features.
- In an embodiment, a virtual reality device includes an audio sensor to capture audio data indicative of speech of a user and an image sensor to capture image data indicative of facial characteristics of the user. The virtual reality device also includes at least one processor communicatively coupled to the audio sensor, and the image sensor. The at least one processor determines a set of facial characteristics based on the image data, determines a set of facial movements associated with the set of facial characteristics based on the audio data, and transmits the set of facial characteristics and the set of facial movements to a second virtual reality device, the second virtual reality device configured to display virtual reality image content based on the set of facial characteristics and the set of facial movements.
- In an embodiment, a method includes receiving audio data, image data, or both, generating a virtual avatar based on the image data, the virtual avatar including a set of facial features, and determining a set of facial characteristics associated with the image data. The method also includes comparing the set of facial characteristics with a set of facial gesture profiles, each facial gesture profile of the set of facial gesture profiles including a corresponding set of stored facial characteristics. The method also includes selecting, based on the comparison, a facial gesture profile of the set of facial gesture profiles, animating the set of facial features based on the selected facial gesture profile, the audio data, or both, and presenting virtual reality image content including the virtual avatar and the animated set of facial features.
- These and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
- FIG. 1 is a block diagram of a virtual reality ride system including a virtual reality device, in accordance with an embodiment of the present disclosure;
- FIG. 2 is an example of the virtual reality device of FIG. 1, in accordance with an embodiment of the present disclosure;
- FIG. 3 is an example of multiple virtual reality devices of FIG. 1, in accordance with an embodiment of the present disclosure; and
- FIG. 4 is a flow diagram of an example process for operating the virtual reality ride system of FIG. 1, in accordance with an embodiment of the present disclosure.
- One or more specific embodiments will be described below. In an effort to provide a concise description of these embodiments, not all features of an actual implementation are described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
- When introducing elements of various embodiments of the present disclosure, the articles “a,” “an,” and “the” are intended to mean that there are one or more of the elements. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements. Additionally, it should be understood that references to “one embodiment” or “an embodiment” of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features. As used herein, “viseme” refers to a shape and/or configuration of facial features such as the mouth, lips, and/or tongue when making a corresponding sound. As used herein, “phoneme” refers to a distinct unit of sound in spoken language that facilitates distinguishing between different spoken words.
- The present disclosure relates generally to virtual reality systems. More particularly, the present disclosure relates to virtual reality systems incorporating facial activity detection to facilitate providing a more immersive user experience. Amusement parks often contain attractions or experiences that use virtual reality systems to provide enjoyment and entertain guests of the amusement parks. The attractions may include any type of ride system that is designed to entertain a passenger, such as an attraction that includes a ride vehicle that travels along a path, an attraction that includes a room or theatre with stationary or moving seats for passengers to sit in while the passengers watch a video, an attraction that includes a pathway for guests to travel along, a room for guests to explore, or the like. For some attractions, it may be desirable to display media content with special visual effects to create a realistic and/or immersive viewing or playing experience for guests. Additionally, while the disclosed embodiments generally discuss virtual reality systems that are used for entertainment purposes, the disclosed embodiments may also apply to virtual reality systems that are used for any other suitable purpose.
- In some instances, a rider on a virtual reality ride system may experience virtual reality image content that fails to resemble the surroundings (e.g., other riders, weather conditions, landscape, and so forth), which may affect (e.g., reduce and/or degrade) the ride experience when the virtual reality image content does not match the rider's expected view. For example, when the rider turns their head toward another rider on the virtual reality ride system, the rider may expect to see the other rider depicted in the virtual reality display (e.g., head-mounted display). However, the virtual reality image content presented on the display may not match the other rider's physical characteristics, gestures, facial features, and so forth. As such, a mismatch between the rider's expected view and the virtual reality image content may affect the ride experience.
- To facilitate reducing mismatch of a rider's expected view and the virtual reality image content, in some instances, a virtual reality ride system may generate virtual reality image content based at least in part on characteristics of other riders and/or guests and/or based at least in part on characteristics of physical (e.g., actual and/or real) movement of a ride vehicle, and thus, a rider carried by the vehicle.
- As described above, to facilitate reducing mismatch between the rider's expected view and the virtual reality image content, a virtual reality ride system may present virtual reality image content to a rider of a ride vehicle such that virtual reality image content is coordinated with physical (e.g., real and/or actual) characteristics of other riders. For example, to display other riders of the virtual reality ride system and/or other guests, the virtual reality ride system may generate and display virtual reality image content that includes virtual avatars with similar facial characteristics (e.g., mouth, nose, eyes, and so forth), similar facial movement (e.g., open mouth, raised eyebrows, furrowed brow, and so forth), similar facial gestures (e.g., smile, frown, excitement, and so forth) and that results in visually perceived images occurring at approximately the same time and for approximately the same duration. As used herein, virtual avatar refers to a graphical representation (e.g., a virtual representation) of a character (e.g., a rider of a virtual reality ride system and/or a guest of an amusement park attraction or experience) in a graphical environment (e.g., a virtual reality environment, a mixed reality environment, an augmented reality environment, and so forth).
- To facilitate coordinating presentation of virtual reality content with physical characteristics and/or movement of other riders and/or guests, a virtual reality ride system may include one or more image sensors. For example, a rider may view a display (e.g., a head-mounted display) that includes a camera facing the rider that is implemented and/or operated to sense (e.g., capture images) physical characteristics of the rider, such as facial features, facial movement, facial gestures, movement of limbs, and so forth. As such, in some embodiments, a virtual reality ride system may coordinate presentation of virtual reality image content with physical characteristics of the rider to other riders and/or guests at approximately the same time as image data indicative of the physical characteristics is determined (e.g., sensed and/or captured).
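One way the captured facial characteristics might be matched to known gestures is a nearest-profile search against stored facial gesture profiles, sketched below. The feature keys, profile values, Euclidean distance metric, and similarity threshold are illustrative assumptions, not the disclosed method.

```python
# Sketch: compare captured facial-feature measurements against stored
# facial gesture profiles and select the closest one within a threshold.
import math

GESTURE_PROFILES = {
    # mouth_open and brow_raise are assumed normalized 0..1 measurements
    "smiling": {"mouth_open": 0.3, "brow_raise": 0.2},
    "yawning": {"mouth_open": 0.9, "brow_raise": 0.1},
    "frowning": {"mouth_open": 0.1, "brow_raise": 0.0},
}


def match_gesture(captured, profiles, threshold=0.25):
    """Return the profile name closest to the captured features, or None."""
    best_name, best_dist = None, float("inf")
    for name, stored in profiles.items():
        dist = math.sqrt(sum((captured[k] - stored[k]) ** 2 for k in stored))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None


print(match_gesture({"mouth_open": 0.85, "brow_raise": 0.15}, GESTURE_PROFILES))
```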
- Typically, a rider on a virtual reality ride system may speak to another rider. In some instances, an avatar of a rider speaking in the virtual reality image content may fail to resemble a speaking character, which affects (e.g., reduces and/or degrades) the ride experience, when the virtual reality image content does not match the rider's expected view. To facilitate reducing the mismatch when riders speak to one another, a virtual reality ride system may generate virtual reality image content based at least in part on captured speech of riders and/or guests. For example, the virtual reality ride system may generate and provide audio content that corresponds to captured speech of riders and may generate and display virtual reality image content that includes virtual avatars with similar mouth movement to speaking riders that results in visually perceived images occurring at approximately the same time and for approximately the same duration.
- To facilitate coordinating virtual reality image content with speech and corresponding facial movements of riders and/or guests, a virtual reality ride system may include one or more audio sensors (e.g., microphones). For example, a display (e.g., a head-mounted display) may include a microphone that is implemented and/or operated to sense (e.g., capture and/or detect) speech from a corresponding rider. Additionally, or alternatively, the virtual reality ride system may analyze the sensed speech to determine text based on the captured speech and/or to determine facial movements based on the captured speech and/or determined text. As such, in some embodiments, a virtual reality ride system may coordinate presentation of audio content and/or virtual reality image content with captured speech of riders at approximately the same time as audio data indicative of the speech is sensed (e.g., captured and/or detected).
- Generally, visual stimuli are perceived by a human's visual system. In fact, at least in some instances, changes in perceived visual stimuli over time may enable a human to detect motion (e.g., movement). For example, when a perceived visual stimulus is translated left over time, the human may perceive (e.g., determine and/or detect) that he/she is moving right relative to the perceived visual stimulus, or vice versa. Additionally, or alternatively, when a perceived visual stimulus is translated upward over time, the human may perceive that he/she is moving downward relative to the perceived visual stimulus, or vice versa.
- Movement of a human may additionally or alternatively be perceived by the human's vestibular system (e.g., inner ear). In other words, at least in some instances, movement of a human may be perceived by the human's vestibular system as well as by the human's visual system. However, at least in some instances, a mismatch between the movement perceived by the human's vestibular system and the movement perceived by the human's visual system may result in the human experiencing motion sickness.
- In other words, at least in some instances, a rider on a virtual reality ride system may experience motion sickness, which affects (e.g., reduces and/or degrades) the ride experience, when visually perceived movement does not match movement perceived by the rider's vestibular system. As described above, a ride vehicle may carry a rider through a ride environment of a virtual reality ride system and, thus, movement of the rider may be dependent at least in part on movement of the ride vehicle. Thus, to facilitate reducing likelihood of producing motion sickness, a virtual reality ride system may coordinate virtual reality content with physical ride vehicle movement. For example, the virtual reality ride system may display virtual reality image content that is expected to result in characteristics, such as magnitude, time, duration, and/or direction, of visually perceived movement matching corresponding characteristics of movement perceived by the rider's vestibular system.
- To facilitate reducing likelihood of producing motion sickness, a virtual reality ride system may present virtual reality image content to a rider of a ride vehicle such that movement perceived from the virtual reality content is coordinated with physical (e.g., real and/or actual) movement of the ride vehicle. For example, to compensate for physical movement of a ride vehicle, the virtual reality ride system may generate and display virtual reality image content that results in visually perceived movement occurring at approximately the same time, for approximately the same duration, and/or in approximately the same direction as the physical movement of the ride vehicle. In fact, in some embodiments, the virtual reality ride system may generate movement-coordinated virtual reality content by adapting (e.g., adjusting) default virtual reality content, for example, which corresponds with a default (e.g., stationary and/or planned) ride vehicle movement profile.
- To facilitate coordinating presentation of virtual reality content with physical movement of a ride vehicle, a virtual reality ride system may include one or more sensors, such as a vehicle sensor, a rider (e.g., head-mounted display) sensor, and/or an environment sensor. For example, a ride vehicle may include one or more vehicle sensors, such as a gyroscope and/or accelerometer, which are implemented and/or operated to sense (e.g., measure and/or determine) characteristics of ride vehicle movement, such as movement time, movement duration, movement direction (e.g., orientation), and/or movement magnitude (e.g., distance). As such, in some embodiments, a virtual reality ride system may coordinate presentation of virtual reality content with ride vehicle movement at least in part by presenting movement-coordinated virtual reality content at approximately the same time as sensor data indicative of occurrence of the ride vehicle movement is determined (e.g., sensed and/or measured).
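As a rough illustration of deriving movement characteristics (duration and magnitude) from vehicle sensor data, accelerometer samples could be doubly integrated. The constant sampling rate and simple rectangular integration below are simplifying assumptions; a real system would fuse gyroscope and accelerometer data with more careful numerics.

```python
# Sketch: estimate ride-vehicle movement duration and displacement from
# accelerometer samples by doubly integrating acceleration.

def movement_from_accel(accel_mps2, dt_s):
    """Doubly integrate acceleration samples to velocity and displacement."""
    velocity = 0.0
    distance = 0.0
    for a in accel_mps2:
        velocity += a * dt_s          # v += a*dt
        distance += velocity * dt_s   # x += v*dt
    duration = len(accel_mps2) * dt_s
    return duration, distance


# 1 m/s^2 for 2 s at 10 Hz; the rectangular sum slightly overshoots the
# ideal displacement x = a*t^2/2 = 2 m.
duration, distance = movement_from_accel([1.0] * 20, 0.1)
```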
- However, at least in some instances, generation and/or presentation (e.g., display) of virtual reality content is generally non-instantaneous. In other words, at least in some such instances, reactively generating and/or presenting virtual reality content may result in presentation of virtual reality content being delayed relative to another rider's movement and/or corresponding ride vehicle movement. Merely as an illustrative non-limiting example, due to the non-instantaneous nature, reactively generating and/or presenting virtual reality image content may result in the virtual reality image content being displayed after the other rider's movement and/or corresponding ride vehicle movement has already occurred, which, at least in some instances, may result in a reduced and/or degraded rider experience.
- Thus, to facilitate coordinating presentation of virtual reality content, in some embodiments, a virtual reality ride system may predict characteristics, such as movement time, movement duration, movement direction, and/or movement magnitude, of the ride vehicle movement and/or riders in the ride vehicle over a prediction horizon (e.g., subsequent period of time). In other words, in such embodiments, the virtual reality ride system may determine a predicted ride vehicle movement profile (e.g., trajectory) over the prediction horizon and/or a predicted rider movement profile (e.g., facial gesture, movement, and so forth) over the prediction horizon. For example, the predicted rider movement profile may indicate that a corresponding rider raises their arms from a first time to a second (e.g., subsequent) time, smiles from the second time to a third (e.g., subsequent) time, laughs from the third time to a fourth (e.g., subsequent) time, and so forth. As another example, the predicted ride vehicle movement profile may indicate that a corresponding ride vehicle moves a first distance (e.g., magnitude) in a first direction from a first time to a second (e.g., subsequent) time, a second distance in a second direction from the second time to a third (e.g., subsequent) time, and so on.
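A predicted movement profile over a prediction horizon, as described above, might be represented as piecewise time segments that can be queried ahead of the actual motion. The segment layout and field names below are assumptions made for illustration.

```python
# Sketch: a predicted ride-vehicle movement profile as piecewise segments,
# queried for the movement expected at a future time so that
# movement-coordinated content can be generated before the motion occurs.

# Each segment: (start_s, end_s, direction, magnitude_m)
PREDICTED_PROFILE = [
    (0.0, 2.0, "forward", 5.0),
    (2.0, 3.5, "left", 2.0),
    (3.5, 6.0, "forward", 8.0),
]


def predicted_movement(profile, t_s):
    """Return the (direction, magnitude) predicted for time t_s, or None."""
    for start, end, direction, magnitude in profile:
        if start <= t_s < end:
            return direction, magnitude
    return None  # t_s is beyond the prediction horizon


print(predicted_movement(PREDICTED_PROFILE, 2.5))
```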
- In this manner, the techniques described in the present disclosure may facilitate coordinating virtual reality image content based on physical characteristics of riders, the ride vehicle, and/or captured speech, which, at least in some instances, may facilitate improving the ride experience provided by the virtual reality ride system.
- With the foregoing in mind,
FIG. 1 illustrates an example of a virtual reality ride system 100 including a virtual reality device 102 (e.g., head-mounted display device), any number of environment actuators 122, and any number of ride vehicles 124. The virtual reality ride system 100 may be used to provide visual effects to a display 112 during an amusement park attraction and/or experience. In certain embodiments, the virtual reality device 102 may be provided in the form of a computing device, such as a head-mounted display device, programmable logic controller (PLC), a personal computer, a laptop, a tablet, a mobile device (e.g., a smart phone), a server, or any other suitable computing device. The virtual reality device 102 may control operation of any number of image sensors 110, any number of audio sensors 114, and the display 112 and may process data received from the image sensors 110, audio sensors 114, environment actuators 122, vehicle sensors 132, and/or vehicle actuators 134. The virtual reality device 102 may include the image sensors 110, the display 112, the audio sensors 114, the speakers 116, and an antenna 118. An automation controller 104 may be coupled to the image sensors 110, the audio sensors 114, the display 112, the antenna 118, the environment actuators 122, and/or the ride vehicles 124 by any suitable techniques for communicating data and control signals between the automation controller 104, the components of the virtual reality device 102, the environment actuators 122, and/or the ride vehicles 124, such as a wireless, optical, coaxial, or other suitable connection. - The
virtual reality device 102 may include a control system having multiple controllers, such as the automation controller 104, each having at least one processor 106 and at least one memory 108. The virtual reality device 102 may represent a unified hardware component or an assembly of separate components integrated through communicative coupling (e.g., wired or wireless communications). It should be noted that, in some embodiments, the virtual reality device 102 may include additional illustrated components of the virtual reality ride system 100. For example, the virtual reality device 102 may include the vehicle sensors 132 and/or a vehicle controller 126 and may be operable to communicate with additional virtual reality devices. With respect to functional aspects of the virtual reality device 102, the automation controller 104 may use information from the image sensors 110, the audio sensors 114, the environment actuators 122, and/or the ride vehicles 124 to generate and/or update virtual reality image content and to control operation of the display 112 to present the virtual reality image content. Further, the virtual reality device 102 may include communication features (e.g., the antenna 118) that facilitate communication with other devices (e.g., external sensors, additional virtual reality devices 102) to provide additional data for use by the virtual reality device 102. For example, the virtual reality device 102 may operate to communicate with external cameras and/or audio sensors to facilitate image data and/or audio data capture for an amusement park attraction or experience, guest interaction, and so forth. - In some embodiments, the
memory 108 may include one or more tangible, non-transitory, computer-readable media that store instructions executable by the processor 106 (representing one or more processors) and/or data to be processed by the processor 106. For example, the memory 108 may include random access memory (RAM), read only memory (ROM), rewritable non-volatile memory, such as flash memory, hard drives, optical discs, and/or the like. Additionally, the processor 106 may include one or more general purpose microprocessors, one or more application specific processors, one or more application specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), or any combination thereof. Further, the memory 108 may store sensor data and/or information obtained via the image sensors 110, the audio sensors 114, the environment actuators 122, and/or the ride vehicles 124, virtual reality image content data generated, transmitted, and/or displayed via the display 112, and/or algorithms utilized by the processor 106 to help control operations of components of the virtual reality ride system 100 based on the sensor data and/or virtual reality image content data. Additionally, the processor 106 may process the sensor data and/or information to generate virtual reality image content data for a virtual avatar for display on the display 112 or another display of another virtual reality device. In certain embodiments, the virtual reality device 102 may include additional elements not shown in FIG. 1, such as additional data acquisition and processing controls, additional sensors and displays, user interfaces, and so forth. - The
image sensors 110 may be incorporated into the virtual reality device 102 and may be capable of capturing images and/or video of a rider 120. For example, the virtual reality device 102 may be a head-mounted display device worn on the head of the rider 120 and the image sensors 110 may capture any number of images of the rider 120. In certain embodiments, the image sensors 110 may capture facial features (e.g., eyes, nose, mouth, lips, chin, eyebrows, ears, and so forth) of the rider 120. The image sensors 110 may generate and/or may transmit image data corresponding to the captured images to the automation controller 104. The image sensors 110 may include any number of cameras, such as any number of video cameras, any number of depth cameras capable of determining depth and distance to facial features and/or between facial features, any number of infrared cameras, any number of digital cameras, and so forth. In certain embodiments, the image sensors 110 may process the image data before transmission to the automation controller 104. Alternatively, the image sensors 110 may transmit raw image data to the automation controller 104. In some embodiments, the image sensors 110 may be capable of tracking a gaze of the rider 120. For example, the image sensors 110 may determine a direction the rider 120 is looking. - In certain embodiments, the
memory 108 may store facial gesture profiles associated with a number of facial gestures. For example, each facial gesture profile may correspond to a different facial gesture, such as smiling, blinking, frowning, yawning, and so forth. The automation controller 104 may compare the captured image data from the image sensors 110 to the stored facial gesture profiles and may determine the captured image data is similar (e.g., matches, within a similarity threshold) to a stored facial gesture profile. For example, the automation controller 104 may compare a position, an orientation, a movement, and/or a shape of any number of facial features depicted in the image data to the stored facial gesture profiles. As such, the automation controller 104 may determine a stored facial gesture profile that corresponds to captured images of the rider 120. - The
display 112 may be capable of depicting image content (e.g., still image, video, visual effects) to be viewed by one or more riders 120 of the virtual reality ride system 100 and/or guests of an amusement park attraction and/or experience. In some embodiments, the display 112 may be a head-mounted display and may be placed or worn on the head of a rider 120 and the display 112 may be placed in front of either one or both eyes of the rider 120. In certain embodiments, the display 112 may be capable of depicting virtual reality image content including a virtual avatar (e.g., avatar) of other riders of the virtual reality ride system 100 and/or guests of the amusement park attraction and/or experience. Additionally, or alternatively, the virtual reality image content may include more than one virtual avatar and may depict image content associated with the amusement park attraction and/or experience. For example, an amusement park ride may appear to take place on horseback travelling through a forest, on a motorcycle travelling along the road, in a haunted house, and so forth. - The
audio sensors 114 may also be incorporated into thevirtual reality device 102 and may be capable of capturing speech and/or sounds of therider 120. For example, theaudio sensors 114 may include microphones and theaudio sensors 114 may be positioned on thevirtual reality device 102 adjacent/proximate the mouth of therider 120 wearing thevirtual reality device 102. Theaudio sensors 114 may generate and/or may transmit audio data corresponding to the captured speech and/or sounds to theautomation controller 104. In certain embodiments, theaudio sensors 114 may process the audio data before transmission to theautomation controller 104. Alternatively, theaudio sensors 114 may transmit raw audio data to theautomation controller 104. In certain embodiments, thevirtual reality device 102 may include any number of audio playback components, such as one ormore speakers 116, to playback audio content associated with the virtual reality experience. For example, thespeakers 116 may playback audio corresponding to sounds of a horse during a virtual horseback ride, sounds of a motorcycle during a virtual motorcycle ride, and so forth. Additionally, or alternatively, thespeakers 116 may playback audio content based on received audio data from othervirtual reality devices 102. For example,virtual reality devices 102 worn by other riders of the virtualreality ride system 100 may capture audio data associated with the other riders (e.g., speech, sounds, and so forth) viaaudio sensors 114, as described herein. Thevirtual reality devices 102 may transmit the captured audio data to any number of additionalvirtual reality devices 102 for playback of the captured audio data via the audio playback components. - The
automation controller 104 may generate and/or update virtual reality image content based on the audio data. In certain embodiments, the automation controller 104 may determine any number of phonemes associated with the audio data. For example, the automation controller 104 may determine a sequence of phonemes based on captured speech of a rider 120 of the virtual reality ride system 100. The sequence of phonemes may include an order of phonemes (e.g., first to last) corresponding to when the sounds were made by the rider 120. The automation controller 104 may determine a corresponding sequence of visemes based on the sequence of phonemes and/or the audio data. In some embodiments, the automation controller 104 may determine facial movements (e.g., position and/or shape of the mouth) based on the visemes and may alter the facial features of a virtual avatar corresponding to the rider 120 based on the visemes. Accordingly, the automation controller 104 may generate and/or update the virtual reality image content to display facial movements of the virtual avatar corresponding to the captured speech of the rider 120. - Additionally, or alternatively, the
automation controller 104 may analyze the audio data using natural language processing to determine text associated with corresponding captured speech of the rider 120. The automation controller 104 may generate and/or update the virtual reality image content based on the determined text. For example, the automation controller 104 may generate and/or animate a rigged model of a virtual avatar based on the determined text. The rigged model may include a number of movable features, such as facial features, and the automation controller 104 may animate the movable features based on the captured audio data and/or the determined text. - The
antenna 118 may transmit data to additional virtual reality devices 102 and/or receive data from the additional virtual reality devices 102 via, for example, a network or a direct connection. In some embodiments, the antenna 118 may receive image data corresponding to images of other riders 120 and/or audio data corresponding to speech and/or sounds of the other riders 120 from additional virtual reality devices 102. The antenna 118 may be communicatively coupled to the automation controller 104 and may transmit data received from other virtual reality devices 102 to the automation controller 104 for processing. Additionally, or alternatively, the antenna 118 may receive image data and/or audio data from the automation controller 104 and may transmit the image data and/or audio data to additional virtual reality devices 102. The antenna 118 may be representative of any of various communication devices (e.g., wired or wireless transmitters and/or receivers). - In some embodiments, the virtual
reality ride system 100 may be deployed at an amusement park, a theme park, a carnival, a fair, and/or the like. Additionally, in some embodiments, the virtual reality ride system 100 may be a roller coaster ride system, a lazy river ride system, a log flume ride system, a boat ride system, or the like. However, it should be appreciated that the depicted example is merely intended to be illustrative and not limiting. For example, in other embodiments, the virtual reality device 102 may be fully included in one or more ride vehicles 124. Additionally, or alternatively, in other embodiments, any components of the virtual reality device 102 may be remote from the one or more ride vehicles 124 and/or the one or more riders 120. In any case, a ride vehicle 124 may generally be implemented and/or operated to carry (e.g., support) one or more riders 120 (e.g., users) through the ride environment of the virtual reality ride system 100. Accordingly, physical (e.g., actual and/or real) movement (e.g., motion) of a rider 120 in the ride environment may generally be dependent on physical movement of the ride vehicle 124 carrying the rider. - To facilitate controlling movement of the
ride vehicle 124, the ride vehicle may include one or more vehicle actuators 134. For example, the vehicle actuators 134 may include pneumatics, hydraulics, an engine, a motor, and/or a brake that enables controlling movement speed of the ride vehicle 124. In other embodiments, the vehicle actuators 134 may include a steering wheel and/or a rudder that enables controlling movement direction of the ride vehicle 124. In some embodiments, the ride vehicle 124 may additionally or alternatively include one or more haptic vehicle actuators implemented and/or operated to present virtual reality tactile content. Additionally, or alternatively, one or more environment actuators 122 may be implemented and/or operated to move the ride vehicle 124. For example, the environment actuators 122 may include pneumatics, hydraulics, an engine, a motor, and/or a brake to move the ride vehicle 124 through a ride environment. - The
ride vehicle 124 may also include one or more vehicle sensors 132 to detect (e.g., sense and/or measure) sensor data indicative of any number of movement characteristics of the ride vehicle 124, such as orientation of the ride vehicle 124, location of the ride vehicle 124, movement profile of the ride vehicle 124, speed of the ride vehicle 124, acceleration (e.g., accelerating or decelerating) of the ride vehicle 124, and so forth. For example, the ride vehicle 124 may include an accelerometer and/or a gyroscope to detect speed, acceleration, and/or orientation of the ride vehicle 124. The one or more vehicle sensors 132 may generate and/or transmit the sensor data to the vehicle controller 126 and/or the automation controller 104. For example, the vehicle controller 126 may receive the vehicle sensor data and may determine a current and/or past orientation of the ride vehicle 124, a current and/or past location of the ride vehicle 124, a current and/or past speed of the ride vehicle 124, a current and/or past acceleration of the ride vehicle 124, current and/or past movement characteristics of the ride vehicle 124, and so forth. In certain embodiments, the vehicle controller 126 may transmit the movement characteristics associated with the ride vehicle 124 to the automation controller 104. Additionally or alternatively, the vehicle controller 126 may generate and/or may transmit the vehicle sensor data to the automation controller 104, and the automation controller 104 may process the vehicle sensor data to determine the movement characteristics associated with the ride vehicle 124 based on the vehicle sensor data. - The
automation controller 104 may generate and/or update the virtual reality image content based on the movement characteristics associated with the ride vehicle 124. In certain embodiments, the automation controller 104 may alter an orientation and/or a position of any number of virtual avatars (e.g., virtual representations) corresponding to any number of riders of the virtual reality ride system 100 based on the movement characteristics. For example, the automation controller 104 may determine the ride vehicle 124 is decelerating. As such, the automation controller 104 may alter an orientation of a virtual avatar corresponding to a rider 120 to show the virtual avatar leaning forward due to the deceleration. Additionally, or alternatively, the automation controller 104 may generate and/or update facial poses and/or gestures based on the movement characteristics of the ride vehicle 124. In some embodiments, the automation controller 104 may determine predicted facial poses and/or gestures based on the movement characteristics of the ride vehicle 124. For example, the automation controller 104 may predict a surprised face (e.g., raised eyebrows, open mouth) based on acceleration of the ride vehicle 124 and may alter the facial pose of the virtual avatar accordingly to display the surprised face. - Additionally or alternatively, the
virtual reality device 102 may also include one or more sensors to detect (e.g., sense and/or measure) sensor data indicative of any number of movement characteristics of the rider 120, such as orientation of the rider 120, a location of the rider 120, a pose of the rider 120, speed of the rider 120, acceleration of the rider 120, and so forth. For example, the virtual reality device 102 may include an accelerometer to detect the rider sensor data and may transmit the sensor data to the automation controller 104. The automation controller 104 may receive the rider sensor data and may determine a current and/or past orientation of the rider 120, a current and/or past location of the rider 120, a current and/or past pose of the rider 120, a current and/or past speed of the rider 120, a current and/or past acceleration of the rider 120, and so forth. Additionally, or alternatively, the automation controller 104 may generate and/or may transmit the rider sensor data and/or the determined movement characteristics to any number of additional virtual reality devices 102 associated with other riders of the virtual reality ride system 100. Additionally, or alternatively, the virtual reality device 102 may receive rider sensor data for any number of riders 120 of the virtual reality ride system 100. - The
automation controller 104 may generate and/or update the virtual reality image content based on the movement characteristics associated with the rider 120. In certain embodiments, the automation controller 104 may alter an orientation and/or a position of the virtual avatar corresponding to a rider of the virtual reality ride system based on the movement characteristics. For example, the automation controller 104 may determine that the rider turns their head. As such, the automation controller 104 may alter the orientation of the head of the virtual avatar corresponding to the rider to show the virtual avatar turned in the same direction. - In certain embodiments, the
automation controller 104 and/or the vehicle controller 126 may receive the vehicle sensor data and the rider sensor data and may determine relative movement characteristics of the rider 120 relative to the ride vehicle 124. For example, the automation controller 104 and/or the vehicle controller 126 may determine the orientation of the rider 120 relative to the vehicle 124, the position of the rider 120 relative to the vehicle 124, the speed of the rider 120 relative to the vehicle 124, the acceleration of the rider 120 relative to the vehicle 124, and/or vice versa. In some embodiments, the virtual reality device 102 and/or the vehicle controller 126 may transmit the relative movement characteristics to any number of additional virtual reality devices 102 associated with other riders of the virtual reality ride system 100. - The
automation controller 104 and/or the vehicle controller 126 may receive the vehicle sensor data indicative of a current and/or past movement profile of the ride vehicle 124 and may determine a predicted ride vehicle movement profile that is expected to occur during a subsequent time period. As used herein, a "predicted ride vehicle movement profile" of the ride vehicle 124 describes movement characteristics of the ride vehicle 124 that are predicted (e.g., expected) to occur during a time period. The predicted ride vehicle movement profile may include one or more predicted ride vehicle movement times, one or more predicted ride vehicle movement durations, one or more predicted ride vehicle movement directions, one or more predicted ride vehicle movement magnitudes, and so forth. The one or more predicted ride vehicle movement times may be indicative of a predicted start time and/or a predicted stop time of a specific movement of the ride vehicle 124 during the time period. The one or more predicted ride vehicle movement durations may be indicative of one or more durations over which a specific movement of the ride vehicle 124 is predicted to occur during the time period. The one or more predicted ride vehicle movement directions may be indicative of a movement direction of the ride vehicle 124 during a corresponding predicted ride vehicle movement duration in the time period. The one or more predicted ride vehicle movement magnitudes may be indicative of a movement magnitude (e.g., distance) of the ride vehicle 124 that is predicted to occur at a corresponding predicted ride vehicle movement time and/or during a corresponding predicted ride vehicle movement duration. - In certain embodiments, the
automation controller 104 may generate and/or update the virtual reality image content based on the predicted ride vehicle movement profile associated with the ride vehicle 124. In certain embodiments, the automation controller 104 may alter a position and/or an orientation of any number of virtual avatars corresponding to any number of additional riders of the virtual reality ride system 100. Additionally, or alternatively, the automation controller 104 may generate and/or update facial poses and/or facial gestures for any number of virtual avatars based on the predicted ride vehicle movement profile associated with the ride vehicle 124. - The
ride vehicle 124 may include a control system having multiple controllers, such as the vehicle controller 126, each having at least one processor 128 and at least one memory 130. In certain embodiments, the vehicle controller 126 may be provided in the form of a computing device, such as a programmable logic controller (PLC), a personal computer, a laptop, a tablet, a mobile device (e.g., a smart phone), a server, or any other suitable computing device. The vehicle controller 126 may control operation of any number of vehicle sensors 132, any number of vehicle actuators 134, and/or any number of environment actuators 122 and may process sensor data received from the vehicle sensors 132, the vehicle actuators 134, and/or the environment actuators 122. The vehicle controller 126 may be coupled to the vehicle sensors 132, the vehicle actuators 134, and/or the environment actuators 122 by any suitable techniques for communicating data and control signals between the vehicle controller 126, the components of the ride vehicles 124, and/or the environment actuators 122, such as a wireless, optical, coaxial, or other suitable connection. - The
vehicle controller 126 may represent a unified hardware component or an assembly of separate components integrated through communicative coupling (e.g., wired or wireless communications). It should be noted that, in some embodiments, the vehicle controller 126 may include additional illustrated components of the virtual reality ride system 100. For example, the vehicle controller 126 may include the environment actuators 122 and may be operable to communicate with additional virtual reality devices 102. With respect to functional aspects of the ride vehicle 124, the vehicle controller 126 may use information from the environment actuators 122, the vehicle sensors 132, and/or the vehicle actuators 134 to generate and/or transmit vehicle sensor data and/or environment sensor data to one or more virtual reality devices 102. - In some embodiments, the
memory 130 may include one or more tangible, non-transitory, computer-readable media that store instructions executable by the processor 128 (representing one or more processors) and/or data to be processed by the processor 128. For example, the memory 130 may include random access memory (RAM), read only memory (ROM), rewritable non-volatile memory, such as flash memory, hard drives, optical discs, and/or the like. Additionally, the processor 128 may include one or more general purpose microprocessors, one or more application specific processors, one or more application specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), or any combination thereof. Further, the memory 130 may store vehicle sensor data and/or environment sensor data obtained via the environment actuators 122, the vehicle sensors 132, and/or the vehicle actuators 134 and/or algorithms utilized by the processor 128 to help control operations of components of the ride vehicles 124 based on the vehicle sensor data and/or environment sensor data. Additionally, the processor 128 may process the vehicle sensor data and/or environment sensor data. In certain embodiments, the ride vehicle 124 may include additional elements not shown in FIG. 1, such as additional data acquisition and processing controls, additional sensors and displays, user interfaces, and so forth. - In certain embodiments, the
virtual reality system 100 may include any number of virtual reality devices 102. For example, each rider 120 may be provided with a corresponding virtual reality device 102. Each virtual reality device 102 may capture image data and/or audio data associated with a corresponding rider 120. For example, the image sensors 110 may face or point towards a face of the corresponding rider 120 and may capture image data associated with facial characteristics and/or facial movements of the corresponding rider 120. Additionally, or alternatively, the audio sensors 114 may capture audio data corresponding to speech and/or sounds made by the corresponding rider 120. - In some embodiments, image data and/or audio data may be captured before the
rider 120 enters the ride vehicle 124 and/or before the ride starts. For example, the rider 120 may enter a designated area, such as a photo booth, and any number of cameras may capture images and/or video of the rider 120. In certain embodiments, the cameras may be positioned and/or operated to capture images and/or video of the rider 120 at different angles, at different distances, with different lighting, and so forth. Additionally, or alternatively, the cameras may be operated to capture images of different portions (e.g., head, face, arm, hand, and so forth) of the rider 120. In some embodiments, an electronic display may provide instructions or prompt the rider 120 to pose in different ways, such as standing, sitting, walking, and so forth, and the cameras may capture images and/or video of the different poses. The electronic display may also prompt the rider 120 to make different facial gestures, facial movements, or facial poses, such as smiling, frowning, raising eyebrows, yelling, shaking or nodding of the head, and so forth as the cameras capture images and/or video of the rider 120. - Any number of
virtual reality devices 102 may receive image data corresponding to the captured images and/or video from the cameras in the designated area. For example, the automation controller 104 may receive the image data and may generate and/or update a virtual avatar based on the image data. For example, the automation controller 104 may analyze and/or process the image data to determine physical characteristics of the rider 120, such as a height, a hair color, an eye color, a position of facial features, and so forth. The automation controller 104 may generate and/or update the virtual avatar based on the determined physical characteristics. Additionally, or alternatively, the image data may be processed and/or analyzed remotely from the virtual reality device 102 and the automation controller 104 may receive processed image data and/or physical characteristics associated with any number of riders 120. In certain embodiments, the automation controller 104 may compare the image data to stored facial gesture profiles and may generate and/or update the virtual avatar based on a selected facial gesture profile. Each stored facial gesture profile may include a set of facial feature characteristics and a corresponding emotion and/or gesture. For example, the image data may be indicative of a rider smiling with upturned lips, teeth showing, and/or raised eyebrows. As such, the automation controller 104 may compare the facial features with the stored facial gesture profiles and may select the smiling facial gesture profile. Accordingly, the automation controller 104 may generate and/or update the virtual avatar to depict the virtual avatar smiling based on the selected facial gesture profile. - In certain embodiments, the virtual avatar may include a rigged model of a corresponding rider. As used herein, rigging refers to a technique for skeletal animation for representing a character model (e.g., a rigged model) using a series of interconnected digital features (e.g., bones).
The rigged model may include movable features, such as facial features, a head, an arm, a hand, a finger, and so forth. The
automation controller 104 may update the rigged model based on physical characteristics of the corresponding rider. Additionally or alternatively, the automation controller 104 may update an orientation, a facial gesture, a facial movement, a facial pose, and so forth based on image data captured by the image sensors 110 of physical (e.g., real or actual) orientation, facial gestures, facial movements, facial poses, and so forth of the corresponding rider. For example, a first rider with a first virtual reality device may turn their head to look towards a second rider with a second virtual reality device. The second virtual reality device may capture image data of the second rider and may process and/or transmit the image data to the first virtual reality device. Accordingly, the first virtual reality device may receive the image data corresponding to the second rider and may generate and/or update virtual reality image content to display to the first rider. For example, the first virtual reality device may generate and/or update a virtual avatar corresponding to the second rider. - With the foregoing in mind,
FIG. 2 illustrates an example embodiment of the virtual reality device 102 in FIG. 1. The virtual reality device 102 may incorporate the image sensor 110 and the audio sensor 114. The image sensor 110 may capture any number of images and/or video of the rider 120. For example, the image sensor 110 may capture images and/or video of the face, body, fingers, hands, and/or limbs of the rider 120. The image sensor 110 may capture a viewing area 202 selected by a controller, such as the automation controller 104. For example, the automation controller 104 may generate and transmit control signals to the image sensor 110 to capture the viewing area 202 based on movement detected by the image sensor 110. In certain embodiments, the viewing area 202 may include the face of the rider 120 and/or facial features (e.g., eyes, nose, mouth, and so forth) of the rider 120. The image sensor 110 may generate and/or transmit image data associated with the viewing area 202 to the automation controller 104 for processing. In certain embodiments, the automation controller 104 may determine physical characteristics (e.g., size, position, color, and so forth) associated with the rider 120 based on the image data. For example, the automation controller 104 may receive the image data and may determine contours, textures, and/or features of the rider's face. For example, the automation controller 104 may determine the position of the eyes on the rider's face, the color of the rider's hair, and so forth. Additionally, or alternatively, the automation controller 104 may generate virtual reality image content based on the image data. For example, the automation controller 104 may generate and/or update a virtual avatar based on the determined physical characteristics. - The
audio sensor 114 may capture speech 204 and/or sounds made by the rider 120. The audio sensor 114 may generate audio data based on the captured speech 204 and/or sounds and may transmit the audio data to the automation controller 104. In certain embodiments, the automation controller 104 may receive the audio data and may determine text (e.g., words, phrases, sentences, and so forth) spoken by the rider 120. For example, the automation controller 104 may process the audio data using a natural language processing algorithm to generate text data. The automation controller 104 may generate virtual reality image content based on the audio data and/or the text data. For example, the automation controller 104 may generate and/or update a virtual avatar based on the audio data and/or the text data. The automation controller 104 may determine and/or generate phonemes based on the audio data and may determine and/or generate visemes based on the audio data and/or the phonemes. Additionally, or alternatively, the automation controller 104 may generate text associated with the captured speech 204 based on the audio data. For example, the automation controller 104 may use natural language processing to determine text associated with captured speech and may generate visemes based on the determined text. The automation controller 104 may transmit the audio data, the phonemes, the text, and/or the visemes to any number of additional virtual reality devices to generate and/or update virtual reality image content corresponding to the rider 120 based on the captured speech 204 of the rider 120. - With the foregoing in mind,
FIG. 3 illustrates an example embodiment of the virtual reality system 100 in FIG. 1 including a first virtual reality device 102A worn by a first rider 120A and a second virtual reality device 102B worn by a second rider 120B. The first virtual reality device 102A may capture sensor data, audio data, and/or image data associated with the first rider 120A, as described herein. In some embodiments, the first virtual reality device 102A may transmit the sensor data, the audio data, and/or the image data associated with the first rider 120A to the second virtual reality device 102B. The second virtual reality device 102B may receive the sensor data, the audio data, and/or the image data and may generate and/or update virtual reality image content to be displayed to the second rider 120B. For example, the first rider 120A may turn their head towards the second rider 120B. The second virtual reality device 102B may generate and/or update a virtual avatar corresponding to the first rider 120A based on the sensor data indicating the first rider 120A turning their head. As such, the second rider 120B may view the virtual avatar corresponding to the first rider 120A turning their head. - Additionally, or alternatively, the second
virtual reality device 102B may generate and/or update the virtual reality image content based on image data captured by the first virtual reality device 102A. In certain embodiments, image sensors 110 in the first virtual reality device 102A may capture images indicative of facial movements, facial gestures, facial poses, and so forth made by the first rider 120A. In some embodiments, the first virtual reality device 102A may transmit the image data corresponding to the captured images to the second virtual reality device 102B. The second virtual reality device 102B may generate and/or update the virtual avatar corresponding to the first rider 120A based on the image data. For example, the first rider 120A may smile, blink, move their eyes, and so forth. As such, the second virtual reality device 102B may generate and/or update the virtual avatar corresponding to the first rider 120A based on the image data indicating facial movements of the first rider 120A. As such, the second rider 120B may view the virtual avatar corresponding to the first rider 120A blinking, smiling, moving their eyes, and so forth. - In some embodiments, the second
virtual reality device 102B may generate and/or update the virtual reality image content based on audio data captured by the first virtual reality device 102A. For example, audio sensors 114 in the first virtual reality device 102A may capture audio indicative of speech made by the first rider 120A. In certain embodiments, the first virtual reality device 102A may transmit the audio data corresponding to the captured speech to the second virtual reality device 102B. The second virtual reality device 102B may generate and/or update the virtual avatar corresponding to the first rider 120A based on the audio data. For example, the second virtual reality device 102B may perform natural language processing on the audio data to determine text corresponding to the audio data. In some embodiments, the second virtual reality device 102B may generate a sequence of phonemes and/or a sequence of visemes based on the audio data, the determined text, or a combination thereof. As such, the second virtual reality device 102B may generate and/or update facial movements of the virtual avatar corresponding to the first rider 120A based on the sequence of visemes. Additionally, or alternatively, the second virtual reality device 102B may include one or more speakers to play back the audio data captured by the first virtual reality device 102A. Accordingly, the second virtual reality device 102B may display facial movements of the virtual avatar based on the audio data so the virtual avatar appears to be speaking during playback of the audio data. In the illustrated embodiment, the first virtual reality device 102A includes the automation controller 104, the processor 106, and the memory 108. Additionally, or alternatively, the first virtual reality device 102A may include any number of components, such as image sensors 110, a display 112, audio sensors 114, speakers 116, an antenna 118, and so forth.
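The phoneme-to-viseme step described in the passages above can be illustrated with a small lookup table. The specific table entries and the duplicate-collapsing behavior are illustrative assumptions; production lip-sync systems typically use a larger standardized viseme set plus timing information from the audio.

```python
# Illustrative phoneme-to-viseme table (ARPAbet-style phoneme labels).
# These few entries are examples, not a complete or standard mapping.
PHONEME_TO_VISEME = {
    "AA": "open",     # as in "father"
    "B": "closed",    # bilabial: lips pressed together
    "P": "closed",
    "M": "closed",
    "F": "lip_bite",  # labiodental: lower lip against upper teeth
    "V": "lip_bite",
    "UW": "rounded",  # as in "boot"
}

def visemes_for(phonemes: list[str]) -> list[str]:
    """Map an ordered phoneme sequence to the mouth shapes (visemes) the
    avatar should display, collapsing consecutive duplicates so the rig
    is not re-posed for identical adjacent shapes."""
    out: list[str] = []
    for ph in phonemes:
        v = PHONEME_TO_VISEME.get(ph, "neutral")  # fall back for unknowns
        if not out or out[-1] != v:
            out.append(v)
    return out

print(visemes_for(["B", "AA", "B", "UW"]))  # ['closed', 'open', 'closed', 'rounded']
print(visemes_for(["P", "B", "M"]))         # ['closed']
```

Collapsing adjacent duplicates is one plausible design choice: "P", "B", and "M" all map to a closed-lips shape, so re-posing the avatar's mouth for each would produce no visible change.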
The second virtual reality device 102B may include the same components and/or similar components to the first virtual reality device 102A. - With the foregoing in mind,
FIG. 4 illustrates a flowchart of a process 400 for operating the virtual reality ride system 100 of FIG. 1, in accordance with an embodiment of the present disclosure. While the process is described as being performed by the automation controller 104, it should be understood that the process 400 may be performed by any suitable device, such as the processor 106, the vehicle controller 126, and so forth, that may control and/or communicate with components of a virtual reality ride system. Furthermore, while the process 400 is described using steps in a specific sequence, it should be understood that the present disclosure contemplates that the described steps may be performed in different sequences than the sequence illustrated, and certain described steps may be skipped or not performed altogether. In some embodiments, the process 400 may be implemented by executing instructions stored in a tangible, non-transitory, computer-readable medium, such as the memory 108, using any suitable processing circuitry, such as the processor 106. - In the
process 400, a virtual reality device, such as the virtual reality device 102 in FIG. 1, may receive audio data, image data, rider sensor data, vehicle sensor data, or any combination thereof. For example, at block 402, the automation controller 104 may receive audio data captured by one or more audio sensors 114 of a separate virtual reality device, image data captured by one or more image sensors 110 of the separate virtual reality device, rider sensor data captured by one or more rider sensors, and/or vehicle sensor data captured by one or more vehicle sensors 132. Additionally, or alternatively, the virtual reality device 102 may receive environment sensor data associated with a ride environment. - At
block 404, the automation controller 104 may generate and/or update virtual reality image content based on the image data. For example, the automation controller 104 may determine physical characteristics (e.g., hair color, facial movements, facial gestures, and so forth) of another rider of the virtual reality ride system 100 and may update and/or animate a virtual avatar corresponding to the other rider. Additionally, or alternatively, the automation controller 104 may generate and/or update facial features of the virtual avatar based on the image data. For example, the automation controller 104 may generate and/or update a position and/or a size of facial features (e.g., mouth, nose, eyes, and so forth) based on the image data. - At
block 406, the automation controller may generate text data based on the audio data. For example, the automation controller 104 may perform a natural language processing algorithm to determine text associated with captured speech of another rider of the virtual reality ride system. In certain embodiments, the automation controller 104 may determine a sequence of phonemes and/or a sequence of visemes associated with the captured speech. Additionally, or alternatively, the automation controller 104 may process the audio data. For example, the automation controller 104 may filter the audio data to remove background noise, may enhance an audio characteristic (e.g., volume) of the audio data, may alter a voice characteristic (e.g., pitch, tone, timbre, and so forth) associated with the captured speech, and so forth. In some embodiments, the automation controller 104 may generate new audio data and/or update the audio data based on a theme of the virtual reality ride system 100. For example, the virtual reality ride system 100 may include an electronics or robotics theme, and the automation controller 104 may generate new audio data and/or alter the audio data to produce more robotic-sounding speech based on the captured speech. - At
block 408, the automation controller 104 may generate and/or update the virtual reality image content based on the text data and/or the audio data. In some embodiments, the automation controller 104 may adjust facial features of the virtual avatar based on the text data and/or the audio data. For example, the automation controller 104 may adjust and/or animate the facial features of the virtual avatar based on the sequence of visemes. Accordingly, the virtual reality image content may depict movement of the facial features of the virtual avatar corresponding to the captured speech. - At
block 410, the virtual reality device 102 may display the virtual reality image content including the virtual avatar. In certain embodiments, the automation controller 104 may instruct the display 112 to display the virtual reality image content and/or may instruct one or more speakers to play back the audio data. As such, the rider of the virtual reality ride system 100 may hear playback of the captured speech and may view facial movements of the virtual avatar corresponding to the captured speech, providing a more realistic and/or immersive experience. - While only certain features of the invention have been illustrated and described herein, many modifications and changes will occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.
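As a rough illustration of how the viseme sequence from block 406 could drive the avatar's facial features at block 408, the sketch below assigns each viseme a start time and a target mouth shape. The blend-shape names, weight values, and fixed per-viseme duration are assumptions chosen for the example; a real system would time keyframes against the audio playback clock so the avatar's mouth stays synchronized with the speech.

```python
# Hypothetical mouth poses per viseme, expressed as blend-shape weights
# in [0, 1]. These values are illustrative, not from the patent.
VISEME_MOUTH_SHAPES = {
    "rest":  {"jaw_open": 0.0, "lip_round": 0.0},
    "MBP":   {"jaw_open": 0.0, "lip_round": 0.2},
    "open":  {"jaw_open": 0.8, "lip_round": 0.1},
    "round": {"jaw_open": 0.4, "lip_round": 0.9},
    "wide":  {"jaw_open": 0.3, "lip_round": 0.0},
}

def visemes_to_keyframes(visemes, frame_duration_s=0.08):
    """Assign each viseme a start time and a target mouth shape,
    producing keyframes an animation system could interpolate."""
    keyframes = []
    t = 0.0
    for v in visemes:
        shape = VISEME_MOUTH_SHAPES.get(v, VISEME_MOUTH_SHAPES["rest"])
        keyframes.append({"time_s": round(t, 3), "shape": shape})
        t += frame_duration_s
    return keyframes

frames = visemes_to_keyframes(["MBP", "open", "MBP"])
print(frames[1]["time_s"])  # 0.08
```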
- The techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible or purely theoretical. Further, if any claims appended to the end of this specification contain one or more elements designated as “means for (perform)ing (a function) . . . ” or “step for (perform)ing (a function) . . . ”, it is intended that such elements are to be interpreted under 35 U.S.C. 112(f). However, for any claims containing elements designated in any other manner, it is intended that such elements are not to be interpreted under 35 U.S.C. 112(f).
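The facial gesture profile comparison recited in the method claims below (comparing measured facial characteristics against a set of stored profiles and selecting one based on the comparison) could be sketched as a nearest-profile search. The feature names and profile values here are hypothetical assumptions for illustration.

```python
# Hypothetical stored facial gesture profiles, each holding a set of
# stored facial characteristics as normalized measurements in [0, 1].
GESTURE_PROFILES = {
    "smile":    {"mouth_corner_lift": 0.8, "eyebrow_raise": 0.2},
    "surprise": {"mouth_corner_lift": 0.1, "eyebrow_raise": 0.9},
    "neutral":  {"mouth_corner_lift": 0.0, "eyebrow_raise": 0.0},
}

def match_gesture(characteristics):
    """Select the stored profile with the smallest squared distance
    to the measured facial characteristics."""
    def distance(profile):
        return sum((profile[k] - characteristics.get(k, 0.0)) ** 2
                   for k in profile)
    return min(GESTURE_PROFILES, key=lambda name: distance(GESTURE_PROFILES[name]))

print(match_gesture({"mouth_corner_lift": 0.7, "eyebrow_raise": 0.3}))  # smile
```

The selected profile name could then index the animation applied to the avatar's set of facial features.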
Claims (20)
1. A virtual reality ride system, comprising:
a display configured to present virtual reality image content to a first rider;
an audio sensor configured to capture audio data associated with a second rider;
an image sensor configured to capture image data associated with the second rider; and
at least one processor communicatively coupled to the display, the at least one processor configured to:
receive the audio data, the image data, or both;
generate a virtual avatar corresponding to the second rider, wherein the virtual avatar comprises a set of facial features;
update the set of facial features based on the audio data, the image data, or both; and
instruct the display to present the virtual reality image content comprising the virtual avatar and the updated set of facial features.
2. The virtual reality ride system of claim 1, wherein the at least one processor is configured to:
generate a facial gesture based on the image data; and
instruct the display to present the virtual reality image content comprising the facial gesture.
3. The virtual reality ride system of claim 1, wherein the at least one processor is configured to:
determine a sequence of visemes based on the audio data; and
update the set of facial features based on the sequence of visemes.
4. The virtual reality ride system of claim 1, comprising a ride vehicle sensor configured to capture vehicle sensor data indicative of movement characteristics of a ride vehicle.
5. The virtual reality ride system of claim 4, wherein the at least one processor is configured to:
receive the vehicle sensor data;
determine a predicted movement profile associated with the ride vehicle; and
alter the virtual avatar based on the predicted movement profile.
6. The virtual reality ride system of claim 5, wherein altering the virtual avatar based on the predicted movement profile comprises altering the set of facial features based on the predicted movement profile.
7. The virtual reality ride system of claim 1, comprising a rider sensor configured to capture sensor data indicative of a set of movement characteristics associated with the second rider.
8. The virtual reality ride system of claim 7, wherein the set of movement characteristics comprises an orientation of the second rider, a position of the second rider, a speed of the second rider, an acceleration of the second rider, or any combination thereof.
9. The virtual reality ride system of claim 7, wherein the at least one processor is configured to:
receive the sensor data; and
update the virtual avatar based on the sensor data.
10. The virtual reality ride system of claim 9, wherein the at least one processor is configured to:
alter a pose of the virtual avatar based on the sensor data;
alter the set of facial features based on the sensor data; and
instruct the display to present the virtual reality image content comprising the altered set of facial features.
11. A virtual reality device, comprising:
an audio sensor configured to capture audio data indicative of speech of a user;
an image sensor configured to capture image data indicative of facial characteristics of the user; and
at least one processor communicatively coupled to the audio sensor and the image sensor, wherein the at least one processor is configured to:
determine a set of facial characteristics based on the image data;
determine a set of facial movements associated with the set of facial characteristics based on the audio data; and
transmit the set of facial characteristics and the set of facial movements to a second virtual reality device, the second virtual reality device configured to display virtual reality image content based on the set of facial characteristics and the set of facial movements.
12. The virtual reality device of claim 11, comprising:
a display configured to display virtual reality image content to the user;
wherein the at least one processor is configured to:
receive, from the second virtual reality device, second audio data, second image data, or both;
generate a model of a second user based on the second image data;
animate the model based on the second audio data; and
instruct the display to present the virtual reality image content including the animated model.
13. The virtual reality device of claim 12, comprising an audio playback device configured to play back the second audio data.
14. The virtual reality device of claim 12, wherein the at least one processor is configured to:
receive user sensor data associated with the second user; and
animate the model based on the user sensor data.
15. The virtual reality device of claim 12, wherein the at least one processor is configured to:
receive vehicle sensor data associated with a ride vehicle; and
animate the model based on the vehicle sensor data.
16. The virtual reality device of claim 12, wherein the at least one processor is configured to:
determine text data associated with the second audio data;
determine a set of visemes associated with the text data; and
animate the model based on the set of visemes.
17. The virtual reality device of claim 12, wherein the second audio data corresponds to speech of the second user.
18. The virtual reality device of claim 12, wherein the at least one processor is configured to alter a set of facial features of the model based on the speech.
19. A method, comprising:
receiving audio data, image data, or both;
generating a virtual avatar based on the image data, wherein the virtual avatar comprises a set of facial features;
determining a set of facial characteristics associated with the image data;
comparing the set of facial characteristics with a set of facial gesture profiles, each facial gesture profile of the set of facial gesture profiles comprising a corresponding set of stored facial characteristics;
selecting, based on the comparison, a facial gesture profile of the set of facial gesture profiles;
animating the set of facial features based on the selected facial gesture profile, the audio data, or both; and
presenting virtual reality image content comprising the virtual avatar and the animated set of facial features.
20. The method of claim 19, comprising:
receiving a set of vehicle sensor data indicative of a movement profile associated with a vehicle; and
animating the virtual avatar based on the movement profile.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/092,728 US20230215070A1 (en) | 2022-01-04 | 2023-01-03 | Facial activity detection for virtual reality systems and methods |
PCT/US2023/010129 WO2023133149A1 (en) | 2022-01-04 | 2023-01-04 | Facial activity detection for virtual reality systems and methods |
CA3240128A CA3240128A1 (en) | 2022-01-04 | 2023-01-04 | Facial activity detection for virtual reality systems and methods |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202263296363P | 2022-01-04 | 2022-01-04 | |
US18/092,728 US20230215070A1 (en) | 2022-01-04 | 2023-01-03 | Facial activity detection for virtual reality systems and methods |
Publications (1)
Publication Number | Publication Date |
---|---|
US20230215070A1 true US20230215070A1 (en) | 2023-07-06 |
Family
ID=86992011
Country Status (1)
Country | Link |
---|---|
US (1) | US20230215070A1 (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: UNIVERSAL CITY STUDIOS LLC, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JORDAN, ROBERT MICHAEL;TRAYNOR, MARK JAMES;GOERGEN, PATRICK JOHN;SIGNING DATES FROM 20230202 TO 20230518;REEL/FRAME:063687/0603 |