WO2019245681A1 - Virtual reality hand gesture generation - Google Patents

Virtual reality hand gesture generation

Info

Publication number
WO2019245681A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
controller
touch
force
hand
Prior art date
Application number
PCT/US2019/032928
Other languages
English (en)
French (fr)
Inventor
Scott Douglas Nietfeld
Joe van den Heuvel
Original Assignee
Valve Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US16/195,718 external-priority patent/US10987573B2/en
Application filed by Valve Corporation filed Critical Valve Corporation
Priority to EP19822879.3A priority Critical patent/EP3807747A4/en
Priority to CN201980041061.3A priority patent/CN112437909A/zh
Priority to KR1020217001241A priority patent/KR20210021533A/ko
Priority to JP2020570736A priority patent/JP7337857B2/ja
Publication of WO2019245681A1 publication Critical patent/WO2019245681A1/en

Classifications

    • A63F13/213: Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • A63F13/214: Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads
    • A63F13/218: Input arrangements for video game devices characterised by their sensors, purposes or types using pressure sensors, e.g. generating a signal proportional to the pressure applied by the player
    • A63F13/428: Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment, by mapping the input signals into game commands, involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/014: Hand-worn input/output arrangements, e.g. data gloves
    • G06F3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F3/0304: Detection arrangements using opto-electronic means
    • G06F3/03547: Touch pads, in which fingers can move on a surface
    • G06F3/14: Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06N20/00: Machine learning

Definitions

  • the video game industry has spawned many innovations in both hardware and software.
  • various hand-held video game controllers have been designed, manufactured, and sold for a variety of game applications.
  • Some of the innovations have applicability outside of the video game industry, such as controllers for industrial machines, defense systems, robotics, etc.
  • controllers for VR systems have to perform several different functions and meet strict (and sometimes competing) design constraints while often optimizing certain desired characteristics.
  • these controllers include sensors for measuring a force of a user’s grip, which in turn is used for performing a predefined gameplay function.
  • sensors have been utilized in an effort to meet these objectives, including, among others, a force sensing resistor (FSR), which uses variable resistance to measure an amount of force applied to the FSR.
  • existing controllers with FSRs tend to exhibit fairly crude response times. Additionally, the controller may fail to accurately depict and sense hand positions, gestures, and/or movement throughout a gameplay experience.
  • FIG. 1 depicts an environment of a user interacting with a virtual reality (VR) system according to an example embodiment of the present disclosure.
  • FIG. 2 depicts an example controller in a user’s hand according to an example embodiment of the present disclosure.
  • FIG. 3 depicts an example controller according to an example embodiment of the present disclosure.
  • FIG. 4 depicts the example controller of FIG. 3 in a user’s hand according to an example embodiment of the present disclosure.
  • FIG. 5 depicts the example controller of FIG. 3 in a user’s hand according to an example embodiment of the present disclosure.
  • FIG. 6 depicts the example controller of FIG. 3 in a user’s hand according to an example embodiment of the present disclosure.
  • FIG. 7 depicts a pair of example controllers according to an example embodiment of the present disclosure.
  • FIG. 8A depicts a front view of the example right-hand controller according to another example embodiment of the present disclosure.
  • FIG. 8B depicts a back view of the example right-hand controller of FIG. 8A.
  • FIG. 9A depicts an example force sensing resistor (FSR) according to an example embodiment of the present disclosure.
  • FIG. 9B depicts a front view of the example FSR of FIG. 9A.
  • FIG. 9C depicts a cross section of the example FSR of FIG. 9B, taken along Section
  • FIG. 10A depicts a first hand gesture of a user holding an example controller according to an example embodiment of the present disclosure.
  • FIG. 10B depicts a second hand gesture of a user holding an example controller according to an example embodiment of the present disclosure.
  • FIG. 10C depicts a third hand gesture of a user holding an example controller according to an example embodiment of the present disclosure.
  • FIG. 10D depicts a fourth hand gesture of a user holding an example controller according to an example embodiment of the present disclosure.
  • FIG. 10E depicts a fifth hand gesture of a user holding an example controller according to an example embodiment of the present disclosure.
  • FIG. 10F depicts a sixth hand gesture of a user holding an example controller according to an example embodiment of the present disclosure.
  • FIG. 11 depicts an example process according to an example embodiment of the present disclosure.
  • FIG. 12 depicts an example process for training model(s) according to an example embodiment of the present disclosure.
  • FIG. 13 depicts an example process for using touch input to generate gestures according to an example embodiment of the present disclosure.
  • An example motion capture system may include cameras, projectors, and/or other sensors positioned about an environment to track a movement of the controller, as well as movement of a user operating the controller.
  • a plurality of cameras may mount within the environment and capture images of the controller and the user.
  • the plurality of cameras may capture some or all angles and positions within the environment.
  • the plurality of cameras may focus on or capture images within a predefined range or area of the environment.
  • the cameras may detect positions and orientations of the user and/or the controller(s), respectively.
  • the controller(s) and/or the user may include markers, respectively.
  • the markers may couple to the controller and/or the user.
  • the markers may include a digital watermark, an infrared reflector, or the like.
  • the motion capture system(s) may project light into the environment, which is then reflected by the markers.
  • the cameras may capture incident light reflected by the markers and the motion capture system(s) may track and plot the locations of the markers within the environment to determine movements, positions, and/or orientations of the controller and/or the user.
  • An example controller may be held by the user and may include one or more force sensing resistors (FSRs) or other types of sensors that detect touch input from the user.
  • an FSR may couple to a surface of the controller, such as a structure mounted within a handle of the controller and/or a structure mounted underneath at least one thumb-operated control of the controller.
  • the FSR may measure a resistance value that corresponds to an amount of force applied by the user.
  • the FSR may also associate the force(s) with a particular location, region, and/or portion of the controller. For example, the FSR may determine an amount of force applied to an outer surface of the handle and/or may determine location(s) on the controller corresponding to touch input from the user.
  • the controller may determine, via force data generated by the FSR, an amount of force with which the user squeezes the handle of the controller and/or an amount of force with which the user presses buttons on the controller.
  • the controller may translate presses or squeezes of varying force into digitized numerical values used for video game control and/or game mechanics.
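  • As an illustrative sketch (not part of the application) of how such digitization might work, the Python snippet below reads an FSR wired in a hypothetical voltage-divider arrangement and maps the reading onto a normalized force value; the supply voltage, fixed-resistor value, and full-scale conductance are assumptions.

```python
def fsr_force_from_adc(adc_value: int, adc_max: int = 1023,
                       v_supply: float = 3.3, r_fixed_ohms: float = 10_000.0) -> float:
    """Convert a raw ADC reading from a voltage divider formed by the FSR and a
    fixed resistor into a rough, normalized force value.

    The divider arrangement, constants, and the linearization below are
    illustrative; a real FSR needs a per-part calibration curve.
    """
    v_out = v_supply * adc_value / adc_max
    if v_out <= 0.0:
        return 0.0                       # no measurable pressure
    # Solve the divider for the FSR resistance, which drops as applied force grows.
    r_fsr = r_fixed_ohms * (v_supply - v_out) / v_out
    # Map conductance (1/R), which rises roughly with force, onto 0..1.
    conductance = 1.0 / r_fsr if r_fsr > 0 else float("inf")
    return min(1.0, conductance / 1e-3)  # saturate at an assumed 1 mS full scale
```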
  • the FSR may act as a switch to detect when an applied force exceeds a threshold, which in some instances, may dynamically update and/or adjust.
  • the threshold may adjust to a lower value to reduce hand fatigue during gameplay (e.g., when the user presses a control associated with the FSR to shoot a weapon frequently during gameplay). Conversely, the threshold may adjust to a higher value to reduce instances of accidental control operation.
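  • A minimal sketch of this kind of dynamic thresholding is shown below; the time window, press count, and step sizes are illustrative assumptions rather than values from the application.

```python
import time

class AdaptiveFsrSwitch:
    """Sketch of an FSR-backed switch whose activation threshold adapts over time.
    All constants and names are illustrative, not taken from the application."""

    def __init__(self, threshold=0.6, min_threshold=0.3, max_threshold=0.9):
        self.threshold = threshold        # normalized force, 0.0 .. 1.0
        self.min_threshold = min_threshold
        self.max_threshold = max_threshold
        self.pressed = False              # current switch state
        self.press_times = []             # timestamps of recent rising edges

    def update(self, force, now=None):
        """Feed one normalized force sample; returns True while the switch is 'on'."""
        now = time.monotonic() if now is None else now
        was_pressed = self.pressed
        self.pressed = force >= self.threshold
        if self.pressed and not was_pressed:          # rising edge counts as one press
            self.press_times = [t for t in self.press_times if now - t < 10.0]
            self.press_times.append(now)
            # Frequent presses (e.g., rapid firing) -> lower threshold to ease fatigue.
            if len(self.press_times) > 20:
                self.threshold = max(self.min_threshold, self.threshold - 0.05)
        return self.pressed

    def report_accidental_press(self):
        """Raise the threshold to reduce accidental activations."""
        self.threshold = min(self.max_threshold, self.threshold + 0.05)
```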
  • the controller may also include an array of proximity sensors that are spatially distributed along a length of the handle and that are responsive to a proximity of the user’s fingers.
  • the proximity sensors may include any suitable technology, such as capacitive sensors, for sensing a touch input and/or a proximity of the hand of the user relative to the controller.
  • the array of proximity sensors may generate touch data that indicates a location of finger(s) grasping the controller or, when the user is not grasping the controller, a distance between the handle and the fingers of the user (e.g., through measuring capacitance).
  • the proximity sensors may also detect a hand size of the user grasping the controller, which may configure the controller according to different settings. For instance, depending on the hand size, the controller may adjust to make force-based input easier for users with smaller hands.
  • the motion capture system(s) may capture motion data of the hand and/or the controller, while the controller may capture touch data corresponding to touch inputs at the controller and force data associated with the touch inputs of the user.
  • the motion data, the touch data, and/or the force data may be associated with one another to generate models that are indicative of hand gestures of the user.
  • the user may include markers placed on his or her knuckles, finger tips, wrist, joints, and so forth.
  • the controller may also include markers (e.g., top, bottom, sides, etc.).
  • the marker(s) may reflect incident light.
  • the motion capture system may detect and record movements of the user’s hand(s) and the position of the controller(s) via the cameras detecting positions of the markers.
  • the projectors of the motion capture system(s) may project infrared light, which is then reflected by the markers on the hand and/or the controller.
  • the cameras of the motion capture system(s) may capture images of the environment. The images are utilized to indicate the positions of the markers within the environment.
  • the positions of the markers are tracked over time and animated within a three-dimensional (3D) virtual space. This tracking may allow for the generation of animated 3D skeletal data (or models). For instance, the user may grip the controller with a clinched fist or two fingers (e.g., pinky finger and ring finger). The cameras may capture the positions of the user’s finger tips, knuckles, and/or other portions of the hand, wrist, and/or arm via the markers. In some instances, the positions are relative to the controller.
  • the array of proximity sensors may detect touch input, or a lack of touch input, at the controller.
  • the touch data may indicate the locations of the fingers of the user relative to the controller, for instance, through measuring capacitance.
  • the capacitance may vary with the distance disposed between the finger and the controller. In doing so, the controller may detect when the user grips the controller with one finger, two fingers, three fingers, and so forth. With the capacitance, the controller may also detect the relative placement of the fingers with respect to the controller, such as when the fingers of the user are not touching the controller.
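  • A minimal sketch of turning per-sensor capacitance readings into per-finger proximity and a touching-finger count follows; the pad-to-finger grouping, raw values, and thresholds are hypothetical simplifications, not details from the application.

```python
from typing import Dict, List

# Hypothetical grouping of capacitive pads in the handle to individual fingers.
FINGER_PADS: Dict[str, List[int]] = {
    "index": [0, 1], "middle": [2, 3], "ring": [4, 5], "pinky": [6, 7],
}
TOUCH_CAPACITANCE = 100.0   # illustrative raw value when a finger rests on a pad
BASELINE = 10.0             # illustrative raw value with no finger nearby

def finger_proximity(readings: List[float]) -> Dict[str, float]:
    """Map raw capacitance readings to a 0..1 proximity score per finger."""
    scores = {}
    for finger, pads in FINGER_PADS.items():
        raw = max(readings[p] for p in pads)
        scores[finger] = min(1.0, max(0.0, (raw - BASELINE) / (TOUCH_CAPACITANCE - BASELINE)))
    return scores

def fingers_touching(readings: List[float], touch_score: float = 0.95) -> List[str]:
    """Fingers whose proximity score indicates contact with the handle."""
    return [f for f, s in finger_proximity(readings).items() if s >= touch_score]

# Example: ring and pinky resting on the handle, index and middle hovering.
print(fingers_touching([30, 25, 40, 35, 100, 98, 99, 97]))  # -> ['ring', 'pinky']
```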
  • the FSR may capture force data representative of force values received by the controller(s) (e.g., forces in which the user grips the controller). For instance, as the user grips the controller body with a clinched fist or two fingers, the FSR may capture force values corresponding to these respective grips. As an example, the FSR may detect an increase in force values when the user grips the controller with a clinched fist as compared to when the user grips the controller with two fingers.
  • the touch data and the force data may be associated with one another. For instance, when the user grips the controller with four fingers, the force values detected on the controller may be associated with certain locations of the controller. In doing so, the touch data and the force data may be associated with one another to determine which fingers of the user grasp the controller, as well as the relative force with which each finger of the user grasps the controller. The same may be said when the user grips the controller with two fingers, where force values are detected and associated with certain portions of the controller body. Knowing where the touch input is received, from the array of proximity sensors, as well as the amount of force with which the user grips the controller, as detected by the FSR, the controller and/or another communicatively coupled remote system may associate the touch input with certain fingers of the user. In some instances, through correlating time stamps associated with the touch data with time stamps of the force data, the controller (or another communicatively coupled remote system) may associate the touch data and the force data.
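  • One way to realize the time-stamp correlation described above is sketched below: each touch sample is paired with the force sample closest in time, within a tolerance. The sample layout, the nearest-timestamp rule, and the 10 ms tolerance are assumptions for illustration.

```python
import bisect

def associate_by_timestamp(touch_samples, force_samples, max_skew=0.01):
    """Pair each touch sample with the force sample closest in time.

    touch_samples / force_samples: lists of (timestamp_seconds, payload) tuples,
    each sorted by timestamp. Pairs farther apart than max_skew seconds are dropped.
    """
    force_times = [t for t, _ in force_samples]
    pairs = []
    for t_touch, touch in touch_samples:
        i = bisect.bisect_left(force_times, t_touch)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(force_samples)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(force_times[k] - t_touch))
        if abs(force_times[j] - t_touch) <= max_skew:
            pairs.append((touch, force_samples[j][1]))
    return pairs
```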
  • the amount of force with which the user grips the controller (i.e., the force data), the location of the touch input or lack thereof on the controller (i.e., the touch data), and the motion captured by the camera of the motion capture system (i.e., the motion data) may be associated with one another.
  • the motion capture system may associate a clinched fist (e.g., using the motion data) with the touch data and/or the force data received at the controller.
  • the motion data may indicate the hand gesture (e.g., two finger grip) while the touch data may indicate the proximity of the hand (or fingers) to the controller and the force data may indicate how firm a user grips the controller.
  • models may be generated and trained to indicate gestures of the user. The models may continuously be trained to become more accurate over time.
  • the models may characterize touch input at the controller and/or force values associated with the touch input to generate animations of a hand gesture on a display and the VR environment may utilize the models for use in gameplay. More particularly, the models may input the touch data and/or the force data to generate hand gestures within the VR environment.
  • the gestures may include various video game controls, such as crushing a rock or squeezing a balloon (e.g., clinched fist gesture), toggling through available weapons usable by a game character (e.g., scrolling or sliding fingers along the controller), dropping objects (e.g., open hand gesture), firing a weapon (e.g., pinky finger, ring finger, and middle finger touching the controller while the index finger and thumb are pointed outward), and so forth. That is, the location of the touch input on the controller, as well as the force with which the user grips the controller, may be used in conjunction with the previously trained models to generate a hand gesture (e.g., clinched fist) within the VR environment and/or on a VR display. Further, the model(s) may utilize previously generated animations and/or image data when rendering and/or generating the hand gestures for display.
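  • As a hedged sketch of how a trained model might be consumed at runtime, the snippet below feeds touch and force features to a stand-in model, then looks up a gameplay action for the predicted gesture. The model interface, gesture labels, and action bindings are illustrative assumptions, not the application's implementation.

```python
from typing import Callable, Optional, Sequence, Tuple

# Hypothetical mapping from recognized hand gestures to the gameplay controls above.
GESTURE_ACTIONS = {
    "clinched_fist": "crush rock / squeeze balloon",
    "finger_scroll": "toggle available weapons",
    "open_hand": "drop held object",
    "point_with_thumb_out": "fire weapon",
}

def render_hand(model: Callable[[Sequence[float], Sequence[float]], str],
                touch_scores: Sequence[float],
                force_values: Sequence[float]) -> Tuple[str, Optional[str]]:
    """Run a previously trained gesture model on touch and force features.

    `model` stands in for the trained model(s) 120; its callable interface here
    is an assumption. Returns the predicted gesture (used to drive the hand
    animation) plus any gameplay action bound to that gesture."""
    gesture = model(touch_scores, force_values)
    return gesture, GESTURE_ACTIONS.get(gesture)
```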
  • FIG. 1 depicts an example environment 100 in which a motion capture system(s) 102 and a user 104 reside.
  • the motion capture system(s) 102 is shown mounted to walls of the environment 100, however, in some instances, the motion capture system(s) 102 may mount elsewhere within the environment 100 (e.g., ceiling, floor, etc.).
  • although FIG. 1 illustrates four motion capture system(s) 102, the environment 100 may include more or fewer than four motion capture system(s) 102.
  • the motion capture system(s) 102 may include projector(s) configured to generate and project light and/or images 106 within/into the environment 100.
  • the images 106 may include visible light images perceptible to the user 104, visible light images imperceptible to the user 104, images with non-visible light, and/or a combination thereof.
  • the projector(s) may include any number of technologies capable of generating the images 106 and projecting the images 106 onto a surface or objects within the environment 100.
  • suitable technologies may include a digital micromirror device (DMD), liquid crystal on silicon display (LCOS), liquid crystal display, 3LCD, and so forth.
  • the projector(s) may have a field of view which describes a particular solid angle and the field of view may vary according to changes in the configuration of the projector(s). For example, the field of view may narrow upon application of a zoom.
  • the motion capture system(s) 102 may include high resolution cameras, infrared (IR) detectors, sensors, and so forth.
  • the camera(s) may image the environment 100 in visible light wavelengths, non-visible light wavelengths, or both.
  • the camera(s) also have a field of view that describes a particular solid angle and the field of view of the camera may vary according to changes in the configuration of the camera(s). For example, an optical zoom of the camera(s) may narrow the camera field of view.
  • the environment 100 may include a plurality of cameras and/or a varying type of camera.
  • the cameras may include a three-dimensional (3D), an infrared (IR) camera, and/or a red-green-blue (RGB) camera.
  • the 3D camera and the IR camera may capture information for detecting depths of objects within the environment (e.g., markers) while the RGB camera may detect edges of objects by identifying changes in color within the environment 100.
  • the motion capture system(s) 102 may include a single camera configured to perform all of the aforementioned functions.
  • One or more components of the motion capture system(s) 102 may mount to a chassis with a fixed orientation or may mount to the chassis via an actuator, such that the chassis and/or the one or more components may move.
  • the actuators may include piezoelectric actuators, motors, linear actuators, and other devices configured to displace or move the chassis and/or the one or more components mounted thereto, such as the projector(s) and/or the camera(s).
  • the actuator may comprise a pan motor, a tilt motor, and so forth. The pan motor may rotate the chassis in a yawing motion while the tilt motor may change the pitch of the chassis.
  • the chassis may additionally or alternatively include a roll motor, which allows the chassis to move in a rolling motion.
  • the motion capture system(s) 102 may capture different views of the environment 100.
  • the motion capture system(s) 102 may also include a ranging system.
  • the ranging system may provide distance information from the motion capture system(s) 102 to a scanned entity, object (e.g., the user 104 and/or the controller 110), and/or a set of objects.
  • the ranging system may comprise and/or use radar, light detection and ranging (LIDAR), ultrasonic ranging, stereoscopic ranging, structured light analysis, time-of-flight observations (e.g., measuring time-of-flight round trip for pixels sensed at a camera), and so forth.
  • the projector(s) may project a structured light pattern within the environment 100 and the camera(s) may capture an image of the reflected light pattern.
  • the motion capture system(s) 102 may analyze a deformation in the reflected pattern, due to a lateral displacement between the projector and the camera, to determine depths or distances corresponding to different points, areas, or pixels within the environment 100.
  • the motion capture system(s) 102 may determine or know distance(s) between the respective components of the motion capture system(s) 102, which may aid in the recovery of the structured light pattern and/or other light data from the environment 100.
  • the motion capture system(s) 102 may also use the distances to calculate other distances, dimensions, and/or otherwise aid in the characterization of entities or objects within the environment 100. In implementations where the relative angle and size of the projector field of view and camera field of view may vary, the motion capture system(s) 102 may determine and/or know such dimensions.
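  • For intuition, the snippet below shows standard structured-light triangulation under the assumption of a rectified projector-camera pair with a known baseline and focal length; it is a generic sketch of the geometry the preceding paragraphs allude to, not a formula given in the application.

```python
def depth_from_pattern_shift(disparity_px: float,
                             baseline_m: float,
                             focal_length_px: float) -> float:
    """Triangulated depth (in meters) for one feature of a projected pattern.

    disparity_px: lateral shift, in pixels, between where the projector would
        place the feature at infinity and where the camera actually observes it.
    baseline_m: lateral displacement between projector and camera.
    focal_length_px: camera focal length expressed in pixels.
    """
    if disparity_px <= 0:
        raise ValueError("no measurable displacement; depth cannot be resolved")
    return baseline_m * focal_length_px / disparity_px

# Example: a 70 px shift with a 10 cm baseline and a 1400 px focal length
# corresponds to a point roughly 2 m from the system.
print(depth_from_pattern_shift(70.0, 0.10, 1400.0))   # -> 2.0
```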
  • the user 104 may wear a VR headset 108 and hold the controllers 110.
  • the VR headset 108 may include an internal display (not shown) that presents a simulated view of a virtual environment, gameplay, or shows objects within virtual space.
  • the VR headset 108 may include a headband along with additional sensors.
  • the VR headset 108 may comprise a helmet or cap and include sensors located at various positions on the top of the helmet or cap to receive optical signals.
  • the user 104 and/or the controllers 110 may include markers.
  • the motion capture system(s) 102 via the projector(s) projecting light and the camera(s) capturing images of the reflections of the markers, may detect a position of the user 104 and/or the controllers 110 within the environment 100.
  • the markers may be utilized to determine an orientation and/or position of the user 104, or portions of the user 104 (e.g., hands or fingers) within the environment 100, as well as an orientation and/or position of the controller 110 within the environment 100.
  • the ranging system may also aid in determining locations of the user 104 (or portions thereof) and the controllers 110 through determining distances between the motion capture system(s) 102 and the markers.
  • the motion capture system(s) 102, the VR headset 108, and/or the controllers 110 may communicatively couple to one or more remote computing resource(s) 112.
  • the remote computing resource(s) 112 may be remote from the environment 100 and the motion capture system(s) 102, the VR headset 108, and/or the controllers 110.
  • the motion capture system(s) 102, the VR headset 108, and/or the controllers 110 may communicatively couple to the remote computing resource(s) 112 over a network 114.
  • the motion capture system(s) 102, the VR headset 108, and/or the controllers 110 may communicatively couple to the network 114 via wired technologies (e.g., wires, USB, fiber optic cable, etc.), wireless technologies (e.g., RF, cellular, satellite, Bluetooth, etc.), and/or other connection technologies.
  • the network 114 is representative of any type of communication network, including data and/or voice network, and may be implemented using wired infrastructure (e.g., cable, CAT5, fiber optic cable, etc.), a wireless infrastructure (e.g., RF, cellular, microwave, satellite, Bluetooth, etc.), and/or other connection technologies.
  • the remote computing resource(s) 112 may be implemented as one or more servers and may, in some instances, form a portion of a network-accessible computing platform implemented as a computing infrastructure of processors, storage, software, data access, and so forth that is maintained and accessible via a network such as the Internet.
  • the remote computing resource(s) 112 do not require end-user knowledge of the physical location and configuration of the system that delivers the services. Common expressions associated with these remote computing resource(s) 112 may include “on-demand computing,” “software as a service (SaaS),” “platform computing,” “network-accessible platform,” “cloud services,” “data centers,” and so forth.
  • the motion capture system(s) 102, the VR headset 108, and/or the controllers 110 may include one or more communication interfaces to facilitate the wireless connection to the network 114 and/or to one or more remote computing resource(s) 112. Additionally, the one or more communication interfaces may also permit transmission of data between the motion capture system(s) 102, the VR headset 108, and/or the controllers 110 (e.g., communication between one another). In some instances, however, the one or more communication interfaces may also include wired connections.
  • the remote computing resource(s) 112 include a processor(s) 116 and memory 118, which may store or otherwise have access to one or more model(s) 120.
  • the remote computing resource(s) 112 may receive motion data 122 from the motion capture system(s) 102 and touch data 124 and/or force data 126 from the controllers 110.
  • the touch data 124 may include a touch profile indicating a location (or locations) on the controller(s) 110 corresponding to touch input of the user.
  • the touch data 124 may also indicate a lack of touch input on the controller(s) 110.
  • the touch data 124 may indicate which finger(s) is/are touching the controller, and/or what portions of the finger(s) touch the controller(s) 110.
  • an array of proximity sensors (e.g., capacitive sensors) of the controller 110 may generate the touch data 124.
  • the FSR may generate force data 126 that indicates force values of the touch input on the controller 110.
  • the touch data 124 and/or the force data 126 may indicate a hand position, grip, or gesture of the hand within the VR environment.
  • the remote computing resource(s) 112 may transmit animation(s) 128 or other image data to the VR headset 108 for display.
  • the remote computing resource(s) 112 may utilize the model(s) 120 to generate the animations 128 displayed on the VR headset 108.
  • the remote computing resource(s) 112 may generate and/or train the model(s) 120 using the motion data 122, the touch data 124, and/or the force data 126.
  • the remote computing resource(s) 112 may generate and/or train the model(s) 120 through interactions with users and receiving the motion data 122, the touch data 124, and/or the force data 126.
  • the processor(s) 116 may analyze the motion data 122 and correlate the motion data 122 with the touch data 124 and/or the force data 126. Additionally, the processor(s) 116 may analyze the touch data 124 and associate the touch data 124 with the force data 126.
  • the processor(s) 116 may correlate a time associated with a capture of the motion data 122 to learn characteristics of the users. For instance, the processor(s) 116 may learn characteristics of the touch data 124 (e.g., location on the controller 110) and/or the force data 126 and associate these characteristics with particular gestures of the hand. After performing the data analysis, the processor(s) 116 may generate the model(s) 120 to correlate the motion data 122, the touch data 124, and/or the force data 126. In other words, the processor(s) 116 may analyze the touch data 124 and/or the force data 126 to correlate or otherwise associate the touch data 124 and/or the force data 126 with hand gestures, as represented by the motion data 122.
  • the processor(s) 116 may analyze the touch data 124 and/or the force data 126 to correlate or otherwise associate the touch data 124 and/or the force data 126 with hand gestures, as represented by the motion data 122.
  • Training the model(s) 120 based on the motion data 122, the touch data 124, and/or the force data 126 permits the model(s) 120 to determine hand gestures using the touch data 124 and/or the force data 126 received in subsequent interactions by users (i.e., during gameplay). That is, the model(s) 120 may receive the touch data 124 and/or the force data 126 as inputs, and utilize the touch data 124 and/or the force data 126 to determine a hand gesture of the user 104. For example, when a user is holding the controller 110, the controller 110 may receive the touch data 124 generated by the array of the proximity sensors, where the touch data 124 indicates a location of the touch input at the controller 110.
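  • A minimal sketch of this training flow is shown below, assuming the motion-capture data has already been reduced offline to a per-sample gesture label; the field names, the 80/20 split, and the choice of a random-forest classifier are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def build_training_set(samples):
    """samples: iterable of dicts with 'touch', 'force', and 'gesture' fields,
    where 'gesture' is a label derived offline from the motion-capture data
    (e.g., by fitting a skeletal model to the marker positions)."""
    X = np.array([np.concatenate([s["touch"], s["force"]]) for s in samples])
    y = np.array([s["gesture"] for s in samples])
    return X, y

def train_gesture_model(samples):
    """Fit a classifier that maps touch/force features to hand-gesture labels."""
    X, y = build_training_set(samples)
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    print("validation accuracy:", model.score(X_val, y_val))
    return model
```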
  • the touch data 124 may also indicate a proximity of the hand of the user with respect to the controller 110 through measuring a capacitance value between fingers of the user and the controller 110. For instance, the user may hover his or her fingers above the controller 110.
  • the controller 110 may transmit the touch data 124 to the remote computing resource(s) 112, where the touch data 124 is input into the model(s) 120.
  • the FSR of the controller 110 may generate the force data 126 indicating an amount of force associated with the touch input.
  • the controller 110 may transmit the force data 126 to the remote computing resource(s) 112.
  • the processor(s) 116 may select one or more of the model(s) 120 based on characteristics of the touch data 124 and/or the force data 126. For example, the processor(s) 116 may select certain model(s) 120 for generating hand gestures based on the amount of force the user 104 grips the controller 110 (using the force data 126) and/or a location of the grip of the user 104 on the controller 110 (using the touch data 124).
  • the processor(s) 116 may select the model(s) 120 based in part on other user characteristics, such as on user interests, gender, age, etc. For instance, depending on how the user 104 holds the controller 110 and/or where the controller 110 receives touch input, the processor(s) 116 may identify an age and/or hand size of the user 104. Such information may be utilized to select different model(s) 120 and/or generate the animation(s) 128 representative of the hands of the user 104.
  • the processor(s) 116 may input the touch data 124 into the model(s) 120.
  • the processor(s) 116 using the model(s) 120, may generate the animation(s) 128 corresponding to the touch data 124 and/or the force data 126.
  • the processor(s) 116 may determine the user is holding the controller 110 with a clinched fist.
  • the processor(s) 116 may generate the animation 128 depicting the clinched fist of the user 104 and transmit the animation 128 to the VR headset 108 for display.
  • the processor(s) 116 may utilize rankings to determine the most probabilistic hand gesture represented by the touch data 124 and/or the force data 126 utilizing profiles stored in association with the model(s) 120. For instance, the processor(s) 116 may compare the touch data 124 and/or the force data 126 to a portion of the model(s) 120 or all of the model(s) 120 to determine a probability that particular hand gestures correspond to the touch input of the user. In such instances, the model(s) 120 may be stored in association with touch data 124 that indicates a location of touch input received at the controller 110 and/or force data 126 that indicates a relative force of the touch input at the controller 110.
  • the touch data 124 and/or the force data 126 may characterize the model(s) 120. Accordingly, during gameplay, when the remote computing resource(s) 112 receives touch data 124 and/or the force data 126, the remote computing resource(s) 112 may select one or more model(s) 120 to generate the animation 128 by comparing the received touch data 124 and/or the force data 126 with the touch data and/or the force data stored in association with the model(s) 120, respectively.
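  • A sketch of this selection step follows: incoming touch/force data is compared against the touch/force data stored in association with each model, and the closest match is chosen. The nearest-profile (Euclidean) rule and the field names are assumptions for illustration.

```python
import numpy as np

def select_model(profiles, touch, force):
    """Pick the model whose stored touch/force profile best matches incoming data.

    profiles: list of dicts with 'touch', 'force', and 'model' entries, where the
    stored arrays characterize the inputs each model was trained on."""
    incoming = np.concatenate([np.asarray(touch, float), np.asarray(force, float)])
    distances = [
        np.linalg.norm(incoming - np.concatenate([np.asarray(p["touch"], float),
                                                  np.asarray(p["force"], float)]))
        for p in profiles
    ]
    return profiles[int(np.argmin(distances))]["model"]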
  • the remote computing resource(s) 112 may also perform predictive modeling for future events.
  • the predictive modeling may determine a probability of whether an outcome may occur or may not occur.
  • the processor(s) 116 may determine a probability of future hand gestures utilizing the motion data 122, the touch data 124, and/or the force data 126 available from the memory 118.
  • the processor(s) 116 may predict a forthcoming second hand gesture and generate the second hand gesture for display on the VR headset 108.
  • the processor(s) 116 may utilize previous motion data 122, touch data 124, and/or force data 126 to predict future hand gestures of the user 104 and generate corresponding animation(s) 128. In some instances, the prediction may reduce a latency time between gestures generated by the remote computing resource(s) 112 that are displayed on the VR headset 108.
  • the processor(s) 116 may determine a certain probability and/or confidence associated with a predicted gesture. For instance, if a predicted second hand gesture is within a certain confidence level or threshold, the processor(s) 116 may generate an animation(s) 128 corresponding to the second hand gesture and provide the gesture to the VR headset 108 for display.
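  • The confidence gate described above might look like the sketch below, which assumes a scikit-learn-style probabilistic classifier; the windowing scheme and the 0.85 cutoff are illustrative assumptions.

```python
import numpy as np

def maybe_prerender(model, recent_features, confidence_threshold=0.85):
    """Predict the next hand gesture from a short window of recent (touch, force)
    feature vectors and return it only when the model is confident enough.

    `model` is assumed to expose predict_proba/classes_ in the scikit-learn style."""
    window = np.concatenate([np.asarray(f, float) for f in recent_features]).reshape(1, -1)
    probabilities = model.predict_proba(window)[0]
    best = int(np.argmax(probabilities))
    if probabilities[best] >= confidence_threshold:
        return model.classes_[best]     # confident: hand off for animation generation
    return None                         # not confident: wait for more data
```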
  • validation operations may validate an accuracy of the model(s) 120. That is, as noted above, through iteratively capturing the motion data 122, the touch data 124, and/or the force data 126, the processors(s) 116 may train the model(s) 120 to better correlate the touch data 124 and/or the force data 126 with hand gestures represented within the motion data 122 (e.g., machine learning algorithms or techniques). Training the model(s) 120 may increase the accuracy that the displayed animation(s) 128 is/are representative of the touch data 124 and/or the force data 126 received at the controller 110.
  • the processor(s) 116 may also include components that learn the model(s) 120 based on interactions with different types of users. For instance, the processor(s) 116 may build and/or refine the model(s) 120, or may learn combinations and/or blendings of existing model(s) 120.
  • the model generation techniques described herein may also include at least one of gradient boosting techniques and/or hyperparameter tuning to train the model(s) 120.
  • Gradient boosting may include, for example, producing a prediction model in the form of an ensemble of weak prediction models, which may be decision trees.
  • the prediction model may be built in a stage-wise fashion and may allow optimization of an arbitrary differentiable loss function.
  • Hyperparameter tuning may include optimization of hyperparameters during a training process. For example, the model 120 may receive a training data set. In evaluating the aggregate accuracy of the model 120, hyperparameters may be tuned.
  • training the model(s) 120 may involve identifying input features that increase the accuracy of the model(s) 120 and/or other input features that decrease the accuracy of the model(s) 120 or have no or little effect on the model(s) 120.
  • the model(s) 120 may be refitted to utilize the features that increase accuracy while refraining from utilizing the features that decrease accuracy or have no or little effect on accuracy.
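  • A hedged sketch combining the gradient boosting, hyperparameter tuning, and feature-pruning steps described above is given below; the parameter grid, cross-validation setup, and importance cutoff are illustrative choices, not values from the application.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

def tune_and_prune(X, y):
    """Gradient-boosted gesture classifier with hyperparameter tuning and a
    simple feature-pruning pass. X: feature matrix (numpy array), y: labels."""
    grid = GridSearchCV(
        GradientBoostingClassifier(random_state=0),
        param_grid={"n_estimators": [100, 300],
                    "learning_rate": [0.05, 0.1],
                    "max_depth": [2, 3]},
        cv=3,
    )
    grid.fit(X, y)
    model = grid.best_estimator_

    # Refit using only features that measurably contribute to accuracy.
    keep = np.flatnonzero(model.feature_importances_ > 0.01)
    pruned = GradientBoostingClassifier(random_state=0, **grid.best_params_)
    pruned.fit(X[:, keep], y)
    return pruned, keep
```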
  • a processor such as processor(s) 116, may include multiple processors and/or a processor having multiple cores. Further, the processors may comprise one or more cores of different types. For example, the processors may include application processor units, graphic processing units, and so forth. In one implementation, the processor may comprise a microcontroller and/or a microprocessor.
  • the processor(s) 116 may include a graphics processing unit (GPU), a microprocessor, a digital signal processor or other processing units or components known in the art. Alternatively, or in addition, the functionally described herein can be performed, at least in part, by one or more hardware logic components.
  • illustrative types of hardware logic components include field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), system-on-a-chip systems (SOCs), complex programmable logic devices (CPLDs), etc.
  • each of the processor(s) 116 may possess its own local memory, which also may store program components, program data, and/or one or more operating systems.
  • the memory 118 may include volatile and nonvolatile memory, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program components, or other data.
  • Such memory 118 may include, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, RAID storage systems, or any other medium which can be used to store the desired information and which can be accessed by a computing device.
  • the memory 118 may be implemented as computer-readable storage media (“CRSM”), which may be any available physical media accessible by the processor(s) 116 to execute instructions stored on the memory 118.
  • CRSM may include random access memory (“RAM”) and Flash memory.
  • CRSM may include, but is not limited to, read-only memory (“ROM”), electrically erasable programmable read-only memory (“EEPROM”), or any other tangible medium which can be used to store the desired information and which can be accessed by the processor(s).
  • FIG. 2 shows a user 104 holding a controller 110 (which may represent, and/or be similar to the controller 110 of FIG. 1).
  • the controller 110 may include markers 200, which may couple and/or attach to any portion of the controller 110, such as handles, straps, grips, and so forth.
  • portions of the user 104 may include markers 202 that attach on and/or along a hand of the user 104, such as fingertips, knuckles, finger joints, wrists, and so forth.
  • the markers 200, 202 may attach to the user 104 and/or the controller 110, respectively, using adhesives.
  • the markers 200, 202 may include infrared elements, reflectors, and/or images that are responsive to electromagnetic radiation (e.g., infrared light) emitted by the projector(s) of the motion capture system(s) 102. Additionally, or alternatively, the markers 200, 202 may include tracking beacons that emit electromagnetic radiation (e.g., infrared light) captured by the cameras of the motion capture system(s) 102.
  • the motion capture system(s) 102 may scan at least a portion of an environment, such as the environment 100, and objects contained therein to detect the markers 200, 202.
  • the projector(s) may project infrared light towards the user 104 and the controller(s) 110, the markers 200, 202 may reflect the light, and the camera(s) and/or the sensors of the motion capture system(s) 102 may capture the reflected light.
  • a position and/or orientation of the controller(s) 110 and/or the hand of the user 104 may be determined.
  • the remote computing resource(s) 112 may analyze and parse images captured by the cameras and identify positions of the markers 200, 202 within the environment 100.
  • the remote computing resource(s) 112 may determine, using the position of the markers 200, 202, gestures, hand positions, finger positions, and so forth made by the user 104 (i.e., which fingers are extended, curled, etc.).
  • the motion capture system(s) 102 may utilize information about a location/pattern of the markers 200, 202 to generate a skeletal model (e.g., an animated 3D skeletal model) representing the hand or gestures of the hand (e.g., clinched fist).
  • FIGS. 3-6 depict an example controller 300 (which may represent, and/or be similar to, the controller 110 of FIGS. 1 and 2) according to an example embodiment of the present disclosure.
  • an electronic system such as a VR video gaming system, a robot, weapon, or medical device, may utilize the controller 300.
  • the controller 300 may include a controller body 310 having a handle 312, and a hand retainer 320 to retain the controller 300 in the hand of a user (e.g. the user’s left hand).
  • the handle 312 may comprise a substantially cylindrical tubular housing. In this context, a substantially cylindrical shape need not have a constant diameter, or a perfectly circular cross-section.
  • the controller body 310 may include a head (between the handle 312 and a distal end 311), which may optionally include one or more thumb-operated controls 314, 315, 316.
  • the head may include a tilting button, or any other button, knob, wheel, joystick, or trackball considered as a thumb-operated control conveniently manipulated by a thumb of a user during normal operation and while the controller 300 is held in the hand of the user.
  • the controller 300 may include a tracking member 330 that is fixed to the controller body 310, and may include two noses 332, 334, each protruding from a corresponding one of two opposing distal ends of the tracking member 330.
  • the tracking member 330 may comprise a tracking arc having an arcuate shape.
  • the tracking member 330 may include a plurality of tracking transducers disposed therein/thereon (which may represent, and/or be similar to the markers 200, 202 of FIG. 2).
  • each protruding nose 332, 334 may include at least one tracking transducer.
  • the controller body 310 may include tracking transducers, with at least one distal tracking transducer disposed adjacent the distal end 311.
  • the tracking transducers, which may include tracking sensors, may respond to electromagnetic radiation (e.g. infrared light) emitted by the motion capture system(s) 102.
  • the tracking transducers may include tracking beacons that emit electromagnetic radiation (e.g. infrared light) that is received by cameras of the motion capture system(s) 102.
  • the projectors of the motion capture system(s) 102 may widely broadcast pulsed infrared light towards the controller 300.
  • the plurality of tracking transducers of the tracking member 330, including the tracking transducers in each nose 332, 334 (e.g., three sensors in each nose), may include infrared light sensors that receive, or are shadowed from, the broadcast pulsed infrared light.
  • the tracking member 330 and/or the controller body 310 may comprise a substantially rigid material, such as hard plastic, and may be firmly fixed together so that they do not appreciably translate or rotate relative to each other.
  • the tracking member 330 may couple to the controller body 310 at two locations.
  • the hand retainer 320 may attach to the controller 300 (e.g., the controller body 310 and/or the tracking member 330) adjacent those two locations, to bias the palm of the user against the outside surface of the handle 312 between the two locations.
  • the tracking member 330 and the controller body 310 may comprise an integral monolithic component having material continuity, rather than being assembled together.
  • a single injection-molding process may mold the tracking member 330 and the controller body 310 together, resulting in one integral hard plastic component that comprises both the tracking member 330 and the controller body 310.
  • the tracking member 330 and the controller body 310 may comprise separately fabricated parts that are later assembled together. In either instance, the tracking member 330 may affix to the controller body 310.
  • the hand retainer 320 is shown in the open position in FIG. 3.
  • the hand retainer 320 may optionally be biased toward the open position by a curved resilient member 322 to facilitate the insertion of the left hand of the user between the hand retainer 320 and the controller body 310 when the user grasps for the controller 300 with his or her vision blocked by a VR headset (e.g., VR headset 108).
  • the curved resilient member 322 may comprise a flexible metal strip that elastically bends, or may comprise an alternative plastic material, such as nylon that may bend substantially elastically.
  • a cushion or fabric material 324 (e.g., a neoprene sheath) may cover at least a portion of the curved resilient member 322.
  • the fabric material 324 may adhere to only the side of the curved resilient member 322, such as on a side that faces the hand of the user.
  • the hand retainer 320 may adjust in length, for example, by including a draw cord 326 that is cinched by a spring-biased chock 328.
  • the draw cord 326 may have an excess length used as a lanyard.
  • the cushion or fabric material 324 may couple to the draw cord 326.
  • the tension of the cinched draw cord 326 may preload the curved resilient member 322.
  • the tension that the curved resilient member 322 imparts to the hand retainer 320 may cause the hand retainer 320 to automatically open when the draw cord 326 is un-cinched.
  • the length of a hand retainer 320 may adjust in other ways, such as a cleat, an elastic band (that temporarily stretches when the hand is inserted, so that it applies elastic tension to press against the back of the hand), a hook & loop strap attachment that allows length adjustment, etc.
  • the hand retainer 320 may dispose between the handle 312 and the tracking member 330, and contact the back of the user’s hand.
  • FIG. 4 shows the controller 300 during operation with the left hand of the user inserted therein but not grasping the controller body 310.
  • the hand retainer 320 is closed and tightened over the hand to physically bias the palm of the user against the outside surface of the handle 312.
  • the hand retainer 320 may retain the controller 300 within the hand of the user even in instances where the user is not grasping the controller body 310.
  • the handle 312 of the controller body 310 includes an array of proximity sensors that are spatially distributed partially or completely around its outer surface.
  • the proximity sensors of the array of proximity sensors are not necessarily of equal size and do not necessarily have equal spacing between them.
  • the array of proximity sensors may comprise a grid spatially distributed about the controller body 310.
  • the array of proximity sensors is responsive to the proximity of the finger(s) of the user relative to the outside surface of the handle 312.
  • the array of proximity sensors may include an array of capacitive sensors embedded under the outer surface of the handle 312, where the outer surface comprises an electrically insulative material to sense touch from the user.
  • the capacitance between the array of capacitive sensors and a portion of the hand of the user may be inversely related to the distance therebetween.
  • an RC oscillator circuit may connect to an element of the array of capacitive sensors; the time constant of the RC oscillator circuit, and therefore the period and frequency of oscillation, will vary with the capacitance.
  • the circuit may detect a release of finger(s) from the outer surface of the handle 312.
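  • A back-of-the-envelope sketch of the relationship such a circuit exploits is given below; the proportionality constant, resistor value, and release margin are assumptions rather than values from the application.

```python
def capacitance_from_period(period_s: float, resistance_ohms: float, k: float = 1.0) -> float:
    """For a simple RC relaxation oscillator the period is proportional to R*C
    (T = k * R * C, with k set by the specific circuit topology), so the sensed
    capacitance can be recovered from a period measurement."""
    return period_s / (k * resistance_ohms)

def finger_released(period_s: float, resistance_ohms: float,
                    baseline_capacitance_f: float, margin: float = 1.2) -> bool:
    """A finger lifting off the handle reduces the sensed capacitance, shortening
    the oscillation period; treat anything within `margin` of the no-touch
    baseline as a release."""
    c = capacitance_from_period(period_s, resistance_ohms)
    return c <= baseline_capacitance_f * margin
```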
  • the array of proximity sensors may generate touch data (e.g., the touch data 124) in response to touch input from the user, where the touch data indicates the proximity of the finger(s) of the user relative to the outside surface of the handle 312.
  • the hand retainer 320 when tightly closed around the hand of the user, may prevent the controller 300 from falling out of hand and the fingers from excessively translating relative to the array of proximity sensors on the handle 312, thereby reliably sensing finger motion and position.
  • the motion capture system(s) 102 and/or the remote computing resource(s) 112 may include an algorithm embodying anatomically-possible motions of fingers to better use the touch data 124 from the array of proximity sensors to render the opening of a controlled character’s hand, finger pointing, or other motions of fingers relative to the controller 300 or relative to each other (e.g., hand gestures).
  • the user’s movement of the controller 300 and/or fingers may help control a VR gaming system, defense system, medical system, industrial robot or machine, or another device.
  • the system may render a throwing motion based on the movement of the tracking transducers, and may render the release of a thrown object based on sensing the release of the user’s fingers (e.g., using the touch data 124) from the outer surface of the handle 312 of the controller 300.
  • the hand retainer 320 may therefore allow the user to “let go” of the controller 300 without the controller 300 actually separating from the hand, being thrown, and/or being dropped to the floor, which may enable additional functionality of the controlled electronic system. For example, sensing a release and/or a restoration of the user’s grasp of the handle 312 of the controller body 310 may indicate a corresponding throwing and/or grasping of objects within gameplay. The hand retainer 320 may therefore safely secure and retain the hand of the user during such animations. In some instances, the location of the hand retainer 320 in the embodiment of FIGS. 3-7 may help the tracking member 330 protect the back of the user’s hand from impacts in the real world, for example, when the user moves in response to a prompt sensed in the VR environment (e.g., while practically blinded by the VR headset 108).
  • the controller 300 may include a FSR to detect force values associated with touches from the user (e.g., the force data 126).
  • the force data 126 may be utilized in conjunction with the touch data 124 to indicate movements and/or grips of the user within a VR environment.
  • the controller 300 may include a rechargeable battery disposed within the controller body 310 and/or the hand retainer 320 (e.g. hand retention strap) may include an electrically-conductive charging wire electrically coupled to the rechargeable battery.
  • the controller 300 may also include a radio frequency (RF) transmitter for communication with the rest of the motion capture system(s) 102.
  • the rechargeable battery may power the RF transmitter, which may respond to the thumb-operated controls 314, 315, 316, the array of proximity sensors in the handle 312 of the controller body 310, and/or tracking sensors in the tracking member 330.
  • FIGS. 5 and 6 depict the controller 300 during operation when the hand retainer 320 is closed and when the hand grasps the controller body 310.
  • FIGS. 5 and 6 also illustrate that the thumb may operate one or more of the thumb-operated controls (e.g., the track pad 316).
  • FIG. 7 shows that in certain embodiments, the controller 300 may comprise a left controller in a pair of controllers that may include a similar right controller 700.
  • the controllers 300, 700 may individually generate the touch data 124 and/or the force data 126 (from the array of proximity sensors and the FSR, respectively) for both of a user’s hands simultaneously.
  • the remote computing resource(s) 112 may receive the motion data 122 (from the camera(s) of the motion capture system(s) 102) as well as the touch data 124 and/or the force data 126 (from the controllers 300, 700) to enhance a VR experience.
  • FIGS. 8A and 8B depict a front view of a right-hand controller 800 and a back view of the right-hand controller 800, respectively, according to another example embodiment of the present disclosure.
  • the right-hand controller 800 may include components discussed above with regard to the controller(s) 110 of FIG. 1 and/or the controller 300 of FIGS. 3-7.
  • the controller 800 may include a controller body comprising a head 810 and a handle 812.
  • the head 810 may include at least one thumb- operated control A, B, 808, and may also include a control operable by the index finger (e.g., trigger 809).
  • the handle 812 may comprise a tubular housing that is partially wrapped by an outer shell 840.
  • the inner surface of the outer shell 840 may include a spatially distributed array of proximity sensors.
  • the array of proximity sensors may respond to a proximity of the user’s fingers relative to the outer shell 840.
  • the proximity sensors of the array of proximity sensors are not necessarily of equal size, nor are they necessarily spaced regularly or equally from each other.
  • the array of proximity sensors may be a plurality of capacitive sensors that may connect to a flex circuit bonded to the inner surface of the outer shell 840.
  • a tracking member 830 may affix to the controller body at the head 810 and at an end of the handle 812.
  • a hand retainer 820 is configured to physically bias the user’s palm against the outer shell 840, between the head 810 and the end of the handle 812.
  • the hand retainer 820 is preferably disposed between the handle 812 and the tracking member 830, and may comprise a hand retention strap that adjusts in length and contacts the back of the user’s hand.
  • the hand retainer 820 may include a draw cord 828 that may be adjusted in length by a cord lock 826 (adjacent a distal end of the handle 812) that selectively prevents sliding motion of the draw cord 828 at the location of the cord lock 826.
  • tracking transducers 832, 833 are disposed on the tracking member 830.
  • protruding noses at opposing distal ends of the tracking member 830 may include the tracking transducers 832, 833.
  • a distal region of the head 810 may include additional tracking transducers 834.
  • the motion capture system(s) 102 may include tracking sensors that respond to electromagnetic radiation (e.g., infrared light) emitted by the motion capture system(s) 102, or may include tracking beacons that emit electromagnetic radiation (e.g., infrared light) received by the motion capture system(s) 102.
  • the motion capture system(s) 102 may include projector(s) that widely broadcast pulsed infrared light towards the controller 800.
  • the motion capture system(s) 102 may receive the response of the tracking sensors and the motion capture system(s) 102 and/or the remote computing resource(s) 112 may interpret such response to effectively track the location and orientation of the controller 800.
  • a printed circuit board may mount within the handle 812 and electrically connect components within the controller 800 (e.g., buttons, battery, etc.).
  • the PCB may include a force sensing resistor (FSR), and the controller 800 may include a plunger that conveys a compressive force, applied via the outer shell 840 to the outside of the tubular housing of the handle, inward to the FSR.
  • the FSR, in conjunction with the array of proximity sensors, may facilitate sensing of both the onset of grasping by the user and the relative strength of such grasping, which may facilitate certain gameplay features.
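As a rough illustration of how touch data and force data might be combined for this purpose, the following sketch reports both the onset of a grasp and its relative strength. The thresholds, minimum finger count, and full-squeeze force are assumed values, not figures from this disclosure.

```python
# Illustrative sketch (not the disclosed implementation): combining the
# proximity array ("touch data") with the FSR reading ("force data") to
# report grasp onset and relative grasp strength.

from dataclasses import dataclass
from typing import Sequence

@dataclass
class GraspState:
    grasping: bool      # True once the hand has closed on the handle
    strength: float     # 0.0 (light) .. 1.0 (full squeeze)

def evaluate_grasp(finger_proximity: Sequence[float],
                   fsr_force_newtons: float,
                   touch_threshold: float = 0.8,
                   min_fingers: int = 3,
                   full_squeeze_newtons: float = 20.0) -> GraspState:
    """Grasp onset: enough fingers near the handle AND measurable force.
    Strength: FSR force normalized against an assumed full-squeeze value."""
    fingers_on_handle = sum(p >= touch_threshold for p in finger_proximity)
    onset = fingers_on_handle >= min_fingers and fsr_force_newtons > 0.0
    strength = min(fsr_force_newtons / full_squeeze_newtons, 1.0) if onset else 0.0
    return GraspState(grasping=onset, strength=strength)

# Example: four fingers wrapped around the handle, moderate squeeze.
state = evaluate_grasp([0.95, 0.9, 0.85, 0.9], fsr_force_newtons=8.0)
```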
  • FIGS. 9A-9C depict different views of a force sensing resistor (FSR) 900 according to an example embodiment of the present disclosure.
  • the FSR 900 may include a first substrate 902, which in some instances may include polyimide.
  • the FSR 900 may further include a second substrate 904 disposed on (or over) the first substrate 902.
  • the first substrate 902 and the second substrate 904 may comprise the two primary substrates (or layers) of the FSR 900 (i.e., a 2-layer FSR 900).
  • the FSR 900 may include additional layers.
  • the first substrate 902 may represent a “bottom” or “base” substrate with respect to the two primary substrates of the FSR 900; however, in some instances, there may be layers of material behind (or below) the first substrate 902 (i.e., in the negative Z direction, as depicted in FIG. 9C).
  • the first substrate 902 includes a conductive material disposed on a front surface (i.e., the surface facing in the positive Z direction) of the first substrate 902.
  • the conductive material may include a plurality of interdigitated metal fingers.
  • the second substrate 904 (sometimes referred to as a resistive“membrane”) may include a resistive material disposed on a back surface (i.e., the surface facing the negative Z direction) of the second substrate 904.
  • This resistive material may include a semiconductive material, such as an ink composition (e.g., silver ink, carbon ink, mixtures thereof, etc.), that exhibits some level of electrical resistance (e.g., a relatively high sheet resistance within a range of 300 kiloOhm (kOhm) per square (kOhm/sq) to 400 kOhm/sq).
  • the sheet resistance of the second substrate 904 is 350 kOhm/sq.
  • the second substrate 904 may include other sheet resistance values, including those outside of the sheet resistance ranges specified herein, such as when the FSR 900 is used in other applications (e.g., non-controller based applications).
  • a material of the second substrate 904 may include mylar, with the resistive material disposed on a back surface of the second substrate 904.
  • the second substrate 904 may be made of polyimide having a resistive material (e.g., a conductive ink composition) on the back surface. Using polyimide for the second substrate 904 may allow for mass manufacturing of the FSR 900 using a reflow oven, whereas mylar may not withstand such high temperatures.
  • the FSR 900 may include one or more spacer layers interposed between the first substrate 902 and the second substrate 904 so that a center portion of the second substrate 904 may be suspended over the first substrate 902 and spaced a distance therefrom.
  • FIG. 9C shows two spacer layers including, without limitation, a coverlay 906 disposed on the first substrate 902 at a periphery of the first substrate 902, and a layer of adhesive 908 disposed on the coverlay 906.
  • a material of the coverlay 906 may include polyimide, and may thus include the same material as the first substrate 902.
  • a thickness (as measured in the Z direction) of the coverlay 906 may range from 10 microns to 15 microns.
  • a thickness (as measured in the Z direction) of the layer of adhesive 908 may range from 50 microns to 130 microns.
  • the total distance at which the second substrate 904 is spaced from the first substrate 902 may, therefore, be the sum of the thicknesses of the one or more spacer layers (e.g., the thickness of the coverlay 906 plus the thickness of the layer of adhesive 908).
  • These layers may include thicknesses that are outside of the thickness ranges specified herein, such as when the FSR 900 is used in other applications, such as non-controller based applications. As such, these thickness ranges are to be understood as non-limiting.
  • the thickness of the layer of adhesive 908 is made as thin as possible (e.g., at the lower end of the specified thickness range) to allow for an initial response (e.g., the FSR 900 starts detecting an input) under a very light applied force, F.
  • the adhesives, both materials and a thickness thereof, may vary to increase or decrease a stiffness of the FSR 900.
  • the FSR 900 may include an actuator 910 (such as a disk-shaped, compliant plunger) configured to convey a force, F, onto a front surface of the second substrate 904.
  • a material of the actuator 910 may include Poron, which is a compliant material that deforms to a degree upon application of a force upon the actuator 910.
  • the actuator 910 may be concentric with a center of an active area of the FSR 900 in order to center the applied force, F.
  • the actuator 910 may also span a portion of the active area of the FSR 900 to evenly distribute the applied force, F, across that portion of the active area of the FSR 900.
  • a thickness (as measured in the Z direction) of the second substrate 904 may range from 50 microns to 130 microns.
  • the second substrate 904 is flexible.
  • a material of the second substrate 904 may include mylar, which is flexible at a thickness within the above-specified range.
  • Functional operation of the FSR 900 relies on the flexibility of the second substrate 904 in order for the resistive material on the back surface of the second substrate 904 to come into contact with the conductive material on the front surface of the first substrate 902 under a compressive force, F, applied to the actuator 910.
  • a thickness (as measured in the Z direction) of the first substrate 902 may range from 20 microns to 30 microns.
  • a thickness (as measured in the Z direction) of the actuator 910 may range from 780 microns to 810 microns. These layers may include thicknesses that are outside of the thickness ranges specified herein, such as when the FSR 900 is used in other applications (e.g., non-controller based applications). As such, these thickness ranges are to be understood as non-limiting.
  • the FSR 900 may exhibit varying resistance in response to a variable force, F, applied to the actuator 910. For example, as the applied force, F, on the actuator 910 increases, the resistance may decrease. In this manner, the FSR 900 may act as a variable resistor whose value is controlled by the applied force, F.
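One common way to digitize such a variable resistance is a voltage divider read by an ADC; the sketch below assumes that arrangement. The supply voltage, pull-down resistance, ADC resolution, and the power-law force fit are all assumptions rather than values from this disclosure.

```python
# Hedged sketch: reading an FSR through a voltage divider (FSR on the high
# side, fixed pull-down resistor on the low side) into an ADC, then
# converting the recovered resistance to a rough force estimate.

V_SUPPLY = 3.3          # volts (assumed)
R_PULLDOWN = 10_000.0   # ohms (assumed)
ADC_MAX = 4095          # 12-bit ADC (assumed)

def fsr_resistance_from_adc(adc_counts: int) -> float:
    """Divider relation: v_out = V_SUPPLY * R_PULLDOWN / (R_FSR + R_PULLDOWN)."""
    v_out = V_SUPPLY * adc_counts / ADC_MAX
    if v_out <= 0.0:
        return float("inf")          # open circuit: no applied force
    return R_PULLDOWN * (V_SUPPLY - v_out) / v_out

def force_estimate(adc_counts: int, k: float = 2.0e5, exponent: float = 1.3) -> float:
    """Rough force estimate using an assumed power-law fit F ~ (k / R)^exponent;
    a real device would be calibrated for its specific FSR design."""
    r = fsr_resistance_from_adc(adc_counts)
    if r == float("inf"):
        return 0.0
    return (k / r) ** exponent
```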
  • the FSR 900 may include a “ShuntMode” FSR 900 or a “ThruMode” FSR 900.
  • the conductive material disposed on the front surface of the first substrate 902 may include a plurality of interdigitated metal fingers.
  • the resistive material on the back surface of the second substrate 904 may come into contact with some of the interdigitated metal fingers, which shunts the interdigitated metal fingers, thereby varying the resistance across the output terminals of the FSR 900.
  • An example conductive material for the interdigitated metal fingers may include copper, such as HA copper or RA copper.
  • the interdigitated metal fingers may also include gold plating.
  • a subtractive manufacturing process may form the plurality of interdigitated metal fingers.
  • a finger width and spacing between interdigitated metal fingers may provide an optimal balance between maximizing sensitivity of the FSR 900 and minimizing manufacturing etch tolerance.
  • the interdigitated metal fingers may include a uniform pattern or non-uniform patterns (e.g., denser fingers toward a center and less dense fingers toward the outside).
  • there may be no additional copper plating over the base layer copper prior to gold plating as adding additional copper plating over the base layer copper prior to gold-plating may cause an undesirable increase of detected resistance.
  • the omission of any additional copper plating on the interdigitated metal fingers prior to the gold plating may achieve optimal sensitivity in the FSR 900.
  • the conductive material on the first substrate 902 may include a solid area of conductive material with a semiconductive (or resistive) material disposed on the conductive material.
  • the second substrate 904 may have a similar construction (e.g., a solid area of conductive material having a semiconductive (or resistive) material disposed thereon).
  • the solid area of conductive material on each substrate (902 and 904) may couple to an individual output terminal and excitation current may pass through one layer to the other when the two substrates (902 and 904) come into contact under an applied force, F.
  • the FSR 900 may exhibit less hysteresis and higher repeatability (from one FSR 900 to another FSR 900), as compared to conventional FSRs, such as those that use mylar as the material for the bottom substrate.
  • Loading hysteresis describes the effect of previously applied forces on the current FSR 900 resistance.
  • the response curve is also monotonic and models true analog input that may be leveraged for a number of game mechanics in a VR gaming system, such as to crush a virtual rock, squeeze a virtual balloon, etc.
  • the FSR 900 is, in actuality, sensitive to applied pressure (force per unit area) because equal amounts of force applied at a small point versus over a larger area on the front surface of the second substrate 904 may result in a different resistance response of the FSR 900.
  • the actuator 910 may play a role in maintaining repeatability across FSRs 900 in terms of the response curves under the applied force, F.
  • the FSR 900 may present an open circuit under no external force (or load).
  • a threshold circuit may set a threshold resistance value at which the first substrate 902 and the second substrate 904 are considered to be “in contact,” meaning that the FSR 900 may represent an open circuit until the threshold resistance value is met, even if the two primary substrates (i.e., 902 and 904) are actually in contact.
  • the FSR 900 may mount on a planar surface of a structure within a handheld controller, such as the controller 110, 300, and 800 disclosed herein.
  • the FSR 900 may mount at any suitable location within the controller body to measure a resistance value that corresponds to an amount of force associated with touch inputs of the user applied to an outer surface of the controller body (e.g., a force applied by a finger pressing upon a control, a force applied by a hand squeezing the handle of the controller).
  • the FSR 900 may mount on a planar surface of the PCB, which itself may mount within the tubular housing of the handle.
  • the plunger may interface with the actuator 910 of the FSR 900, which may allow for conveying a compressive force from the plunger to the actuator 910.
  • Other configurations are possible, however, where the plunger is omitted, and the actuator 910 may interface with a portion of the tubular housing of the handle.
  • the FSR 900 may mount on a planar surface of a structure within a head (between the handle and a distal end).
  • the structure may mount within the head underneath one or more of the thumb-operated controls.
  • the FSR 900 may mount underneath the thumb-operated control (e.g., a track pad).
  • the controller may include multiple FSRs 900 disposed within the controller body, such as one or more FSRs 900 mounted within the handle and/or one or more FSRs 900 mounted underneath one or more corresponding controls on the head of the controller body.
  • the FSR 900 may enable variable analog inputs when implemented in a controller. For instance, squeezing the handle or pressing upon the thumb-operated control(s) with varying amounts of force may cause a resistance of the FSR 900 to vary with the applied force, F. The resistance may be converted into a varying digitized value that represents the FSR input for controlling game mechanics (e.g., picking up and throwing objects).
  • different touch or press styles may be utilized with the FSR 900.
  • a “Simple Threshold” style may mean that an FSR input event occurs when the digitized FSR input value meets or exceeds a threshold value. Because the digitized FSR input value corresponds to a particular resistance value measured by the FSR 900, which, in turn, corresponds to a particular amount of force applied to the FSR 900, one can also think of this style of “Soft Press” as registering an FSR input event when the resistance value measured by the FSR 900 meets a threshold resistance value, and/or when the applied force F meets a threshold amount of force.
  • the handle of the controller (e.g., the controller 110, 300, and/or 800) may be squeezed until a threshold amount of force is reached, and, in response, the FSR 900 input event is registered as a “Soft Press.”
  • the force required to “unpress” may be a fraction of the threshold value for debounce purposes and/or to mimic a tact switch with a physical snap ratio.
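A minimal sketch of this “Simple Threshold” Soft Press behavior, including the fractional unpress threshold used for debounce, might look as follows; the specific threshold value and unpress ratio are placeholders, not values from this disclosure.

```python
# Sketch of a "Simple Threshold" Soft Press with hysteresis: the press
# registers when the digitized FSR value reaches a threshold, and it
# "unpresses" only after the value falls below a fraction of that threshold,
# mimicking a tact switch's snap ratio for debounce purposes.

class SoftPress:
    def __init__(self, press_threshold: int = 600, unpress_ratio: float = 0.5):
        self.press_threshold = press_threshold
        self.unpress_threshold = press_threshold * unpress_ratio
        self.pressed = False

    def update(self, fsr_value: int) -> bool:
        """Feed the latest digitized FSR value; returns the current press state."""
        if not self.pressed and fsr_value >= self.press_threshold:
            self.pressed = True
        elif self.pressed and fsr_value < self.unpress_threshold:
            self.pressed = False
        return self.pressed
```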
  • A “Hair Trigger” style may set a baseline threshold value, and once a digitized FSR input value associated with the FSR 900 meets or exceeds the baseline threshold value, the binding is activated (i.e., an FSR input event is registered, akin to a press-and-hold button actuation).
  • A “Hip Fire” style may be similar to the “Simple Threshold” style of Soft Press, except that the “Hip Fire” style utilizes a time delay so that, in a configuration with multiple levels of bindings, the time delay can be used to ignore lower FSR input values if a higher threshold value is reached quickly enough.
  • the amount of time delay varies between the different sub-styles (e.g., Aggressive, Normal, and Relaxed).
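The “Hip Fire” behavior described above might be sketched as follows, with a lower and a higher binding level and a per-style delay window; the threshold values and delay durations are assumptions made for illustration.

```python
# Hedged sketch of the "Hip Fire" idea: crossing the lower threshold opens a
# short decision window; if the higher threshold is reached within that
# window, only the higher-level binding fires, otherwise the lower-level
# binding fires after the delay expires.

import time
from typing import Optional

HIP_FIRE_DELAYS = {"Aggressive": 0.05, "Normal": 0.10, "Relaxed": 0.20}  # seconds (assumed)

class HipFire:
    def __init__(self, low: int = 400, high: int = 900, style: str = "Normal"):
        self.low, self.high = low, high
        self.delay = HIP_FIRE_DELAYS[style]
        self.low_crossed_at = None

    def update(self, fsr_value: int, now: Optional[float] = None) -> Optional[str]:
        """Return "high", "low", or None for the binding level that fires this frame."""
        now = time.monotonic() if now is None else now
        if fsr_value >= self.high:
            self.low_crossed_at = None
            return "high"                     # hard press wins immediately
        if fsr_value >= self.low:
            if self.low_crossed_at is None:
                self.low_crossed_at = now     # open the decision window
                return None
            if now - self.low_crossed_at >= self.delay:
                self.low_crossed_at = None
                return "low"                  # window expired without a hard press
            return None
        self.low_crossed_at = None
        return None
```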
  • an additional Soft Press Threshold may include a multi-level threshold, such as the thresholds for the “Hip Fire” style of Soft Press.
  • the different styles of Soft Press for FSR-based input may enable a number of different game-related, analog inputs by virtue of the user squeezing or pressing a FSR-based input mechanism with varying force.
  • a VR game may allow a user to crush a rock or squeeze a balloon by squeezing the handle of the controller body with increasing force.
  • a shooting-based game may allow the user to toggle between different types of weapons by pressing a thumb-operated control with different levels of applied force.
  • the user may adjust the thresholds to reduce hand fatigue relating to actuation of the FSR-based input mechanism.
  • the threshold may include a default threshold value for a particular game (e.g., a lower default threshold value for a shooting game, a higher default threshold value for an exploration game, etc.).
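A hypothetical configuration sketch for such per-game defaults with a user override is shown below; the genre names and threshold values are invented for illustration only.

```python
# Hypothetical per-game default Soft Press thresholds with a user override
# to reduce hand fatigue. All keys and values are assumptions.

from typing import Optional

DEFAULT_THRESHOLDS = {
    "shooter": 450,       # lighter squeeze for rapid inputs (assumed value)
    "exploration": 700,   # firmer squeeze to avoid accidental grabs (assumed value)
}

def resolve_threshold(game_genre: str, user_override: Optional[int] = None) -> int:
    """A user-adjusted threshold wins; otherwise fall back to the genre default."""
    if user_override is not None:
        return user_override
    return DEFAULT_THRESHOLDS.get(game_genre, 600)   # 600 is an assumed global default
```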
  • FIGS. 10A-10F illustrate different variations of a user 1000 holding a controller 1002 (which may represent, and/or be similar to the controller 110 of FIGS. 1 and 2, the controller 300 of FIGS. 3-7, and/or the controller 800 of FIG. 8).
  • the remote computing resource(s) 112 may generate an animation (e.g., the animation 128) for display on the VR headset 108.
  • the animation may resemble the hand gestures depicted in FIGS. 10A-10F, respectively. That is, using previously trained model(s) 120, the remote computing resource(s) 112 may generate images of the hand based on the touch data 124 and/or force data 126 received from the controller(s) 1002.
  • the user 1000 is shown holding the controller 1002 with an open grip.
  • the fingers and the thumb of the user 1000 are not contacting the controller 1002 but instead, the controller 1002 may contact the palm of the user 1000.
  • the controller 1002 may detect this contact, generate the touch data 124 and/or the force data 126, and transmit the touch data 124 and/or the force data 126 to the remote computing resource(s) 112.
  • the touch data 124 may represent or indicate that the palm of the user 1000 touches the controller 1002.
  • the force data 126 may indicate a level of force with which the palm of the user 1000 is biased against the controller 1002.
  • the controller 1002 may only generate the touch data 124 indicative of the proximity of the finger(s) of the user relative to the outside surface of the handle 312.
  • the remote computing resource(s) may input the touch data 124 and/or the force data 126 into the model(s) 120 which may generate hand image data (e.g., the animation 128) corresponding to an open hand gesture.
  • the remote computing resource(s) 112 may select specified model(s) 120 for inputting the touch data 124 and/or the force data 126.
  • an open hand gesture may represent picking up an object, dropping an object, and so forth.
  • FIG. 10B illustrates the user 1000 holding the controller 1002 with all four fingers and the thumb.
  • the touch data 124 generated by the array of proximity sensors of the controller 1002 may indicate the grasp of the user 1000.
  • the force data 126 generated by the FSR (e.g., the FSR 900) may indicate the force in which the user 1000 grasps the controller 1002.
  • the controller 1002 may transmit the touch data 124 and/or the force data 126 to the remote computing resource(s) 112 where the remote computing resource(s) 112 may select the model(s) 120 corresponding to the touch data 124 and/or the force data 126.
  • the animation 128 corresponding to the model(s) 120 may generate a hand gesture that represents a closed fist gesture, a grabbing gesture, and so forth.
  • FIG. 10C illustrates the user 1000 holding the controller 1002 with all four fingers but not the thumb.
  • the remote computing resource(s) 112 may utilize the touch data 124 and/or the force data 126 to determine associated model(s) 120, where the model(s) 120 indicate that the user 1000 holds an object with all four fingers but not the thumb.
  • the model(s) 120 may generate the animation 128 for display on the VR headset 108 representing this configuration of touch on the controller 1002 (e.g., thumbs up, trigger actuator, etc.).
  • FIG. 10D illustrates the user 1000 holding the controller 1002 with the middle finger and the ring finger.
  • the touch data 124 may indicate the touch of the middle finger and the ring finger.
  • the touch data 124 may also indicate the proximity of the index finger and the pinky finger (which do not contact the controller 1002) relative to the outside surface of the handle of the controller 1002.
  • the force data 126 may indicate the force values associated with the grip of the middle finger and/or the ring finger of the user 1000.
  • the model(s) 120 may generate an associated animation 128 according to the touches of the middle finger and ring finger as well as their associated force values.
  • FIG. 10E illustrates the user 1000 holding the controller 1002 with the ring finger and the pinky finger.
  • the remote computing resource(s) 112 may utilize the touch data 124 associated with the touch of the ring finger and the pinky finger and/or a lack of touch of the index finger and/or the middle finger, to select associated model(s) 120, a corresponding animation 128, and generate a hand gesture for display on the VR headset 108.
  • the remote computing resource(s) 112 may also utilize force data 126 generated from the FSR in selecting model(s) 120 and generating the hand gesture.
  • FIG. 10F illustrates the user holding the controller 1002 with the index finger, the middle finger, and the pinky finger.
  • the remote computing resource(s) 112 utilize the touch data 124 and/or the force data 126 to generate an associated hand gesture on the VR headset 108, such as the user 1000 firing a weapon.
  • Although FIGS. 10A-10F illustrate particular combinations of the fingers and thumb of the user 1000 touching the controller 1002 to generate an associated hand gesture, other combinations are possible.
  • the controller 1002 may detect the location associated with the touch input, using the array of proximity sensors, as well as a force associated with the touch input of the user 1000, using the FSR 900.
  • the controller 1002 may transmit the touch data 124 and/or the force data 126 to the remote computing resource(s) 112, where the remote computing resource(s) 112 may select the model(s) 120 corresponding to the touch data 124 and/or the force data 126.
  • the model(s) 120 are previously trained and/or generated utilizing previous motion data 122, the touch data 124, and/or the force data 126. Accordingly, at a later instance, through receiving the touch data 124 and/or the force data 126, the remote computing resource(s) 112 may associate the touch data 124 and/or the force data 126 with one or more model(s) 120. As the model(s) 120 are associated with an animation 128, the remote computing resource(s) 112 may select one or more model(s) 120, generate a corresponding animation 128, and transmit the animation 128 to the VR headset 108 for display.
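A simplified, non-authoritative sketch of this runtime path (controller frame in, trained model selected, animation out) follows. The class and function names, the nearest-profile matching rule, and the animation format are placeholders, not the disclosed implementation.

```python
# Illustrative runtime path: the controller streams touch and force data, a
# previously trained model is selected and evaluated, and the resulting hand
# animation is sent to the headset.

from dataclasses import dataclass
from typing import Optional, Sequence

@dataclass
class ControllerFrame:
    touch: Sequence[float]          # per-sensor proximity values (touch data)
    force: Optional[float] = None   # FSR force value, if any (force data)

def select_model(models: dict, frame: ControllerFrame):
    """Pick the trained model whose stored touch profile best matches this frame.
    'Best match' here is a simple nearest-profile comparison (an assumption)."""
    def distance(profile):
        return sum((a - b) ** 2 for a, b in zip(profile, frame.touch))
    return min(models.values(), key=lambda m: distance(m["touch_profile"]))

def generate_animation(model, frame: ControllerFrame):
    """Produce hand image data (an animation descriptor) for display."""
    return {"gesture": model["gesture"], "strength": frame.force or 0.0}

def send_to_headset(animation) -> None:
    ...  # transport to the headset omitted in this sketch
```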
  • FIGS. 11-13 illustrate various processes according to the embodiments of the instant application.
  • the processes described herein are illustrated as collections of blocks in logical flow diagrams, which represent a sequence of operations, some or all of which may be implemented in hardware, software, or a combination thereof.
  • the blocks may represent computer-executable instructions stored on one or more computer-readable media that, when executed by one or more processors, program the processors to perform the recited operations.
  • computer-executable instructions include routines, programs, objects, components, data structures and the like that perform particular functions or implement particular data types.
  • the order in which the blocks are described should not be construed as a limitation, unless specifically noted.
  • the process 1100 may receive touch data corresponding to a touch input at a controller.
  • the touch data may represent location(s) on the controller where the touch input was received and/or the proximity of the finger(s) of the user relative to the controller (e.g., capacitance values from an array of proximity sensors or capacitive sensors).
  • the process 1100 may receive force data corresponding to the touch input at the controller.
  • the force data may represent an amount of force associated with the touch input at the controller.
  • the force data may be received when the force values are over a certain force threshold.
  • the process 1100 may receive motion data corresponding to a movement of a user operating the controller.
  • the motion data may represent movements of a user, such as a movement of the user’s finger(s) and wrist(s).
  • the motion data may also represent a motion of the controller.
  • the process 1100 may train model(s) using the touch data, the force data, and/or the motion data. For instance, to train the model(s), the process 1100 may associate the touch data, the force data, and/or the motion data to correspond movements of the user, as represented by the motion data. That is, using the touch data, the force data, and/or the motion data, the process 1100 may train model(s) to learn characteristics of the touch of the user and associate these characteristics with certain hand gestures, as determined from the motion data. In some instances, the characteristics may include a location and a force of the touch input(s) on the controller.
  • the touch data, the force data, and/or the motion data may be associated utilizing time stamps corresponding to when the touch data, the force data, and/or the motion data was captured, respectively.
  • the process 1100 may correlate the touch data and/or the force data with the motion data and identify hand gesture(s) of the user. In training the model(s), at later instances, the process 1100 may receive the touch data and/or the force data and determine an associated gesture (without receiving motion data).
  • the process 1100 may loop to block 1102 to receive additional touch data, additional force data (e.g., block 1104), and/or additional motion data (e.g., block 1106).
  • This additional data may be utilized to further train the model(s), which may allow for a more accurate hand gesture determination based on the touch data and/or the force data received at later instances (e.g., during gameplay). That is, the process 1100 may continue to correlate the touch data, the force data, and/or the motion data such that when the process 1100 receives subsequent touch data and/or force data, the process 1100 may accurately determine an associated hand gesture corresponding to the touch data and/or the force data (via the model(s)).
  • correlating the touch data, the force data, and/or the motion data may involve matching time stamps of the touch data, time stamps of the force data, and/or time stamps of the motion data.
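A minimal sketch of this time-stamp correlation step, assuming each data stream is a time-sorted list of (timestamp, payload) samples and an arbitrary matching tolerance, might look like this:

```python
# For each motion-capture sample, find the touch and force samples closest in
# time and emit a (touch, force, gesture-label) training example. The
# tolerance value and record layout are assumptions.

from bisect import bisect_left

def nearest(samples, t):
    """samples: list of (timestamp, value) sorted by timestamp."""
    idx = bisect_left([s[0] for s in samples], t)
    candidates = samples[max(idx - 1, 0): idx + 1]
    return min(candidates, key=lambda s: abs(s[0] - t))

def build_training_pairs(motion, touch, force, tolerance_s=0.02):
    """motion/touch/force: lists of (timestamp, payload), each sorted by time."""
    pairs = []
    for t_m, hand_pose in motion:
        t_t, touch_sample = nearest(touch, t_m)
        t_f, force_sample = nearest(force, t_m)
        if abs(t_t - t_m) <= tolerance_s and abs(t_f - t_m) <= tolerance_s:
            pairs.append((touch_sample, force_sample, hand_pose))
    return pairs
```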
  • the process 1100 may receive touch data.
  • the touch data received at block 1110 may correspond to touch data received during gameplay.
  • the process 1100 may receive force data.
  • the force data received at block 1112 may correspond to force data received during gameplay.
  • the process 1100 may select a model(s).
  • the touch data received at block 1110 may be compared with touch data or a touch profile corresponding to previously generated model(s). Additionally, or alternatively, selecting the model(s) may involve comparing the force data received at block 1112 with force data or the touch profile corresponding to previously generated model(s).
  • the touch profile of the model(s) may include force values associated with force data representing the hand gesture of the model(s) and/or the location associated with the touch data representing the hand gesture of the model(s).
  • the touch data may indicate a touch input at a center of the controller, such as the middle finger and/or the index finger touching the controller (e.g., FIG. 10D). In some instances, the touch data may associate the touch input with certain fingers of the user and/or may indicate those fingers not touching the controller. Using the touch data and/or the force data, a corresponding model may be selected.
  • the process 1100 may input the touch data and/or the force data into the model(s). More particularly, because the model(s) have previously been trained to associate the touch data and/or the force data with motion data and corresponding hand gestures, once trained, the model(s) may receive the touch data and/or the force data and determine hand gestures. In other words, the touch data may indicate which fingers grasp the controller or which fingers do not grasp the controller, as well as a location on the controller corresponding to the touch, or lack thereof. Accordingly, after the model(s) are trained, the model(s) may accept touch data and/or force data representing touch from a user received during gameplay.
  • the process 1100 may generate image data corresponding to the touch data and/or the force data. For instance, after inputting the touch data and/or the force data into the model(s), the process 1100 may use the touch data and/or the force data to generate a hand gesture.
  • the process 1100 may present the image data on a display.
  • the representation of the hand gesture on the display may correspond to the hand gesture of the user interacting with the controller.
  • the process 1100 may perform blocks 1110-1120 in real-time and/or substantially contemporaneously with each other.
  • the process 1100 may loop to block 1110. Therein, the process 1100 may repeat between blocks 1110 and 1120 to continuously receive touch data and/or force data and generate animations corresponding to the touch input from the user. In doing so, as a user plays a game, the touch data and/or the force data received from the controller may change, depending on the levels, scenes, frame, and so forth in the game. Through continuously inputting the touch data and/or the force data into the model(s), the process 1100 may select a corresponding model(s) and continuously generate hand gestures for display.
  • the process 1100 between block 1102 and block 1108 may occur during a first instance of time where the user is not playing in a gameplay mode and where the model(s) are trained.
  • the training (or generation) of the model(s) may occur at a facility where the motion data (captured from the motion capture system(s) 102), the touch data, and/or the force data are captured and correlated with one another to associate the touch data and/or the force data with particular hand gestures.
  • the process 1100 between block 1110 and block 1120 may occur while the user is in gameplay mode.
  • the process 1200 may receive touch data corresponding to a touch input at a controller.
  • the remote computing resource(s) 112 may receive the touch data 124 from the controller (e.g., the controller 110, 300, and/or 800).
  • the touch data 124 may represent the location(s) on the controller corresponding to the touch input(s) of the user.
  • the touch data 124 may indicate that all four fingers of the user are touching the controller, a location of the touch(es), or in some instances, which fingers are not touching the controller and/or which areas of the controller do not receive touch input.
  • the process 1200 may receive force data corresponding to the touch input at the controller.
  • the remote computing resource(s) 112 may receive the force data 126 from the controller (e.g., the controller 110, 300, and/or 800).
  • the force data 126 may represent an amount of force associated with the touch input at the controller or the relative strength associated with a grip of the user on the controller. In instances where the user does not grip the controller (for instance, as shown in FIG. 10A), the remote computing resource(s) 112 may not receive the force data 126 from the controller.
  • the process 1200 may receive motion data corresponding to a movement of a user operating the controller.
  • the remote computing resource(s) 112 may receive the motion data 122 from the motion capture system(s) 102.
  • the motion data 122 may represent movements of a user and/or movements of the controller, using the markers 200, 202.
  • projector(s) of the motion capture system(s) 102 may project light onto markers 200, 202 disposed on the user and/or the controller.
  • the markers 200, 202 may reflect this light, which is then captured by camera(s) of the motion capture system(s) 102.
  • the process 1200 may train a model(s) using the touch data, the force data, and/or the motion data.
  • the remote computing resource(s) 112 may train (or generate) the model(s) 120 using the motion data 122, the touch data 124, and/or the force data 126.
  • training the model(s) 120 may involve associating the touch data 124, the force data 126, and/or the motion data 122 to determine characteristics of the touch data 124 and/or the force data 126 that correspond to movements of the user.
  • the remote computing resource(s) 112 may generate image data or an animation(s) corresponding to the touch data 124 received from the controller.
  • the remote computing resource(s) 112 may correlate the touch data 124 and/or the force data 126 with a gesture of the user using the previous motion data 122.
  • associating the touch data 124, the force data 126, and/or the motion data 122 may involve matching time stamps of the touch data 124, time stamps of the force data 126, and time stamps of the motion data 122.
  • the remote computing resource(s) 112 may learn (e.g., using machine learning algorithms), how the touch data 124 and/or the force data 126 relates to hand gestures of the user.
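The disclosure does not name a specific learning algorithm, so the sketch below uses a k-nearest-neighbors classifier from scikit-learn as a stand-in, fit on the correlated (touch, force, gesture-label) examples; any other supervised model could take its place.

```python
# Stand-in training sketch: the feature vector is the touch sample plus the
# force sample; the label is the hand gesture derived from the motion data.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def train_gesture_model(pairs):
    """pairs: iterable of (touch_sample, force_sample, gesture_label)."""
    features, labels = [], []
    for touch_sample, force_sample, gesture_label in pairs:
        features.append(list(touch_sample) + [force_sample])
        labels.append(gesture_label)
    model = KNeighborsClassifier(n_neighbors=5)
    model.fit(np.asarray(features), np.asarray(labels))
    return model

def predict_gesture(model, touch_sample, force_sample):
    """At a later instance, infer the hand gesture from touch and force alone."""
    return model.predict([list(touch_sample) + [force_sample]])[0]
```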
  • the process 1200 may loop to block 1202 to receive additional touch data 124, additional force data 126, and/or additional motion data 122.
  • the remote computing resource(s) 112 may receive additional touch data 124 (e.g., block 1202), additional force data 126 (e.g., block 1204), and/or additional motion data 122 (e.g., block 1206) to train the model(s) 120. Training the model(s) 120 may allow for a more accurate determination of the hand gesture performed by the user.
  • the process 1300 may receive touch data.
  • the remote computing resource(s) 112 may receive, from the controller (e.g., the controller 110, 300, and/or 800), the touch data 124.
  • the controller (e.g., the controller 110, 300, and/or 800) may generate the touch data 124, which may indicate the placement of the user’s fingers or hand on the controller 110.
  • the process 1300 may receive force data.
  • the remote computing resource(s) 112 may receive, from the controller (e.g., the controller 110, 300, and/or 800), the force data 126.
  • an FSR (e.g., the FSR 900) of the controller may generate the force data 126, which may indicate an amount of force associated with touches of the user on the controller.
  • the process 1300 may input the touch data and/or the force data into the model(s).
  • the processor(s) 116 of the remote computing resource(s) 112 may input the touch data 124 and/or the force data 126 into the model(s) 120.
  • Because the model(s) 120 are previously trained to associate the touch data 124 and/or the force data 126 with the motion data 122 and hand gestures, once trained, the model(s) 120 may receive the touch data 124 and/or the force data 126 to determine hand gestures.
  • the remote computing resource(s) 112 may selectively input the touch data 124 and/or the force data 126 into model(s) 120 that closely match or are associated with the touch data 124 and/or the force data 126. For instance, if the touch data 124 indicates that the user grips the controller 110 with four fingers, the processor(s) 116 may select a model 120 that corresponds to a four-finger grip.
  • At block 1308, the process 1300 may generate image data corresponding to the touch data and/or the force data. For instance, the processor(s) 116 of the remote computing resource(s) 112, using the model(s) 120, may determine a hand gesture corresponding to the touch data 124 and/or the force data 126.
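As an illustration only, routing the detected finger-contact pattern to a matching previously trained model (e.g., a four-finger grip to a four-finger-grip model) could be sketched as below; the pattern keys and threshold are assumptions.

```python
# Route the detected contact pattern to the model trained for that pattern.

def contact_pattern(finger_proximity, threshold=0.8):
    """Boolean tuple: which of (index, middle, ring, pinky) touch the handle."""
    return tuple(p >= threshold for p in finger_proximity)

def route_to_model(models_by_pattern, finger_proximity, default_key=None):
    """models_by_pattern: dict keyed by contact-pattern tuples."""
    pattern = contact_pattern(finger_proximity)
    return models_by_pattern.get(pattern, models_by_pattern.get(default_key))

# Example: all four fingers on the handle selects the four-finger-grip model.
# model = route_to_model(models, [0.9, 0.95, 0.9, 0.85])
```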
  • the remote computing resource(s) 112 may generate image data, such as the animation 128, corresponding to the hand gesture.
  • the model(s) 120 may generate the animation 128 of the hand utilizing the touch data 124 and/or the force data 126 (e.g., crushing a rock or dropping an object).
  • the process 1300 may present the image data on a display.
  • the remote computing resource(s) 112 may transmit the image data to the VR headset 108 (or another computing device), whereby the VR headset 108 may display the image data.
  • the VR headset 108 may display the hand gesture on the display according to the touch data 124 and/or the force data 126 received at the controller(s) 110.
  • the representation of the hand gesture on the display of the VR headset 108 may correlate with the hand gesture in which the user interacts with the controller(s) 110.
  • the process 1300 may perform blocks 1302-1310 in real-time and/or substantially contemporaneously with each other. Additionally, the pre-generation of the model(s) 120 may allow for faster computing when receiving the touch data 124 and/or the force data 126 to generate an associated hand gesture.
  • the process 1300 may loop to block 1302, where the process 1300 may repeat between blocks 1302 and 1310 to continuously generate image data.
  • the touch data 124 and/or the force data 126 corresponding to how the user holds and grips the controller 110 may update, and through inputting the touch data 124 and/or the force data 126 into the model(s) 120, the process 1300 may continuously generate hand gestures for display on the VR headset 108.
  • a system comprising:
  • one or more non-transitory computer-readable media storing computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform acts comprising:
  • the one or more non-transitory computer-readable media store computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to further perform an act comprising transmitting, to a virtual reality display, the image data corresponding to the representation of the hand.
  • the associating the first motion data with the first touch data and the first force data includes matching a first time stamp of the first motion data with a first time stamp of the first touch data and a first time stamp of the first force data;
  • the associating the second motion data with the second touch data and second force data includes matching a second time stamp of the second motion data with a second time stamp of the second touch data and a second time stamp of the second force data.
  • a system comprising:
  • one or more non-transitory computer-readable media storing computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform acts comprising:
  • the motion data comprises first motion data and the touch data comprises first touch data
  • the movement of the hand comprises a first movement
  • the touch input of the hand comprises a first touch input
  • the one or more non-transitory computer-readable media store computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to further perform acts comprising:
  • transmitting the image data causes a remote device to display the representation of the hand.
  • the one or more non-transitory computer-readable media store computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to further perform an act comprising receiving force data corresponding to an amount of force associated with the touch input, and wherein the generating the training model corresponding to the gesture of the hand is further based at least in part on the force data.
  • touch data indicates one or more locations on the controller receiving the touch input.
  • associating the motion data and the touch data comprises associating a time stamp of the motion data with a time stamp of the touch data.
  • receiving the motion data comprises receiving the motion data from a camera communicatively coupled to the system;
  • receiving the touch data comprises receiving the touch data from a controller communicatively coupled to the system.
  • one or more non-transitory computer-readable media storing computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform acts comprising:
  • touch data indicating a touch input received at the controller or force data indicating an amount of force associated with the touch input; analyzing at least one of the touch data or the force data with respect to a trained model that is associated with a hand gesture; determining, based at least in part on the analyzing, that at least one of the touch data or the force data corresponds to the hand gesture;
  • the touch data comprises first touch data
  • the force data comprises first force data
  • the trained model comprises a first trained model
  • the touch input comprises first touch input
  • the image data comprises first image data
  • the hand gesture comprises a first hand gesture
  • the one or more non-transitory computer- readable media store computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to further perform acts comprising:
  • second touch data representing a second touch input received at the controller; or second force data indicating an amount of force associated with the second touch input;
  • the image data comprises first image data and the hand gesture comprises a first hand gesture
  • the one or more non- transitory computer-readable media store computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to further perform acts comprising: determining a second hand gesture using at least one or more predictive modeling techniques and based at least in part on the first hand gesture;
  • the trained model comprises a model previously trained using at least one of:
  • previous touch data received from one or more controllers during a first period of time; previous force data received from the one or more controllers during the first period of time; or
  • controller comprises a first controller
  • touch data comprises first touch data
  • force data comprises first force data
  • image data comprises first image data
  • one or more non-transitory computer- readable media store computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to further perform acts comprising:
  • second touch data indicating a touch input received at the second controller
  • second force data indicating an amount of force associated with the touch input received at the second controller
  • the image data comprises a three dimensional (3D) representation of the hand gesture.
  • the one or more non-transitory computer-readable media store computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to further perform acts comprising:
  • the one or more trained models comprise the trained model, and wherein individual trained models of the one or more trained models are associated with one or more hand gestures, and
  • analyzing at least one of the touch data or the force data with respect to the trained model is based at least in part on the comparing.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Position Input By Displaying (AREA)
PCT/US2019/032928 2018-06-20 2019-05-17 Virtual reality hand gesture generation WO2019245681A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP19822879.3A EP3807747A4 (en) 2018-06-20 2019-05-17 GENERATION OF HAND GESTURES OF VIRTUAL REALITY
CN201980041061.3A CN112437909A (zh) 2018-06-20 2019-05-17 虚拟现实手势生成
KR1020217001241A KR20210021533A (ko) 2018-06-20 2019-05-17 가상 현실 핸드 제스처 생성
JP2020570736A JP7337857B2 (ja) 2018-06-20 2019-05-17 仮想現実の手のジェスチャ生成

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201862687780P 2018-06-20 2018-06-20
US62/687,780 2018-06-20
US16/195,718 2018-11-19
US16/195,718 US10987573B2 (en) 2016-10-11 2018-11-19 Virtual reality hand gesture generation

Publications (1)

Publication Number Publication Date
WO2019245681A1 true WO2019245681A1 (en) 2019-12-26

Family

ID=68982963

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/032928 WO2019245681A1 (en) 2018-06-20 2019-05-17 Virtual reality hand gesture generation

Country Status (5)

Country Link
EP (1) EP3807747A4 (ko)
JP (1) JP7337857B2 (ko)
KR (1) KR20210021533A (ko)
CN (1) CN112437909A (ko)
WO (1) WO2019245681A1 (ko)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024085829A1 (en) * 2022-10-17 2024-04-25 İstanbul Geli̇şi̇m Üni̇versi̇tesi̇ A system for learning to play old in a meta verse environment using virtual reality technology
US11991222B1 (en) 2023-05-02 2024-05-21 Meta Platforms Technologies, Llc Persistent call control user interface element in an artificial reality environment

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116650950B (zh) * 2023-06-08 2024-02-06 廊坊市珍圭谷科技有限公司 一种用于vr游戏的控制系统及方法

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110080339A1 (en) * 2009-10-07 2011-04-07 AFA Micro Co. Motion Sensitive Gesture Device
US20120214594A1 (en) * 2011-02-18 2012-08-23 Microsoft Corporation Motion recognition
US20140240267A1 (en) * 2010-04-23 2014-08-28 Handscape Inc. Method Using a Finger Above a Touchpad for Controlling a Computerized System
US20160357261A1 (en) * 2015-06-03 2016-12-08 Oculus Vr, Llc Virtual Reality System with Head-Mounted Display, Camera and Hand-Held Controllers
US20170351345A1 (en) * 2015-02-27 2017-12-07 Hewlett-Packard Development Company, L.P. Detecting finger movements
US20180099219A1 (en) * 2016-10-11 2018-04-12 Valve Corporation Electronic controller with finger sensing and an adjustable hand retainer

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130154952A1 (en) * 2011-12-16 2013-06-20 Microsoft Corporation Gesture combining multi-touch and movement
WO2014100045A1 (en) * 2012-12-17 2014-06-26 Qi2 ELEMENTS II, LLC Foot-mounted sensor systems for tracking body movement
US10514766B2 (en) * 2015-06-09 2019-12-24 Dell Products L.P. Systems and methods for determining emotions based on user gestures
CN108140360B (zh) * 2015-07-29 2020-12-04 森赛尔股份有限公司 用于操纵虚拟环境的系统和方法
US10549183B2 (en) * 2016-10-11 2020-02-04 Valve Corporation Electronic controller with a hand retainer, outer shell, and finger sensing
WO2019142329A1 (ja) * 2018-01-19 2019-07-25 株式会社ソニー・インタラクティブエンタテインメント 情報処理装置、情報処理システム、情報処理方法、及びプログラム

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110080339A1 (en) * 2009-10-07 2011-04-07 AFA Micro Co. Motion Sensitive Gesture Device
US20140240267A1 (en) * 2010-04-23 2014-08-28 Handscape Inc. Method Using a Finger Above a Touchpad for Controlling a Computerized System
US20120214594A1 (en) * 2011-02-18 2012-08-23 Microsoft Corporation Motion recognition
US20170351345A1 (en) * 2015-02-27 2017-12-07 Hewlett-Packard Development Company, L.P. Detecting finger movements
US20160357261A1 (en) * 2015-06-03 2016-12-08 Oculus Vr, Llc Virtual Reality System with Head-Mounted Display, Camera and Hand-Held Controllers
US20180099219A1 (en) * 2016-10-11 2018-04-12 Valve Corporation Electronic controller with finger sensing and an adjustable hand retainer

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
PAUL KRY ET AL.: "Experimental Robotics", vol. 39, 1 January 2006, SPRINGER BERLIN HEIDELBERG, article "Grasp Recognition and Manipulation with the Tango", pages: 551 - 559
SEUNGJU HAN ET AL.: "Grip-ball: A spherical multi-touch interface for interacting with virtual worlds", 2013 IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS, 11 January 2013 (2013-01-11), pages 600 - 601, XP032348905, DOI: 10.1109/ICCE.2013.6487035
SVEN MAYER ET AL.: "Estimating the Finger Orientation on Capacitive Touchscreens Using Convolutional Neural Networks", PROCEEDINGS OF THE 2017 ACM INTERNATIONAL CONFERENCE ON INTERACTIVE SURFACES AND SPACES, ACMPUB27, NEW YORK, NY, USA, 17 October 2017 (2017-10-17), pages 220 - 229, XP058458482, DOI: 10.1145/3132272.3134130

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024085829A1 (en) * 2022-10-17 2024-04-25 İstanbul Geli̇şi̇m Üni̇versi̇tesi̇ A system for learning to play old in a meta verse environment using virtual reality technology
US11991222B1 (en) 2023-05-02 2024-05-21 Meta Platforms Technologies, Llc Persistent call control user interface element in an artificial reality environment

Also Published As

Publication number Publication date
EP3807747A1 (en) 2021-04-21
KR20210021533A (ko) 2021-02-26
JP2021527896A (ja) 2021-10-14
JP7337857B2 (ja) 2023-09-04
EP3807747A4 (en) 2022-03-09
CN112437909A (zh) 2021-03-02

Similar Documents

Publication Publication Date Title
US11992751B2 (en) Virtual reality hand gesture generation
US11294485B2 (en) Sensor fusion algorithms for a handheld controller that includes a force sensing resistor (FSR)
US11465041B2 (en) Force sensing resistor (FSR) with polyimide substrate, systems, and methods thereof
US11625898B2 (en) Holding and releasing virtual objects
EP3265895B1 (en) Embedded grasp sensing devices, systems, and methods
US11185763B2 (en) Holding and releasing virtual objects
WO2019245681A1 (en) Virtual reality hand gesture generation
JP7459108B2 (ja) 動的センサ割当
US10649583B1 (en) Sensor fusion algorithms for a handheld controller that includes a force sensing resistor (FSR)
JP7358408B2 (ja) 仮想物体の保持および解放
CN112219246B (zh) 具有聚酰亚胺衬底的力感测电阻器(fsr)、系统和其方法
CN113474748B (zh) 用于包括力感测电阻器(fsr)的手持式控制器的传感器融合算法
JP7361725B2 (ja) 力検知抵抗器(fsr)を含むハンドヘルドコントローラのセンサ融合アルゴリズム
JP7383647B2 (ja) 仮想物体の保持および解放

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19822879

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020570736

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20217001241

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2019822879

Country of ref document: EP

Effective date: 20210114