WO2019080902A1 - Interactive intelligent virtual object - Google Patents

Interactive intelligent virtual object

Info

Publication number
WO2019080902A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
virtual object
player
animation
sensors
Prior art date
Application number
PCT/CN2018/111917
Other languages
French (fr)
Inventor
Pak Kit Lam
Peter Han Joo CHONG
Original Assignee
Zyetric Inventions Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zyetric Inventions Limited filed Critical Zyetric Inventions Limited
Publication of WO2019080902A1 publication Critical patent/WO2019080902A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/21Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/211Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/21Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/213Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/428Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80Special adaptations for executing a specific game genre or game mode
    • A63F13/833Hand-to-hand fighting, e.g. martial arts competition
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/90Constructional details or arrangements of video game devices not provided for in groups A63F13/20 or A63F13/25, e.g. housing, wiring, connections or cabinets
    • A63F13/92Video game devices specially adapted to be hand-held while playing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • G06F1/1686Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being an integrated camera
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • G06F1/1694Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being a single or a set of motion sensors for pointer control or gesture input obtained by sensing movements of the portable computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304Detection arrangements using opto-electronic means

Definitions

  • the present disclosure generally relates to virtual objects, and more specifically, to interactions with computer-implemented virtual objects.
  • Virtual objects are provided in computer-generated environments and relayed to one or more users on display screens.
  • the virtual objects can communicate with their users in response to direct user input via user input devices, such as touch screens, joysticks, game pads, and other control buttons.
  • a method includes, at an electronic device having a display screen and one or more sensors, displaying a virtual object on the display screen, and in response to detecting a first set of data at the one or more sensors, where the first set of data satisfies a set of player presence criteria indicative of a presence of a user at the electronic device, displaying an animation at the virtual object.
  • the method includes, while displaying the animation at the virtual object, detecting a second set of data at the one or more sensors, and in accordance with a determination that the second set of data satisfies a set of movement criteria, where the second set of data satisfies the set of movement criteria when the second set of data indicates a movement of the display screen relative to the user of the device, dynamically modifying the displayed animation of the virtual object based on the second set of data.
  • dynamically modifying the displayed animation of the virtual object includes rendering the displayed animation to track the movement of the device relative to the user while the animation is being displayed.
  • the method includes, in response to detecting the first set of data at the one or more sensors, determining whether the first set of data satisfies the set of player presence criteria, and in accordance with a determination that the first set of data does not satisfy the set of player presence criteria, forgoing displaying the animation at the virtual object.
  • the one or more sensors includes a camera.
  • the first set of data includes image data detected by the camera, and the set of player presence criteria includes a criterion that is met when a facial feature of the user is detected in the image data.
  • the second set of data includes image data including a facial feature of the user detected by the camera, and the set of movement criteria includes a criterion that is met when the image data indicates a movement of the facial feature in a direction toward an edge of a display screen.
  • dynamically modifying the displayed animation of the virtual object includes adjusting the displayed animation to match the detected movement in the direction toward the edge of the display screen.
  • the one or more sensors comprises a motion sensor
  • the second set of data comprises an orientation of the device detected by the motion sensor
  • the set of movement criteria includes a criterion that is met when the second set of data indicates a change in orientation of the device.
  • the motion sensor includes at least one of a gyroscope and an accelerometer.
  • the change in orientation of the device includes at least one of an angular rotation and a linear displacement of the device, and dynamically modifying the displayed animation of the virtual object includes adjusting the displayed animation to match the angular rotation or the linear displacement of the device.
  • the method includes, subsequent to initiating display of the animation at the virtual object, detecting a third set of data at the one or more sensors, determining whether the third set of data satisfies the set of player presence criteria, and in accordance with a determination that the third set of data satisfies the set of player presence criteria, forgoing incrementing a score associated with the user and incrementing a score associated with the virtual object, while in accordance with a determination that the third set of data does not satisfy the set of player presence criteria, incrementing the score associated with the user and forgoing incrementing the score associated with the virtual object.
  • the third set of data is detected upon lapse of a predetermined time interval T after initiating display of the animation at the virtual object.
  • the method includes activating the one or more sensors in response to launching an application corresponding to the virtual object at the electronic device.
  • the virtual object is a virtual boxer
  • the displayed animation comprises a punching action by the virtual boxer in a direction out of the display screen and toward the user.
  • a computer readable storage medium stores one or more programs, and the one or more programs include instructions, which when executed by an electronic device having a display screen and one or more sensors, cause the device to perform any of the methods described above and herein.
  • an electronic device includes a display screen, one or more sensors, one or more processors, memory, and one or more programs, and the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods described above and herein.
  • an electronic device includes means for performing any of the methods described above and herein.
  • FIG. 1A depicts a front view of an example electronic device that implements various embodiments of the present invention
  • FIG. 1B depicts a back view of an example electronic device that implements various embodiments of the present invention
  • FIG. 2A depicts an example of a virtual object that is not providing an artificially intelligent interaction in accordance with various embodiments of the present invention.
  • FIG. 2B depicts an example of a virtual object that is providing an artificially intelligent interaction in accordance with various embodiments of the present invention.
  • FIG. 3A depicts an example of a virtual object that provides an artificially intelligent interaction based on data indicating that the device is translated, in accordance with various embodiments of the present invention.
  • FIG. 3B depicts an example of a virtual object providing an artificially intelligent interaction based on data indicating that the device is rotated, in accordance with various embodiments of the present invention.
  • FIG. 4 depicts an example method for providing interactive intelligent virtual objects in accordance with various embodiments of the present invention.
  • FIG. 5 depicts a computer system, such as a smart device, that may be used to implement various embodiments of the present invention.
  • an interactive, intelligent virtual object (also referred to as an “intelligent virtual object” and/or a “virtual object”) interacts with data detected from a user.
  • the intelligent virtual object is provided with certain artificial intelligence ( “AI” ) to determine the most suitable response toward a user’s real-time action or movement.
  • the intelligent virtual object can detect a human player moving his or her face or body aside from the computer’s display screen to avoid a hit being imparted by the virtual object.
  • Such intelligent virtual objects enhance interaction between the human player and the virtual object and elevate the user experience to another level.
  • smart device 100 which can be utilized to implement various embodiments of the present technology is shown.
  • smart device 100 is a smart phone or tablet computing device.
  • the embodiments described herein are not limited to performance on a smart device, and can be implemented on other types of electronic devices, such as wearable devices, computers, or laptop computers.
  • a front side of the smart device 100 includes a display screen, such as a touch sensitive display 102, a speaker 122, and a front-facing camera 120.
  • the touch-sensitive display 102 can detect user inputs received thereon, such as a number and/or location of finger contact (s) on the screen, contact duration, contact movement across the screen, contact coverage area, contact pressure, and so on. Such user inputs can generate various interactive effects and controls at the device 100.
  • the front-facing camera 120 faces the user and captures the user’s movements, such as hand or facial gestures, which may be registered and analyzed as input for generating the intelligent responses described herein.
  • the touch-sensitive display 102 and speaker 122 further promote user interaction with various programs at the device, such as by detecting user inputs while displaying visual effects on the display screen and/or while generating verbal communications or sound effects from the speaker 122.
  • FIG. 1B shows an example back view of the smart device 100 having a back-facing camera 124.
  • the back-facing camera 124 captures images of an environment or surrounding, such as a room or location that the user is in or observing.
  • smart device 100 shows such captured image data as a background to an augmented reality experience displayed on the display screen.
  • smart device 100 includes a variety of other sensors and/or input mechanisms to receive user and environmental inputs, such as microphones (which is optionally integrated with speaker 122) , movement/orientation sensors (e.g., one or more accelerometers, gyroscopes, digital compasses) , depth sensors (which are optionally part of front-facing camera 120 and/or back-facing camera 124) , and so on.
  • smart device 100 is similar to and includes some or all of the components of computing system 500 described below in FIG. 5.
  • the present technology is performed at a smart device having display screen 102 and various movement/orientation sensors.
  • virtual objects generate a response that is not considered an intelligent response (e.g., FIG. 2A)
  • virtual objects generate intelligent responses (e.g., FIG. 2B)
  • At FIG. 2A, for example, in a computer game running on a smart device 200 having a display screen 202, a human player 204 plays against an “unintelligent” virtual object 206 controlled by a computer at the smart device 200, whereby the unintelligent virtual object 206 does not respond intelligently to the human player’s reaction and/or body movement. Specifically, the unintelligent virtual object 206 attempts to virtually punch the human player 204 with its virtual fist 208.
  • the human player 204 shifts his face or body from position 210a to position 210b in a direction 212 (e.g., to a side of the screen 202) to avoid being hit in punching direction 214 of the traveling virtual fist 208.
  • the unintelligent virtual object 206 does not provide an intelligent response because it is not provided with the intelligence described herein, such as knowledge or data detected by sensors at the smart device 200 indicating a movement or reaction of the human player 204. Instead, the unintelligent virtual object 206 continues the virtual punch in punching direction 214 toward position 210a, thereby missing its opponent (e.g., player 204) that moved to position 210b. Without knowing that the player 204 has avoided the virtual fist 208, unintelligent virtual object 206 cannot adjust the direction of its current attack toward the “new” position 210b of the human player 204.
  • the unintelligent virtual object 206 cannot determine a suitable direction or strategy for its next attack because the virtual object 206 is not provided with knowledge that the human player 204 has moved to the new position 210b. Further, in some cases, the unintelligent virtual object 206 does not know it has missed the player 204.
  • the human player 204 plays against an “intelligent” virtual object 220 controlled by the computer at the smart device 200, whereby the intelligent virtual object 220 responds to the human player’s reactions and/or body movements.
  • the intelligent virtual object 220 attempts to virtually punch the human player 204 with an initial punch (e.g., initial throw in the punching direction 214 of the virtual fist 208, as illustrated above in FIG. 2A) .
  • the human player 204 shifts his face or body from position 210a to position 210b in direction 212 (e.g., to a side of the screen 202) to avoid the initial punch.
  • the intelligent virtual object 220 follows or otherwise tracks the reaction or movement of player 204 and accordingly directs its next punch (and in some cases, redirects its current punch) with virtual fist 222 in new direction 224 toward new position 210b, which hits the player 204.
  • the intelligent virtual object 220 provides an intelligent response because it was provided with the intelligence described herein, such as knowledge or data detected by sensors at the smart device 200 indicating a movement or reaction of the human player 204.
  • the intelligent virtual object 220 delivers its immediate next virtual punch in a new punching direction 224 based on the detected movement or reaction of player 204.
  • the intelligent virtual object 220 knows, based on the detected data, that the player 204 avoided the initial punch, and therefore adjusts its response by directing its next attack toward the new position 210b of the human player 204.
  • the direction of a current attack is adjusted as the intelligent virtual object 220 tracks reactions and movements of the user based on detected data such that the current punch is redirected (e.g., in real-time) to follow the user’s movements until the user is hit.
  • the intelligent virtual object 220 utilizes the knowledge to determine a strategy for attack, such as which fist to attack with based on which side of the display screen 202 the player 204 has moved to. For example, at FIG. 2B, the intelligent virtual object 220 utilizes its left fist 224 to deliver the attack because the player 204 has moved toward that side of the display screen 202. In some examples, as shown at FIG. 2B, the intelligent virtual object 220 shifts its gaze as its eyes follow the player 204.
  • the intelligent virtual object 220 interacts with, or otherwise delivers visual communications with, the player 204 through the device’s touch screen (e.g., display screen 202).
  • Other interactions can be contemplated, such as interactions delivered by output via other common interfaces provided on or coupled to the device (e.g., audibly via speaker 122, tactually via vibration generators in the smart device 200) .
  • the interactive and intelligent virtual object (e.g., intelligent virtual object 220 of FIG. 2B) is provided with the artificial intelligence that utilizes data indicating the player’s movements and reactions to provide one or more intelligent responses (e.g., redirecting a current attack, strategizing and directing a next attack, displaying other interactions including a gaze tracking the user, knowing when an attack has hit or missed the player, selecting which fist to deliver the attack) .
  • the data indicating the player’s movements or reactions can be gathered by one or more sensors that are provided at, or in operative communication with, the smart device on which the intelligent virtual object exists (e.g., in a computer game running on the smart device) .
  • Such sensors can include various game sensors such as a camera, front-facing camera, back-facing camera, gyroscope, accelerometer, and so on. Described further below are examples of using data collected by a front-facing camera and/or a gyroscope to enable the intelligent virtual object to actively adjust its response into a suitable response or strategy against the player in real-time during a game.
  • the intelligent virtual object is configured to respond to human movements that are detected by a front-facing camera, such as front-facing camera 226 from FIG. 2B and/or front-facing camera 120 at FIG. 1.
  • the intelligent virtual object 220 is provided with artificial intelligence that uses image data sensed from the front-facing camera 226.
  • the virtual object 220 is a virtual boxer (hereinafter also referred to as “virtual boxer 220” ) of a boxing game running on the smart device 200.
  • the human player 204 holds smart device 200 in his hand during the boxing game.
  • the virtual boxer 220, in communication with various game sensors such as camera 226, matches the player’s movements to hit its opponent.
  • the virtual boxer 220 detects, via camera 226, a location of the player’s face and attempts to punch the face with its virtual fist based on the detected location.
  • if the virtual boxer 220 determines, based on the detected data, that the player 204 has not moved aside in response to the virtual boxer’s punch (e.g., if the player 204 has remained at initial position 210a after a punch in direction 214 toward position 210a is delivered), then the human player 204 can be considered to be hit by the virtual fist.
  • if the virtual boxer 220 determines that the player 204 has not moved after a predetermined period of time after initiation of the punch (e.g., after 0.5 seconds of initiation of the punch), the human player 204 is considered to be hit by the virtual fist.
  • if the human player 204 sees that the virtual boxer 220 is ready to punch his face, player 204 “escapes” by turning his face away from the screen so that the camera 226 no longer detects his face or by shifting his body in direction 212 to new position 210b out of the punch.
  • as the human player’s face is “leaving” the screen 202 or otherwise leaving a field of view of the virtual boxer 220 provided via camera 226, the front-facing camera 226 detects which direction the player 204 disappears to based on the camera’s detection of the movement of the face and/or body.
  • the camera 226 can send the data to the virtual boxer 220, which is programmed to determine which “side” of the device 200 the player 204 has escaped to and provide an intelligent response accordingly. For example, by using the updated location of the player 204, the virtual boxer 220 intelligently responds in real-time by attempting to chase, track, match, or otherwise follow the human player 204 and move its punch in the direction of the new location of the player 204 before the player 204 successfully “leaves” the screen 202. In some examples, by using the updated location of the player 204, the virtual boxer 220 intelligently responds in real-time by generating a new punch at the new location before the player 204 successfully “leaves” the screen.
  • the virtual boxer 220 can shift its gaze or stare toward the side of the screen 202 that the human player 204 has successfully disappeared to, and/or wait (e.g., maintain the gaze as if watching for the player 204) for the human player 204 to “return” to the screen 202 based on motion detected by camera 226 to direct its next virtual punch.
  • the virtual boxer 220 receives motion data related to other types of motion (e.g., other people walking in the background) and discerns that the movement is not related to the player 204, and therefore continues to watch for return of the player 204 to deliver its next punch.
  • the virtual boxer 220 and/or its fist 222 has 3-D effects that cause the virtual boxer and/or fist to appear to come out of the screen 202 while throwing a punch.
  • other features of the virtual boxer 220 such as eyes of the virtual boxer 220 and/or head or body positioning of the virtual boxer 220, rotate and follow the player’s movements across the screen 202 based on detection of the player’s movements relative to the display screen 202.
  • the front-facing camera 226 includes a depth sensor that detects a distance of the player from the display screen 202 and generates a response accordingly. For example, if the player’s face is within a predetermined threshold distance (e.g., a minimum distance) from the display screen 202, the virtual boxer 220 can determine that the player 204 is within hitting distance and deliver a punch. If the player’s face is outside of the predetermined threshold distance, the virtual boxer 220 can determine that the player 204 is too far to be hit.
  • the virtual boxer 220 can generate other types of responses instead of delivering a hit, such as tricking the player 204 to get closer to the screen 202, pretending to not “see” the player 204 by averting its eyes, detecting motion of background objects and adjusting its eyes to track the background objects to fool the player 204 into thinking it is not watching for the player’s reactions, and/or other animations and graphics.
  • the virtual boxer 220 can deliver the punch.
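
The camera-driven behavior described in the preceding paragraphs can be summarized as a small decision routine. The following Python sketch is illustrative only: the face-tracking input (visibility, screen position, distance, escape side) and the distance threshold are assumptions made for the example, not elements defined by this disclosure.

```python
# Illustrative sketch of the camera-driven boxer logic. The face dictionary is
# assumed to come from a hypothetical face-tracking layer; the hit-distance
# threshold is an arbitrary example value.

HIT_DISTANCE_M = 0.6  # assumed threshold distance from the display screen

def choose_boxer_response(face):
    """face: {'visible': bool, 'center_x': 0..1, 'distance_m': float, 'escape_side': str or None}."""
    if not face["visible"]:
        # Player has left the field of view: gaze toward the side they escaped to
        # and wait for them to return before the next punch.
        return ("gaze_toward", face.get("escape_side") or "center")
    if face["distance_m"] > HIT_DISTANCE_M:
        # Outside the threshold distance: too far to hit, so lure or feint instead.
        return ("lure_player_closer", None)
    # Within hitting distance: punch toward the current face position, using the
    # fist on the side of the screen the player has moved to.
    fist = "left" if face["center_x"] < 0.5 else "right"
    return ("punch", fist)
```
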
  • the intelligent virtual object is configured to respond to human movements that are detected by a gyroscope at the smart device 200.
  • an intelligent virtual object 320 (hereinafter also referred to as a virtual boxer 320) on display screen 302 of smart device 300 is provided with artificial intelligence that uses data captured by a gyroscope (not shown) in the device 300.
  • the virtual boxer 320 is equipped with decision-making abilities that utilize the knowledge based on detected movement of the device 300 relative to the human player 304.
  • the virtual boxer 320 can generate an intelligent response based on data detected by a combination of sensors at the device 300, such as intelligence detected by other game sensors and/or camera 326, which may be similar to camera 226 of FIG. 2B discussed above.
  • human movement is detected by both the camera 326 and gyroscope built into the smart device 300.
  • a combination of game sensors, such as the gyroscope with the camera 326, is utilized for detecting data.
  • utilizing multiple sensors provides faster and more accurate feedback of the player’s movements than a single game sensor alone.
  • the gyroscope allows the smart device 300 to detect a physical change in the orientation of the device 300, which may be held in the player’s hand. For example, as shown at FIG. 3A, the gyroscope, alone and/or in combination with the camera 326, detects that the smart device 300 has been translated in a leftward direction 312 from a first position 310a to a second position 310b.
  • the virtual boxer 320 provides an intelligent response by adjusting its hit from fist 322 accordingly and in combination with data from the camera 326. For example, the virtual boxer 320 of FIG. 3A at position 310b has adjusted its gaze to focus or otherwise follow the player 304 as device 300 is translated.
  • the virtual boxer 320 delivers the punch from fist 322, which is a left fist of the virtual boxer 320, because the virtual boxer 320 detects that the player 304 has moved toward a left side of display screen 302 from the boxer’s vantage point. Further, the virtual boxer 320 delivers a punch angled toward the player 304 in direction 314. In some examples, the virtual boxer 320 at the first position 310a is calm and does not deliver the punch, but becomes agitated or otherwise activated when the device 300 is translated such that at the second position 310b, the virtual boxer 320 generates the hit. It is noted that in some examples, the gyroscope, alone and/or in combination with camera 326, detects translation of the device 300 relative to player 304 in other directions, such as toward the right hand side, left hand side, upper side or lower side.
  • the gyroscope can detect that the player 304 holding device 300 has rotated the device 300 at an angle, as shown at angled position 310c and angled position 310d. In these positions, the camera 326 is rotated away from the player 304. In some cases, the player 304 is still within the field of view of the camera 326. In other cases, the player 304 is no longer within the field of view, in which case the virtual boxer 320 generates an intelligent response based on data from other sensors such as the gyroscope.
  • the virtual boxer 320 estimates a location of the player based on the detected rotation (e.g., a degree of rotation detected) and adjusts its hits, gaze, and/or strategy accordingly. For example, as shown at FIG. 3B, the virtual object 320 adjusts the punch of its fists accordingly and in combination with data from camera 326. In angled position 310c, the virtual boxer 320 generates an intelligent response based on the detected data to deliver a punch utilizing its left hand 322a because the player 304 is closer to its left side of the display screen 302. The left-handed punch is angled toward the player 304 in a direction 314a.
  • the virtual boxer 320 in angled position 310d, the virtual boxer 320 generates an intelligent response based on the detected data to deliver a punch utilizing its right hand 322b because the player 304 is closer to its right side of the display screen 302.
  • the right-handed punch is angled toward the player 304 in a direction 314b.
  • the virtual boxer 320 also adjusts its gaze between the rotated positions to track the player 304, such that its gaze at position 310c is also in direction 314a, and its gaze at position 310d is also in direction 314b.
  • the gyroscope detects human movement when the player 304 is physically static (e.g., has not shifted position) , such that only the device 300 is being moved.
  • the gyroscope and camera 326 can also detect human movement when the player 304 has physically shifted while the device 300 has been moved.
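
The gyroscope-driven behavior of FIGS. 3A-3B can be pictured the same way. In the sketch below, the yaw sign convention, the thresholds, and the mapping from device motion to a fist are assumptions chosen for illustration rather than values taken from the disclosure.

```python
# Illustrative response to device motion reported by a gyroscope (and,
# optionally, lateral translation inferred with the camera). All thresholds
# and the yaw sign convention are assumptions.

def respond_to_device_motion(yaw_change_deg, lateral_shift_m):
    """yaw_change_deg: rotation about the vertical axis; lateral_shift_m: sideways translation."""
    if abs(yaw_change_deg) < 3.0 and abs(lateral_shift_m) < 0.02:
        return ("stay_calm", None)  # device essentially static: no new attack
    # Estimate which side of the display screen the player is now closer to.
    player_side = "left" if (yaw_change_deg > 0 or lateral_shift_m < 0) else "right"
    return ("punch_toward", {"side": player_side, "fist": player_side})
```
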
  • a face-detection functionality on the front-facing camera (e.g., camera 226, 326) of the smart device (e.g., device 200, 300) and corresponding gyroscope are turned on or otherwise activated to register data.
  • the virtual boxer begins the “action” to punch its fist (e.g., fist 222, 322, 322a, 322b) outwardly from the screen (e.g., screen 202, 302) of the smart device.
  • If the human player’s face is still detected after a predetermined period of time (e.g., 0.5 seconds from the time of the punch), then the human player is considered to have been “punched” and the virtual boxer will score in the game. Otherwise, the human player is considered to have “escaped” from the punch and the human player will score.
  • the virtual object can adjust its punch or next punch accordingly. For instance, in some examples, the human player only moves his face or body aside while the orientation of the smart device is fixed. In this case, detected human movement is based on the images recorded from the front-facing camera, which monitors the movement of the human player and notifies the virtual object (e.g., via the central processing unit of the device) which side the player has disappeared to. In some examples, the player’s face is stationary and the player only moves the smart device aside.
  • in this case, detected human movement is based on data captured by the gyroscope, which monitors movement of the device and notifies the virtual object (e.g., via the central processing unit of the device) of the change in orientation of the device and therefore which side the player has disappeared to.
  • both the player and the smart device move.
  • the smart device can determine the human movement by the information given by either the front-facing camera or the gyroscope, or a combination of information from both camera and gyroscope.
  • the steps described above are applicable to all edges of the screen. Further, it is noted that if the smart device has not detected any face at the moment, the program logic will instruct the virtual object to wait for the detection to occur and repeat the above punching procedure, until the player quits the game. While the intelligent virtual object is being described herein in the context of a virtual boxer of a boxing game, other implementations in other gaming or non-gaming environments may be contemplated.
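
Putting the pieces above together, the overall loop can be sketched as follows. The camera, gyroscope, and boxer objects and their methods are hypothetical placeholders; as described above, movement may be taken from the camera, the gyroscope, or a combination of both.

```python
# Illustrative end-to-end loop for the boxing example. Every object and
# method name here is a placeholder, not an API defined by the disclosure.

def boxing_game_loop(camera, gyroscope, boxer, player_quit):
    while not player_quit():
        if not camera.face_detected():
            boxer.wait_for_player()            # no face detected yet: keep watching
            continue
        boxer.start_punch()                    # punch outward from the screen toward the face
        movement = camera.face_shift() or gyroscope.orientation_change()
        if movement is not None:
            boxer.retarget_punch(movement)     # follow the player toward the screen edge
        boxer.settle_score()                   # hypothetical timed hit/escape scoring step
```
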
  • At FIG. 4, an example method 400 is shown for providing various embodiments of the interactive, intelligent virtual objects described in FIGS. 2B and 3A-3B at an electronic device, such as a smart device, having a display screen and one or more sensors (e.g., game sensors described above).
  • the method 400 includes displaying a virtual object (e.g., virtual object 220, 320) on the display screen (block 402) .
  • the virtual object may be displayed in response to a user launching an application corresponding to the virtual object.
  • Method 400 includes, in response to detecting a first set of data at the one or more sensors (e.g., gyroscope, camera 226, 326) , wherein the first set of data satisfies a set of player presence criteria indicative of a presence of a user at the electronic device, displaying an animation (e.g., delivering a punch, adjusting its eyes) at the virtual object (block 404) .
  • method 400 includes, in response to detecting the first set of data at the one or more sensors, determining whether the first set of data satisfies the set of player presence criteria, and in accordance with a determination that the first set of data does not satisfy the set of player presence criteria, forgoing displaying the animation at the virtual object (block 406) .
  • the first set of data comprises image data detected by the camera
  • the set of player presence criteria includes a criterion that is met when a facial feature (e.g., utilizing facial recognition) of the user is detected in the image data (block 408) .
  • Method 400 includes, while displaying the animation at the virtual object, detecting a second set of data at the one or more sensors (block 410) .
  • Method 400 includes, in accordance with a determination that the second set of data satisfies a set of movement criteria, wherein the second set of data satisfies the set of movement criteria when the second set of data indicates a movement of the display screen relative to the user of the device, dynamically modifying the displayed animation (e.g., adjusting its current punch, delivering a next punch with another fist, adjusting its gaze) of the virtual object based on the second set of data (block 412) .
  • the second set of data comprises image data including a facial feature of the user detected by the camera, and the set of movement criteria includes a criterion that is met when the image data indicates a movement of the facial feature in a direction toward an edge of a display screen (block 414) .
  • the one or more sensors comprises a motion sensor
  • the second set of data comprises an orientation of the device detected by the motion sensor
  • the set of movement criteria includes a criterion that is met when the second set of data indicates a change in orientation of the device (block 416) (e.g., FIGS. 3A-3B) .
  • the motion sensor comprises at least one of a gyroscope and an accelerometer.
  • in method 400, dynamically modifying the displayed animation of the virtual object comprises rendering the displayed animation to track the movement of the device relative to the user while the animation is being displayed (block 418).
  • in method 400, dynamically modifying the displayed animation of the virtual object includes adjusting the displayed animation to match the detected movement in the direction toward the edge of the display screen (block 420).
  • the change in orientation of the device comprises at least one of an angular rotation and a linear displacement of the device, and dynamically modifying the displayed animation of the virtual object includes adjusting the displayed animation to match the angular rotation or the linear displacement of the device (block 422) .
  • method 400 includes, in accordance with a determination that the second set of data satisfies the set of movement criteria (wherein the second set of data satisfies the set of movement criteria when it indicates a movement of the display screen relative to the user of the device), generating a subsequent animation based on the detected second set of data such that the subsequent animation tracks the relative movement between the user and the display screen, as detected in the second set of data.
  • the method includes, subsequent to initiating display of the animation at the virtual object, detecting a third set of data at the one or more sensors; determining whether the third set of data satisfies the set of player presence criteria; in accordance with a determination that the third set of data satisfies the set of player presence criteria, forgoing incrementing a score associated with the user and incrementing a score associated with the virtual object; and in accordance with a determination that the third set of data does not satisfy the set of player presence criteria, incrementing the score associated with the user and forgoing incrementing the score associated with the virtual object.
  • the third set of data can be detected upon lapse of a predetermined time interval T (e.g., 0.5 seconds) after initiating display of the animation (e.g., delivering the punch) at the virtual object.
  • method 400 includes activating the one or more sensors in response to launching an application corresponding to the virtual object at the electronic device.
  • the virtual object is a virtual boxer
  • the displayed animation comprises a punching action by the virtual boxer in a direction out of the display screen and toward the user.
  • computing system 500 may be used to implement the smart device described above that implements any combination of the above embodiments.
  • Computing system 500 may include, for example, a processor, memory, storage, and input/output peripherals (e.g., display, keyboard, stylus, drawing device, disk drive, Internet connection, camera/scanner, microphone, speaker, etc. ) .
  • computing system 500 may include circuitry or other specialized hardware for carrying out some or all aspects of the processes.
  • the main system 502 may include a motherboard 504 with a bus that connects an input/output (I/O) section 506, one or more microprocessors 508, and a memory section 510, which may have a flash memory card 512 related to it.
  • Memory section 510 may contain computer-executable instructions and/or data for carrying out the techniques and algorithms described above.
  • the I/O section 506 may be connected to display 524 (e.g., to display a virtual object) , a keyboard 514, a camera/scanner 526, a microphone 528, a speaker 530, a disk storage unit 516, and a media drive unit 518.
  • the media drive unit 518 can read/write a non-transitory computer-readable storage medium 520, which can contain programs 522 and/or data used to implement process 200 and/or process 400.
  • a non-transitory computer-readable storage medium can be used to store (e.g., tangibly embody) one or more computer programs for performing any one of the above-described processes by means of a computer.
  • the computer program may be written, for example, in a general-purpose programming language (e.g., Pascal, C, C++, Java, or the like) or some specialized application-specific language.
  • Computing system 500 may include various sensors, such as front facing camera 530 (e.g., camera 226 of FIG. 2B, camera 326 of FIGS. 3A-3B) , back facing camera 532, compass 534, accelerometer 536, gyroscope 538 (e.g., implemented at FIGS. 3A-3B) , and/or touch-sensitive surface 540. Other sensors may also be included.
  • While the various components of computing system 500 are depicted as separate in FIG. 5, various components may be combined together.
  • display 524 and touch sensitive surface 540 may be combined together into a touch-sensitive display.
  • Item 1 A method comprising:
  • dynamically modifying the displayed animation of the virtual object comprises rendering the displayed animation to track the movement of the device relative to the user while the animation is being displayed.
  • Item 4 The method of any of items 1-3, further wherein:
  • the one or more sensors comprises a camera.
  • the first set of data comprises image data detected by the camera
  • the set of player presence criteria includes a criterion that is met when a facial feature of the user is detected in the image data.
  • the second set of data comprises image data including a facial feature of the user detected by the camera
  • the set of movement criteria includes a criterion that is met when the image data indicates a movement of the facial feature in a direction toward an edge of a display screen.
  • dynamically modifying the displayed animation of the virtual object includes adjusting the displayed animation to match the detected movement in the direction toward the edge of the display screen.
  • Item 8 The method of any of items 1-7, further wherein:
  • the one or more sensors comprises a motion sensor
  • the second set of data comprises an orientation of the device detected by the motion sensor
  • the set of movement criteria includes a criterion that is met when the second set of data indicates a change in orientation of the device.
  • Item 9 The method of item 8, wherein the motion sensor comprises at least one of a gyroscope and an accelerometer.
  • the change in orientation of the device comprises at least one of an angular rotation and a linear displacement of the device
  • dynamically modifying the displayed animation of the virtual object includes adjusting the displayed animation to match the angular rotation or the linear displacement of the device.
  • the third set of data is detected upon lapse of a predetermined time interval T after initiating display of the animation at the virtual object.
  • Item 13 The method of any of items 1-12, further comprising:
  • the virtual object is a virtual boxer
  • the displayed animation comprises a punching action by the virtual boxer in a direction out of the display screen and toward the user.
  • Item 15 A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by an electronic device having a display screen and one or more sensors, cause the device to perform any of the methods of items 1-14.
  • An electronic device comprising:
  • one or more processors
  • one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of items 1-14.
  • An electronic device comprising:

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Systems and methods are directed to providing an interactive intelligent virtual object at an electronic device by displaying a virtual object on the display screen, and in response to detecting a first set of data at the one or more sensors, displaying an animation at the virtual object when the first set of data indicates presence of a user at the device. Systems and methods further include, while displaying the animation at the virtual object, detecting a second set of data at the one or more sensors, and dynamically modifying the displayed animation of the virtual object based on the second set of data in accordance with a determination that the second set of data indicates a movement of the display screen relative to the user of the device.

Description

INTERACTIVE INTELLIGENT VIRTUAL OBJECT
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to U.S. Provisional Patent Application Serial No. 62/578,281, entitled “INTERACTIVE INTELLIGENT VIRTUAL OBJECT, ” filed October 27, 2017, the content of which is hereby incorporated by reference for all purposes.
FIELD
The present disclosure generally relates to virtual objects, and more specifically, to interactions with computer-implemented virtual objects.
BACKGROUND
Virtual objects are provided in computer-generated environments and relayed to one or more users on display screens. In some cases, such as in computer games, the virtual objects can communicate with their users in response to direct user input via user input devices, such as touch screens, joysticks, game pads, and other control buttons.
SUMMARY
Below, various embodiments of the present invention are described to provide an interactive intelligent virtual object using data detected from a user.
In some embodiments, a method is provided. The method includes, at an electronic device having a display screen and one or more sensors, displaying a virtual object on the display screen, and in response to detecting a first set of data at the one or more sensors, where the first set of data satisfies a set of player presence criteria indicative of a presence of a user at the electronic device, displaying an animation at the virtual object. The method includes, while displaying the animation at the virtual object, detecting a second set of data at the one or more sensors, and in accordance with a determination that the second set of data satisfies a set of movement criteria, where the second set of data satisfies the set of movement criteria when the second set of data indicates a movement  of the display screen relative to the user of the device, dynamically modifying the displayed animation of the virtual object based on the second set of data.
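
As a rough illustration of this flow, the following Python sketch mirrors the two checks described above: the player presence criteria gate the animation, and the movement criteria trigger dynamic modification of the animation while it plays. The sensor reader, renderer, and helper functions are hypothetical placeholders rather than the disclosed implementation.

```python
# Minimal sketch of the summarized method. Sensor and rendering calls are
# hypothetical placeholders, not a defined device API.

def player_present(first_data):
    """Player presence criteria, e.g., a facial feature found in camera data."""
    return first_data.get("face_detected", False)

def screen_moved_relative_to_user(second_data):
    """Movement criteria: the display screen moved relative to the user."""
    return (second_data.get("face_shift") is not None
            or second_data.get("orientation_change") is not None)

def run_virtual_object(sensors, renderer):
    renderer.display_virtual_object()
    first_data = sensors.read()
    if not player_present(first_data):
        return                                   # forgo displaying the animation
    animation = renderer.start_animation("attack")
    while animation.playing:
        second_data = sensors.read()
        if screen_moved_relative_to_user(second_data):
            animation.retarget(second_data)      # dynamically modify the animation
```
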
Various examples of the present embodiments can be contemplated. In some examples, dynamically modifying the displayed animation of the virtual object includes rendering the displayed animation to track the movement of the device relative to the user while the animation is being displayed. In some examples, the method includes, in response to detecting the first set of data at the one or more sensors, determining whether the first set of data satisfies the set of player presence criteria, and in accordance with a determination that the first set of data does not satisfy the set of player presence criteria, forgoing displaying the animation at the virtual object.
In some examples, the one or more sensors includes a camera. In some examples, the first set of data includes image data detected by the camera, and the set of player presence criteria includes a criterion that is met when a facial feature of the user is detected in the image data. In some examples, the second set of data includes image data including a facial feature of the user detected by the camera, and the set of movement criteria includes a criterion that is met when the image data indicates a movement of the facial feature in a direction toward an edge of a display screen.
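
For the camera-based criterion, one simple way to decide whether the facial feature is moving toward an edge of the display screen is to compare successive face positions in normalized screen coordinates, as in the sketch below; the coordinate convention and the minimum-shift threshold are assumptions made for illustration.

```python
# Hypothetical helper: which screen edge, if any, a detected face is moving
# toward, given two face-center positions in normalized coordinates (0..1).

def face_moving_toward_edge(prev_center, curr_center, min_shift=0.05):
    dx = curr_center[0] - prev_center[0]
    dy = curr_center[1] - prev_center[1]
    if abs(dx) < min_shift and abs(dy) < min_shift:
        return None                      # no significant movement detected
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "bottom" if dy > 0 else "top"

# Example: a face drifting from mid-screen toward the left edge.
print(face_moving_toward_edge((0.50, 0.48), (0.38, 0.47)))  # -> "left"
```
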
In some examples, dynamically modifying the displayed animation of the virtual object includes adjusting the displayed animation to match the detected movement in the direction toward the edge of the display screen. In some examples, the one or more sensors comprises a motion sensor, the second set of data comprises an orientation of the device detected by the motion sensor, and the set of movement criteria includes a criterion that is met when the second set of data indicates a change in orientation of the device.
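
For the motion-sensor criterion, the corresponding check compares successive device attitudes reported by the gyroscope and/or accelerometer; in the sketch below, the attitude representation, units, and threshold are assumptions rather than values from the disclosure.

```python
# Hypothetical movement criterion based on a change in device orientation.
# Attitudes are (roll, pitch, yaw) tuples in degrees; the threshold is an
# example value only.

def orientation_changed(prev_attitude, curr_attitude, min_rotation_deg=5.0):
    largest_change = max(abs(c - p) for p, c in zip(prev_attitude, curr_attitude))
    return largest_change >= min_rotation_deg

# Example: the device was rotated about its vertical axis by about 12 degrees.
print(orientation_changed((0.0, 2.0, 30.0), (0.5, 2.5, 42.0)))  # -> True
```
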
In some examples, the motion sensor includes at least one of a gyroscope and an accelerometer. In some examples, the change in orientation of the device includes at least one of an angular rotation and a linear displacement of the device, and dynamically modifying the displayed animation of the virtual object includes adjusting the displayed animation to match the angular rotation or the linear displacement of the device. Further,  in some examples, the method includes, subsequent to initiating display of the animation at the virtual object, detecting a third set of data at the one or more sensors, determining whether the third set of data satisfies the set of player presence criteria, and in accordance with a determination that the third set of data satisfies the set of player presence criteria, forgoing incrementing a score associated with the user and incrementing a score associated with the virtual object, while in accordance with a determination that the third set of data does not satisfy the set of player presence criteria, incrementing the score associated with the user and forgoing incrementing the score associated with the virtual object.
In some examples, the third set of data is detected upon lapse of a predetermined time interval T after initiating display of the animation at the virtual object. In some examples, the method includes activating the one or more sensors in response to launching an application corresponding to the virtual object at the electronic device. Further, in some examples, the virtual object is a virtual boxer, and the displayed animation comprises a punching action by the virtual boxer in a direction out of the display screen and toward the user.
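
The scoring rule tied to the interval T can be sketched as a single re-check of the presence criteria after the animation starts; the sensor object, its read() call, and the score dictionary below are assumed placeholders.

```python
import time

# Illustrative hit/escape scoring after the predetermined interval T.

def score_punch(sensors, scores, interval_t=0.5):
    time.sleep(interval_t)                       # let the interval T elapse (e.g., 0.5 s)
    third_data = sensors.read()
    if third_data.get("face_detected", False):   # presence criteria still satisfied
        scores["virtual_object"] += 1            # the player was hit: the boxer scores
    else:
        scores["player"] += 1                    # the player escaped: the player scores
```
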
In some embodiments, a computer readable storage medium stores one or more programs, and the one or more programs include instructions, which when executed by an electronic device having a display screen and one or more sensors, cause the device to perform any of the methods described above and herein.
In some embodiments, an electronic device includes a display screen, one or more sensors, one or more processors, memory, and one or more programs, and the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods described above and herein.
In some embodiments, an electronic device includes means for performing any of the methods described above and herein.
BRIEF DESCRIPTION OF THE FIGURES
The present application can be best understood by reference to the figures described below taken in conjunction with the accompanying drawing figures, in which like parts may be referred to by like numerals.
FIG. 1A depicts a front view of an example electronic device that implements various embodiments of the present invention
FIG. 1B depicts a back view of an example electronic device that implements various embodiments of the present invention
FIG. 2A depicts an example of a virtual object that is not providing an artificially intelligent interaction in accordance with various embodiments of the present invention.
FIG. 2B depicts an example of a virtual object that is providing an artificially intelligent interaction in accordance with various embodiments of the present invention.
FIG. 3A depicts an example of a virtual object that provides an artificially intelligent interaction based on data indicating that the device is translated, in accordance with various embodiments of the present invention.
FIG. 3B depicts an example of a virtual object providing an artificially intelligent interaction based on data indicating that the device is rotated, in accordance with various embodiments of the present invention.
FIG. 4 depicts an example method for providing interactive intelligent virtual objects in accordance with various embodiments of the present invention.
FIG. 5 depicts a computer system, such as a smart device, that may be used to implement various embodiments of the present invention.
DETAILED DESCRIPTION
The following description is presented to enable a person of ordinary skill in the art to make and use the various embodiments. Descriptions of specific devices, techniques, and  applications are provided only as examples. Various modifications to the examples described herein will be readily apparent to those of ordinary skill in the art, and the general principles defined herein may be applied to other examples and applications without departing from the spirit and scope of the present technology. Thus, the disclosed technology is not intended to be limited to the examples described herein and shown, but is to be accorded the scope consistent with the claims.
Various embodiments of the present invention provide for an interactive, intelligent virtual object (also referred to as “intelligent virtual object” and/or “virtual object” ) that interacts with data detected from a user. As described below, the intelligent virtual object is provided with certain artificial intelligence ( “AI” ) to determine the most suitable response toward a user’s real-time action or movement. For example, in computer gaming applications, the intelligent virtual object can detect a human player moving his or her face or body aside from the computer’s display screen to avoid a hit being imparted by the virtual object. Such intelligent virtual objects enhance interaction between the human player and the virtual object and elevate the user experience to another level.
Referring to FIGS. 1A-1B, a front view and a back view, respectively, of smart device 100, which can be utilized to implement various embodiments of the present technology, are shown. In some examples, smart device 100 is a smart phone or tablet computing device. However, it is noted that the embodiments described herein are not limited to performance on a smart device, and can be implemented on other types of electronic devices, such as wearable devices, computers, or laptop computers.
As shown in FIG. 1A, a front side of the smart device 100 includes a display screen, such as a touch-sensitive display 102, a speaker 122, and a front-facing camera 120. The touch-sensitive display 102 can detect user inputs received thereon, such as a number and/or location of finger contact (s) on the screen, contact duration, contact movement across the screen, contact coverage area, contact pressure, and so on. Such user inputs can generate various interactive effects and controls at the device 100. In some examples, the front-facing camera 120 faces the user and captures the user’s movements, such as hand or facial gestures, which may be registered and analyzed as input for generating the intelligent responses described herein. The touch-sensitive display 102 and speaker 122 further promote user interaction with various programs at the device, such as by detecting user inputs while displaying visual effects on the display screen and/or while generating verbal communications or sound effects from the speaker 122.
FIG. 1B shows an example back view of the smart device 100 having a back-facing camera 124. In some embodiments, the back-facing camera 124 captures images of an environment or surroundings, such as a room or location that the user is in or observing. In some examples, smart device 100 shows such captured image data as a background to an augmented reality experience displayed on the display screen. Optionally, smart device 100 includes a variety of other sensors and/or input mechanisms to receive user and environmental inputs, such as microphones (which are optionally integrated with speaker 122), movement/orientation sensors (e.g., one or more accelerometers, gyroscopes, digital compasses), depth sensors (which are optionally part of front-facing camera 120 and/or back-facing camera 124), and so on. In some examples, smart device 100 is similar to and includes some or all of the components of computing system 500 described below in FIG. 5. In some examples, the present technology is performed at a smart device having display screen 102 and various movement/orientation sensors.
Turning now to FIGS. 2A-2B, in some cases, virtual objects generate a response that is not considered an intelligent response (e.g., FIG. 2A) , while in other cases, virtual objects generate intelligent responses (e.g., FIG. 2B) . At FIG. 2A, for example, in a computer game running on a smart device 200 having a display screen 202, a human player 204 plays against an “unintelligent” virtual object 206 controlled by a computer at the smart device 200, whereby the unintelligent virtual object 206 does not respond intelligently to the human player’s reaction and/or body movement. Specifically, the unintelligent virtual object 206 attempts to virtually punch the human player 204 with its virtual fist 208. As a reaction, the human player 204 shifts his face or body from position 210a to position 210b in a direction 212 (e.g., to a side of the screen 202) to avoid being hit in punching direction 214 of the traveling virtual fist 208. In this example, the unintelligent virtual object 206 does not provide an intelligent response because it is not provided with  the intelligence described herein, such as knowledge or data detected by sensors at the smart device 200 indicating a movement or reaction of the human player 204. Instead, the unintelligent virtual object 206 continues the virtual punch in punching direction 214 toward position 210a, thereby missing its opponent (e.g., player 204) that moved to position 210b. Without knowing that the player 204 has avoided the virtual fist 208, unintelligent virtual object 206 cannot adjust the direction of its current attack toward the “new” position 210b of the human player 204.
In some cases, the unintelligent virtual object 206 cannot determine a suitable direction or strategy for its next attack because the virtual object 206 is not provided with knowledge that the human player 204 has moved to the new position 210b. Further, in some cases, the unintelligent virtual object 206 does not know it has missed the player 204.
Referring now to FIG. 2B, in this example of a computer game running on the smart device 200 having the display screen 202, the human player 204 plays against an “intelligent” virtual object 220 controlled by the computer at the smart device 200, whereby the intelligent virtual object 220 responds to the human player’s reactions and/or body movements. Specifically, the intelligent virtual object 220 attempts to virtually punch the human player 204 with an initial punch (e.g., initial throw in the punching direction 214 of the virtual fist 208, as illustrated above in FIG. 2A) . As a reaction, the human player 204 shifts his face or body from position 210a to position 210b in direction 212 (e.g., to a side of the screen 202) to avoid the initial punch. Here, however, the intelligent virtual object 220 follows or otherwise tracks the reaction or movement of player 204 and accordingly directs its next punch (and in some cases, redirects its current punch) with virtual fist 222 in new direction 224 toward new position 210b, which hits the player 204. In this example, the intelligent virtual object 220 provides an intelligent response because it was provided with the intelligence described herein, such as knowledge or data detected by sensors at the smart device 200 indicating a movement or reaction of the human player 204. As shown in FIG. 2B, the intelligent virtual object 220 delivers its immediate next virtual punch in a new punching direction 224 based on the detected movement or reaction of player 204.
In some examples, the intelligent virtual object 220 knows, based on the detected data, that the player 204 avoided the initial punch, and therefore adjusts its response by directing its next attack toward the new position 210b of the human player 204. In some examples, the direction of a current attack is adjusted as the intelligent virtual object 220 tracks reactions and movements of the user based on detected data such that the current punch is redirected (e.g., in real-time) to follow the user’s movements until the user is hit. In further examples, the intelligent virtual object 220 utilizes the knowledge to determine a strategy for attack, such as which fist to attack with based on which side of the display screen 202 the player 204 has moved to. For example, at FIG. 2B, the intelligent virtual object 220 utilizes its left fist 222 to deliver the attack because the player 204 has moved toward that side of the display screen 202. In some examples, as shown at FIG. 2B, the intelligent virtual object 220 shifts its gaze as its eyes follow the player 204.
It is contemplated that other intelligent responses can be included, additionally and/or alternatively, to provide further intelligent interactions with users. Further, it is noted that in some examples, the intelligent virtual object 220 interacts with, or otherwise delivers visual communications to, the player 204 through the device’s touch screen (e.g., display screen 202). Other interactions can be contemplated, such as interactions delivered by output via other common interfaces provided on or coupled to the device (e.g., audibly via speaker 122, tactually via vibration generators in the smart device 200).
As described further below, the interactive and intelligent virtual object (e.g., intelligent virtual object 220 of FIG. 2B) is provided with the artificial intelligence that utilizes data indicating the player’s movements and reactions to provide one or more intelligent responses (e.g., redirecting a current attack, strategizing and directing a next attack, displaying other interactions including a gaze tracking the user, knowing when an attack has hit or missed the player, selecting which fist to deliver the attack) . The data indicating the player’s movements or reactions can be gathered by one or more sensors that are provided at, or in operative communication with, the smart device on which the intelligent virtual object exists (e.g., in a computer game running on the smart device) . Such sensors can include various game sensors such as a camera, front-facing camera, back-facing camera, gyroscope, accelerometer, and so on. Described further below are  examples of using data collected by a front-facing camera and/or a gyroscope to enable the intelligent virtual object to actively adjust its response into a suitable response or strategy against the player in real-time during a game.
For instance, referring back to FIG. 2B, in some examples, the intelligent virtual object is configured to respond to human movements that are detected by a front-facing camera, such as front-facing camera 226 from FIG. 2B and/or front-facing camera 120 of FIG. 1A. Here, the intelligent virtual object 220 is provided with artificial intelligence that uses image data sensed from the front-facing camera 226. Merely by way of example, the virtual object 220 is a virtual boxer (hereinafter also referred to as “virtual boxer 220”) of a boxing game running on the smart device 200. In some examples, the human player 204 holds smart device 200 in his hand during the boxing game. The virtual boxer 220, in communication with various game sensors such as camera 226, matches the player’s movements to hit its opponent. Merely by way of example, the virtual boxer 220 detects, via camera 226, a location of the player’s face and attempts to punch the face with its virtual fist based on the detected location. In some examples, if the virtual boxer 220 determines based on the detected data that the player 204 has not moved aside in response to the virtual boxer’s punch (e.g., if the player 204 has remained at initial position 210a after a punch in direction 214 toward position 210a is delivered), then human player 204 can be considered to be hit by the virtual fist. Further in some examples, if the virtual boxer 220 determines that the player 204 has not moved within a predetermined period of time after initiation of the punch (e.g., within 0.5 seconds of initiating the punch), the human player 204 is considered to be hit by the virtual fist.
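To make the timing of this hit/escape decision concrete, the following is a minimal sketch in Python; the names (face_is_detected, PUNCH_RESOLUTION_DELAY) are hypothetical stand-ins for the game’s camera wrapper and timing constant, not part of the disclosed implementation.

    import time

    PUNCH_RESOLUTION_DELAY = 0.5  # predetermined period after punch initiation (e.g., 0.5 seconds)

    def resolve_punch(face_is_detected, delay=PUNCH_RESOLUTION_DELAY):
        """Score a punch: let the predetermined interval lapse, then check the camera.

        face_is_detected is a zero-argument callable standing in for the front-facing
        camera's face-detection result. Returns "hit" if the face is still in view
        (the player did not move aside), otherwise "escaped".
        """
        time.sleep(delay)  # wait for the predetermined interval after the punch starts
        return "hit" if face_is_detected() else "escaped"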
In some examples, as shown at FIG. 2B, if the human player 204 sees that the virtual boxer 220 is ready to punch his face, player 204 “escapes” by turning his face away from the screen so that the camera 226 no longer detects his face or by shifting his body in direction 212 to new position 210b out of the punch. As shown in FIG. 2B, while the human player’s face is “leaving” the screen 202 or otherwise leaving a field of view of the virtual boxer 220 provided via camera 226, the front-facing camera 226 detects which direction the player 204 disappears to based on the camera’s detection of the movement of the face and/or body. With the detected movement, the camera 226 can send the data to the virtual boxer 220, which is programmed to determine which “side” of the device 200 the player 204 has escaped to and provide an intelligent response accordingly. For example, by using the updated location of the player 204, the virtual boxer 220 intelligently responds in real-time by attempting to chase, track, match, or otherwise follow the human player 204 and move its punch in the direction of the new location of the player 204 before the player 204 successfully “leaves” the screen 202. In some examples, by using the updated location of the player 204, the virtual boxer 220 intelligently responds in real-time by generating a new punch at the new location before the player 204 successfully “leaves” the screen.
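A minimal sketch of how the escape side could be inferred from two consecutive face positions reported by the front-facing camera, and how the current punch could be redirected toward the player’s updated location; the names, the dead zone, and the simple follow-rate scheme are illustrative assumptions rather than the claimed method.

    def escape_direction(prev_face_x, curr_face_x, screen_width, dead_zone=0.05):
        """Infer which side of the screen the player's face is leaving toward.

        prev_face_x / curr_face_x: horizontal face-center positions (pixels) from two
        consecutive camera frames. Returns "left", "right", or None when the movement
        is smaller than a small dead zone (5% of the screen width by default).
        """
        delta = curr_face_x - prev_face_x
        if abs(delta) < dead_zone * screen_width:
            return None
        return "right" if delta > 0 else "left"

    def redirect_punch(current_target_x, latest_face_x, follow_rate=0.5):
        """Move the punch target part of the way toward the face's latest position
        each frame, so the punch in flight follows the escaping player."""
        return current_target_x + follow_rate * (latest_face_x - current_target_x)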
Additionally and/or alternatively, the virtual boxer 220 can shift its gaze or stare toward the side of the screen 202 that the human player 204 has successfully disappeared to, and/or wait (e.g., maintain the gaze as if watching for the player 204) for the human player 204 to “return” to the screen 202 based on motion detected by camera 226 to direct its next virtual punch. In some examples, the virtual boxer 220 receives motion data related to other types of motion (e.g., other people walking in the background) and discerns that the movement is not related to the player 204, and therefore continues to watch for return of the player 204 to deliver its next punch. In some examples, the virtual boxer 220 and/or its fist 222 (e.g., left or right fists) has 3-D effects that cause the virtual boxer and/or fist to appear to come out of the screen 202 while throwing a punch. In some examples, other features of the virtual boxer 220, such as eyes of the virtual boxer 220 and/or head or body positioning of the virtual boxer 220, rotate and follow the player’s movements across the screen 202 based on detection of the player’s movements relative to the display screen 202.
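The gaze-following behavior can be reduced to mapping the face’s horizontal position to a head or eye rotation of the virtual boxer; the sketch below assumes a simple linear mapping and a maximum rotation angle, neither of which is specified in the description.

    def gaze_angle_toward(face_x, screen_width, max_angle_deg=45.0):
        """Map the detected face position to a gaze rotation for the virtual boxer.

        face_x: horizontal face-center position in pixels; screen_width: display width
        in pixels. Returns an angle in degrees, where 0 means looking straight out of
        the screen and the sign indicates which edge the gaze turns toward.
        """
        normalized = (face_x / screen_width) * 2.0 - 1.0   # -1.0 .. +1.0 across the screen
        normalized = max(-1.0, min(1.0, normalized))       # clamp near the edges
        return normalized * max_angle_deg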
Further, in some examples, the front-facing camera 226 includes a depth sensor that detects a distance of the player from the display screen 202 and generates a response accordingly. For example, if the player’s face is within a predetermined threshold distance (e.g., a minimum distance) from the display screen 202, the virtual boxer 220 can determine that the player 204 is within hitting distance and deliver a punch. If the player’s face is outside of the predetermined threshold distance, the virtual boxer 220 can determine that the player 204 is too far to be hit. In that case, the virtual boxer 220 can generate other types of responses instead of delivering a hit, such as tricking the player 204 into getting closer to the screen 202, pretending to not “see” the player 204 by averting its eyes, detecting motion of background objects and adjusting its eyes to track the background objects to fool the player 204 into thinking it is not watching for the player’s reactions, and/or other animations and graphics. Upon detection, via depth sensors at camera 226, that the player 204 is close enough to the screen 202, the virtual boxer 220 can deliver the punch.
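A sketch of the depth-based gating described above; the 0.6 m value and the response labels are illustrative placeholders for the “predetermined threshold distance” and the alternative behaviors mentioned in the text.

    HITTING_DISTANCE_M = 0.6  # illustrative value for the predetermined threshold distance

    def choose_depth_response(face_distance_m, threshold=HITTING_DISTANCE_M):
        """Gate the punch on the depth-sensor estimate of the player's distance.

        face_distance_m: estimated distance (meters) from the display to the player's
        face, as reported by a depth sensor at the front-facing camera.
        """
        if face_distance_m <= threshold:
            return "deliver_punch"        # player is within hitting distance
        return "lure_player_closer"       # e.g., avert eyes or track background objects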
Turning now to FIGS. 3A-3B, in some examples, the intelligent virtual object is configured to respond to human movements that are detected by a gyroscope at the smart device. For example, as shown at FIGS. 3A-3B, an intelligent virtual object 320 (hereinafter also referred to as a virtual boxer 320) on display screen 302 of smart device 300 is provided with artificial intelligence that uses data captured by a gyroscope (not shown) in the device 300. In this way, the virtual boxer 320 is equipped with decision-making abilities that utilize the knowledge based on detected movement of the device 300 relative to the human player 304. Additionally and/or alternatively, the virtual boxer 320 can generate an intelligent response based on data detected by a combination of sensors at the device 300, such as data detected by other game sensors and/or camera 326, which may be similar to camera 226 of FIG. 2B discussed above.
For example, at FIGS. 3A-3B, human movement is detected by both the camera 326 and gyroscope built into the smart device 300. Here, a combination of game sensors, such as the gyroscope with the camera 326, is utilized for detecting data. In some cases, utilizing multiple sensors provides faster and more accurate feedback of the player’s movements than a single game sensor alone. The gyroscope allows the smart device 300 to detect a physical change in the orientation of the device 300, which may be held in the player’s hand. For example, as shown at FIG. 3A, the gyroscope, alone and/or in combination with the camera 326, detects that the smart device 300 has been translated in a leftward direction 312 from a first position 310a to a second position 310b. In response to detection of this translation, the virtual boxer 320 provides an intelligent response by adjusting its hit from fist 322 accordingly and in combination with data from the camera 326. For example, the virtual boxer 320 of FIG. 3A at position 310b has adjusted its gaze to focus on or otherwise follow the player 304 as device 300 is translated. The virtual boxer 320 delivers the punch from fist 322, which is a left fist of the virtual boxer 320, because the virtual boxer 320 detects that the player 304 has moved toward a left side of display screen 302 from the boxer’s vantage point. Further, the virtual boxer 320 delivers a punch angled toward the player 304 in direction 314. In some examples, the virtual boxer 320 at the first position 310a is calm and does not deliver the punch, but becomes agitated or otherwise activated when the device 300 is translated such that at the second position 310b, the virtual boxer 320 generates the hit. It is noted that in some examples, the gyroscope, alone and/or in combination with camera 326, detects translation of the device 300 relative to player 304 in other directions, such as toward the right-hand side, left-hand side, upper side, or lower side.
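One way to express the fist selection in FIG. 3A is to classify which side of the screen the player has shifted toward, using the camera when face data is available and falling back to the motion-sensor translation otherwise; the sign conventions, the camera-first preference, and the frame of reference below are assumptions made for illustration.

    def player_screen_side(face_dx=None, device_dx=None):
        """Estimate which edge of the display the player has shifted toward.

        face_dx: horizontal face movement in the camera frame (positive = toward the
        right edge of the screen); device_dx: device translation from the motion
        sensors (positive = device moved to the player's right). Returns "left",
        "right", or None if neither sensor reports movement.
        """
        if face_dx:
            return "right" if face_dx > 0 else "left"
        if device_dx:
            # Translating the device to the right leaves the player nearer the left edge.
            return "left" if device_dx > 0 else "right"
        return None

    def choose_fist(side):
        """Punch with the fist on the player's side of the screen, as in FIG. 3A,
        where side is expressed from the boxer's vantage point."""
        return "left_fist" if side == "left" else "right_fist"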
As shown in the example at FIG. 3B, the gyroscope can detect that the player 304 holding device 300 has rotated the device 300 at an angle, as shown at angled position 310c and angled position 310d. In these positions, the camera 326 is rotated away from the player 304. In some cases, the player 304 is still within the field of view of the camera 326. In other cases, the player 304 is no longer within the field of view, in which case the virtual boxer 320 generates an intelligent response based on data from other sensors such as the gyroscope. Further in some examples, although the player 304 is out of view of the camera 326 when the camera 326 is angled away, the virtual boxer 320 estimates a location of the player based on the detected rotation (e.g., a degree of rotation detected) and adjusts its hits, gaze, and/or strategy accordingly. For example, as shown at FIG. 3B, the virtual object 320 adjusts the punch of its fists accordingly and in combination with data from camera 326. In angled position 310c, the virtual boxer 320 generates an intelligent response based on the detected data to deliver a punch utilizing its left hand 322a because the player 304 is closer to its left side of the display screen 302. The left-handed punch is angled toward the player 304 in a direction 314a. On the other hand, in angled position 310d, the virtual boxer 320 generates an intelligent response based on the detected data to deliver a punch utilizing its right hand 322b because the player 304 is closer to its right side of the display screen 302. The right-handed punch is angled toward the player 304 in a direction 314b. Further, as shown in FIG. 3B, the virtual boxer 320 also adjusts its gaze between the rotated positions to track the player 304, such  that its gaze at position 310c is also in direction 314a, and its gaze at position 310d is also in direction 314b.
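When the rotation takes the face out of the camera’s field of view, the boxer can still estimate where the player sits from the rotation angle alone. The sketch below uses simple trigonometry and an assumed arm’s-length viewing distance, neither of which is prescribed by the description.

    import math

    def estimated_lateral_offset(rotation_deg, viewing_distance_m=0.5):
        """Estimate how far off the camera axis the (now unseen) player sits.

        rotation_deg: device rotation reported by the gyroscope since the face was
        last detected; viewing_distance_m: assumed distance between the player and
        the device. Returns a lateral offset in meters whose sign follows the
        rotation's sign, which the boxer can use to aim its punch and gaze.
        """
        return viewing_distance_m * math.tan(math.radians(rotation_deg))

    # For example, a 20 degree rotation at roughly arm's length (0.5 m) places the
    # player about 0.18 m to that side of the camera axis.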
It is noted that in FIGS. 3A-3B, the gyroscope detects human movement when the player 304 is physically static (e.g., has not shifted position) , such that only the device 300 is being moved. However, as discussed above, the gyroscope and camera 326 can also detect human movement when the player 304 has physically shifted while the device 300 has been moved.
Various methods and algorithms for generating the intelligent virtual object, such as the virtual boxer, and its intelligent responses are contemplated herein. For instance, in some examples, when the game begins, a face-detection functionality on the front-facing camera (e.g., camera 226, 326) of the smart device (e.g., device 200, 300) and corresponding gyroscope are turned on or otherwise activated to register data. Upon detection of the human player’s face (e.g., player 204, 304) , the virtual boxer (e.g., boxer 220, 320) begins the “action” to punch its fist (e.g.,  fist  222, 322, 322a, 322b) outwardly from the screen (e.g., screen 202, 302) of the smart device.
If the human player’s face is still detected after a predetermined period of time (e.g., 0.5 seconds from the time of the punch) , then the human player is considered to have been “punched” and the virtual boxer will score in the game. Otherwise, the human player is considered to have “escaped” from the punch and the human player will score.
If the human player attempts to escape from the punch by moving his face or body aside from the screen of the smart device, or moving the front-facing camera of the smart device away from his/her face, the virtual object can adjust its punch or next punch accordingly. For instance, in some examples, the human player only moves his face or body aside while the orientation of the smart device is fixed. In this case, detected human movement is based on the images recorded from the front-facing camera that monitors the movement of the human player and notifies the virtual object (e.g., via the central processing unit of the device) which side the player has disappeared to. In some examples, the player’s face is stationary and the player only moves the smart device aside. In this case, detected human movement is based on data captured by the gyroscope that monitors movement of the device and notifies the virtual object (e.g., via the central processing unit of the device) of the change in orientation of the device and therefore which side the player has disappeared to. In some examples, both the player and the smart device move. In this case, the smart device can determine the human movement by the information given by either the front-facing camera or the gyroscope, or a combination of information from both camera and gyroscope.
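The three cases above (player moves, device moves, or both) reduce to a small fusion rule over the two sensor readings. The sketch below treats both readings as horizontal offsets in screen space and simply sums them when both are available; summing is one reasonable fusion choice, not the only one, and the names are hypothetical.

    def detect_relative_movement(camera_delta=None, gyro_delta=None):
        """Combine front-facing-camera and gyroscope data into one movement estimate.

        camera_delta: face movement reported by the camera (None if no face data);
        gyro_delta: movement implied by the device's orientation change (None if the
        device has not moved). Returns a single horizontal offset of the player
        relative to the screen, or 0.0 when neither sensor reports movement.
        """
        if camera_delta is not None and gyro_delta is not None:
            return camera_delta + gyro_delta   # both the player and the device moved
        if camera_delta is not None:
            return camera_delta                # device fixed, player moved
        if gyro_delta is not None:
            return gyro_delta                  # player fixed, device moved
        return 0.0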
It is noted that the steps described above are applicable to all edges of the screen. Further, it is noted that if the smart device has not detected any face at the moment, the program logic will instruct the virtual object to wait for the detection to occur and repeat the above punching procedure, until the player quits the game. While the intelligent virtual object is being described herein in the context of a virtual boxer of a boxing game, other implementations in other gaming or non-gaming environments may be contemplated.
Turning now to FIG. 4, an example method 400 is shown for providing various embodiments of the interactive, intelligent virtual objects described in FIGS. 2B and 3A-3B at an electronic device, such as a smart device, having a display screen and one or more sensors (e.g., game sensors described above) . The method 400 includes displaying a virtual object (e.g., virtual object 220, 320) on the display screen (block 402) . The virtual object may be displayed in response to a user launching an application corresponding to the virtual object.
Method 400 includes, in response to detecting a first set of data at the one or more sensors (e.g., gyroscope, camera 226, 326) , wherein the first set of data satisfies a set of player presence criteria indicative of a presence of a user at the electronic device, displaying an animation (e.g., delivering a punch, adjusting its eyes) at the virtual object (block 404) . In some examples, method 400 includes, in response to detecting the first set of data at the one or more sensors, determining whether the first set of data satisfies the set of player presence criteria, and in accordance with a determination that the first set of data does not satisfy the set of player presence criteria, forgoing displaying the animation at the virtual object (block 406) . In some examples, the first set of data comprises image data detected by the camera, and the set of player presence criteria  includes a criterion that is met when a facial feature (e.g., utilizing facial recognition) of the user is detected in the image data (block 408) .
Method 400 includes, while displaying the animation at the virtual object, detecting a second set of data at the one or more sensors (block 410) .
Method 400 includes, in accordance with a determination that the second set of data satisfies a set of movement criteria, wherein the second set of data satisfies the set of movement criteria when the second set of data indicates a movement of the display screen relative to the user of the device, dynamically modifying the displayed animation (e.g., adjusting its current punch, delivering a next punch with another fist, adjusting its gaze) of the virtual object based on the second set of data (block 412) . In some examples, the second set of data comprises image data including a facial feature of the user detected by the camera, and the set of movement criteria includes a criterion that is met when the image data indicates a movement of the facial feature in a direction toward an edge of a display screen (block 414) . In some examples, the one or more sensors comprises a motion sensor, the second set of data comprises an orientation of the device detected by the motion sensor, and the set of movement criteria includes a criterion that is met when the second set of data indicates a change in orientation of the device (block 416) (e.g., FIGS. 3A-3B) . In some examples, the motion sensor comprises at least one of a gyroscope and an accelerometer.
Further, in some examples of method 400, dynamically modifying the displayed animation of the virtual object comprises rendering the displayed animation to track the movement of the device relative to the user while the animation is being displayed (block 418). In some examples of method 400, dynamically modifying the displayed animation of the virtual object includes adjusting the displayed animation to match the detected movement in the direction toward the edge of the display screen (block 420). In some examples, the change in orientation of the device comprises at least one of an angular rotation and a linear displacement of the device, and dynamically modifying the displayed animation of the virtual object includes adjusting the displayed animation to match the angular rotation or the linear displacement of the device (block 422). In some examples, in accordance with a determination that the second set of data satisfies a set of movement criteria, wherein the second set of data satisfies the set of movement criteria when the second set of data indicates a movement of the display screen relative to the user of the device, method 400 includes generating a subsequent animation based on the detected second set of data such that the subsequent animation tracks the relative movement between the user and the display screen, as detected in the second set of data.
Further in some examples, the method includes, subsequent to initiating display of the animation at the virtual object, detecting a third set of data at the one or more sensors; determining whether the third set of data satisfies the set of player presence criteria; in accordance with a determination that the third set of data satisfies the set of player presence criteria, forgoing incrementing a score associated with the user and incrementing a score associated with the virtual object; and in accordance with a determination that the third set of data does not satisfy the set of player presence criteria, incrementing the score associated with the user and forgoing incrementing the score associated with the virtual object. The third set of data can be detected upon lapse of a predetermined time interval T (e.g., 0.5 seconds) after initiating display of the animation (e.g., delivering the punch) at the virtual object. In some examples, method 400 includes activating the one or more sensors in response to launching an application corresponding to the virtual object at the electronic device. Further in some examples, the virtual object is a virtual boxer, and the displayed animation comprises a punching action by the virtual boxer in a direction out of the display screen and toward the user.
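Read end to end, method 400 can be summarized as the single round of game logic sketched below; sensors, boxer, and the presence/movement callables are hypothetical stand-ins for the sensor readout, the rendering of the virtual object, and the two sets of criteria, and the block numbers in the comments refer to FIG. 4.

    def run_round(sensors, boxer, presence, movement, interval_t=0.5):
        """One round of the interaction summarized in method 400 (names hypothetical).

        sensors.read() returns a set of sensor data; presence(data) and movement(data)
        are callables implementing the player-presence and movement criteria;
        boxer.animate(...) and boxer.modify(...) display and dynamically modify the
        animation; sensors.read_after(t) returns data detected after interval t.
        """
        first = sensors.read()                    # first set of data (block 404)
        if not presence(first):
            return None                           # forgo the animation (block 406)
        boxer.animate("punch")                    # display the animation
        second = sensors.read()                   # second set of data (block 410)
        if movement(second):
            boxer.modify(second)                  # dynamically modify the animation (block 412)
        third = sensors.read_after(interval_t)    # third set of data after interval T
        if presence(third):
            return "virtual_object_scores"        # player still present: the punch lands
        return "player_scores"                    # player escaped the punch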
Turning now to FIG. 5, components of an exemplary computing system 500, configured to perform any of the above-described processes and/or operations, are depicted. For example, computing system 500 may be used to implement the smart device described above that implements any combination of the above embodiments. Computing system 500 may include, for example, a processor, memory, storage, and input/output peripherals (e.g., display, keyboard, stylus, drawing device, disk drive, Internet connection, camera/scanner, microphone, speaker, etc.). However, computing system 500 may include circuitry or other specialized hardware for carrying out some or all aspects of the processes.
In computing system 500, the main system 502 may include a motherboard 504 with a bus that connects an input/output (I/O) section 506, one or more microprocessors 508, and a memory section 510, which may have a flash memory card 512 related to it. Memory section 510 may contain computer-executable instructions and/or data for carrying out the techniques and algorithms described above. The I/O section 506 may be connected to display 524 (e.g., to display a virtual object) , a keyboard 514, a camera/scanner 526, a microphone 528, a speaker 530, a disk storage unit 516, and a media drive unit 518. The media drive unit 518 can read/write a non-transitory computer-readable storage medium 520, which can contain programs 522 and/or data used to implement process 200 and/or process 400.
Additionally, a non-transitory computer-readable storage medium can be used to store (e.g., tangibly embody) one or more computer programs for performing any one of the above-described processes by means of a computer. The computer program may be written, for example, in a general-purpose programming language (e.g., Pascal, C, C++, Java, or the like) or some specialized application-specific language.
Computing system 500 may include various sensors, such as front-facing camera 530 (e.g., camera 226 of FIG. 2B, camera 326 of FIGS. 3A-3B), back-facing camera 532, compass 534, accelerometer 536, gyroscope 538 (e.g., implemented at FIGS. 3A-3B), and/or touch-sensitive surface 540. Other sensors may also be included.
While the various components of computing system 500 are depicted as separate in FIG. 5, various components may be combined together. For example, display 524 and touch sensitive surface 540 may be combined together into a touch-sensitive display.
Exemplary methods, non-transitory computer-readable storage media, systems, and electronic devices are set out in example implementations of the following items:
Item 1. A method comprising:
at an electronic device having a display screen and one or more sensors:
displaying a virtual object on the display screen;
in response to detecting a first set of data at the one or more sensors, wherein the first set of data satisfies a set of player presence criteria indicative of a presence of a user at the electronic device, displaying an animation at the virtual object;
while displaying the animation at the virtual object, detecting a second set of data at the one or more sensors; and
in accordance with a determination that the second set of data satisfies a set of movement criteria, wherein the second set of data satisfies the set of movement criteria when the second set of data indicates a movement of the display screen relative to the user of the device, dynamically modifying the displayed animation of the virtual object based on the second set of data.
Item 2. The method of item 1, further wherein:
dynamically modifying the displayed animation of the virtual object comprises rendering the displayed animation to track the movement of the device relative to the user while the animation is being displayed.
Item 3. The method of item 1, further comprising:
in response to detecting the first set of data at the one or more sensors, determining whether the first set of data satisfies the set of player presence criteria; and
in accordance with a determination that the first set of data does not satisfy the set of player presence criteria, forgoing displaying the animation at the virtual object.
Item 4. The method of any of items 1-3, further wherein:
the one or more sensors comprises a camera.
Item 5. The method of item 4, further wherein:
the first set of data comprises image data detected by the camera, and
the set of player presence criteria includes a criterion that is met when a facial feature of the user is detected in the image data.
Item 6. The method of any of items 4-5, further wherein:
the second set of data comprises image data including a facial feature of the user detected by the camera, and
the set of movement criteria includes a criterion that is met when the image data indicates a movement of the facial feature in a direction toward an edge of a display screen.
Item 7. The method of item 6, further wherein:
dynamically modifying the displayed animation of the virtual object includes adjusting the displayed animation to match the detected movement in the direction toward the edge of the display screen.
Item 8. The method of any of items 1-7, further wherein:
the one or more sensors comprises a motion sensor,
the second set of data comprises an orientation of the device detected by the motion sensor, and
the set of movement criteria includes a criterion that is met when the second set of data indicates a change in orientation of the device.
Item 9. The method of item 8, wherein the motion sensor comprises at least one of a gyroscope and an accelerometer.
Item 10. The method of any of items 8-9, further wherein:
the change in orientation of the device comprises at least one of an angular rotation and a linear displacement of the device; and
dynamically modifying the displayed animation of the virtual object includes adjusting the displayed animation to match the angular rotation or the linear displacement of the device.
Item 11. The method of any of items 1-10, further comprising:
subsequent to initiating display of the animation at the virtual object, detecting a third set of data at the one or more sensors;
determining whether the third set of data satisfies the set of player presence criteria;
in accordance with a determination that the third set of data satisfies the set of player presence criteria, forgoing incrementing a score associated with the user and incrementing a score associated with the virtual object; and
in accordance with a determination that the third set of data does not satisfy the set of player presence criteria, incrementing the score associated with the user and forgoing incrementing the score associated with the virtual object.
Item 12. The method of item 11, further wherein:
the third set of data is detected upon lapse of a predetermined time interval T after initiating display of the animation at the virtual object.
Item 13. The method of any of items 1-12, further comprising:
activating the one or more sensors in response to launching an application corresponding to the virtual object at the electronic device.
Item 14. The method of any of items 1-13, further wherein:
the virtual object is a virtual boxer, and
the displayed animation comprises a punching action by the virtual boxer in a direction out of the display screen and toward the user.
Item 15. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by an electronic device having a display screen and one or more sensors, cause the device to perform any of the methods of items 1-14.
Item 16. An electronic device, comprising:
a display screen;
one or more sensors;
one or more processors;
memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of items 1-14.
Item 17. An electronic device, comprising:
means for performing any of the methods of items 1-14.
Various exemplary embodiments are described herein. Reference is made to these examples in a non-limiting sense. They are provided to illustrate more broadly applicable aspects of the disclosed technology. Various changes may be made and equivalents may be substituted without departing from the true spirit and scope of the various embodiments. In addition, many modifications may be made to adapt a particular situation, material, composition of matter, process, process act (s) or step (s) to the objective (s) , spirit or scope of the various embodiments. Further, as will be appreciated by those with skill in the art, each of the individual variations described and illustrated herein has discrete components and features which may be readily separated from or combined with the features of any of the other several embodiments without departing from the scope or spirit of the various embodiments.

Claims (17)

  1. A method comprising:
    at an electronic device having a display screen and one or more sensors:
    displaying a virtual object on the display screen;
    in response to detecting a first set of data at the one or more sensors, wherein the first set of data satisfies a set of player presence criteria indicative of a presence of a user at the electronic device, displaying an animation at the virtual object;
    while displaying the animation at the virtual object, detecting a second set of data at the one or more sensors; and
    in accordance with a determination that the second set of data satisfies a set of movement criteria, wherein the second set of data satisfies the set of movement criteria when the second set of data indicates a movement of the display screen relative to the user of the device, dynamically modifying the displayed animation of the virtual object based on the second set of data.
  2. The method of claim 1, further wherein:
    dynamically modifying the displayed animation of the virtual object comprises rendering the displayed animation to track the movement of the device relative to the user while the animation is being displayed.
  3. The method of claim 1, further comprising:
    in response to detecting the first set of data at the one or more sensors, determining whether the first set of data satisfies the set of player presence criteria; and
    in accordance with a determination that the first set of data does not satisfy the set of player presence criteria, forgoing displaying the animation at the virtual object.
  4. The method of any of claims 1-3, further wherein:
    the one or more sensors comprises a camera.
  5. The method of claim 4, further wherein:
    the first set of data comprises image data detected by the camera, and
    the set of player presence criteria includes a criterion that is met when a facial feature of the user is detected in the image data.
  6. The method of any of claims 4-5, further wherein:
    the second set of data comprises image data including a facial feature of the user detected by the camera, and
    the set of movement criteria includes a criterion that is met when the image data indicates a movement of the facial feature in a direction toward an edge of a display screen.
  7. The method of claim 6, further wherein:
    dynamically modifying the displayed animation of the virtual object includes adjusting the displayed animation to match the detected movement in the direction toward the edge of the display screen.
  8. The method of any of claims 1-7, further wherein:
    the one or more sensors comprises a motion sensor,
    the second set of data comprises an orientation of the device detected by the motion sensor, and
    the set of movement criteria includes a criterion that is met when the second set of data indicates a change in orientation of the device.
  9. The method of claim 8, wherein the motion sensor comprises at least one of a gyroscope and an accelerometer.
  10. The method of any of claims 8-9, further wherein:
    the change in orientation of the device comprises at least one of an angular rotation and a linear displacement of the device; and
    dynamically modifying the displayed animation of the virtual object includes adjusting the displayed animation to match the angular rotation or the linear displacement of the device.
  11. The method of any of claims 1-10, further comprising:
    subsequent to initiating display of the animation at the virtual object, detecting a third set of data at the one or more sensors;
    determining whether the third set of data satisfies the set of player presence criteria;
    in accordance with a determination that the third set of data satisfies the set of player presence criteria, forgoing incrementing a score associated with the user and incrementing a score associated with the virtual object; and
    in accordance with a determination that the third set of data does not satisfy the set of player presence criteria, incrementing the score associated with the user and forgoing incrementing the score associated with the virtual object.
  12. The method of claim 11, further wherein:
    the third set of data is detected upon lapse of a predetermined time interval T after initiating display of the animation at the virtual object.
  13. The method of any of claims 1-12, further comprising:
    activating the one or more sensors in response to launching an application corresponding to the virtual object at the electronic device.
  14. The method of any of claims 1-13, further wherein:
    the virtual object is a virtual boxer, and
    the displayed animation comprises a punching action by the virtual boxer in a direction out of the display screen and toward the user.
  15. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by an electronic device having a display screen and one or more sensors, cause the device to perform any of the methods of claims 1-14.
  16. An electronic device, comprising:
    a display screen;
    one or more sensors;
    one or more processors;
    memory; and
    one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of claims 1-14.
  17. An electronic device, comprising:
    means for performing any of the methods of claims 1-14.
PCT/CN2018/111917 2017-10-27 2018-10-25 Interactive intelligent virtual object WO2019080902A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762578281P 2017-10-27 2017-10-27
US62/578,281 2017-10-27

Publications (1)

Publication Number Publication Date
WO2019080902A1 true WO2019080902A1 (en) 2019-05-02

Family

ID=66246204

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/111917 WO2019080902A1 (en) 2017-10-27 2018-10-25 Interactive intelligent virtual object

Country Status (1)

Country Link
WO (1) WO2019080902A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11797175B2 (en) 2021-11-04 2023-10-24 Microsoft Technology Licensing, Llc Intelligent keyboard attachment for mixed reality input

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050212753A1 (en) * 2004-03-23 2005-09-29 Marvit David L Motion controlled remote controller
US20080211771A1 (en) * 2007-03-02 2008-09-04 Naturalpoint, Inc. Approach for Merging Scaled Input of Movable Objects to Control Presentation of Aspects of a Shared Virtual Environment
CN102004840A (en) * 2009-08-28 2011-04-06 深圳泰山在线科技有限公司 Method and system for realizing virtual boxing based on computer
CN102884490A (en) * 2010-03-05 2013-01-16 索尼电脑娱乐美国公司 Maintaining multiple views on a shared stable virtual space
CN103858074A (en) * 2011-08-04 2014-06-11 视力移动技术有限公司 System and method for interfacing with a device via a 3d display


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18869671

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 30.09.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18869671

Country of ref document: EP

Kind code of ref document: A1