CN102448560B - User movement feedback via on-screen avatars - Google Patents

User movement feedback via on-screen avatars

Info

Publication number
CN102448560B
CN102448560B CN2010800246209A CN201080024620A CN102448560B CN 102448560 B CN102448560 B CN 102448560B CN 2010800246209 A CN2010800246209 A CN 2010800246209A CN 201080024620 A CN201080024620 A CN 201080024620A CN 102448560 B CN102448560 B CN 102448560B
Authority
CN
China
Prior art keywords
user
avatar
feedback
computing environment
capture area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN2010800246209A
Other languages
Chinese (zh)
Other versions
CN102448560A (en)
Inventor
E·C·吉埃默三世
T·J·帕希
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corp filed Critical Microsoft Corp
Publication of CN102448560A
Application granted
Publication of CN102448560B
Legal status: Active


Classifications

    • A — HUMAN NECESSITIES; A63 — SPORTS; GAMES; AMUSEMENTS; A63F — Card, board, or roulette games; indoor games using small moving playing bodies; video games; games not otherwise provided for
    • A63F 13/537 — Controlling the output signals based on the game progress, involving additional visual information provided to the game scene using indicators, e.g. showing the condition of a game character on screen
    • A63F 13/428 — Processing input control signals of video game devices, involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
    • A63F 13/67 — Generating or modifying game content before or while executing the game program, adaptively or by learning from player actions, e.g. skill level adjustment
    • A63F 13/213 — Input arrangements for video game devices characterised by their sensors, purposes or types, comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • A63F 13/655 — Generating or modifying game content automatically by game devices or servers from real world data, e.g. by importing photos of the player
    • A63F 13/812 — Special adaptations for executing a specific game genre or game mode: ball games, e.g. soccer or baseball
    • A63F 13/833 — Special adaptations for executing a specific game genre or game mode: hand-to-hand fighting, e.g. martial arts competition
    • A63F 13/843 — Special adaptations involving concurrently two or more players on the same game device, e.g. requiring the use of a plurality of controllers or of a specific view of game data for each player
    • A63F 2300/1093 — Input arrangements for converting player-generated signals into game device control signals, comprising photodetecting means, e.g. a camera, using visible light
    • A63F 2300/5553 — Details of game data or player data management using player registration data: user representation in the game field, e.g. avatar
    • A63F 2300/8088 — Features specially adapted for executing a specific type of game involving concurrently several players in a non-networked game, e.g. on the same game console

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The following discloses using avatars to provide feedback to users of a gesture-based computing environment about one or more features of the gesture-based computing environment. A gesture-based computing environment may not, in some circumstances, use a physical controller that associates a player with the computing environment. Accordingly, a player may not be assigned a player number. Thus, rights and features typically associated with a particular controller may not otherwise be available to a user of a gesture-based system.

Description

System and method for providing user-movement feedback via on-screen avatars
Background
Many computing applications, such as computer games, multimedia applications, and office applications, use controls to allow users to directly manipulate game characters or other aspects of an application. Typically, such controls are input using, for example, controllers, remote controls, keyboards, mice, and the like. Unfortunately, these controls can be difficult to learn, creating a barrier between a user and such games and applications. Furthermore, these controls may differ from the actual game actions or other application actions for which they are used.
Summary
Disclosed herein is the use of avatars to provide feedback to users of a gesture-based computing environment, i.e., one that determines user input by recognizing the user's gestures, movements, or poses. Such a gesture-based computing environment may not use a physical controller to associate a player with the computing environment. Accordingly, a controller-based player number or identifier may not be assigned to the player. Instead, the capabilities, privileges, rights, and features typically associated with a particular controller may be associated with the recognized user, and feedback about the user's rights, capabilities, features, permissions, and the like may be provided via the user's avatar. For example, this feedback may inform the user that he or she has been "recognized" by the system or bound to it as a controller, may indicate the system's response to the user's recognized gestures, may indicate the particular player number that has been assigned to the user, or may indicate whether the user is within the system's capture area and when the user may input gestures.
Aspects of the avatar associated with a user may change when particular permissions or features associated with that user change. For example, if a user gains the right to select a level or path in a game environment, his or her avatar may change in size, brightness, or color, change position on the screen or within an arrangement of depicted avatars, acquire one or more objects, or even simply appear on the screen. This is particularly important where two or more users may be in the capture area of the gesture-based computing environment at the same time.
Aspects of a gesture-based computing environment can create situations in which user feedback is needed for the system to properly receive gesture-based commands from the user. For example, a user may partially walk out of the capture area. To return to the capture area, the user may need feedback from the system informing them that they are partially or entirely outside the capture area. This feedback may take the form of virtual feedback, namely a change to one or more aspects of the avatar.
The avatar may also provide the user with feedback about the response the gesture-based computing environment makes to the user's gestures. For example, if the user raises an arm to a certain height, the avatar associated with that user may raise its arm as well, and the user can see how high they need to lift their arm for the avatar's arm to extend fully. The user is thereby given feedback about the degree to which a gesture must be performed in order to elicit the desired response from the system.
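To make this idea concrete, here is a minimal illustrative sketch in Python of how such gesture-to-avatar feedback could be computed; the function name, the joint coordinates, and the 0.55 m full-extension threshold are assumptions for illustration, not details taken from the patent:

```python
def normalized_arm_raise(shoulder_y: float, wrist_y: float,
                         full_extension_m: float = 0.55) -> float:
    """Map the wrist's height above the shoulder (meters, y up) onto [0, 1].

    full_extension_m is the physical rise treated as a fully raised
    arm -- an assumed tuning constant, not a value from the patent.
    """
    rise = wrist_y - shoulder_y
    return max(0.0, min(1.0, rise / full_extension_m))

# The avatar's arm is posed from the normalized value, so the on-screen
# arm reaches full extension only when the physical gesture does.
for rise in (0.1, 0.3, 0.55):
    print(round(normalized_arm_raise(1.4, 1.4 + rise), 2))  # 0.18, 0.55, 1.0
```

Because the avatar's arm only extends fully at a normalized value of 1.0, the user can read how much farther to raise their arm directly off the screen.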
Additionally, the avatar may be used to inform users when they have the right to input gesture-based commands to the gesture-based computing environment and what types of commands they may input. For example, in a racing game, when the avatar is seated in a vehicle, the user can tell from that arrangement that they have control of a particular vehicle and that they may input certain gestures dedicated to controlling the vehicle as commands to the computing environment.
A user may also hold an object while controlling one or more aspects of the gesture-based computing environment. The gesture-based system may detect, track, and model the object and place a corresponding virtual object in the avatar's hand. One or more aspects of the virtual object may change in order to inform the user about a feature of the object; for example, aspects of the virtual object may change if the physical object is not within the capture area. As another example, the user may hold a short handle representing, for instance, a trimming tool; the virtual object held by the avatar may then include that handle extended along a virtual "blade" of the trimming tool.
Brief description of the drawings
Figures 1A, 1B, and 1C illustrate an example embodiment of a gesture-based control system with a user playing a game.
Figure 2 illustrates an example embodiment of a capture device that may be used in a gesture-based system.
Figure 3 illustrates an example embodiment of a computing environment that may be used to interpret one or more gestures of a user who is bound to the gesture-based system and associated with a virtual port.
Figure 4 illustrates another example embodiment of a computing environment that may be used to interpret one or more gestures of a user who is bound to the gesture-based system and associated with a virtual port.
Figure 5 illustrates a prior-art control environment for a gaming system, in which controllers connected by cable or wirelessly may be used to control the computing environment.
Figure 6 illustrates multiple users in the capture area of a gesture-based system that may bind the users, provide them with feedback, and associate them with virtual ports.
Figure 7 illustrates an example of a user as modeled by a gesture-based system, in which the user is modeled as joints and limbs whose motion may be interpreted as gestures by the gesture-based computing environment.
Figure 8 depicts a series of sample avatars that may be presented on a display screen.
Figure 9 depicts a flowchart for associating an avatar with a user and providing feedback to the user via the avatar.
Figure 10 depicts a flowchart for providing users with feedback about their position within the capture area.
Figure 11 depicts a flowchart for associating multiple users with avatars and providing feedback to those users via the avatars.
Figure 12 depicts a flowchart for associating an avatar with a user and providing feedback about the user's gestures via the avatar.
Detailed description of illustrative embodiments
As described herein, a gesture-based system may detect a user and associate that user with an avatar. The avatar may be used to provide the user with feedback about one or more capabilities, features, rights, or privileges associated with that user. These features, rights, and privileges may include, for example, the right to make menu selections or input commands, the system's response to gestures, and information about whether the user is centered in the capture area and the direction in which they need to move. In a computing environment that is not gesture-based, such features, rights, and privileges would be associated with a physical controller. A gesture-based system, however, may need to provide feedback about these permissions, rights, or privileges to the user precisely because the user no longer holds a physical controller.
In one embodiment, the avatar may be arranged within the computing environment and presented on the display screen in a manner that informs the user about the rights that user currently has. For example, if the avatar is seen holding a prop such as a weapon, or sitting behind the wheel of a car in the virtual world, the user may have gesture-based control of those objects. Providing the user with visual feedback about their current state and privileges in the computing environment thus gives the user the information necessary to decide what input to provide to the gesture-based system and what actions the avatar can take.
Figures 1A and 1B illustrate an example embodiment of a configuration of a gesture-based system 10 with a user 18 playing a boxing game. In an example embodiment, the gesture-based system 10 may be used to bind, recognize, analyze, and track a human target such as user 18; create an avatar; link features, rights, or privileges and associate them with the human target; provide feedback; receive gesture-based input; and/or adapt to aspects of the human target.
As shown in Figure 1A, the gesture-based system 10 may include a computing environment 12. The computing environment 12 may be a computer, a gaming system, a console, or the like. According to an example embodiment, the computing environment 12 may include hardware components and/or software components such that the computing environment 12 may be used to execute applications such as gaming applications and non-gaming applications.
As shown in Figure 1A, the gesture-based system 10 may also include a capture device 20. The capture device 20 may be, for example, a detector that may be used to monitor one or more users, such as user 18, such that gestures performed by the one or more users may be captured, analyzed, and tracked to provide user feedback and to perform one or more controls or actions within an application, as will be described in more detail below.
According to one embodiment, the gesture-based system 10 may be connected to an audiovisual device 16 such as a television, a monitor, or a high-definition television (HDTV) that may display an avatar and provide user 18 with visual and/or audio feedback related to associated rights, features, and privileges, the user's movements, virtual ports, binding, and games or applications. For example, the computing environment 12 may include a video adapter such as a graphics card and/or an audio adapter such as a sound card, and these adapters may provide audiovisual signals associated with feedback about features, rights, and privileges, gaming applications, non-gaming applications, and the like. The audiovisual device 16 may receive the audiovisual signals from the computing environment 12 and may then output the game or application visuals and/or audio associated with those signals to user 18. According to one embodiment, the audiovisual device 16 may be connected to the computing environment 12 via, for example, an S-Video cable, a coaxial cable, an HDMI cable, a DVI cable, a VGA cable, or a wireless connection.
As shown in Figures 1A and 1B, the gesture-based system 10 may be used to model, recognize, analyze, and/or track a human target such as user 18. For example, the capture device 20 may be used to track user 18 such that the position and movements of user 18 may be captured and interpreted as controls that can be used to affect the application being executed by the computing environment 12. Thus, according to one embodiment, user 18 may move his or her body to control the application.
As shown in Figures 1A and 1B, in an example embodiment, the application executing on the computing environment 12 may be a boxing game that user 18 is playing. For example, the computing environment 12 may use the audiovisual device 16 to provide user 18 with a visual representation of a sparring partner 22. The computing environment 12 may also use the audiovisual device 16 to provide, on screen 14, a visual representation of a user avatar 24 that user 18 may control with his or her movements. For example, as shown in Figure 1B, user 18 may throw a punch in physical space to cause the user avatar 24 to throw a punch in game space. Thus, according to an example embodiment, the computing environment 12 and the capture device 20 of the gesture-based system 10 may be used to recognize and analyze the punch of user 18 in physical space such that the punch may be interpreted as a game control of the user avatar 24 in game space.
In one embodiment, the user avatar 24 may be specific to user 18. User 18 may play any number of games, and each game may allow the use of user avatar 24. In one embodiment, the user may create avatar 24 from a list of menu options. In another embodiment, avatar 24 may be created by detecting one or more aspects of user 18, such as the user's hair color, height, size, shirt color, or any other feature of user 18, and then providing an avatar based on those aspects of user 18. As another example, avatar 24 may begin as a representation of the user captured by the capture device, which the user may then alter in any fashion, by adding or removing features, adding imaginative elements, and so forth.
Other movements or poses by user 18 may also be interpreted as other controls or actions, such as controls to run, walk, accelerate, slow down, stop, shift gears or weapons, aim, fire, duck, jump, grab, open, close, strum, play, wave an arm, lean, look, pat, bob and weave, shuffle, block, jab, throw punches of varying power, and so forth. Any other controls or actions that may be required to control an avatar or otherwise control a computing environment are included. Furthermore, some movements or poses may be interpreted as controls corresponding to actions other than controlling the user avatar 24. For example, the user may use movements or poses to enter, exit, turn the system on or off, pause, switch virtual ports, save a game, select a level, profile, or menu, view high scores, communicate with a friend, and so forth. Additionally, the full range of motion of user 18 may be available, used, and analyzed in any suitable manner to interact with an application. These movements and poses may be any movement or pose available to the user, including entering and exiting the capture area. For example, in one embodiment, entering the scene may itself be an entry gesture or command in a gesture-based system.
As shown in Figure 1C, a human target such as user 18 may hold an object. In such embodiments, the user of the electronic game may hold the object such that the motions of the user and the object may be used to adjust and/or control parameters of the game. For example, the motion of a user holding a racket 21 may be tracked and used to control an on-screen racket or bat 23 in an electronic sports game. In another example embodiment, the motion of a user holding an object may be tracked and used to control an on-screen weapon in an electronic combat game. Any other object may also be included, such as one or more gloves, balls, bats, clubs, guitars, microphones, sticks, pets, animals, drums, and the like.
In another embodiment, the user avatar 24 may be depicted on the audiovisual display with one or more objects. As a first example, the gesture-based system may detect an object such as racket 21, and the system may model and track that object. The avatar may be depicted with the object the user is holding, and the virtual object may track the motion of the physical object within the capture area. In such an example, if the object moves outside the capture area, one or more aspects of the virtual object held by the avatar may change. For example, if the racket moves partially or entirely out of the capture area, the virtual object held by the avatar may brighten, darken, grow or shrink, change color, disappear, or otherwise change to provide the user with feedback about the state of the object relative to the capture area.
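A minimal sketch of this object-state feedback, assuming a rectangular capture area and an alpha (opacity) channel on the virtual object; the bounds and alpha values below are invented for illustration and are not specified in the patent:

```python
from dataclasses import dataclass

@dataclass
class CaptureArea:
    x_min: float; x_max: float   # lateral bounds, meters
    z_min: float; z_max: float   # depth bounds, meters

    def contains(self, x: float, z: float) -> bool:
        return self.x_min <= x <= self.x_max and self.z_min <= z <= self.z_max

@dataclass
class VirtualObject:
    alpha: float = 1.0           # on-screen opacity of the avatar's object

def update_virtual_object(obj: VirtualObject, x: float, z: float,
                          area: CaptureArea) -> None:
    # Dim the avatar's virtual object when the tracked physical object
    # leaves the capture area, cueing the user to bring it back in view.
    obj.alpha = 1.0 if area.contains(x, z) else 0.35

racket, area = VirtualObject(), CaptureArea(-1.2, 1.2, 0.8, 4.0)
update_virtual_object(racket, x=1.6, z=2.0, area=area)
print(racket.alpha)  # 0.35 -- the racket has strayed outside the area
```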
In another example, avatar 24 may be depicted with an object in order to provide the user with feedback about rights, privileges, or features associated with that user. For example, if the user is playing a track-and-field game, and the avatar is first depicted without a relay baton and then depicted holding one, the user knows when they may need to perform one or more tasks. As another example, in a quiz-show style game, the avatar may be equipped with an on-screen buzzer, which informs the user that he or she has the right to buzz in. As a further example, if there are multiple users and a menu-selection option, the user who has the right to make a selection on the menu screen may be provided with an object indicating that the user has that right.
According to other example embodiments, the gesture-based system 10 may be used to interpret target movements and poses as operating-system and/or application controls outside the realm of games. For example, virtually any controllable aspect of an operating system and/or application may be controlled by the movements or poses of a target such as user 18.
Figure 2 illustrates an example embodiment of the capture device 20 that may be used in the gesture-based system 10. According to an example embodiment, the capture device 20 may be configured to capture video with depth information, including a depth image that may include depth values, via any suitable technique including, for example, time-of-flight, structured light, or stereo imaging. According to one embodiment, the capture device 20 may organize the calculated depth information into "Z layers," i.e., layers perpendicular to a Z axis extending from the depth camera along its line of sight.
As shown in Figure 2, according to an example embodiment, the image camera component 25 may include an IR light component 26 that may be used to capture a depth image of a scene, a three-dimensional (3-D) camera 27, and an RGB camera 28. For example, in time-of-flight analysis, the IR light component 26 of the capture device 20 may emit infrared light onto the scene, and sensors (not shown), such as the 3-D camera 27 and/or the RGB camera 28, may then detect the light backscattered from the surfaces of one or more targets and objects in the scene. In some embodiments, pulsed infrared light may be used such that the time between an outgoing light pulse and a corresponding incoming light pulse may be measured and used to determine the physical distance from the capture device 20 to a particular location on a target or object in the scene. Additionally, in other example embodiments, the phase of the outgoing light wave may be compared to the phase of the incoming light wave to determine a phase shift. The phase shift may then be used to determine the physical distance from the capture device to a particular location on a target or object.
According to another example embodiment, time-of-flight analysis may be used to indirectly determine the physical distance from the capture device 20 to a particular location on a target or object by analyzing the intensity of the reflected beam of light over time via various techniques including, for example, shuttered light pulse imaging.
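As a worked illustration of the pulse-timing and phase-shift measurements described above, depth follows from the round-trip time of a light pulse, or equivalently from the phase shift of a modulated wave; the constants and function names below are illustrative, not from the patent:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def depth_from_pulse(round_trip_s: float) -> float:
    # The pulse travels camera -> target -> camera, so halve the path.
    return C * round_trip_s / 2.0

def depth_from_phase(phase_shift_rad: float, modulation_hz: float) -> float:
    # A 2*pi phase shift corresponds to one full modulation wavelength
    # of round trip; depth is again half of that path.
    wavelength_m = C / modulation_hz
    return (phase_shift_rad / (2.0 * math.pi)) * wavelength_m / 2.0

print(depth_from_pulse(10e-9))              # ~1.50 m for a 10 ns round trip
print(depth_from_phase(math.pi / 2, 30e6))  # ~1.25 m at 30 MHz modulation
```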
In another example embodiment, the capture device 20 may use structured light to capture depth information. In such an analysis, patterned light (i.e., light displayed as a known pattern such as a grid or a stripe pattern) may be projected onto the scene via, for example, the IR light component 26. Upon striking the surfaces of one or more targets or objects in the scene, the pattern may deform in response. Such a deformation of the pattern may be captured by, for example, the 3-D camera 27 and/or the RGB camera 28 and may then be analyzed to determine the physical distance from the capture device to a particular location on a target or object.
According to another embodiment, the capture device 20 may include two or more physically separated cameras that may view a scene from different angles to obtain visual stereo data that may be resolved to generate depth information.
The capture device 20 may further include a microphone 30. The microphone 30 may include a transducer or sensor that can receive sound and convert it into an electrical signal. According to one embodiment, the microphone 30 may be used to reduce feedback between the capture device 20 and the computing environment 12 in the gesture-based system 10. Additionally, the microphone 30 may be used to receive audio signals that may also be provided by the user to control applications such as gaming applications and non-gaming applications that may be executed by the computing environment 12.
The capture device 20 may further include a feedback component 31. The feedback component 31 may include a light such as an LED or a bulb, a speaker, or the like. The feedback device may perform at least one of changing color, turning on or off, increasing or decreasing brightness, and flashing at varying rates. The feedback component 31 may also include a speaker that may provide one or more sounds or noises as feedback about one or more states. The feedback component may also work in combination with the computing environment 12 or the processor 32 to provide one or more forms of feedback to the user via any other element of the capture device, the gesture-based system, or the like.
In an example embodiment, the capture device 20 may further include a processor 32 that may be in operative communication with the image camera component 25. The processor 32 may include a standardized processor, a specialized processor, a microprocessor, or the like that may execute instructions, which may include instructions for receiving a depth image, instructions for determining whether a suitable target may be included in the depth image, instructions for converting a suitable target into a skeletal representation or model of that target, or any other suitable instruction.
The capture device 20 may further include a memory component 34 that may store instructions executable by the processor 32, images or frames of images captured by the 3-D camera or the RGB camera, user profiles, or any other suitable information. According to an example embodiment, the memory component 34 may include random access memory (RAM), read-only memory (ROM), cache, flash memory, a hard disk, or any other suitable storage component. As shown in Figure 2, in one embodiment, the memory component 34 may be a separate component in communication with the image capture component 25 and the processor 32. According to another embodiment, the memory component 34 may be integrated into the processor 32 and/or the image capture component 25.
As shown in Figure 2, the capture device 20 may communicate with the computing environment 12 via a communication link 36. The communication link 36 may be a wired connection, such as a USB connection, a FireWire connection, or an Ethernet cable connection, and/or a wireless connection such as a wireless 802.11b, 802.11g, 802.11a, or 802.11n connection. According to one embodiment, the computing environment 12 may provide a clock to the capture device 20 via the communication link 36 that may be used to determine, for example, when to capture a scene.
Additionally, the capture device 20 may provide the depth information and the images captured by, for example, the 3-D camera 27 and/or the RGB camera 28, along with a skeletal model that may be generated by the capture device 20, to the computing environment 12 via the communication link 36. The computing environment 12 may then use the skeletal model, the depth information, and the captured images to, for example, create a virtual screen, adapt the user interface, and control applications such as games or word processors. For example, as shown in Figure 2, the computing environment 12 may include a gesture library 190. The gesture library 190 may include a collection of gesture filters, each comprising information concerning a gesture that may be performed by the skeletal model (as the user moves). The data captured by the cameras 27 and 28 and the capture device 20, in the form of the skeletal model and the movements associated with it, may be compared against the gesture filters in the gesture library 190 to identify when a user (as represented by the skeletal model) has performed one or more gestures. Those gestures may be associated with various controls of an application. Thus, the computing environment 12 may use the gesture library 190 to interpret the movements of the skeletal model and to control an application based on those movements.
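The patent does not specify how a gesture filter is represented; purely as an illustrative sketch, a filter could be a short sequence of target joint angles compared against recent frames of the tracked skeletal model within a tolerance:

```python
from dataclasses import dataclass

@dataclass
class GestureFilter:
    name: str
    key_angles: list[float]     # per-frame target elbow angles, degrees
    tolerance_deg: float = 15.0

    def matches(self, observed: list[float]) -> bool:
        # Compare the most recent frames of the tracked skeletal model
        # against this filter's expected trajectory.
        if len(observed) < len(self.key_angles):
            return False
        recent = observed[-len(self.key_angles):]
        return all(abs(o - k) <= self.tolerance_deg
                   for o, k in zip(recent, self.key_angles))

gesture_library = [
    GestureFilter("jab", key_angles=[90.0, 135.0, 175.0]),
    GestureFilter("block", key_angles=[90.0, 90.0, 90.0]),
]

def recognize(observed: list[float]) -> str | None:
    for f in gesture_library:
        if f.matches(observed):
            return f.name       # elsewhere mapped to an application control
    return None

print(recognize([80.0, 95.0, 140.0, 170.0]))  # -> jab
```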
Figure 3 shows an example embodiment of a computing environment that may be used to implement the computing environment 12 of Figures 1A-2. The computing environment 12 may include a multimedia console 100, such as a gaming console. As shown in Figure 3, the multimedia console 100 has a central processing unit (CPU) 101 having a level 1 cache 102, a level 2 cache 104, and a flash ROM (read-only memory) 106. The level 1 cache 102 and the level 2 cache 104 temporarily store data and hence reduce the number of memory access cycles, thereby improving processing speed and throughput. The CPU 101 may be provided with more than one core, and thus with additional level 1 and level 2 caches 102 and 104. The flash ROM 106 may store executable code that is loaded during an initial phase of the boot process when the multimedia console 100 is powered on.
A graphics processing unit (GPU) 108 and a video encoder/video codec (coder/decoder) 114 form a video processing pipeline for high-speed, high-resolution graphics processing. Data is carried from the GPU 108 to the video encoder/video codec 114 via a bus. The video processing pipeline outputs data to an A/V (audio/video) port 140 for transmission to a television or other display. A memory controller 110 is connected to the GPU 108 to facilitate processor access to various types of memory 112, such as, but not limited to, RAM (random access memory).
The multimedia console 100 includes an I/O controller 120, a system management controller 122, an audio processing unit 123, a network interface controller 124, a first USB host controller 126, a second USB controller 128, and a front panel I/O subassembly 130, preferably implemented on a module 118. The USB controllers 126 and 128 serve as hosts for peripheral controllers 142(1)-142(2), a wireless adapter 148, and an external memory device 146 (e.g., flash memory, an external CD/DVD ROM drive, removable media, etc.). The network interface 124 and/or the wireless adapter 148 provide access to a network (e.g., the Internet, a home network, etc.) and may be any of a wide variety of wired or wireless adapter components, including an Ethernet card, a modem, a Bluetooth module, a cable modem, and the like.
System memory 143 is provided to store application data that is loaded during the boot process. A media drive 144 is provided and may comprise a DVD/CD drive, a hard drive, or another removable media drive. The media drive 144 may be internal or external to the multimedia console 100. Application data may be accessed via the media drive 144 for execution, playback, etc. by the multimedia console 100. The media drive 144 is connected to the I/O controller 120 via a bus, such as a serial ATA bus or another high-speed connection (e.g., IEEE 1394).
The system management controller 122 provides a variety of service functions related to ensuring the availability of the multimedia console 100. The audio processing unit 123 and an audio codec 132 form a corresponding audio processing pipeline with high-fidelity and stereo processing. Audio data is carried between the audio processing unit 123 and the audio codec 132 via a communication link. The audio processing pipeline outputs data to the A/V port 140 for reproduction by an external audio player or device having audio capabilities.
The front panel I/O subassembly 130 supports the functionality of a power button 150 and an eject button 152, as well as any LEDs (light emitting diodes) or other indicators exposed on the outer surface of the multimedia console 100. A system power supply module 136 provides power to the components of the multimedia console 100. A fan 138 cools the circuitry within the multimedia console 100.
The front panel I/O subassembly 130 may include LEDs, a visual display screen, bulbs, speakers, or any other device that may provide audio or visual feedback about the control state of the multimedia console 100 to user 18. For example, if the system is in a state in which no users are detected by the capture device 20, that state may be reflected on the front panel I/O subassembly 130. If the system state changes, for example when a user becomes bound to the system, the feedback state is updated on the front panel I/O subassembly to reflect the change.
The CPU 101, GPU 108, memory controller 110, and various other components within the multimedia console 100 are interconnected via one or more buses, including serial and parallel buses, a memory bus, a peripheral bus, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures can include a Peripheral Component Interconnect (PCI) bus, a PCI-Express bus, and the like.
When the multimedia console 100 is powered on, application data may be loaded from the system memory 143 into memory 112 and/or the caches 102 and 104 and executed on the CPU 101. The application may present a graphical user interface that provides a consistent user experience when navigating to the different media types available on the multimedia console 100. In operation, applications and/or other media contained within the media drive 144 may be launched or played from the media drive 144 to provide additional functionality to the multimedia console 100.
The multimedia console 100 may be operated as a standalone system by simply connecting the system to a television or other display. In this standalone mode, the multimedia console 100 allows one or more users to interact with the system, watch movies, or listen to music. However, with the integration of broadband connectivity made available through the network interface 124 or the wireless adapter 148, the multimedia console 100 may further be operated as a participant in a larger network community.
When the multimedia console 100 is powered on, a set amount of hardware resources may be reserved for system use by the multimedia console operating system. These resources may include a reservation of memory (e.g., 16 MB), CPU and GPU cycles (e.g., 5%), networking bandwidth (e.g., 8 kbps), and so forth. Because these resources are reserved at system boot time, the reserved resources do not exist from the application's point of view.
In particular, the memory reservation is preferably large enough to contain the launch kernel, concurrent system applications, and drivers. The CPU reservation is preferably constant, such that if the reserved CPU usage is not consumed by the system applications, an idle thread will consume any unused cycles.
With regard to the GPU reservation, lightweight messages generated by the system applications (e.g., pop-ups) are displayed by using a GPU interrupt to schedule code to render a pop-up into an overlay. The amount of memory required for an overlay depends on the overlay area size, and the overlay preferably scales with screen resolution. Where a full user interface is used by the concurrent system application, it is preferable to use a resolution independent of the application resolution. A scaler may be used to set this resolution, eliminating the need to change frequency and cause a TV re-sync.
After the multimedia console 100 boots and system resources are reserved, concurrent system applications execute to provide system functionalities. The system functionalities are encapsulated in a set of system applications that execute within the reserved system resources described above. The operating system kernel identifies threads that are system application threads as opposed to gaming application threads. The system applications are preferably scheduled to run on the CPU 101 at predetermined times and intervals in order to provide a consistent system resource view to the application. The scheduling is intended to minimize cache disruption for the gaming application running on the console.
When a concurrent system application requires audio, audio processing is scheduled asynchronously to the gaming application due to time sensitivity. A multimedia console application manager (described below) controls the gaming application's audio level (e.g., mute, attenuate) when system applications are active.
Input devices (e.g., controllers 142(1) and 142(2)) are shared by gaming applications and system applications. The input devices are not reserved resources but are switched between system applications and gaming applications such that each will have the focus of the device. The application manager preferably controls the switching of the input stream, without requiring knowledge of the gaming application, and a driver maintains state information regarding focus switches. The cameras 27 and 28 and the capture device 20 may define additional input devices for the console 100.
Figure 4 shows another example embodiment of a computing environment 220 that may be used to implement the computing environment 12 shown in Figures 1A-2. The computing environment 220 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the presently disclosed subject matter. Neither should the computing environment 220 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment 220. In some embodiments, the various depicted computing elements may include circuitry configured to instantiate specific aspects of the present disclosure. For example, the term "circuitry" used in the disclosure may include specialized hardware components configured to perform functions by firmware or switches. In other examples, the term "circuitry" may include a general-purpose processing unit, memory, and the like, configured by software instructions that embody logic operable to perform functions. In example embodiments where the circuitry includes a combination of hardware and software, an implementer may write source code embodying the logic, and the source code may be compiled into machine-readable code that can be processed by the general-purpose processing unit. Since one skilled in the art can appreciate that the state of the art has evolved to a point where there is little difference between hardware, software, or a combination of hardware and software, the selection of hardware versus software to effectuate specific functions is a design choice left to the implementer. More specifically, one of skill in the art can appreciate that a software process can be transformed into an equivalent hardware structure, and a hardware structure can itself be transformed into an equivalent software process. Thus, the selection of a hardware implementation versus a software implementation is a design choice and left to the implementer.
In Figure 4, the computing environment 220 comprises a computer 241, which typically includes a variety of computer-readable media. Computer-readable media may be any available media that can be accessed by the computer 241 and include both volatile and nonvolatile media, removable and non-removable media. The system memory 222 includes computer storage media in the form of volatile and/or nonvolatile memory such as read-only memory (ROM) 223 and random access memory (RAM) 260. A basic input/output system 224 (BIOS), containing the basic routines that help to transfer information between elements within the computer 241, such as during start-up, is typically stored in ROM 223. RAM 260 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by the processing unit 259. By way of example, and not limitation, Figure 4 illustrates an operating system 225, application programs 226, other program modules 227, and program data 228.
The computer 241 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only, Figure 4 illustrates a hard disk drive 238 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 239 that reads from or writes to a removable, nonvolatile magnetic disk 254, and an optical disk drive 240 that reads from or writes to a removable, nonvolatile optical disk 253 such as a CD-ROM or other optical media. Other removable/non-removable, volatile/nonvolatile computer storage media that may be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like. The hard disk drive 238 is typically connected to the system bus 221 through a non-removable memory interface such as interface 234, and the magnetic disk drive 239 and the optical disk drive 240 are typically connected to the system bus 221 by a removable memory interface such as interface 235.
The drives and their associated computer storage media discussed above and illustrated in Figure 4 provide storage of computer-readable instructions, data structures, program modules, and other data for the computer 241. In Figure 4, for example, the hard disk drive 238 is illustrated as storing an operating system 258, application programs 257, other program modules 256, and program data 255. Note that these components may either be the same as or different from the operating system 225, application programs 226, other program modules 227, and program data 228. The operating system 258, application programs 257, other program modules 256, and program data 255 are given different numbers here to illustrate that, at a minimum, they are different copies. A user may enter commands and information into the computer 241 through input devices such as a keyboard 251 and a pointing device 252, commonly referred to as a mouse, trackball, or touch pad. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, capture device, or the like. These and other input devices are often connected to the processing unit 259 through a user input interface 236 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port, or universal serial bus (USB). The cameras 27 and 28 and the capture device 20 may define additional input devices for the console 100. A monitor 242 or other type of display device is also connected to the system bus 221 via an interface, such as a video interface 232. In addition to the monitor, computers may also include other peripheral output devices, such as speakers 244 and a printer 243, which may be connected through an output peripheral interface 233.
The computer 241 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 246. The remote computer 246 may be a personal computer, a server, a router, a network PC, a peer device, or another common network node, and typically includes many or all of the elements described above relative to the computer 241, although only a memory storage device 247 has been illustrated in Figure 4. The logical connections depicted in Figure 4 include a local area network (LAN) 245 and a wide area network (WAN) 249, but may also include other networks. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the Internet.
When used in a LAN networking environment, the computer 241 is connected to the LAN 245 through a network interface or adapter 237. When used in a WAN networking environment, the computer 241 typically includes a modem 250 or other means for establishing communications over the WAN 249, such as the Internet. The modem 250, which may be internal or external, may be connected to the system bus 221 via the user input interface 236 or another appropriate mechanism. In a networked environment, program modules depicted relative to the computer 241, or portions thereof, may be stored in a remote memory storage device. By way of example, and not limitation, Figure 4 illustrates remote application programs 248 as residing on the memory device 247. It will be appreciated that the network connections shown are exemplary, and other means of establishing a communications link between the computers may be used.
Figure 5 illustrates an example embodiment of a prior-art system using only controllers connected by wire or wirelessly. In this embodiment, controllers 294 such as game controllers, joysticks, mice, and keyboards are connected to the computing environment 12 by a cable 292 or wirelessly. Pressing a particular button or key may cause a set signal to be sent to the computing environment, and the computing environment may respond in a predetermined manner. Furthermore, these controllers are typically associated with particular physical ports 290. In this prior-art gaming environment example, controller 1 may be plugged into a first physical port, controller 2 into a second physical port, and so on. Controller 1 may have an associated primacy of control, or control over certain aspects of the gaming environment that are unavailable to the other controllers. For example, when selecting a particular level or stage in a fighting game, only the first controller may be able to make the selection.
A gesture-based system, such as the gesture-based system 10, may need to associate certain capabilities, features, rights, and privileges with a user without using the physical cables and physical ports of the prior art. If there are multiple users, each associated with a virtual port, the users may need feedback to determine which ports they are associated with. After a user's initial association with a virtual port, if the port needs to be re-associated with a second user, both users may need some feedback indicating that the virtual port has been re-associated. When a virtual port is re-associated with a different user, additional audio or visual feedback (beyond any standard feedback that may be displayed continuously) may be provided at or near the time of the re-association to further alert the users that the re-association has taken place. Users may also need to be informed about other aspects of the computing environment, and a user's avatar may change in one or more ways to provide feedback about the computing environment.
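A minimal sketch of the virtual-port bookkeeping this implies, with feedback delivered to both users on re-association; the four-port limit and the callback shape are assumptions for illustration:

```python
class VirtualPortManager:
    MAX_PORTS = 4  # assumed limit, analogous to four physical ports

    def __init__(self, notify):
        # notify(user_id, message) delivers feedback, e.g. by changing
        # that user's avatar or playing a sound near the re-association.
        self.ports: dict[int, str] = {}   # port number -> user id
        self.notify = notify

    def bind(self, user_id: str) -> int:
        free = set(range(1, self.MAX_PORTS + 1)) - set(self.ports)
        port = min(free)                  # raises ValueError if all taken
        self.ports[port] = user_id
        self.notify(user_id, f"bound to virtual port {port}")
        return port

    def reassociate(self, port: int, new_user: str) -> None:
        old_user = self.ports[port]
        self.ports[port] = new_user
        # Both users receive feedback so the handoff is unambiguous.
        self.notify(old_user, f"released virtual port {port}")
        self.notify(new_user, f"now holds virtual port {port}")

mgr = VirtualPortManager(notify=lambda user, msg: print(user, "->", msg))
p1 = mgr.bind("player_a")        # player_a -> bound to virtual port 1
mgr.reassociate(p1, "player_b")  # both players notified of the handoff
```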
Fig. 6 illustrates capture region 300, and capture region 300 can as above be caught by capture device 20 with reference to Figure 1A-1C describedly.User 302 can partly be arranged in capture region 300.In Fig. 6, user 302 is not in the capture region 300 of capture device 20 fully, this means that the system 10 based on posture may not carry out the one or more actions that are associated with user 302.In this case, the feedback that offers user 302 by computing environment 12 or capture device 20 or audiovisual display 16 can be changed one or more aspects of the incarnation that is associated with this user.
In another embodiment, can be in the capture region 300 such as user 304 users such as grade.In this case, user 304 can be bound this based on the effector of the control system of posture based on the control system 10 of posture.Can provide about the one or more feedback in following to user 304 by incarnation: the scope of the control that user player number, user have computer environment or incarnation and type, user's current attitude and posture and any feature authority and privilege that is associated.
If a plurality of users are in the capture region 300, then can provide feedback about feature, authority and the privilege that is associated with each user in the capture region based on the control system of posture.For example, all users in the capture region have in response to each user's motion or attitude and the corresponding incarnation that changes based on the feature that is associated with each user, authority and privilege and in one or more modes.
A user may walk too far from the capture device, come too close to it, or drift too far to the left or right. In this case, the gesture-based control system can provide feedback, which may take the form of an 'out of bounds' signal or of specific feedback notifying the user that he may need to move in a particular direction so that the capture device can correctly capture his image. For example, if user 304 moves too far to the left, an arrow can pop up on the screen directing him back to the right, or the avatar can point in the direction the user needs to move. Such indications provided to the user can also be delivered via the avatar, on the capture device, or by the computing environment. An audio signal can accompany the visual feedback described above.
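One way to compute such a directional cue is sketched below; the region bounds, coordinate convention, and threshold values are illustrative assumptions only, not values from the disclosure.

```python
# Sketch of an out-of-bounds check that picks a direction hint for the user.
# Coordinates: x is lateral offset from the device axis, z is distance from
# the device; the bounds below are arbitrary illustrative values.

def direction_hint(user_x, user_z, region):
    """Return a hint string, or None if the user is inside the region."""
    hints = []
    if user_x < region["left"]:
        hints.append("move right")
    elif user_x > region["right"]:
        hints.append("move left")
    if user_z < region["near"]:
        hints.append("step back")
    elif user_z > region["far"]:
        hints.append("step closer")
    return ", ".join(hints) or None

region = {"left": -1.5, "right": 1.5, "near": 1.0, "far": 4.0}
print(direction_hint(-2.0, 0.8, region))  # "move right, step back"
```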
Fig. 7 depicts a skeletal model 510 of a human user, which can be created using the capture device 20 and the computing environment 12. This model can be used by one or more aspects of the gesture-based system 10 to determine gestures and the like. The model can be composed of joints 512 and bones 514. Tracking these joints and bones allows the gesture-based system to determine what gestures the user is making, and these gestures can be used to control the gesture-based system. Additionally, the skeletal model can be used to construct the avatar and to track the user's gestures in order to control one or more aspects of that avatar.
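As a purely illustrative data-structure sketch, a skeletal model of this kind can be represented as named joints plus parent-child bone pairs; the joint names and the tiny three-joint skeleton below are assumptions, not the model of Fig. 7.

```python
# Assumed representation of a skeletal model: joints plus bones
# (parent-child joint pairs). Names and values are illustrative.

from dataclasses import dataclass

@dataclass
class Joint:
    name: str
    x: float
    y: float
    z: float

class Skeleton:
    def __init__(self, joints, bones):
        self.joints = {j.name: j for j in joints}
        self.bones = bones  # list of (parent_name, child_name) pairs

    def bone_vectors(self):
        """Vectors along each bone, usable for simple pose/gesture tests."""
        for parent, child in self.bones:
            p, c = self.joints[parent], self.joints[child]
            yield (parent, child), (c.x - p.x, c.y - p.y, c.z - p.z)

skel = Skeleton(
    [Joint("shoulder", 0.0, 1.4, 2.0),
     Joint("elbow", 0.3, 1.2, 2.0),
     Joint("hand", 0.5, 1.5, 2.0)],
    [("shoulder", "elbow"), ("elbow", "hand")],
)
for bone, vec in skel.bone_vectors():
    print(bone, vec)
```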
Fig. 8 depicts three example avatars, each of which can serve as a representation of a user in the gesture-based system. In one embodiment, the user can create an avatar using menus, forms, and the like. For example, features such as hair color, height, and eye color can be selected from any number of options. In another embodiment, the capture device can capture the user's skeletal model along with other information about the user. For example, the skeletal model can provide bone positions, and one or more cameras can provide the user's silhouette. An RGB camera can be used to determine the color of the user's hair, eyes, clothing, skin, and so on. An avatar can therefore be created based on aspects of the user. Additionally, the computing environment can create a representation of the user, which the user can then modify using one or more forms or menus.
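For instance, one simple, assumed way to derive an avatar color from RGB camera data is to average sampled pixels from a body region; the pixel values and the notion of a pre-segmented "hair region" here are hypothetical, since the disclosure does not specify a method.

```python
# Illustrative sketch of deriving an avatar hair color from an RGB frame by
# averaging pixels in an assumed, pre-segmented hair region.

def average_color(pixels):
    """Average a list of (r, g, b) tuples into one representative color."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    b = sum(p[2] for p in pixels) / n
    return (round(r), round(g), round(b))

hair_region = [(92, 64, 51), (88, 60, 48), (95, 66, 50)]  # sampled pixels
print(average_color(hair_region))  # (92, 63, 50): avatar hair color
```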
As a further example, the system can create a random avatar, or offer pre-created avatars from which the user can select. The user can have one or more profiles, each of which may contain one or more avatars, and the user or the system can make a selection for a particular gaming session, game mode, and so on.
The avatars depicted in Fig. 8 can track the motions that the user makes. For example, if a user in the capture region raises his or her arm, the avatar's arm can rise as well. This provides the user with information about how the avatar's motion relates to the user's own. For example, by raising his or her hand the user can determine which is the avatar's right hand and which is its left. Additionally, by making a series of motions and observing how the avatar responds, the user can determine the avatar's responses. As another example, if the avatar is restricted in a particular environment (that is, the avatar cannot move its legs or feet), the user can determine this fact by attempting to move his or her legs and receiving no response from the avatar. Furthermore, some gestures can control the avatar in ways that are not directly related to the user's own pose. For example, in a racing game, moving one foot forward or backward can cause a car to accelerate or decelerate. Based on such gestures, the avatar can provide feedback about the control of the car.
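A minimal sketch of the restricted-limb behavior follows, under the assumption that the system tracks named joints and keeps a per-limb lock list; the joint names and positions are illustrative.

```python
# Sketch of mirroring user joint motion onto an avatar while honoring
# per-limb restrictions, so a user who moves a locked limb sees no response
# (the feedback mechanism described for restricted environments).

def mirror_pose(user_joints, locked_limbs, avatar_joints):
    """Copy tracked joint positions onto the avatar, skipping locked limbs."""
    for name, position in user_joints.items():
        if name in locked_limbs:
            continue  # no response: the user infers the restriction
        avatar_joints[name] = position
    return avatar_joints

user = {"left_arm": (0.2, 1.6), "right_leg": (0.1, 0.4)}
avatar = {"left_arm": (0.0, 1.0), "right_leg": (0.0, 0.5)}
print(mirror_pose(user, {"right_leg"}, avatar))
# left_arm follows the user; right_leg stays put, signaling the restriction.
```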
Fig. 9 is a flowchart illustrating one embodiment of a method by which a user in the capture region is detected at step 601 and associated with a first avatar at step 603. At 603, the avatar can be associated with the first user by having the gesture-based system identify the user and associate the avatar with him, or by allowing the user to select a profile or avatar from a form. As another example, at 603, an avatar can be created automatically or via selections from one or more forms, menus, and the like, and then associated with the user. As yet another example, at 603, an avatar can be selected at random and associated with the user. Unlike systems in which an avatar is associated with a specific physical controller, in the method shown the avatar is associated with a user identified by the capture device 20 and computing environment 12 of the gesture-based system 10.
At 605, abilities, features, rights, and/or privileges can be associated with the identified user. These can be any abilities, features, rights, and/or privileges available in the gesture-based computing environment. Some non-limiting examples include: the user's permissions in a game or application; the menu selection options available to the user; the right to enter gesture-based commands; player number assignment; detection determinations; association with and binding to a virtual port; responses of the gesture-based system to gestures; profile options; or any other aspect of the gesture-based computing environment.
At 607, the user can be notified of one or more of the associated abilities, rights, features, and/or privileges during the user's computing session by changing one or more aspects of the avatar associated with the identified user. For example, the avatar can change color, grow or shrink in size, brighten or darken, gain a halo or another object, move up or down on the screen, reorder itself relative to other avatars in a circle or row, and so on. The avatar can also move or pose in one or more ways to provide the user with feedback about the gesture-based computing environment.
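Putting steps 601 through 607 together, a compact sketch of the flow might look like the following. The Avatar class, the privilege names, and the specific appearance changes are assumptions chosen for illustration, not the patent's API.

```python
# End-to-end sketch of the Fig. 9 flow (steps 601-607): detect a user,
# associate an avatar, attach privileges, then modify the avatar's
# appearance to signal them. All names here are illustrative.

class Avatar:
    def __init__(self):
        self.color = "gray"
        self.halo = False

def run_session(detected_user, privileges):
    avatar = Avatar()                            # 603: associate an avatar
    detected_user["avatar"] = avatar
    detected_user["privileges"] = privileges     # 605: attach rights/privileges
    # 607: change the avatar's appearance to notify the user
    if "active_player" in privileges:
        avatar.color = "gold"
    if "menu_access" in privileges:
        avatar.halo = True
    return avatar

avatar = run_session({"id": 1}, {"active_player", "menu_access"})
print(avatar.color, avatar.halo)  # gold True
```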
Figure 10 is a flowchart of an embodiment of a method by which the gesture-based computing environment notifies a user, via the user's avatar, that one or more body parts are not detected in the capture region. At 620, a first user can be detected in a capture region such as, for example, the capture region 300 described above with reference to Fig. 6. At 622, an avatar can be associated with the first user as described above. At 624, the gesture-based computing environment can determine the first user's position within the capture region. This position can be determined using any combination of the systems described above, such as the capture device 20, the computing environment 12, cameras 26 and 27, or any other element used to build a model of the user and determine the user's position within the capture region 300.
At 626, the gesture-based computing environment can determine that a part of the first user is not detected in the capture region. When the system determines that one or more of the user's body parts are outside the capture region, at 628 the appearance of the first avatar can be changed in one or more ways to notify the user that they are not fully detected. For example, if one of the user's arms is outside the capture region of the gesture-based computing environment, the corresponding arm on the avatar can change appearance. This appearance change can take any form, including but not limited to: a change of color, brightness, size, or shape; or the placement of an object, such as a halo, a directional arrow, a numeral, or any other object, on or around the arm. As another example, if the user moves entirely out of the capture region, or moves too close to the capture device, the avatar can change in one or more ways to notify the first user that they are not being correctly detected. In this case, a display can be provided on the display screen to notify the first user of the direction in which they must move. Additionally, one or more aspects of the avatar as described above can change to provide the user with feedback on their undetected state and on their progress toward a detected state.
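A sketch of steps 626 and 628, assuming each tracked body part carries a detection confidence, is given below; the part names, threshold, and style labels are hypothetical.

```python
# Sketch of steps 626/628: flag undetected body parts and re-style the
# corresponding avatar limbs. Names and the confidence test are assumptions.

def flag_undetected(tracked_parts, avatar_style, min_confidence=0.5):
    """Dim avatar limbs whose tracking confidence falls below a threshold."""
    for part, confidence in tracked_parts.items():
        if confidence < min_confidence:
            avatar_style[part] = "dimmed_with_arrow"  # 628: appearance change
        else:
            avatar_style[part] = "normal"
    return avatar_style

tracked = {"left_arm": 0.9, "right_arm": 0.1}  # right arm left the region
print(flag_undetected(tracked, {}))
# {'left_arm': 'normal', 'right_arm': 'dimmed_with_arrow'}
```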
Figure 11 is a flowchart of an embodiment of a method for detecting multiple users, associating an avatar with each user, and providing feedback to each user via that user's avatar. In Figure 11, at 650, a first user can be detected in the capture region, and at 652 a first avatar is associated with him. At 654, a second user can be detected in the capture region, and at 656 a second avatar is associated with the second user. At 658, as described above, feedback about one or more features, rights, and/or privileges of the gesture-based computing environment can be provided to the first user via the first avatar. Similarly, at 660, feedback about one or more features, rights, and/or privileges of the gesture-based computing environment can be provided to the second user via the second avatar.
Figure 12 is a flowchart of an embodiment of a method for providing a user, via the user's avatar, with feedback from the gesture-based computing environment about his motion. In Figure 12, at 670, a first user is detected in the capture region. At 672, a first avatar is associated with the first user. The first user can be tracked and modeled using the methods described above, and at 674 the first user's motion or pose is determined. Based on the motion determined at 674, the first avatar can be modified in one or more ways at 676. For example, if the first user raises their arm, the avatar can raise its arm as well. By watching the avatar, the first user can receive feedback about aspects of the computing environment and the avatar. For example, the user can receive feedback about which arm on their body is associated with which of the avatar's arms. As another example, the user may receive feedback informing them that they need not fully extend their arm in order to make the avatar fully extend its arm.
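The last example suggests that user motion can be mapped onto the avatar with gain, so partial physical extension yields full avatar extension. A minimal sketch under that assumption follows; the 1.5 gain value is chosen arbitrarily for illustration.

```python
# Sketch of steps 674/676: map a tracked arm angle onto the avatar with gain,
# so partial user extension yields full avatar extension (the feedback the
# user receives at the end of Figure 12). The 1.5 gain is an assumption.

def avatar_arm_angle(user_angle_deg, gain=1.5, max_angle=180.0):
    """Scale the user's arm angle onto the avatar, clamped at full extension."""
    return min(user_angle_deg * gain, max_angle)

# A user extending to 120 degrees drives the avatar to full extension,
# teaching them that full physical extension is unnecessary.
print(avatar_arm_angle(120.0))  # 180.0
```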
It should be appreciated that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered limiting. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, the illustrated acts may be performed in the sequence shown, in other sequences, in parallel, and so forth. Likewise, the order of the above-described processes may be changed.
In addition, the subject matter of the present disclosure includes combinations and sub-combinations of the various processes, systems, and configurations, as well as other features, functions, acts, and/or properties disclosed herein, and equivalents thereof.

Claims (15)

1. A method for providing feedback about a computing environment to a user, the method comprising:
using an image-based capture device (20), identifying (601) the presence of a first user (18) in a capture region (300);
associating (603) a first avatar (24) with the first user (18) and displaying the first avatar (24) on a display screen (16);
identifying (605) aspects of the first user (18) in the capture region (300); and
modifying (607) the appearance of the first avatar (24) to provide feedback to the first user (18), the feedback indicating to the first user that the first user is currently entering a gesture that is recognized as a computer command, or that the first user is bound as a specific controller.
2. The method of claim 1, further comprising:
using the image-based capture device (20), identifying (654) the presence of a second user in the capture region (300);
associating (656) a second avatar with the second user and displaying the second avatar on the display screen (16);
identifying aspects of the second user in the capture region (300); and
modifying the appearance of the second avatar to provide the second user with feedback (660) about at least one of an ability, feature, right, or permission of the second user in the computing environment.
3. The method of claim 2, further comprising indicating, by the absence of the first avatar (24) from the display screen (16) and the presence of the second avatar on the display screen (16), that the second user is the active player.
4. the method for claim 1, it is characterized in that, comprise that also one or more body parts of identifying described first user are not detected (Fig. 6) in described capture region, and based on described identification, revise the each side of described first incarnation (24) visually not to be detected to the described one or more body parts of described user (18) indication.
5. the method for claim 1 is characterized in that, revises described first incarnation (24) and is included in described first incarnation (24) upward or places numeral, name or object near described first incarnation (24).
6. the method for claim 1 is characterized in that, in response to from described user's motion and the correspondence between described first incarnation (24) and the described user is indicated in the motion that shows described first incarnation (24).
7. A method for providing feedback about a computing environment to a user, the method comprising:
using an image-based capture device, identifying (601) the presence of a first user (18) in a capture region (300);
associating (603) a first avatar (24) with the first user (18) and displaying the first avatar (24) on a display screen (16);
identifying (605) aspects of the first user (18) in the capture region (300); and
modifying (607) the appearance of the first avatar (24) to provide feedback to the first user (18), the feedback indicating to the first user that the first user is currently entering a gesture that is recognized as a computer command, that the first user is currently being identified, or that the first user is bound as a specific controller.
8. The method of claim 7, further comprising:
using the image-based capture device, identifying (654) the presence of a second user in the capture region (300);
associating (656) a second avatar with the second user and displaying the second avatar on the display screen (16);
identifying (660) aspects of the second user in the capture region (300); and
modifying (660) the appearance of the second avatar to provide the second user with feedback about at least one of an ability, feature, right, or permission of the second user in the computing environment.
9. The method of claim 8, further comprising indicating, by the absence of the first avatar (24) from the display screen (16) and the presence of the second avatar on the display screen (16), that the second user is the active player.
10. The method of claim 7, further comprising identifying that one or more body parts of the first user are not detected in the capture region (626), and modifying aspects of the first avatar based on that identification to visually indicate to the user that the one or more body parts are not detected (628).
11. The method of claim 7, wherein modifying the first avatar (24) comprises changing at least one of the size, color, or brightness of the first avatar (24).
12. The method of claim 7, wherein modifying the first avatar (24) comprises adding or removing a halo around the first avatar (24), adding or removing an underline below the first avatar (24), or adding or removing an arrow or other cue mark near the first avatar (24).
13. The method of claim 7, wherein modifying the first avatar (24) comprises ordering the first avatar (24) in a particular arrangement, such as a row, or placing the first avatar (24) at one or more positions in a particular geometric arrangement, such as a circle.
14. A system for providing feedback about a computing environment to a user (18), the system comprising:
an image-based capture device (20), the image-based capture device (20) comprising a camera component that receives image data of a scene and identifies (650) the presence of a first user (18) in a capture region (300); and
a computing device in operative communication with the image-based capture device (20), the computing device comprising a processor that: associates (652) a first avatar (24) with the first user (18) and displays the first avatar (24) on a display screen (16); identifies aspects of the first user in the capture region; and modifies the appearance of the first avatar to provide feedback (658) to the first user, the feedback indicating to the first user that the first user is currently entering a gesture that is recognized as a computer command, that the first user is currently being identified, or that the first user is bound as a specific controller.
15. The system of claim 14, wherein the processor further: identifies (654) the presence of a second user in the capture region using the image-based capture device (20); associates (656) a second avatar with the second user and displays the second avatar on the display screen (16); identifies aspects of the second user in the capture region (300); and modifies the appearance of the second avatar to provide the second user with feedback (660) about at least one of an ability, feature, right, or permission of the second user in the computing environment.
CN2010800246209A 2009-05-29 2010-05-25 User movement feedback via on-screen avatars Active CN102448560B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US12/475,304 2009-05-29
US12/475,304 US20100306685A1 (en) 2009-05-29 2009-05-29 User movement feedback via on-screen avatars
PCT/US2010/036016 WO2010138477A2 (en) 2009-05-29 2010-05-25 User movement feedback via on-screen avatars

Publications (2)

Publication Number Publication Date
CN102448560A CN102448560A (en) 2012-05-09
CN102448560B true CN102448560B (en) 2013-09-11

Family

ID=43221706

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010800246209A Active CN102448560B (en) 2009-05-29 2010-05-25 User movement feedback via on-screen avatars

Country Status (3)

Country Link
US (2) US20100306685A1 (en)
CN (1) CN102448560B (en)
WO (1) WO2010138477A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104516496A (en) * 2013-10-04 2015-04-15 财团法人工业技术研究院 Multi-person guidance system and method capable of adjusting motion sensing range

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8579720B2 (en) 2008-11-10 2013-11-12 Norman Douglas Bittner Putting stroke training system
US8616993B2 (en) 2008-11-10 2013-12-31 Norman Douglas Bittner Putter path detection and analysis
US10086262B1 (en) 2008-11-12 2018-10-02 David G. Capper Video motion capture for wireless gaming
US9586135B1 (en) * 2008-11-12 2017-03-07 David G. Capper Video motion capture for wireless gaming
FR2948480B1 (en) * 2009-07-24 2012-03-09 Alcatel Lucent IMAGE PROCESSING METHOD, AVATAR DISPLAY ADAPTATION METHOD, IMAGE PROCESSING PROCESSOR, VIRTUAL WORLD SERVER, AND COMMUNICATION TERMINAL
US20110025689A1 (en) * 2009-07-29 2011-02-03 Microsoft Corporation Auto-Generating A Visual Representation
CN102918477B (en) * 2010-04-13 2017-07-21 诺基亚技术有限公司 Device, method, computer program and user interface
US8749557B2 (en) 2010-06-11 2014-06-10 Microsoft Corporation Interacting with user interface via avatar
EP2421251A1 (en) * 2010-08-17 2012-02-22 LG Electronics Display device and control method thereof
EP2421252A1 (en) * 2010-08-17 2012-02-22 LG Electronics Display device and control method thereof
US9304592B2 (en) * 2010-11-12 2016-04-05 At&T Intellectual Property I, L.P. Electronic device control based on gestures
CN102760302A (en) * 2011-04-27 2012-10-31 德信互动科技(北京)有限公司 Role image control device and method
US8788973B2 (en) 2011-05-23 2014-07-22 Microsoft Corporation Three-dimensional gesture controlled avatar configuration interface
US9159152B1 (en) * 2011-07-18 2015-10-13 Motion Reality, Inc. Mapping between a capture volume and a virtual world in a motion capture simulation environment
US9778737B1 (en) * 2011-08-31 2017-10-03 Amazon Technologies, Inc. Game recommendations based on gesture type
US9628843B2 (en) * 2011-11-21 2017-04-18 Microsoft Technology Licensing, Llc Methods for controlling electronic devices using gestures
US9051127B2 (en) * 2012-04-03 2015-06-09 Scott Conroy Grain auger protection system
US9210401B2 (en) 2012-05-03 2015-12-08 Microsoft Technology Licensing, Llc Projected visual cues for guiding physical movement
US8814683B2 (en) 2013-01-22 2014-08-26 Wms Gaming Inc. Gaming system and methods adapted to utilize recorded player gestures
US20140223326A1 (en) * 2013-02-06 2014-08-07 International Business Machines Corporation Apparatus and methods for co-located social integration and interactions
EP3068301A4 (en) 2013-11-12 2017-07-12 Highland Instruments, Inc. Analysis suite
US9462878B1 (en) 2014-02-20 2016-10-11 Appcessories Llc Self-contained, interactive gaming oral brush
GB2524993A (en) * 2014-04-08 2015-10-14 China Ind Ltd Interactive combat gaming system
KR102214194B1 (en) * 2014-08-19 2021-02-09 삼성전자 주식회사 A display device having rf sensor and method for detecting a user of the display device
WO2016045010A1 (en) * 2014-09-24 2016-03-31 Intel Corporation Facial gesture driven animation communication system
US10218882B2 (en) 2015-12-31 2019-02-26 Microsoft Technology Licensing, Llc Feedback for object pose tracker
US10771508B2 (en) 2016-01-19 2020-09-08 Nadejda Sarmova Systems and methods for establishing a virtual shared experience for media playback
EP3783461A1 (en) * 2017-08-22 2021-02-24 ameria AG User readiness for touchless gesture-controlled display systems
US10653957B2 (en) 2017-12-06 2020-05-19 Universal City Studios Llc Interactive video game system
JP7135472B2 (en) * 2018-06-11 2022-09-13 カシオ計算機株式会社 Display control device, display control method and display control program
WO2020124046A2 (en) * 2018-12-14 2020-06-18 Vulcan Inc. Virtual and physical reality integration
US20240096033A1 (en) * 2021-10-11 2024-03-21 Meta Platforms Technologies, Llc Technology for creating, replicating and/or controlling avatars in extended reality

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1764931A (en) * 2003-02-11 2006-04-26 索尼电脑娱乐公司 Method and apparatus for real time motion capture

Family Cites Families (120)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5454043A (en) * 1993-07-30 1995-09-26 Mitsubishi Electric Research Laboratories, Inc. Dynamic and static hand gesture recognition through low-level image analysis
US5347306A (en) * 1993-12-17 1994-09-13 Mitsubishi Electric Research Laboratories, Inc. Animated electronic meeting place
US5913727A (en) * 1995-06-02 1999-06-22 Ahdoot; Ned Interactive movement and contact simulation game
IL114278A (en) * 1995-06-22 2010-06-16 Microsoft Internat Holdings B Camera and method
CN1253636A (en) * 1995-06-22 2000-05-17 3Dv系统有限公司 Telecentric stop 3-D camera and its method
US6430997B1 (en) * 1995-11-06 2002-08-13 Trazer Technologies, Inc. System and method for tracking and assessing movement skills in multidimensional space
US5880731A (en) * 1995-12-14 1999-03-09 Microsoft Corporation Use of avatars with automatic gesturing and bounded interaction in on-line chat session
US6151009A (en) * 1996-08-21 2000-11-21 Carnegie Mellon University Method and apparatus for merging real and synthetic images
NL1004648C2 (en) * 1996-11-11 1998-05-14 Johan Michiel Schaaij Computer game system.
US6075895A (en) * 1997-06-20 2000-06-13 Holoplex Methods and apparatus for gesture recognition based on templates
US6072494A (en) * 1997-10-15 2000-06-06 Electric Planet, Inc. Method and apparatus for real-time gesture recognition
US6031934A (en) * 1997-10-15 2000-02-29 Electric Planet, Inc. Computer vision system for subject characterization
JPH11154240A (en) * 1997-11-20 1999-06-08 Nintendo Co Ltd Image producing device to produce image by using fetched image
JPH11195138A (en) * 1998-01-06 1999-07-21 Sharp Corp Picture processor
US6115052A (en) * 1998-02-12 2000-09-05 Mitsubishi Electric Information Technology Center America, Inc. (Ita) System for reconstructing the 3-dimensional motions of a human figure from a monocularly-viewed image sequence
US6950534B2 (en) * 1998-08-10 2005-09-27 Cybernet Systems Corporation Gesture-controlled interfaces for self-service machines and other applications
US6501515B1 (en) * 1998-10-13 2002-12-31 Sony Corporation Remote control system
US6570555B1 (en) * 1998-12-30 2003-05-27 Fuji Xerox Co., Ltd. Method and apparatus for embodied conversational characters with multimodal input/output in an interface device
EP1214609B1 (en) * 1999-09-08 2004-12-15 3DV Systems Ltd. 3d imaging system
US6512838B1 (en) * 1999-09-22 2003-01-28 Canesta, Inc. Methods for enhancing performance and data acquired from three-dimensional image systems
US7050177B2 (en) * 2002-05-22 2006-05-23 Canesta, Inc. Method and apparatus for approximating depth of an object's placement onto a monitored region with applications to virtual interface devices
US7006236B2 (en) * 2002-05-22 2006-02-28 Canesta, Inc. Method and apparatus for approximating depth of an object's placement onto a monitored region with applications to virtual interface devices
DE19960180B4 (en) * 1999-12-14 2006-03-09 Rheinmetall W & M Gmbh Method for producing an explosive projectile
US6674877B1 (en) * 2000-02-03 2004-01-06 Microsoft Corporation System and method for visually tracking occluded objects in real time
WO2001061519A1 (en) * 2000-02-15 2001-08-23 Sorceron, Inc. Method and system for distributing captured motion data over a network
US6663491B2 (en) * 2000-02-18 2003-12-16 Namco Ltd. Game apparatus, storage medium and computer program that adjust tempo of sound
JP4441979B2 (en) * 2000-04-28 2010-03-31 ソニー株式会社 Information processing apparatus and method, and recording medium
US6784901B1 (en) * 2000-05-09 2004-08-31 There Method, system and computer program product for the delivery of a chat message in a 3D multi-user environment
US20020008716A1 (en) * 2000-07-21 2002-01-24 Colburn Robert A. System and method for controlling expression characteristics of a virtual agent
US7227526B2 (en) * 2000-07-24 2007-06-05 Gesturetek, Inc. Video-based image control system
US20050206610A1 (en) * 2000-09-29 2005-09-22 Gary Gerard Cordelli Computer-"reflected" (avatar) mirror
US7058204B2 (en) * 2000-10-03 2006-06-06 Gesturetek, Inc. Multiple camera control system
JP3725460B2 (en) * 2000-10-06 2005-12-14 株式会社ソニー・コンピュータエンタテインメント Image processing apparatus, image processing method, recording medium, computer program, semiconductor device
US20030018719A1 (en) * 2000-12-27 2003-01-23 Ruths Derek Augustus Samuel Data-centric collaborative computing platform
US8939831B2 (en) * 2001-03-08 2015-01-27 Brian M. Dugan Systems and methods for improving fitness equipment and exercise
US6539931B2 (en) * 2001-04-16 2003-04-01 Koninklijke Philips Electronics N.V. Ball throwing assistant
AU2003217587A1 (en) * 2002-02-15 2003-09-09 Canesta, Inc. Gesture recognition system using depth perceptive sensors
US7883415B2 (en) * 2003-09-15 2011-02-08 Sony Computer Entertainment Inc. Method and apparatus for adjusting a view of a scene being displayed according to tracked head motion
US7623115B2 (en) * 2002-07-27 2009-11-24 Sony Computer Entertainment Inc. Method and apparatus for light input device
US8125459B2 (en) * 2007-10-01 2012-02-28 Igt Multi-user input systems and processing techniques for serving multiple users
US7151530B2 (en) * 2002-08-20 2006-12-19 Canesta, Inc. System and method for determining an input selected by a user through a virtual interface
US7225414B1 (en) * 2002-09-10 2007-05-29 Videomining Corporation Method and system for virtual touch entertainment
US20040063480A1 (en) * 2002-09-30 2004-04-01 Xiaoling Wang Apparatus and a method for more realistic interactive video games on computers or similar devices
US7386799B1 (en) * 2002-11-21 2008-06-10 Forterra Systems, Inc. Cinematic techniques in avatar-centric communication during a multi-user online simulation
GB2398691B (en) * 2003-02-21 2006-05-31 Sony Comp Entertainment Europe Control of data processing
EP1627294A2 (en) * 2003-05-01 2006-02-22 Delta Dansk Elektronik, Lys & Akustik A man-machine interface based on 3-d positions of the human body
WO2004107266A1 (en) * 2003-05-29 2004-12-09 Honda Motor Co., Ltd. Visual tracking using depth data
US7874917B2 (en) * 2003-09-15 2011-01-25 Sony Computer Entertainment Inc. Methods and systems for enabling depth and direction detection when interfacing with a computer program
US8323106B2 (en) * 2008-05-30 2012-12-04 Sony Computer Entertainment America Llc Determination of controller three-dimensional location using image analysis and ultrasonic communication
US7755608B2 (en) * 2004-01-23 2010-07-13 Hewlett-Packard Development Company, L.P. Systems and methods of interfacing with a machine
US20050215319A1 (en) * 2004-03-23 2005-09-29 Harmonix Music Systems, Inc. Method and apparatus for controlling a three-dimensional character in a three-dimensional gaming environment
JP4708422B2 (en) * 2004-04-15 2011-06-22 ジェスチャー テック,インコーポレイテッド Tracking of two-hand movement
US20050245317A1 (en) * 2004-04-30 2005-11-03 Microsoft Corporation Voice chat in game console application
US7634533B2 (en) * 2004-04-30 2009-12-15 Microsoft Corporation Systems and methods for real-time audio-visual communication and data collaboration in a network conference environment
US20060015560A1 (en) * 2004-05-11 2006-01-19 Microsoft Corporation Multi-sensory emoticons in a communication system
US7704135B2 (en) * 2004-08-23 2010-04-27 Harrison Jr Shelton E Integrated game system, method, and device
WO2006025137A1 (en) * 2004-09-01 2006-03-09 Sony Computer Entertainment Inc. Image processor, game machine, and image processing method
EP1645944B1 (en) * 2004-10-05 2012-08-15 Sony France S.A. A content-management interface
JP4449723B2 (en) * 2004-12-08 2010-04-14 ソニー株式会社 Image processing apparatus, image processing method, and program
US8369795B2 (en) * 2005-01-12 2013-02-05 Microsoft Corporation Game console notification system
US8009871B2 (en) * 2005-02-08 2011-08-30 Microsoft Corporation Method and system to segment depth images and to detect shapes in three-dimensionally acquired data
US20060205518A1 (en) * 2005-03-08 2006-09-14 Microsoft Corporation Systems and methods for providing system level notifications in a multimedia console
KR100688743B1 (en) * 2005-03-11 2007-03-02 삼성전기주식회사 Manufacturing method of PCB having multilayer embedded passive-chips
US7317836B2 (en) * 2005-03-17 2008-01-08 Honda Motor Co., Ltd. Pose estimation based on critical point analysis
US7664571B2 (en) * 2005-04-18 2010-02-16 Honda Motor Co., Ltd. Controlling a robot using pose
US20070021207A1 (en) * 2005-07-25 2007-01-25 Ned Ahdoot Interactive combat game between a real player and a projected image of a computer generated player or a real player with a predictive method
GB2431717A (en) * 2005-10-31 2007-05-02 Sony Uk Ltd Scene analysis
US20070111796A1 (en) * 2005-11-16 2007-05-17 Microsoft Corporation Association of peripherals communicatively attached to a console device
JP4917615B2 (en) * 2006-02-27 2012-04-18 プライム センス リミティド Range mapping using uncorrelated speckle
US20070245881A1 (en) * 2006-04-04 2007-10-25 Eran Egozy Method and apparatus for providing a simulated band experience including online interaction
EP2016562A4 (en) * 2006-05-07 2010-01-06 Sony Computer Entertainment Inc Method for providing affective characteristics to computer generated avatar during gameplay
US8223186B2 (en) * 2006-05-31 2012-07-17 Hewlett-Packard Development Company, L.P. User interface for a video teleconference
EP2584494A3 (en) * 2006-08-03 2015-02-11 Alterface S.A. Method and device for identifying and extracting images of multiple users, and for recognizing user gestures
US8395658B2 (en) * 2006-09-07 2013-03-12 Sony Computer Entertainment Inc. Touch screen-like user interface that does not require actual touching
US8131011B2 (en) * 2006-09-25 2012-03-06 University Of Southern California Human detection and tracking system
US8683386B2 (en) * 2006-10-03 2014-03-25 Brian Mark Shuster Virtual environment for computer game
US7634540B2 (en) * 2006-10-12 2009-12-15 Seiko Epson Corporation Presenter view control system and method
JP5294554B2 (en) * 2006-11-16 2013-09-18 任天堂株式会社 GAME PROGRAM, GAME DEVICE, GAME SYSTEM, AND GAME PROCESSING METHOD
US20080134102A1 (en) * 2006-12-05 2008-06-05 Sony Ericsson Mobile Communications Ab Method and system for detecting movement of an object
US8351646B2 (en) * 2006-12-21 2013-01-08 Honda Motor Co., Ltd. Human pose estimation and tracking using label assignment
US9569876B2 (en) * 2006-12-21 2017-02-14 Brian Mark Shuster Animation control method for multiple participants
JP2010517427A (en) * 2007-01-23 2010-05-20 ユークリッド・ディスカバリーズ・エルエルシー System and method for providing personalized video services
JP5226960B2 (en) * 2007-02-28 2013-07-03 株式会社スクウェア・エニックス GAME DEVICE, VIRTUAL CAMERA CONTROL METHOD, PROGRAM, AND RECORDING MEDIUM
US20080215975A1 (en) * 2007-03-01 2008-09-04 Phil Harrison Virtual world user opinion & response monitoring
GB0703974D0 (en) * 2007-03-01 2007-04-11 Sony Comp Entertainment Europe Entertainment device
US20080250315A1 (en) * 2007-04-09 2008-10-09 Nokia Corporation Graphical representation for accessing and representing media files
WO2008134745A1 (en) * 2007-04-30 2008-11-06 Gesturetek, Inc. Mobile video-based therapy
US9317110B2 (en) * 2007-05-29 2016-04-19 Cfph, Llc Game with hand motion control
GB2450757A (en) * 2007-07-06 2009-01-07 Sony Comp Entertainment Europe Avatar customisation, transmission and reception
US8726194B2 (en) * 2007-07-27 2014-05-13 Qualcomm Incorporated Item selection using enhanced control
CN101836207B (en) * 2007-08-20 2017-03-01 高通股份有限公司 Enhanced refusal beyond the word of vocabulary
US9111285B2 (en) * 2007-08-27 2015-08-18 Qurio Holdings, Inc. System and method for representing content, user presence and interaction within virtual world advertising environments
JP5430572B2 (en) * 2007-09-14 2014-03-05 インテレクチュアル ベンチャーズ ホールディング 67 エルエルシー Gesture-based user interaction processing
US8325214B2 (en) * 2007-09-24 2012-12-04 Qualcomm Incorporated Enhanced interface for voice and video communications
US7970176B2 (en) * 2007-10-02 2011-06-28 Omek Interactive, Inc. Method and system for gesture classification
US8049756B2 (en) * 2007-10-30 2011-11-01 Brian Mark Shuster Time-dependent client inactivity indicia in a multi-user animation environment
CN101925916B (en) * 2007-11-21 2013-06-19 高通股份有限公司 Method and system for controlling electronic device based on media preferences
WO2009067676A1 (en) * 2007-11-21 2009-05-28 Gesturetek, Inc. Device access control
US20090221368A1 (en) * 2007-11-28 2009-09-03 Ailive Inc., Method and system for creating a shared game space for a networked game
GB2455316B (en) * 2007-12-04 2012-08-15 Sony Corp Image processing apparatus and method
US8149210B2 (en) * 2007-12-31 2012-04-03 Microsoft International Holdings B.V. Pointing device and method
US8555207B2 (en) * 2008-02-27 2013-10-08 Qualcomm Incorporated Enhanced input using recognized gestures
US8368753B2 (en) * 2008-03-17 2013-02-05 Sony Computer Entertainment America Llc Controller with an integrated depth camera
US20090259937A1 (en) * 2008-04-11 2009-10-15 Rohall Steven L Brainstorming Tool in a 3D Virtual Environment
US20110107239A1 (en) * 2008-05-01 2011-05-05 Uri Adoni Device, system and method of interactive game
US8864652B2 (en) * 2008-06-27 2014-10-21 Intuitive Surgical Operations, Inc. Medical robotic system providing computer generated auxiliary views of a camera instrument for controlling the positioning and orienting of its tip
CN102165396B (en) * 2008-07-25 2014-10-29 高通股份有限公司 Enhanced detection of waving engagement gesture
WO2010019925A1 (en) * 2008-08-15 2010-02-18 Brown Technology Partnerships Method and apparatus for estimating body shape
NO333026B1 (en) * 2008-09-17 2013-02-18 Cisco Systems Int Sarl Control system for a local telepresence video conferencing system and method for establishing a video conferencing call.
US8176421B2 (en) * 2008-09-26 2012-05-08 International Business Machines Corporation Virtual universe supervisory presence
US8648865B2 (en) * 2008-09-26 2014-02-11 International Business Machines Corporation Variable rendering of virtual universe avatars
US8108774B2 (en) * 2008-09-26 2012-01-31 International Business Machines Corporation Avatar appearance transformation in a virtual universe
US9399167B2 (en) * 2008-10-14 2016-07-26 Microsoft Technology Licensing, Llc Virtual space mapping of a variable activity region
US20100153858A1 (en) * 2008-12-11 2010-06-17 Paul Gausman Uniform virtual environments
US20100169796A1 (en) * 2008-12-28 2010-07-01 Nortel Networks Limited Visual Indication of Audio Context in a Computer-Generated Virtual Environment
US8584026B2 (en) * 2008-12-29 2013-11-12 Avaya Inc. User interface for orienting new users to a three dimensional computer-generated virtual environment
US9176579B2 (en) * 2008-12-29 2015-11-03 Avaya Inc. Visual indication of user interests in a computer-generated virtual environment
US20100169799A1 (en) * 2008-12-30 2010-07-01 Nortel Networks Limited Method and Apparatus for Enabling Presentations to Large Numbers of Users in a Virtual Environment
US9142024B2 (en) * 2008-12-31 2015-09-22 Lucasfilm Entertainment Company Ltd. Visual and physical motion sensing for three-dimensional motion capture
US8161398B2 (en) * 2009-05-08 2012-04-17 International Business Machines Corporation Assistive group setting management in a virtual world

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1764931A (en) * 2003-02-11 2006-04-26 索尼电脑娱乐公司 Method and apparatus for real time motion capture

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104516496A (en) * 2013-10-04 2015-04-15 财团法人工业技术研究院 Multi-person guidance system and method capable of adjusting motion sensing range
CN104516496B (en) * 2013-10-04 2017-11-03 财团法人工业技术研究院 Multi-person guidance system and method capable of adjusting motion sensing range

Also Published As

Publication number Publication date
CN102448560A (en) 2012-05-09
US20100306685A1 (en) 2010-12-02
WO2010138477A3 (en) 2011-02-24
US20170095738A1 (en) 2017-04-06
WO2010138477A2 (en) 2010-12-02

Similar Documents

Publication Publication Date Title
CN102448560B (en) User movement feedback via on-screen avatars
CN102413887B (en) Managing virtual ports
CN102413885B (en) Systems and methods for applying model tracking to motion capture
CN102947777B (en) Usertracking feeds back
CN102129293B (en) Tracking groups of users in motion capture system
CN102414641B (en) Altering view perspective within display environment
CN102596340B (en) Systems and methods for applying animations or motions to a character
CN102448562B (en) Systems and methods for tracking a model
CN102448561B (en) Gesture coach
CN102665838B (en) Methods and systems for determining and tracking extremities of a target
EP2524350B1 (en) Recognizing user intent in motion capture system
KR101643020B1 (en) Chaining animations
CN102576466B (en) For the system and method for trace model
CN102301398B (en) Device, method and system for catching depth information of scene
CN102448563B (en) Method and device for processing depth information of scene
CN102207771A (en) Intention deduction of users participating in motion capture system
CN102221883A (en) Active calibration of natural user interface

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: MICROSOFT TECHNOLOGY LICENSING LLC

Free format text: FORMER OWNER: MICROSOFT CORP.

Effective date: 20150506

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20150506

Address after: Washington State

Patentee after: Microsoft Technology Licensing, LLC

Address before: Washington State

Patentee before: Microsoft Corp.