EP3206765A1 - Bildbasierte bestimmung der verteilung von bodengewicht - Google Patents

Bildbasierte Bestimmung der Verteilung von Bodengewicht (Image based ground weight distribution determination)

Info

Publication number
EP3206765A1
EP3206765A1 (application EP15787108.8A)
Authority
EP
European Patent Office
Prior art keywords
determining
user
mass
center
contact
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP15787108.8A
Other languages
English (en)
French (fr)
Inventor
Jonathan HOOF
Daniel KENNETT
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Technology Licensing LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Technology Licensing LLC filed Critical Microsoft Technology Licensing LLC
Publication of EP3206765A1 (patent/EP3206765A1/de)
Legal status: Withdrawn (current)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/428Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/21Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/213Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/002Specific input/output arrangements not covered by G06F3/01 - G06F3/16
    • G06F3/005Input arrangements through a video camera
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/66Analysis of geometric attributes of image moments or centre of gravity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/42Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/23Recognition of whole body movements, e.g. for sport training
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/23Input arrangements for video game devices for interfacing with the game device, e.g. specific interfaces between game controller and console
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/45Controlling the progress of the video game
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80Special adaptations for executing a specific game genre or game mode
    • A63F13/833Hand-to-hand fighting, e.g. martial arts competition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person

Definitions

  • cameras have been used to allow users to manipulate game characters or other aspects of an application without the need for conventional handheld game controllers.
  • computing systems have been adapted to identify users captured by cameras, and to detect motion or other behaviors of the users, i.e., providing virtual ports to the system.
  • a sequence of images may be processed to interpret movements in a target recognition, analysis, and tracking system.
  • the system may determine the contour of a targeted user from an image or sequence of images, and determine points of contact between the user and the environment, e.g. , the points where a user is touching the floor or other fixtures or objects. From the contour, the center of mass of the user may be estimated, and various aspects, such as acceleration, motion, and/or balance of the center of mass may be tracked.
  • This method may be implemented in a variety of computing environments as a series of computations using an image or sequence of images, whereby the contour of the targeted user, points of contact, center of mass, and balance, acceleration, and/or movement of the center of mass are computed. Further, the methods may be encapsulated on machine-readable media as a set of instructions which may be stored in memory of a computer/computing environment and, when executed, enable the computer/computing environment to effectuate the method.
  • the forces acting on the center of mass may be inferred, without regard to any knowledge of the user's skeletal structure or relative position of limbs, for instance. This may aid in the construction of an accurate avatar representation of the user and the user's actions on a display, as well as accurate kinetic analysis. The accuracy further may be enhanced by foreknowledge of the user's intended movements and/or an additional skeletal tracking of the user.
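  • As an informal illustration of the processing flow just described (not part of the patent disclosure), the following Python sketch derives a contour, a center of mass, and floor contact points from a binary user mask, and tracks the center of mass across frames. All names, the 4-neighbour contour test, and the floor-row contact rule are illustrative assumptions.

```python
# Hypothetical per-frame pipeline sketch; names and heuristics are illustrative only.
from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class FrameAnalysis:
    contour: np.ndarray          # (N, 2) outline pixels of the targeted user, as (x, y)
    center_of_mass: np.ndarray   # (2,) centroid of the silhouette
    contacts: List[np.ndarray]   # outline pixels lying on the floor


def analyze_frame(user_mask: np.ndarray, floor_row: int) -> FrameAnalysis:
    """Derive contour, center of mass, and floor contacts from a binary user mask."""
    mask = user_mask.astype(bool)
    ys, xs = np.nonzero(mask)
    center = np.array([xs.mean(), ys.mean()])            # centroid of the silhouette pixels
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])     # pixels with 4 user neighbours
    contour = np.argwhere(mask & ~interior)[:, ::-1]      # boundary pixels as (x, y)
    contacts = [p for p in contour if abs(int(p[1]) - floor_row) <= 1]
    return FrameAnalysis(contour, center, contacts)


def track_center_of_mass(frames: List[FrameAnalysis], dt: float) -> np.ndarray:
    """Finite-difference velocity of the center of mass across a frame sequence."""
    coms = np.stack([f.center_of_mass for f in frames])
    return np.diff(coms, axis=0) / dt
```

The later paragraphs on mass estimation, contact detection, and force computation refine each of these steps.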
  • FIG. 1 is an example perspective drawing of a user playing a gesture- based game using a gaming console, television, and image capture device.
  • FIG. 2 is an example system diagram of a user holding an object in an environment with multiple fixtures, along with a computing system, a display, and an image capture device.
  • FIG. 3 illustrates an example system block diagram of a gaming console computing environment.
  • FIG. 4 is an example system block diagram of a personal computer.
  • FIG. 5 is an example system block diagram of a handheld wireless device such as a cellular telephone handset.
  • FIG. 6 is an example two-dimensional representation of information derived from a sequence of images of a user.
  • FIG. 1 shows an example of a motion sensing and analysis system in the case of a user playing a gesture-based game using a gaming console, television, and image capture device.
  • System 10 may be used to bind, recognize, analyze, track, associate to a human target, provide feedback, and/or adapt to aspects of the human target such as the user 18.
  • the system 10 may include a computing environment 12.
  • the computing environment 12 may be a computer, a gaming system or console, smart phone, or the like.
  • System 10 may further include a capture device 20.
  • the capture device 20 may be, for example, a detector that may be used to monitor one or more users, such as the user 18, such that gestures performed by the one or more users may be captured, analyzed, and tracked to perform one or more controls or actions within an application, as will be described in more detail below.
  • Capture device 20 may be of any conventional form. It may be a single lens digital camera capturing two-dimensional optical images in the visual, infrared (IR), ultraviolet, or other spectrum. It may be a dual-lens stereoscopic device, for instance.
  • the capture device 20 may be a radar, sonar, infrared, or other scanning device capable of generating depth maps of the observed scene.
  • the capture device 20 may also be a composite device providing a mixture of color, brightness, thermal, depth, and other information in one or more image outputs, and may comprise multiple scanning and/or camera elements.
  • System 10 may include an audiovisual device 16 such as a television, a monitor, a high-definition television (HDTV), or the like that may provide feedback about virtual ports and binding, game or application visuals and/or audio to the user 18.
  • the computing environment 12 may include a video adapter such as a graphics card and/or an audio adapter such as a sound card that may provide audiovisual signals associated with the feedback about virtual ports and binding, game application, non-game application, or the like.
  • the audiovisual device 16 may receive the audiovisual signals from the computing environment 12 and may then output the game or application visuals and/or audio associated with the audiovisual signals to the user 18.
  • Audiovisual device 16 may be connected to the computing environment 12 via, for example, an S-Video cable, a coaxial cable, an HDMI cable, a DVI cable, a VGA cable, a wireless connection or the like.
  • System 10 may be used to recognize, analyze, and/or track a human target such as the user 18.
  • the user 18 may be tracked using the capture device 20 such that the position, movements and size of user 18 may be interpreted as controls that may be used to affect the application being executed by computer environment 12.
  • the user 18 may move his or her body to control the application.
  • system 10 may provide feedback about this unbound/non-detection state of system 10.
  • the feedback state may change from a state of unbound/non-detection to a feedback state of unbound/detecting.
  • System 10 may then bind to the user 18, which may change the feedback state from unbound/detecting to bound.
  • Once the user 18 is bound to the computing environment 12, he may make a gesture which will turn the rest of system 10 on.
  • the user 18 may also make a second gesture which will enter him into association with a virtual port.
  • the feedback state may change such that a user 18 knows he is associated with the virtual port.
  • the user 18 may then provide a series of gestures to control system 10. For example, if the user 18 seeks to open one or more menus, or seeks to pause one or more processes of system 10, he may make a pause or menu gesture. After finishing with the computing session, the user may make an exit gesture, which may cause system 10 to disassociate the user 18 with the virtual port. This may cause the feedback state to change from the state of associated with a virtual port to the state of bound/detected. The user 18 may then move out of the range of the sensors, which may cause the feedback state to change from bound/detected to non-detection. If a system 10 unbinds from the user 18, the feedback state may change to an unbound state.
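  • For illustration only, the binding and association sequence described above can be viewed as a small state machine. The following Python sketch is a hypothetical rendering of those feedback states and transitions; the state and event names are assumptions, not terms from the patent.

```python
# Illustrative feedback-state machine; names are assumptions, not from the patent.
from enum import Enum, auto


class FeedbackState(Enum):
    UNBOUND_NOT_DETECTED = auto()
    UNBOUND_DETECTING = auto()
    BOUND_DETECTED = auto()
    ASSOCIATED_WITH_PORT = auto()


TRANSITIONS = {
    (FeedbackState.UNBOUND_NOT_DETECTED, "user_enters_capture_area"): FeedbackState.UNBOUND_DETECTING,
    (FeedbackState.UNBOUND_DETECTING, "bind"): FeedbackState.BOUND_DETECTED,
    (FeedbackState.BOUND_DETECTED, "association_gesture"): FeedbackState.ASSOCIATED_WITH_PORT,
    (FeedbackState.ASSOCIATED_WITH_PORT, "exit_gesture"): FeedbackState.BOUND_DETECTED,
    (FeedbackState.BOUND_DETECTED, "user_leaves_capture_area"): FeedbackState.UNBOUND_NOT_DETECTED,
}


def next_state(state: FeedbackState, event: str) -> FeedbackState:
    """Return the new feedback state, or remain in place for unrecognized events."""
    return TRANSITIONS.get((state, event), state)
```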
  • the application executing on the computing environment 12 may be, as depicted in FIG. 1, a boxing game that the user 18 may be playing.
  • the computing environment 12 may use the audiovisual device 16 to provide a visual representation of a boxing opponent 22 to the user 18.
  • the computing environment 12 may also use the audiovisual device 16 to provide a visual representation of a user avatar 24 that the user 18 may control with his or her movements on a screen 14.
  • the user 18 may throw a punch in physical space to cause the user avatar 24 to throw a punch in game space.
  • the computer environment 12 and the capture device 20 of system 10 may be used to recognize and analyze the punch of the user 18 in physical space such that the punch may be interpreted as a game control of the user avatar 24 in game space.
  • the computing environment 12 would normally include a conventional general-purpose digital processor of the von Neumann architecture executing software or firmware instructions, or equivalent devices implemented via digital field-programmable gate-array (FPGA) logic devices, application-specific integrated circuit (ASIC) devices, or any equivalent device or combinations thereof. Processing may be done locally, or alternatively some or all of the image processing and avatar generation work may be done at a remote location, not depicted. Hence the system shown could, to name but a few configurations, be implemented using: the camera, processor, memory, and display of a single smart cell phone; a specialty sensor and console of a gaming system connected to a television; or using an image sensor, computing facility, and display, each located at a separate facility.
  • Computing environment 12 may include hardware components and/or software components such that they may be used to execute applications such as gaming applications, non-gaming applications, or the like.
  • the memory may comprise a storage medium having a concrete, tangible, physical structure. As is known, a signal does not have a concrete, tangible, physical structure. Memory, as well as any computer-readable storage medium described herein, is not to be construed as a signal. The memory, as well as any computer-readable storage medium described herein, is not to be construed as a transient signal. The memory, as well as any computer-readable storage medium described herein, is not to be construed as a propagating signal. The memory, as well as any computer-readable storage medium described herein, is to be construed as an article of manufacture.
  • the user 18 may be associated with a virtual port in computing environment 12. Feedback of the state of the virtual port may be given to the user 18 in the form of a sound or display on audiovisual device 16, a display such as an LED or light bulb, or a speaker on the computing environment 12, or any other means of providing feedback to the user.
  • the feedback may be used to inform the user 18 when he is in a capture area of the capture device 20, if he is bound to system 10, what virtual port he is associated with, and when he has control over an avatar such as avatar 24. Gestures by user 18 may change the state of system 10, and thus the feedback that the user 18 receives from system 10.
  • Other movements by the user 18 may also be interpreted as other controls or actions, such as controls to bob, weave, shuffle, block, jab, or throw a variety of different power punches.
  • some movements may be interpreted as controls that may correspond to actions other than controlling the user avatar 24.
  • the user 18 may use movements to enter, exit, turn system on or off, pause, volunteer, switch virtual ports, save a game, select a level, profile or menu, view high scores, communicate with a friend, etc.
  • a full range of motion of the user 18 may be available, used, and analyzed in any suitable manner to interact with an application.
  • FIG. 2 is a system diagram of a system 50, which is similar to system 10 of FIG. 1.
  • user 18 is holding an object 21 (e.g., a tennis racket) in an environment with multiple fixtures.
  • System 50 includes audiovisual device 16 with screen 14 on which the avatar 24 of user 18 is depicted.
  • Avatar 24 is created by computing environment 12 via analysis of a sequence of images provided by capture device 20.
  • User 18 may move his center of mass by impressing force upon any of the fixtures, e.g., by shifting weight from foot to foot on floor 30.
  • a fixture may be any relatively stable object capable of bearing a significant portion of the user's weight.
  • the fixtures might include permanent fixtures such as a floor 30, a ballet limber bar 32, a chin-up bar handle 34, and a wall or door frame 36.
  • a fixture could also be a moveable fixture such as a chair or table, or even a box.
  • a fixture may also be a piece of exercise gear, such as a step platform, a bench, or even an exercise ball, for example.
  • a fixture could be an object moved or operated by the user in the course of the user's locomotion, such as a cane, crutch, walker, or wheelchair, for example.
  • screen 14 shows a ball 23 which does not exist in the physical environment of user 18.
  • computing environment 12 may track the motions both of the user 18 and the object 21 he wields, to allow user 18 to control what happens in the virtual world depicted on screen 14.
  • User 18 may interact with the image onscreen by making motion which changes the relative position of his avatar 24 and ball 23.
  • FIG. 3 illustrates a multimedia console 100 that may be used as the computing environment 12 described above with respect to FIGs. 1 and 2 to, e.g., interpret movements in a target recognition, analysis, and tracking system.
  • the multimedia console 100 has a central processing unit (CPU) 101 including a processor core having a level 1 cache 102, a level 2 cache 104, and a flash ROM (Read Only Memory) 106.
  • the level 1 cache 102 and a level 2 cache 104 temporarily store data and hence reduce the number of memory access cycles, thereby improving processing speed and throughput.
  • the CPU 101 may be provided having more than one core, and thus, additional level 1 and level 2 caches 102 and 104.
  • the flash ROM 106 may store executable code that is loaded during an initial phase of a boot process when the multimedia console 100 is powered ON.
  • a graphics processing unit (GPU) 108 and a video encoder/video codec (coder/decoder) 114 form a video processing pipeline for high speed and high resolution graphics processing. Data is carried from the graphics processing unit 108 to the video encoder/video codec 114 via a bus. The video processing pipeline outputs data to an A/V (audio/video) port 140 for transmission to a television or other display.
  • a memory controller 110 is connected to the GPU 108 to facilitate processor access to various types of memory 112, such as, but not limited to, a RAM (Random Access Memory).
  • the multimedia console 100 includes an I/O controller 120, a system management controller 122, an audio processing unit 123, a network interface controller 124, a first USB host controller 126, a second USB controller 128 and a front panel I/O subassembly 130 that are preferably implemented on a module 118.
  • the USB controllers 126 and 128 serve as hosts for peripheral controllers 142(1)-142(2), a wireless adapter 148, and an external memory device 146 (e.g., flash memory, external CD/DVD ROM drive, removable media, etc.).
  • the network interface 124 and/or wireless adapter 148 provide access to a network (e.g., the Internet, home network, etc.) and may be any of a wide variety of various wired or wireless adapter components including an Ethernet card, a modem, a Bluetooth module, a cable modem, and the like.
  • System memory 143 is provided to store application data that is loaded during the boot process.
  • a media drive 144 is provided and may comprise a DVD/CD drive, hard drive, or other removable media drive, etc.
  • the media drive 144 may be internal or external to the multimedia console 100.
  • Application data may be accessed via the media drive 144 for execution, playback, etc. by the multimedia console 100.
  • the media drive 144 is connected to the I/O controller 120 via a bus, such as a Serial ATA bus or other high speed connection (e.g., IEEE 1394).
  • the system management controller 122 provides a variety of service functions related to assuring availability of the multimedia console 100.
  • the audio processing unit 123 and an audio codec 132 form a corresponding audio processing pipeline with high fidelity and stereo processing. Audio data is carried between the audio processing unit 123 and the audio codec 132 via a communication link.
  • the audio processing pipeline outputs data to the A/V port 140 for reproduction by an external audio player or device having audio capabilities.
  • the front panel I/O subassembly 130 supports the functionality of the power button 150 and the eject button 152, as well as any LEDs (light emitting diodes) or other indicators exposed on the outer surface of the multimedia console 100.
  • a system power supply module 136 provides power to the components of the multimedia console 100.
  • a fan 138 cools the circuitry within the multimedia console 100.
  • the front panel I/O subassembly 130 may include LEDs, a visual display screen, light bulbs, a speaker or any other means that may provide audio or visual feedback of the state of control of the multimedia console 100 to a user 18. For example, if the system is in a state where no users are detected by capture device 20, such a state may be reflected on front panel I/O subassembly 130. If the state of the system changes, for example, a user becomes bound to the system, the feedback state may be updated on the front panel I/O subassembly to reflect the change in states.
  • the CPU 101, GPU 108, memory controller 110, and various other components within the multimedia console 100 are interconnected via one or more buses, including serial and parallel buses, a memory bus, a peripheral bus, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures may include a Peripheral Component Interconnect (PCI) bus and a PCI-Express bus.
  • application data may be loaded from the system memory 143 into memory 112 and/or caches 102, 104 and executed on the CPU 101.
  • the application may present a graphical user interface that provides a consistent user experience when navigating to different media types available on the multimedia console 100.
  • applications and/or other media contained within the media drive 144 may be launched or played from the media drive 144 to provide additional functionalities to the multimedia console 100.
  • the multimedia console 100 may be operated as a standalone system by simply connecting the system to a television or other display. In this standalone mode, the multimedia console 100 allows one or more users to interact with the system, watch movies, or listen to music. However, with the integration of broadband connectivity made available through the network interface 124 or the wireless adapter 148, the multimedia console 100 may further be operated as a participant in a larger network community.
  • a set amount of hardware resources are reserved for system use by the multimedia console operating system. These resources may include a reservation of memory (e.g., 16 MB), CPU and GPU cycles (e.g., 5%), networking bandwidth (e.g., 8 Kbs), etc. Because these resources are reserved at system boot time, the reserved resources do not exist from the application's view.
  • the memory reservation preferably is large enough to contain the launch kernel, concurrent system applications and drivers.
  • the CPU reservation is preferably constant such that if the reserved CPU usage is not used by the system applications, an idle thread will consume any unused cycles.
  • the multimedia console 100 boots and system resources are reserved, concurrent system applications execute to provide system functionalities.
  • the system functionalities are encapsulated in a set of system applications that execute within the reserved system resources described above.
  • the operating system kernel identifies threads that are system application threads versus gaming application threads.
  • the system applications are preferably scheduled to run on the CPU 101 at predetermined times and intervals in order to provide a consistent system resource view to the application. The scheduling is to minimize cache disruption for the gaming application running on the console.
  • a multimedia console application manager controls the gaming application audio level (e.g., mute, attenuate) when system applications are active.
  • Input devices are shared by gaming applications and system applications.
  • the input devices are not reserved resources, but are to be switched between system applications and the gaming application such that each will have a focus of the device.
  • the application manager preferably controls the switching of the input stream without the gaming application's knowledge, and a driver maintains state information regarding focus switches.
  • Capture device 20 may define an additional input device for the console 100.
  • FIG. 4 illustrates an example of a computing environment 220 that may be used as the computing environment 12 shown in FIGs. 1 and 2 to, e.g., interpret movement in a target recognition, analysis, and tracking system.
  • the computing environment 220 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the presently disclosed subject matter. Neither should the computing environment 220 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary computing environment 220.
  • the various depicted computing elements may include circuitry configured to instantiate specific aspects of the present disclosure.
  • the term circuitry used in the disclosure may include specialized hardware components configured to perform function(s) by firmware or switches.
  • the term circuitry may include a general purpose processing unit, memory, etc. , configured by software instructions that embody logic operable to perform function(s).
  • Where circuitry includes a combination of hardware and software, an implementer may write source code embodying logic and the source code may be compiled into machine readable code that may be processed by the general purpose processing unit. Since one skilled in the art may appreciate that the state of the art has evolved to a point where there is little difference between hardware, software, or a combination of hardware/software, the selection of hardware versus software to effectuate specific functions is a design choice left to an implementer. More specifically, one of skill in the art may appreciate that a software process may be transformed into an equivalent hardware structure, and a hardware structure may itself be transformed into an equivalent software process. Thus, the selection of a hardware implementation versus a software implementation is one of design choice and left to the implementer.
  • the computing environment 220 comprises a computer 241, which typically includes a variety of computer readable media.
  • Computer readable media may be any available media that may be accessed by computer 241 and includes both volatile and nonvolatile media, removable and non-removable media.
  • the system memory 222 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 223 and random access memory (RAM) 260.
  • a basic input/output system 224 (BIOS) containing the basic routines that help to transfer information between elements within computer 241, such as during start-up, is typically stored in ROM 223.
  • RAM 260 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 259.
  • FIG. 4 illustrates operating system 225, application programs 226, other program modules 227, and program data 228.
  • the computer 241 may also include other removable/non-removable, volatile/nonvolatile computer storage media.
  • FIG. 4 illustrates a hard disk drive 238 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 239 that reads from or writes to a removable, nonvolatile magnetic disk 254, and an optical disk drive 240 that reads from or writes to a removable, nonvolatile optical disk 253 such as a CD ROM or other optical media.
  • removable/non-removable, volatile/nonvolatile computer storage media that may be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like.
  • the hard disk drive 238 is typically connected to the system bus 221 through a non-removable memory interface such as interface 234, and magnetic disk drive 239 and optical disk drive 240 are typically connected to the system bus 221 by a removable memory interface, such as interface 235.
  • the drives and their associated computer storage media discussed above and illustrated in FIG. 4, provide storage of computer readable instructions, data structures, program modules and other data for the computer 241.
  • hard disk drive 238 is illustrated as storing operating system 258, application programs 257, other program modules 256, and program data 255. Note that these components may either be the same as or different from operating system 225, application programs 226, other program modules 227, and program data 228.
  • Operating system 258, application programs 257, other program modules 256, and program data 255 are given different numbers here to illustrate that, at a minimum, they are different copies.
  • a user may enter commands and information into the computer 241 through input devices such as a keyboard 251 and pointing device 252, which may take the form of a mouse, trackball, or touch pad, for instance.
  • Other input devices may include a microphone, joystick, game pad, satellite dish, scanner, or the like.
  • These and other input devices are often connected to the processing unit 259 through a user input interface 236 that is coupled to the system bus 221, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB).
  • the cameras 27, 28 and capture device 20 may define additional input devices for the console 100.
  • a monitor 242 or other type of display device is also connected to the system bus 221 via an interface, such as a video interface 232, which may operate in conjunction with a graphics interface 231, a graphics processing unit (GPU) 229, and/or a video memory 229.
  • computers may also include other peripheral output devices such as speakers 244 and printer 243, which may be connected through an output peripheral interface 233.
  • the computer 241 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 246.
  • the remote computer 246 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 241, although only a memory storage device 247 has been illustrated in FIG. 4.
  • the logical connections depicted in FIG. 4 include a local area network (LAN) 245 and a wide area network (WAN) 249, but may also include other networks.
  • Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.
  • When used in a LAN networking environment, the computer 241 is connected to the LAN 245 through a network interface or adapter 237. When used in a WAN networking environment, the computer 241 typically includes a modem 250 or other means for establishing communications over the WAN 249, such as the Internet.
  • the modem 250 which may be internal or external, may be connected to the system bus 221 via the user input interface 236, or other appropriate mechanism.
  • program modules depicted relative to the computer 241, or portions thereof may be stored in the remote memory storage device.
  • FIG. 4 illustrates remote application programs 248 as residing on memory device 247. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
  • FIG. 5 illustrates an example of a computing device 500 that may be used as the computing environment 12 shown in FIGs. 1 and 2 to, e.g., interpret movement in a target recognition, analysis, and tracking system.
  • Computing device 500 may be, for instance, a mobile station, a fixed or mobile subscriber unit, a pager, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a netbook, a tablet, a personal computer, a wireless sensor, consumer electronics, or the like.
  • the device 500 may include a processor 502, a transceiver 504, a transmit/receive element 506, a speaker/microphone 510, a keypad 512, a display/touchpad 514, nonremovable memory 516, removable memory 518, a power transceiver 508, a global positioning system (GPS) chipset 522, an image capture device 530, and other peripherals 520.
  • Processor 502 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGAs) circuits, any other type of integrated circuit (IC), a state machine, and the like.
  • the processor 502 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables device 500 to operate in a wireless environment.
  • the processor 502 may be coupled to the transceiver 504, which may be coupled to the transmit/receive element 506. While FIG. 5 depicts the processor 502 and the transceiver 504 as separate components, the processor 502 and the transceiver 504 may be integrated together in an electronic package or chip.
  • Processor 502 may perform image and movement analysis, or it may cooperate with remote devices via wireless communications to accomplish such analyses, for example.
  • the transmit/receive element 506 may be configured to transmit signals to, or receive signals from, e.g., a WLAN AN.
  • the transmit/receive element 506 may be an antenna configured to transmit and/or receive RF signals.
  • the transmit/receive element 506 may support various networks and air interfaces, such as WLAN (wireless local area network), WPAN (wireless personal area network), cellular, and the like.
  • the transmit/receive element 506 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example.
  • the transmit/receive element 506 may be configured to transmit and receive both RF and light signals. It will be appreciated that the transmit/receive element 506 may be configured to transmit and/or receive any combination of wireless or wired signals.
  • Processor 502 may access information from, and store data in, any type of suitable memory, such as non-removable memory 516 and/or removable memory 518.
  • Non-removable memory 516 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device.
  • Removable memory 518 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like.
  • the processor 502 may access information from, and store data in, memory that is not physically located on device 500, such as on a server or a home computer.
  • the processor 502 may be configured to control lighting patterns, images, or colors on the display or indicators 42 in response to various user requests, network conditions, quality of service policies, etc.
  • the processor 502 may receive power from the power transceiver 508, and may be configured to distribute and/or control the power to the other components in device 500.
  • the power transceiver 508 may be any suitable device for powering device 500.
  • the power transceiver 508 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), etc.), solar cells, fuel cells, and the like.
  • the processor 502 may also be coupled to the GPS chipset 522, which is configured to provide location information (e.g., longitude and latitude) regarding the current location of device 500.
  • the processor 502 may also be coupled to the image capture device 530. The capture device may be a visible spectrum camera, an IR sensor, a depth image sensor, etc.
  • the processor 502 may further be coupled to other peripherals 520, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity.
  • the peripherals 520 may include an accelerometer, an e-compass, a satellite transceiver, a sensor, a digital camera (for photographs or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, and the like.
  • Fig. 6 is a depiction in two dimensions 600 of information computed from one or more images captured of a user in his physical environment.
  • the processor has determined a contour 602 of a user, here shown as an outline.
  • the processor has estimated the location of the center of mass of the user 604, shown here marked with a cross.
  • the processor has determined two points of contact between the user and a fixture. In this case the fixture is the floor.
  • contact point 606 is the user's right foot
  • contact point 610 is his left foot.
  • the system has also computed the ground weight impressed upon the fixture by each point of contact, as shown in dashed lines with lengths proportional to the weight impressed, lines 608 and 612. Note that weight on the right foot 608 is greater than the weight on the left foot 612.
  • Contour 602 may serve as the avatar 24 of FIGs. 1 and 2, or it may serve as the basis for generating such an avatar. Thus the user may see an image or avatar corresponding to himself on the screen. An avatar may be, for example, a photographic or digitally-generated graphic image corresponding roughly to the outline user 18 presents to the system.
  • the static force impressed by each foot is shown as the vertical dashed lines 608 and 612.
  • the weight could be shown, for instance, as discs radiating out from the point of contact along the plane of the floor, and these discs could vary in size, color, or both, dynamically in proportion to the weight impressed.
  • Center of gravity 604 could be depicted by a cross which varies in position as the user's balance shifts, and varies in color in proportion to its acceleration.
  • the instantaneous weight borne by each point of contact may be computed directly from: the locations of the points of contact between the user and the fixtures; the mass of the user; and the location and acceleration of the center of mass of the user.
  • the outline of the user, including user height and width, may be estimated by a computing system from image data due to the user's motion, color, temperature, and range from the image sensor.
  • a model of the environment, including fixtures, is similarly inferable from image data due to its lack of motion, its color, temperature, and/or range from the sensor, and/or may alternatively be deemed to include any objects determined not to be the user.
  • Total mass of the user may be estimated, for instance, by assuming an average density of the user and/or by reference to lookup tables of average masses according to observed height and width of users, etc. Notably this is achievable with merely 2D (two dimensional) imaging, although 3D (three dimensional)/depth imaging may provide more accurate assessments of user total volume and hence mass. For example, from a depth image taken from the front of a user, a "depth hull" model of the front of the user may be determined, and from this a 3D depth hull of the entire user may be inferred. Significantly, the true mass of the user may not be necessary to compute, for instance, relative weight distribution among points of contact. Once a user's total mass is estimated, the location of the center of a user's mass may be computed as the centroid of the mass component elements.
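  • As a purely illustrative sketch of such an estimate (the density, thickness, and scale constants below are assumptions, not values from the patent), total mass may be approximated from the silhouette area and the center of mass taken as the silhouette centroid:

```python
# Hypothetical mass and center-of-mass estimate from a 2D silhouette.
# Density, thickness, and pixel scale are illustrative assumptions.
import numpy as np

ASSUMED_DENSITY_KG_PER_M3 = 1000.0   # roughly the density of water
ASSUMED_THICKNESS_M = 0.25           # crude front-to-back depth when only 2D data exists


def estimate_mass_and_com(user_mask: np.ndarray, meters_per_pixel: float):
    """Return (mass_kg, center_of_mass_xy_m) for a binary silhouette mask."""
    ys, xs = np.nonzero(user_mask)
    area_m2 = xs.size * meters_per_pixel ** 2
    mass_kg = area_m2 * ASSUMED_THICKNESS_M * ASSUMED_DENSITY_KG_PER_M3
    # Uniform-density centroid; with a depth image, pixels could instead be
    # weighted by an inferred per-pixel volume from the depth hull.
    com = np.array([xs.mean(), ys.mean()]) * meters_per_pixel
    return mass_kg, com
```

As the paragraph notes, the absolute mass cancels out when only the relative weight distribution among contact points is needed.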
  • Points of contact with fixtures are inferable from location of objects in the environment relative to those points on a user which are most distant from the user's center of gravity. Identification of anatomical extremities may not be needed per se. For instance, in the case that the only fixture is a floor, it is inferable that the only points of contact will be where the user image intersects with, or is tangent upon, the floor. These will be the user's feet if he is standing, knees if kneeling, or hands if doing a hand-stand, etc. Which specific part of the user's body is touching the fixture is not pertinent to the weight and weight distribution computations per se. It suffices to know how many points of contact there are, and where they are in relation to the center of mass.
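  • For example (an illustrative sketch only; the tolerance and clustering rule are assumptions), outline points lying on the floor plane can be grouped into distinct contacts such as two feet:

```python
# Illustrative grouping of outline points on the floor into distinct contact points.
import numpy as np


def find_contact_points(points: np.ndarray, floor_y: float,
                        tol: float = 0.02, gap: float = 0.15):
    """points: (N, 2) user outline in meters, y-up; returns a list of contact centers."""
    near_floor = points[np.abs(points[:, 1] - floor_y) < tol]
    if near_floor.size == 0:
        return []                                        # airborne: no contacts this frame
    ordered = near_floor[np.argsort(near_floor[:, 0])]   # sort along the floor (x axis)
    clusters, current = [], [ordered[0]]
    for p in ordered[1:]:
        if p[0] - current[-1][0] > gap:                  # a gap wider than one "foot"
            clusters.append(np.mean(current, axis=0))
            current = [p]
        else:
            current.append(p)
    clusters.append(np.mean(current, axis=0))
    return clusters                                      # e.g., [left contact, right contact]
```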
  • Acceleration of a user's center of mass may be computed from the change in position of the center of mass over time.
  • the computing system need only compare the center of mass position from a time sequence of images and measure how quickly the center of mass moved, in which direction, and at what speed, to then deduce acceleration.
  • the points of contact, center of mass, and acceleration of center of mass are known, the net forces impinging upon the center of mass, and upon the fixtures at the points of contact, are calculable by the arithmetic of Newtonian kinetics.
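  • A minimal sketch of that Newtonian arithmetic, under simplifying assumptions chosen here for illustration (only vertical support forces, exactly two floor contacts, friction and angular effects ignored), is:

```python
# Finite-difference center-of-mass acceleration and a lever-rule split of ground weight.
# Simplifications and symbol names are assumptions for this sketch.
import numpy as np

G = 9.81  # m/s^2


def com_acceleration(com_positions: np.ndarray, dt: float) -> np.ndarray:
    """Second finite difference of a (T, 2) center-of-mass track sampled every dt seconds."""
    return np.diff(com_positions, n=2, axis=0) / dt ** 2


def split_ground_weight(mass_kg: float, com_x: float, accel_y: float,
                        contact_x_left: float, contact_x_right: float):
    """Split the total vertical support force between two contacts by moment balance."""
    total = mass_kg * (G + accel_y)              # net upward force the floor must supply
    span = contact_x_right - contact_x_left
    w_right = total * (com_x - contact_x_left) / span
    w_left = total - w_right
    # A negative value means the center of mass lies outside the support span,
    # i.e. the user is tipping and the two-contact static split no longer holds.
    return w_left, w_right
```

Under these assumptions, a center of mass twice as far from the left contact as from the right places two thirds of the total force on the right contact, which is the kind of asymmetry depicted by the dashed lines in FIG. 6.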
  • Rigid body physics, for example, has been applied to video games, medical motion analysis and simulation, and forward and inverse robot kinematics.
  • a computing system may automatically solve for the forces impinging upon a point of contact as a rigid-body constraint satisfaction problem, where the values of the directions of forces and torques are found via iterative computation, as is done in iterative dynamics with temporal coherence.
  • the position and motion of the center of mass, and the geometry of the center of mass relative to contact points, determine the state vector at each contact point. For example, the position and motion of these points determine the magnitude and direction of the velocity of the points and of the torques acting upon them. Factoring in gravity, the inertia tensor of the user at each point of contact may be inferred. The forces responsible for changes in movement or tension can be computed by comparing what is happening from one frame of time to the next.
  • The relation referred to below as Formula 1 may be written as M (V₂ − V₁) = Δt (Jᵀλ + F_ext), where:
  • M is a matrix of masses.
  • V₁ and V₂ are vectors containing the linear and angular velocities at a time 1 and a time 2, and Δt is the change in time from time 1 to time 2.
  • J is a Jacobian matrix of partial differentials describing the constrained forces acting on the masses, and Jᵀ is the transpose of the Jacobian matrix J.
  • λ (lambda) is a vector of undetermined multipliers of the magnitude of the constraint forces.
  • Jᵀλ is the transpose of the Jacobian times lambda, which yields the forces and torques acting on the masses.
  • F_ext is a vector of the forces and torques external to the system of masses, such as gravity.
  • Thus the matrix of masses times the vector of differential velocities equals the change in time multiplied by the sum of the internal and external forces and torques acting on the masses.
  • the computing system may effectuate solving Formula 1 by filling in the other variables and solving for λ.
  • the direction of each constraint controls how Jᵀ is initially computed, so the direction and value of a particular force may then be computed by multiplying the corresponding Jᵀ and λ matrix elements.
  • a computational system may thus effectuate the computation of the state vector of a contact point at a series of frames of time based on the center of mass's state vector and the vector to the contact point. Then the state vectors may be compared from frame to frame, and the magnitude and vectors of the operative forces computed in accordance with what would be consistent for all contact points. This computation may be done by adjusting the state vectors in each iteration, as the system iterates over the contact points over and over again, e.g., in a Gauss-Seidel fashion.
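  • A toy sketch of such an iterative solution (not the patent's implementation) is shown below. It solves Formula 1 for λ under the additional assumption, common in iterative dynamics with temporal coherence, that the constrained velocities vanish at time 2 (J·V₂ = 0), and it can be warm-started with the previous frame's λ:

```python
# Gauss-Seidel sweep over contact constraints; a simplified, illustrative solver.
# Derivation: V2 = V1 + dt * M^-1 (J.T @ lam + F_ext) with J @ V2 = 0
# gives (J M^-1 J.T) lam = -(J V1)/dt - J M^-1 F_ext.
import numpy as np


def solve_constraint_forces(M, J, V1, F_ext, dt, lam0=None, iterations=20):
    """Return lam such that M (V2 - V1) = dt * (J.T @ lam + F_ext) and J @ V2 = 0."""
    M_inv = np.linalg.inv(M)
    A = J @ M_inv @ J.T                          # one row/column per constraint
    b = -(J @ V1) / dt - J @ M_inv @ F_ext
    lam = np.zeros(J.shape[0]) if lam0 is None else lam0.copy()
    for _ in range(iterations):                  # Gauss-Seidel sweeps over the contacts
        for i in range(len(lam)):
            residual = b[i] - A[i] @ lam + A[i, i] * lam[i]
            lam[i] = residual / A[i, i]
    return lam                                   # J.T @ lam gives the forces and torques
```

For unilateral contacts (a floor can only push), each λᵢ would additionally be clamped to be non-negative inside the loop.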
  • the accurate generation of an avatar and weight distribution feedback to the user may be achieved in part by foreknowledge of the intended sequence of motions of a user and skeletal modeling when available.
  • skeletal modeling may be employed when the exercise begins with the user standing fully upright.
  • precise positioning of body segments may be difficult to determine.
  • the user's balance from foot to foot may still be assessed, without reference to the skeletal model, by observing accelerations on the user's overall center of mass relative to contact points with the floor.
  • the system may return to full skeletal modeling. Similar mixtures of skeletal modeling and overall center of mass acceleration modeling may be tailored to any number of, for instance, dances, yoga poses, stretches and strength exercises, or rehabilitative protocols.
  • Where chin-ups are called for, for instance, the system may seek to identify fixtures above the user's head. If the use of a cane in the right hand is stipulated, a three-legged user silhouette may be mapped to an appropriately accommodated avatar, and the amount of force impressed upon the cane at various points in the user's stride may be assessed.
  • the computing platform may be a processor, a memory coupled to the processor containing instructions executable by the processor, and a memory coupled to the processor containing a sequence of images from an image capture device, where the processor, by executing the instructions, effectuates the determination of a first force comprising a relative magnitude and a direction, where the force is impressed upon a fixture at a point of contact by a targeted user.
  • the stance of a person - or of an animal, or even of a walking robot - within the range of the image capture device may be analyzed from a single image. There may be many people in the field of view of the image capture device. The system would first have to determine which of these users would be targeted for analysis.
  • the analysis may begin with computing a contour of the targeted user from a first image. Then, from the contour, a center of mass may be computed. The center of mass may depend on expected depths and density of the contour, based on a 2D image, or in part based on the observed depth of the contour in the case of a 3D/depth image. Next, the system may determine a point of contact where the user touches a fixture. From just these variables, the relative or actual magnitude and direction of the force can be computed. This could be done by Newtonian arithmetic, or by number-fitting methods such as constraint analysis. In either case, the force can be determined from the observed geometrical relationship of the center of mass to the first point of contact.
  • a second contour is computed from a second image. Movement of the center of mass of the user can be computed by comparing either the first and second contours or the first and second centers of mass computed from those contours. The rate of acceleration of the center of mass of the user can then be computed based on how far apart in time the two images were captured. This can be found by comparing the timestamps of the images, which are either explicit in the metadata of the images or implicit in knowledge of the frame rate of the capture device.
  • Target recognition, analysis, and tracking systems have often relied on skeletal tracking techniques to detect motion or other user behaviors and thereby control avatars representing the users on visual displays.
  • the methods of computing forces on the center of mass of a user can be combined with skeletal tracking techniques to provide seamless tracking of the user even where skeletal tracking techniques by themselves may not be successful in following the user's behavior.
  • Where the computing system effectuates the modeling of an avatar of the targeted user for visual display, it may, during a first period, effectuate modeling of the avatar by mapping plural identifiable portions of the contour of the user to skeletal segments of the avatar.
  • an accurate avatar can be created by mapping the observed body portions to the corresponding portions of the avatar.
  • the modeling of the avatar can be effectuated by inferring the movement of the skeletal segments of the avatar in accordance with the movement of the center of mass.
  • the system may still be able to tell where the center of the mass of the user is, and cause the motion of the avatar to move according to changes in the user's center of mass.
  • the system could switch back to using skeletal modeling in generating the avatar.
  • Any number of methods may be used to determine which modeling method will be used at which time to create the avatar.
  • the decision may be based, for example, on confidence levels of joint tracking data of a skeletal model, on the context in which the user is observed, or a combination of the two.
  • the computing system can effectuate processing of the avatar via the center-of-mass model. For instance, if the skeletal model places the feet clearly not under the user's center of mass, and the center of mass is not moving, then the situation is invalid, and the center-of-mass model will be preferred over the skeletal model.
  • the level of confidence used to trigger a change in the modeling method can depend upon what motions are expected from the user. For instance, certain yoga poses may be more likely to invert limb positions than more ordinary calisthenics. Hence different thresholds of confidence may apply in different situations.
  • context can be used, independently of confidence, to determine when to switch the modeling method. For example, if a user has selected to do a squat exercise, it may be anticipated that when the head gets too low with respect to the original height, the skeletal tracking may break down. At that point the system may be configured to trigger a transition to an avatar generation model based solely on the location of the center of mass or, alternatively, based on head height only, ignoring whatever results are currently arrived at by the skeleton modeling method. Similarly, when the user's head gets high enough again, the skeleton modeling method may once again be employed.
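  • A hypothetical sketch of such a per-frame decision (threshold values and names are assumptions for illustration, not taken from the patent):

```python
# Illustrative choice between skeletal and center-of-mass avatar modeling.
SQUAT_HEAD_DROP_FRACTION = 0.6      # switch when the head drops below 60% of standing height


def choose_model(joint_confidences, head_height, standing_head_height,
                 exercise: str, min_confidence: float = 0.5) -> str:
    """Return "skeletal" or "center_of_mass" for avatar generation on this frame."""
    low_confidence = min(joint_confidences) < min_confidence
    squat_breakdown = (exercise == "squat" and
                       head_height < SQUAT_HEAD_DROP_FRACTION * standing_head_height)
    if low_confidence or squat_breakdown:
        return "center_of_mass"
    return "skeletal"
```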
  • the overall center of mass method itself can be used to check the validity of the skeletal model.
  • the computing system can effectuate a direct comparison of contact-point forces as determined using the skeleton-based and the overall center-of-mass-based approaches. When the determinations diverge too much, the computing system may elect to rely on the overall center-of-mass-based results. A simple divergence test of this kind is sketched after this list.
  • any or all of the systems, methods and processes described herein may be embodied in the form of computer executable instructions (i.e., program code) stored on a computer-readable storage medium, which instructions, when executed by a machine, such as a computer, server, user equipment (UE), or the like, perform and/or implement the systems, methods and processes described herein.
  • Computer readable storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, but such computer readable storage media do not include signals.
  • Computer readable storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disks (DVD), magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other physical medium which may be used to store the desired information and which may be accessed by a computer.
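As referenced in the first bullet above, the contour-to-force computation can be illustrated with a brief sketch. This is a minimal illustration under assumed conventions, not the claimed method: the function names are hypothetical, density is taken as uniform, pixel coordinates stand in for metric coordinates, and a single static contact point is assumed.

```python
import numpy as np

def center_of_mass(user_mask, depth=None):
    """Estimate the user's center of mass from a filled contour (silhouette).

    user_mask: 2D boolean array marking pixels that belong to the user.
    depth:     optional 2D array of per-pixel depth values; when supplied, each
               pixel is weighted by its observed depth rather than an assumed
               uniform 2D density.
    """
    ys, xs = np.nonzero(user_mask)
    weights = np.ones(xs.size) if depth is None else depth[ys, xs].astype(float)
    total = weights.sum()
    return np.array([(xs * weights).sum() / total,
                     (ys * weights).sum() / total])

def contact_force(com, contact, body_mass_kg=75.0, g=9.81):
    """Rough static estimate of the force at a single point of contact.

    Assumes the user is momentarily supported entirely by that contact, so the
    force magnitude equals the supported weight and its direction runs along
    the line from the contact point toward the center of mass.
    """
    direction = (com - contact) / np.linalg.norm(com - contact)
    return body_mass_kg * g * direction
```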
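The motion computation from the second bullet can be sketched the same way. Timestamps are assumed to come from image metadata; when they are missing, the time step can be approximated as the reciprocal of the capture device's frame rate. Note that a discrete acceleration estimate needs at least three center-of-mass samples (or two velocities), an assumed refinement of the two-image description above.

```python
def com_velocity(com_prev, com_next, t_prev, t_next):
    """Center-of-mass velocity between two frames, using explicit timestamps
    (or t_next - t_prev = 1 / frame_rate when timestamps are unavailable)."""
    return (com_next - com_prev) / (t_next - t_prev)

def com_acceleration(com_a, com_b, com_c, dt):
    """Central-difference acceleration from three evenly spaced center-of-mass samples."""
    return (com_c - 2.0 * com_b + com_a) / (dt * dt)
```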
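Where skeletal tracking is unavailable and the avatar is driven by the center of mass alone, one deliberately crude fallback (names hypothetical) is to translate every skeletal segment by the observed center-of-mass displacement, without inferring per-joint poses.

```python
def update_avatar_from_com(joint_positions, com_prev, com_now):
    """Fallback avatar update: rigidly shift all skeletal segments by the change
    in the user's center of mass; no individual joint poses are inferred."""
    delta = com_now - com_prev
    return {name: position + delta for name, position in joint_positions.items()}
```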
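The switching between skeletal and center-of-mass modeling described above could combine confidence, validity, and context rules along the following lines. The thresholds, activity labels, and TrackingFrame fields are invented for illustration and would need tuning against real tracking data.

```python
from dataclasses import dataclass

@dataclass
class TrackingFrame:
    joint_confidence: float     # aggregate skeletal joint-tracking confidence, 0..1
    feet_under_com: bool        # skeletal feet plausibly below the observed center of mass
    com_speed: float            # center-of-mass speed, m/s
    head_height: float          # current head height, m
    initial_head_height: float  # head height when the activity started, m

CONFIDENCE_FLOOR = {"yoga": 0.4, "calisthenics": 0.6}  # assumed per-context thresholds
SQUAT_HEAD_DROP = 0.5                                   # fraction of original head height

def choose_model(frame: TrackingFrame, activity: str) -> str:
    """Return 'skeletal' or 'center_of_mass' for the current frame."""
    # Context rule: in a squat, expect skeletal tracking to break down once the
    # head drops well below its original height.
    if activity == "squat" and frame.head_height < SQUAT_HEAD_DROP * frame.initial_head_height:
        return "center_of_mass"
    # Validity rule: feet clearly not under a stationary center of mass is
    # physically implausible, so distrust the skeletal model.
    if not frame.feet_under_com and frame.com_speed < 0.05:
        return "center_of_mass"
    # Confidence rule: the acceptable confidence floor depends on the context.
    if frame.joint_confidence < CONFIDENCE_FLOOR.get(activity, 0.6):
        return "center_of_mass"
    return "skeletal"
```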
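Finally, the cross-check between skeleton-based and overall center-of-mass-based contact forces can be a simple relative-divergence test; the 25% tolerance below is an arbitrary placeholder.

```python
import numpy as np

def prefer_com_results(force_skeletal, force_com, rel_tolerance=0.25):
    """True when the skeleton-based contact-force estimate diverges from the
    overall center-of-mass-based estimate by more than the tolerance."""
    divergence = np.linalg.norm(np.asarray(force_skeletal) - np.asarray(force_com))
    scale = max(np.linalg.norm(np.asarray(force_com)), 1e-9)
    return divergence / scale > rel_tolerance
```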

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • General Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)
EP15787108.8A 2014-10-17 2015-10-14 Bildbasierte bestimmung der verteilung von bodengewicht Withdrawn EP3206765A1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US14/517,042 US20160110593A1 (en) 2014-10-17 2014-10-17 Image based ground weight distribution determination
PCT/US2015/055407 WO2016061153A1 (en) 2014-10-17 2015-10-14 Image based ground weight distribution determination

Publications (1)

Publication Number Publication Date
EP3206765A1 true EP3206765A1 (de) 2017-08-23

Family

ID=54360575

Family Applications (1)

Application Number Title Priority Date Filing Date
EP15787108.8A Withdrawn EP3206765A1 (de) 2014-10-17 2015-10-14 Bildbasierte bestimmung der verteilung von bodengewicht

Country Status (4)

Country Link
US (1) US20160110593A1 (de)
EP (1) EP3206765A1 (de)
CN (1) CN107077208A (de)
WO (1) WO2016061153A1 (de)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9811555B2 (en) 2014-09-27 2017-11-07 Intel Corporation Recognition of free-form gestures from orientation tracking of a handheld or wearable device
US10025989B2 (en) * 2015-05-05 2018-07-17 Dean Drako 3D event capture and image transform apparatus and method for operation
CN107346172B (zh) * 2016-05-05 2022-08-30 Fu Tai Hua Industry (Shenzhen) Co., Ltd. Motion sensing method and device
WO2018122600A2 (en) * 2016-12-28 2018-07-05 Quan Xiao Apparatus and method of for natural, anti-motion-sickness interaction towards synchronized visual vestibular proprioception interaction including navigation (movement control) as well as target selection in immersive environments such as vr/ar/simulation/game, and modular multi-use sensing/processing system to satisfy different usage scenarios with different form of combination
US20230103161A1 (en) * 2021-09-24 2023-03-30 Apple Inc. Devices, methods, and graphical user interfaces for tracking mitigation in three-dimensional environments

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5625577A (en) * 1990-12-25 1997-04-29 Shukyohojin, Kongo Zen Sohonzan Shorinji Computer-implemented motion analysis method using dynamics
RU2109336C1 (ru) * 1995-07-14 1998-04-20 Nurakhmed Nurislamovich Latypov Method for immersing a user in virtual reality and device for its implementation
US6430997B1 (en) * 1995-11-06 2002-08-13 Trazer Technologies, Inc. System and method for tracking and assessing movement skills in multidimensional space
EP1305767B1 (de) * 2000-05-18 2014-03-19 Commwell, Inc. Method for remote medical monitoring with integrated video processing
US7257237B1 (en) * 2003-03-07 2007-08-14 Sandia Corporation Real time markerless motion tracking using linked kinematic chains
US20050088515A1 (en) * 2003-10-23 2005-04-28 Geng Z. J. Camera ring for three-dimensional (3D) surface imaging
US8924021B2 (en) * 2006-04-27 2014-12-30 Honda Motor Co., Ltd. Control of robots from human motion descriptors
JP4388567B2 (ja) * 2007-06-26 2009-12-24 Kansai University Golf club analysis method
US8295546B2 (en) * 2009-01-30 2012-10-23 Microsoft Corporation Pose tracking pipeline
JP5367492B2 (ja) * 2009-07-31 2013-12-11 Dunlop Sports Co., Ltd. Golf club evaluation method
US8564534B2 (en) * 2009-10-07 2013-10-22 Microsoft Corporation Human tracking system
US8933884B2 (en) * 2010-01-15 2015-01-13 Microsoft Corporation Tracking groups of users in motion capture system
WO2012046392A1 (ja) * 2010-10-08 2012-04-12 Panasonic Corporation Posture estimation device and posture estimation method
CN102591456B (zh) * 2010-12-20 2015-09-02 Microsoft Technology Licensing LLC Detection of body and props
US10307640B2 (en) * 2011-09-20 2019-06-04 Brian Francis Mooney Apparatus and method for analyzing a golf swing
EP2674913B1 (de) * 2012-06-14 2014-07-23 Softkinetic Software Model fitting and tracking for a three-dimensional object
US9322653B2 (en) * 2013-01-16 2016-04-26 Disney Enterprises, Inc. Video-based motion capture and adaptation
US9161708B2 (en) * 2013-02-14 2015-10-20 P3 Analytics, Inc. Generation of personalized training regimens from motion capture data
US9159140B2 (en) * 2013-03-14 2015-10-13 Microsoft Technology Licensing, Llc Signal analysis for repetition detection and analysis
US9142034B2 (en) * 2013-03-14 2015-09-22 Microsoft Technology Licensing, Llc Center of mass state vector for analyzing user motion in 3D images
US20140267611A1 (en) * 2013-03-14 2014-09-18 Microsoft Corporation Runtime engine for analyzing user motion in 3d images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
None *
See also references of WO2016061153A1 *

Also Published As

Publication number Publication date
CN107077208A (zh) 2017-08-18
WO2016061153A1 (en) 2016-04-21
US20160110593A1 (en) 2016-04-21

Similar Documents

Publication Publication Date Title
US9943755B2 (en) Device for identifying and tracking multiple humans over time
US9384329B2 (en) Caloric burn determination from body movement
CN102356373B (zh) 虚拟对象操纵
US8633890B2 (en) Gesture detection based on joint skipping
CN102449576B (zh) 姿势快捷方式
JP5739872B2 (ja) モーションキャプチャにモデルトラッキングを適用するための方法及びシステム
EP2969079B1 (de) Signalanalyse zur wiederholungsdetektierung und analyse
CN105229666B (zh) 3d图像中的运动分析
EP2969080B1 (de) Massenmittelpunktstatusvektor zur analyse der benutzerbewegung in 3d-bildern
CN103530495B (zh) 增强现实仿真连续体
US8998718B2 (en) Image generation system, image generation method, and information storage medium
EP3206765A1 (de) Bildbasierte bestimmung der verteilung von bodengewicht
US20110199302A1 (en) Capturing screen objects using a collision volume
US20110296352A1 (en) Active calibration of a natural user interface
KR20120020137A (ko) 애니메이션 또는 모션들을 캐릭터에 적용하는 시스템 및 방법
KR20110120276A (ko) 표준 제스처
US20110070953A1 (en) Storage medium storing information processing program, information processing apparatus and information processing method
US9052746B2 (en) User center-of-mass and mass distribution extraction using depth images
US8482519B2 (en) Storage medium storing information processing program, information processing apparatus and information processing method
Poussard et al. 3DLive: A multi-modal sensing platform allowing tele-immersive sports applications

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20170412

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

17Q First examination report despatched

Effective date: 20190708

18W Application withdrawn

Effective date: 20190725