EP2389664A1 - Character animation control interface using motion capture - Google Patents

Character animation control interface using motion capture

Info

Publication number
EP2389664A1
Authority
EP
European Patent Office
Prior art keywords
virtual
effector
actor
pose
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP10738940A
Other languages
German (de)
English (en)
Inventor
Karen Liu
Satoru Ishigaki
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Georgia Tech Research Institute
Georgia Tech Research Corp
Original Assignee
Georgia Tech Research Institute
Georgia Tech Research Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Georgia Tech Research Institute and Georgia Tech Research Corp
Publication of EP2389664A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training

Definitions

  • Embodiments described herein relate generally to motion capture and computer animation technology, and more particularly to methods and apparatus for the generation of a virtual character based at least in part on the movements of a real-world actor.
  • Video capture devices often record the motion of an individual in the real world and use the gathered information to simulate that individual's motion in a virtual environment.
  • This technique can be used for a variety of purposes, many of which involve computer graphics and/or computer animation.
  • commercial entities often use known motion capture techniques to first record and then virtually reproduce the movements of a well-known individual, such as an athlete, in a computer or video game.
  • the generated virtual representation of real-world movements is thus familiar to the video game's target market and can accordingly improve a user's perception of game authenticity.
  • Offline data typically contains more precise measurements of an actor's movement, thus allowing a rendering system to more accurately depict the movement in a virtual world.
  • such offline data is also limited to the specific actor movements and poses gathered during the preliminary capture session, thus constraining such a system from rendering any of the other myriad poses that it might be desirable to depict.
  • a processor-readable medium stores code representing instructions to cause a processor to define a virtual feature.
  • the virtual feature can be associated with at least one engaging condition.
  • the code further represents instructions to cause the processor to receive an end-effector coordinate associated with an actor and calculate an actor intention based at least in part on a comparison between the at least one engaging condition and the end-effector coordinate.
  • FIG. 1 is a schematic illustration of a motion capture and pose calculator system, according to an embodiment.
  • FIG. 2 is a schematic block diagram that shows a virtual pose calculator module, according to an embodiment.
  • FIG. 3 is a schematic block diagram that shows an intention recognition module, according to an embodiment.
  • FIG. 4 is a flowchart that illustrates a method for calculating an intermediate virtual pose associated with a real-world actor and a virtual character, according to an embodiment.
  • FIG. 5 is a flowchart that illustrates a method for determining a new virtual character center of mass, according to an embodiment.
  • FIG. 6 is a flowchart that illustrates a method for calculating a final pose that avoids penetrated geometries, according to an embodiment.
  • a virtual pose calculation module can be configured to receive information associated with the spatial positions of end-effector markers coupled to a real-world actor such as a human being.
  • the module can map the real-world end-effector markers into a virtual world to render a virtual character based on the actor.
  • the module can be configured so as to minimize discrepancies between the poses and motion of the actor and those of the corresponding virtual character.
  • the module can be configured to enforce one or more constraints associated with a virtual world to ensure that the rendered virtual character moves in a manner consistent with its virtual surroundings.
  • the module can define one or more virtual features that exist within a virtual world.
  • the virtual features can be defined to include a set of position coordinates, dimensions, contact constraints, and/or a surface type.
  • one or more example motions can be defined and associated with each virtual feature.
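  • For illustration only (this structure is not specified in the patent text), a virtual feature and its associated example motions could be represented with a small data container such as the following Python sketch; every field name here is a hypothetical choice.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class ExampleMotion:
    """A predefined motion associated with a virtual feature (e.g., 'sit down')."""
    name: str
    end_effector_tracks: List[List[Vec3]]   # per-frame virtual end-effector positions

@dataclass
class VirtualFeature:
    """A virtual-world object the actor can engage (chair, floor, door, ...)."""
    name: str
    position: Vec3                        # spatial coordinates in the virtual world
    dimensions: Vec3                      # spatial dimensions
    surface_type: str                     # e.g. "rigid" or "soft"; drives friction
    engaging_conditions: List[Vec3]       # coordinates that, if occupied, signal intent
    contact_constraints: List[Vec3]       # interactive contact points on the surface
    example_motions: List[ExampleMotion] = field(default_factory=list)
```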
  • the virtual pose calculation module can include one or more submodules configured to determine an intention of a real-world actor relative to one or more of the virtual features. The determination can be based on, for example, the positions of end-effectors coupled to the real-world actor and/or the set of contact constraints associated with each virtual feature.
  • the module can include a submodule that determines if the actor's current pose mimics one of the set of example motions associated with that virtual feature. The determination can be based on, for example, a measure of similarity between the positions of real-world actor end-effectors and the positions of virtual end-effectors defined by the example motion.
  • the module can calculate an intermediate virtual pose for the virtual character based on the real-world actor's position and/or movement.
  • the module can include one or more submodules configured to construct the intermediate virtual pose by cycling through each actor end-effector and calculating an intermediate virtual end-effector position corresponding to that actor end-effector.
  • the submodule can assign the value of the intermediate virtual pose end-effector to the position of the corresponding actor end-effector if the actor end-effector is unconstrained and/or the corresponding virtual character end-effector is constrained.
  • the submodule can also assign the value of the intermediate virtual pose end-effector to a value calculated based on an interpolation between the corresponding example motion end-effector position and the actor end-effector position if both the corresponding virtual end-effector is unconstrained and the corresponding actor end-effector is constrained.
  • each intermediate virtual pose end-effector position calculation can be further weighted and/or influenced based on one or more additional factors or goals, such as consistency with a previous virtual character pose, similarity with the example motion, and consistency with the actor's overall motion.
  • the pose calculation module can be further configured to calculate a next center of mass for the virtual character.
  • the pose calculation module can include a submodule that calculates a next virtual center of mass based at least in part on a spring force associated with at least one virtual end-effector of a virtual character.
  • the calculation can be based at least in part on a frictional force associated with one or more constrained virtual end-effectors of the virtual character.
  • the calculation can be based at least in part on a simulated gravitational force exerted on the virtual character.
  • the pose calculation module can be further configured to combine an intermediate virtual pose and a new virtual center of mass (or "COM") to determine a new virtual pose for the virtual character.
  • the module can include one or more submodules configured to combine the virtual end-effector position values associated with the intermediate virtual pose with the new virtual COM to define the new pose.
  • the submodule can cycle through a set of interactive contact points associated with each virtual feature in contact with the new virtual pose to determine if any end-effector of the new virtual pose penetrates the surface of any virtual feature.
  • the submodule can insert an inequality constraint for each penetrated geometry into the original new-pose calculation formula so as to calculate a modified new pose that conforms to the contact constraints of each virtual feature and thus avoids any penetrated geometries.
  • the pose calculation module can send information associated with the new pose to another hardware- and/or software-based module such as a video game software module.
  • the module can send the information to a display device, such as a screen, for display of a virtual character rendered according to the new pose.
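  • Taken together, the modules above form a per-frame pipeline: intention recognition, intermediate pose composition, center-of-mass simulation, and final pose composition. The sketch below (function and argument names are ours, not the patent's) shows one way that dataflow could be wired; each stage is injected as a callable so the example stays self-contained.

```python
def process_frame(actor_end_effectors, character_state, virtual_features,
                  recognize_intention, compose_intermediate_pose,
                  simulate_com, compose_final_pose):
    """One per-frame pass through the pipeline sketched above.

    The four callables stand in for the intention recognition, intermediate pose
    composition, simulation, and final pose composition modules.
    """
    # 1. Which virtual feature (if any) is the actor engaging, and which example
    #    motion (if any) is being mimicked?
    engaged_feature, example_motion = recognize_intention(actor_end_effectors,
                                                          virtual_features)
    # 2. Blend actor markers with the example motion into an intermediate pose.
    intermediate_pose = compose_intermediate_pose(actor_end_effectors,
                                                  character_state, example_motion)
    # 3. Simulate a new center of mass from spring, friction, and gravity forces.
    new_com = simulate_com(intermediate_pose, character_state)
    # 4. Combine, enforce contact constraints, and return the displayable pose.
    contacts = engaged_feature.contact_constraints if engaged_feature else []
    return compose_final_pose(intermediate_pose, new_com, contacts)
```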
  • FIG. 1 is a schematic illustration of a motion capture and pose calculator system, according to an embodiment. More specifically, FIG. 1 illustrates an actor 100 wearing a plurality of markers 105. Based at least in part on the plurality of markers 105, the movements of the actor 100 are tracked by a capture device 110 and mapped into a virtual context by a pose calculator 120. The capture device 110 is operatively coupled to the pose calculator 120. In some embodiments, the pose calculator 120 can be operatively coupled to an integrated and/or external video display (not shown).
  • the actor 100 can be any real-world object, including, for example, a human being. In some embodiments, the actor 100 can be in motion. In some embodiments, the actor 100 can be clothed in special clothing sensitive to the capture device 110 and/or fitted with one or more markers sensitive to the capture device 110, such as the plurality of markers 105. In some embodiments, at least a portion of the markers 105 are associated with one or more actor end-effectors. In some embodiments, the actor 100 can be an animal, a mobile machine, a vehicle, or a robot.
  • the plurality of markers 105 can be any plurality of marker devices configured to allow tracking of movement by a capture device, such as the capture device 110.
  • the plurality of markers 105 can include one or more retro-reflective markers.
  • at least a portion of the plurality of markers 105 can be coupled or adhered to one or more articles of clothing, such as pants, a shirt, a bodysuit, and/or a hat or cap.
  • the capture device 110 can be any combination of hardware and/or software capable of capturing video. In some embodiments, the capture device 110 can be capable of detecting the spatial positions of one or more markers, such as the plurality of markers 105. In some embodiments, capture device 110 can be a dedicated video camera or a video camera coupled to or integrated within a consumer electronics device such as a personal computer, cellular telephone, or other device. In some embodiments, the capture device 110 can be a hardware-based module (e.g., a processor, an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA)).
  • the capture device 110 can be a software-based module residing on a hardware device (e.g., a processor) or in a memory (e.g., a RAM, a ROM, a hard disk drive, an optical drive, other removable media) operatively coupled to a processor.
  • the capture device 110 can be physically coupled to a stabilization device such as a tripod or monopod, as shown in FIG. 1.
  • the capture device 110 can be held and/or stabilized by a camera operator (not shown).
  • the capture device 110 can be in motion.
  • the capture device 110 can be physically coupled to a vehicle.
  • the capture device 110 can be physically and/or operatively coupled to the pose calculator 120.
  • the capture device 110 can be coupled to the pose calculator 120 via a wire and/or cable (as shown in FIG. 1).
  • the capture device 110 can be wirelessly coupled to the pose calculator 120 via one or more wireless protocols such as Bluetooth, Ultra Wide-band (UWB), wireless Universal Serial Bus (wireless USB), microwave, WiFi, WiMax, one or more cellular network protocols such as GSM, CDMA, LTE, etc.
  • the pose calculator 120 can be any combination of hardware and/or software capable of calculating a virtual pose and/or position associated with the actor 100 based at least in part on information received from the capture device 110.
  • the pose calculator 120 can be a hardware computing device including a processor, a memory, and firmware and/or software configured to cause the processor to calculate the actor pose and/or position.
  • the pose calculator 120 can be any other hardware- based module, such as, for example, an application-specific integrated circuit (ASIC) or a field programmable gate array (FPGA).
  • the pose calculator 120 can alternatively be a software-based module residing on a hardware device (e.g., a processor) or in a memory (e.g., a RAM, a ROM, a hard disk drive, an optical drive, other removable media) operatively coupled to a processor.
  • FIG. 2 is a schematic block diagram that shows a virtual pose calculator, according to an embodiment. More specifically, FIG. 2 illustrates a virtual pose calculator 200 that includes a first memory 210, an input/output (I/O) module 220, a processor 230, and a second memory 240 that includes an intention recognition module 242, an intermediate pose composition module 244, a simulation module 246 and a final pose composition module 248.
  • Intention recognition module 242 can receive motion capture information from I/O module 220 and send intention, motion capture and/or example motion information to the intermediate pose composition module 244.
  • Intermediate pose composition module 244 can receive intention, motion capture and/or example motion information from the intention recognition module 242 and send intermediate pose information to simulation module 246.
  • Simulation module 246 can receive intermediate pose information from intermediate pose composition module 244 and send new center of mass ("COM") information and/or contact constraint information associated with a virtual feature to final pose composition module 248.
  • Final pose composition module 248 can receive contact constraint information from the intention recognition module 242 and/or the simulation module 246.
  • the final pose composition module can receive new center of mass information and/or intermediate pose information from the simulation module 246.
  • the final pose composition module can receive intermediate pose information from the intermediate pose composition module 244.
  • the final pose composition module 248 can send final pose information to the I/O module 220.
  • I/O module 220 can be configured to send at least a portion of the final pose information to an output display, such as a monitor or screen (not shown).
  • I/O module 220 can send at least a portion of the final pose information to one or more hardware and/or software modules, such as a video game module or other computerized application module.
  • the first memory 210, the I/O module 220, the processor 230 and the second memory 240 can be connected by, for example, one or more integrated circuits. Although shown as being within a single location and/or device, in some embodiments, any of the two memories, the I/O module 220, and the processor 230 can be connected over a network, such as a local area network, a wide area network, or the Internet.
  • First memory 210 and second memory 240 can be any type of memory such as, for example, a read-only memory (ROM) or a random-access memory (RAM).
  • the first memory 210 and/or the second memory 240 can be, for example, any type of computer-readable media, such as a hard-disk drive, a compact disc read-only memory (CD-ROM), a digital video disc (DVD), a Blu-ray disc, a flash memory card, or other portable digital memory type.
  • the first memory 210 can be configured to send signals to and receive signals from the second memory 240, the I/O module 220 and the processor 230.
  • the second memory 240 can be configured to send signals to and receive signals from the first memory 210, the I/O module 220 and the processor 230.
  • I/O module 220 can be any combination of hardware and/or software configured to receive information into and send information from the virtual pose calculator 200.
  • the I/O module 220 can receive information from a capture device (such as the capture device discussed in connection with FIG. 1 above) that includes video and/or motion capture information.
  • I/O module 220 can send information to another hardware and/or software module or device such as an output display, other computerized device, video game console or game module, etc.
  • Processor 230 can be any processor or microprocessor configured to send and receive information, send and receive one or more electrical signals, and process and/or generate instructions.
  • the processor 230 can include firmware and/or one or more pipelines, busses, etc.
  • the processor could be, for example, a digital signal processor (DSP), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), etc.
  • the processor can be an embedded processor and can be and/or include one or more co-processors.
  • the intention recognition module 242 can be any combination of hardware and/or software capable of receiving motion capture data and determining an actor intention based thereon. As shown in FIG. 2, intention recognition module 242 can be a software module residing in second memory 240. In some embodiments, the intention recognition module 242 can include information associated with one or more virtual features of a virtual world, space, context or setting (not shown). For example, in some embodiments, the intention recognition module 242 can include information associated with one or more virtual features, such as furniture, equipment, projectiles, other virtual characters, structural components such as floors, walls and ceilings, etc.
  • the intention recognition module 242 can include a set of engaging conditions, contact constraints and/or one or more example motions associated with each virtual feature of a virtual world.
  • each set of engaging conditions can, when satisfied, indicate that an actor is intending to interact with an associated virtual feature.
  • each set of engaging conditions can include a set of spatial coordinates associated with a virtual feature that, when occupied by an actor, indicate that the actor intends to interact with that virtual feature.
  • the intention recognition module 242 can determine if the actor intends to interact with any of the defined virtual features based on the set of engaging conditions associated with each.
  • the intention recognition module 242 can determine if an actor is mimicking an example motion associated with a virtual feature. For example, if intention recognition module 242 has determined that the actor has satisfied one or more engaging conditions associated with that virtual feature, the module can compare one or more positions and/or velocities associated with an actor to determine if the actor's current pose closely matches a stored example motion associated with that virtual feature. In some embodiments, the intention recognition module 242 can send information associated with the determination to the intermediate pose composition module 244. In some embodiments, the intention recognition module 242 can additionally send to the intermediate pose composition module 244 one or more of: motion capture data associated with the actor, a set of engaging conditions associated with a virtual feature, and the definition of an example motion associated with the virtual feature. In some embodiments, the intention recognition module 242 can send a set of contact constraints associated with the virtual feature to the final pose composition module 248.
  • the intermediate pose composition module 244 can be any combination of hardware and/or software capable of composing an intermediate virtual pose based at least in part on actor motion capture information and an example motion associated with a virtual feature. As shown in FIG. 2, pose composition module 244 can be a software module residing in second memory 240. In some embodiments, the pose composition module 244 can use the motion capture data and example motion information associated with a virtual feature to calculate an integrated pose that describes the pose of a real-world actor in a virtual world.
  • the pose composition module 244 can receive information associated with one or more motion markers that define the position of a real-world actor and the definition of an example motion associated with a virtual feature.
  • the motion marker information can indicate the spatial positions of one or more end-effectors adhered to or associated with the actor.
  • the defined example motion can indicate the spatial positions of one or more end points of an example motion that can be performed with, on, or about a virtual feature.
  • the pose composition module 244 can use the received information to intelligently and adaptively calculate an integrated virtual pose for the actor that closely resembles the actor's real-world pose.
  • the pose composition module 244 can send at least a portion of the integrated pose information to the simulation module 246 and/or final pose composition module 248.
  • the simulation module 246 can be any combination of hardware and/or software capable of calculating a new simulated center of mass for a virtual character based on an intermediate virtual pose and the virtual character's current pose. As shown in FIG. 2, simulation module 246 can be a software module residing in second memory 240. In some embodiments, the simulation module 246 can use the intermediate pose information calculated by the pose composition module 244 and information defining the virtual character's current pose to calculate a new center of mass of the virtual character. In some embodiments, the simulation module 246 can send at least a portion of the new center of mass information to the final pose composition module 248.
  • the final pose composition module 248 can be any combination of hardware and/or software capable of calculating a final virtual character pose based at least in part on the virtual character's current pose, a simulated new center of mass for the virtual character, and a set of contact constraints associated with a virtual feature currently being engaged by the virtual character.
  • the final pose composition module 248 can receive any of: intention information, example motion information, virtual feature information, contact constraint information, intermediate pose information, and/or new center of mass information from any of intention recognition module 242, intermediate pose composition module 244, and simulation module 246.
  • FIG. 3 is a schematic block diagram that shows an intention recognition module, according to an embodiment. More specifically, FIG. 3 illustrates an intention recognition module 300 that includes a feature engagement module 310, a contact constraint module 320 and a motion mimicking module 330.
  • the feature engagement module 310 can send one or more signals to contact constraint module 320.
  • the contact constraint module 320 can send one or more signals to the motion mimicking module 330.
  • the intention recognition module 300 can include one or more hardware and/or software modules configured to receive and send signals including information to and from the module.
  • the intention recognition module 300 can be a software module stored in a memory of a computerized device configured to process motion capture information.
  • the intention recognition module 300 can be a separate hardware device operatively coupled to one or more other hardware devices for purposes of processing motion capture information and/or calculating properties of a virtual character.
  • the intention recognition module 300 can calculate whether a virtual character is attempting to interact with one or more virtual features and/or objects. For example, in some embodiments, the intention recognition module 300 can receive motion capture information based on a current position and/or movement of a real-world actor, such as a human, and use the information to determine if the actor is attempting to interact with a virtual door, chair, or book. In some embodiments, if the intention recognition module 300 determines that the actor is attempting to interact with a given virtual feature, it can then compare the actor's current real-world pose to each of a predefined set of example motions associated with that virtual feature to determine if the actor is currently mimicking any of them. In some embodiments, the intention recognition module 300 can then send the example motion information and current actor real-world pose information to another module (such as the intermediate pose calculator module discussed in connection with FIG. 2 above) for calculation of an intermediate virtual character pose based on the sent information.
  • Feature engagement module 310 can be any combination of hardware and/or software configured to receive current actor pose information and determine whether the actor is attempting to engage with any particular virtual feature in a virtual world. More specifically, in some embodiments, the feature engagement module 310 can first receive information that defines an actor's real-world pose and/or position. In some embodiments, the information can be detected, gathered, and/or received by a capture or other device operatively or physically coupled to the intention recognition module. In some embodiments, the pose and/or position information can comprise one or more spatial coordinates of one or more end-effectors of the actor.
  • the pose information can include a series of x, y, and z or r, θ, and φ coordinate sets, each associated with an actor end-effector such as a marker or other physical end-effector.
  • each end-effector can be physically positioned on an actor body point, such as an elbow, a hand, or another exterior portion of the body.
  • the feature engagement module 310 can include information associated with one or more virtual features of the virtual world.
  • the virtual feature information can include color, spatial position, spatial dimension, mass, surface area, volume, rigidity, malleability, friction, surface type and/or other properties of a virtual feature.
  • the feature engagement module 310 can include a set of engaging conditions associated with each virtual feature.
  • the engaging conditions can include, for example, a set of spatial coordinates that, if occupied by a real-world actor (i.e., closely mapped by the actor's current end-effector positions), indicate that the actor is currently attempting to "engage", or interact with, that virtual feature.
  • the feature engagement module 310 can cycle through each virtual feature in a current virtual world and determine if the actor is currently engaging that virtual feature. For example, in some embodiments the feature engagement module 310 can, for each virtual feature, compare that virtual feature's associated engaging conditions with the current spatial positions of the actor's end-effectors. If the actor's current pose meets the engaging conditions associated with a given virtual feature, the feature engagement module 310 can define an engagement indicator variable indicating that the actor is currently engaging that particular virtual feature.
  • the feature engagement module 310 can determine if the actor's current position meets a given virtual feature's engaging conditions based on whether the difference, or Δ, between the actor's end-effector positions and the virtual feature's spatial position and dimensions is below a predetermined threshold. If the feature engagement module 310 does in fact set an indicator value indicating that the actor is currently engaging a particular virtual feature, it can send the engagement indicator, an identifier associated with the virtual feature, and the actor's end-effector position information to the contact constraint module 320. In some embodiments, the feature engagement module 310 can alternatively or additionally send an identifier associated with the particular virtual feature and the actor end-effector positions to the motion mimicking module 330.
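  • A minimal sketch of the engagement test described above, assuming each virtual feature exposes an engaging_conditions array and using a mean nearest-marker distance as the Δ compared against a hypothetical threshold:

```python
import numpy as np

def find_engaged_feature(actor_end_effectors, virtual_features, threshold=0.15):
    """Return the feature whose engaging conditions the actor currently meets.

    actor_end_effectors: (N, 3) array of marker positions.
    Each feature is assumed to expose .engaging_conditions as an (M, 3) array.
    The delta is the mean distance from each engaging coordinate to the nearest
    actor end-effector; engagement is declared when it falls below `threshold`.
    """
    actor = np.asarray(actor_end_effectors, dtype=float)
    best, best_delta = None, np.inf
    for feature in virtual_features:
        conds = np.asarray(feature.engaging_conditions, dtype=float)
        # distance from every engaging coordinate to its closest actor marker
        dists = np.linalg.norm(conds[:, None, :] - actor[None, :, :], axis=2)
        delta = dists.min(axis=1).mean()
        if delta < threshold and delta < best_delta:
            best, best_delta = feature, delta
    return best  # None when no engaging conditions are satisfied
```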
  • the contact constraint module 320 can receive a virtual feature identifier, a set of actor end-effector positions, and an engagement indicator from the feature engagement module 310.
  • the engagement indicator can contain a binary value, such as "yes", "no", 1, 0, or information that identifies a virtual feature currently being engaged by the actor.
  • the contact constraint module 320 can calculate a set of contact constraints, or interactive contact points, associated with the identified virtual feature.
  • the contact constraints can be, for example, a set of points that define the position, dimensions, edges, and/or surface of an associated virtual feature.
  • the contact constraint module 320 can then send at least one of the calculated contact constraints, the virtual feature identifier, and the actor end-effector spatial coordinates to the motion mimicking module 330.
  • Motion mimicking module 330 can be any combination of hardware and/or software configured to determine if a real-world actor, such as a human actor, is currently mimicking a predefined example motion associated with a virtual feature. As shown in FIG. 3, the motion mimicking module 330 can be a software module storing instructions configured to cause a processor to execute one or more steps that perform the above actions.
  • the motion mimicking module 330 can receive actor pose information, such as actor end-effector position information, a virtual feature identifier, and/or an engagement indicator from one or more of feature engagement module 310 and contact constraint module 320. In some embodiments, the motion mimicking module 330 can receive any of the above from another hardware and/or software module, or other hardware or computerized device.
  • the motion mimicking module 330 can determine whether the actor is currently mimicking any of a set of predefined example motions associated with the virtual feature that the actor is currently engaging. For example, in some embodiments, the module can cycle through each example motion associated with the engaged virtual feature, and for each, cycle through each actor end-effector to determine if the spatial position of that actor end-effector matches (or matches within an acceptable margin of error) the spatial position of a corresponding virtual end-effector defined by that example motion. In some embodiments, the module can additionally compare a velocity of that actor end-effector with the velocity of the corresponding virtual end-effector defined by the example motion.
  • the module can be configured to only consider actor end-effectors that are currently "unconstrained", i.e. currently not in direct contact with another physical mass or object.
  • an actor standing up straight on a floor with hands to the side can be considered to have constrained end-effectors on the feet (which are currently in contact with the floor), but unconstrained end-effectors on the hands (which are currently dangling in the air, acted upon only by gravity).
  • the above comparison process can be executed in reduced- or low-dimensional space so as to simplify the necessary calculations.
  • the motion mimicking module 330 can use principal component analysis (PCA) as part of the process described above.
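  • One way to obtain such a reduced space (an assumption on our part, not a method prescribed by the patent) is to build a PCA basis from the flattened example-motion poses with plain numpy and project both poses before comparing them:

```python
import numpy as np

def pca_basis(example_poses, n_components=3):
    """Low-dimensional basis built from flattened example-motion poses.

    example_poses: (num_frames, num_end_effectors * 3) array.
    Returns the mean pose and the top principal directions.
    """
    X = np.asarray(example_poses, dtype=float)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(pose, mean, basis):
    """Project a flattened pose into the reduced space before comparison."""
    return (np.asarray(pose, dtype=float) - mean) @ basis.T
```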
  • the comparison can be made holistically on an entire example motion and set of actor end-effectors. In other words, a running error or discrepancy total can be kept throughout each end-effector comparison for a given example motion.
  • the motion mimicking module 330 can compare the total error for that example motion with a predetermined threshold. If, for example, the total error for the current example motion fails to exceed the predetermined threshold, the actor's current real-world pose and the example motion can be considered sufficiently similar for the mimicking module 330 to conclude that the actor is currently mimicking that example motion associated with the engaged virtual feature.
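  • The running-error test could be sketched as follows; the weights, the threshold, and the positions/velocities attributes assumed on the example motion are all hypothetical:

```python
import numpy as np

def is_mimicking(actor_pose, actor_velocity, example, constrained_mask,
                 pos_weight=1.0, vel_weight=0.2, threshold=0.5):
    """Running-error comparison against one example motion.

    actor_pose / actor_velocity: (N, 3) actor end-effector positions and velocities.
    example: assumed to expose matching `positions` and `velocities` arrays.
    constrained_mask: booleans; constrained end-effectors are skipped, mirroring
    the unconstrained-only comparison described above.
    """
    actor_pose = np.asarray(actor_pose, dtype=float)
    actor_velocity = np.asarray(actor_velocity, dtype=float)
    total_error = 0.0
    for i in range(len(actor_pose)):
        if constrained_mask[i]:
            continue  # only unconstrained end-effectors contribute to the error
        total_error += pos_weight * np.linalg.norm(actor_pose[i] - example.positions[i])
        total_error += vel_weight * np.linalg.norm(actor_velocity[i] - example.velocities[i])
    return total_error < threshold  # below threshold => actor is mimicking this motion
```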
  • the above comparisons between sets of actor end-effector coordinates and sets of predefined virtual end-effector coordinates can include comparison of only subsets of the two end-effector sets.
  • the comparisons can be made on only a subset of core or bellwether end-effectors that are sufficient to indicate an actor's overall intention and/or general pose.
  • the motion mimicking module 330 can send one or more signals to another module within the intention recognition module 300 and/or an external hardware and/or software module including at least one of: an engagement indicator, an example motion indicator or identifier, a mimicked example motion definition (if applicable), and/or the actor end-effector coordinates.
  • FIG. 4 is a flowchart that illustrates a method for calculating an intermediate virtual pose associated with a virtual character, according to an embodiment. More specifically, FIG. 4 illustrates a series of steps that can be executed by a device to calculate an intermediate virtual pose based on an example motion associated with a virtual feature and a current real-world actor pose. When executed, the steps can calculate a position in virtual space (i.e., an intermediate virtual end-effector) corresponding to each of a series of end-effectors associated with a current real-world actor position as detected by a motion capture system. In some embodiments, each step can be performed by any combination of hardware and/or software, such as one or more computerized devices. Such a device will be discussed for purposes of explanation below.
  • steps 410 through 430 can be performed for each of a set of actor end-effectors, 400.
  • the discussion of each step 410-430 below will discuss execution of that step for a single actor end-effector.
  • the computerized device can execute the steps 410-430 at least once for each actor end-effector from the set of actor end-effectors associated with the real-world actor, thereby calculating a complete intermediate virtual pose.
  • the actor end-effectors can be a set of one or more actor body end points or reflective markers positioned in real space, with each position being represented by one or more spatial coordinates.
  • the position of each actor end-effector can be represented by a set of x, y, and z or r, θ, and φ coordinates.
  • each actor end-effector position can be determined by a video capture device and a computerized hardware and/or software device coupled thereto.
  • a computerized device can determine whether an actor end-effector is constrained, at 410.
  • the computerized device can receive the actor end-effector position from an I/O module or an intention module similar to the I/O and intention modules discussed in connection with FIG. 2 above.
  • the device can determine if the end-effector's position indicates that it is currently in contact with an external surface.
  • the end-effector can be positioned on an actor's foot, and the computerized device can determine that the end-effector is currently in contact with a surface, such as a floor.
  • the computerized device can next execute one of two instructions based on the above-determined constraint state of the actor end-effector. If the actor end-effector is currently unconstrained, the device can set the position of the corresponding intermediate pose end-effector to that of the current actor end-effector, 415. For example, in some embodiments, if the actor end-effector is determined to be unconstrained at 410 and has a position defined by coordinates (x1, y1, z1), the device can assign the corresponding end-effector value for the intermediate virtual pose to (x1, y1, z1). At that point, the device can iterate and/or proceed to consider a next actor end-effector and return to step 410 described above. Alternatively, if the actor end-effector from step 410 is currently constrained, the device can proceed to step 420.
  • the computerized device can determine if the virtual character end-effector corresponding to the actor end-effector is constrained, 420. In some embodiments, the device can compare the position of the virtual character end-effector corresponding to the actor end-effector to that of one or more virtual features to determine if the virtual end-effector is positioned sufficiently close to the feature to be constrained. If the device determines that the virtual end-effector is constrained, it can proceed to step 415 described above and continue processing based on the current actor end-effector and corresponding virtual end-effector. If the device determines that the virtual end-effector is not currently constrained, it can proceed to step 430 described below.
  • the computerized device can calculate the position of the intermediate virtual end-effector corresponding to the actor end-effector, 430.
  • the calculation can be based on, for example, an interpolation calculation between the actor end-effector and the corresponding virtual character end-effector positions.
  • the interpolation calculation can include an averaging calculation based on the positions of both the actor and corresponding example motion end-effectors. Such an interpolation can be advantageous inasmuch as it effects a compromise between the real-world movement of the actor and the virtual-world-specific example motion.
  • the calculation can include and/or be influenced by one or more weighting factors.
  • the one or more weighting factors can be configured to preserve similarity of the calculated intermediate virtual pose to the example pose associated with the engaged virtual feature.
  • at least one weighting factor can be configured to minimize differences between the calculated intermediate virtual pose and a previous pose of the virtual character.
  • at least one weighting factor can be configured to preserve and/or follow motion of the actor. After calculating the intermediate virtual end-effector position, the device can iterate and/or proceed to consider a next actor end-effector as discussed above.
  • the computerized device can execute the above instructions on each of at least a portion of a set of actor end-effectors so as to, in the aggregate, compute an intermediate virtual pose comprised of individual virtual end-effector values.
  • the set of actor end-effectors can be a subset of all the possible actor end-effectors associated with a real-world actor.
  • the set of actor end-effectors can comprise a minimal number of end-effectors, such as five. In such embodiments, the minimal number of actor end-effectors can be located on core portions of the actor's body so as to maximize the degree to which their movement is representative of the actor's movement as a whole.
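  • Putting the FIG. 4 branches together, a per-end-effector composition loop might look like the following sketch, where a single blend weight stands in for the interpolation and weighting factors discussed above:

```python
import numpy as np

def compose_intermediate_pose(actor_eff, actor_constrained,
                              example_eff, virtual_constrained, blend=0.5):
    """Per-end-effector composition rule of FIG. 4 (steps 410, 415, 420, 430).

    actor_eff / example_eff: (N, 3) actor and example-motion end-effector positions.
    actor_constrained / virtual_constrained: boolean arrays of length N.
    blend: a single interpolation weight standing in for the pose-continuity,
    example-similarity, and actor-motion weighting factors described above.
    """
    actor_eff = np.asarray(actor_eff, dtype=float)
    example_eff = np.asarray(example_eff, dtype=float)
    intermediate = np.empty_like(actor_eff)
    for i in range(len(actor_eff)):
        if (not actor_constrained[i]) or virtual_constrained[i]:
            # 410/420 -> 415: follow the actor end-effector directly
            intermediate[i] = actor_eff[i]
        else:
            # actor constrained, virtual unconstrained -> 430: interpolate
            intermediate[i] = (1.0 - blend) * actor_eff[i] + blend * example_eff[i]
    return intermediate
```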
  • FIG. 5 is a flowchart that illustrates a method for determining a new virtual character center of mass ("COM"), according to an embodiment. More specifically, FIG. 5 illustrates a series of steps that can be executed by a device to calculate a new virtual character COM based at least in part on a calculated intermediate virtual pose, sets of real-world actor end-effector positions and contact types, and one or more surface types associated with one or more constrained virtual character end-effectors.
  • a computerized device or module can receive the above information from a hardware and/or software module that calculates an intermediate virtual pose, using, for example, a method similar to that discussed in connection with FIG. 4 above.
  • each step of the process described in FIG. 5 can be performed by any combination of hardware and/or software, such as one or more computerized devices. Such a device will be discussed for purposes of explanation below.
  • a computerized device can define a virtual character to simulate a real-world actor's body and movement using a spring model.
  • the virtual character can be defined by a center-of-mass point and four damped "springs" that each approximate a human limb.
  • the center-of-mass point can be a point in virtual space defined by one or more coordinates, such as spatial coordinates in the form (x, y, z), (r, θ, φ), etc.
  • the center-of-mass point can be considered to be separately "attached" to each of the four damped "springs" and can be referred to simply as a "center of mass" or "COM".
  • the virtual character defined by the above features can be supported against gravity by a sum of spring forces exerted by each virtual end-effector of the virtual character, a sum of frictional forces operating on constrained end-effectors of the virtual character, and the simulated gravitational force operating on the virtual COM.
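  • Written as a force balance in our own notation (the patent text does not give an explicit formula), the supported COM satisfies

$$ m\,\ddot{\mathbf{x}}_{\mathrm{COM}} \;=\; \sum_{i=1}^{N} \mathbf{f}^{\mathrm{spring}}_{i} \;+\; \sum_{j \in \mathcal{C}} \mathbf{f}^{\mathrm{friction}}_{j} \;+\; m\,\mathbf{g}, $$

  where the first sum runs over all N virtual end-effectors, the second runs over the set $\mathcal{C}$ of constrained end-effectors, and $m\,\mathbf{g}$ is the simulated gravitational force acting on the COM.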
  • a computerized device can calculate a spring force exerted by each of the virtual character's end-effectors, 500. More specifically, the device can calculate the spring force exerted by each virtual end-effector based at least in part on a relative distance between a COM and the spatial position of that end-effector in the virtual world. For example, in some embodiments, the device can calculate the spring force exerted by a given virtual end-effector at the current time by calculating the difference between the distance between the current virtual COM and that end-effector and the distance between the current real-world actor COM and the corresponding real-world end-effector.
  • this difference can indicate the amount of virtual space that the virtual character's simulated limb must move relative to the virtual COM to properly simulate the movement of the real-world actor end-effector.
  • this spring force calculation can be further based at least in part on one or more predefined spring coefficients.
  • the spring force calculation can include a gravity factor configured to compensate for the effect of simulated gravity on each constrained end-effector of the virtual character.
  • the gravity factor can be configured to equally distribute the gravitational force across all end-effectors of the virtual character.
  • the device can calculate the frictional force acting on each constrained virtual end-effector, 510. More specifically, the device can calculate the frictional force exerted on each virtual end-effector currently in contact with an external virtual feature or surface. For example, in some embodiments, the device can cycle through each virtual end-effector and determine if that end-effector is constrained, by, for example, comparing the position of that end-effector with the spatial coordinates of one or more virtual features of the virtual world.
  • the device can calculate a distance between the current virtual COM and the current actor real-world COM to determine the magnitude and/or direction of the shift needed to "move" the virtual COM to a position that matches the real-world COM.
  • the device can use this distance, along with a virtual end-effector type and/or a virtual feature surface type to calculate the frictional force currently experienced by that virtual end-effector.
  • the above steps can be performed for each virtual end-effector so as to calculate a friction force for each constrained virtual end-effector.
  • the device can calculate a gravitational force mg currently exerted on the virtual COM, 520.
  • the device can multiply a predetermined mass value m by a predefined gravitational constant g associated with the current virtual world.
  • the gravitational constant g can be given the value 9.8 m/s² to simulate the gravitational force experienced by objects on Earth.
  • the device can combine the results of steps 500, 510 and 520 described above to compute a new virtual COM, 530. More specifically, in some embodiments the device can add the sum of all spring forces exerted by the virtual end-effectors, the sum of all frictional forces exerted on the constrained virtual end-effectors, and the gravitational force to determine a new spatial position of the virtual COM.
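  • A compact sketch of steps 500-530 with illustrative constants; the spring, friction, and gravity terms follow the description above, but the specific formulas and coefficients are assumptions, not values from the patent:

```python
import numpy as np

def _unit(v):
    n = np.linalg.norm(v)
    return v / n if n > 1e-9 else np.zeros_like(v)

def update_virtual_com(virtual_com, actor_com, virtual_eff, actor_eff,
                       constrained_mask, velocity=None, mass=70.0,
                       k_spring=200.0, mu=0.8, dt=1.0 / 60.0,
                       g=(0.0, -9.8, 0.0)):
    """New virtual COM from summed spring, friction, and gravity forces.

    virtual_eff / actor_eff: (N, 3) virtual and real-world end-effector positions.
    constrained_mask: booleans marking end-effectors in contact with a surface.
    mass, k_spring, mu, and dt are illustrative constants only.
    """
    virtual_com = np.asarray(virtual_com, dtype=float)
    actor_com = np.asarray(actor_com, dtype=float)
    velocity = np.zeros(3) if velocity is None else np.asarray(velocity, dtype=float)
    g = np.asarray(g, dtype=float)

    total_force = mass * g                               # gravity on the COM (520)
    gravity_share = -mass * g / max(len(virtual_eff), 1) # distribute gravity compensation
    com_shift = actor_com - virtual_com                  # desired COM movement

    for i in range(len(virtual_eff)):
        v_eff = np.asarray(virtual_eff[i], dtype=float)
        a_eff = np.asarray(actor_eff[i], dtype=float)
        # spring force driven by the virtual-vs-actor limb extension difference (500)
        stretch = np.linalg.norm(virtual_com - v_eff) - np.linalg.norm(actor_com - a_eff)
        total_force += -k_spring * stretch * _unit(virtual_com - v_eff) + gravity_share
        if constrained_mask[i]:
            # friction on a constrained end-effector opposes the COM shift (510)
            total_force += -mu * np.linalg.norm(gravity_share) * _unit(com_shift)

    # integrate the net force to obtain the new COM position (530)
    velocity = velocity + (total_force / mass) * dt
    return virtual_com + velocity * dt, velocity
```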
  • FIG. 6 is a flowchart that illustrates a method for calculating a final pose that avoids penetrated geometries, according to an embodiment. More specifically, FIG. 6 illustrates a series of steps that can be executed by a device to calculate a final virtual pose based at least in part on an intermediate virtual pose, a set of interactive contact points, and a new virtual COM. The virtual pose is calculated so as to ensure that no virtual end-effector point penetrates any geometry of any virtual feature. In some embodiments, each step can be performed by any combination of hardware and/or software, such as one or more computerized devices. Such a device will be discussed for purposes of explanation below.
  • a computerized device can define a virtual character to simulate a real-world actor's body and movement using a center-of-mass and spring model similar to the model described in connection with FIG. 5 above.
  • the virtual character pose can be defined by a virtual center-of-mass point and a set of virtual end-effectors that correspond to a set of end-effectors and a spatial center-of-mass point associated with a real-world actor.
  • a computerized device can combine an intermediate virtual pose with a next virtual center-of-mass ("COM") to calculate a new virtual pose for a virtual character, 600.
  • the device can receive or have stored in a memory a set of virtual end-effector positions that define an intermediate virtual pose.
  • the intermediate virtual pose can be defined based at least in part on a process similar to the intermediate virtual pose calculation method described in connection with FIG. 4 above.
  • the next virtual COM can be a point in virtual space defined by one or more coordinates, such as spatial coordinates in the form (x, y, z), (r, θ, φ), etc.
  • the device can receive or have stored in memory a next virtual COM determined by, for example, a method similar to the virtual COM calculation method described in connection with FIG. 5 above.
  • the device can utilize a standard inverse kinematics approach and couple it with an optimization process to calculate the new virtual pose based on the next virtual COM and the intermediate virtual pose.
  • the new virtual pose can be defined at least in part by a set of new virtual end-effector positions and the new virtual COM.
  • the calculation can be bounded, constrained, or otherwise influenced by a set of interactive contact points associated with the virtual character.
  • the device can check the new virtual pose for any penetrated geometries and/or collisions, 610. More specifically, in some embodiments, the device can determine whether the position of any virtual end-effector defined by the new pose passes through the surface or exterior of a virtual feature of the virtual world in which the character is rendered. For example, in some embodiments, the device can cycle through each virtual end-effector position defined by the new virtual pose and compare that position with a set of contact constraints for one or more virtual features. By virtue of these comparisons, the device can determine if one or more "collisions" occurs, i.e., whether any virtual character contact point is currently defined such that it passes "through" the surface of a virtual feature, such as a solid object.
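  • For illustration, a collision check along the lines of step 610 could approximate each virtual feature as an axis-aligned box (a deliberate simplification; a full system would test against the feature's interactive contact points), using the position/dimensions fields sketched earlier:

```python
import numpy as np

def detect_penetrations(pose_eff, virtual_features, eps=1e-3):
    """Collision check in the spirit of step 610.

    pose_eff: (N, 3) end-effector positions of the candidate new pose.
    Each feature is treated as an axis-aligned box built from its assumed
    position and dimensions fields. Each returned (index, normal, point)
    triple describes one penetration and feeds the inequality constraints
    added in the re-solve step below.
    """
    collisions = []
    for i, eff in enumerate(np.asarray(pose_eff, dtype=float)):
        for feature in virtual_features:
            lo = np.asarray(feature.position, dtype=float)
            hi = lo + np.asarray(feature.dimensions, dtype=float)
            if np.all(eff > lo - eps) and np.all(eff < hi + eps):
                # end-effector lies inside the feature: record an upward-facing
                # surface point so the re-solve can push it back out
                normal = np.array([0.0, 1.0, 0.0])
                surface_point = np.array([eff[0], hi[1], eff[2]])
                collisions.append((i, normal, surface_point))
    return collisions
```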
  • the device can include one or more inequality constraints for each collision/penetration point and re-calculate the new pose, 620. More specifically, in some embodiments the device can receive or have stored in a memory a set of inequality constraints for each virtual feature in the current virtual world. In some embodiments, the device can cycle through each collision detected in step 610 above and insert an inequality constraint associated with the collision point into the new pose calculation discussed in connection with step 600 above. By so doing, the device can modify the initially-calculated new pose to ensure that it conforms to the limitations and bounds of the virtual world, particularly with respect to the world's virtual features.
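  • The re-solve with inequality constraints (620) can be sketched with a generic constrained optimizer; here scipy's minimize stands in for the inverse-kinematics-plus-optimization solve described above, and the objective is our own simplified stand-in that works directly on end-effector positions:

```python
import numpy as np
from scipy.optimize import minimize

def resolve_final_pose(intermediate_eff, new_com, collisions):
    """Re-solve the pose with one inequality constraint per collision (620).

    intermediate_eff: (N, 3) intermediate virtual end-effector positions.
    new_com: (3,) simulated center of mass from FIG. 5.
    collisions: (index, normal, point) triples from the collision check above.
    A real system would optimize in joint space through inverse kinematics;
    this sketch optimizes the end-effector positions directly.
    """
    target = np.asarray(intermediate_eff, dtype=float)
    com = np.asarray(new_com, dtype=float)
    n_eff = target.shape[0]

    def objective(x):
        eff = x.reshape(n_eff, 3)
        # stay close to the intermediate pose and keep the mean point near the new COM
        return np.sum((eff - target) ** 2) + np.sum((eff.mean(axis=0) - com) ** 2)

    constraints = []
    for idx, normal, point in collisions:
        constraints.append({
            "type": "ineq",  # scipy convention: fun(x) >= 0 must hold at the solution
            "fun": (lambda x, n=np.asarray(normal, dtype=float),
                    p=np.asarray(point, dtype=float), i=idx:
                    float(np.dot(x.reshape(n_eff, 3)[i] - p, n))),
        })

    result = minimize(objective, target.ravel(), constraints=constraints)
    return result.x.reshape(n_eff, 3)
```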
  • the device can send the new and now final pose to an output device for display, 630. More specifically, in some embodiments the device can send the new virtual center-of-mass and virtual end-effector positions of the final pose to an output device for display. For example, upon completion of the above steps, the device can send the final pose information to a screen for display to a user, such as a video game user. In some embodiments, the device can send the final pose information to one or more hardware and/or software modules configured to receive the final pose information and perform further processing thereon. For example, the device can send the final pose information to a software module associated with a video game capable of using the final pose information to render a virtual character within an interactive video game, such as a sports game or adventure game.
  • a module is intended to mean a single module or a combination of modules.
  • Some embodiments described herein relate to a computer storage product with a computer- or processor-readable medium (also can be referred to as a processor-readable medium) having instructions or computer code thereon for performing various computer-implemented operations. The media and computer code may also be referred to simply as code.
  • Examples of computer-readable media include, but are not limited to: magnetic storage media such as hard disks, floppy disks, and magnetic tape; optical storage media such as Compact Disc/Digital Video Discs (CD/DVDs), Compact Disc-Read Only Memories (CD-ROMs), and holographic devices; magneto-optical storage media such as optical disks; carrier wave signal processing modules; and hardware devices that are specially configured to store and execute program code, such as general-purpose microprocessors, microcontrollers, Application-Specific Integrated Circuits (ASICs), Programmable Logic Devices (PLDs), and Read-Only Memory (ROM) and Random-Access Memory (RAM) devices.
  • Examples of computer code include, but are not limited to, micro-code or microinstructions, machine instructions, such as produced by a compiler, code used to produce a web service, and files containing higher-level instructions that are executed by a computer using an interpreter.
  • embodiments may be implemented using Java, C++, or other programming languages (e.g., object-oriented programming languages) and development tools.
  • Additional examples of computer code include, but are not limited to, control signals, encrypted code, and compressed code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A processor-readable medium stores code representing instructions to cause a processor to define a virtual feature. The virtual feature can be associated with at least one engaging condition. The code further represents instructions to cause the processor to receive an end-effector coordinate associated with an actor and to calculate an actor intention based at least in part on a comparison between the at least one engaging condition and the end-effector coordinate.
EP10738940A 2009-01-21 2010-01-21 Interface de contrôle d'animation de personnage utilisant une capture de mouvement Withdrawn EP2389664A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US46115409P 2009-01-21 2009-01-21
PCT/US2010/021587 WO2010090856A1 (fr) 2009-01-21 2010-01-21 Interface de contrôle d'animation de personnage utilisant une capture de mouvement

Publications (1)

Publication Number Publication Date
EP2389664A1 (fr) 2011-11-30

Family

ID=42542352

Family Applications (1)

Application Number Title Priority Date Filing Date
EP10738940A Withdrawn EP2389664A1 (fr) 2009-01-21 2010-01-21 Interface de contrôle d'animation de personnage utilisant une capture de mouvement

Country Status (3)

Country Link
EP (1) EP2389664A1 (fr)
CN (1) CN102341767A (fr)
WO (1) WO2010090856A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3274912B1 (fr) 2015-03-26 2022-05-11 Biomet Manufacturing, LLC Système de planification et d'exécution d'interventions d'arthroplastie à l'aide de données de capture de mouvement
CN105678211A (zh) * 2015-12-03 2016-06-15 广西理工职业技术学院 一种人体动态特征的智能识别系统
WO2018082692A1 (fr) * 2016-11-07 2018-05-11 Changchun Ruixinboguan Technology Development Co., Ltd. Systèmes et procédés d'interaction avec une application
CN114155324B (zh) * 2021-12-02 2023-07-25 北京字跳网络技术有限公司 虚拟角色的驱动方法、装置、电子设备及可读存储介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1168057C (zh) * 1996-08-14 2004-09-22 挪拉赫梅特·挪利斯拉莫维奇·拉都包夫 追踪并显示使用者在空间的位置与取向的方法,向使用者展示虚拟环境的方法以及实现这些方法的系统
US6088042A (en) * 1997-03-31 2000-07-11 Katrix, Inc. Interactive motion data animation system
US20020140633A1 (en) * 2000-02-03 2002-10-03 Canesta, Inc. Method and system to present immersion virtual simulations using three-dimensional measurement
US6522332B1 (en) * 2000-07-26 2003-02-18 Kaydara, Inc. Generating action data for the animation of characters
US20040104935A1 (en) * 2001-01-26 2004-06-03 Todd Williamson Virtual reality immersion system
AU2002303082A1 (en) * 2001-01-26 2002-09-12 Zaxel Systems, Inc. Real-time virtual viewpoint in simulated reality environment
US7403202B1 (en) * 2005-07-12 2008-07-22 Electronic Arts, Inc. Computer animation of simulated characters using combinations of motion-capture data and external force modelling or other physics models

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2010090856A1 *

Also Published As

Publication number Publication date
CN102341767A (zh) 2012-02-01
WO2010090856A1 (fr) 2010-08-12

Similar Documents

Publication Publication Date Title
CN102129551B (zh) 基于关节跳过的姿势检测
US11948376B2 (en) Method, system, and device of generating a reduced-size volumetric dataset
EP2969079B1 (fr) Analyse de signaux pour la détection et analyse de répétitions
EP2969080B1 (fr) Vecteur d'état de centre de masse pour analyser un mouvement d'utilisateur dans des images tridimensionnelles (3d)
US20140267611A1 (en) Runtime engine for analyzing user motion in 3d images
US10825197B2 (en) Three dimensional position estimation mechanism
CN102184009A (zh) 跟踪系统中的手位置后处理精炼
US20110175918A1 (en) Character animation control interface using motion capure
US11164321B2 (en) Motion tracking system and method thereof
CN102221883A (zh) 自然用户界面的主动校准
Park AR-Room: a rapid prototyping framework for augmented reality applications
KR20120041086A (ko) 아바타 생성을 위한 처리 장치 및 방법
WO2010090856A1 (fr) Interface de contrôle d'animation de personnage utilisant une capture de mouvement
US11721056B2 (en) Motion model refinement based on contact analysis and optimization
CN115515487A (zh) 基于使用多视图图像的3d人体姿势估计的基于视觉的康复训练系统
CN117581272A (zh) 用于体育分析中的队伍分类的方法和装置
EP3206765A1 (fr) Détermination de la répartition des poids au sol par imagerie
Kim et al. Human motion reconstruction from sparse 3D motion sensors using kernel CCA‐based regression
US20220362630A1 (en) Method, device, and non-transitory computer-readable recording medium for estimating information on golf swing
US20240046583A1 (en) Real-time photorealistic view rendering on augmented reality (ar) device
Richter et al. Human Climbing and Bouldering Motion Analysis: A Survey on Sensors, Motion Capture, Analysis Algorithms, Recent Advances and Applications.
US8933940B2 (en) Method and system for creating animation with contextual rigging
JP2023525185A (ja) 改良されたポーズ追跡を用いた運動学的インタラクションシステム
CN111353345B (zh) 提供训练反馈的方法、装置、系统、电子设备、存储介质
Hachaj et al. RMoCap: an R language package for processing and kinematic analyzing motion capture data

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20110819

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20150801