CN102341767A - Character animation control interface using motion capture - Google Patents

Character animation control interface using motion capture

Info

Publication number
CN102341767A
CN102341767A
Authority
CN
China
Prior art keywords
virtual
actor
end effector
pose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2010800100090A
Other languages
Chinese (zh)
Inventor
C. K. Liu
S. Ishigaki
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Georgia Tech Research Institute
Original Assignee
Georgia Tech Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Georgia Tech Research Institute
Publication of CN102341767A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G06V 40/23 Recognition of whole body movements, e.g. for sport training

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A processor-readable medium stores code representing instructions to cause a processor to define a virtual feature. The virtual feature can be associated with at least one engaging condition. The code further represents instructions to cause the processor to receive an end-effector coordinate associated with an actor and calculate an actor intention based at least in part on a comparison between the at least one engaging condition and the end-effector coordinate.

Description

Character animation control interface using motion capture
Cross-Reference to Related Applications
This application claims priority to U.S. Provisional Application No. 61/461,154, entitled "Character Animation Control Interface Using Motion Capture" and filed on January 21, 2009, which is hereby incorporated herein by reference.
Technical Field
The embodiments described herein relate generally to motion capture and computer animation, and more particularly to methods and apparatus for generating a virtual character based at least in part on the motion of a real-world actor.
Background
Video capture devices typically record the motion of an individual in the real world and use the collected information to simulate that individual's motion in a virtual environment. This technique can be used for a variety of purposes, many of which involve computer graphics and/or computer animation. For example, commercial entities commonly use known motion capture techniques to first record the motion of a well-known individual, such as an athlete, and then virtually reproduce it in a computer or video game. The generated virtual representation of real-world motion is thus familiar to the target market of the video game, which can improve the user's sense of realism in the game.
Because such data is acquired and stored at a first time and used for reproduction at a later time, it is commonly referred to as "offline data." Offline data typically contains more accurate measurements of the actor's motion, thereby allowing a playback system to depict that motion more accurately in the virtual world. However, the data is also limited to the specific actor motions and poses collected during the initial capture session, which constrains the system from reproducing any of the countless other poses it might be expected to depict.
In another approach, the actor's real-world position and motion are mapped into the virtual space in near real time, giving the actor finer control over the corresponding virtual character's motion and a theoretically unlimited number of possible virtual positions. Data collected in this manner is referred to as "online data," and the immediacy of its capture allows the actor to "interact" with elements of the virtual space or world. However, given the time constraints and processing requirements of these methods, online data is typically less accurate than its offline counterpart, particularly when the relevant virtual world differs substantially from the actor's concrete real-world surroundings.
Accordingly, a need exists for systems and apparatus that collect online motion capture data, discern the actor's intention from it, and accurately reproduce the actor's motion within the constraints of the virtual world.
Summary of the Invention
A processor-readable medium stores code representing instructions to cause a processor to define a virtual feature. The virtual feature can be associated with at least one engaging condition. The code further represents instructions to cause the processor to receive an end-effector coordinate associated with an actor and to calculate an actor intention based at least in part on a comparison between the at least one engaging condition and the end-effector coordinate.
Brief Description of the Drawings
Fig. 1 is a schematic illustration of a motion capture and pose calculator system, according to an embodiment.
Fig. 2 is a schematic block diagram illustrating a virtual pose calculator module, according to an embodiment.
Fig. 3 is a schematic block diagram illustrating an intention recognition module, according to an embodiment.
Fig. 4 is a flow chart illustrating a method for calculating an intermediate virtual pose associated with a real-world actor and a virtual character, according to an embodiment.
Fig. 5 is a flow chart illustrating a method for determining a new virtual character center of mass, according to an embodiment.
Fig. 6 is a flow chart illustrating a method for calculating a final pose that avoids penetration geometry, according to an embodiment.
Detailed Description
A virtual pose computation module can be configured to receive information associated with the spatial positions of end-effector markers coupled to a real-world actor, such as a person. In some embodiments, the module can map the real-world end-effector markers into a virtual world to reproduce a virtual character based on the actor. The module can be configured to minimize the difference between the actor's pose and motion and the pose and motion of the corresponding virtual character. In some embodiments, the module can be configured to enforce one or more constraints associated with the virtual world to ensure that the reproduced virtual character moves in a manner consistent with its virtual environment.
In some embodiments, the module can define one or more virtual features that exist in the virtual world. A virtual feature may be defined to include a position coordinate, dimensions, a set of contact constraints, and/or a surface type. In some embodiments, one or more example motions can be defined and associated with each virtual feature. In some embodiments, the virtual pose computation module can include one or more submodules configured to determine the real-world actor's intention with respect to one or more virtual features. This determination can be based, for example, on the positions of the end effectors coupled to the real-world actor and/or the set of engaging conditions associated with each virtual feature. In some embodiments, the module can include a submodule that determines whether the actor's current pose mimics one of the set of example motions associated with that virtual feature. This determination can be based, for example, on a measure of similarity between the positions of the real-world actor's end effectors and the positions of the virtual end effectors defined by the example motion.
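By way of illustration only, the virtual feature definition described above can be sketched as a small data structure. The Python below is a non-authoritative sketch: representing engaging conditions as spherical regions and contact constraints as (surface point, outward normal) pairs is an assumption made for clarity, not a detail of the disclosure.

    from dataclasses import dataclass, field
    import math

    @dataclass
    class ExampleMotion:
        """A predefined motion associated with a virtual feature (e.g., 'sit')."""
        name: str
        # Target position per virtual end effector, keyed by effector name.
        end_effector_positions: dict

    @dataclass
    class VirtualFeature:
        """A feature of the virtual world (chair, door, floor, ...)."""
        name: str
        position: tuple            # spatial coordinate (x, y, z)
        dimensions: tuple          # spatial extents
        surface_type: str          # e.g., "rigid" or "soft"
        # Engaging conditions: spherical regions that, when occupied by an
        # actor end effector, indicate an intention to interact.
        engaging_regions: list = field(default_factory=list)   # (center, radius)
        # Contact constraints: assumed here to be (point, normal) pairs
        # describing the feature's surfaces.
        contact_constraints: list = field(default_factory=list)
        example_motions: list = field(default_factory=list)

    def satisfies_engaging_condition(feature, effector_pos):
        """True if an end-effector coordinate occupies any engaging region."""
        return any(math.dist(effector_pos, center) <= radius
                   for center, radius in feature.engaging_regions)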
In some embodiments, the module can calculate an intermediate virtual pose for the virtual character based on the real-world actor's position and/or motion. For example, in some embodiments, the module can include one or more submodules configured to construct the intermediate virtual pose by cycling through each actor end effector and calculating an intermediate virtual end-effector position corresponding to that actor end effector.
For example, in some embodiments, if an actor end effector is unconstrained and/or the corresponding virtual character end effector is constrained, a submodule can set the value of the intermediate virtual pose end effector to the corresponding actor end-effector position. If the corresponding virtual end effector is unconstrained and the corresponding actor end effector is constrained, the submodule can assign the intermediate virtual pose end effector a value based on an interpolation between the corresponding example-motion end-effector position and the actor end-effector position. In some embodiments, each intermediate virtual end-effector position calculation can also be weighted and/or influenced by one or more additional factors or objectives, such as consistency with the virtual character's previous pose, similarity to the example motion, and consistency with the actor's overall motion.
The pose computation module can also be configured to calculate a next center of mass for the virtual character. For example, in some embodiments, the pose computation module can include a submodule that calculates the next virtual center of mass based at least in part on spring forces associated with at least one virtual end effector of the virtual character. In some embodiments, the calculation can be based at least in part on friction forces associated with one or more constrained virtual end effectors of the virtual character. In some embodiments, the calculation can be based at least in part on a simulated gravity applied to the virtual character.
The pose computation module can also be configured to combine the intermediate virtual pose and the new virtual center of mass ("COM") to determine a new virtual pose for the virtual character. For example, in some embodiments, the module can include one or more submodules configured to combine the virtual end-effector position values associated with the intermediate virtual pose with the new virtual COM to define the new pose. In some embodiments, if any end effector of the new virtual pose penetrates the surface of any virtual feature, a submodule can cycle through the set of interactive contact points associated with each virtual feature in contact with the new virtual pose. If the submodule detects any such penetration, in some embodiments it can insert an inequality constraint for each penetrating geometry into the original new-pose calculation, so as to calculate a modified new pose that satisfies the contact constraints of each virtual feature and thereby avoids any penetrating geometry.
In some embodiments, the pose computation module can send information associated with the new pose to another hardware- and/or software-based module, such as a video game software module. In some embodiments, the module can send this information to a display device, such as a screen, so that the virtual character is rendered according to the new pose. By performing the steps described above and outputting, over time, a virtual character rendered according to each successive newly calculated pose, the module can generate an accurate visual representation of the real-world actor's motion in the virtual space.
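Taken together, the steps described above form a per-frame pipeline. The following sketch shows one plausible ordering of the calls; the module interfaces are hypothetical stand-ins introduced for illustration, not an API from the disclosure.

    def animate_frame(capture, intent_module, pose_synth, simulator,
                      finalizer, display):
        """One capture-to-display pass; all interfaces are assumed."""
        actor_effectors = capture.read_marker_positions()    # real-world coords
        intent = intent_module.recognize(actor_effectors)    # engaged feature, mimicked motion
        intermediate = pose_synth.synthesize(actor_effectors, intent)
        new_com = simulator.next_center_of_mass(intermediate, actor_effectors)
        final_pose = finalizer.solve(intermediate, new_com,
                                     intent.contact_constraints)
        display.render(final_pose)                           # or send to a game module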
Fig. 1 is a schematic illustration of a motion capture and pose calculator system, according to an embodiment. More specifically, Fig. 1 illustrates an actor 100 wearing a plurality of markers 105. Based at least in part on the plurality of markers 105, a capture device 110 tracks the motion of the actor 100, and a pose calculator 120 maps it into the virtual scene. The capture device 110 is operatively coupled to the pose calculator 120. In some embodiments, the pose calculator 120 can be operatively coupled to an integrated and/or external video display (not shown).
The actor 100 can be any real-world object, including, for example, a person. In some embodiments, the actor 100 can be in motion. In some embodiments, the actor 100 can wear special clothing sensitive to the capture device 110 and/or be equipped with one or more markers sensitive to the capture device 110, such as the plurality of markers 105. In some embodiments, at least some of the markers 105 are associated with one or more actor end effectors. In some embodiments, the actor 100 can be an animal, an exercise machine, a vehicle, or a robot.
The plurality of markers 105 can be any plurality of marking devices configured to allow a capture device, such as the capture device 110, to track motion. In some embodiments, the plurality of markers 105 can include one or more reflective markers. In some embodiments, at least a portion of the plurality of markers 105 can be coupled or adhered to one or more articles of clothing, such as pants, a shirt, tights, and/or a cap.
The capture device 110 can be any combination of hardware and/or software capable of capturing video. In some embodiments, the capture device 110 can detect the spatial positions of one or more markers, such as the plurality of markers 105. In some embodiments, the capture device 110 can be a dedicated video camera, or a camera coupled to or integrated into a consumer electronics device such as a personal computer, a cellular telephone, or another device. In some embodiments, the capture device 110 can be a hardware-based module (e.g., a processor, an application-specific integrated circuit (ASIC), or a field-programmable gate array (FPGA)). In some embodiments, the capture device 110 can be a software-based module residing on a hardware device (e.g., a processor) or in a memory (e.g., RAM, ROM, a hard disk drive, an optical drive, or other portable media) operatively coupled to a processor.
In some embodiments, the capture device 110 can be physically coupled to a stabilizing device, such as a tripod or camera stand, as shown in Fig. 1. In some embodiments, the capture device 110 can be held and/or stabilized by a camera operator (not shown). In some embodiments, the capture device 110 can be in motion. In some embodiments, the capture device 110 can be physically coupled to a vehicle. In some embodiments, the capture device 110 can be physically and/or operatively coupled to the pose calculator 120. For example, the capture device 110 can be coupled to the pose calculator 120 via a wire and/or cable (as shown in Fig. 1). In some embodiments, the capture device 110 can be wirelessly coupled to the pose calculator 120 via one or more wireless protocols, such as Bluetooth, ultra-wideband (UWB), Wireless USB, microwave, WiFi, WiMax, or one or more cellular network protocols (such as GSM, CDMA, LTE, etc.).
The pose calculator 120 can be any combination of hardware and/or software capable of calculating a virtual pose and/or position associated with the actor 100 based at least in part on information received from the capture device 110. In some embodiments, the pose calculator 120 can be a hardware computing device including a processor, a memory, and firmware and/or software configured to cause the processor to calculate the actor's pose and/or position. In some embodiments, the pose calculator 120 can be any other hardware-based module, such as an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA). The pose calculator 120 can alternatively be a software-based module residing on a hardware device (e.g., a processor) or in a memory (e.g., RAM, ROM, a hard disk drive, an optical drive, or other portable media) operatively coupled to a processor.
Fig. 2 is a schematic block diagram illustrating a virtual pose calculator, according to an embodiment. More specifically, Fig. 2 illustrates a virtual pose calculator 200 that includes a first memory 210, an input/output (I/O) module 220, a processor 230, and a second memory 240, the second memory 240 including an intention recognition module 242, an intermediate pose synthesis module 244, a simulation module 246, and a final pose synthesis module 248. The intention recognition module 242 can receive motion capture information from the I/O module 220 and send intention, motion capture, and/or example motion information to the intermediate pose synthesis module 244. The intermediate pose synthesis module 244 can receive intention, motion capture, and/or example motion information from the intention recognition module 242 and send intermediate pose information to the simulation module 246. The simulation module 246 can receive the intermediate pose information from the intermediate pose synthesis module 244 and send new center of mass (COM) information and/or contact constraint information associated with a virtual feature to the final pose synthesis module 248. The final pose synthesis module 248 can receive contact constraint information from the intention recognition module 242 and/or the simulation module 246. In some embodiments, the final pose synthesis module can receive new center of mass information and/or intermediate pose information from the simulation module 246. In some embodiments, the final pose synthesis module can receive intermediate pose information from the intermediate pose synthesis module 244.
In some embodiments, the final pose synthesis module 248 can send final pose information to the I/O module 220. In some embodiments, the I/O module 220 can be configured to send at least part of the final pose information to an output display, such as a monitor or screen (not shown). In some embodiments, the I/O module 220 can send at least part of the final pose information to one or more hardware and/or software modules, such as a video game module or another computerized application module.
In some embodiments, the first memory 210, the I/O module 220, the processor 230, and the second memory 240 can be connected, for example, via one or more integrated circuits. Although shown in a single location and/or device, in some embodiments any of the two memory modules, the I/O module, and the processor 230 can be connected via a network, such as a local area network, a wide area network, or the Internet.
The first memory 210 and the second memory 240 can be memory of any type, such as read-only memory (ROM) or random-access memory (RAM). In some embodiments, the first memory 210 and/or the second memory 240 can be, for example, any type of computer-readable medium, such as a hard disk drive, a compact disc read-only memory (CD-ROM), a digital video disc (DVD), a Blu-ray disc, a flash memory card, or another portable digital memory type. The first memory 210 can be configured to send signals to, and receive signals from, the second memory 240, the I/O module 220, and the processor 230. The second memory 240 can be configured to send signals to, and receive signals from, the first memory 210, the I/O module 220, and the processor 230.
The I/O module 220 can be any combination of hardware and/or software configured to receive information at, and send information from, the virtual pose calculator 200. In some embodiments, the I/O module 220 can receive information including video and/or motion capture information from a capture device (such as the capture device discussed above in connection with Fig. 1). In some embodiments, the I/O module 220 can send information to another hardware and/or software module or device, such as an output display, another computerized device, a video game console, or a game module.
The processor 230 can be any processor or microprocessor configured to send and receive information, send and receive one or more electrical signals, and process and/or generate instructions. In some embodiments, the processor 230 can include firmware and/or one or more pipelines, buses, etc. In some embodiments, the processor can be, for example, a digital signal processor (DSP), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), etc. In some embodiments, the processor can be an embedded processor, and can be and/or include one or more coprocessors.
The intention recognition module 242 can be any combination of hardware and/or software capable of receiving motion capture data and determining an actor intention based on it. As shown in Fig. 2, the intention recognition module 242 can be a software module residing in the second memory 240. In some embodiments, the intention recognition module 242 can include information associated with one or more virtual features of a virtual world, space, boundary, field, or setting (not shown). For example, in some embodiments, the intention recognition module 242 can include information associated with one or more virtual features such as furniture, equipment, projectiles, other virtual characters, structural components (such as floors, walls, and ceilings), etc.
In some embodiments, the intention recognition module 242 can include a set of engaging conditions, contact constraints, and/or one or more example motions associated with each virtual feature of the virtual world. In some embodiments, each set of engaging conditions can, when satisfied, indicate that the actor intends to interact with the associated virtual feature. In some embodiments, each engaging condition set can include a set of spatial coordinates associated with a virtual feature that, when occupied by the actor, indicate the actor's intention to interact with that virtual feature. In these embodiments, the intention recognition module 242 can determine whether the actor intends to interact with any defined virtual feature based on the engaging condition set associated with each defined virtual feature.
In some embodiments, the intention recognition module 242 can determine whether the actor is mimicking an example motion associated with a virtual feature. For example, if the intention recognition module 242 has determined that the actor satisfies one or more engaging conditions associated with that virtual feature, the module can compare one or more positions and/or velocities associated with the actor to determine whether the actor's current pose closely matches a stored example motion associated with that virtual feature. In some embodiments, the intention recognition module 242 can send information associated with this determination to the intermediate pose synthesis module 244. In some embodiments, the intention recognition module 242 can additionally send one or more of the following to the pose synthesis module 244: motion capture data associated with the actor, the engaging condition set associated with the virtual feature, and the definitions of the example motions associated with the virtual feature. In some embodiments, the intention recognition module 242 can send the contact constraint set associated with the virtual feature to the final pose synthesis module 248.
The intermediate pose synthesis module 244 can be any combination of hardware and/or software that synthesizes an intermediate virtual pose based at least in part on the actor's motion capture information and the example motions associated with a virtual feature. As shown in Fig. 2, the pose synthesis module 244 can be a software module residing in the second memory 240. In some embodiments, the pose synthesis module 244 can use the motion capture data and the example motion information associated with a virtual feature to calculate a synthesized pose that depicts the real-world actor's pose in the virtual world.
For example, in some embodiments, the pose synthesis module 244 can receive information associated with one or more motion markers that define the real-world actor's position, together with the definitions of the example motions associated with a virtual feature. In some embodiments, the motion marker information can indicate the spatial positions of one or more end effectors attached to or associated with the actor. In some embodiments, a defined example motion can indicate the spatial positions of one or more end points of an example motion performed on or with respect to the relevant virtual feature. In such embodiments, the pose synthesis module 244 can use the received information to intelligently and adaptively calculate a synthesized virtual pose for the actor that closely approximates the actor's real-world pose. In some embodiments, the pose synthesis module 244 can send at least part of the synthesized pose information to the simulation module 246 and/or the final pose synthesis module 248.
The simulation module 246 can be any combination of hardware and/or software capable of calculating a new simulated center of mass for the virtual character based on the intermediate virtual pose and the virtual character's current pose. As shown in Fig. 2, the simulation module 246 can be a software module residing in the second memory 240. In some embodiments, the simulation module 246 can use the intermediate pose information calculated by the pose synthesis module 244 and information defining the virtual character's current pose to calculate the virtual character's new center of mass. In some embodiments, the simulation module 246 can send at least part of the new center of mass information to the final pose synthesis module 248.
The final pose synthesis module 248 can be any combination of hardware and/or software that calculates a final virtual character pose based at least in part on the virtual character's current pose, the virtual character's new simulated center of mass, and the contact constraint set associated with the virtual feature currently engaged by the virtual character. In some embodiments, the final pose synthesis module 248 can receive any of the following from any of the intention recognition module 242, the intermediate pose synthesis module 244, and the simulation module 246: intention information, example motion information, virtual feature information, contact constraint information, intermediate pose information, and/or new center of mass information.
Fig. 3 is a schematic block diagram illustrating an intention recognition module, according to an embodiment. More specifically, Fig. 3 illustrates an intention recognition module 300 that includes a feature engagement module 310, a contact constraint module 320, and a motion mimicry module 330. In some embodiments, the feature engagement module 310 can send one or more signals to the contact constraint module 320. In some embodiments, the contact constraint module 320 can send one or more signals to the motion mimicry module 330.
In some embodiments, the intention recognition module 300 can include one or more hardware and/or software modules configured to receive signals including information from, and send such signals to, other modules. In some embodiments, the intention recognition module 300 can be a software module stored in the memory of a computerized device configured to process motion capture information. In some embodiments, the intention recognition module 300 can be a standalone hardware device operatively coupled to one or more other hardware devices so as to process motion capture information and/or calculate attributes of a virtual character.
In some embodiments, the intention recognition module 300 can determine whether the virtual character is attempting to interact with one or more virtual features and/or objects. For example, in some embodiments, the intention recognition module 300 can receive motion capture information based on the current position and/or motion of a real-world actor, such as a person, and use this information to determine whether the actor is attempting to interact with a virtual door, chair, or book. In some embodiments, if the intention recognition module 300 determines that the actor is attempting to interact with a given virtual feature, it can then compare the actor's current real-world pose with each of a predefined set of example motions associated with that virtual feature to determine whether the actor is currently mimicking any of them. In some embodiments, the intention recognition module 300 can then send the example motion information and the current actor real-world pose information to another module (such as the intermediate pose calculator module discussed above in connection with Fig. 2) so that an intermediate virtual character pose can be calculated based on the sent information.
The feature engagement module 310 can be any combination of hardware and/or software configured to receive current actor pose information and determine whether the actor is attempting to engage any particular virtual feature in the virtual world. More specifically, in some embodiments, the feature engagement module 310 can first receive information defining the actor's real-world pose and/or position. In some embodiments, a capture or other device operatively or physically coupled to the intention recognition module can detect, collect, and/or receive the information. In some embodiments, the pose and/or position information can include one or more spatial coordinates of one or more end effectors of the actor. For example, in some embodiments, the pose information can include a series of (x, y, z) or (r, θ, φ) coordinate sets, each associated with an actor end effector such as a marker or other physical end effector. In some embodiments, each end effector can be physically located at a point on the actor's body, such as an elbow, a hand, or another extremity.
In some embodiments, the feature engagement module 310 can include information associated with one or more virtual features in the virtual world. For example, in some embodiments, the virtual feature information can include the color, spatial position, spatial dimensions, mass, surface area, volume, rigidity, toughness, friction, surface type, and/or other attributes of a virtual feature. In some embodiments, the feature engagement module 310 can include an engaging condition set associated with each virtual feature. An engaging condition can include, for example, a set of spatial coordinates that, if occupied by the real-world actor (i.e., closely mapped by the actor's current end-effector positions), indicates that the actor is currently attempting to "engage" or interact with that virtual feature.
In some embodiments, the feature engagement module 310 can cycle through each virtual feature in the current virtual world and determine whether the actor is currently engaging that feature. For example, in some embodiments, the feature engagement module 310 can, for each virtual feature, compare the engaging conditions associated with that virtual feature with the current spatial positions of the actor's end effectors. If the actor's current pose satisfies the engaging conditions associated with a given virtual feature, the feature engagement module 310 can define an engagement indicator variable indicating that the actor is currently engaging that particular virtual feature. In some embodiments, the feature engagement module 310 can determine whether the actor's current position satisfies the engaging conditions of a given virtual feature based on whether the difference, or delta, between the actor's end-effector positions and the spatial position and extent of the virtual feature is below a predetermined threshold. If the feature engagement module 310 does set an indicator value indicating that the actor is currently engaging a particular virtual feature, it can send the engagement indicator, an identifier associated with the virtual feature, and the actor's end-effector position information to the contact constraint module 320. In some embodiments, the feature engagement module 310 can alternatively or additionally send the identifier associated with the particular virtual feature and the actor end-effector positions to the motion mimicry module 330. A sketch of this engagement loop follows.
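The sketch below reuses the satisfies_engaging_condition helper assumed earlier; the proximity thresholds are taken to live in each feature's engaging regions, which is an illustrative assumption.

    def find_engaged_feature(features, actor_effectors):
        """Cycle through the world's virtual features and test the engaging
        conditions against the actor's current end-effector positions.

        actor_effectors maps an effector name to its (x, y, z) position.
        Returns the engagement indicator, a feature identifier (or None),
        and the effector positions forwarded to the contact constraint module.
        """
        for feature in features:
            if any(satisfies_engaging_condition(feature, pos)
                   for pos in actor_effectors.values()):
                return True, feature.name, actor_effectors
        return False, None, actor_effectors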
In some embodiments, the contact constraint module 320 can receive a virtual feature identifier, an actor end-effector position set, and an engagement indicator from the feature engagement module 310. In some embodiments, the engagement indicator can contain a binary value, such as "yes", "no", 1, or 0, or information identifying the virtual feature the actor is currently engaging. In some embodiments, the contact constraint module 320 can calculate the set of contact constraints, or interactive contact points, associated with the identified virtual feature. A contact constraint can be, for example, a set of points defining the position, dimensions, edges, and/or surfaces of the associated virtual feature. In some embodiments, the contact constraint module 320 can then send at least one of the calculated contact constraints, the virtual feature identifier, and the actor end-effector spatial coordinates to the motion mimicry module 330.
The motion mimicry module 330 can be any combination of hardware and/or software configured to determine whether a real-world actor, such as a human actor, is currently mimicking a predefined example motion associated with a virtual feature. As shown in Fig. 3, the motion mimicry module 330 can be a software module storing instructions configured to cause a processor to perform one or more of the steps described above.
In some embodiments, the motion mimicry module 330 can receive actor pose information, such as actor end-effector position information, a virtual feature identifier, and/or an engagement indicator from one or more of the feature engagement module 310 and the contact constraint module 320. In some embodiments, the motion mimicry module 330 can receive any of the above from another hardware and/or software module or from another hardware or computerized device.
In some embodiments, the motion mimicry module 330 can determine whether the actor is currently mimicking any of the predefined set of example motions associated with the virtual feature the actor is currently engaging. For example, in some embodiments, the module can cycle through each example motion associated with the engaged virtual feature and, for each, cycle through each actor end effector to determine whether that actor end effector's spatial position matches (or matches within an acceptable error tolerance) the spatial position of the corresponding virtual end effector defined by that example motion. In some embodiments, the module can additionally compare the velocity of that actor end effector with the velocity of the corresponding virtual end effector defined by the example motion. In some embodiments, the module can be configured to consider only actor end effectors that are currently "unconstrained," i.e., not currently in direct contact with another physical mass or object. For example, in such an embodiment, an actor standing upright on the floor with hands at the sides can be considered to have constrained end effectors at the feet (the feet are currently in contact with the floor) but unconstrained end effectors at the hands (the hands are currently suspended in the air, subject only to gravity).
In some embodiments, the above comparison process can be performed in a reduced or lower-dimensional space so as to simplify the necessary calculations. In some embodiments, the motion mimicry module 330 can employ principal component analysis (PCA) as part of this process.
In some embodiments, the comparison can be performed over the complete example motion and actor end-effector sets in aggregate. In other words, a running motion error, or total variance, can be maintained across the comparison of each end effector of a given example motion. Once all end effectors of the example motion currently under consideration have been compared, in some embodiments the motion mimicry module 330 can compare the total error for that example motion against a predetermined threshold. For example, if the total error for the current example motion does not exceed the predetermined threshold, the mimicry module 330 can consider the actor's current real-world pose and the example motion sufficiently similar, and conclude that the actor is currently mimicking that example motion associated with the engaged virtual feature.
In some embodiments, the above comparison between the actor end-effector coordinate set and the predefined virtual end-effector coordinate set can include a comparison of only a subset of the two end-effector sets. For example, in some embodiments, the comparison can be performed only on a subset of core or dominant end effectors that sufficiently indicate the actor's overall intention and/or general pose. A sketch of this aggregate-error comparison follows.
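In the sketch below, the tolerance value and the restriction to a caller-supplied set of unconstrained effectors are illustrative assumptions; the data shapes follow the structure assumed earlier.

    import math

    def mimics_example_motion(actor_effectors, motion, constrained, tolerance=0.25):
        """Sum position error over the actor's unconstrained end effectors;
        a total below the (assumed) tolerance counts as mimicry."""
        total_error = 0.0
        for name, target in motion.end_effector_positions.items():
            if name in constrained:          # compare only "free" effectors
                continue
            total_error += math.dist(actor_effectors[name], target)
        return total_error <= tolerance

    def mimicked_motion(actor_effectors, feature, constrained):
        """Cycle through the engaged feature's example motions and return
        the first one the actor is mimicking, or None."""
        for motion in feature.example_motions:
            if mimics_example_motion(actor_effectors, motion, constrained):
                return motion
        return None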
In some embodiments, once the motion mimicry module 330 has completed the above comparison, it can send one or more signals including at least one of the following to another module within the intention recognition module 300 and/or to an external hardware and/or software module: the engagement indicator, an example motion indicator or identifier, the definition of the mimicked example motion (if applicable), and/or the actor end-effector coordinates.
Fig. 4 is a flow chart illustrating a method for calculating an intermediate virtual pose associated with a virtual character, according to an embodiment. More specifically, Fig. 4 illustrates a series of steps that can be performed by a device to calculate an intermediate virtual pose based on the example motion associated with a virtual feature and the current real-world actor pose. When performed, these steps can calculate a position in virtual space (i.e., an intermediate virtual end effector) corresponding to each of a series of end effectors associated with the current real-world actor position detected by a motion capture system. In some embodiments, each step can be performed by any combination of hardware and/or software, such as one or more computerized devices. For the discussion that follows, such a device is assumed.
As shown in Fig. 4, step 400 indicates that steps 410 through 430 can be performed for each member of the actor end-effector set. Accordingly, the discussion of each of steps 410-430 below describes performing that step for a single actor end effector. It should be understood, however, that in some embodiments a computerized device can perform steps 410-430 at least once for each actor end effector from the actor end-effector set associated with the real-world actor, thereby calculating a complete intermediate virtual pose.
In some embodiments, an actor end effector can be one of a set of actor body end points or reflective markers located in real space, where each position is represented by one or more spatial coordinates. For example, in some embodiments, the position of each actor end effector can be represented by a coordinate set of the form (x, y, z) or (r, θ, φ). In some embodiments, each actor end-effector position can be determined by a video capture device and the computerized hardware and software devices coupled to it.
At step 410, the computerized device can determine whether the actor end effector is constrained. In some embodiments, the computerized device can receive the actor end-effector position from an I/O module or intention module similar to the I/O and intention modules discussed above in connection with Fig. 2. In some embodiments, the device can determine whether the position of the end effector indicates that it is currently in contact with an external surface. For example, in some embodiments, an end effector can be located on the actor's foot, and the computerized device can determine that the end effector is currently in contact with a surface such as the floor.
The computerized device can next execute one of two instructions based on the constrained state of the actor end effector determined above. If the actor end effector is currently unconstrained, then at step 415 the device can set the position of the corresponding intermediate pose end effector to the position of the current actor end effector. For example, in some embodiments, if the actor end effector is determined at step 410 to be unconstrained and has a position defined by coordinates (x1, y1, z1), the device can assign the value (x1, y1, z1) to the corresponding end effector (EE) of the intermediate virtual pose. At that point, the device can iterate and/or continue to consider the next actor end effector, returning to step 410 above. Alternatively, if the actor end effector is determined at step 410 to be currently constrained, the device can proceed to step 420.
At step 420, the computerized device can determine whether the virtual character end effector corresponding to the actor end effector is constrained. In some embodiments, the device can compare the position of the virtual character end effector corresponding to the actor end effector with the positions of one or more virtual features to determine whether the virtual end effector is located sufficiently close to a constraining feature. If the device determines that the virtual end effector is constrained, it can proceed to step 415 described above and continue processing based on the current actor end effector and corresponding virtual end effector. If the device determines that the virtual end effector is currently unconstrained, it can proceed to step 430 below.
At step 430, the computerized device can calculate the position of the intermediate virtual end effector corresponding to the actor end effector. This calculation can be based, for example, on an interpolation between the actor end-effector position and the corresponding virtual character end-effector position. For example, in some embodiments, the interpolation can include an averaging calculation based on the positions of the actor end effector and the corresponding example motion end effector. This interpolation can be advantageous because it effects a trade-off between the actor's real-world motion and the particular example motion of the virtual world.
In some embodiments, the calculation can include and/or be influenced by one or more weighting factors. In some embodiments, one or more weighting factors can be configured to maintain similarity between the calculated intermediate virtual pose and the example pose associated with the engaged virtual feature. In some embodiments, at least one weighting factor can be configured to minimize the difference between the calculated intermediate virtual pose and the virtual character's previous pose. In some embodiments, at least one weighting factor can be configured to maintain and/or follow the actor's motion. After calculating the intermediate virtual end-effector position, the device can iterate and/or continue to consider the next actor end effector, as described above.
In some embodiments, the computerized device can perform the above instructions on each of at least part of the actor end-effector set, so as to calculate, as a whole, an intermediate virtual pose composed of each virtual end-effector value. In some embodiments, the actor end-effector set can be a subset of all possible actor end effectors associated with the real-world actor. In some embodiments, the actor end-effector set can consist of a minimal number of end effectors, such as five. In such an embodiment, the minimal number of actor end effectors can be located at core positions on the actor's body, so as to maximize the degree to which their motion represents the actor as a whole. A sketch of this per-end-effector logic follows.
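In the sketch below, the single blend weight stands in for the several weighting factors described above and is an illustrative assumption.

    def intermediate_pose(actor_effectors, example_targets,
                          constrained_actor, constrained_virtual, blend=0.5):
        """Fig. 4 sketch: copy or interpolate each end-effector position.

        constrained_actor / constrained_virtual are sets of effector names
        currently in contact with a surface (real and virtual, respectively).
        """
        pose = {}
        for name, actor_pos in actor_effectors.items():        # step 400: loop
            actor_tied = name in constrained_actor             # step 410
            virtual_tied = name in constrained_virtual         # step 420
            if not actor_tied or virtual_tied:
                pose[name] = actor_pos                         # step 415: copy
            else:
                target = example_targets[name]                 # step 430: interpolate
                pose[name] = tuple(blend * a + (1.0 - blend) * t
                                   for a, t in zip(actor_pos, target))
        return pose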
Fig. 5 is a flow chart illustrating a method for determining a new virtual character center of mass (COM), according to an embodiment. More specifically, Fig. 5 illustrates a series of steps that can be performed by a device to calculate a new virtual character COM based at least in part on the calculated intermediate virtual pose, the real-world actor end-effector positions, and the contact types and one or more surface types associated with one or more constrained virtual character end effectors. In some embodiments, a computerized device or module can receive the above information from, for example, hardware and/or software modules similar to those that calculate the intermediate virtual pose using the method discussed above in connection with Fig. 4. In some embodiments, each step of the process described in Fig. 5 can be performed by any combination of hardware and/or software, such as one or more computerized devices. For the discussion that follows, such a device is assumed.
For the following discussion of Fig. 5, a possible virtual character simulation method is now defined. In some embodiments, a computerized device can define the virtual character to simulate the real-world actor's body and motion using a spring model. For example, in some embodiments, the virtual character can be defined by a center-of-mass point and four damped "springs," each approximating one of a person's limbs. In some embodiments, the center-of-mass point can be a point in virtual space defined by one or more coordinates, such as spatial coordinates of the form (x, y, z) or (r, θ, φ). The center-of-mass point can be considered "attached" to each of the four damped "springs" individually, and can be referred to simply as the "center of mass" or "COM." In some embodiments, the virtual character so defined can be supported against gravity by the sum of the spring forces applied by each virtual end effector of the virtual character, the sum of the friction forces acting on the virtual character's constrained end effectors, and the simulated gravity acting on the virtual COM.
At step 500, the computerized device can calculate the spring force applied by each virtual character end effector. More specifically, the device can calculate the spring force applied by each virtual end effector based at least in part on the relative distance between the COM and that end effector's spatial position in the virtual world. For example, in some embodiments, the device can calculate the spring force applied by a given virtual end effector by computing the difference between the current distance from the virtual COM to that end effector and the current distance from the real-world actor's COM to the corresponding real-world end effector. This difference can indicate the amount of virtual space through which the virtual character's simulated limb must move relative to the virtual COM to properly simulate the motion of the real-world actor's end effector. In some embodiments, the spring force calculation can also be based at least in part on one or more predefined spring constants. In some embodiments, the spring force calculation can include a gravity factor configured to compensate for the influence of simulated gravity on each constrained end effector of the virtual character. In some embodiments, the gravity factor can be configured to distribute gravity evenly across all of the virtual character's end effectors.
At step 510, the device can calculate the friction force acting on each constrained virtual end effector. More specifically, the device can calculate the friction force applied to each virtual end effector currently in contact with an external virtual feature or surface. For example, in some embodiments, the device can cycle through each virtual end effector and determine whether that end effector is constrained, for example by comparing the position of that end effector with the spatial coordinates of one or more virtual features of the virtual world. If a given end effector is constrained, the device can calculate the distance between the current virtual COM and the current actor real-world COM to determine the magnitude and/or direction of the movement (or "displacement") necessary for the virtual COM to "move" to a position matching the real-world COM. In some embodiments, the device can use this distance, together with the virtual end-effector type and/or the virtual feature surface type, to calculate the friction force that virtual end effector currently experiences. As noted above, this step can be performed for each virtual end effector, so as to calculate the friction force for each constrained virtual end effector.
At step 520, the device can calculate the gravity force mg currently applied to the virtual COM. More specifically, the device can multiply a predetermined mass value m by a predefined gravity constant g associated with the current virtual world. For example, in some embodiments, the gravity constant g can be given the value 9.8 m/s² to simulate the gravity experienced by objects on Earth.
At step 530, the device can combine the results of steps 500, 510, and 520 above to calculate the new virtual COM. More specifically, in some embodiments, the device can sum the spring forces applied by the virtual end effectors, the friction forces applied to the constrained virtual end effectors, and the gravity force described above to determine the new spatial position of the virtual COM. A sketch of this force accumulation follows.
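A point-mass sketch of steps 500 through 530 follows. Only g = 9.8 m/s² comes from the text; the mass, spring constant, friction gain, time step, and the exact force models are illustrative assumptions and simplifications.

    import numpy as np

    def next_center_of_mass(com, actor_com, virtual_effectors, actor_effectors,
                            constrained, mass=70.0, k=400.0,
                            friction_gain=120.0, dt=1.0 / 60.0):
        """Fig. 5 sketch: accumulate spring, friction, and gravity forces on
        the COM and integrate them into a new COM position.

        Positions are numpy arrays; the effector dicts map names to positions.
        """
        g = np.array([0.0, -9.8, 0.0])
        force = mass * g                                      # step 520: gravity

        for name, v_pos in virtual_effectors.items():
            # Step 500: spring force from the difference between the virtual
            # limb length and the corresponding real-world limb length.
            virtual_len = np.linalg.norm(com - v_pos)
            actor_len = np.linalg.norm(actor_com - actor_effectors[name])
            direction = (com - v_pos) / max(virtual_len, 1e-9)
            force += -k * (virtual_len - actor_len) * direction

            if name in constrained:
                # Step 510: friction at a constrained contact, simplified to a
                # force moving the virtual COM toward the real-world COM (the
                # ground reaction transmitted through the contact).
                force += friction_gain * (actor_com - com) / len(constrained)

        # Step 530: sum the forces and integrate to the new COM position.
        return com + (force / mass) * dt * dt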
Fig. 6 is a flow chart illustrating a method for calculating a final pose that avoids penetration geometry, according to an embodiment. More specifically, Fig. 6 illustrates a series of steps that can be performed by a device to calculate a final virtual pose based at least in part on the intermediate virtual pose, the set of interactive contact points, and the new virtual COM. The virtual pose is calculated so as to ensure that no virtual end-effector point penetrates any geometry of any virtual feature. In some embodiments, each step can be performed by any combination of hardware and/or software, such as one or more computerized devices. For the discussion that follows, such a device is assumed.
For the following discussion of Fig. 6, a possible virtual character simulation method is now defined. In some embodiments, a computerized device can define the virtual character to simulate the real-world actor's body and motion using a spring model with a center of mass, similar to the model described above in connection with Fig. 5. In such an embodiment, the virtual character pose can be defined by a virtual center-of-mass point and a set of virtual end effectors corresponding to the end-effector set associated with the real-world actor and the spatial center-of-mass point.
At step 600, the computerized device can combine the intermediate virtual pose and the next virtual center of mass (COM) to calculate a new virtual pose for the virtual character. In some embodiments, the device can receive the virtual end-effector position set defining the intermediate virtual pose, or that set may already be stored in memory. In some embodiments, the intermediate virtual pose can be defined at least in part by a process similar to the intermediate virtual pose calculation method described above in connection with Fig. 4. In some embodiments, the next virtual COM can be a point in virtual space defined by one or more coordinates, such as spatial coordinates of the form (x, y, z) or (r, θ, φ). In some embodiments, the device can receive the next virtual COM, determined for example by a method similar to the virtual COM calculation method described above in connection with Fig. 5, or that COM may already be stored in memory. For example, in some embodiments, the device can employ a standard inverse kinematics method, coupled with an optimization process, to calculate the new virtual pose based on the next virtual COM and the intermediate virtual pose. In some embodiments, the new virtual pose can be defined at least in part by a new virtual end-effector position set and the new virtual COM. In some embodiments, this calculation can be restricted, constrained, or otherwise influenced by the set of interactive contact points and the constraints associated with the virtual character.
At step 610, the device can check the new virtual pose for any penetration geometry and/or collisions. More specifically, in some embodiments, the device can ensure that no virtual end-effector position defined by the new pose passes through or beyond the surface of a virtual feature of the virtual world in which the character is rendered. For example, in some embodiments, the device can cycle through each virtual end-effector position defined by the new virtual pose and compare that position with the contact constraint sets of one or more virtual features. From these comparisons, the device can determine whether one or more "collisions" have occurred, i.e., whether any virtual character contact point is currently defined such that it "passes through" the surface of a virtual feature, such as a solid object.
If one or more collisions are detected at step 610, then at step 620 the device can include one or more inequality constraints for each collision/penetration point and recalculate the new pose. More specifically, in some embodiments, the device can receive the inequality constraint set for each virtual feature in the current virtual world, or that set may already be stored in memory. In some embodiments, the device can cycle through each collision detected at step 610 above and insert the inequality constraint associated with the collision point into the new-pose calculation discussed above in connection with step 600. By doing so, the device can revise the initially calculated new pose to ensure that it complies with the restrictions and limits of the virtual world, particularly with respect to the virtual features of that world. A sketch of this collision check and re-solve follows.
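The sketch below assumes each contact constraint is a (surface point, outward normal) pair, as in the structure assumed earlier, and solve_pose stands in for the step-600 inverse kinematics and optimization solve; neither detail comes from the disclosure itself.

    import numpy as np

    def resolve_penetrations(pose, com, features, solve_pose):
        """Fig. 6 sketch: detect effector/feature penetrations and re-solve
        the pose with one inequality constraint per violation."""
        violations = []
        for name, pos in pose.items():                         # step 610
            for feature in features:
                for point, normal in feature.contact_constraints:
                    depth = np.dot(np.asarray(pos) - np.asarray(point), normal)
                    if depth < 0.0:                            # inside the surface
                        # Required inequality: dot(pos - point, normal) >= 0.
                        violations.append((name, point, normal))
        if violations:                                         # step 620
            pose = solve_pose(com, extra_inequalities=violations)
        return pose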
If no collision is detected in step 610, or once the new pose has been recalculated in step 620, then in step 630 the device can send the new, now final, pose to an output device for display. More particularly, in some embodiments, the device can send the new virtual center of mass and the virtual end-effector positions of the final pose to the output device for display. For example, after completing the above steps, the device can send the final pose information to a screen for display to a user, such as a video game console user. In some embodiments, the device can send the final pose information to one or more hardware and/or software modules configured to receive the final pose information and perform further processing on it. For example, the device can send the final pose information to a software module capable of using the final pose information to render the virtual character in an interactive video game associated with a video game console, such as a sports or adventure game.
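Tying the sketches above together, a single Fig. 6 iteration could then read as follows; the returned dictionary merely stands in for whatever interface the output device or rendering module actually expects.

```python
def figure6_iteration(intermediate_effectors, next_com, features, weights):
    """Illustrative driver for steps 600-630, reusing the helpers sketched above."""
    pose, com = solve_new_pose(intermediate_effectors, next_com, weights)  # step 600
    collisions = detect_penetrations(pose, features)                      # step 610
    if collisions:                                                        # step 620
        pose, com = recompute_with_constraints(pose, weights, collisions)
    return {"virtual_com": com, "virtual_effectors": pose}                # step 630
```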
As used in this specification, the singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise. Thus, for example, the term "module" is intended to mean a single module or a combination of modules.
While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Where methods described above indicate certain events occurring in a certain order, the ordering of certain events may be modified. Additionally, certain events may be performed concurrently in a parallel process when possible, as well as performed sequentially as described above.
Some embodiments described herein relate to a computer storage product with a computer- or processor-readable medium (also referred to as a processor-readable medium) having instructions or computer code thereon for performing various computer-implemented operations. The media and computer code (also referred to as code) may be designed and constructed for a specific purpose or purposes. Examples of computer-readable media include, but are not limited to: magnetic storage media such as hard disks, floppy disks, and magnetic tape; optical storage media such as Compact Disc/Digital Video Discs (CD/DVDs), Compact Disc-Read Only Memories (CD-ROMs), and holographic devices; magneto-optical storage media such as optical disks; carrier wave signal processing modules; and hardware devices that are specially configured to store and execute program code, such as general purpose microprocessors, microcontrollers, Application-Specific Integrated Circuits (ASICs), Programmable Logic Devices (PLDs), and Read-Only Memory (ROM) and Random-Access Memory (RAM) devices.
Examples of computer code include, but are not limited to, micro-code or micro-instructions, machine instructions such as those produced by a compiler, code used to produce a web service, and files containing higher-level instructions that are executed by a computer using an interpreter. For example, embodiments may be implemented using Java, C++, or other programming languages (e.g., object-oriented programming languages) and development tools. Additional examples of computer code include, but are not limited to, control signals, encrypted code, and compressed code.
While various embodiments have been described as having particular features and/or combinations of components, other embodiments are possible having a combination of any features and/or components from any of the embodiments, where appropriate.

Claims (20)

1. A method, comprising:
defining a virtual feature, the virtual feature being associated with at least one engaging condition;
receiving an end-effector coordinate associated with an actor; and
calculating an actor intention based at least in part on a comparison between the at least one engaging condition and the end-effector coordinate.
2. The method according to claim 1, wherein calculating the actor intention further comprises:
defining an example motion associated with the virtual feature; and
comparing the end-effector coordinate with an example end-effector coordinate associated with the example motion.
3. The method according to claim 2, wherein the comparing comprises comparing a first velocity of an end-effector associated with the end-effector coordinate with a second velocity of an example end-effector associated with the example end-effector coordinate.
4. The method according to claim 3, wherein the comparing is based at least in part on a low-dimensional end-effector vector.
5. The method according to claim 3, wherein the end-effector coordinate is associated with a free end-effector.
6. The method according to claim 1, further comprising:
assigning an actor intention value if a value associated with the comparison is below a predetermined threshold.
7. The method according to claim 1, further comprising:
calculating one or more contact constraints associated with the virtual feature, the contact constraints being based on at least one dimension of the virtual feature.
8. A method, comprising:
defining an example pose, the example pose including at least one example end-effector position;
receiving an actor end-effector position; and
calculating a virtual pose position based at least in part on the actor end-effector position.
9. The method according to claim 8, wherein the calculating is further based at least in part on an interpolation of the example end-effector position and the actor end-effector position.
10. The method according to claim 9, wherein the actor end-effector position is a current actor end-effector position, and the interpolation is further based at least in part on at least one of:
a difference between the current actor end-effector position and a previous actor end-effector position;
the example pose; and
a direction of travel of the actor.
11. The method according to claim 8, wherein the virtual pose position is a new virtual pose position, the actor end-effector position is associated with a constrained actor end-effector, and a previous virtual end-effector position corresponding to the actor end-effector position is associated with a free virtual end-effector.
12. The method according to claim 8, wherein, if the actor end-effector position is not associated with a constrained actor end-effector, the virtual pose position is not based at least in part on the at least one example end-effector position.
13. The method according to claim 8, wherein, if a previous virtual end-effector position corresponding to the actor end-effector position is not associated with a free virtual end-effector, the virtual pose position is not based at least in part on the at least one example end-effector position.
14. A method, comprising:
receiving a virtual character center-of-mass position ("virtual COM"), an actor center-of-mass position ("actor COM"), a virtual character end-effector position ("virtual end-effector"), and an actor end-effector position ("actor end-effector"); and
calculating a new virtual character center-of-mass position ("next virtual COM") based at least in part on one or more of:
an elastic force based at least in part on a first relative position of the actor COM and the actor end-effector and a second relative position of the virtual COM and the virtual end-effector; and
a gravity compensation value.
15. The method according to claim 14, wherein the gravity compensation value is evenly distributed over one or more virtual end-effectors associated with the virtual COM.
16. The method according to claim 14, further comprising:
calculating an updated virtual character center-of-mass position ("updated COM") based at least in part on at least one of:
the next virtual COM; and
a friction force value based at least in part on a virtual distance between the virtual COM and the next virtual COM.
17. The method according to claim 14, wherein the friction force value is based at least in part on at least one of:
a contact surface type of a virtual feature currently in contact with the virtual end-effector; and
a contact type of the virtual end-effector.
18. The method according to claim 14, further comprising:
calculating a new virtual pose based at least in part on the next virtual COM and at least one next virtual end-effector position.
19. The method according to claim 18, wherein the calculating minimizes a difference between the new virtual pose and a previous virtual pose.
20. The method according to claim 18, further comprising:
detecting a geometry penetration based at least in part on the at least one next end-effector position and a contact constraint associated with a virtual feature; and
recalculating the new virtual pose based at least in part on the geometry penetration and at least one inequality constraint.
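For illustration only, and not as part of the claimed subject matter, the sketch below shows one plausible reading of the center-of-mass update recited in claims 14 through 16: an elastic term comparing actor and character COM-to-effector offsets, a gravity compensation split evenly over the end-effectors, and a friction damping tied to the virtual COM displacement. All function names and constants are assumptions of the sketch.

```python
import numpy as np

def next_virtual_com(virtual_com, actor_com, virtual_effectors, actor_effectors,
                     stiffness=5.0, gravity=(0.0, -9.8, 0.0), dt=1.0 / 60.0):
    """Next virtual COM from an elastic force plus gravity compensation
    (cf. claims 14 and 15); the constants are illustrative."""
    v_com = np.asarray(virtual_com, dtype=float)
    a_com = np.asarray(actor_com, dtype=float)
    v_eff = np.asarray(virtual_effectors, dtype=float)  # shape (n, 3)
    a_eff = np.asarray(actor_effectors, dtype=float)    # shape (n, 3)
    g = np.asarray(gravity, dtype=float)

    # Elastic force: first relative position (actor COM to actor effectors)
    # versus second relative position (virtual COM to virtual effectors).
    elastic = stiffness * ((a_eff - a_com) - (v_eff - v_com)).mean(axis=0)

    # Gravity compensation split evenly over the effectors (cf. claim 15);
    # summed back over all effectors it cancels gravity at the COM.
    compensation = (-g / len(v_eff)) * len(v_eff)

    return v_com + dt * (elastic + g + compensation)

def updated_com(virtual_com, next_com, friction_coeff=0.2):
    """Damped COM update: the friction value grows with the virtual
    distance between the current and next COM (cf. claim 16)."""
    cur = np.asarray(virtual_com, dtype=float)
    displacement = np.asarray(next_com, dtype=float) - cur
    damping = min(friction_coeff * float(np.linalg.norm(displacement)), 1.0)
    return cur + (1.0 - damping) * displacement
```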
CN2010800100090A 2009-01-21 2010-01-21 Character animation control interface using motion capture Pending CN102341767A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US46115409P 2009-01-21 2009-01-21
US61/461,154 2009-01-21
PCT/US2010/021587 WO2010090856A1 (en) 2009-01-21 2010-01-21 Character animation control interface using motion capture

Publications (1)

Publication Number Publication Date
CN102341767A (en) 2012-02-01

Family

ID=42542352

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010800100090A Pending CN102341767A (en) 2009-01-21 2010-01-21 Character animation control interface using motion capture

Country Status (3)

Country Link
EP (1) EP2389664A1 (en)
CN (1) CN102341767A (en)
WO (1) WO2010090856A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114155324B (en) * 2021-12-02 2023-07-25 北京字跳网络技术有限公司 Virtual character driving method and device, electronic equipment and readable storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6088042A (en) * 1997-03-31 2000-07-11 Katrix, Inc. Interactive motion data animation system
US6522332B1 (en) * 2000-07-26 2003-02-18 Kaydara, Inc. Generating action data for the animation of characters
AU2002303082A1 (en) * 2001-01-26 2002-09-12 Zaxel Systems, Inc. Real-time virtual viewpoint in simulated reality environment
US20040104935A1 (en) * 2001-01-26 2004-06-03 Todd Williamson Virtual reality immersion system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6005548A (en) * 1996-08-14 1999-12-21 Latypov; Nurakhmed Nurislamovich Method for tracking and displaying user's spatial position and orientation, a method for representing virtual reality for a user, and systems of embodiment of such methods
US20020140633A1 (en) * 2000-02-03 2002-10-03 Canesta, Inc. Method and system to present immersion virtual simulations using three-dimensional measurement
US7403202B1 (en) * 2005-07-12 2008-07-22 Electronic Arts, Inc. Computer animation of simulated characters using combinations of motion-capture data and external force modelling or other physics models

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Shin et al.: "Motion synthesis and editing in low-dimensional spaces", Computer Animation and Virtual Worlds *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107995855A (en) * 2015-03-26 2018-05-04 拜欧米特制造有限责任公司 For planning and performing the method and system of joint replacement flow using motion capture data
US10973580B2 (en) 2015-03-26 2021-04-13 Biomet Manufacturing, Llc Method and system for planning and performing arthroplasty procedures using motion-capture data
CN105678211A (en) * 2015-12-03 2016-06-15 广西理工职业技术学院 Human body dynamic characteristic intelligent identification system
WO2018082692A1 (en) * 2016-11-07 2018-05-11 Changchun Ruixinboguan Technology Development Co., Ltd. Systems and methods for interaction with application

Also Published As

Publication number Publication date
EP2389664A1 (en) 2011-11-30
WO2010090856A1 (en) 2010-08-12

Similar Documents

Publication Publication Date Title
CN111402290B (en) Action restoration method and device based on skeleton key points
US11948376B2 (en) Method, system, and device of generating a reduced-size volumetric dataset
CN107428004B (en) Automatic collection and tagging of object data
US10825197B2 (en) Three dimensional position estimation mechanism
US10733798B2 (en) In situ creation of planar natural feature targets
JP2021170341A (en) Method and system for generating detailed dataset of environment through gameplay
CN104781849A (en) Fast initialization for monocular visual simultaneous localization and mapping (SLAM)
Ren et al. Change their perception: RGB-D for 3-D modeling and recognition
CN115699091A (en) Apparatus and method for three-dimensional pose estimation
CN109840508A (en) One robot vision control method searched for automatically based on the depth network architecture, equipment and storage medium
CN102341767A (en) Character animation control interface using motion capture
Haggag et al. An adaptable system for rgb-d based human body detection and pose estimation: Incorporating attached props
Ohkawa et al. Efficient annotation and learning for 3d hand pose estimation: A survey
CN116700471A (en) Method and system for enhancing user experience of virtual reality system
US20110175918A1 (en) Character animation control interface using motion capure
Kalampokas et al. Performance benchmark of deep learning human pose estimation for UAVs
Eom et al. Data‐Driven Reconstruction of Human Locomotion Using a Single Smartphone
Li Badminton motion capture with visual image detection of picking robotics
Hachaj et al. RMoCap: an R language package for processing and kinematic analyzing motion capture data
Peng Research on dance teaching based on motion capture system
Juang et al. Human body 3D posture estimation using significant points and two cameras
Zeng et al. Motion capture and reconstruction based on depth information using Kinect
Saha et al. A study on leg posture recognition from Indian classical dance using Kinect sensor
Corbett-Davies et al. Physically interactive tabletop augmented reality using the Kinect
Urgo et al. AI-based pose estimation of human operators in manufacturing environments

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20120201