US11798176B2 - Universal body movement translation and character rendering system


Info

Publication number
US11798176B2
Authority
US
United States
Prior art keywords
biomechanical
character
motion
source
axis
Prior art date
Legal status
Active, expires
Application number
US17/157,713
Other versions
US20210217184A1 (en
Inventor
Simon PAYNE
Darren Rudy
Current Assignee
Electronic Arts Inc
Original Assignee
Electronic Arts Inc
Priority date
Filing date
Publication date
Application filed by Electronic Arts Inc filed Critical Electronic Arts Inc
Priority to US17/157,713
Publication of US20210217184A1
Assigned to ELECTRONIC ARTS INC. (Assignors: RUDY, DARREN; PAYNE, SIMON)
Application granted
Publication of US11798176B2

Classifications

    • G06T 7/251: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving models
    • G06T 13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06V 40/23: Recognition of whole body movements, e.g. for sport training
    • G06T 2219/2016: Rotation, translation, scaling (indexing scheme for editing of 3D models)

Definitions

  • the described technology generally relates to computer technology and, more specifically, to animation.
  • Modern video games often include characters and creatures that have detailed, lifelike movement and animation. This is often implemented through a computationally expensive animation process, through which a 3D model is animated using a complex script. Generally, the 3D model must be manipulated through the entire range of motion captured in the animation.
  • a video game modeler may have to utilize software to create a 3D model of the character's body and then separately adjust the pose of the model for each frame in the run animation. In other words, the video game modeler may have to manually adjust a pose of the character model for each step defined in the run animation.
  • the animation may only be suitably applied to that particular character model, since translating that movement to an entirely different model having different features, dimensions, and extremities may not be possible, may yield unusual results, or may result in the loss of data and animation fidelity.
  • hard-coding a movement animation for a character is a process that can result in a large amount of work which is not transferable between characters and creatures, requiring the distinct animations of each character or creature to be created separately.
  • Described herein are systems and methods for universal body movement translation and character rendering, such that motion data from a source character can be translated and used to direct movement of a target character model in a way that respects the anatomical differences between the two characters.
  • a three-dimensional character model can be defined for a character of a certain creature type.
  • the various biomechanical parts of the three-dimensional character model may have specifically defined constraints, which can include ranges of motion and neutral positioning, that are associated with that character.
  • the biomechanical parts of the three-dimensional character model can be arranged into different poses (e.g., adjustments from a neutral positioning of that part) and an expression or movement animation may be thought of as a series of full-body poses that are stitched together, with each full-body pose made up of the many poses of the different biomechanical parts.
  • a pose for a biomechanical part may be converted into a singular action code that indicates the adjustment (e.g., rotational and translational positioning) of the part in all six axes of freedom, normalized for the constraints that are specific to that character.
  • this allows a complex, full-body pose of a three-dimensional character model to be represented as a collection of action codes, which represent the combination of adjustments made to the various biomechanical parts of the model to arrive at that full-body pose.
  • an animator may desire a first biomechanical part of a three-dimensional character model to be in a certain pose and can specify a first action code for adjusting the first biomechanical part of the model.
  • the animator may want a second biomechanical part of the three-dimensional character model to be in a certain pose at the same time, and may therefore combine the first action code with a second action code indicating positioning of the second biomechanical part. In this way, the animator can easily generate complex, full-body poses for a character model.
  • an animator can simply specify combinations of action codes to cause generation of a full-body pose of the three-dimensional character model.
  • the animator may be able to move around and adjust the parts of the three-dimensional character model until the desired full-body pose is obtained, and the combination of action codes associated with that full-body pose may be generated.
  • an animation can be thought of as a sequence of full-body poses that are stitched together, which is represented by a series of different collections of action codes.
  • the action codes serve as a universal language for describing the movement and positioning of the biomechanical parts in a three-dimensional character model, and animators can rapidly create full-body poses and animations for any particular three-dimensional character model via combinations of action codes. Combinations of these action codes can generate complex poses and animation that are not possible in prior systems.
  • action codes may be applied universally to any three-dimensional character model, including different character models of the same or different type of creatures. However, the action codes may be evaluated in a manner that respects the different constraints and anatomical differences associated with each character. In other words, an animator may take a first action code for a first biomechanical part and similarly specify the action code for other target character models.
  • target character models will then express the same pose for their first biomechanical part, subject to any relative adjustments made for the constraints or anatomical differences associated with each target character, such as restrictions on the full range of motion for that first biomechanical part. Therefore, the techniques described herein enable an animator to rapidly specify full-body poses used in expressions (e.g., movement animations) via combinations of action codes, even when the characters have different body dimensions (e.g., both a first character and a second character are human beings, but the first character may have longer limbs).
  • the actual resulting poses or expressions of the 3D character model that are generated for each character may be configured to be distinct, if desired.
  • a second character may have a slightly different walking animation than a first character despite using similar action codes due to the second character's biomechanical parts having different restrictions on the full range of motion (e.g., the second character may have a different gait, possibly due to an injury that restricted the range of motion of the second character's legs).
  • each character may optionally have unique movement characteristics that can be layered on top of the universal language.
  • the techniques described herein rely on transferable action codes, which may include a common set of reference codes for fundamental biomechanical parts shared by different characters and/or animals.
  • an animator may instead rely on the common set of action codes to translate movement animation.
  • prior systems may require an animator to uniquely arrange a 3D character model into a full-body pose for each frame of a movement animation. Therefore, a character will have only a small defined set of full-body poses which have been rendered. Any modification to the movement animation or the 3D model itself may require significant additional work by the animator.
  • the rules-based approach described herein utilizes action codes to describe hundreds, thousands, or more different poses. Additionally, since the rules-based approach relies on a common set of defined biomechanical parts across different characters/animals for the action codes, animators can rapidly specify combinations of action codes for any character.
  • a video game may store pre-rendered animation for a character that was rendered using sets of action codes.
  • a video game system executing the video game may generate and render an animation for a character during runtime of the video game based on sets of action codes.
  • the techniques described herein can allow for reductions in storage space.
  • systems and/or computer systems comprise computer readable storage media having program instructions embodied therewith, and one or more processors configured to execute the program instructions to cause the one or more processors to perform operations comprising one or more aspects of the above- and/or below-described embodiments (including one or more aspects of the appended claims).
  • computer program products comprising computer readable storage media are also described; the computer readable storage media have program instructions embodied therewith, the program instructions executable by one or more processors to cause the one or more processors to perform operations comprising one or more aspects of the above- and/or below-described embodiments (including one or more aspects of the appended claims).
  • a computer-implemented method includes obtaining motion data for a source character; determining, from the motion data, motion of a source biomechanical part of the source character; determining one or more constraints for the source biomechanical part, including a first range of motion defined for the source biomechanical part and the source character; generating, based on the one or more constraints for the source biomechanical part, an action code representative of the motion of the source biomechanical part; determining, for a target character, a target biomechanical part that corresponds to the source biomechanical part; determining one or more constraints for the target biomechanical part, including a second range of motion defined for the target biomechanical part and the target character; evaluating, based on the one or more constraints for the target biomechanical part, the action code to determine relative motion of the target biomechanical part; and applying the relative motion of the target biomechanical part to a three-dimensional model of the target character.
  • the method may further include determining one or more offsets associated with the source character and the target character; and, prior to evaluating the action code, applying the one or more offsets to the action code.
  • a non-transitory computer storage media that stores instructions that when executed by a system of one or more computers, cause the one or more computers to perform operations that include: obtaining motion data for a source character; determining, from the motion data, motion of a source biomechanical part of the source character; determining one or more constraints for the source biomechanical part, including a first range of motion defined for the source biomechanical part and the source character; generating, based on the one or more constraints for the source biomechanical part, an action code representative of the motion of the source biomechanical part; determining, for a target character, a target biomechanical part that corresponds to the source biomechanical part; determining one or more constraints for the target biomechanical part, including a second range of motion defined for the target biomechanical part and the target character; evaluating, based on the one or more constraints for the target biomechanical part, the action code to determine relative motion of the target biomechanical part; and applying the relative motion of the target biomechanical part to a three-dimensional model of the target character.
  • a system includes one or more computers and computer storage media storing instructions that when executed by the one or more computers, cause the one or more computers to perform operations including: obtaining motion data for a source character; determining, from the motion data, motion of a source biomechanical part of the source character; determining one or more constraints for the source biomechanical part, including a first range of motion defined for the source biomechanical part and the source character; generating, based on the one or more constraints for the source biomechanical part, an action code representative of the motion of the source biomechanical part; determining, for a target character, a target biomechanical part that corresponds to the source biomechanical part; determining one or more constraints for the target biomechanical part, including a second range of motion defined for the target biomechanical part and the target character; evaluating, based on the one or more constraints for the target biomechanical part, the action code to determine relative motion of the target biomechanical part; and applying the relative motion of the target biomechanical part to a three-dimensional model of the target character.
  • the one or more offsets associated with the source character and the target character are stored in a configuration associated with motion translation between the source character and the target character.
  • the motion of the source biomechanical part includes at least one of: a rotation around the X-axis, a rotation around the Y-axis, a rotation around the Z-axis, a translation in the X-axis, a translation in the Y-axis, and a translation in the Z-axis.
  • the first range of motion defined for the source biomechanical part and the source character is a range for one of: a rotation around the X-axis, a rotation around the Y-axis, or a rotation around the Z-axis.
  • the second range of motion defined for the target biomechanical part and the target character is a range for one of: a rotation around the X-axis, a rotation around the Y-axis, or a rotation around the Z-axis.
  • the action code includes a serial identifying the source biomechanical part.
  • the action code represents the motion of the source biomechanical part for each of: a rotation around the X-axis, a rotation around the Y-axis, a rotation around the Z-axis, a translation in the X-axis, a translation in the Y-axis, and a translation in the Z-axis.
  • generating the action code may include normalizing the motion of the source biomechanical part using the one or more constraints for the source biomechanical part and a normalization scheme.
  • the normalization scheme includes a range of values between −10 and 10.
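  • As an illustrative sketch (Python, with hypothetical names; not the patent's actual implementation), the translation flow summarized above, normalizing source motion into action units and then evaluating those units against a target character's constraints, might look like:

        # Minimal sketch of the described translation flow (illustrative names only).
        from dataclasses import dataclass

        @dataclass
        class RangeOfMotion:
            minimum: float   # e.g. degrees at full negative rotation
            maximum: float   # e.g. degrees at full positive rotation
            neutral: float = 0.0

        SCALE = 10  # normalization scheme of -10..10 described above

        def to_action_unit(value: float, rom: RangeOfMotion) -> float:
            """Normalize an absolute position (e.g. degrees) into a unit-less action unit."""
            if value >= rom.neutral:
                return SCALE * (value - rom.neutral) / (rom.maximum - rom.neutral)
            return SCALE * (value - rom.neutral) / (rom.neutral - rom.minimum)

        def from_action_unit(unit: float, rom: RangeOfMotion) -> float:
            """Evaluate an action unit against a target character's range of motion."""
            if unit >= 0:
                return rom.neutral + (unit / SCALE) * (rom.maximum - rom.neutral)
            return rom.neutral + (unit / SCALE) * (rom.neutral - rom.minimum)

        def translate(value: float, source_rom: RangeOfMotion, target_rom: RangeOfMotion,
                      offset: float = 0.0) -> float:
            """Source motion -> action unit (-10..10) -> relative motion for the target."""
            unit = to_action_unit(value, source_rom) + offset  # optional offset from a relationship config
            unit = max(-SCALE, min(SCALE, unit))
            return from_action_unit(unit, target_rom)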
  • FIG. 1 illustrates a block diagram of an example universal biomechanical expression system.
  • FIG. 2 illustrates an example of how a motion animation can be translated from a human character to a different human character, in accordance with embodiments of the present disclosure.
  • FIG. 3 illustrates an example protocol for a unified action code, in accordance with embodiments of the present disclosure.
  • FIG. 4 illustrates an example of how a motion animation can be translated from a humanoid character to a non-humanoid character having a similar body structure, in accordance with embodiments of the present disclosure.
  • FIG. 5 illustrates an example of how a motion animation can be translated from a humanoid character to a non-humanoid character having a different body structure, in accordance with embodiments of the present disclosure.
  • FIG. 6 illustrates an example of how a motion animation can be translated from a non-human character to a different non-human character, in accordance with embodiments of the present disclosure.
  • FIG. 7 illustrates an example of how a motion animation can be translated from a non-human character to a human character, in accordance with embodiments of the present disclosure.
  • FIG. 8 is a flowchart that illustrates how an action code usable for motion translation may be determined, such as by a universal biomechanical expression system.
  • FIG. 9 is a flowchart that illustrates how an action code may be interpreted and used to translate motion, such as by a universal biomechanical expression system.
  • FIG. 10 is a flowchart that illustrates an overview of an example of translating complex animation between characters, such as by a universal biomechanical expression system.
  • FIG. 11 is a block diagram of an example computing system.
  • This specification describes systems and methods for utilizing universal languages for action codes, that can be used to specify poses, full-body poses, and expressions (e.g., animation) that can be applied to three-dimensional character models of different animated characters (e.g., animated characters in video games, films, and so on).
  • an animator can specify a particular collection of action codes according to a universal language, and different animated character models can automatically update to present the particular pose associated with those action codes.
  • a series of these collections of action codes can be used in sequence in order to obtain expressions and more complex animation.
  • these universal action codes may allow complex animation for a target character to be quickly generated based on motion data associated with a source character, by translating the motion data into action codes that can be then applied to the target character.
  • each animated character may have a distinct version of specified poses.
  • each character may have unique restrictions on the full range of motion of its biomechanical parts, such that the specified pose will be adjusted based on those different parameters.
  • each animated character may have a unique body shape (e.g., they may be associated with different types of animals), such as different numbers of limbs, different limb lengths or designs, and so forth. These differences may result in each animated character having a distinct version of a specific pose. Adjustments to the poses may be made in order to express these differences, resulting in lifelike, and unique looking, animated characters.
  • a three dimensional character model can refer to a wire-frame mesh, or point-cloud, model of a body, with textures (e.g., blended textures) on the model representative of the body.
  • for example, the character model may be generated from images of a person (e.g., an actor) and refined by a modeler (e.g., a blend-shape artist).
  • the character model can be divided into a plurality of sections or portions that are associated with the various biomechanical parts in the anatomy associated with the character model (e.g., the skeletal system of the underlying creature).
  • each biomechanical part may be associated with a serial number.
  • each biomechanical part may be associated with a neutral pose (e.g., based on a neutral position within each of the six axes of freedom). The biomechanical parts may be adjusted relative to the neutral pose to conform to each of a multitude of poses defined by the action codes.
  • a biomechanical part may be any moving part in an animal, such as a joint, a bone, cartilage, muscle tissue, and so forth.
  • there may be analogous biomechanical parts between different animals, though they need not have the same structure and/or function.
  • many birds and mammals have necks that provide the head additional rotational flexibility.
  • the neck in each of these animals may be considered to be analogous biomechanical parts.
  • fish do not have necks, but rather a series of bones that connect their skull to the shoulder girdle. That series of bones could be considered either analogous or not analogous to a neck.
  • These relationships between biomechanical parts of different animals may be defined within a relationship table and used to facilitate the mapping of movement animations between different animals.
  • analogous biomechanical parts across different animals may be assigned the same serial number reference code.
  • a range of motion may include a range of rotation or range of translation for a biomechanical part of a character in a given axis of movement.
  • the range of motion may be described in absolute terms (e.g., Euler units or degrees for the range of rotation).
  • a range of rotation may be associated with a specific minimum and a specific maximum, which may refer to the rotational limits for a joint or body part, in a particular axis of rotation, of a particular character or creature.
  • owls are well known to be able to horizontally rotate their heads (e.g., around the Y-axis) up to 270 degrees either left or right in order to see over their shoulder. If the neutral pose associated with the owl is looking straight ahead, then a designated neutral position within the available range of rotation in the Y-axis can be defined as the zero degree position.
  • the owl rotating its head fully to the left to look behind it may be considered the −270 degree position (e.g., a full leftward rotation) and the owl rotating its head fully to the right to look behind it may be considered the +270 degree position (e.g., a full rightward rotation).
  • the directions may be swapped, such that the −270 degree position may be associated with a full rightward rotation and the +270 degree position may be associated with a full leftward rotation. Either is acceptable, as long as the orientation of the reference system remains consistent across different animals.
  • the full range of motion for a joint or body part, in a particular axis, for a specific character or animal can be used to normalize motion data and generate action codes that can be applied across different character or animals.
  • Any suitable numerical, unit-less range may be used for the normalization, such as −1 to 1, 0 to 10, and so forth.
  • positions within the range of motion for a joint or body part may be normalized and expressed based on a range of −1 to 1, such that −1 is associated with the specific minimum of the range of motion and 1 is associated with the specific maximum of the range of motion.
  • a normalized value of −1 may be associated with a full leftward rotation (e.g., the −270 degree position) and a normalized value of 1 may be associated with a full rightward rotation (e.g., the +270 degree position).
  • a human being may only be capable of rotating their head around the Y-axis up to 90 degrees to the left or right.
  • a normalized value of −1 may be associated with a full leftward rotation (e.g., the −90 degree position) and a normalized value of 1 may be associated with a full rightward rotation (e.g., the +90 degree position).
  • This normalization may allow animation data to be meaningfully transferred between animals.
  • the full range of motion for each joint or body part, in each particular axis may be pre-defined for different characters and creatures. Those values may be stored in a table or database in order to enable the translation of normalized values.
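  • A hypothetical layout for such a table, assuming nested lookups keyed by creature, part serial, and axis (the serial numbers and values below are illustrative only):

        # Hypothetical layout for a biomechanical parts table: full range of motion
        # (min, max) and neutral position per creature, part serial, and axis.
        BIOMECHANICAL_PARTS = {
            "human": {
                "033": {  # neck serial used as an illustrative example
                    "rY": {"min": -90.0, "max": 90.0, "neutral": 0.0},
                },
            },
            "owl": {
                "033": {
                    "rY": {"min": -270.0, "max": 270.0, "neutral": 0.0},
                },
            },
        }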
  • the normalized position of a biomechanical part may refer to the position of a particular biomechanical part once it is normalized (e.g., made unitless) against the full range of motion for that biomechanical part and character in a particular axis, based on the chosen normalization scheme.
  • This normalized position may be referred to as an action unit, and an action code may include a set of action units describing the normalized position of a biomechanical part in each axis of freedom.
  • the chosen scale for normalization may be between −10 and 10, such that −10 corresponds to the specific minimum of the full range of rotation and 10 corresponds to the specific maximum of the full range of rotation.
  • An owl, which is capable of rotating its head around the Y-axis up to 270 degrees to the left or right, may have its head in the rotational position of −135 degrees (e.g., halfway towards the full leftward rotation), which would correspond to a normalized position of −5 under this scheme.
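  • A short worked calculation of this example under the −10 to 10 scheme (assuming a symmetric range with the neutral position at zero degrees):

        # Worked example: owl head at -135 degrees, full range -270..270, neutral 0, scale -10..10.
        position_deg = -135.0
        action_unit = 10 * (position_deg - 0.0) / (0.0 - (-270.0))  # negative side of the range
        print(action_unit)  # -5.0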
  • an action serial or serial number may be a reference code or identifier used to reference a particular biomechanical part (e.g., a joint or body part, or the corresponding analogous joint or body part) across different animals.
  • all recognized biomechanical parts may be indexed and given a serial number that serves as a reference code (e.g., the number 033).
  • Some animals may have unique biomechanical parts with their own reference codes.
  • an action code may be an identifier that informs of the relative positioning of a particular biomechanical part within its full range of motion for each axis of freedom (e.g., at a singular point in time).
  • the action code may include a set of action units, with each action unit describing the relative positioning of a particular biomechanical part within its full range of motion for a specific axis of freedom.
  • an action code may be a unified action code, which includes an action serial or reference code associated with a particular biomechanical part. Action codes are described in further detail in regards to FIG. 3 .
  • a pose may be associated with its ordinary meaning as it pertains to a biomechanical part (e.g., a joint or body part), but it may also be associated with an adjustment of the biomechanical part from a neutral position or the relative positioning of a particular biomechanical part (e.g., a joint or body part) at a singular point in time.
  • a pose for a biomechanical part may be captured and described using an action code.
  • a full-body pose may be associated with its ordinary meaning as it pertains to one or more biomechanical parts, up to all of the biomechanical parts within a character model.
  • a full-body pose may be associated with the adjustments of the biomechanical parts from their neutral position or the relative positioning of the biomechanical parts at a singular point in time.
  • a full-body pose may be captured and described using a collection of action codes, one for each biomechanical part associated with movement.
  • an expression or movement animation may be associated with a series of full-body poses captured over time (e.g., frames within a movement animation).
  • an expression may be communicated as a series of different collections of action codes. If each collection of action codes is used to render a full-body pose that serves as a frame in the movement animation, the various full-body poses can be stitched together in sequence to create the movement animation. Additional interpolation can also be used to smooth out the animation.
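  • A minimal sketch (hypothetical Python types, not taken from the patent) of an expression as a sequence of per-frame action-code collections, with simple linear interpolation used to smooth between stitched full-body poses:

        # Sketch: an expression (movement animation) as a sequence of full-body poses,
        # each pose being a collection of action units keyed by biomechanical part serial.
        from typing import Dict, List

        Pose = Dict[str, List[float]]   # serial -> [rX, rY, rZ, tX, tY, tZ] action units
        Expression = List[Pose]         # one full-body pose per frame

        def interpolate(a: Pose, b: Pose, t: float) -> Pose:
            """Linearly blend two full-body poses to smooth the stitched animation."""
            return {serial: [ua + (ub - ua) * t for ua, ub in zip(a[serial], b[serial])]
                    for serial in a.keys() & b.keys()}

        walk_cycle: Expression = [
            {"003": [0, 0, 5, 0, 0, 0]},   # frame 1: right shoulder halfway through positive rotation
            {"003": [0, 0, -5, 0, 0, 0]},  # frame 2: swung to the opposite side
        ]
        in_between = interpolate(walk_cycle[0], walk_cycle[1], 0.5)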
  • a biomechanical parts table or database may be used to define the constraints associated with the biomechanical parts of a particular character or creature.
  • this reference may list the full range of motion of each biomechanical part, in each axis, for each creature and/or character. This may further enable movement animations to be translated between different characters, since motion data can be normalized against the different full ranges of motion specified for the biomechanical parts of each character.
  • an owl may be able to horizontally rotate its head (e.g., around the Y-axis) up to 270 degrees either left or right in order to see over its shoulder.
  • this reference may include this defined full range of motion on the Y-axis for the neck for owls.
  • a human being may be able to horizontally rotate their head (e.g., around the Y-axis) up to 90 degrees either left or right.
  • This reference may include this defined full range of motion around the Y-axis for the neck of human beings. The use of this information is described in regards to FIGS. 8 , 9 , and 10 .
  • a relationship table or database may map out the relationships between biomechanical parts of different characters or creatures when translating motion between characters or creatures. Information in the relationship table or database can also be included in a configuration associated with two different characters or creatures to facilitate the translation of motion between those two different characters or creatures.
  • the relationship table or database may list a mapping of biomechanical parts between creatures or characters. This may enable motion data to be translated between characters or creatures by having the movement for a biomechanical part in a first creature be translated into movement for a specific biomechanical part of a second creature.
  • the relationship table may also list offsets that may be applied when translating motion data between two different characters or creatures.
  • user input is a broad term that refers to any type of input provided by a user that is intended to be received and/or stored by the system, to cause an update to data that is displayed by the system, and/or to cause an update to the way that data is displayed by the system.
  • examples of user input include keyboard inputs, mouse inputs, digital pen inputs, voice inputs, finger touch inputs (e.g., via touch sensitive display), gesture inputs (e.g., hand movements, finger movements, arm movements, movements of any other appendage, and/or body movements), and/or the like.
  • user inputs to the system may include inputs via tools and/or other objects manipulated by the user.
  • user inputs may include motion, position, rotation, angle, alignment, orientation, configuration (e.g., fist, hand flat, one finger extended, etc.), and/or the like.
  • user inputs may comprise a position, orientation, and/or motion of a hand and/or a 3D mouse.
  • a data store can refer to any computer readable storage medium and/or device (or collection of data storage mediums and/or devices). Examples of data stores include, but are not limited to, optical disks (e.g., CD-ROM, DVD-ROM, etc.), magnetic disks (e.g., hard disks, floppy disks, etc.), memory circuits (e.g., solid state drives, random-access memory (RAM), etc.), and/or the like.
  • Another example of a data store is a hosted storage environment that includes a collection of physical data storage devices that may be remotely accessible and may be rapidly provisioned as needed (commonly referred to as “cloud” storage).
  • a database can refer to any data structure (and/or combinations of multiple data structures) for storing and/or organizing data, including, but not limited to, relational databases (e.g., Oracle databases, mySQL databases, and so on), non-relational databases (e.g., NoSQL databases, and so on), in-memory databases, spreadsheets, comma separated values (CSV) files, eXtensible Markup Language (XML) files, TeXT (TXT) files, flat files, spreadsheet files, and/or any other widely used or proprietary format for data storage.
  • Databases are typically stored in one or more data stores. Accordingly, each database referred to herein (e.g., in the description herein and/or the figures of the present application) is to be understood as being stored in one or more data stores.
  • FIG. 1 illustrates a block diagram of an example universal biomechanical expression system 100 .
  • the universal biomechanical expression system 100 can be a system of one or more computers, one or more virtual machines executing on a system of one or more computers, and so on. As described above, the universal biomechanical expression system 100 may be able to store and analyze motion data associated with a three-dimensional character model.
  • the three-dimensional character model can be generated in multiple ways and any suitable method will do.
  • an animator may rely on full-body images or scans of a subject (e.g., a real-life person or animal).
  • one or more cameras may be used to capture images of the entire body of the subject from different angles.
  • depth sensors (e.g., lidar, infrared points being projected onto the body, stereo cameras, and so on) may also be used to obtain depth information about the subject's body.
  • the actor may be requested to make poses with portions (e.g., biomechanical parts) of their body.
  • the system can obtain the captured images and generate a photogrammetric model of the subject's body (e.g., a point cloud of the subject's body, such as points connected via vertices).
  • the photogrammetric model can be used to generate a three-dimensional character model that will be imported into a video game.
  • the three-dimensional model can include textures for the subject's body, and preserve a portion of the vertices included in the point cloud.
  • the three-dimensional character model may be further optimized for processing and storage constraints.
  • the generated three-dimensional model may have biomechanical parts in neutral (e.g., resting) positions, from which the positions of the biomechanical parts may be adjusted.
  • the biomechanical parts of the three-dimensional character model may also be adjusted and manipulated using action codes. Different characters may have different neutral positions for their biomechanical parts and an action code may inform of how a particular biomechanical part should be adjusted from the neutral position of that biomechanical part, relative to the full range of motion of the biomechanical part for the character.
  • a character model with its different poses may be stored in one or more databases.
  • for each biomechanical part, a range can be defined for its permitted movement along with a neutral position within that range. For example, the maximum limit that a head can be turned left or right on the Y-axis can be provided, along with the neutral position within that range (e.g., the resting position may be looking straight ahead).
  • Motion data (e.g., for a complex animation) for a three-dimensional character model can also be generated in multiple ways.
  • An animator may be able to manipulate the biomechanical parts of the three-dimensional character model by hand for each frame of a complex animation, or the animator may be able to provide a combination of action codes for each frame that is used to adjust the biomechanical parts of the three-dimensional character model.
  • motion capture can be used to capture movement from a human actor that can be converted to motion data applicable to the three-dimensional character model. This motion data can then be translated and used by the universal biomechanical expression system 100 to drive the animation of a different three-dimensional character model.
  • the universal biomechanical expression system 100 may include a camera 104 , as illustrated, taking images or video of an actor 102 . While the example includes one camera 104 , it should be understood that multitudes of cameras can be utilized. For example, the cameras may be included in a camera rig, with each camera capturing a high-resolution image of a specific portion of the actor's 102 body. Additionally, two or more cameras may capture a same portion of the actor's 102 body, but taken at different angles (e.g., stereo cameras). In this way, depth information can be obtained. Various images may be captured and utilized to generate a more complete three-dimensional model of the actor. The camera 104 may also be used to capture movement from the actor 102 that can be applied to a three-dimensional character model.
  • the universal biomechanical expression system 100 includes a capture engine 110 that can receive the images captured by the camera(s) 104 and generate a three-dimensional character model.
  • a user of the user device 130 can generate a three-dimensional model of the actor's 102 body.
  • the capture engine 110 can combine (e.g., stitch together) images of the actor's body, and generate a point cloud of the body.
  • the point cloud can include multitudes of points defining depth associated with the actor's 102 body at a respective location. This point cloud can therefore represent an accurate model of a topology of the actor's 102 body.
  • the capture engine 110 can output the point cloud, for example for presentation on the user device 130 , and the user can generate a three-dimensional model of a character based on the point cloud.
  • the universal biomechanical expression system 100 is in communication with a user device 130 of a user (e.g., a modeler, an animator, and so on).
  • the user device 130 can be a desktop computer system, a laptop, a tablet, a mobile device, a wearable computer, and so on.
  • the universal biomechanical expression system 100 may be connected (e.g., a wireless or wired connection) to a display, and a user can directly utilize the universal biomechanical expression system 100 .
  • the universal biomechanical expression system 100 may implement a web application which the user device 130 can access.
  • the user device 130 can present a web page or a user interface associated with an application 132 executing on the user device 130 (e.g., an ‘app’ obtained from an electronic application store, a web application, and so on).
  • the universal biomechanical expression system 100 can then provide information to the user device 130 for inclusion in the web page or application 132 .
  • the user can provide user interactions, such as a combination of action codes, to the user device 130 , and the universal biomechanical expression system 100 can receive these user interactions and generate an output associated with them (e.g., a resulting pose from a combination of action codes).
  • the universal biomechanical expression system 100 may be able to take the captured images or motion capture data of an actor's body during movement (e.g., as frames of a movement animation or full-body poses associated with the three-dimensional character model) and convert it into raw motion data (e.g., the positions of each biomechanical part in the model) associated with the three-dimensional character model.
  • the universal biomechanical expression system 100 may be able to take motion data for a first character model and then translate it to action codes that can be applied to animate a second character model. As a practical outcome, this may effectively enable the movement of the second character model to mimic the motion data captured from the actor 102 that was applied to a first character model.
  • the universal biomechanical expression system 100 may have a mapping engine 120 that is configured to consult a biomechanical part database 122 .
  • the biomechanical part database 122 may include a serial number or reference code associated with each biomechanical part of every character, as well as constraints associated with those biomechanical parts.
  • the constraints may include the full range of motion of each biomechanical part, in each axis, for each creature and/or character.
  • the constraints may also include the designated neutral positions of each biomechanical part, in each axis, for each creature and/or character.
  • the universal biomechanical expression system 100 may consult the biomechanical part database 122 to determine the serial number and the constraints associated with that biomechanical part of the character.
  • the biomechanical part database 122 may organize information in an object-oriented manner, such that default constraints and serial numbers associated with biomechanical parts are defined for different creatures.
  • the biomechanical part database 122 may also include any modified constraints associated with a specific character within each of those creature types, but in the absence of any constraints that are particularly defined for that specific character, the constraints associated with the overall creature type may be used.
  • the biomechanical part database 122 may define a range of rotation in the Y-axis for the head of human beings to be 90 degrees. There may be two human characters, Jack and Jill, which are also listed within the biomechanical part database 122 .
  • Jack has an additional constraint: the range of rotation in the Y-axis for his head is only 70 degrees.
  • the mapping engine 120 may use the range of 70 degrees for evaluating action codes associated with Jack, while using the range of 90 degrees applied to humans overall for evaluating action codes associated with Jill.
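  • A small sketch of this fallback behavior, assuming hypothetical table structures; the 70 and 90 degree values follow the Jack and Jill example above:

        # Sketch of the object-oriented fallback described above: character-specific
        # constraints override creature-level defaults when present (values illustrative).
        CREATURE_CONSTRAINTS = {
            ("human", "head", "rY"): {"min": -90.0, "max": 90.0},
        }
        CHARACTER_CONSTRAINTS = {
            ("Jack", "head", "rY"): {"min": -70.0, "max": 70.0},  # Jack's restricted neck
        }

        def get_constraints(character: str, creature: str, part: str, axis: str) -> dict:
            """Prefer the character's own constraints; otherwise use the creature default."""
            return CHARACTER_CONSTRAINTS.get((character, part, axis),
                                             CREATURE_CONSTRAINTS[(creature, part, axis)])

        jack_range = get_constraints("Jack", "human", "head", "rY")   # 70 degree limits
        jill_range = get_constraints("Jill", "human", "head", "rY")   # falls back to 90 degrees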
  • the mapping engine 120 may also be configured to consult a relationship database 140 .
  • the relationship database 140 may serve to map out the relationships between biomechanical parts of different characters or creatures when translating motion between those characters or creatures. This relationship information may include serial numbers for corresponding biomechanical parts between two characters or creatures, how the different axes of freedom may be mapped between those biomechanical parts, and also any offset values that need to be applied to the action units of the action code when translating motion between the two characters or creatures (in either direction).
  • the relationship database 140 can be consulted to determine the relationship between humans and spiders with the serial number of the right upper leg joint, and it may list the serial number(s) of the corresponding biomechanical parts within a spider anatomy, how the action unit input values for the human right upper leg joint should be mapped to different axes of those corresponding biomechanical parts (e.g., rotations in the X-axis of the human right upper leg joint should correspond to rotations in the Y-axis of a corresponding biomechanical part in the spider), and any offset values that need to be additionally applied to the action unit input values.
  • the relationship database 140 may organize information in an object-oriented manner, such that information is defined and organized by different pairs of creatures.
  • the relationship database 140 may also include any particular modifications associated with translations for specific characters within each of those creature types, but in the absence of that information, the relationship information associated with the overall pair of creatures may be used.
  • the relationship database 140 may define overall relationship data for translating motion between humans and spiders. However, there may be two separate spider characters, Spider A and Spider B. Spider A may have additional relationship information, such as a different set of offsets to apply when translating motion data from humans to Spider A.
  • the mapping engine 120 may use the offsets specific to Spider A when translating motion data to Spider A, while using the overall spider offsets when translating motion data for Spider B.
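  • A hypothetical relationship-table entry illustrating this kind of configuration; the serials, axis mappings, and offset values below are invented for illustration only:

        # Hypothetical relationship-table entry for translating human motion to a spider:
        # corresponding part serials, how axes are remapped, and per-axis offsets.
        HUMAN_TO_SPIDER = {
            "source_creature": "human",
            "target_creature": "spider",
            "part_map": {
                "021": ["121", "125"],         # human right upper leg joint -> two spider leg joints (illustrative serials)
            },
            "axis_map": {
                ("021", "rX"): ("121", "rY"),  # human rX drives spider rY, per the example above
            },
            "offsets": {
                ("121", "rY"): 2.5,            # additive action-unit offset (illustrative)
            },
            # Character-specific overrides layered on top of the creature-level data.
            "character_overrides": {
                "Spider A": {"offsets": {("121", "rY"): 3.0}},
            },
        }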
  • FIG. 2 illustrates an example of how a motion animation can be translated from a human character to a different human character. More specifically, four different human characters are shown in FIG. 2 , including Character A 210 , Character B 220 , Character C 230 , and Character D 240 . Three-dimensional reference axes 202 are also shown in the lower left-hand corner, and the following description is written with the reference axes 202 in mind.
  • each different character may have a distinct full range of movement (e.g., rotation or translation) and designated neutral position for each of the six axes of freedom.
  • This full range of rotation may be described in absolute terms (e.g., in Euler units or degrees).
  • the designated neutral position (also referred to as the neutral pose) of rotation around the Z-axis for the right shoulder is also shown for all four human characters.
  • the designated neutral position may serve as a reference point (e.g., the 0 degree position) within the full range of rotation, which divides the corresponding full range of rotation into a positive range (e.g., from zero to a specific maximum) and a negative range (e.g., from zero to a specific minimum).
  • the rotational position of the right shoulder can be described as a positive or negative number of degrees, and any rotation of the right shoulder can be described as a positive rotation (e.g., towards the positive range) or a negative rotation (e.g., towards the negative range).
  • the biomechanical terms of flexion and extension may also be used to refer to these rotational directions.
  • FIG. 2 shows the right shoulder of Character A 210 having a 220 degree full range of rotation 212 around the Z-axis with a designated neutral position 214 at the midpoint of the full range of rotation 212 .
  • This divides the full range of rotation 212 into a positive range from 0 degrees to 110 degrees (e.g., the specific maximum) and a negative range from 0 degrees to negative 110 degrees (e.g., the specific minimum).
  • the motion (e.g., a series of rotational positions) can be translated from a first character to a second character based on the defined rotational ranges for both characters (e.g., by re-factoring the ranges using a normalization scheme), which allows these idiosyncrasies and the underlying nature-of-movement to be preserved.
  • This re-factorization process respects the relative differences in physical nature between two or more characters and produces a different translation result compared to retargeting, in which motion data (e.g., rotational positions) is simply applied from one character to another in absolute terms (e.g., in Euler units or degrees) using an approximated pose-match.
  • the right shoulder of Character B 220 is shown having a 235 degree full range of rotation 222 around the Z-axis, with a designated neutral position 224 that separates the full range of rotation 222 into a positive range from 0 degrees to 125 degrees (e.g., the specific maximum) and a negative range from 0 degrees to −110 degrees (e.g., the specific minimum).
  • the right shoulder of Character C 230 is shown also having a 235 degree full range of rotation 232 around the Z-axis, with a designated neutral position 234 that separates the full range of rotation 232 into a positive range from 0 degrees to 120 degrees (e.g., the specific maximum) and a negative range from 0 degrees to −115 degrees (e.g., the specific minimum).
  • the designated neutral position 234 of Character C 230 is different from the designated neutral position 224 of Character B 220 despite both characters having similar, 235 degree full ranges of rotation.
  • the right shoulder of Character D 240 is shown having a 205 degree full range of rotation 242 around the Z-axis, with a designated neutral position 244 that separates the full range of rotation 242 into a positive range from 0 degrees to 90 degrees (e.g., the specific maximum) and a negative range from 0 degrees to −115 degrees (e.g., the specific minimum).
  • the designated neutral position does not necessarily have to divide a full range of rotation into two equal ranges in both the positive and negative directions.
  • the ratio between the positive range and the negative range may vary depending on the designated neutral position relative to the full range of movement for that movement axis, which may change based on the character and the creature's anatomical structure. For instance, even though the designated neutral position 214 for Character A 210 divides the full range of rotation 212 into two equal movement ranges (resulting in a 1:1 ratio between the positive and negative ranges), the other characters depicted in FIG. 2 have different ratios between the positive and negative ranges.
  • the full range of motion for a biomechanical part can be described using rescaled, normalized values instead.
  • Any suitable normalization scheme may be used. For instance, normalized values between −1 and 1 can be used, with −1 corresponding to the specific minimum of the range of motion, 1 corresponding to the specific maximum of the range of motion, and 0 corresponding to the designated neutral position.
  • these values can be factorized to make them easier to work with, such as factorizing the previous normalized range by 10 (e.g., 0-10 is a more animation-friendly unit range than 0-1), or a different normalization scheme can be chosen altogether.
  • a normalized rotational value of 10 would correspond to the rotational position at the specific maximum
  • a normalized rotational value of −10 would correspond to the rotational position at the specific minimum
  • a normalized rotational value of 0 would correspond to the designated neutral position.
  • a normalized rotational value of 10 for the right shoulder around the Z-axis would correspond to a rotational position of +110 degrees (e.g., Character A 210 has their right upper arm lifted as high as possible)
  • a normalized rotational value of 5 would correspond to a rotational position of +55 degrees (e.g., 110/2) for Character A 210 .
  • This same normalized rotational value of 5 would correspond to a rotational position of +45 degrees (e.g., 90/2) for Character D 240 , whose anatomy and physical condition results in a shorter range of rotation than that of Character A 210 (e.g., Character D 240 has a full range of rotation 242 with a more-restricted positive range that spans from 0 to 90 degrees).
  • Movement for a first character can be translated into movement for a second character by first normalizing, based on the first character's defined range of motion, the motion data for the first character that is in absolute terms, and then de-normalizing against the second character's defined range of motion.
  • if motion data for Character A 210 indicated that the right shoulder is at a rotational position of +55 degrees relative to the designated neutral position 214 , that would correspond to a normalized rotational value of 5.
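  • The same worked translation expressed as a short sketch (simplified to values within the positive range only, using the numbers from the Character A and Character D example above):

        # Worked example from FIG. 2: Character A's right shoulder at +55 degrees
        # normalizes to 5, which de-normalizes to +45 degrees for Character D.
        SCALE = 10

        def normalize(deg: float, positive_max: float) -> float:
            return SCALE * deg / positive_max          # within the positive range

        def denormalize(unit: float, positive_max: float) -> float:
            return positive_max * unit / SCALE

        unit = normalize(55.0, 110.0)                  # Character A: +55 of 0..110  -> 5.0
        target_deg = denormalize(unit, 90.0)           # Character D: 0..90 range    -> 45.0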
  • motion data can also be expressed as a series of unified action codes.
  • the unified action code may be configured to indicate a specific biomechanical part and any normalized values associated with each of the six axes of freedom (such as the relative rotational positioning of the biomechanical part within the available full range of rotation for a particular axis of rotation).
  • FIG. 3 illustrates an example protocol for a unified action code.
  • a unified action code may include two parts, an action serial 302 and an action code 304 .
  • the action serial 302 may be a serial number (e.g., a three digit number) that is arbitrarily assigned to a particular biomechanical part (which may be common across different characters and/or creatures) and used to reference that biomechanical part.
  • each of the four characters shown in FIG. 2 has a right shoulder joint, which may be associated with a serial number of 003.
  • the action code 304 may be a value set used to represent relative positioning of the referenced biomechanical part within the character's range-of-motion for each of the six axes of freedom. For instance, under a normalization scheme that represents a full range of motion using values from −10 to +10, the action code 304 may include corresponding inputs for each of the three rotational axes and three translation axes, in the order [rotateX, rotateY, rotateZ, translateX, translateY, translateZ]. Each input may have a padding of two zeroes for a double digit factor range of −10 to 10 (with 00 being the designated neutral position). Thus, a value set of “000000000000” would be associated with the default positions for all axes.
  • the example unified action code 310 (“003000010000000”) can be interpreted as having an action serial of “003” for the right shoulder joint and an action code of “000010000000”, which can be broken down into the corresponding inputs of [00, 00, 10, 00, 00, 00] for [rX, rY, rZ, tX, tY, tZ].
  • the normalized rotational value of +10 corresponding to the rotational Z-axis indicates that this example unified action code 310 is associated with a maximum positive angle for right shoulder joint flexion/extension rotation around the Z-axis (e.g., the specific maximum of the available full range of rotation).
  • if this example unified action code 310 were associated with Character A 210 in FIG. 2 , for instance, who has a right shoulder joint with a 220 degree full range of rotation around the Z-axis and a designated neutral position that divides this full range of rotation into an equal positive range (0 degrees to +110 degrees) and negative range (0 degrees to −110 degrees), then the example unified action code 310 would correspond with a rotational positioning of +110 degrees relative to the designated neutral position.
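  • A minimal parsing sketch for this protocol, assuming the layout described above (a three-digit action serial followed by six two-digit action units); since the encoding of negative action units is not spelled out here, only non-negative values are handled:

        # Sketch of parsing the unified action code protocol described above.
        def parse_unified_action_code(code: str):
            serial = code[:3]
            units = [int(code[i:i + 2]) for i in range(3, 15, 2)]
            return serial, units

        serial, units = parse_unified_action_code("003000010000000")
        # serial == "003" (right shoulder joint)
        # units  == [0, 0, 10, 0, 0, 0]  -> full positive rotation around the Z-axis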
  • FIG. 4 illustrates an example of how a motion animation can be translated from a humanoid character to a non-humanoid character having a similar body structure (like-for-like). More specifically, a human Character A 410 is shown alongside a badger Character B 420 , but this specific example of translating human-to-badger motion can be more generally applied to translate motion from a human character to a non-human character (e.g., a different creature).
  • Three-dimensional reference axes 402 are also shown in the lower left-hand corner, and the following description is written with the reference axes 402 in mind.
  • One use case for translating motion from a human character to a non-human character is to allow motion data for the human character (e.g., captured from a human actor) to be used for driving animation of the non-human character, which can save money and time.
  • the universal biomechanical expression system described herein translates motion between two characters based on the defined ranges of motion specific to those two characters (e.g., respecting the relative differences in physical nature between the two characters), the results may be considerably different from the standard practice of retargeting, in which motion data (e.g., rotational positions) is simply applied from one character to another in absolute terms (e.g., in Euler units or degrees) using an approximated pose-match.
  • the universal biomechanical expression system may not necessarily have the capabilities for making logical assumptions (e.g., without additional human guidance) about how types of movement for a first type of creature can be used to drive a like-movement on a second type of creature.
  • the universal biomechanical expression system may be configured to handle movement data rather than deciding which types of movement to use.
  • when motion data for a walking human Character A 410 is used to drive arm and leg movement of the badger Character B 420 (e.g., using the process described in regards to FIG. 2 ), the badger Character B 420 may not be expected to walk “like a human”, because the biomechanical systems of a badger produce a very different nature of movement for a walk as compared to that of a human walk.
  • the biomechanical systems of a badger may result in a badger's ‘neutral’ pose (ready to move, no specific-use movement within the pose yet) looking like the ghosted outline 422 shown in FIG. 4 .
  • the hind legs are perpendicular to the body, the front legs/arms are perpendicular to the chest, and paws are at about 75 degrees, pressed against the ground.
  • the spine and head are parallel to the ground.
  • in order to make correct use of human animation on a badger (e.g., to translate motion data from a human character to a badger character), the badger should “mimic” the human being.
  • the universal biomechanical expression system may be able to apply offset values or multipliers to action unit input values (e.g., the normalized values for rotational or translational position) in order to represent the anatomical differences between one creature and another. It is important to note that the offset values may need to affect both the positions and the rotations of target joints. In other words, the normalized values obtained in the joint-to-joint motion mapping of the translation process described in regards to FIG. 2 can be additionally evaluated with preset offsets (e.g., additive values) to enable human motion to be mimicked by a non-human character.
  • FIG. 4 illustrates this by providing an example of offsets being evaluated on top of action unit values (e.g., the normalized values for rotational or translational position). For instance, offsets can be applied to the action units associated with the hip and base of the spine for a human Character A 410 , resulting in Euler values on those joints belonging to the badger Character B 420 .
  • the badger Character B 420 now stands upright due to the offsets.
  • the ghosted outline 424 in this position depicts the unaffected head now pointing directly upwards, rather than forward-facing. In order to achieve the depicted badger pose 426 , the head and neck joints will also receive applied offsets.
  • there may be additional parts of the badger (possibly all of the badger) that can receive offsets in order to mimic a human-like base pose. All the offsets used may be recorded as a configuration for translating motion from a human to a badger. The final resulting movement animation for the badger would not resemble a badger's natural walk, but rather a badger walking like a human. This mimicking nature is not possible with standard retargeting methods.
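  • as a purely hypothetical sketch (the joint names and offset values below are illustrative, not taken from the disclosure), such additive offsets might be recorded in a human-to-badger configuration and layered onto the normalized action units before they are interpreted for the badger rig:

        # Illustrative per-joint, per-axis additive offsets that stand the badger's
        # base pose upright so it can mimic the human base pose.
        HUMAN_TO_BADGER_OFFSETS = {        # hypothetical configuration values
            "hip":        {"rZ": +4.0},
            "spine_base": {"rZ": +3.0},
            "neck":       {"rX": -2.5},
            "head":       {"rX": -2.0},
        }

        def apply_offsets(joint, action_units, offsets):
            # Add any configured offsets to the incoming action units, clamped to -10..+10.
            adjusted = dict(action_units)
            for axis, delta in offsets.get(joint, {}).items():
                adjusted[axis] = max(-10.0, min(10.0, adjusted.get(axis, 0.0) + delta))
            return adjusted

        # A neutral human hip (0.0) becomes +4.0 on the badger, standing it upright.
        print(apply_offsets("hip", {"rX": 0.0, "rY": 0.0, "rZ": 0.0}, HUMAN_TO_BADGER_OFFSETS))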
  • FIG. 5 illustrates an example of how a motion animation can be translated from a human character to a non-human character having a different body structure. More specifically, a human Character A 510 is shown alongside a spider Character B 520 , but this specific example of translating human-to-spider motion can be more generally applied to translate motion from a human character to a non-human character (e.g., a different creature).
  • a set of three-dimensional reference axes 502 is also shown in the lower left-hand corner and the following description is written with the reference axes 502 in mind.
  • FIG. 5 is meant to show a more extreme example than FIG. 4 of how motion data can be translated between a human character and a non-human character, even with wildly different body structures (and wildly different animation rigs).
  • the leg joints of the human Character A 510 may be used to drive the legs of the spider Character B 520 . This may involve switching effective axes between characters or creatures, either through the use of offsets or by exchanging serials between the source and the target.
  • the human hip and base of the spine may be used with an offset in order to animate the body of the spider, via serializing the thorax of the spider to that of the human pelvis.
  • the legs of the spider would then inherit the human leg animation, also via offsets.
  • an effective re-wiring of the spider's data inputs can be used to switch the Y and Z rotation channels.
  • motion data associated with the rotation of a human's upper leg in the rotational Y-axis can be used to drive rotation of the spider's leg in the rotational Z-axis
  • motion data associated with the rotation of a human's upper leg in the rotational Z-axis can be used to drive rotation of the spider's leg in the rotational Y-axis. This will cause the illustrated leg 522 to rotate in the axis expected of a spider leg, proportionally to the rotation of the human upper leg 512 .
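  • as a hypothetical sketch (the channel names and mapping below are illustrative only), this re-wiring can be expressed as a per-target channel map that swaps the rotational Y and Z inputs before the spider leg is driven:

        # Swap rotation channels between source and target: the human upper leg's
        # rotational Y drives the spider leg's rotational Z, and vice versa.
        SPIDER_LEG_CHANNEL_MAP = {"rY": "rZ", "rZ": "rY"}   # hypothetical mapping

        def rewire_channels(action_units, channel_map):
            return {channel_map.get(axis, axis): value for axis, value in action_units.items()}

        human_upper_leg = {"rX": 0.0, "rY": 6.0, "rZ": 1.0}
        print(rewire_channels(human_upper_leg, SPIDER_LEG_CHANNEL_MAP))
        # {'rX': 0.0, 'rZ': 6.0, 'rY': 1.0} -> the spider leg 522 rotates in Z
        # proportionally to the Y rotation of the human upper leg 512.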
  • Each subsequent leg joint of the spider will similarly require offsets, and perhaps match the tarsus action serial to that of a human toe, and metatarsus to that of a human foot.
  • a time based offset or standard value offset could be used to drive the other three legs on the left and three legs on the right of the spider Character B 520 .
  • These settings can be tweaked and recorded in a reusable configuration, which can allow for a spider walking based off of human animation.
  • This spider walking animation, although it is more than a mimic of how a human walks (due to major differences in biomechanical structure), may not necessarily appear natural or correct compared to the motion of a real spider. However, it may serve as a great place to start, rather than having to animate a spider model from scratch. Furthermore, for the purposes of in-game swarms of small spiders or small creatures on-screen, this rough walking animation may actually be enough as is, and therefore be very cheap or free animation at volume.
  • FIG. 6 illustrates an example of how a motion animation can be translated from a non-human character to a different non-human character. More specifically, Creature C 610 is shown alongside a Creature D 620. A set of three-dimensional reference axes 602 is also shown in the lower left-hand corner and the following description is written with the reference axes 602 in mind.
  • FIG. 6 is meant to show a particular example of how motion data can be translated between quadruped-to-quadruped or ‘like’-creatures.
  • the angular base-pose differences between the two creatures can be observed and an offset value can be applied to the normalized values of the motion data, in order to account for the differences in scale, proportion, and anatomy between the two creatures.
  • scale differences between the Creature C 610 and the Creature D 620 can be observed in the length of the spine, the height from floor-to-base-of-the-neck, and the height from floor-to-wrist. There are also differences between the neck and head pose of the Creature C 610 and the Creature D 620 .
  • the offset values used to account for these differences between the two rigs/creatures can be used in either direction (e.g., to drive the animation of one creature using the motion data for the other).
  • motion data for Creature C 610 can be used with a set of offsets in order to drive animation of Creature D 620 , or the other way around.
  • These offsets may be part of one configuration that can be used to drive the translation of motion data in either direction.
  • the offset values between two rigs/creatures can be used in either direction; thus, one configuration can drive either of these creatures from the other. It should be noted that, for translation of motion between similarly structured creatures (like-for-like creatures) with similar or identical anatomical features, but in subtly, or less subtly, differing proportions, such as between two quadrupeds (e.g., two canines), one would expect a more direct animation sharing in which animation translates directly between limbs and biomechanical parts. However, using this system, the result would be one creature “mimicking” the other as opposed to mechanically copying it, due to the proportional differences being considered via the factorization process that translates motion, in the form of actions and units of those actions, from source to target.
  • Scale factors (which may also be considered offsets) can be used to multiply and re-evaluate the action units either for individual anatomical features, or all of them at once. This would produce, for example, a larger creature using the motion of a small creature, but with foot and body positions relative to the larger creature's own size, rather than attempting to exactly match the foot and body positions of the smaller creature, which would not look pleasing nor would it appear to be the same motion in its nature.
  • when scale factors are used to evaluate the motion data, the motion-over-distance of Creature C 610 can be matched with that of Creature D 620 , respective of the weight and size of Creature D 620 . For instance, the length and timing of a foot stride and foot plant will scale with the size of the creature.
  • an animator may want the footsteps to exactly match and may choose not to use a scale-factor in the evaluation of actions. This may be desirable in some instances, such as for example, if a hand/paw is interacting with an object.
  • a foot and body position match can be performed using the same factorization in reverse to negate the difference, if needed.
  • the smaller creature can mimic the larger creature in the same way, and this would effectively be using the reverse scale factor.
  • only the one scale-factor and understanding of the differences between the creatures is needed to use motion from either creature on the other, and vice versa; it is not required to record this configuration twice (once for each creature).
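  • as a simplified, hypothetical sketch (the scale value and helper below are illustrative), a single recorded scale factor can be applied directly in one direction and inverted in the other, so the same configuration serves both creatures:

        # One scale factor between Creature C and Creature D; the inverse drives
        # the reverse direction, so the configuration is recorded only once.
        CREATURE_C_TO_D_SCALE = 1.6                 # hypothetical proportional difference

        def scale_units(action_units, scale):
            return {axis: value * scale for axis, value in action_units.items()}

        stride_c = {"tZ": 5.0}                      # Creature C foot-stride action unit
        stride_d = scale_units(stride_c, CREATURE_C_TO_D_SCALE)              # C drives D
        stride_back = scale_units(stride_d, 1.0 / CREATURE_C_TO_D_SCALE)     # D drives C
        print(stride_d, stride_back)                # {'tZ': 8.0} {'tZ': 5.0}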
  • FIG. 7 illustrates an example of how a motion animation can be translated from a non-human character to a human character. More specifically, a human Character A 710 is shown alongside a horse Character B 720. A set of three-dimensional reference axes 702 is also shown in the lower left-hand corner and the following description is written with the reference axes 702 in mind.
  • the human Character A 710 may mimic the motion of the horse Character B 720 . Offset values may be applied to the motion data for the horse Character B 720 , to position and pose the human rig in a manner that would enable the motion of the horse Character B 720 to drive the motion of the human Character A 710 , as if his fingers were hooves and his arms were the horse's front legs.
  • the head and neck of the human Character A 710 shown in FIG. 7 also received offsets in order to match the eye-line and head trajectory.
  • a human is bound by its differences to the horse.
  • the translated movement performed by the human does not perfectly match the movement of the horse. It only resembles the movement of the horse. This may be attributed to the human pelvis being elevated due to the leg offsets and the tip-toe pose being prioritized to closer match the pose of the horse's legs, ready to enact motion.
  • Scale factors can be used as a layer of offset values to the motion data, in order to closer match the weight and feel of the human's mimicked animation to that of the horse, with respect to the obvious size and mass differences. If the scale factors are not used, the human motion is not likely to resemble the motion of the horse very well, as a result of arm and leg movement being hyper-extended to match the foot-planting of the horse's hooves.
  • FIG. 8 is a flowchart that illustrates how an action code usable for motion translation may be determined, such as by a universal biomechanical expression system.
  • FIG. 8 describes a process of generating an action code for a singular biomechanical part.
  • Animation may be complex and involve the movement of numerous biomechanical parts, which may require the generation of multiple action codes—one for each biomechanical part. That would require performing this process multiple times.
  • Longer animation may also involve sequences of movement and actions, comparable to frames in a video with each frame having a different pose (e.g., full-body pose). Thus, longer animations may require this process to be performed repeatedly for not only the different biomechanical parts, but also across the different poses.
  • the universal biomechanical expression system may determine a biomechanical part associated with movement (e.g., from motion data). Any suitable source for the motion data may be used. For instance, the right shoulder joint of a human character model may be rotated upwards in the Z-axis of rotation in order to raise the right arm of that human character model. This may be performed manually (e.g., an animator manipulates the human character model) or obtained via motion capture from a human actor.
  • the universal biomechanical expression system may look up an action serial or serial number associated with that biomechanical part, such as by referencing a database. For example, the serial number could be a three-digit number that is arbitrarily assigned to analogous biomechanical parts that are common across different characters and/or creatures. For instance, many different creatures have a right shoulder joint and the right shoulder joint for all of them may be associated with the same serial number of 003.
  • the universal biomechanical expression system may determine constraints that are defined for the character for that biomechanical part, such as range(s) of motion and designated neutral position(s). This may be done by referencing a table or database. For example, there may be a table associated with a particular human character that lists all the range(s) of motion and designated neutral position(s) for all the biomechanical parts in that character's anatomy. For each character, any particular biomechanical part may be associated with multiple ranges of motion and designated neutral positions. There may be a defined range of motion and designated neutral position for each of the six axes of freedom (e.g., rotate X, rotate Y, rotate Z, translate X, translate Y, translate Z). As an example, the universal biomechanical expression system may determine that the right shoulder joint for the specific human character has a full range of rotation around the Z-axis of 220 degrees with a designated neutral position in the middle of that range of rotation.
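  • a minimal sketch of such a constraints table (the names and degree values below are hypothetical) might associate each action serial with a minimum, designated neutral, and maximum per axis of freedom:

        # Hypothetical per-character constraints table; values are illustrative.
        HUMAN_A_CONSTRAINTS = {
            "003": {                                                  # right shoulder joint
                "rZ": {"min": -110.0, "neutral": 0.0, "max": 110.0},  # 220 degree full range
                "rY": {"min": -90.0,  "neutral": 0.0, "max": 90.0},
            },
        }

        def get_constraint(constraints, serial, axis):
            # Look up the range of motion and designated neutral position for one part/axis.
            return constraints[serial][axis]

        print(get_constraint(HUMAN_A_CONSTRAINTS, "003", "rZ"))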
  • the universal biomechanical expression system may determine movement values associated with the biomechanical part, in all six axes of freedom (e.g., rotate X, rotate Y, rotate Z, translate X, translate Y, translate Z). These movement values may be in absolute terms. For example, rotational movement may be in Euler units or degrees. In some cases, the movement values may include the stop positions of the biomechanical part after the movement occurs. In some cases, the movement values may describe the change in position of the biomechanical part (e.g., a +45 degree rotation).
  • the universal biomechanical expression system may normalize and/or refactor the movement values based on the constraints (e.g., ranges of motion and designated neutral positions) in order to obtain action units.
  • the right shoulder joint of the character may have a full range of rotation around the Z-axis of 220 degrees with a designated neutral position in the middle of that range of rotation.
  • a normalization scheme can be applied that involves values from −10 to 10, with 10 corresponding to the position at the specific maximum of the range of motion, −10 corresponding to the position at the specific minimum of the range of motion, and 0 corresponding to the designated neutral position.
  • the determined movement value from block 806 and the designated neutral position can be used to determine the rotational position that the right shoulder joint is at, and that rotational position relative to the full range of motion can be used to calculate a normalized value within the normalization scheme (e.g., an action unit).
  • the rotational position of the right shoulder joint in the Z-axis would be at the very maximum of the full range of rotation (e.g., +110 degrees), which would come out to a normalized value of +10.
  • this normalization can be performed for movement in each of the six axes of freedom.
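  • a simplified sketch of this normalization (assuming the −10 to +10 scheme above; the function name is illustrative) is:

        # Normalize an absolute position into a -10..+10 action unit using the
        # character's range of motion and designated neutral position for that axis.
        def to_action_unit(position, neutral, minimum, maximum):
            if position >= neutral:
                span = maximum - neutral
                return 10.0 * (position - neutral) / span if span else 0.0
            span = neutral - minimum
            return -10.0 * (neutral - position) / span if span else 0.0

        # Right shoulder joint, Z-axis: 220 degree full range, neutral in the middle.
        print(to_action_unit(110.0, neutral=0.0, minimum=-110.0, maximum=110.0))  # 10.0
        print(to_action_unit(55.0,  neutral=0.0, minimum=-110.0, maximum=110.0))  # 5.0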
  • the universal biomechanical expression system may generate an action code based on the obtained action units. For instance, as described in regards to FIG. 3 , an action code can be used to represent relative positioning of the referenced biomechanical part within the character's range-of-motion for each of the six axes of freedom.
  • the universal biomechanical expression system may generate this action code based on an established standard or protocol.
  • if the action code is to include corresponding action units for each of the three rotational axes and three translation axes, in the form xyz (e.g., [rotateX, rotateY, rotateZ, translateX, translateY, translateZ]), then the action code that involves only a rotation in the Z-axis to the maximum of the full range of rotation will look like “000010000000”.
  • the action code may be included with the serial number referencing the biomechanical part in order to obtain a unified action code.
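  • as a sketch of this formatting step (matching the example protocol above; negative-unit encoding is not shown), the action units can be packed into a zero-padded action code and prefixed with the action serial:

        # Pack six non-negative action units [rX, rY, rZ, tX, tY, tZ] into a
        # 12-digit action code, then prefix the 3-digit action serial.
        def format_action_code(units):
            return "".join("{:02d}".format(int(u)) for u in units)

        def unified_action_code(serial, units):
            return serial + format_action_code(units)

        print(unified_action_code("003", [0, 0, 10, 0, 0, 0]))   # "003000010000000"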
  • FIG. 9 is a flowchart that illustrates how an action code may be interpreted and used to translate motion, such as by a universal biomechanical expression system.
  • the universal biomechanical expression system may receive an action code including action units, such as via the process described in regards to FIG. 8 .
  • the action code may be part of a unified action code, which will also include information (e.g., an action serial or serial number) that references a particular biomechanical part associated with the movement.
  • the action units will describe relative positioning of the referenced biomechanical part within a range-of-motion for each of the six axes of freedom.
  • the universal biomechanical expression system may receive an action code based on a right shoulder joint rotation in the Z-axis for a human character.
  • the universal biomechanical expression system may determine an analogous biomechanical part for the target character, whose motion is to be driven based on the received action code. This can be determined in a number of ways. For instance, the anatomy of the target character may be similar to the anatomy of the source character used to generate the action code, in which case the analogous biomechanical part may be easily identified and determined (e.g., based on the serial number provided in a unified action code).
  • the analogous biomechanical part for the target character may be the right shoulder joint, which also should be associated with the serial number of “003.”
  • the anatomies of different creatures may differ or a different mapping between biomechanical parts may be preferred.
  • the universal biomechanical expression system may need to be supplied information regarding the source character, as there may be a relationship table, database, or configuration (e.g., file) which provides a mapping of biomechanical parts between two different characters or creatures for the purposes of motion translation.
  • the relationship table or database may define how human biomechanical parts map to badger biomechanical parts. If, upon consulting this reference, the universal biomechanical expression system determines that the right shoulder joint (“003”) in a human maps to the right shoulder joint (“003”) of a badger, then the right shoulder joint of the badger may be selected as the analogous biomechanical part.
  • the mapping may be very granular and describe how each particular axis of freedom of a biomechanical part in a first character corresponds to an axis of freedom of the analogous biomechanical part in the second character.
  • the relationship table, database, or configuration may not only include the mapping of biomechanical parts between two characters, but may also include any offsets that need to be applied at block 908 .
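  • a hypothetical sketch of such a relationship configuration (the names and values are illustrative) might pair the part/axis mapping with the offsets consumed at block 908:

        # Human-to-badger relationship configuration: a source (serial, axis) maps to a
        # target (serial, axis), and offsets are recorded alongside the mapping.
        HUMAN_TO_BADGER_CONFIG = {
            "mapping": {("003", "rZ"): ("003", "rZ")},   # right shoulder -> right shoulder
            "offsets": {("003", "rZ"): 0.0},
        }

        def resolve_target(config, source_serial, axis):
            target_serial, target_axis = config["mapping"][(source_serial, axis)]
            offset = config["offsets"].get((source_serial, axis), 0.0)
            return target_serial, target_axis, offset

        print(resolve_target(HUMAN_TO_BADGER_CONFIG, "003", "rZ"))   # ('003', 'rZ', 0.0)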
  • the universal biomechanical expression system may determine constraints that are defined for the target character for the analogous biomechanical part, such as range(s) of motion and designated neutral position(s). This may be done by referencing a table or database. This table or database may be separate from the relationship table or database. For example, there may be a table associated with the badger character that lists all the range(s) of motion and designated neutral position(s) for all the biomechanical parts in that character's anatomy. Each biomechanical part may be associated with multiple ranges of motion and designated neutral positions, as there may be a defined range of motion and designated neutral position for each of the six axes of freedom (e.g., rotate X, rotate Y, rotate Z, translate X, translate Y, translate Z). For instance, within the context of the example provided in FIG. 4 , the universal biomechanical expression system may determine that a badger character has a limited full range of rotation for the right shoulder joint in the Z-axis.
  • the universal biomechanical expression system may optionally determine and apply offsets to the action units of the action code.
  • the offset values or multipliers applied to the action unit input values are used to represent the anatomical differences between one creature and another.
  • the offset values may affect both the positions and the rotations of target joints. These offset values may be used to enable the target character to mimic the movement of the source character.
  • all the offsets for translating motion between a source creature/character and a target creature/character may be recorded (e.g., in the relationship table, database, or configuration). For instance, there may be a configuration associated with translating motion between a human and a badger that includes all the offsets.
  • the universal biomechanical expression system may consult this configuration and determine the offsets to be applied for rotations of the right shoulder joint in the Z-axis and apply them to the action units of the action code.
  • the universal biomechanical expression system may interpret the action units (with any applied offsets) based on the determined constraints of the target character (at block 906 ). For instance, continuing within the context of the example provided in FIG. 4 , if the action units are primarily associated with a right shoulder joint rotation in the Z-axis, then the universal biomechanical expression system would interpret the action units against the range of rotation for the right shoulder joint of a badger. Movement of the badger character model can then be driven by rotating the right shoulder joint of the badger model.
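  • a simplified sketch of this evaluation step (the helper and the badger range below are assumptions, not values from the disclosure) applies any configured offset and then maps the action unit onto the target's own range for that part and axis:

        # Evaluate an incoming action unit against the target character's constraints.
        def evaluate_for_target(unit, constraint, offset=0.0):
            u = max(-10.0, min(10.0, unit + offset))
            if u >= 0:
                return constraint["neutral"] + (u / 10.0) * (constraint["max"] - constraint["neutral"])
            return constraint["neutral"] + (u / 10.0) * (constraint["neutral"] - constraint["min"])

        # Hypothetical badger right shoulder, Z-axis: a narrower 120 degree range.
        badger_rz = {"min": -60.0, "neutral": 0.0, "max": 60.0}
        print(evaluate_for_target(10.0, badger_rz))   # 60.0: the badger's own maximum flexion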
  • FIG. 10 is a flowchart that illustrates an overview of an example of translating complex animation between characters, such as by a universal biomechanical expression system. More specifically, the flowchart illustrates how the motion data associated with complex animation for a human character can be applied to a dog character.
  • the movements of a human actor can be obtained via motion capture to create a complex animation.
  • an animator can create a complex animation by manipulating a human character model.
  • motion capture may be faster to do than animating by hand.
  • the universal biomechanical expression system may use known motion capture techniques in order to convert the motion capture data into a complex animation for a corresponding human character model.
  • the human character may have predefined constraints (e.g., ranges of motion and designated neutral poses), or those constraints can be defined here.
  • the universal biomechanical expression system may obtain poses (e.g., motion data) from the complex animation.
  • Longer, complex animation involves sequences of movement and actions, but can be broken down into a series of different poses (e.g., full-body pose) that are comparable to frames in a video, with each frame being a different full-body pose.
  • These full-body poses can be sampled from the complex animation and, for each full-body pose (e.g., frame in the animation), the positioning of each biomechanical part in the character model can be determined and recorded.
  • a complex animation can be thought of as a sequence of different collections of action codes.
  • the universal biomechanical expression system may reduce each full-body pose from the complex animation into a collection of action codes using the process described in FIG. 8 , which is performed for every biomechanical part in the pose that is associated with movement. This is done by normalizing the positioning of the biomechanical parts in each full-body pose against the constraints (e.g., ranges of motion, designated neutral position, and so forth) that are defined for the human character.
  • the universal biomechanical expression system may interpret the action codes using dog constraints/offsets, based on the process described in FIG. 9 .
  • the complex animation is represented by a sequence of different collections of action codes, with a collection of action codes defining the movement for all the biomechanical parts in a full-body pose.
  • Each action code is interpreted using the constraints of the dog character after any appropriate offsets have been applied, in order to determine the appropriate positioning of the corresponding biomechanical part within the context of the dog character model.
  • the universal biomechanical expression system may create an animation for the dog character model by applying the collections of action codes, in sequence, to the dog character model.
  • Each collection of action codes, which is associated with a full-body pose, may result in a corresponding full-body pose for the dog character model.
  • These full-body poses can be stitched together to create an animation and interpolation can be used to smooth out the animation.
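  • at a high level (with hypothetical helper callables standing in for the per-part processes of FIGS. 8 and 9), the translation pipeline can be sketched as:

        # Sample full-body poses, reduce each pose to a collection of action codes,
        # interpret each code for the target character, and stitch the target poses.
        def translate_animation(source_poses, encode_pose, decode_code, stitch):
            target_poses = []
            for pose in source_poses:
                codes = encode_pose(pose)                             # per-part, FIG. 8
                target_poses.append([decode_code(c) for c in codes])  # per-code, FIG. 9
            return stitch(target_poses)                               # interpolate between poses

        # Trivial demonstration with placeholder callables:
        print(translate_animation(
            source_poses=[{"003": 10}, {"003": 5}],
            encode_pose=lambda pose: list(pose.items()),
            decode_code=lambda code: {code[0]: code[1] * 6.0},        # pretend 1 unit = 6 degrees
            stitch=lambda poses: poses,
        ))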
  • FIG. 11 illustrates an embodiment of a hardware configuration for a computing system 1100 (e.g., user device 130 and/or universal biomechanical expression system 100 of FIG. 1 ).
  • Other variations of the computing system 1100 may be substituted for the examples explicitly presented herein, such as removing or adding components to the computing system 1100 .
  • the computing system 1100 may include a computer, a server, a smart phone, a tablet, a personal computer, a desktop, a laptop, a smart television, and the like.
  • the computing system 1100 includes a processing unit 1102 that interacts with other components of the computing system 1100 and also components external to the computing system 1100 .
  • a game media reader 1122 may be included that can communicate with game media.
  • Game media reader 1122 may be an optical disc reader capable of reading optical discs, such as CD-ROM or DVDs, or any other type of reader that can receive and read data from game media.
  • the game media reader 1122 may be optional or omitted.
  • game content or applications may be accessed over a network via the network I/O 1138 rendering the game media reader 1122 and/or the game media optional.
  • the computing system 1100 may include a separate graphics processor 1124 .
  • the graphics processor 1124 may be built into the processing unit 1102 , such as with an APU. In some such cases, the graphics processor 1124 may share Random Access Memory (RAM) with the processing unit 1102 .
  • the computing system 1100 may include a discrete graphics processor 1124 that is separate from the processing unit 1102 . In some such cases, the graphics processor 1124 may have separate RAM from the processing unit 1102 . Further, in some cases, the graphics processor 1124 may work in conjunction with one or more additional graphics processors and/or with an embedded or non-discrete graphics processing unit, which may be embedded into a motherboard and which is sometimes referred to as an on-board graphics chip or device.
  • the computing system 1100 also includes various components for enabling input/output, such as an I/O 1132 , a user interface I/O 1134 , a display I/O 1136 , and a network I/O 1138 .
  • the input/output components may, in some cases, include touch-enabled devices.
  • the I/O 1132 interacts with storage element 1103 and, through a device 1142 , removable storage media 1144 in order to provide storage for the computing system 1100 .
  • the storage element 1103 can store a database that includes the failure signatures, clusters, families, and groups of families.
  • Processing unit 1102 can communicate through I/O 1132 to store data, such as game state data and any shared data files.
  • the computing system 1100 is also shown including ROM (Read-Only Memory) 1146 and RAM 1148 .
  • RAM 1148 may be used for data that is accessed frequently, such as when a game is being played, or for all data that is accessed by the processing unit 1102 and/or the graphics processor 1124 .
  • User I/O 1134 is used to send and receive commands between processing unit 1102 and user devices, such as game controllers.
  • the user I/O 1134 can include touchscreen inputs.
  • the touchscreen can be a capacitive touchscreen, a resistive touchscreen, or other type of touchscreen technology that is configured to receive user input through tactile inputs from the user.
  • Display I/O 1136 provides input/output functions that are used to display images from the game being played.
  • Network I/O 1138 is used for input/output functions for a network. Network I/O 1138 may be used during execution of a game, such as when a game is being played online or being accessed online.
  • Display output signals may be produced by the display I/O 1136 and can include signals for displaying visual content produced by the computing system 1100 on a display device, such as graphics, user interfaces, video, and/or other visual content.
  • the computing system 1100 may comprise one or more integrated displays configured to receive display output signals produced by the display I/O 1136 , which may be output for display to a user.
  • display output signals produced by the display I/O 1136 may also be output to one or more display devices external to the computing system 1100 .
  • the computing system 1100 can also include other features that may be used with a game, such as a clock 1150 , flash memory 1152 , and other components.
  • An audio/video player 1156 might also be used to play a video sequence, such as a movie. It should be understood that other components may be provided in the computing system 1100 and that a person skilled in the art will appreciate other variations of the computing system 1100 .
  • Program code can be stored in ROM 1146 , RAM 1148 , or storage 1103 (which might comprise hard disk, other magnetic storage, optical storage, solid state drives, and/or other non-volatile storage, or a combination or variation of these). At least part of the program code can be stored in ROM that is programmable (ROM, PROM, EPROM, EEPROM, and so forth), in storage 1103 , and/or on removable media such as game media 1112 (which can be a CD-ROM, cartridge, memory chip or the like, or obtained over a network or other electronic channel as needed). In general, program code can be found embodied in a tangible non-transitory signal-bearing medium.
  • Random access memory (RAM) 1148 (and possibly other storage) is usable to store variables and other game and processor data as needed. RAM is used and holds data that is generated during the play of the game and portions thereof might also be reserved for frame buffers, game state and/or other data needed or usable for interpreting user input and generating game displays. Generally, RAM 1148 is volatile storage and data stored within RAM 1148 may be lost when the computing system 1100 is turned off or loses power.
  • as computing system 1100 reads game media 1112 and provides a game, information may be read from game media 1112 and stored in a memory device, such as RAM 1148 .
  • data from storage 1103 , ROM 1146 , servers accessed via a network (not shown), or removable storage media 1144 may be read and loaded into RAM 1148 .
  • although data is described as being found in RAM 1148 , it will be understood that data does not have to be stored in RAM 1148 and may be stored in other memory accessible to processing unit 1102 or distributed among several media, such as game media 1112 and storage 1103 .
  • All of the processes described herein may be embodied in, and fully automated via, software code modules executed by a computing system that includes one or more computers or processors.
  • the code modules may be stored in any type of non-transitory computer-readable medium or other computer storage device. Some or all the methods may be embodied in specialized computer hardware.
  • a processor can be a microprocessor, but in the alternative, the processor can be a controller, microcontroller, or state machine, combinations of the same, or the like.
  • a processor can include electrical circuitry configured to process computer-executable instructions.
  • a processor includes an FPGA or other programmable device that performs logic operations without processing computer-executable instructions.
  • a processor can also be implemented as a combination of computing devices, for example, a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Although described herein primarily with respect to digital technology, a processor may also include primarily analog components.
  • a computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an appliance, to name a few.
  • Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, and the like, may be either X, Y, or Z, or any combination thereof (for example, X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
  • terms such as “a device configured to” are intended to include one or more recited devices. Such one or more recited devices can also be collectively configured to carry out the stated recitations.
  • a processor configured to carry out recitations A, B and C can include a first processor configured to carry out recitation A working in conjunction with a second processor configured to carry out recitations B and C.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Psychiatry (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Systems and methods are disclosed for universal body movement translation and character rendering. Motion data from a source character can be translated and used to direct movement of a target character model in a way that respects the anatomical differences between the two characters. The movement of biomechanical parts in the source character can be converted into normalized values based on defined constraints associated with the source character, and those normalized values can be used to inform the animation of movement of biomechanical parts in a target character based on defined constraints associated with the target character.

Description

INCORPORATION BY REFERENCE TO ANY PRIORITY APPLICATIONS
Any and all applications for which a foreign or domestic priority claim is identified in the Application Data Sheet as filed with the present application are incorporated by reference under 37 CFR 1.57 and made a part of this specification.
FIELD OF THE DISCLOSURE
The described technology generally relates to computer technology and, more specifically, to animation.
BACKGROUND
Modern video games often include characters and creatures that have detailed, lifelike movement and animation. This is often implemented through a computationally expensive animation process, through which a 3D model is animated using a complex script. Generally, the 3D model must be manipulated through the entire range of motion captured in the animation. As an example, in order to animate a human character running, a video game modeler may have to utilize software to create a 3D model of the character's body and then separately adjust the pose of the model for each frame in the run animation. In other words, the video game modeler may have to manually adjust a pose of the character model for each step defined in the run animation. Additionally, once the moving animation is scripted, the animation may only be suitably applied to that particular character model, since translating that movement to an entirely different model having different features, dimensions, and extremities may not be possible, may yield unusual results, or may result in the loss of data and fidelity of animation. Thus, hard-coding the moving animation for a character is a process that can result in a large amount of work which is not transferable between characters and creatures, requiring that the distinct animations of each character or creature be created separately.
Accordingly, there exists a need to be able to transfer or translate motion data (e.g., for an animation) between different character or creature models, and even across different projects, video games, and companies. This would greatly reduce the time and cost associated with developing modern video games. Embodiments of the present disclosure address these issues and more.
SUMMARY OF THE DISCLOSURE
Described herein are systems and methods for universal body movement translation and character rendering, such that motion data from a source character can be translated and used to direct movement of a target character model in a way that respects the anatomical differences between the two characters.
This can be done by having a universal language, set of rules, or protocol, for describing the movement (e.g., rotational and translation positioning) of the different biomechanical parts (e.g., moving joints/parts) in the anatomies of animated, three-dimensional character models. While reference is made herein to video games, the techniques described can apply to any scenario in which animated, three-dimensional character models are used, such as films, TV shows, and so on.
As will be described, a three-dimensional character model can be defined for a character of a certain creature type. The various biomechanical parts of the three-dimensional character model may have specifically defined constraints, which can include ranges of motion and neutral positioning, that are associated with that character. The biomechanical parts of the three-dimensional character model can be arranged into different poses (e.g., adjustments from a neutral positioning of that part) and an expression or movement animation may be thought of as a series of full-body poses that are stitched together, with each full-body pose made up of the many poses of the different biomechanical parts.
A pose for a biomechanical part may be converted into a singular action code that indicates the adjustment (e.g., rotational and translational positioning) of the part in all six axes of freedom, normalized for the constraints that are specific to that character. Thus, a complex, full-body pose of a three-dimensional character model can be represented based on a collection of action codes, which represent the combination of adjustments made to the various biomechanical parts of the model to arrive at that full-body pose. As an example, an animator may desire a first biomechanical part of a three-dimensional character model to be in a certain pose and can specify a first action code for adjusting the first biomechanical part of the model. The animator may want a second biomechanical part of the three-dimensional character model to be in a certain pose at the same time, and may therefore combine the first action code with a second action code indicating positioning of the second biomechanical part. In this way, the animator can easily generate complex, full-body poses for a character model.
In other words, an animator can simply specify combinations of action codes to cause generation of a full-body pose of the three-dimensional character model. Alternatively, the animator may be able to move around and adjust the parts of the three-dimensional character model until the desired full-body pose is obtained, and the combination of action codes associated with that full-body pose may be generated. Furthermore, an animation can be thought of as a sequence of full-body poses that are stitched together, which is represented by a series of different collections of action codes.
Thus, the action codes serve as a universal language for describing the movement and positioning of the biomechanical parts in a three-dimensional character model, and animators can rapidly create full-body poses and animations for any particular three-dimensional character model via combinations of action codes. Combinations of these action codes can generate complex poses and animation that are not possible in prior systems.
An example standard or protocol utilized to define the format of these action codes is described herein, but any other standard can be selected and falls within the scope of this disclosure. These action codes may be applied universally to any three-dimensional character model, including different character models of the same or different type of creatures. However, the action codes may be evaluated in a manner that respects the different constraints and anatomical differences associated with each character. In other words, an animator may take a first action code for a first biomechanical part and similarly specify the action code for other target character models.
These target character models will then express the same pose for their first biomechanical part, subject to any relative adjustments made for the constraints or anatomical differences associated with each target character, such as restrictions on the full range of motion for that first biomechanical part. Therefore, the techniques described herein enable an animator to rapidly specify full-body poses used in expressions (e.g., movement animations) via combinations of action codes, even when the characters have different body dimensions (e.g., both a first character and a second character are human beings, but the first character may have longer limbs).
Additionally, while an animator can specify similar action code(s) for each character, the actual resulting poses or expressions of the 3D character model that are generated for each character may be configured to be distinct, if desired. For example, a second character may have a slightly different walking animation than a first character despite using similar action codes due to the second character's biomechanical parts having different restrictions on the full range of motion (e.g., the second character may have a different gait, possibly due to an injury that restricted the range of motion of the second character's legs).
As will be described, these subtle variations may be informed via defined constraints for each character that can be stored in a table or database, as well as offsets that can be defined in a relationship table or database, thus ensuring that different lifelike characters can be realistically animated. Therefore, each character may optionally have unique movement characteristics that can be layered on top of the universal language.
In this way, the techniques described herein allow animators to utilize transferable action codes (which may include a common set of reference codes for fundamental biomechanical parts shared by different characters and/or animals) to create full-body poses and expressions for any character. In contrast to other example systems in which the animators may be required to uniquely adjust the individual 3D models of different characters, frame-by-frame, in order to generate movement animations, an animator may instead rely on the common set of action codes to translate movement animation.
The systems and methods described herein therefore improve the functioning of the computer and address technological problems. In contrast to prior systems, which may rely on individually adjusting a separate 3D model for every character, frame-by-frame, to create unique motion animation for a required scenario, the rules-based approach described herein allows animators to bring realistic uniformity to each character while providing flexibility, speed, and efficiency which has not been possible.
For example, prior systems may require an animator to uniquely arrange a 3D character model into a full-body pose for each frame of a movement animation. Therefore, a character will have only a small defined set of full-body poses which have been rendered. Any modification to the movement animation or the 3D model itself may require significant additional work by the animator. In contrast, the rules-based approach described herein utilizes action codes to describe hundreds, thousands, or more different poses. Additionally, since the rules-based approach relies on a common set of defined biomechanical parts across different characters/animals for the action codes, animators can rapidly specify combinations of action codes for any character.
Furthermore, since expressions and complex animation can be generated from sets of action codes referencing fundamental biomechanical parts, less storage space may be required to animate scenes on a resulting video game. For example, a video game may store pre-rendered animation for a character that was rendered using sets of action codes. Or, a video game system executing the video game may generate and render an animation for a character during runtime of the video game based on sets of action codes. Thus, for video games that utilize full, lifelike, animations for its characters, the techniques described herein can allow for reductions in storage space.
Various aspects of the novel systems, apparatuses, and methods are described more fully hereinafter with reference to the accompanying drawings. Aspects of this disclosure may, however, be embodied in many different forms and should not be construed as limited to any specific structure or function presented throughout this disclosure. Rather, these aspects are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. Based on the teachings herein, one skilled in the art should appreciate that the scope of the disclosure is intended to cover any aspect of the novel systems, apparatuses, and methods disclosed herein, whether implemented independently of or combined with any other aspect. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope is intended to encompass such an apparatus or method which is practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects set forth herein. It should be understood that any aspect disclosed herein may be embodied by one or more elements of a claim.
Although particular aspects are described herein, many variations and permutations of these aspects fall within the scope of the disclosure. Although some benefits and advantages of the preferred aspects are mentioned, the scope of the disclosure is not intended to be limited to particular benefits, uses, or objectives. Rather, aspects of the disclosure are intended to be broadly applicable to any systems and/or devices that could benefit from universal body movement translation. The detailed description and drawings are merely illustrative of the disclosure rather than limiting, the scope of the disclosure being defined by the appended claims and equivalents thereof.
In various embodiments, systems and/or computer systems are disclosed that comprise computer readable storage media having program instructions embodied therewith, and one or more processors configured to execute the program instructions to cause the one or more processors to perform operations comprising one or more aspects of the above- and/or below-described embodiments (including one or more aspects of the appended claims).
In various embodiments, computer-implemented methods are disclosed in which, by one or more processors executing program instructions, one or more aspects of the above- and/or below-described embodiments (including one or more aspects of the appended claims) are implemented and/or performed.
In various embodiments, computer program products comprising computer readable storage media are disclosed, wherein the computer readable storage media have program instructions embodied therewith, the program instructions executable by one or more processors to cause the one or more processors to perform operations comprising one or more aspects of the above- and/or below-described embodiments (including one or more aspects of the appended claims).
In some embodiments, a computer-implemented method is contemplated that includes obtaining motion data for a source character; determining, from the motion data, motion of a source biomechanical part of the source character; determining one or more constraints for the source biomechanical part, including a first range of motion defined for the source biomechanical part and the source character; generating, based on the one or more constraints for the source biomechanical part, an action code representative of the motion of the source biomechanical part; determining, for a target character, a target biomechanical part that corresponds to the source biomechanical part; determining one or more constraints for the target biomechanical part, including a second range of motion defined for the target biomechanical part and the target character; evaluating, based on the one or more constraints for the target biomechanical part, the action code to determine relative motion of the target biomechanical part; and applying the relative motion of the target biomechanical part to a three-dimensional model of the target character. In various embodiments, the method may further include determining one or more offsets associated with the source character and the target character; and prior to evaluating the action code, applying the one or more offsets to the action code.
In some embodiments, a non-transitory computer storage media is contemplated that stores instructions that when executed by a system of one or more computers, cause the one or more computers to perform operations that include: obtaining motion data for a source character; determining, from the motion data, motion of a source biomechanical part of the source character; determining one or more constraints for the source biomechanical part, including a first range of motion defined for the source biomechanical part and the source character; generating, based on the one or more constraints for the source biomechanical part, an action code representative of the motion of the source biomechanical part; determining, for a target character, a target biomechanical part that corresponds to the source biomechanical part; determining one or more constraints for the target biomechanical part, including a second range of motion defined for the target biomechanical part and the target character; evaluating, based on the one or more constraints for the target biomechanical part, the action code to determine relative motion of the target biomechanical part; and applying the relative motion of the target biomechanical part to a three-dimensional model of the target character. In various embodiments, the instructions may further cause the one or more computers to perform operations including: determining one or more offsets associated with the source character and the target character; and prior to evaluating the action code, applying the one or more offsets to the action code.
In some embodiments, a system is contemplated that includes one or more computers and computer storage media storing instructions that when executed by the one or more computers, cause the one or more computers to perform operations including: obtaining motion data for a source character; determining, from the motion data, motion of a source biomechanical part of the source character; determining one or more constraints for the source biomechanical part, including a first range of motion defined for the source biomechanical part and the source character; generating, based on the one or more constraints for the source biomechanical part, an action code representative of the motion of the source biomechanical part; determining, for a target character, a target biomechanical part that corresponds to the source biomechanical part; determining one or more constraints for the target biomechanical part, including a second range of motion defined for the target biomechanical part and the target character; evaluating, based on the one or more constraints for the target biomechanical part, the action code to determine relative motion of the target biomechanical part; and applying the relative motion of the target biomechanical part to a three-dimensional model of the target character.
In various embodiments, the one or more offsets associated with the source character and the target character are stored in a configuration associated with motion translation between the source character and the target character. In various embodiments, the motion of the source biomechanical part includes at least one of: a rotation around the X-axis, a rotation around the Y-axis, a rotation around the Z-axis, a translation in the X-axis, a translation in the Y-axis, and a translation in the Z-axis. In various embodiments, the first range of motion defined for the source biomechanical part and the source character is a range for one of: a rotation around the X-axis, a rotation around the Y-axis, or a rotation around the Z-axis. In various embodiments, the second range of motion defined for the target biomechanical part and the target character is a range for one of: a rotation around the X-axis, a rotation around the Y-axis, or a rotation around the Z-axis. In various embodiments, the action code includes a serial identifying the source biomechanical part. In various embodiments, the action code represents the motion of the source biomechanical part for each of: a rotation around the X-axis, a rotation around the Y-axis, a rotation around the Z-axis, a translation in the X-axis, a translation in the Y-axis, and a translation in the Z-axis. In various embodiments, generating the action code may include normalizing the motion of the source biomechanical part using the one or more constraints for the source biomechanical part and a normalization scheme. In various embodiments, the normalization scheme includes a range of values between −10 and 10.
BRIEF DESCRIPTION OF THE DRAWINGS
The following drawings and the associated description herein are provided to illustrate specific embodiments of the disclosure and are not intended to be limiting.
FIG. 1 illustrates a block diagram of an example universal biomechanical expression system.
FIG. 2 illustrates an example of how a motion animation can be translated from a human character to a different human character, in accordance with embodiments of the present disclosure.
FIG. 3 illustrates an example protocol for a unified action code, in accordance with embodiments of the present disclosure.
FIG. 4 illustrates an example of how a motion animation can be translated from a humanoid character to a non-humanoid character having a similar body structure, in accordance with embodiments of the present disclosure.
FIG. 5 illustrates an example of how a motion animation can be translated from a humanoid character to a non-humanoid character having a different body structure, in accordance with embodiments of the present disclosure.
FIG. 6 illustrates an example of how a motion animation can be translated from a non-human character to a different non-human character, in accordance with embodiments of the present disclosure.
FIG. 7 illustrates an example of how a motion animation can be translated from a non-human character to a human character, in accordance with embodiments of the present disclosure.
FIG. 8 is a flowchart that illustrates how an action code usable for motion translation may be determined, such as by a universal biomechanical expression system.
FIG. 9 is a flowchart that illustrates how an action code may be interpreted and used to translate motion, such as by a universal biomechanical expression system.
FIG. 10 is a flowchart that illustrates an overview of an example of translating complex animation between characters, such as by a universal biomechanical expression system.
FIG. 11 is a block diagram of an example computing system.
DETAILED DESCRIPTION
This specification describes systems and methods for utilizing universal languages for action codes that can be used to specify poses, full-body poses, and expressions (e.g., animation) that can be applied to three-dimensional character models of different animated characters (e.g., animated characters in video games, films, and so on). For example, an animator can specify a particular collection of action codes according to a universal language, and different animated character models can automatically update to present the particular pose associated with those action codes. A series of these collections of action codes can be used in sequence in order to obtain expressions and more complex animation. Furthermore, these universal action codes may allow complex animation for a target character to be quickly generated based on motion data associated with a source character, by translating the motion data into action codes that can then be applied to the target character.
As will be described, each animated character may have a distinct version of specified poses. As an example, each character may have unique restrictions on the full range of motion of its biomechanical parts, such that the specified pose will be adjusted based on those different parameters. As another example, each animated character may have a unique body shape (e.g., they may be associated with different types of animals), such as different numbers of limbs, different limb lengths or designs, and so forth. These differences may result in each animated character having a distinct version of a specific pose. Adjustments to the poses may be made in order to express these differences, resulting in lifelike and unique-looking animated characters.
In order to facilitate an understanding of the systems and methods discussed herein, a number of terms are described below. The terms described below, as well as other terms used herein, should be construed broadly to include the provided definitions, the ordinary and customary meaning of the terms, and/or any other implied meaning for the respective terms.
As used herein, a three dimensional character model, also referred to as a three-dimensional character model, a three-dimensional model, or a character model, can refer to a wire-frame mesh, or point-cloud, model of a body, with textures (e.g., blended textures) on the model representative of the body. For example, images of a person (e.g., an actor) may be obtained via a camera rig. These images can be utilized to generate a point-cloud of the person's body, in which points with location and depth information are connected via vertices. A modeler (e.g., a blend-shape artist) can modify the point-cloud, blend textures, and so on, to generate a three-dimensional character model based on the person's body. The character model can be divided into a plurality of sections or portions that are associated with the various biomechanical parts in the anatomy associated with the character model (e.g., the skeletal system of the underlying creature). In some embodiments, each biomechanical part may be associated with a serial number. In some embodiments, each biomechanical part may be associated with a neutral pose (e.g., based on a neutral position within each of the six axes of freedom). The biomechanical parts may be adjusted relative to the neutral pose to conform to each of a multitude of poses defined by the action codes.
As used herein, a biomechanical part may be any moving part in an animal, such as a joint, a bone, cartilage, muscle tissue, and so forth. In some instances, there may be analogous biomechanical parts between different animals, though they need not have the same structure and/or function. For example, many birds and mammals have necks that provide the head additional rotational flexibility. The neck in each of these animals may be considered to be analogous biomechanical parts. However, fish do not have necks, but rather a series of bones that connect their skull to the shoulder girdle. That series of bones could be considered either analogous or not analogous to a neck. These relationships between biomechanical parts of different animals may be defined within a relationship table and used to facilitate the mapping of movement animations between different animals. In some embodiments, analogous biomechanical parts across different animals may be assigned the same serial number reference code.
As used herein, a range of motion may include a range of rotation or range of translation for a biomechanical part of a character in a given axis of movement. The range of motion may be described in absolute terms (e.g., Euler units or degrees for the range of rotation).
A range of rotation may be associated with a specific minimum and a specific maximum, which may refer to the rotational limits for a joint or body part, in a particular axis of rotation, of a particular character or creature. For example, owls are well known to be able to horizontally rotate their heads (e.g., around the Y-axis) up to 270 degrees either left or right in order to see over their shoulder. If the neutral pose associated with the owl is looking straight ahead, then a designated neutral position within the available range of rotation in the Y-axis can be defined as the zero degree position. The owl rotating its head fully to the left to look behind it may be considered the −270 degree position and the owl rotating its head fully to the right to look behind it may be considered the +270 degree position. Under this reference system, the −270 degree position (e.g., a full leftward rotation) may be considered the specific minimum and the +270 degree position (e.g., a full rightward rotation) may be considered the specific maximum associated with the full range of rotation around the Y-axis for the owl's head. (However, it should be noted that under an alternate reference system, the directions may be swapped, such that the −270 degree position may be associated with a full rightward rotation and a +270 degree position may be associated with a full leftward rotation. Either is acceptable, as long as the orientation of the reference system remains consistent across different animals.)
In some embodiments, the full range of motion for a joint or body part, in a particular axis, for a specific character or animal, can be used to normalize motion data and generate action codes that can be applied across different characters or animals. Any suitable numerical, unit-less range may be used for the normalization, such as −1 to 1, 0 to 10, and so forth. For instance, in some embodiments, positions within the range of motion for a joint or body part may be normalized and expressed based on a range of −1 to 1, such that −1 is associated with the specific minimum of the range of motion and 1 is associated with the specific maximum of the range of motion. When this normalized range is applied to the owl in the previous example, which is capable of rotating its head around the Y-axis up to 270 degrees to the left or right, a normalized value of −1 may be associated with a full leftward rotation (e.g., the −270 degree position) and a normalized value of 1 may be associated with a full rightward rotation (e.g., the +270 degree position). However, in comparison to an owl, a human being may only be capable of rotating their head around the Y-axis up to 90 degrees to the left or right. Thus, when the same normalization scheme is applied to the full range of motion of the human being, a normalized value of −1 may be associated with a full leftward rotation (e.g., the −90 degree position) and a normalized value of 1 may be associated with a full rightward rotation (e.g., the +90 degree position). This normalization may allow animation data to be meaningfully transferred between animals. In some embodiments, the full range of motion for each joint or body part, in each particular axis, may be pre-defined for different characters and creatures. Those values may be stored in a table or database in order to enable the translation of normalized values.
As used herein, the normalized position of a biomechanical part may refer to the position of a particular biomechanical part once it is normalized (e.g., made unitless) against the full range of motion for that biomechanical part and character in a particular axis, based on the chosen normalization scheme. This normalized position may be referred to as an action unit, and an action code may include a set of action units describing the normalized position of a biomechanical part in each axis of freedom. As an example, the chosen scale for normalization may be between −10 and 10, such that −10 corresponds to the specific minimum of the full range of rotation and 10 corresponds to the specific maximum of the full range of rotation. An owl, which is capable of rotating its head around the Y-axis up to 270 degrees to the left or right, may have its head in the rotational position of −135 degrees (e.g., halfway towards the full leftward rotation). The normalized rotational position may be considered −5 in the normalized scale (e.g., −135/−270=X/−10, solving for X).
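For purposes of illustration only, the following Python sketch shows one way such a normalization could be computed; the helper name and data layout are hypothetical assumptions and are not part of the disclosed system.

```python
# Illustrative sketch only: normalize an absolute position (e.g., degrees)
# into a unitless action unit on a -10..10 scale, relative to a designated
# neutral position and the character-specific minimum/maximum.

def to_action_unit(position, minimum, maximum, neutral=0.0, scale=10.0):
    """Map an absolute position within [minimum, maximum] to [-scale, scale]."""
    span = (maximum - neutral) if position >= neutral else (neutral - minimum)
    if span == 0:
        return 0.0
    return scale * (position - neutral) / abs(span)

# Owl head rotation around the Y-axis: full range of -270 to +270 degrees.
# A rotational position of -135 degrees (halfway toward the full leftward
# rotation) normalizes to -5 under the -10..10 scheme.
print(to_action_unit(-135, minimum=-270, maximum=270))  # -5.0
```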
As used herein, an action serial or serial number may be a reference code or identifier used to reference a particular biomechanical part (e.g., a joint or body part, or the corresponding analogous joint or body part) across different animals. For instance, all recognized biomechanical parts may be indexed and given a serial number that serves as a reference code. As a more specific example, the left knee joint in humans and primates (e.g., the joint between the femur, tibia, and patella) may correspond to the stifle joint in the left hind leg of quadrupeds such as dogs, horses, and mice (e.g., the joint between the femur, tibia, and patella). The same reference code (e.g., the number 033) may be used to refer to the joint across the different animals. Some animals may have unique biomechanical parts with their own reference codes.
As used herein, an action code may be an identifier that informs of the relative positioning of a particular biomechanical part within its full range of motion for each axis of freedom (e.g., at a singular point in time). The action code may include a set of action units, with each action unit describing the relative positioning of a particular biomechanical part within its full range of motion for a specific axis of freedom. In some embodiments, an action code may be a unified action code, which includes an action serial or reference code associated with a particular biomechanical part. Action codes are described in further detail in regards to FIG. 3 .
As used herein, a pose may be associated with its ordinary meaning as it pertains to a biomechanical part (e.g., a joint or body part), but it may also be associated with an adjustment of the biomechanical part from a neutral position or the relative positioning of a particular biomechanical part (e.g., a joint or body part) at a singular point in time. A pose for a biomechanical part may be captured and described using an action code.
As used herein, a full-body pose may be associated with its ordinary meaning as it pertains to one or more biomechanical parts, up to all of the biomechanical parts within a character model. In some embodiments, a full-body pose may be associated with the adjustments of the biomechanical parts from their neutral position or the relative positioning of the biomechanical parts at a singular point in time. Thus, a full-body pose may be captured and described using a collection of action codes, one for each biomechanical part associated with movement.
As used herein, an expression or movement animation may be associated with a series of full-body poses captured over time (e.g., frames within a movement animation). In some embodiments, an expression may be communicated as a series of different collections of action codes. If each collection of action codes is used to render a full-body pose that serves as a frame in the movement animation, the various full-body poses can be stitched together in sequence to create the movement animation. Additional interpolation can also be used to smooth out the animation.
As used herein, a biomechanical parts table or database may be used to define the constraints associated with the biomechanical parts of a particular character or creature. For example, this reference may list the full range of motion of each biomechanical part, in each axis, for each creature and/or character. This may further enable movement animations to be translated between different characters, since motion data can be normalized against the different full ranges of motion specified for the biomechanical parts of each character. For example, an owl may be able to horizontally rotate its head (e.g., around the Y-axis) up to 270 degrees either left or right in order to see over its shoulder. Thus, this reference may include this defined full range of motion on the Y-axis for the neck for owls. A human being may be able to horizontally rotate their head (e.g., around the Y-axis) up to 90 degrees either left or right. This reference may include this defined full range of motion around the Y-axis for the neck of human beings. The use of this information is described in regards to FIGS. 8, 9, and 10.
As used herein, a relationship table or database may map out the relationships between biomechanical parts of different characters or creatures when translating motion between characters or creatures. Information in the relationship table or database can also be included in a configuration associated with two different characters or creatures to facilitate the translation of motion between those two different characters or creatures. In some embodiments, the relationship table or database may list a mapping of biomechanical parts between creatures or characters. This may enable motion data to be translated between characters or creatures by having the movement for a biomechanical part in a first creature be translated into movement for a specific biomechanical part of a second creature. In some embodiments, the relationship table may also list offsets that may be applied when translating motion data between two different characters or creatures.
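As a rough illustration of how such tables might be laid out, the following sketch uses hypothetical serial numbers, axis names, and offset values; none of these values are taken from the disclosure.

```python
# Hypothetical layouts for a biomechanical parts table and a relationship table.
# Serial numbers, ranges, and offsets are placeholders for illustration only.

# Per-creature constraints: full range of motion (degrees) per part, per axis.
biomechanical_parts_table = {
    "owl":   {"neck": {"rotateY": {"min": -270, "max": 270, "neutral": 0}}},
    "human": {"neck": {"rotateY": {"min": -90,  "max": 90,  "neutral": 0}}},
}

# Relationship between two creatures: part mapping, axis remapping, and offsets.
relationship_table = {
    ("human", "spider"): {
        "part_map": {"033": ["118", "119"]},    # source serial -> target serial(s)
        "axis_map": {"rotateY": "rotateZ",      # source axis -> target axis
                     "rotateZ": "rotateY"},
        "offsets":  {"118": {"rotateZ": 2.5}},  # additive action-unit offsets
    },
}
```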
As used herein in reference to user interactions with data displayed by a computing system, “user input” is a broad term that refers to any type of input provided by a user that is intended to be received and/or stored by the system, to cause an update to data that is displayed by the system, and/or to cause an update to the way that data is displayed by the system. Non-limiting examples of such user input include keyboard inputs, mouse inputs, digital pen inputs, voice inputs, finger touch inputs (e.g., via touch sensitive display), gesture inputs (e.g., hand movements, finger movements, arm movements, movements of any other appendage, and/or body movements), and/or the like. Additionally, user inputs to the system may include inputs via tools and/or other objects manipulated by the user. For example, the user may move an object, such as a tool, stylus, or wand, to provide inputs. Further, user inputs may include motion, position, rotation, angle, alignment, orientation, configuration (e.g., fist, hand flat, one finger extended, etc.), and/or the like. For example, user inputs may comprise a position, orientation, and/or motion of a hand and/or a 3D mouse.
As used herein, a data store can refer to any computer readable storage medium and/or device (or collection of data storage mediums and/or devices). Examples of data stores include, but are not limited to, optical disks (e.g., CD-ROM, DVD-ROM, etc.), magnetic disks (e.g., hard disks, floppy disks, etc.), memory circuits (e.g., solid state drives, random-access memory (RAM), etc.), and/or the like. Another example of a data store is a hosted storage environment that includes a collection of physical data storage devices that may be remotely accessible and may be rapidly provisioned as needed (commonly referred to as “cloud” storage).
As used herein, a database can refer to any data structure (and/or combinations of multiple data structures) for storing and/or organizing data, including, but not limited to, relational databases (e.g., Oracle databases, mySQL databases, and so on), non-relational databases (e.g., NoSQL databases, and so on), in-memory databases, spreadsheets, comma separated values (CSV) files, eXtendible markup language (XML) files, TeXT (TXT) files, flat files, spreadsheet files, and/or any other widely used or proprietary format for data storage. Databases are typically stored in one or more data stores. Accordingly, each database referred to herein (e.g., in the description herein and/or the figures of the present application) is to be understood as being stored in one or more data stores.
Example Universal Biomechanical Expression System
With regards to the figures, FIG. 1 illustrates a block diagram of an example universal biomechanical expression system 100. The universal biomechanical expression system 100 can be a system of one or more computers, one or more virtual machines executing on a system of one or more computers, and so on. As described above, the universal biomechanical expression system 100 may be able to store and analyze motion data associated with a three-dimensional character model.
The three-dimensional character model can be generated in multiple ways and any suitable method will do. For example, in order to initially generate a three-dimensional character model for a character, in some cases, an animator may rely on full-body images or scans of a subject (e.g., a real-life person or animal). For example, one or more cameras may be used to capture images of the entire body of the subject from different angles. Optionally, depth sensors (e.g., lidar, infrared points being projected onto the body, stereo cameras, and so on) may be utilized to obtain accurate depth information of the subject's body. While these images are being captured, the actor may be requested to make poses with portions (e.g., biomechanical parts) of their body. The system can obtain the captured images and generate a photogrammetric model of the subject's body (e.g., a point cloud of the subject's body, such as points connected via vertices).
The photogrammetric model can be used to generate a three-dimensional character model that will be imported into a video game. The three-dimensional model can include textures for the subject's body, and preserve a portion of the vertices included in the point cloud. The three-dimensional character model may be further optimized for processing and storage constraints. The generated three-dimensional model may have biomechanical parts in neutral (e.g., resting) positions, from which the positions of the biomechanical parts may be adjusted. The biomechanical parts of the three-dimensional character model may also be adjusted and manipulated using action codes. Different characters may have different neutral positions for their biomechanical parts and an action code may inform of how a particular biomechanical part should be adjusted from the neutral position of that biomechanical part, relative to the full range of motion of the biomechanical part for the character. Furthermore, additional offsets or differences may be applied to adjust the biomechanical part for that character. These little variations serve to tie the poses and expressions of the character model to a realistic, lifelike character (e.g., person or animal). In some cases, images can be used to help render the biomechanical parts of the character model in different poses. For example, a character model with its different poses (including all the biomechanical parts in their neutral positions) may be stored in one or more databases. With respect to each biomechanical part of a character, a range can be defined for its permitted movement along with a neutral position within that range. For example, the maximum limit that a head can be turned left or right on the Y-axis can be provided, along with the neutral position within that range (e.g., the resting position may be looking straight ahead).
Motion data (e.g., for a complex animation) for a three-dimensional character model can also be generated in multiple ways. An animator may be able to manipulate the biomechanical parts of the three-dimensional character model by hand for each frame of a complex animation, or the animator may be able to provide a combination of action codes for each frame that is used to adjust the biomechanical parts of the three-dimensional character model. For human characters where motion capture is a possibility, motion capture can be used to capture movement from a human actor that can be converted to motion data applicable to the three-dimensional character model. This motion data can then be translated and used by the universal biomechanical expression system 100 to drive the animation of a different three-dimensional character model.
Thus, in some embodiments, the universal biomechanical expression system 100 may include a camera 104, as illustrated, taking images or video of an actor 102. While the example includes one camera 104, it should be understood that multitudes of cameras can be utilized. For example, the cameras may be included in a camera rig, with each camera capturing a high-resolution image of a specific portion of the actor's 102 body. Additionally, two or more cameras may capture a same portion of the actor's 102 body, but taken at different angles (e.g., stereo cameras). In this way, depth information can be obtained. Various images may be captured and utilized to generate a more complete three-dimensional model of the actor. The camera 104 may also be used to capture movement from the actor 102 that can be applied to a three-dimensional character model.
The universal biomechanical expression system 100 includes a capture engine 110 that can receive the images captured by the camera(s) 104 and generate a three-dimensional character model. For example, a user of the user device 130 can generate a three-dimensional model of the actor's 102 body. To generate this three-dimensional model, the capture engine 110 can combine (e.g., stitch together) images of the actor's body, and generate a point cloud of the body. For example, the point cloud can include multitudes of points defining depth associated with the actor's 102 body at a respective location. This point cloud can therefore represent an accurate model of a topology of the actor's 102 body. The capture engine 110 can output the point cloud, for example for presentation on the user device 130, and the user can generate a three-dimensional model of a character based on the point cloud.
As illustrated, the universal biomechanical expression system 100 is in communication with a user device 130 of a user (e.g., a modeler, an animator, and so on). The user device 130 can be a desktop computer system, a laptop, a tablet, a mobile device, a wearable computer, and so on. Optionally, the universal biomechanical expression system 100 may be connected (e.g., a wireless or wired connection) to a display, and a user can directly utilize the universal biomechanical expression system 100. Optionally, the universal biomechanical expression system 100 may implement a web application which the user device 130 can access. That is, the user device 130 can present a web page or a user interface associated with an application 132 executing on the user device 130 (e.g., an ‘app’ obtained from an electronic application store, a web application, and so on). The universal biomechanical expression system 100 can then provide information to the user device 130 for inclusion in the web page or application 132. For example, the user can provide user interactions, such as a combination of action codes, to the user device 130, and the universal biomechanical expression system 100 can receive these user interactions and generate an output associated with them (e.g., a resulting pose from a combination of action codes).
In some embodiments, the universal biomechanical expression system 100 may be able to take the captured images or motion capture data of an actor's body during movement (e.g., as frames of a movement animation or full-body poses associated with the three-dimensional character model) and convert it into raw motion data (e.g., the positions of each biomechanical part in the model) associated with the three-dimensional character model. In some embodiments, the universal biomechanical expression system 100 may be able to take motion data for a first character model and then translate it to action codes that can be applied to animate a second character model. As a practical outcome, this may effectively enable the movement of the second character model to mimic the motion data captured from the actor 102 that was applied to a first character model.
In order to do this translation, the universal biomechanical expression system 100 may have a mapping engine 120 that is configured to consult a biomechanical part database 122. The biomechanical part database 122 may include a serial number or reference code associated with each biomechanical part of every character, as well as constraints associated with those biomechanical parts. The constraints may include the full range of motion of each biomechanical part, in each axis, for each creature and/or character. The constraints may also include the designated neutral positions of each biomechanical part, in each axis, for each creature and/or character. In order to generate an action code associated with motion data for a biomechanical part of a source character, the universal biomechanical expression system 100 may consult the biomechanical part database 122 to determine the serial number and the constraints associated with that biomechanical part of the character. That information can be used to normalize the motion data and generate the action code. In some embodiments, the biomechanical part database 122 may organize information in an object-oriented manner, such that default constraints and serial numbers associated with biomechanical parts are defined for different creatures. The biomechanical part database 122 may also include any modified constraints associated with a specific character within each of those creature types, but in the absence of any constraints that are particularly defined for that specific character, the constraints associated with the overall creature type may be used. For example, the biomechanical part database 122 may define a range of rotation in the Y-axis for the head of human beings to be 90 degrees. There may be two human characters, Jack and Jill, which are also listed within the biomechanical part database 122. Jack has an additional constraint: the range of rotation in the Y-axis for his head is only 70 degrees. In this scenario, the mapping engine 120 may use the range of 70 degrees for evaluating action codes associated with Jack, while using the range of 90 degrees applied to humans overall for evaluating action codes associated with Jill.
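A minimal sketch of this fallback behavior, using the Jack and Jill example above, might look as follows; the function and data layout are illustrative assumptions rather than the disclosed implementation.

```python
# Illustrative only: resolve constraints for a character, falling back to the
# creature-level defaults when no character-specific override exists.

CREATURE_CONSTRAINTS = {
    "human": {"head": {"rotateY": {"min": -90, "max": 90, "neutral": 0}}},
}
CHARACTER_OVERRIDES = {
    "Jack": {"head": {"rotateY": {"min": -70, "max": 70, "neutral": 0}}},
    "Jill": {},  # no overrides; creature-level defaults apply
}

def resolve_constraints(creature, character, part, axis):
    override = CHARACTER_OVERRIDES.get(character, {}).get(part, {}).get(axis)
    if override is not None:
        return override
    return CREATURE_CONSTRAINTS[creature][part][axis]

print(resolve_constraints("human", "Jack", "head", "rotateY"))  # Jack: 70-degree range
print(resolve_constraints("human", "Jill", "head", "rotateY"))  # Jill: human default, 90-degree range
```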
The mapping engine 120 may also be configured to consult a relationship database 140. The relationship database 140 may serve to map out the relationships between biomechanical parts of different characters or creatures when translating motion between those characters or creatures. This relationship information may include serial numbers for corresponding biomechanical parts between two characters or creatures, how the different axes of freedom may be mapped between those biomechanical parts, and also any offset values that need to be applied to the action units of the action code when translating motion between the two characters or creatures (in either direction). For instance, if motion data for a right upper leg joint is being translated from a human being to a spider, then the relationship database 140 can be consulted to determine the relationship between humans and spiders with the serial number of the right upper leg joint, and it may list the serial number(s) of the corresponding biomechanical parts within a spider anatomy, how the action unit input values for the human right upper leg joint should be mapped to different axes of those corresponding biomechanical parts (e.g., rotations in the X-axis of the human right upper leg joint should correspond to rotations in the Y-axis of a corresponding biomechanical part in the spider), and any offset values that need to be additionally applied to the action unit input values. In some embodiments, the relationship database 140 may organize information in an object-oriented manner, such that information is defined and organized by different pairs of creatures. The relationship database 140 may also include any particular modifications associated with translations for specific characters within each of those creature types, but in the absence of that information, the relationship information associated with the overall pair of creatures may be used. For example, the relationship database 140 may define overall relationship data for translating motion between humans and spiders. However, there may be two separate spider characters, Spider A and Spider B. Spider A may have additional relationship information, such as a different set of offsets to apply when translating motion data from humans to Spider A. In this scenario, the mapping engine 120 may use the offsets specific to Spider A when translating motion data to Spider A, while using the overall spider offsets when translating motion data for Spider B.
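Continuing the illustrative sketch, per-pair relationship data with character-specific overrides could be resolved roughly as follows; the Spider A offset values shown here are hypothetical placeholders.

```python
# Illustrative only: pick translation settings for a creature pair, preferring
# character-specific entries (e.g., Spider A) over the pair-level defaults.

PAIR_RELATIONSHIPS = {
    ("human", "spider"): {
        "default_offsets": {"118": {"rotateZ": 2.5}},
        "character_offsets": {
            "Spider A": {"118": {"rotateZ": 4.0}},  # hypothetical override
            # Spider B has no entry, so the pair-level defaults are used.
        },
    },
}

def resolve_offsets(source_creature, target_creature, target_character):
    pair = PAIR_RELATIONSHIPS[(source_creature, target_creature)]
    return pair["character_offsets"].get(target_character,
                                         pair["default_offsets"])

print(resolve_offsets("human", "spider", "Spider A"))  # Spider A's own offsets
print(resolve_offsets("human", "spider", "Spider B"))  # pair-level defaults
```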
Examples of Translating Motion Animation Between Humanoids
FIG. 2 illustrates an example of how a motion animation can be translated from a human character to a different human character. More specifically, four different human characters are shown in FIG. 2 , including Character A 210, Character B 220, Character C 230, and Character D 240. A three-dimensional reference axes 202 is also shown in the lower left hand corner and the following description is written with the reference axes 202 in mind.
For any common biomechanical part, each different character may have a distinct full range of movement (e.g., rotation or translation) and designated neutral position for each of the six axes of freedom. For the purpose of facilitating understanding of this concept, the full range of rotation around the Z-axis for the right shoulder of all four human characters is shown.
This full range of rotation may be described in absolute terms (e.g., in Euler units or degrees). The designated neutral position (also referred to as the neutral pose) of rotation around the Z-axis for the right shoulder is also shown for all four human characters. The designated neutral position may serve as a reference point (e.g., the 0 degree position) within the full range of rotation, which divides the corresponding full range of rotation into a positive range (e.g., from zero to a specific maximum) and a negative range (e.g., from zero to a specific minimum). Based on this convention, the rotational position of the right shoulder can be described as a positive or negative number of degrees, and any rotation of the right shoulder can be described as a positive rotation (e.g., towards the positive range) or a negative rotation (e.g., towards the negative range). The biomechanical terms of flexion and extension may also be used to refer to these rotational directions.
For instance, FIG. 2 shows the right shoulder of Character A 210 having a 220 degree full range of rotation 212 around the Z-axis with a designated neutral position 214 at the midpoint of the full range of rotation 212. This divides the full range of rotation 212 into a positive range from 0 degrees to 110 degrees (e.g., the specific maximum) and a negative range from 0 degrees to negative 110 degrees (e.g., the specific minimum).
These rotational ranges are subject to the physical limits associated with Character A 210 and represent available movement in the flexion/extension rotational Z-axis of this character's right shoulder joint. For example, if Character A 210 had an injury at the time of performance, or the animation made for this character suggested an injury, it would result in affected movement. This would be reflected in the rotational ranges associated with Character A 210. When translating motion between two characters, the motion (e.g., a series of rotational positions) can be translated from a first character to a second character based on the defined rotational ranges for both characters (e.g., by re-factoring the ranges using a normalization scheme), which allows these idiosyncrasies and the underlying nature-of-movement to be preserved. This re-factorization process respects the relative differences in physical nature between two or more characters and produces a different translation result compared to retargeting, in which motion data (e.g., rotational positions) is simply applied from one character to another in absolute terms (e.g., in Euler units or degrees) using an approximated pose-match.
The right shoulder of Character B 220 is shown having a 235 degree full range of rotation 222 around the Z-axis, with a designated neutral position 224 that separates the full range of rotation 222 into a positive range from 0 degrees to 125 degrees (e.g., the specific maximum) and a negative range from 0 degrees to −110 degrees (e.g., the specific minimum).
The right shoulder of Character C 230 is shown also having a 235 degree full range of rotation 232 around the Z-axis, with a designated neutral position 234 that separates the full range of rotation 232 into a positive range from 0 degrees to 120 degrees (e.g., the specific maximum) and a negative range from 0 degrees to −115 degrees (e.g., the specific minimum). Thus, the designated neutral position 234 of Character C 230 is different from the designated neutral position 224 of Character B 220 despite both characters having similar, 235 degree full ranges of rotation.
Finally, the right shoulder of Character D 240 is shown having a 205 degree full range of rotation 242 around the Z-axis, with a designated neutral position 244 that separates the full range of rotation 242 into a positive range from 0 degrees to 90 degrees (e.g., the specific maximum) and a negative range from 0 degrees to −115 degrees (e.g., the specific minimum).
Thus, it can be seen from FIG. 2 that, for just the Z-axis of rotation for the right shoulder alone, there may be a distinct full range of rotation and designated neutral position that is defined for each different character. This is a simplistic example, as each character may have a full range of movement and designated neutral position defined for each axis of freedom of not only the right shoulder, but every other biomechanical part as well.
Furthermore, it should be noted that the designated neutral position does not necessarily have to divide a full range of rotation into two equal ranges in both the positive and negative directions. The ratio between the positive range and the negative range may vary depending on the designated neutral position relative to the full range of movement for that movement axis, which may change based on the character and the creature's anatomical structure. For instance, even though the designated neutral position 214 for Character A 210 divides the full range of rotation 212 into two equal movement ranges (resulting in a 1:1 ratio between the positive and negative ranges), the other characters depicted in FIG. 2 have different ratios between the positive and negative ranges.
In order to make it easier to animate the movement of character models and translate movement between different characters, the full range of motion for a biomechanical part can be described using rescaled, normalized values instead. Any suitable normalization scheme may be used. For instance, normalized values between −1 and 1 can be used, with −1 corresponding to the specific minimum of the range of motion, 1 corresponding to the specific maximum of the range of motion, and 0 corresponding to the designated neutral position. However, these values can be factorized to make them easier to work with, such as factorizing the previous normalized range by 10 (e.g., 0-10 is a more animation-friendly unit range than 0-1), or a different normalization scheme can be chosen altogether.
In an example normalization scheme where the rotational position can be described using normalized values between −10 and 10, a normalized rotational value of 10 would correspond to the rotational position at the specific maximum, a normalized rotational value of −10 would correspond to the rotational position at the specific minimum, and a normalized rotational value of 0 would correspond to the designated neutral position. Thus, for Character A 210, a normalized rotational value of 10 for the right shoulder around the Z-axis would correspond to a rotational position of +110 degrees (e.g., Character A 210 has their right upper arm lifted as high as possible), whereas a normalized rotational value of 5 would correspond to a rotational position of +55 degrees (e.g., 110/2) for Character A 210. This same normalized rotational value of 5 would correspond to a rotational position of +45 degrees (e.g., 90/2) for Character D 240, whose anatomy and physical condition results in a shorter range of rotation than that of Character A 210 (e.g., Character D 240 has a full range of rotation 242 with a more-restricted positive range that spans from 0 to 90 degrees).
Thus, by using a normalization scheme that factors in the range of movement of each character, the description-of-movement on one character can be expressed relative to the description-of-movement of another character. Movement for a first character can be translated into movement for a second character by first normalizing, based on the first character's defined range of motion, the motion data for the first character that is in absolute terms, and then de-normalizing against the second character's defined range of motion. Using the previous example, if motion data for Character A 210 indicated that the right shoulder is at a rotational position of +55 degrees relative to the designated neutral position 214, that would correspond to a normalized rotational value of 5. When this normalized rotational value is applied to Character D 240, it can be de-normalized against the defined full range of rotation 242 for Character D 240 (e.g., (5/10)*(the positive range of motion)), which results in a rotational position of +45 degrees relative to the designated neutral position 244 for Character D 240. This overall process is summarized in the flow charts of FIGS. 8 and 9 .
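A compact sketch of this normalize/de-normalize step, reusing the Character A and Character D figures from this example, might look as follows; the helper functions are illustrative assumptions.

```python
# Illustrative only: translate a rotational position from a source character to
# a target character by normalizing against the source's range and
# de-normalizing against the target's range. Positive and negative ranges are
# handled separately because the neutral pose need not bisect the full range.

def normalize(position, pos_range, neg_range, scale=10.0):
    span = pos_range if position >= 0 else neg_range
    return scale * position / span if span else 0.0

def denormalize(unit, pos_range, neg_range, scale=10.0):
    span = pos_range if unit >= 0 else neg_range
    return unit * span / scale

# Character A right shoulder, Z-axis: positive range 110 deg, negative range 110 deg.
# Character D right shoulder, Z-axis: positive range 90 deg, negative range 115 deg.
unit = normalize(55.0, pos_range=110.0, neg_range=110.0)    # -> 5.0
print(denormalize(unit, pos_range=90.0, neg_range=115.0))   # -> 45.0 degrees
```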
To make the translation of movement between characters faster and more efficient, motion data can also be expressed as a series of unified action codes. The unified action code may be configured to indicate a specific biomechanical part and any normalized values associated with each of the six axes of freedom (such as the relative rotational positioning of the biomechanical part within the available full range of rotation for a particular axis of rotation).
Example Protocol for Unified Action Code
FIG. 3 illustrates an example protocol for a unified action code. Under this example protocol, a unified action code may include two parts, an action serial 302 and an action code 304. The action serial 302 may be a serial number (e.g., a three digit number) that is arbitrarily assigned to a particular biomechanical part (which may be common across different characters and/or creatures) and used to reference that biomechanical part. For example, each of the four characters shown in FIG. 2 has a right shoulder joint, which may be associated with a serial number of 003.
The action code 304 may be a value set used to represent relative positioning of the referenced biomechanical part within the character's range-of-motion for each of the six axes of freedom. For instance, under a normalization scheme that represents a full range of motion using values from −10 to +10, the action code 304 may include corresponding inputs for each of the three rotational axes and three translational axes (e.g., [rotateX, rotateY, rotateZ, translateX, translateY, translateZ]). Each input may have a padding of two zeroes for a double digit factor range of −10 to 10 (with 00 being the designated neutral position). Thus, a value set of “000000000000” would be associated with the default positions for all axes.
Thus, the example unified action code 310 (“003000010000000”) can be interpreted as having an action serial of “003” for the right shoulder joint and an action code of “000010000000”, which can be broken down into the corresponding inputs of [00, 00, 10, 00, 00, 00] for [rX, rY, rZ, tX, tY, tZ]. The normalized rotational value of +10 corresponding to the rotational Z-axis indicates that this example unified action code 310 is associated with a maximum positive angle for right shoulder joint flexion/extension rotation around the Z-axis (e.g., the specific maximum of the available full range of rotation).
If this example unified action code 310 were associated with Character A 210 in FIG. 2 , for instance, who has a right shoulder joint with a 220 degree full range of rotation around the Z-axis and a designated neutral position that divides this full range of rotation into an equal positive range (0 degrees to +110 degrees) and negative range (0 degrees to −110 degrees), then the example unified action code 310 would correspond with a rotational positioning of +110 degrees relative to the designated neutral position.
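For illustration, the example unified action code 310 could be parsed roughly as follows; since the protocol example does not detail how negative inputs are encoded, this sketch assumes non-negative two-digit inputs.

```python
# Illustrative only: split the example unified action code "003000010000000"
# into its action serial and six two-digit action-unit inputs.

AXES = ("rX", "rY", "rZ", "tX", "tY", "tZ")

def parse_unified_action_code(code):
    serial, values = code[:3], code[3:]
    units = [int(values[i:i + 2]) for i in range(0, len(values), 2)]
    return serial, dict(zip(AXES, units))

serial, units = parse_unified_action_code("003000010000000")
print(serial)  # "003" -> right shoulder joint
print(units)   # {"rX": 0, "rY": 0, "rZ": 10, "tX": 0, "tY": 0, "tZ": 0}
```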
Example of Motion Translation Between Humanoid and Non-Humanoid
FIG. 4 illustrates an example of how a motion animation can be translated from a humanoid character to a non-humanoid character having a similar body structure (like-for-like). More specifically, a human Character A 410 is shown alongside a badger Character B 420, but this specific example of translating human-to-badger motion can be more generally applied to translate motion from a human character to a non-human character (e.g., a different creature). A three-dimensional reference axes 402 is also shown in the lower left hand corner and the following description is written with the reference axes 402 in mind.
One use case for translating motion from a human character to a non-human character is to allow motion data for the human character (e.g., captured from a human actor) to be used for driving animation of the non-human character, which can save money and time. Since the universal biomechanical expression system described herein translates motion between two characters based on the defined ranges of motion specific to those two characters (e.g., respecting the relative differences in physical nature between the two characters), the results may be considerably different from the standard practice of retargeting, in which motion data (e.g., rotational positions) is simply applied from one character to another in absolute terms (e.g., in Euler units or degrees) using an approximated pose-match.
However, in some embodiments, the universal biomechanical expression system may not necessarily have the capabilities for making logical assumptions (e.g., without additional human guidance) about how types of movement for a first type of creature can be used to drive a like-movement on a second type of creature. In such embodiments, the universal biomechanical expression system may be configured to handle movement data rather than deciding which types of movement to use. Thus, if motion data for a walking human Character A 410 is used to drive arm and leg movement of the badger Character B 420 (e.g., using the process described in regards to FIG. 2), the badger Character B 420 may not be expected to walk “like a human” because the biomechanical systems of a badger produce a very different nature of movement for a walk as compared to that of a human walk.
The biomechanical systems of a badger may result in a badger's ‘neutral’ pose (ready to move, no specific-use movement within the pose yet) looking like the ghosted outline 422 shown in FIG. 4 . The hind legs are perpendicular to the body, the front legs/arms are perpendicular to the chest, and paws are at about 75 degrees, pressed against the ground. The spine and head are parallel to the ground. If the universal biomechanical expression system used motion data for human arms and legs to drive movement for a badger (e.g., using the process described in regards to FIG. 2 and without any additional changes), it would result in the front legs of the badger swinging backwards and forwards. Although the movement of the hind legs of the badger would have some similarities to the desired motion, it would only vaguely resemble it. None of the paws would correctly contact the ground and the body of the badger might wave slightly, pivoting from the pelvis.
In order to make correct use of human animation on a badger (e.g., translate motion data from a human character to a badger character), the badger should “mimic” the human being. In order to achieve this, the universal biomechanical expression system may be able to apply offset values or multipliers to action unit input values (e.g., the normalized values for rotational or translational position) in order to represent the anatomical differences between one creature and another. It is important to note that the offset values may need to affect both the positions and the rotations of target joints. In other words, the normalized values obtained in the translation process described in regards to FIG. 2 for serial-to-serial, joint-to-joint motion mapping can be additionally evaluated with preset offsets (e.g., additive values) to enable human motion to be mimicked by a non-human character. This cannot be done with traditional retargeting technology, which can only be performed on a like-for-like character/creature basis (e.g., biped-to-biped, quadruped-to-quadruped only).
FIG. 4 illustrates this by providing an example of offsets being evaluated on top of action unit values (e.g., the normalized values for rotational or translational position). For instance, offsets can be applied to the action units associated with the hip and base of the spine for a human Character A 410, resulting in Euler values on those joints belonging to the badger Character B 420. The badger Character B 420 now stands upright due to the offsets. The ghosted outline 424 in this position depicts the unaffected head now pointing directly upwards, rather than forward-facing. In order to achieve the depicted badger pose 426, the head and neck joints will also receive applied offsets. Furthermore, there may be additional parts of the badger (possibly all of the badger) that can receive offsets in order to mimic a human-like base pose. All the offsets used may be recorded as a configuration for translating motion for a human to a badger. The final resulting movement animation for the badger would not vaguely resemble a walking badger, but rather a badger walking like a human. This mimicking nature is not possible with standard retargeting methods.
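A minimal sketch of evaluating additive offsets on top of normalized action units might look as follows; the offset values are hypothetical and chosen only to illustrate the idea of a recorded human-to-badger configuration.

```python
# Illustrative only: additive offsets, recorded in a human-to-badger
# configuration, are applied to the normalized action units before they are
# de-normalized against the badger's own ranges of motion.

def apply_offsets(action_units, offsets, lo=-10.0, hi=10.0):
    adjusted = {}
    for axis, unit in action_units.items():
        value = unit + offsets.get(axis, 0.0)
        adjusted[axis] = max(lo, min(hi, value))  # keep within the -10..10 scheme
    return adjusted

human_hip_units = {"rX": 0.0, "rY": 0.0, "rZ": 2.0}
badger_hip_offsets = {"rX": -6.0}  # hypothetical: helps stand the badger upright

print(apply_offsets(human_hip_units, badger_hip_offsets))
# {"rX": -6.0, "rY": 0.0, "rZ": 2.0}
```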
Example of Motion Translation Between Different Body Structures
FIG. 5 illustrates an example of how a motion animation can be translated from a human character to a non-human character having a different body structure. More specifically, a human Character A 510 is shown alongside a spider Character B 520, but this specific example of translating human-to-spider motion can be more generally applied to translate motion from a human character to a non-human character (e.g., a different creature). A three-dimensional reference axes 502 is also shown in the lower left hand corner and the following description is written with the reference axes 502 in mind.
FIG. 5 is meant to show a more extreme example, compared to FIG. 4, of how motion data can be translated between a human character and a non-human character, even with wildly different body structures (and wildly different animation rigs). For instance, the leg joints of the human Character A 510 may be used to drive the legs of the spider Character B 520. This may involve switching effective axes between characters or creatures, either through the use of offsets or by exchanging serials between the source and the target.
For example, it may be possible to use the human hip and base of the spine with an offset in order to animate the body of the spider via serializing the thorax of the spider to that of the human pelvis. The legs of the spider would then inherit the human leg animation, also via offsets.
Alternatively, an effective re-wiring of the spider's data inputs can be used to switch the Y and Z rotation channels. In other words, motion data associated with the rotation of a human's upper leg in the rotational Y-axis can be used to drive rotation of the spider's leg in the rotational Z-axis, and motion data associated with the rotation of a human's upper leg in the rotational Z-axis can be used to drive rotation of the spider's leg in the rotational Y-axis. This will cause the illustrated leg 522 to rotate in the axis expected of a spider leg, proportionally to the rotation of the human upper leg 512. Each subsequent leg joint of the spider will similarly require offsets, and perhaps match the tarsus action serial to that of a human toe, and metatarsus to that of a human foot. Furthermore, a time based offset or standard value offset could be used to drive the other three legs on the left and three legs on the right of the spider Character B 520. These settings can be tweaked and recorded in a reusable configuration, which can allow for a spider walking based off of human animation. This spider walking animation, although it is more than a mimic of how a human walks (due to major differences in biomechanical structure), may not necessarily appear natural or correct to the motion of a real spider. However, it may serve as a great place to start rather than having to animate a spider model from scratch. Furthermore, for the purposes of in-game swarms of small spiders or small creatures on-screen, this rough walking animation may actually be enough as is, and therefore be very cheap or free animation at volume.
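The channel re-wiring described above could be sketched as a simple axis map applied to the action units; the mapping shown is limited to the Y/Z swap from this example and the unit values are hypothetical.

```python
# Illustrative only: re-wire rotation channels so that the human upper leg's
# Y-axis rotation drives the spider leg's Z-axis rotation, and vice versa.

HUMAN_TO_SPIDER_AXIS_MAP = {"rY": "rZ", "rZ": "rY"}  # other axes pass through

def remap_axes(action_units, axis_map):
    return {axis_map.get(axis, axis): unit for axis, unit in action_units.items()}

human_upper_leg_units = {"rX": 1.0, "rY": 4.0, "rZ": -2.0}
print(remap_axes(human_upper_leg_units, HUMAN_TO_SPIDER_AXIS_MAP))
# {"rX": 1.0, "rZ": 4.0, "rY": -2.0}
```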
Example of Motion Translation Between Non-Humanoid Body Structures
FIG. 6 illustrates an example of how a motion animation can be translated from a non-human character to a different non-human character. More specifically, Creature C 610 is shown alongside a Creature D 620. A three-dimensional reference axes 602 is also shown in the lower left hand corner and the following description is written with the reference axes 602 in mind.
FIG. 6 is meant to show a particular example of how motion data can be translated between quadruped-to-quadruped or ‘like’-creatures. In order to translate motion data between the two creatures, the angular base-pose differences between the two creatures can be observed and an offset value can be applied to the normalized values of the motion data, in order to account for the differences in scale, proportion, and anatomy between the two creatures.
For instance, in FIG. 6 , scale differences between the Creature C 610 and the Creature D 620 can be observed in the length of the spine, the height from floor-to-base-of-the-neck, and the height from floor-to-wrist. There are also differences between the neck and head pose of the Creature C 610 and the Creature D 620.
The offset values used to account for these differences between the two rigs/creatures can be used in either direction (e.g., to drive the animation of one creature using the motion data for the other). For instance, motion data for Creature C 610 can be used with a set of offsets in order to drive animation of Creature D 620, or the other way around. These offsets may be part of one configuration that can be used to drive the translation of motion data in either direction.
Since the offset values between two rigs/creatures can be used in either direction, one configuration can drive either of these creatures from the other. It should be noted that, for translation of motion between similarly structured creatures (like-for-like creatures) with similar or identical anatomical features, but in subtly, or less subtly, differing proportions, such as between two quadrupeds (e.g., two canines), one would expect a more direct animation sharing in which animation translates directly between limbs and biomechanical parts. However, using this system, the result would be one creature “mimicking” the other as opposed to mechanically copying, due to the proportional differences being considered via the factorization process that translates motion in the form of actions and units of those actions, from source-to-target.
Scale factors (which may also be considered offsets) can be used to multiply and re-evaluate the action units either for individual anatomical features, or all of them at once. This would produce, for example, a larger creature using the motion of a small creature, but with foot and body positions relative to the larger creature's own size, rather than attempting to exactly match the foot and body positions of the smaller creature, which would not look pleasing nor would it appear to be the same motion in its nature. If scale factors are used to evaluate the motion data, the motion-over-distance of Creature C 610 can be matched with that of Creature D 620, with respect to the weight and size of Creature D 620. For instance, the length and timing of a foot stride and foot plant will scale with the size of the creature.
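A sketch of a scale factor applied as a multiplicative layer over action units might look as follows; the 0.8 factor and the stride unit value are hypothetical.

```python
# Illustrative only: a multiplicative scale factor re-evaluates action units so
# that a larger creature reuses a smaller creature's motion at its own scale
# (or, with the reciprocal factor, in the reverse direction).

def scale_units(action_units, factor, lo=-10.0, hi=10.0):
    return {axis: max(lo, min(hi, unit * factor))
            for axis, unit in action_units.items()}

stride_units = {"tZ": 6.0}                # hypothetical forward-translation unit
print(scale_units(stride_units, 0.8))     # target creature at its own scale
print(scale_units(stride_units, 1 / 0.8)) # reverse direction uses the reciprocal
```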
Alternatively, an animator may want the footsteps to exactly match and may choose not to use a scale-factor in the evaluation of actions. This may be desirable in some instances, such as, for example, if a hand/paw is interacting with an object. A foot and body position match can be performed using the same factorization in reverse to negate the difference, if needed. The smaller creature can mimic the larger creature in the same way, and this would effectively be using the reverse scale factor. Thus, only one scale-factor, together with an understanding of the differences between the creatures, is needed to use motion from either creature on the other and vice versa. It is not required to record this configuration twice (once for each creature).
Example of Motion Translation Between Non-Humanoid and Humanoid
FIG. 7 illustrates an example of how a motion animation can be translated from a non-human character to a human character. More specifically, a human Character A 710 is shown alongside a horse Character B 720. A three-dimensional reference axes 702 is also shown in the lower left hand corner and the following description is written with the reference axes 702 in mind.
The human Character A 710 may mimic the motion of the horse Character B 720. Offset values may be applied to the motion data for the horse Character B 720, to position and pose the human rig in a manner that would enable the motion of the horse Character B 720 to drive the motion of the human Character A 710, as if his fingers were hooves and his arms were the horse's front legs. The head and neck of the human Character A 710 shown in FIG. 7 also received offsets in order to match the eye-line and head trajectory.
Anatomically, a human is bound by its differences from the horse. Thus, even though movement of the horse can be described and inherited through the use of action codes, the translated movement performed by the human does not perfectly match the movement of the horse; it only resembles it. This may be attributed to the human pelvis being elevated due to the leg offsets and the tip-toe pose being prioritized to more closely match the pose of the horse's legs, ready to enact motion. Scale factors can be used as a layer of offset values on the motion data in order to more closely match the weight and feel of the human's mimicked animation to that of the horse, with respect to the obvious size and mass differences. If the scale factors are not used, the human motion is not likely to resemble the motion of the horse very well, as arm and leg movement would be hyper-extended to match the foot-planting of the horse's hooves.
Process for Determining Action Codes for Animations
FIG. 8 is a flowchart that illustrates how an action code usable for motion translation may be determined, such as by a universal biomechanical expression system.
It should be noted that FIG. 8 describes a process of generating an action code for a singular biomechanical part. Animation may be complex and involve the movement of numerous biomechanical parts, which may require the generation of multiple action codes—one for each biomechanical part. That would require performing this process multiple times. Longer animation may also involve sequences of movement and actions, comparable to frames in a video with each frame having a different pose (e.g., full-body pose). Thus, longer animations may require this process to be performed repeatedly for not only the different biomechanical parts, but also across the different poses.
At block 802, the universal biomechanical expression system may determine a biomechanical part associated with movement (e.g., from motion data). Any suitable source for the motion data may be used. For instance, the right shoulder joint of a human character model may be rotated upwards in the Z-axis of rotation in order to raise the right arm of that human character model. This may be performed manually (e.g., an animator manipulates the human character model) or motion captured from a human actor. The universal biomechanical expression system may look up an action serial or serial number associated with that biomechanical part, such as by referencing a database. For example, the serial number could be a three-digit number that is arbitrarily assigned to analogous biomechanical parts that are common across different characters and/or creatures. For instance, many different creatures have a right shoulder joint, and the right shoulder joint for all of them may be associated with the same serial number of 003.
At block 804, the universal biomechanical expression system may determine constraints that are defined for the character for that biomechanical part, such as range(s) of motion and designated neutral position(s). This may be done by referencing a table or database. For example, there may be a table associated with a particular human character that lists all the range(s) of motion and designated neutral position(s) for all the biomechanical parts in that character's anatomy. For each character, any particular biomechanical part may be associated with multiple ranges of motion and designated neutral positions. There may be a defined range of motion and designated neutral position for each of the six axes of freedom (e.g., rotate X, rotate Y, rotate Z, translate X, translate Y, translate Z). As an example, the universal biomechanical expression system may determine that the right shoulder joint for the specific human character has a full range of rotation around the Z-axis of 220 degrees with a designated neutral position in the middle of that range of rotation.
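By way of illustration only, one plausible layout for such a constraint table is sketched below in Python. The schema (keys of serial number and axis, values of minimum, maximum, and designated neutral position) and the figures given for the rotate-X axis are assumptions for this example; the description above only states that such a table or database exists.

# Hypothetical constraint table for one human character, keyed by
# (serial number, axis of freedom). Each entry holds
# (minimum, maximum, designated neutral position).
HUMAN_CONSTRAINTS = {
    # Right shoulder joint ("003"), rotation around the Z-axis: a
    # 220-degree full range with the neutral position in the middle.
    ("003", "rotateZ"): (-110.0, 110.0, 0.0),
    # The remaining axes of freedom for the same joint would be listed
    # as well; the figures below are placeholders for the example.
    ("003", "rotateX"): (-45.0, 60.0, 0.0),
}

def get_constraints(table, serial, axis):
    """Return (minimum, maximum, neutral) for a biomechanical part and axis."""
    return table[(serial, axis)]

print(get_constraints(HUMAN_CONSTRAINTS, "003", "rotateZ"))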
At block 806, the universal biomechanical expression system may determine movement values associated with the biomechanical part, in all six axes of freedom (e.g., rotate X, rotate Y, rotate Z, translate X, translate Y, translate Z). These movement values may be in absolute terms. For example, rotational movement may be in Euler units or degrees. In some cases, the movement values may include the stop positions of the biomechanical part after the movement occurs. In some cases, the movement values may describe the change in position of the biomechanical part (e.g., a +45 degree rotation).
At block 808, the universal biomechanical expression system may normalize and/or refactor the movement values based on the constraints (e.g., ranges of motion and designated neutral positions) in order to obtain action units. For example, the right shoulder joint of the character may have a full range of rotation around the Z-axis of 220 degrees with a designated neutral position in the middle of that range of rotation. A normalization scheme can be applied that involves values from −10 to 10, with 10 corresponding to the position at the specific maximum of the range of motion, −10 corresponding to the position at the specific minimum of the range of motion, and 0 corresponding to the designated neutral position. Thus, if the character model had lifted its right arm as high as possible, the determined movement value from block 806 and the designated neutral position can be used to determine the rotational position that the right shoulder joint is at, and that rotational position relative to the full range of motion can be used to calculate a normalized value within the normalization scheme (e.g., an action unit). In this case, the rotational position of the right shoulder joint in the Z-axis would be at the very maximum of the full range of rotation (e.g., +110 degrees), which would come out to a normalized value of +10. Although in this example the right shoulder joint has only been rotated in the Z-axis, this normalization can be performed for movement in each of the six axes of freedom.
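A minimal Python sketch of this normalization is shown below. It assumes the two halves of the range are mapped linearly onto −10..0 and 0..+10, with the designated neutral position mapping to 0; when the neutral position sits in the middle of the range, as in the example above, this reduces to a single linear mapping.

def normalize(value, minimum, maximum, neutral):
    """Map an absolute position to an action unit in the -10..10 scheme."""
    if value >= neutral:
        # Positions above neutral are scaled against the upper half of the range.
        return 10.0 * (value - neutral) / (maximum - neutral)
    # Positions below neutral are scaled against the lower half of the range.
    return -10.0 * (neutral - value) / (neutral - minimum)

# Right shoulder joint, Z-axis rotation: 220-degree range, neutral in the middle.
minimum, maximum, neutral = -110.0, 110.0, 0.0

# Arm lifted as high as possible: the joint sits at the range maximum (+110 degrees).
print(normalize(110.0, minimum, maximum, neutral))   # -> 10.0 (action unit)
print(normalize(0.0, minimum, maximum, neutral))     # -> 0.0 (neutral)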
At block 810, the universal biomechanical expression system may generate an action code based on the obtained action units. For instance, as described in regards to FIG. 3, an action code can be used to represent relative positioning of the referenced biomechanical part within the character's range-of-motion for each of the six axes of freedom. The universal biomechanical expression system may generate this action code based on an established standard or protocol. For instance, if the action code is to include corresponding action units for each of the three rotational axes and the three translational axes, in that order (e.g., [rotateX, rotateY, rotateZ, translateX, translateY, translateZ]), then an action code that involves only a rotation in the Z-axis to the maximum of the full range of rotation will look like "000010000000". Furthermore, as described in regards to FIG. 3, the action code may be combined with the serial number referencing the biomechanical part in order to obtain a unified action code.
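By way of illustration only, the following sketch reproduces the example action code above. The two-digit-per-axis encoding follows the "000010000000" example; how negative action units or the serial number prefix are actually encoded is not spelled out in the description, so those details are assumptions here.

# Order of the fields in the action code assumed for this example.
AXES = ["rotateX", "rotateY", "rotateZ", "translateX", "translateY", "translateZ"]

def make_action_code(units):
    """Concatenate one two-digit field per axis of freedom, in the order of AXES."""
    return "".join(f"{int(round(u)):02d}" for u in units)

def make_unified_action_code(serial, units):
    """Prefix the action code with the serial number of the biomechanical part."""
    return serial + make_action_code(units)

# A rotation in the Z-axis to the maximum of the range (+10), nothing else.
units = [0, 0, 10, 0, 0, 0]
print(make_action_code(units))                  # -> "000010000000"
print(make_unified_action_code("003", units))   # -> "003000010000000"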
Process for Translating Motion Using Action Codes
FIG. 9 is a flowchart that illustrates how an action code may be interpreted and used to translate motion, such as by a universal biomechanical expression system.
At block 902, the universal biomechanical expression system may receive an action code including action units, such as via the process described in regards to FIG. 8 . In some cases, the action code may be part of a unified action code, which will also include information (e.g., an action serial or serial number) that references a particular biomechanical part associated with the movement. The action units will describe relative positioning of the referenced biomechanical part within a range-of-motion for each of the six axes of freedom. For instance, within the context of the example provided in FIG. 4 , the universal biomechanical expression system may receive an action code based on a right shoulder joint rotation in the Z-axis for a human character.
At block 904, the universal biomechanical expression system may determine an analogous biomechanical part for the target character, whose motion is to be driven based on the received action code. This can be determined in a number of ways. For instance, the anatomy of the target character may be similar to the anatomy of the source character used to generate the action code, in which case the analogous biomechanical part may be easily identified and determined (e.g., based on the serial number provided in a unified action code). For example, if the unified action code specifies “003” for a right shoulder joint, then the analogous biomechanical part for the target character may be the right shoulder joint, which also should be associated with the serial number of “003.” However, this may not always be the case, since the anatomies of different creatures may differ or a different mapping between biomechanical parts may be preferred.
In some cases, the universal biomechanical expression system may need to be supplied information regarding the source character, as there may be a relationship table, database, or configuration (e.g., file) which provides a mapping of biomechanical parts between two different characters or creatures for the purposes of motion translation. For instance, in the example provided in FIG. 4, in which motion is being translated from a human to a badger, the relationship table or database may define how human biomechanical parts map to badger biomechanical parts. If, upon consulting this reference, the universal biomechanical expression system determines that the right shoulder joint ("003") in a human maps to the right shoulder joint ("003") of a badger, then the right shoulder joint of the badger may be selected as the analogous biomechanical part. In some cases, the mapping may be very granular and describe how each particular axis of freedom of a biomechanical part in a first character corresponds to an axis of freedom of the analogous biomechanical part in the second character. In some cases, the relationship table, database, or configuration may not only include the mapping of biomechanical parts between two characters, but also include any offsets that need to be applied at block 908.
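One hypothetical shape for such a relationship configuration is sketched below; the description only states that a mapping exists, so the dictionary layout and lookup function here are assumptions for illustration.

# Hypothetical human-to-badger relationship configuration: source serial
# number mapped to the serial number of the analogous target part.
HUMAN_TO_BADGER = {
    "003": "003",   # right shoulder joint maps to right shoulder joint
}

def find_analogous_part(mapping, source_serial):
    """Look up which target biomechanical part a source part should drive."""
    return mapping[source_serial]

print(find_analogous_part(HUMAN_TO_BADGER, "003"))  # -> "003"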
At block 906, the universal biomechanical expression system may determine constraints that are defined for the target character for the analogous biomechanical part, such as range(s) of motion and designated neutral position(s). This may be done by referencing a table or database. This table or database may be separate from the relationship table or database. For example, there may be a table associated with the badger character that lists all the range(s) of motion and designated neutral position(s) for all the biomechanical parts in that character's anatomy. Each biomechanical part may be associated with multiple ranges of motion and designated neutral positions, as there may be a defined range of motion and designated neutral position for each of the six axes of freedom (e.g., rotate X, rotate Y, rotate Z, translate X, translate Y, translate Z). For instance, within the context of the example provided in FIG. 4, the universal biomechanical expression system may determine that a badger character has a limited full range of rotation for the right shoulder joint in the Z-axis.
At block 908, the universal biomechanical expression system may optionally determine and apply offsets to the action units of the action code. The offset values or multipliers to the action unit input values (e.g., the normalized values for rotational or translational position) are used to represent the anatomical differences between one creature and another. The offset values may exist to affect both positions of target joints and rotations. These offset values may be used to enable the target character to mimic the movement of the source character. In some cases, all the offsets for translating motion between a source creature/character and a target creature/character may be recorded (e.g., in the relationship table, database, or configuration). For instance, there may be a configuration associated with translating motion between a human and a badger that includes all the offsets. The universal biomechanical expression system may consult this configuration and determine the offsets to be applied for rotations of the right shoulder joint in the Z-axis and apply them to the action units of the action code.
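A minimal sketch of applying such offsets is shown below. The description mentions both offset values and multipliers on the action unit inputs; the split into per-axis additive offsets and multipliers, and the example figures, are assumptions here.

def apply_offsets(units, additive=None, multipliers=None):
    """Apply per-axis multipliers and additive offsets to one set of action units."""
    if additive is None:
        additive = [0.0] * len(units)
    if multipliers is None:
        multipliers = [1.0] * len(units)
    return [u * m + a for u, m, a in zip(units, multipliers, additive)]

# Offsets recorded in a human-to-badger configuration for the right shoulder
# joint (the values are made up for the example).
rotation_offsets  = [0.0, 0.0, -1.5, 0.0, 0.0, 0.0]
scale_multipliers = [1.0, 1.0,  0.8, 1.0, 1.0, 1.0]

source_units = [0.0, 0.0, 10.0, 0.0, 0.0, 0.0]
print(apply_offsets(source_units, rotation_offsets, scale_multipliers))
# -> [0.0, 0.0, 6.5, 0.0, 0.0, 0.0]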
At block 910, the universal biomechanical expression system may interpret the action units (with any applied offsets) based on the determined constraints of the target character (at block 906). For instance, continuing within the context of the example provided in FIG. 4 , if the action units are primarily associated with a right shoulder joint rotation in the Z-axis, then the universal biomechanical expression system would interpret the action units against the range of rotation for the right shoulder joint of a badger. Movement of the badger character model can then be driven by rotating the right shoulder joint of the badger model.
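By way of illustration, the sketch below is the inverse of the earlier normalization sketch: an action unit is interpreted against the target character's own range of motion and neutral position, so +10 reaches the badger's range maximum rather than the human's. The badger figures are assumptions for this example.

def denormalize(unit, minimum, maximum, neutral):
    """Turn an action unit back into an absolute position for the target part."""
    if unit >= 0:
        # +10 reaches the target's own range maximum.
        return neutral + (unit / 10.0) * (maximum - neutral)
    # -10 reaches the target's own range minimum.
    return neutral + (unit / 10.0) * (neutral - minimum)

# Badger right shoulder joint, Z-axis: a more limited range than the human's.
badger_min, badger_max, badger_neutral = -40.0, 40.0, 0.0

# The human's "rotate to the maximum" action unit (+10) becomes the badger's
# own maximum of +40 degrees, not the human's +110 degrees.
print(denormalize(10.0, badger_min, badger_max, badger_neutral))  # -> 40.0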
Process for Translating Animations Between Characters
FIG. 10 is a flowchart that illustrates an overview of an example of translating complex animation between characters, such as by a universal biomechanical expression system. More specifically, the flowchart illustrates how the motion data associated with complex animation for a human character can be applied to a dog character.
At block 1010, the movements of a human actor can be obtained via motion capture to create a complex animation. Alternatively, an animator can create a complex animation by manipulating a human character model. However, motion capture may be faster to do than animating by hand.
At block 1020, the universal biomechanical expression system may use known motion capture techniques in order to convert the motion capture data into a complex animation for a corresponding human character model. The human character may have predefined constraints (e.g., ranges of motion and designated neutral poses), or those constraints can be defined here.
At block 1030, the universal biomechanical expression system may obtain poses (e.g., motion data) from the complex animation. A longer, complex animation involves sequences of movement and actions, but it can be broken down into a series of different full-body poses that are comparable to frames in a video, with each frame being a different full-body pose. These full-body poses can be sampled from the complex animation, and for each full-body pose, the positioning of each biomechanical part in the character model can be determined and recorded. Thus, each full-body pose (e.g., frame in the animation) can be represented as a collection of biomechanical part positions, which can be converted into a collection of action codes. A complex animation can therefore be thought of as a sequence of different collections of action codes.
At block 1040, the universal biomechanical expression system may reduce each full-body pose from the complex animation into a collection of action codes using the process described in FIG. 8, which is performed for every biomechanical part in the pose that is associated with movement. This is done by normalizing the positioning of the biomechanical parts in each full-body pose against the constraints (e.g., ranges of motion, designated neutral positions, and so forth) that are defined for the human character.
At block 1050, the universal biomechanical expression system may interpret the action codes using dog constraints/offsets, based on the process described in FIG. 9 . Again, the complex animation is represented by a sequence of different collections of action codes, with a collection of action codes defining the movement for all the biomechanical parts in a full-body pose. Each action code is interpreted using the constraints of the dog character after any appropriate offsets have been applied, in order to determine the appropriate positioning of the corresponding biomechanical part within the context of the dog character model.
At block 1060, the universal biomechanical expression system may create an animation for the dog character model by applying the collections of action codes, in sequence, to the dog character model. Each collection of action codes, which is associated with a full-body pose, may result in a corresponding full-body pose for the dog character model. These full-body poses can be stitched together to create an animation and interpolation can be used to smooth out the animation.
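By way of illustration only, the sketch below ties the previous pieces together for a sequence of full-body poses. The constraint tables, the part mapping, and the sample animation are invented for the example, and action units are kept as plain floats rather than encoded strings for brevity.

# Made-up constraint tables (minimum, maximum, neutral) and part mapping.
HUMAN = {("003", "rotateZ"): (-110.0, 110.0, 0.0)}
DOG = {("003", "rotateZ"): (-60.0, 60.0, 0.0)}
PART_MAP = {"003": "003"}

def normalize(value, minimum, maximum, neutral):
    if value >= neutral:
        return 10.0 * (value - neutral) / (maximum - neutral)
    return -10.0 * (neutral - value) / (neutral - minimum)

def denormalize(unit, minimum, maximum, neutral):
    if unit >= 0:
        return neutral + (unit / 10.0) * (maximum - neutral)
    return neutral + (unit / 10.0) * (neutral - minimum)

def translate_animation(frames, src_constraints, dst_constraints, part_map):
    """Translate a sequence of full-body poses from a source to a target character.

    Each frame maps (serial, axis) to an absolute source value; the result
    maps the analogous target parts to absolute values within the target's
    own ranges of motion.
    """
    translated = []
    for frame in frames:
        target_pose = {}
        for (serial, axis), value in frame.items():
            unit = normalize(value, *src_constraints[(serial, axis)])
            target_serial = part_map[serial]
            target_pose[(target_serial, axis)] = denormalize(
                unit, *dst_constraints[(target_serial, axis)])
        translated.append(target_pose)
    return translated

# Three sampled full-body poses of a (single-joint) human animation.
human_frames = [
    {("003", "rotateZ"): 0.0},
    {("003", "rotateZ"): 55.0},
    {("003", "rotateZ"): 110.0},
]
print(translate_animation(human_frames, HUMAN, DOG, PART_MAP))
# -> poses at 0.0, 30.0, and 60.0 degrees for the dog's right shoulder joint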
Example Hardware Configuration of Computing System
FIG. 11 illustrates an embodiment of a hardware configuration for a computing system 1100 (e.g., user device 130 and/or universal biomechanical expression system 100 of FIG. 1 ). Other variations of the computing system 1100 may be substituted for the examples explicitly presented herein, such as removing or adding components to the computing system 1100. The computing system 1100 may include a computer, a server, a smart phone, a tablet, a personal computer, a desktop, a laptop, a smart television, and the like.
As shown, the computing system 1100 includes a processing unit 1102 that interacts with other components of the computing system 1100 and also components external to the computing system 1100. A game media reader 1122 may be included that can communicate with game media. Game media reader 1122 may be an optical disc reader capable of reading optical discs, such as CD-ROM or DVDs, or any other type of reader that can receive and read data from game media. In some embodiments, the game media reader 1122 may be optional or omitted. For example, game content or applications may be accessed over a network via the network I/O 1138 rendering the game media reader 1122 and/or the game media optional.
The computing system 1100 may include a separate graphics processor 1124. In some cases, the graphics processor 1124 may be built into the processing unit 1102, such as with an APU. In some such cases, the graphics processor 1124 may share Random Access Memory (RAM) with the processing unit 1102. Alternatively, or in addition, the computing system 1100 may include a discrete graphics processor 1124 that is separate from the processing unit 1102. In some such cases, the graphics processor 1124 may have separate RAM from the processing unit 1102. Further, in some cases, the graphics processor 1124 may work in conjunction with one or more additional graphics processors and/or with an embedded or non-discrete graphics processing unit, which may be embedded into a motherboard and which is sometimes referred to as an on-board graphics chip or device.
The computing system 1100 also includes various components for enabling input/output, such as an I/O 1132, a user interface I/O 1134, a display I/O 1136, and a network I/O 1138. As previously described, the input/output components may, in some cases, include touch-enabled devices. The I/O 1132 interacts with storage element 1103 and, through a device 1142, removable storage media 1144 in order to provide storage for the computing system 1100. The storage element 1103 can store a database that includes the failure signatures, clusters, families, and groups of families. Processing unit 1102 can communicate through I/O 1132 to store data, such as game state data and any shared data files. In addition to storage 1103 and removable storage media 1144, the computing system 1100 is also shown including ROM (Read-Only Memory) 1146 and RAM 1148. RAM 1148 may be used for data that is accessed frequently, such as when a game is being played, or for all data that is accessed by the processing unit 1102 and/or the graphics processor 1124.
User I/O 1134 is used to send and receive commands between processing unit 1102 and user devices, such as game controllers. In some embodiments, the user I/O 1134 can include touchscreen inputs. As previously described, the touchscreen can be a capacitive touchscreen, a resistive touchscreen, or other type of touchscreen technology that is configured to receive user input through tactile inputs from the user. Display I/O 1136 provides input/output functions that are used to display images from the game being played. Network I/O 1138 is used for input/output functions for a network. Network I/O 1138 may be used during execution of a game, such as when a game is being played online or being accessed online.
Display output signals may be produced by the display I/O 1136 and can include signals for displaying visual content produced by the computing system 1100 on a display device, such as graphics, user interfaces, video, and/or other visual content. The computing system 1100 may comprise one or more integrated displays configured to receive display output signals produced by the display I/O 1136, which may be output for display to a user. According to some embodiments, display output signals produced by the display I/O 1136 may also be output to one or more display devices external to the computing system 1100.
The computing system 1100 can also include other features that may be used with a game, such as a clock 1150, flash memory 1152, and other components. An audio/video player 1156 might also be used to play a video sequence, such as a movie. It should be understood that other components may be provided in the computing system 1100 and that a person skilled in the art will appreciate other variations of the computing system 1100.
Program code can be stored in ROM 1146, RAM 1148, or storage 1103 (which might comprise hard disk, other magnetic storage, optical storage, solid state drives, and/or other non-volatile storage, or a combination or variation of these). At least part of the program code can be stored in ROM that is programmable (ROM, PROM, EPROM, EEPROM, and so forth), in storage 1103, and/or on removable media such as game media 1112 (which can be a CD-ROM, cartridge, memory chip or the like, or obtained over a network or other electronic channel as needed). In general, program code can be found embodied in a tangible non-transitory signal-bearing medium.
Random access memory (RAM) 1148 (and possibly other storage) is usable to store variables and other game and processor data as needed. RAM is used and holds data that is generated during the play of the game and portions thereof might also be reserved for frame buffers, game state and/or other data needed or usable for interpreting user input and generating game displays. Generally, RAM 1148 is volatile storage and data stored within RAM 1148 may be lost when the computing system 1100 is turned off or loses power.
As the computing system 1100 reads game media 1112 and provides a game, information may be read from game media 1112 and stored in a memory device, such as RAM 1148. Additionally, data from storage 1103, ROM 1146, servers accessed via a network (not shown), or removable storage media 1144 may be read and loaded into RAM 1148. Although data is described as being found in RAM 1148, it will be understood that data does not have to be stored in RAM 1148 and may be stored in other memory accessible to processing unit 1102 or distributed among several media, such as game media 1112 and storage 1103.
It is to be understood that not necessarily all objects or advantages may be achieved in accordance with any particular embodiment described herein. Thus, for example, those skilled in the art will recognize that certain embodiments may be configured to operate in a manner that achieves, increases, or optimizes one advantage or group of advantages as taught herein without necessarily achieving other objects or advantages as may be taught or suggested herein.
All of the processes described herein may be embodied in, and fully automated via, software code modules executed by a computing system that includes one or more computers or processors. The code modules may be stored in any type of non-transitory computer-readable medium or other computer storage device. Some or all the methods may be embodied in specialized computer hardware.
Many other variations than those described herein will be apparent from this disclosure. For example, depending on the embodiment, certain acts, events, or functions of any of the algorithms described herein can be performed in a different sequence, can be added, merged, or left out altogether (for example, not all described acts or events are necessary for the practice of the algorithms). Moreover, in certain embodiments, acts or events can be performed concurrently, for example, through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially. In addition, different tasks or processes can be performed by different machines and/or computing systems that can function together.
The various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine, such as a processing unit or processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor can be a microprocessor, but in the alternative, the processor can be a controller, microcontroller, or state machine, combinations of the same, or the like. A processor can include electrical circuitry configured to process computer-executable instructions. In another embodiment, a processor includes an FPGA or other programmable device that performs logic operations without processing computer-executable instructions. A processor can also be implemented as a combination of computing devices, for example, a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Although described herein primarily with respect to digital technology, a processor may also include primarily analog components. A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an appliance, to name a few.
Conditional language such as, among others, "can," "could," "might" or "may," unless specifically stated otherwise, is otherwise understood within the context as used in general to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.
Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, and the like, may be either X, Y, or Z, or any combination thereof (for example, X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
Any process descriptions, elements or blocks in the flow diagrams described herein and/or depicted in the attached figures should be understood as potentially representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or elements in the process. Alternate implementations are included within the scope of the embodiments described herein in which elements or functions may be deleted, executed out of order from that shown, or discussed, including substantially concurrently or in reverse order, depending on the functionality involved as would be understood by those skilled in the art.
Unless otherwise explicitly stated, articles such as “a” or “an” should generally be interpreted to include one or more described items. Accordingly, phrases such as “a device configured to” are intended to include one or more recited devices. Such one or more recited devices can also be collectively configured to carry out the stated recitations. For example, “a processor configured to carry out recitations A, B and C” can include a first processor configured to carry out recitation A working in conjunction with a second processor configured to carry out recitations B and C.
It should be emphasized that many variations and modifications may be made to the above-described embodiments, the elements of which are to be understood as being among other acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure.

Claims (18)

What is claimed is:
1. A computer-implemented method comprising:
obtaining motion data for a source character;
determining, from the motion data, motion of a source biomechanical part of the source character;
determining one or more constraints for the source biomechanical part, including a first range of motion defined for the source biomechanical part and the source character;
normalizing the motion of the source biomechanical part using the one or more constraints for the source biomechanical part and a normalization scheme to generate one or more action units;
generating, based on the one or more action units for the source biomechanical part, an action code representative of the motion of the source biomechanical part; and
determining relative motion of a target biomechanical part of a three-dimensional model of a target character based on the action code.
2. The method of claim 1, further comprising:
determining one or more offsets associated with the source character and the target character; and
prior to evaluating the action code, applying the one or more offsets to the action code.
3. The method of claim 2, wherein the one or more offsets associated with the source character and the target character are stored in a configuration associated with motion translation between the source character and the target character.
4. The method of claim 1, wherein the motion of the source biomechanical part includes at least one of: a rotation around a X-axis, a rotation around a Y-axis, a rotation around a Z-axis, a translation in the X-axis, a translation in the Y-axis, and a translation in the Z-axis.
5. The method of claim 1, wherein the first range of motion defined for the source biomechanical part and the source character is a range for one of: a rotation around a X-axis, a rotation around a Y-axis, or a rotation around a Z-axis.
6. The method of claim 1, wherein a second range of motion defined for the target biomechanical part and the target character is a range for one of: a rotation around a X-axis, a rotation around a Y-axis, or a rotation around a Z-axis.
7. The method of claim 1, wherein the action code includes a serial identifying the source biomechanical part.
8. The method of claim 1, wherein the action code represents the motion of the source biomechanical part for each of: a rotation around a X-axis, a rotation around a Y-axis, a rotation around a Z-axis, a translation in the X-axis, a translation in the Y-axis, and a translation in the Z-axis.
9. The method of claim 1, wherein the normalization scheme includes a range of values between −10 and 10.
10. Non-transitory computer storage media storing instructions that when executed by a system of one or more computers, cause the one or more computers to perform operations comprising:
obtaining motion data for a source character;
determining, from the motion data, motion of a source biomechanical part of the source character;
determining one or more constraints for the source biomechanical part, including a first range of motion defined for the source biomechanical part and the source character;
normalizing the motion of the source biomechanical part using the one or more constraints for the source biomechanical part and a normalization scheme to generate one or more action units;
generating, based on the one or more action units for the source biomechanical part, an action code representative of the motion of the source biomechanical part; and
determining relative motion of a target biomechanical part of a three-dimensional model of a target character based on the action code.
11. The computer storage media of claim 10, wherein the instructions further cause the one or more computers to perform operations comprising:
determining one or more offsets associated with the source character and the target character; and
prior to evaluating the action code, applying the one or more offsets to the action code.
12. The computer storage media of claim 11, wherein the one or more offsets associated with the source character and the target character are stored in a configuration associated with motion translation between the source character and the target character.
13. The computer storage media of claim 10, wherein the motion of the source biomechanical part includes at least one of: a rotation around a X-axis, a rotation around a Y-axis, a rotation around a Z-axis, a translation in the X-axis, a translation in the Y-axis, and a translation in the Z-axis.
14. The computer storage media of claim 10, wherein the first range of motion defined for the source biomechanical part and the source character is a range for one of: a rotation around a X-axis, a rotation around a Y-axis, or a rotation around a Z-axis.
15. The computer storage media of claim 10, wherein a second range of motion defined for the target biomechanical part and the target character is a range for one of: a rotation around a X-axis, a rotation around a Y-axis, or a rotation around a Z-axis.
16. The computer storage media of claim 10, wherein the action code includes a serial identifying the source biomechanical part.
17. The computer storage media of claim 10, wherein the action code represents the motion of the source biomechanical part for each of: a rotation around a X-axis, a rotation around a Y-axis, a rotation around a Z-axis, a translation in the X-axis, a translation in the Y-axis, and a translation in the Z-axis.
18. A system comprising one or more computers and computer storage media storing instructions that when executed by the one or more computers, cause the one or more computers to perform operations comprising:
obtaining motion data for a source character;
determining, from the motion data, motion of a source biomechanical part of the source character;
determining one or more constraints for the source biomechanical part, including a first range of motion defined for the source biomechanical part and the source character;
normalizing the motion of the source biomechanical part using the one or more constraints for the source biomechanical part and a normalization scheme to generate one or more action units;
generating, based on the one or more action units for the source biomechanical part, an action code representative of the motion of the source biomechanical part; and
determining relative motion of a target biomechanical part of a three-dimensional model of a target character based on the action code.
US17/157,713 2019-06-14 2021-01-25 Universal body movement translation and character rendering system Active 2039-07-26 US11798176B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/157,713 US11798176B2 (en) 2019-06-14 2021-01-25 Universal body movement translation and character rendering system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US16/442,190 US10902618B2 (en) 2019-06-14 2019-06-14 Universal body movement translation and character rendering system
US17/157,713 US11798176B2 (en) 2019-06-14 2021-01-25 Universal body movement translation and character rendering system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/442,190 Continuation US10902618B2 (en) 2019-06-14 2019-06-14 Universal body movement translation and character rendering system

Publications (2)

Publication Number Publication Date
US20210217184A1 US20210217184A1 (en) 2021-07-15
US11798176B2 true US11798176B2 (en) 2023-10-24

Family

ID=73745128

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/442,190 Active 2039-07-20 US10902618B2 (en) 2019-06-14 2019-06-14 Universal body movement translation and character rendering system
US17/157,713 Active 2039-07-26 US11798176B2 (en) 2019-06-14 2021-01-25 Universal body movement translation and character rendering system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US16/442,190 Active 2039-07-20 US10902618B2 (en) 2019-06-14 2019-06-14 Universal body movement translation and character rendering system

Country Status (1)

Country Link
US (2) US10902618B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11992768B2 (en) 2020-04-06 2024-05-28 Electronic Arts Inc. Enhanced pose generation based on generative modeling

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10096133B1 (en) 2017-03-31 2018-10-09 Electronic Arts Inc. Blendshape compression system
US10535174B1 (en) 2017-09-14 2020-01-14 Electronic Arts Inc. Particle-based inverse kinematic rendering system
US10902618B2 (en) 2019-06-14 2021-01-26 Electronic Arts Inc. Universal body movement translation and character rendering system
US11972353B2 (en) 2020-01-22 2024-04-30 Electronic Arts Inc. Character controllers using motion variational autoencoders (MVAEs)
US11504625B2 (en) 2020-02-14 2022-11-22 Electronic Arts Inc. Color blindness diagnostic system
US11232621B2 (en) 2020-04-06 2022-01-25 Electronic Arts Inc. Enhanced animation generation based on conditional modeling
US11341703B2 (en) * 2020-07-24 2022-05-24 Unity Technologies Sf Methods and systems for generating an animation control rig
US11439904B2 (en) * 2020-11-11 2022-09-13 Activision Publishing, Inc. Systems and methods for imparting dynamic and realistic movement to player-controlled avatars in video games
US11830121B1 (en) 2021-01-26 2023-11-28 Electronic Arts Inc. Neural animation layering for synthesizing martial arts movements
US11887232B2 (en) 2021-06-10 2024-01-30 Electronic Arts Inc. Enhanced system for generation of facial models and animation
US11670030B2 (en) 2021-07-01 2023-06-06 Electronic Arts Inc. Enhanced animation generation based on video with local phase
US11562523B1 (en) 2021-08-02 2023-01-24 Electronic Arts Inc. Enhanced animation generation based on motion matching using local bone phases
CN114974506B (en) * 2022-05-17 2024-05-03 重庆大学 Human body posture data processing method and system
US12019793B2 (en) * 2022-11-22 2024-06-25 VRChat Inc. Tracked shoulder position in virtual reality multiuser application

Citations (165)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5274801A (en) 1988-04-29 1993-12-28 International Business Machines Corp. Artifical intelligence delivery system
US5548798A (en) 1994-11-10 1996-08-20 Intel Corporation Method and apparatus for solving dense systems of linear equations with an iterative method that employs partial multiplications using rank compressed SVD basis matrices of the partitioned submatrices of the coefficient matrix
US5982389A (en) 1996-06-17 1999-11-09 Microsoft Corporation Generating optimized motion transitions for computer animated objects
US5999195A (en) 1997-03-28 1999-12-07 Silicon Graphics, Inc. Automatic generation of transitions between motion cycles in an animation
US6064808A (en) 1997-08-01 2000-05-16 Lucent Technologies Inc. Method and apparatus for designing interconnections and passive components in integrated circuits and equivalent structures by efficient parameter extraction
US6088040A (en) 1996-09-17 2000-07-11 Atr Human Information Processing Research Laboratories Method and apparatus of facial image conversion by interpolation/extrapolation for plurality of facial expression components representing facial image
US6253193B1 (en) 1995-02-13 2001-06-26 Intertrust Technologies Corporation Systems and methods for the secure transaction management and electronic rights protection
US20020054054A1 (en) 2000-02-28 2002-05-09 Toshiba Tec Kabushiki Kaisha Window design alteration method and system
US20020089504A1 (en) 1998-02-26 2002-07-11 Richard Merrick System and method for automatic animation generation
US20020180739A1 (en) 2001-04-25 2002-12-05 Hugh Reynolds Method and apparatus for simulating soft object movement
US20030038818A1 (en) 2001-08-23 2003-02-27 Tidwell Reed P. System and method for auto-adjusting image filtering
US6556196B1 (en) 1999-03-19 2003-04-29 Max-Planck-Gesellschaft Zur Forderung Der Wissenschaften E.V. Method and apparatus for the processing of images
US20040027352A1 (en) 2000-10-02 2004-02-12 Mitsuru Minakuchi Device, system, method, and program for reproducing or transfering animation
US20040227761A1 (en) 2003-05-14 2004-11-18 Pixar Statistical dynamic modeling method and apparatus
US20040227760A1 (en) 2003-05-14 2004-11-18 Pixar Animation Studios Statistical dynamic collisions method and apparatus
US20050237550A1 (en) 2002-08-28 2005-10-27 Hu Shane C Automatic color constancy for image sensors
US6961060B1 (en) 1999-03-16 2005-11-01 Matsushita Electric Industrial Co., Ltd. Virtual space control data receiving apparatus,virtual space control data transmission and reception system, virtual space control data receiving method, and virtual space control data receiving program storage media
US20060036514A1 (en) 2002-01-24 2006-02-16 Ryan Steelberg Dynamic selection and scheduling of radio frequency communications
US7006090B2 (en) 2003-02-07 2006-02-28 Crytek Gmbh Method and computer program product for lighting a computer graphics image and a computer
US20060061574A1 (en) * 2003-04-25 2006-03-23 Victor Ng-Thow-Hing Joint component framework for modeling complex joint behavior
US20060149516A1 (en) 2004-12-03 2006-07-06 Andrew Bond Physics simulation apparatus and method
US20060217945A1 (en) 2005-03-23 2006-09-28 Electronic Arts Inc. Computer simulation of body dynamics including a solver that solves in linear time for a set of constraints
US20060262114A1 (en) 2005-03-23 2006-11-23 Electronic Arts Inc. Computer simulation of body dynamics including a solver that solves in linear time for a set of constraints using vector processing
US20060262113A1 (en) * 2005-03-23 2006-11-23 Electronic Arts Inc. Computer simulation of body dynamics including a solver that solves for position-based constraints
US20070085851A1 (en) 2005-10-19 2007-04-19 Matthias Muller Method of simulating dynamic objects using position based dynamics
US20070097125A1 (en) 2005-10-28 2007-05-03 Dreamworks Animation Llc Artist directed volume preserving deformation and collision resolution for animation
US20080049015A1 (en) 2006-08-23 2008-02-28 Baback Elmieh System for development of 3D content used in embedded devices
US20080111831A1 (en) 2006-11-15 2008-05-15 Jay Son Efficient Panoramic Image Generation
US20080152218A1 (en) 2006-10-27 2008-06-26 Kabushiki Kaisha Toshiba Pose estimating device and pose estimating method
US7403202B1 (en) 2005-07-12 2008-07-22 Electronic Arts, Inc. Computer animation of simulated characters using combinations of motion-capture data and external force modelling or other physics models
US7415152B2 (en) 2005-04-29 2008-08-19 Microsoft Corporation Method and system for constructing a 3D representation of a face from a 2D representation
US20080268961A1 (en) 2007-04-30 2008-10-30 Michael Brook Method of creating video in a virtual world and method of distributing and using same
US20080316202A1 (en) 2007-06-22 2008-12-25 Microsoft Corporation Direct manipulation of subdivision surfaces using a graphics processing unit
US20090066700A1 (en) 2007-09-11 2009-03-12 Sony Computer Entertainment America Inc. Facial animation using motion capture data
US20090315839A1 (en) 2008-06-24 2009-12-24 Microsoft Corporation Physics simulation-based interaction for surface computing
US20100134501A1 (en) 2008-12-01 2010-06-03 Thomas Lowe Defining an animation of a virtual object within a virtual world
US20100251185A1 (en) 2009-03-31 2010-09-30 Codemasters Software Company Ltd. Virtual object appearance control
US20100277497A1 (en) 2009-04-30 2010-11-04 International Business Machines Corporation Method for highlighting topic element and system thereof
US20110012903A1 (en) 2009-07-16 2011-01-20 Michael Girard System and method for real-time character animation
US20110074807A1 (en) 2009-09-30 2011-03-31 Hitachi, Ltd. Method of color customization of content screen
US20110086702A1 (en) 2009-10-13 2011-04-14 Ganz Method and system for providing a virtual presentation including a virtual companion and virtual photography
US7944449B2 (en) 2003-05-14 2011-05-17 Pixar Methods and apparatus for export of animation data to non-native articulation schemes
US20110119332A1 (en) 2007-11-14 2011-05-19 Cybersports Limited Movement animation method and apparatus
US20110128292A1 (en) 2009-12-02 2011-06-02 Electronics And Telecommunications Research Institute Dynamics-based motion generation apparatus and method
US20110164831A1 (en) 2010-01-05 2011-07-07 Stmicroelectronics (Grenoble 2) Sas Method for detecting orientation of contours
US20110187731A1 (en) 2009-07-10 2011-08-04 Yasuhiro Tsuchida Marker display control device, integrated circuit, and marker display control method
US20110269540A1 (en) 2007-03-01 2011-11-03 Sony Computer Entertainment Europe Limited Entertainment device and method
US20110292055A1 (en) 2010-05-25 2011-12-01 Disney Enterprises, Inc. Systems and methods for animating non-humanoid characters with human motion data
US8100770B2 (en) 2007-04-20 2012-01-24 Nintendo Co., Ltd. Game controller, storage medium storing game program, and game apparatus
US8142282B2 (en) 2006-11-15 2012-03-27 Microsoft Corporation Console integrated downloadable game service
US20120083330A1 (en) 2010-10-05 2012-04-05 Zynga Game Network, Inc. System and Method for Generating Achievement Objects Encapsulating Captured Event Playback
US8154544B1 (en) 2007-08-03 2012-04-10 Pixar User specified contact deformations for computer graphics
US20120115580A1 (en) 2010-11-05 2012-05-10 Wms Gaming Inc. Wagering game with player-directed pursuit of award outcomes
CN102509272A (en) 2011-11-21 2012-06-20 武汉大学 Color image enhancement method based on color constancy
US8207971B1 (en) 2008-12-31 2012-06-26 Lucasfilm Entertainment Company Ltd. Controlling animated character expressions
US20120220376A1 (en) 2011-02-25 2012-08-30 Nintendo Co., Ltd. Communication system, information processing apparatus, computer-readable storage medium having a program stored therein, and information processing method
US8267764B1 (en) 2011-04-21 2012-09-18 Wms Gaming Inc. Wagering game having enhancements to queued outcomes
US20120244941A1 (en) 2007-10-29 2012-09-27 Microsoft Corporation User to user game referrals
US8281281B1 (en) 2006-06-07 2012-10-02 Pixar Setting level of detail transition points
US20120303343A1 (en) 2011-05-26 2012-11-29 Sony Computer Entertainment Inc. Program, Information Storage Medium, Information Processing System, And Information Processing Method.
US20120313931A1 (en) 2011-06-07 2012-12-13 Sony Computer Entertainment Inc. Image generating device, image generating method, and non-transitory information storage medium
US20130050464A1 (en) 2011-08-31 2013-02-28 Keyence Corporation Magnification Observation Device, Magnification Observation Method, And Magnification Observation Program
US8395626B2 (en) 2006-04-08 2013-03-12 Alan Millman Method and system for interactive simulation of materials
US20130063555A1 (en) 2011-09-08 2013-03-14 Casio Computer Co., Ltd. Image processing device that combines a plurality of images
US8398476B1 (en) 2007-02-02 2013-03-19 Popcap Games, Inc. Electronic game, such as a computer game involving removing pegs
US8406528B1 (en) 2009-10-05 2013-03-26 Adobe Systems Incorporated Methods and apparatuses for evaluating visual accessibility of displayable web based content and/or other digital images
US20130121618A1 (en) 2011-05-27 2013-05-16 Vikas Yadav Seamless Image Composition
US20130120439A1 (en) 2009-08-28 2013-05-16 Jerry G. Harris System and Method for Image Editing Using Visual Rewind Operation
US20130222433A1 (en) 2012-02-29 2013-08-29 Danny Chapman Animation processing
US20130235045A1 (en) 2012-03-06 2013-09-12 Mixamo, Inc. Systems and methods for creating and distributing modifiable animated video messages
US8540560B2 (en) 2009-03-27 2013-09-24 Infomotion Sports Technologies, Inc. Monitoring of physical training events
US20130263027A1 (en) 2012-03-29 2013-10-03 FiftyThree, Inc. Methods and apparatus for providing a digital illustration system
US20130311885A1 (en) 2012-05-15 2013-11-21 Capso Vision, Inc. System and Method for Displaying Annotated Capsule Images
US20140002463A1 (en) 2012-06-27 2014-01-02 Pixar Skin and flesh simulation using finite elements, biphasic materials, and rest state retargeting
CN103546736A (en) 2012-07-12 2014-01-29 三星电子株式会社 Image processing apparatus and method
US8648863B1 (en) 2008-05-20 2014-02-11 Pixar Methods and apparatus for performance style extraction for quality control of animation
US20140066196A1 (en) 2012-08-30 2014-03-06 Colin William Crenshaw Realtime color vision deficiency correction
US20140198106A1 (en) 2013-01-11 2014-07-17 Disney Enterprises, Inc. Rig-Based Physics Simulation
US20140198107A1 (en) 2013-01-11 2014-07-17 Disney Enterprises, Inc. Fast rig-based physics simulation
US20140267312A1 (en) * 2013-03-15 2014-09-18 Dreamworks Animation Llc Method and system for directly manipulating the constrained model of a computer-generated character
US8860732B2 (en) 2010-09-27 2014-10-14 Adobe Systems Incorporated System and method for robust physically-plausible character animation
US20140327694A1 (en) 2012-01-19 2014-11-06 Microsoft Corporation Simultaneous Display of Multiple Content Items
US20140340644A1 (en) 2013-05-16 2014-11-20 Successfactors, Inc. Display accessibility for color vision impairment
US8914251B2 (en) 2008-07-11 2014-12-16 Nintendo Co., Ltd. Storage medium storing digital data correction program and digital data correction apparatus
US20150113370A1 (en) 2013-10-18 2015-04-23 Apple Inc. Object matching in a presentation application
US20150126277A1 (en) 2012-07-31 2015-05-07 Kabushiki Kaisha Square Enix (Also Trading As Square Enix Co., Ltd.) Data provision system, provision apparatus, execution apparatus, control method, and recording medium
US9067097B2 (en) * 2009-04-10 2015-06-30 Sovoz, Inc. Virtual locomotion controller apparatus and methods
US20150187113A1 (en) 2013-12-31 2015-07-02 Dreamworks Animation Llc Multipoint offset sampling deformation techniques
US9098766B2 (en) * 2007-12-21 2015-08-04 Honda Motor Co., Ltd. Controlled human pose estimation from depth image streams
US20150235351A1 (en) 2012-09-18 2015-08-20 Iee International Electronics & Engineering S.A. Depth image enhancement method
US9117134B1 (en) 2013-03-19 2015-08-25 Google Inc. Image merging with blending
US20150243326A1 (en) 2014-02-24 2015-08-27 Lyve Minds, Inc. Automatic generation of compilation videos
US9177409B2 (en) * 2010-04-29 2015-11-03 Naturalmotion Ltd Animating a virtual object within a virtual world
US9208613B2 (en) * 2011-02-16 2015-12-08 Kabushiki Kaisha Square Enix Action modeling device, method, and program
US20150381925A1 (en) 2014-06-25 2015-12-31 Thomson Licensing Smart pause for neutral facial expression
US20160026926A1 (en) 2012-11-12 2016-01-28 Singapore University Of Technology And Design Clothing matching system and method
US20160042548A1 (en) 2014-03-19 2016-02-11 Intel Corporation Facial expression and/or interaction driven avatar apparatus and method
US20160071470A1 (en) 2014-09-05 2016-03-10 Samsung Display Co., Ltd. Display apparatus, display control method, and display method

Patent Citations (184)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5274801A (en) 1988-04-29 1993-12-28 International Business Machines Corp. Artifical intelligence delivery system
US5548798A (en) 1994-11-10 1996-08-20 Intel Corporation Method and apparatus for solving dense systems of linear equations with an iterative method that employs partial multiplications using rank compressed SVD basis matrices of the partitioned submatrices of the coefficient matrix
US6253193B1 (en) 1995-02-13 2001-06-26 Intertrust Technologies Corporation Systems and methods for the secure transaction management and electronic rights protection
US5982389A (en) 1996-06-17 1999-11-09 Microsoft Corporation Generating optimized motion transitions for computer animated objects
US6088040A (en) 1996-09-17 2000-07-11 Atr Human Information Processing Research Laboratories Method and apparatus of facial image conversion by interpolation/extrapolation for plurality of facial expression components representing facial image
US5999195A (en) 1997-03-28 1999-12-07 Silicon Graphics, Inc. Automatic generation of transitions between motion cycles in an animation
US6064808A (en) 1997-08-01 2000-05-16 Lucent Technologies Inc. Method and apparatus for designing interconnections and passive components in integrated circuits and equivalent structures by efficient parameter extraction
US20020089504A1 (en) 1998-02-26 2002-07-11 Richard Merrick System and method for automatic animation generation
US6961060B1 (en) 1999-03-16 2005-11-01 Matsushita Electric Industrial Co., Ltd. Virtual space control data receiving apparatus,virtual space control data transmission and reception system, virtual space control data receiving method, and virtual space control data receiving program storage media
US6556196B1 (en) 1999-03-19 2003-04-29 Max-Planck-Gesellschaft Zur Forderung Der Wissenschaften E.V. Method and apparatus for the processing of images
US20020054054A1 (en) 2000-02-28 2002-05-09 Toshiba Tec Kabushiki Kaisha Window design alteration method and system
US20040027352A1 (en) 2000-10-02 2004-02-12 Mitsuru Minakuchi Device, system, method, and program for reproducing or transfering animation
US20020180739A1 (en) 2001-04-25 2002-12-05 Hugh Reynolds Method and apparatus for simulating soft object movement
US20030038818A1 (en) 2001-08-23 2003-02-27 Tidwell Reed P. System and method for auto-adjusting image filtering
US20060036514A1 (en) 2002-01-24 2006-02-16 Ryan Steelberg Dynamic selection and scheduling of radio frequency communications
US20050237550A1 (en) 2002-08-28 2005-10-27 Hu Shane C Automatic color constancy for image sensors
US7006090B2 (en) 2003-02-07 2006-02-28 Crytek Gmbh Method and computer program product for lighting a computer graphics image and a computer
US20060061574A1 (en) * 2003-04-25 2006-03-23 Victor Ng-Thow-Hing Joint component framework for modeling complex joint behavior
US20040227760A1 (en) 2003-05-14 2004-11-18 Pixar Animation Studios Statistical dynamic collisions method and apparatus
US7944449B2 (en) 2003-05-14 2011-05-17 Pixar Methods and apparatus for export of animation data to non-native articulation schemes
US20040227761A1 (en) 2003-05-14 2004-11-18 Pixar Statistical dynamic modeling method and apparatus
US20060149516A1 (en) 2004-12-03 2006-07-06 Andrew Bond Physics simulation apparatus and method
US20060262114A1 (en) 2005-03-23 2006-11-23 Electronic Arts Inc. Computer simulation of body dynamics including a solver that solves in linear time for a set of constraints using vector processing
US20060262113A1 (en) * 2005-03-23 2006-11-23 Electronic Arts Inc. Computer simulation of body dynamics including a solver that solves for position-based constraints
US20060217945A1 (en) 2005-03-23 2006-09-28 Electronic Arts Inc. Computer simulation of body dynamics including a solver that solves in linear time for a set of constraints
US20160078662A1 (en) * 2005-04-19 2016-03-17 Digitalfish, Inc. Techniques and workflows for computer graphics animation system
US7415152B2 (en) 2005-04-29 2008-08-19 Microsoft Corporation Method and system for constructing a 3D representation of a face from a 2D representation
US7403202B1 (en) 2005-07-12 2008-07-22 Electronic Arts, Inc. Computer animation of simulated characters using combinations of motion-capture data and external force modelling or other physics models
US20070085851A1 (en) 2005-10-19 2007-04-19 Matthias Muller Method of simulating dynamic objects using position based dynamics
US20070097125A1 (en) 2005-10-28 2007-05-03 Dreamworks Animation Llc Artist directed volume preserving deformation and collision resolution for animation
US8395626B2 (en) 2006-04-08 2013-03-12 Alan Millman Method and system for interactive simulation of materials
US8281281B1 (en) 2006-06-07 2012-10-02 Pixar Setting level of detail transition points
US20080049015A1 (en) 2006-08-23 2008-02-28 Baback Elmieh System for development of 3D content used in embedded devices
US20080152218A1 (en) 2006-10-27 2008-06-26 Kabushiki Kaisha Toshiba Pose estimating device and pose estimating method
US20080111831A1 (en) 2006-11-15 2008-05-15 Jay Son Efficient Panoramic Image Generation
US8142282B2 (en) 2006-11-15 2012-03-27 Microsoft Corporation Console integrated downloadable game service
US8398476B1 (en) 2007-02-02 2013-03-19 Popcap Games, Inc. Electronic game, such as a computer game involving removing pegs
US20110269540A1 (en) 2007-03-01 2011-11-03 Sony Computer Entertainment Europe Limited Entertainment device and method
US8100770B2 (en) 2007-04-20 2012-01-24 Nintendo Co., Ltd. Game controller, storage medium storing game program, and game apparatus
US20080268961A1 (en) 2007-04-30 2008-10-30 Michael Brook Method of creating video in a virtual world and method of distributing and using same
US20080316202A1 (en) 2007-06-22 2008-12-25 Microsoft Corporation Direct manipulation of subdivision surfaces using a graphics processing unit
US8154544B1 (en) 2007-08-03 2012-04-10 Pixar User specified contact deformations for computer graphics
US20090066700A1 (en) 2007-09-11 2009-03-12 Sony Computer Entertainment America Inc. Facial animation using motion capture data
US20120244941A1 (en) 2007-10-29 2012-09-27 Microsoft Corporation User to user game referrals
US20110119332A1 (en) 2007-11-14 2011-05-19 Cybersports Limited Movement animation method and apparatus
US9098766B2 (en) * 2007-12-21 2015-08-04 Honda Motor Co., Ltd. Controlled human pose estimation from depth image streams
US9947123B1 (en) 2008-02-22 2018-04-17 Pixar Transfer of rigs with temporal coherence
US8648863B1 (en) 2008-05-20 2014-02-11 Pixar Methods and apparatus for performance style extraction for quality control of animation
US20090315839A1 (en) 2008-06-24 2009-12-24 Microsoft Corporation Physics simulation-based interaction for surface computing
US8914251B2 (en) 2008-07-11 2014-12-16 Nintendo Co., Ltd. Storage medium storing digital data correction program and digital data correction apparatus
US20100134501A1 (en) 2008-12-01 2010-06-03 Thomas Lowe Defining an animation of a virtual object within a virtual world
US9256973B2 (en) 2008-12-31 2016-02-09 Lucasfilm Entertainment Company Ltd. Controlling animated character expression
US8624904B1 (en) 2008-12-31 2014-01-07 Lucasfilm Entertainment Company Ltd. Controlling animated character expressions
US8207971B1 (en) 2008-12-31 2012-06-26 Lucasfilm Entertainment Company Ltd. Controlling animated character expressions
US8540560B2 (en) 2009-03-27 2013-09-24 Infomotion Sports Technologies, Inc. Monitoring of physical training events
US20100251185A1 (en) 2009-03-31 2010-09-30 Codemasters Software Company Ltd. Virtual object appearance control
US9067097B2 (en) * 2009-04-10 2015-06-30 Sovoz, Inc. Virtual locomotion controller apparatus and methods
US20100277497A1 (en) 2009-04-30 2010-11-04 International Business Machines Corporation Method for highlighting topic element and system thereof
US20110187731A1 (en) 2009-07-10 2011-08-04 Yasuhiro Tsuchida Marker display control device, integrated circuit, and marker display control method
US20110012903A1 (en) 2009-07-16 2011-01-20 Michael Girard System and method for real-time character animation
US20130120439A1 (en) 2009-08-28 2013-05-16 Jerry G. Harris System and Method for Image Editing Using Visual Rewind Operation
US9483860B2 (en) 2009-09-18 2016-11-01 Samsung Electronics Co., Ltd. Apparatus and method to extract three-dimensional (3D) facial expression
US20110074807A1 (en) 2009-09-30 2011-03-31 Hitachi, Ltd. Method of color customization of content screen
US8406528B1 (en) 2009-10-05 2013-03-26 Adobe Systems Incorporated Methods and apparatuses for evaluating visual accessibility of displayable web based content and/or other digital images
US20110086702A1 (en) 2009-10-13 2011-04-14 Ganz Method and system for providing a virtual presentation including a virtual companion and virtual photography
US20110128292A1 (en) 2009-12-02 2011-06-02 Electronics And Telecommunications Research Institute Dynamics-based motion generation apparatus and method
US20110164831A1 (en) 2010-01-05 2011-07-07 Stmicroelectronics (Grenoble 2) Sas Method for detecting orientation of contours
US9177409B2 (en) * 2010-04-29 2015-11-03 Naturalmotion Ltd Animating a virtual object within a virtual world
US20110292055A1 (en) 2010-05-25 2011-12-01 Disney Enterprises, Inc. Systems and methods for animating non-humanoid characters with human motion data
US8599206B2 (en) 2010-05-25 2013-12-03 Disney Enterprises, Inc. Systems and methods for animating non-humanoid characters with human motion data
US8860732B2 (en) 2010-09-27 2014-10-14 Adobe Systems Incorporated System and method for robust physically-plausible character animation
US20120083330A1 (en) 2010-10-05 2012-04-05 Zynga Game Network, Inc. System and Method for Generating Achievement Objects Encapsulating Captured Event Playback
US20120115580A1 (en) 2010-11-05 2012-05-10 Wms Gaming Inc. Wagering game with player-directed pursuit of award outcomes
US9208613B2 (en) * 2011-02-16 2015-12-08 Kabushiki Kaisha Square Enix Action modeling device, method, and program
US20120220376A1 (en) 2011-02-25 2012-08-30 Nintendo Co., Ltd. Communication system, information processing apparatus, computer-readable storage medium having a program stored therein, and information processing method
US8267764B1 (en) 2011-04-21 2012-09-18 Wms Gaming Inc. Wagering game having enhancements to queued outcomes
US20120303343A1 (en) 2011-05-26 2012-11-29 Sony Computer Entertainment Inc. Program, Information Storage Medium, Information Processing System, And Information Processing Method.
US20130121618A1 (en) 2011-05-27 2013-05-16 Vikas Yadav Seamless Image Composition
US20120313931A1 (en) 2011-06-07 2012-12-13 Sony Computer Entertainment Inc. Image generating device, image generating method, and non-transitory information storage medium
US20130050464A1 (en) 2011-08-31 2013-02-28 Keyence Corporation Magnification Observation Device, Magnification Observation Method, And Magnification Observation Program
US20130063555A1 (en) 2011-09-08 2013-03-14 Casio Computer Co., Ltd. Image processing device that combines a plurality of images
CN102509272A (en) 2011-11-21 2012-06-20 武汉大学 Color image enhancement method based on color constancy
US20140327694A1 (en) 2012-01-19 2014-11-06 Microsoft Corporation Simultaneous Display of Multiple Content Items
US20130222433A1 (en) 2012-02-29 2013-08-29 Danny Chapman Animation processing
US20130235045A1 (en) 2012-03-06 2013-09-12 Mixamo, Inc. Systems and methods for creating and distributing modifiable animated video messages
US20130263027A1 (en) 2012-03-29 2013-10-03 FiftyThree, Inc. Methods and apparatus for providing a digital illustration system
US20130311885A1 (en) 2012-05-15 2013-11-21 Capso Vision, Inc. System and Method for Displaying Annotated Capsule Images
US20140002463A1 (en) 2012-06-27 2014-01-02 Pixar Skin and flesh simulation using finite elements, biphasic materials, and rest state retargeting
US9616329B2 (en) 2012-06-28 2017-04-11 Electronic Arts Inc. Adaptive learning system for video game enhancement
CN103546736A (en) 2012-07-12 2014-01-29 三星电子株式会社 Image processing apparatus and method
US20150126277A1 (en) 2012-07-31 2015-05-07 Kabushiki Kaisha Square Enix (Also Trading As Square Enix Co., Ltd.) Data provision system, provision apparatus, execution apparatus, control method, and recording medium
US20140066196A1 (en) 2012-08-30 2014-03-06 Colin William Crenshaw Realtime color vision deficiency correction
US20150235351A1 (en) 2012-09-18 2015-08-20 Iee International Electronics & Engineering S.A. Depth image enhancement method
US20160026926A1 (en) 2012-11-12 2016-01-28 Singapore University Of Technology And Design Clothing matching system and method
US20140198107A1 (en) 2013-01-11 2014-07-17 Disney Enterprises, Inc. Fast rig-based physics simulation
US20140198106A1 (en) 2013-01-11 2014-07-17 Disney Enterprises, Inc. Rig-Based Physics Simulation
US20140267312A1 (en) * 2013-03-15 2014-09-18 Dreamworks Animation Llc Method and system for directly manipulating the constrained model of a computer-generated character
US9117134B1 (en) 2013-03-19 2015-08-25 Google Inc. Image merging with blending
US20140340644A1 (en) 2013-05-16 2014-11-20 Successfactors, Inc. Display accessibility for color vision impairment
US9317954B2 (en) 2013-09-23 2016-04-19 Lucasfilm Entertainment Company Ltd. Real-time performance capture with on-the-fly correctives
US20150113370A1 (en) 2013-10-18 2015-04-23 Apple Inc. Object matching in a presentation application
US20160307369A1 (en) 2013-12-13 2016-10-20 Aveva Solutions Limited Image rendering of laser scan data
US20150187113A1 (en) 2013-12-31 2015-07-02 Dreamworks Animation Llc Multipoint offset sampling deformation techniques
US9990754B1 (en) 2014-02-04 2018-06-05 Electronic Arts Inc. System for rendering using position based finite element simulation
US20150243326A1 (en) 2014-02-24 2015-08-27 Lyve Minds, Inc. Automatic generation of compilation videos
US20160354693A1 (en) 2014-03-12 2016-12-08 Tencent Technology (Shenzhen) Company Limited Method and apparatus for simulating sound in virtual scenario, and terminal
US20160042548A1 (en) 2014-03-19 2016-02-11 Intel Corporation Facial expression and/or interaction driven avatar apparatus and method
US20180239526A1 (en) 2014-05-28 2018-08-23 Kiran Varanasi Method and systems for touch input
US20150381925A1 (en) 2014-06-25 2015-12-31 Thomson Licensing Smart pause for neutral facial expression
US9987749B2 (en) * 2014-08-15 2018-06-05 University Of Central Florida Research Foundation, Inc. Control interface for robotic humanoid avatar system and related methods
US20160071470A1 (en) 2014-09-05 2016-03-10 Samsung Display Co., Ltd. Display apparatus, display control method, and display method
CN105405380A (en) 2014-09-05 2016-03-16 三星显示有限公司 Display apparatus, display control method, and display method
US9811716B2 (en) 2014-11-21 2017-11-07 Korea Institute Of Science And Technology Method for face recognition through facial expression normalization, recording medium and device for performing the method
CN105825778A (en) 2015-01-26 2016-08-03 三星显示有限公司 Display device
US20160217723A1 (en) 2015-01-26 2016-07-28 Samsung Display Co., Ltd. Display device
US10388053B1 (en) 2015-03-27 2019-08-20 Electronic Arts Inc. System for seamless animation transition
US9827496B1 (en) 2015-03-27 2017-11-28 Electronic Arts, Inc. System for example-based motion synthesis
US10022628B1 (en) 2015-03-31 2018-07-17 Electronic Arts Inc. System for feature-based motion adaptation
US20160314617A1 (en) 2015-04-21 2016-10-27 Sony Computer Entertainment Inc. Device and method of selecting an object for 3d printing
US9858700B2 (en) 2015-05-13 2018-01-02 Lucasfilm Entertainment Company Ltd. Animation data transfer between geometric models and associated animation models
JP2018520820A (en) 2015-06-12 2018-08-02 ルーク アンダーソンLuke ANDERSON Method and system for inspecting visual aspects
US10856733B2 (en) 2015-06-12 2020-12-08 Okulo Ltd. Methods and systems for testing aspects of vision
US10163001B2 (en) * 2015-07-14 2018-12-25 Korea Institute Of Science And Technology Method and system for controlling virtual model formed in virtual space
US9928663B2 (en) * 2015-07-27 2018-03-27 Technische Universiteit Delft Skeletal joint optimization for linear blend skinning deformations utilizing skeletal pose sampling
US10783690B2 (en) * 2015-09-07 2020-09-22 Sony Interactive Entertainment America Llc Image regularization and retargeting system
US10755466B2 (en) 2015-09-21 2020-08-25 TuringSense Inc. Method and apparatus for comparing two motions
US9741146B1 (en) 2015-09-30 2017-08-22 Electronic Arts, Inc. Kinetic energy smoother
US10792566B1 (en) 2015-09-30 2020-10-06 Electronic Arts Inc. System for streaming content within a game application environment
US20170132827A1 (en) 2015-11-10 2017-05-11 Disney Enterprises, Inc. Data Driven Design and Animation of Animatronics
US20170301316A1 (en) 2016-04-13 2017-10-19 James Paul Farell Multi-path graphics rendering
US20170301310A1 (en) 2016-04-19 2017-10-19 Apple Inc. Displays with Improved Color Accessibility
US9984658B2 (en) 2016-04-19 2018-05-29 Apple Inc. Displays with improved color accessibility
US10403018B1 (en) 2016-07-12 2019-09-03 Electronic Arts Inc. Swarm crowd rendering system
US20180024635A1 (en) 2016-07-25 2018-01-25 Patrick Kaifosh Methods and apparatus for predicting musculo-skeletal position information using wearable autonomous sensors
US10118097B2 (en) 2016-08-09 2018-11-06 Electronic Arts Inc. Systems and methods for automated image processing for images with similar luminosities
US9826898B1 (en) 2016-08-19 2017-11-28 Apple Inc. Color vision assessment for displays
US10726611B1 (en) 2016-08-24 2020-07-28 Electronic Arts Inc. Dynamic texture mapping using megatextures
US20180068178A1 (en) 2016-09-05 2018-03-08 Max-Planck-Gesellschaft Zur Förderung D. Wissenschaften E.V. Real-time Expression Transfer for Facial Reenactment
US20180122125A1 (en) 2016-11-03 2018-05-03 Naturalmotion Ltd. Animating a virtual object in a virtual world
US10297066B2 (en) * 2016-11-03 2019-05-21 Naturalmotion Ltd. Animating a virtual object in a virtual world
US20180165864A1 (en) * 2016-12-13 2018-06-14 DeepMotion, Inc. Virtual reality system using multiple force arrays for a solver
US20180211102A1 (en) 2017-01-25 2018-07-26 Imam Abdulrahman Bin Faisal University Facial expression recognition
US20210019916A1 (en) 2017-03-31 2021-01-21 Electronic Arts Inc. Blendshape compression system
US11295479B2 (en) 2017-03-31 2022-04-05 Electronic Arts Inc. Blendshape compression system
US10096133B1 (en) 2017-03-31 2018-10-09 Electronic Arts Inc. Blendshape compression system
US10733765B2 (en) 2017-03-31 2020-08-04 Electronic Arts Inc. Blendshape compression system
US10810780B2 (en) * 2017-07-28 2020-10-20 Baobab Studios Inc. Systems and methods for real-time complex character animations and interactivity
US10878540B1 (en) 2017-08-15 2020-12-29 Electronic Arts Inc. Contrast ratio detection and rendering system
US11113860B2 (en) 2017-09-14 2021-09-07 Electronic Arts Inc. Particle-based inverse kinematic rendering system
US20200294299A1 (en) 2017-09-14 2020-09-17 Electronic Arts Inc. Particle-based inverse kinematic rendering system
US10535174B1 (en) 2017-09-14 2020-01-14 Electronic Arts Inc. Particle-based inverse kinematic rendering system
US10860838B1 (en) 2018-01-16 2020-12-08 Electronic Arts Inc. Universal facial expression translation and character rendering system
US20200302668A1 (en) * 2018-02-09 2020-09-24 Tencent Technology (Shenzhen) Company Limited Expression animation data processing method, computer device, and storage medium
US11062494B2 (en) * 2018-03-06 2021-07-13 Didimo, Inc. Electronic messaging utilizing animatable 3D models
JP2019162400A (en) 2018-03-19 2019-09-26 株式会社リコー Color vision examination device, color vision examination method, color vision examination program and storage medium
US20190325633A1 (en) * 2018-04-23 2019-10-24 Magic Leap, Inc. Avatar facial expression representation in multidimensional space
US10198845B1 (en) 2018-05-29 2019-02-05 LoomAi, Inc. Methods and systems for animating facial expressions
US20190392587A1 (en) 2018-06-22 2019-12-26 Microsoft Technology Licensing, Llc System for predicting articulated object feature location
US20200051304A1 (en) * 2018-08-08 2020-02-13 Samsung Electronics Co., Ltd Electronic device for displaying avatar corresponding to external object according to change in position of external object
US10314477B1 (en) 2018-10-31 2019-06-11 Capital One Services, Llc Systems and methods for dynamically modifying visual content to account for user visual impairment
US20200151963A1 (en) 2018-11-12 2020-05-14 Electronics And Telecommunications Research Institute Training data set generation apparatus and method for machine learning
US20200226811A1 (en) * 2019-01-14 2020-07-16 Samsung Electronics Co., Ltd. Electronic device for generating avatar and method thereof
US20210074004A1 (en) * 2019-01-18 2021-03-11 Beijing Sensetime Technology Development Co., Ltd. Image processing method and apparatus, image device, and storage medium
US20200258280A1 (en) * 2019-02-07 2020-08-13 Samsung Electronics Co., Ltd. Electronic device for providing avatar animation and method thereof
US20200306640A1 (en) * 2019-03-27 2020-10-01 Electronic Arts Inc. Virtual character generation from image or video data
US20200310541A1 (en) 2019-03-29 2020-10-01 Facebook Technologies, Llc Systems and methods for control schemes based on neuromuscular data
US20200364303A1 (en) 2019-05-15 2020-11-19 Nvidia Corporation Grammar transfer using one or more neural networks
US10818065B1 (en) * 2019-05-17 2020-10-27 Adobe Inc. Inverse kinematic solution blending in digital character animation
US10902618B2 (en) 2019-06-14 2021-01-26 Electronic Arts Inc. Universal body movement translation and character rendering system
US20210252403A1 (en) 2020-02-14 2021-08-19 Electronic Arts Inc. Color blindness diagnostic system
US11504625B2 (en) 2020-02-14 2022-11-22 Electronic Arts Inc. Color blindness diagnostic system
US20210279956A1 (en) 2020-03-04 2021-09-09 Disney Enterprises, Inc. Semantic deep face models
US20210308580A1 (en) 2020-04-06 2021-10-07 Electronic Arts Inc. Enhanced pose generation based on generative modeling
US11217003B2 (en) 2020-04-06 2022-01-04 Electronic Arts Inc. Enhanced pose generation based on conditional modeling of inverse kinematics
US11232621B2 (en) 2020-04-06 2022-01-25 Electronic Arts Inc. Enhanced animation generation based on conditional modeling
US20210312688A1 (en) 2020-04-06 2021-10-07 Electronic Arts Inc. Enhanced animation generation based on conditional modeling
US20220198733A1 (en) 2020-04-06 2022-06-23 Electronic Arts Inc. Enhanced pose generation based on conditional modeling of inverse kinematics
US20210312689A1 (en) 2020-04-06 2021-10-07 Electronic Arts Inc. Enhanced pose generation based on conditional modeling of inverse kinematics
US20210335039A1 (en) 2020-04-24 2021-10-28 Roblox Corporation Template based generation of 3d object meshes from 2d images
US20210390789A1 (en) 2020-06-13 2021-12-16 Qualcomm Incorporated Image augmentation for analytics
US20220020195A1 (en) 2020-07-15 2022-01-20 De-Identification Ltd. System and a method for artificial neural-network based animation
US20220051003A1 (en) 2020-08-14 2022-02-17 Fujitsu Limited Image synthesis for balanced datasets
US20220138455A1 (en) 2020-11-02 2022-05-05 Pinscreen, Inc. Normalization of facial images using deep neural networks
US20220222892A1 (en) 2021-01-11 2022-07-14 Pinscreen, Inc. Normalized three-dimensional avatar synthesis and perceptual refinement

Non-Patent Citations (60)

* Cited by examiner, † Cited by third party
Title
Ali, Kamran, and Charles E. Hughes. "Facial expression recognition using disentangled adversarial learning." arXiv preprint arXiv:1909.13135 (2019). (Year: 2019).
Anagnostopoulos et al., "Intelligent modification for the daltonization process of digitized paintings", International Conference on Computer Vision, published in 2007 by the Applied Computer Science Group.
Andersson, S., Goransson, J.: Virtual Texturing with WebGL. Master's thesis, Chalmers University of Technology, Gothenburg, Sweden (2012).
Avenali, Adam, "Color Vision Deficiency and Video Games", The Savannah College of Art and Design, Mar. 2013.
Badlani et al., "A Novel Technique for Modification of Images for Deuteranopic Viewers", May 2016.
Belytschko et al., "Assumed strain stabilization of the eight node hexahedral element," Computer Methods in Applied Mechanics and Engineering, vol. 105(2), pp. 225-260 (1993), 36 pages.
Belytschko et al., Nonlinear Finite Elements for Continua and Structures, Second Edition, Wiley (Jan. 2014), 727 pages (uploaded in 3 parts).
Blanz et al., "Reanimating Faces in Images and Video" Sep. 2003, vol. 22, No. 3, pp. 641-650, 10 pages.
Blanz V, Vetter T. A morphable model for the synthesis of 3D faces. In Proceedings of the 26th annual conference on Computer graphics and interactive techniques Jul. 1, 1999 (pp. 187-194). ACM Press/Addison-Wesley Publishing Co.
Chao et al., "A Simple Geometric Model for Elastic Deformations", 2010, 6 pgs.
Cook et al., Concepts and Applications of Finite Element Analysis, 1989, Sections 6-11 through 6-14.
Cournoyer et al., "Massive Crowd on Assassin's Creed Unity: AI Recycling," Mar. 2, 2015, 55 pages.
Dick et al., "A Hexahedral Multigrid Approach for Simulating Cuts in Deformable Objects", IEEE Transactions on Visualization and Computer Graphics, vol. X, No. X, Jul. 2010, 16 pgs.
Diziol et al., "Robust Real-Time Deformation of Incompressible Surface Meshes", to appear in Proceedings of the 2011 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (2011), 10 pgs.
Dudash, Bryan. "Skinned instancing." NVidia white paper (2007).
Fikkan, Eirik. Incremental loading of terrain textures. MS thesis. Institutt for datateknikk og informasjonsvitenskap, 2013.
Geijtenbeek, T. et al., "Interactive Character Animation using Simulated Physics", Games and Virtual Worlds, Utrecht University, The Netherlands, The Eurographics Association 2011, 23 pgs.
Georgii et al., "Corotated Finite Elements Made Fast and Stable", Workshop in Virtual Reality Interaction and Physical Simulation VRIPHYS (2008), 9 pgs.
Habibie et al., "A Recurrent Variational Autoencoder for Human Motion Synthesis", 2017, in 12 pages.
Halder et al., "Image Color Transformation for Deuteranopia Patients using Daltonization", IOSR Journal of VLSI and Signal Processing (IOSR-JVSP) vol. 5, Issue 5, Ver. I (Sep.-Oct. 2015), pp. 15-20.
Han et al., "On-line Real-time Physics-based Predictive Motion Control with Balance Recovery," Eurographics, vol. 33(2), 2014, 10 pages.
Hernandez, Benjamin, et al. "Simulating and visualizing real-time crowds on GPU clusters." Computación y Sistemas 18.4 (2014): 651-664.
Hu G, Chan CH, Yan F, Christmas W, Kittler J. Robust face recognition by an albedo based 3D morphable model. In 2014 IEEE International Joint Conference on Biometrics (IJCB), Sep. 29, 2014 (pp. 1-8). IEEE.
Hu, Guosheng, Face Analysis using 3D Morphable Models, Ph.D. Thesis, University of Surrey, Apr. 2015, pp. 1-112.
Irving et al., "Invertible Finite Elements for Robust Simulation of Large Deformation", Eurographics/ACM SIGGRAPH Symposium on Computer Animation (2004), 11 pgs.
Kaufmann et al., "Flexible Simulation of Deformable Models Using Discontinuous Galerkin FEM", Oct. 1, 2008, 20 pgs.
Kavan et al., "Skinning with Dual Quaternions", 2007, 8 pgs.
Kim et al., "Long Range Attachments—A Method to Simulate Inextensible Clothing in Computer Games", Eurographics/ACM SIGGRAPH Symposium on Computer Animation (2012), 6 pgs.
Klein, Joseph. Rendering Textures Up Close in a 3D Environment Using Adaptive Micro-Texturing. Diss. Mills College, 2012.
Komura et al., "Animating reactive motion using momentum-based inverse kinematics," Computer Animation and Virtual Worlds, vol. 16, pp. 213-223, 2005, 11 pages.
Lee, Y. et al., "Motion Fields for Interactive Character Animation", University of Washington, Bungie, Adobe Systems, 8 pgs, obtained Mar. 20, 2015.
Levine, S. et al., "Continuous Character Control with Low-Dimensional Embeddings", Stanford University, University of Washington, 10 pgs, obtained Mar. 20, 2015.
Macklin et al., "Position Based Fluids", to appear in ACM TOG 32(4), 2013, 5 pgs.
McAdams et al., "Efficient Elasticity for Character Skinning with Contact and Collisions", 2011, 11 pgs.
McDonnell, Rachel, et al. "Clone attack! perception of crowd variety." ACM Transactions on Graphics (TOG). vol. 27. No. 3. ACM, 2008.
Muller et al., "Adding Physics to Animated Characters with Oriented Particles", Workshop on Virtual Reality Interaction and Physical Simulation VRIPHYS (2011), 10 pgs.
Muller et al., "Meshless Deformations Based on Shape Matching", SIGGRAPH 2005, 29 pgs.
Muller et al., "Position Based Dymanics", VRIPHYS 2006, Oct. 21, 2014, Computer Graphics, Korea University, 23 pgs.
Muller et al., "Real Time Dynamic Fracture with Columetric Approximate Convex Decompositions", ACM Transactions of Graphics, Jul. 2013, 11 pgs.
Musse, Soraia Raupp, and Daniel Thalmann. "Hierarchical model for real time simulation of virtual human crowds." IEEE Transactions on Visualization and Computer Graphics 7.2 (2001): 152-164.
Nguyen et al., "Adaptive Dynamics With Hybrid Response," 2012, 4 pages.
O'Brien et al., "Graphical Modeling and Animation of Brittle Fracture", GVU Center and College of Computing, Georgia Institute of Technology, Reprinted from the Proceedings of ACM SIGGRAPH 99, 10 pgs, dated 1999.
Orin et al., "Centroidal dynamics of a humanoid robot," Auton Robot, vol. 35, pp. 161-176, 2013, 18 pages.
Parker et al., "Real-Time Deformation and Fracture in a Game Environment", Eurographics/ACM SIGGRAPH Symposium on Computer Animation (2009), 12 pgs.
Pelechano, Nuria, Jan M. Allbeck, and Norman I. Badler. "Controlling individual agents in high-density crowd simulation." Proceedings of the 2007 ACM SIGGRAPH/Eurographics symposium on Computer animation. Eurographics Association, 2007.
Qiao, Fengchun, et al. "Geometry-contrastive GAN for facial expression transfer." arXiv preprint arXiv:1802.01822 (2018). (Year: 2018).
Rivers et al., "FastLSM: Fast Lattice Shape Matching for Robust Real-Time Deformation", ACM Transactions on Graphics, vol. 26, No. 3, Article 82, Publication date: Jul. 2007, 6 pgs.
Ruiz, Sergio, et al. "Reducing memory requirements for diverse animated crowds." Proceedings of Motion on Games. ACM, 2013.
Rungjiratananon et al., "Elastic Rod Simulation by Chain Shape Matching with Twisting Effect", SIGGRAPH Asia 2010, Seoul, South Korea, December 15-18, 2010, ISBN 978-1-4503-0439-9/10/0012, 2 pgs.
Seo et al., "Compression and Direct Manipulation of Complex Blendshape Models", Dec. 2011, in 10 pgs.
Sifakis, Eftychios D., "FEM Simulations of 3D Deformable Solids: A Practitioner's Guide to Theory, Discretization and Model Reduction. Part One: The Classical FEM Method and Discretization Methodology", SIGGRAPH 2012 Course, Version 1.0 [Jul. 10, 2012], 50 pgs.
Stomakhin et al., "Energetically Consistent Invertible Elasticity", Eurographics/ACM SIGGRAPH Symposium on Computer Animation (2012), 9 pgs.
Thalmann, Daniel, and Soraia Raupp Musse. "Crowd rendering." Crowd Simulation. Springer London, 2013. 195-227.
Thalmann, Daniel, and Soraia Raupp Musse. "Modeling of Populations." Crowd Simulation. Springer London, 2013. 31-80.
Treuille, A. et al., "Near-optimal Character Animation with Continuous Control", University of Washington, 2007, 7 pgs.
Ulicny, Branislav, and Daniel Thalmann. "Crowd simulation for interactive virtual environments and VR training systems." Computer Animation and Simulation 2001 (2001): 163-170.
Vaillant et al., "Implicit Skinning: Real-Time Skin Deformation with Contact Modeling", (2013) ACM Transactions on Graphics, vol. 32 (n° 4). pp. 1-11. ISSN 0730-0301, 12 pgs.
Vigueras, Guillermo, et al. "A distributed visualization system for crowd simulations." Integrated Computer-Aided Engineering 18.4 (2011): 349-363.
Wu et al., "Goal-Directed Stepping with Momentum Control," Eurographics/ ACM SIGGRAPH Symposium on Computer Animation, 2010, 6 pages.
Zhang, Jiangning, et al. "Freenet: Multi-identity face reenactment." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020. (Year: 2020).

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11992768B2 (en) 2020-04-06 2024-05-28 Electronic Arts Inc. Enhanced pose generation based on generative modeling

Also Published As

Publication number Publication date
US20200394806A1 (en) 2020-12-17
US10902618B2 (en) 2021-01-26
US20210217184A1 (en) 2021-07-15

Similar Documents

Publication Publication Date Title
US11798176B2 (en) Universal body movement translation and character rendering system
US10860838B1 (en) Universal facial expression translation and character rendering system
US11145133B2 (en) Methods and systems for generating an animated 3D model based on a 2D image
Jacobson et al. Tangible and modular input device for character articulation
Seol et al. Creature features: online motion puppetry for non-human characters
JP7299414B2 (en) Image processing method, device, electronic device and computer program
WO2017044499A1 (en) Image regularization and retargeting system
Tong et al. Research on skeleton animation motion data based on Kinect
Feng et al. Fast, automatic character animation pipelines
CN114049468A (en) Display method, device, equipment and storage medium
WO2008116426A1 (en) Controlling method of role animation and system thereof
WO2024000480A1 (en) 3d virtual object animation generation method and apparatus, terminal device, and medium
GB2546815B (en) Animating a virtual object in a virtual world
WO2024169276A1 (en) Trajectory information processing method and apparatus, and computer device and readable storage medium
US10282883B2 (en) Hierarchy-based character rigging
Thiery et al. ARAPLBS: Robust and efficient elasticity‐based optimization of weights and skeleton joints for linear blend skinning with parametrized bones
Wang et al. Zero-shot pose transfer for unrigged stylized 3d characters
Kim et al. Human motion reconstruction from sparse 3D motion sensors using kernel CCA‐based regression
CN115908664B (en) Animation generation method and device for man-machine interaction, computer equipment and storage medium
KR102443260B1 (en) Method and system for providing companion animal remembrance service
Apostolakis et al. Natural user interfaces for virtual character full body and facial animation in immersive virtual worlds
EP4191541A1 (en) Information processing device and information processing method
CN108198234B (en) Virtual character generating system and method capable of realizing real-time interaction
Han et al. Customizing blendshapes to capture facial details
Sung Fast motion synthesis of quadrupedal animals using a minimum amount of motion capture data

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS Assignment

Owner name: ELECTRONIC ARTS INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PAYNE, SIMON;RUDY, DARREN;SIGNING DATES FROM 20191122 TO 20191125;REEL/FRAME:058783/0006

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE