WO2018081732A1 - Systems and methods for portable and persistent virtual identity - Google Patents

Systems and methods for portable and persistent virtual identity

Info

Publication number
WO2018081732A1
Authority
WO
WIPO (PCT)
Prior art keywords
asset
application
standards
base
virtual identity
Prior art date
Application number
PCT/US2017/059083
Other languages
English (en)
Inventor
James Thornton
Matthew Wilburn
Berkley Frei
Jon Middleton
Steve Spencer
Brian Howell
Chris Madsen
Jesse Gomez
Jesse Janzer
Original Assignee
Dg Holdings, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dg Holdings, Inc. filed Critical Dg Holdings, Inc.
Publication of WO2018081732A1

Classifications

    • A63F 13/65: Generating or modifying game content before or while executing the game program, automatically by game devices or servers from real-world data, e.g. measurement in live racing competition
    • A63F 13/30: Interconnection arrangements between game servers and game devices; interconnection arrangements between game devices; interconnection arrangements between game servers
    • A63F 13/35: Details of game servers
    • A63F 13/55: Controlling game characters or game objects based on the game progress
    • G06T 13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • A63F 2300/55: Details of game data or player data management
    • A63F 2300/5533: Game data structure using program state or machine event data, e.g. server keeps track of the state of multiple players in a multiplayer game
    • A63F 2300/575: Details of game services offered to the player for trading virtual items

Definitions

  • the present disclosure relates to electronic or other virtual representations of an individual or entity in a computer-generated environment.
  • the present disclosure relates to systems and methods for portable and persistent virtual identity across applications and platforms.
  • FIG. 1 is a system diagram for a persistent virtual identity system, according to one embodiment.
  • FIG. 2 is a block diagram of an asset transfer client, according to one embodiment.
  • FIG. 3 is a block diagram of an asset lookup and delivery service, according to one embodiment.
  • FIG. 4 is a block diagram of artist tools, according to one embodiment.
  • FIG. 5 is a block diagram of a 3D character SDK, according to one embodiment.
  • FIG. 6 is a block diagram of a persistent virtual identity system, according to one embodiment.
  • FIG. 7 is a flow chart for generating a nearest neighbor vertices index.
  • FIG. 8 is a flow chart for generating an occlusion index and an alpha injection map.
  • FIG. 9 is a flow chart for generating a heterogeneous mesh behavior index.
  • FIG. 10 is a flow chart for creating a new art style, according to one embodiment.
  • FIG. 11 is a flow chart for transitioning a 3D asset from one art style to another.
  • FIG. 12 is a flow diagram of a method for adjusting a following 3D asset based on the deformation of a related base 3D asset.
  • FIG. 13 is a block diagram for stacking multiple meshes according to one embodiment.
  • FIG. 14 is a block diagram for stacking multiple meshes according to one embodiment.
  • FIG. 15 is a graphical user interface 1500 of a persistent virtual identity system, according to one embodiment.
  • a virtual identity can be as simple as a profile, or more complex, including an avatar or other graphical representation, a persona (e.g., an aspect of character of the virtual identity that is presented to or perceived by others), and/or a reputation (e.g., beliefs or opinions that are generally held about the virtual identity).
  • virtual identity can be very complex in order to provide a fuller, richer identity to other entities in VR encounters or other interactions.
  • a virtual identity can be used to associate application data with a user. For example, a virtual identity can be used to correlate user data, application settings, pictures, and/or profiles with users, among other types of application data.
  • virtual identities are limited to a single application (e.g., specific to a given application and nontransferable to other applications). That is, a user may create a virtual identity for a given application and that virtual identity is not portable to, or persistent in, a different application. A user must create a separate virtual identity to use with each of a plurality of applications. As such, the user may have the burden of managing and/or maintaining a plurality of virtual identities. If the user experiences a change (e.g., a change of name, address, phone number, or the like), or desires to effectuate a change to a virtual identity (e.g. a change of an aesthetic feature such as an icon), then the user may have the burden of propagating the change through a plurality of virtual identities, each corresponding to a different application.
  • where the virtual identity is associated with a virtual application having a visual aspect, the burden on a user may be further increased.
  • the virtual identity may include a virtual avatar and other types of data associated with the virtual identity.
  • a user may create, manage, and/or maintain a different virtual avatar for each of a plurality of virtual applications. If a user makes a change to an avatar associated with one virtual identity (e.g., a change of hair color), the user would then need to make the same change to the avatar associated with each other virtual identity in which the user may interact.
  • if a user wants consistent (e.g., identical or similar) virtual identities across multiple applications, then when the user changes the hair color (or nearly any other visual aspect, such as shape, height, muscle definition, tattoos, sex, art style, or default animation sets) of an avatar (e.g., a bipedal humanoid character) for a given virtual identity in one application, the user will also have to make that same change in every other application in which the user desires the corresponding avatars and/or virtual identities to be consistent.
  • a persistent virtual identity (e.g., including such aspects as an avatar, persona, reputation, etc.) that is portable across applications and/or platforms may be desirable.
  • a single persistent virtual identity can be created, managed, and maintained for (and may be portable to) a plurality of applications, platforms, and/or virtual environments, whether social, business, gaming, entertainment, or any other platform that facilitates or otherwise wants users to have a visual presence in it.
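The single-identity model described above can be sketched in a few lines: rather than one identity object per application, every application holds a reference to the same persistent identity, so a single edit is visible everywhere. This is an illustrative sketch only; the class and field names are assumptions, not the patent's API.

```python
# Hypothetical sketch: one persistent identity shared by reference across
# applications, so a single edit is visible everywhere.

class PersistentIdentity:
    def __init__(self, **traits):
        self.traits = dict(traits)

    def update(self, **changes):
        self.traits.update(changes)

class Application:
    def __init__(self, name, identity):
        self.name = name
        self.identity = identity  # shared reference, not a per-app copy

    def render_avatar(self):
        return f"{self.name}: {self.identity.traits}"

identity = PersistentIdentity(hair_color="brown", height_cm=180)
apps = [Application("game", identity), Application("social", identity)]

# One change propagates to every application automatically.
identity.update(hair_color="red")
assert all(app.identity.traits["hair_color"] == "red" for app in apps)
```

With per-application identities, the same change would have to be repeated once per application, which is the burden the disclosure aims to remove.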
  • An application can be a standalone computer program.
  • An application can be a computer program to perform a group of coordinated functions, tasks, or activities for the benefit of a user.
  • a video game may be an example of an application.
  • An application that is a standalone computer program may optionally include or interact with online or remotely located components, such as data stores and cloud computing resources.
  • a platform can be a group of different applications, services, and/or computing resources that provide a broader service.
  • a platform can be or otherwise provide the environment in which an application is executed, and may include the hardware or the operating system (OS), or other underlying software (e.g., the stage on which applications and other computer programs can run).
  • a platform may be heavily tied to or directed to an online functionality, such that the different applications, services, and/or computing resources may be distributed, or remotely located and interconnected via a network.
  • a platform can provide a computing environment for a virtual environment (e.g., a virtual world).
  • a persistent virtual identity can be developed and/or employed in multiple applications, platforms, and/or virtual environments.
  • FIG. 1 is a system 100 diagram for a persistent virtual identity system according to one embodiment.
  • the system 100 can include 3D content 102, a content conversion system 106, artist tools 108, an asset lookup and delivery service 110 (and/or library), content standards 114, an asset transfer client 116, a 3D character SDK 118, and a ready room in virtual reality (VR) 120.
  • the system 100 can also include brand modules 112a, 112b, 112c, and 112d (sometimes referred to generally and collectively as "brand module(s) 112").
  • the system 100 can include a plurality of applications 122a, 122b, 122c, 122d (sometimes referred to generally and collectively as "application(s) 122"). As can be appreciated, in some embodiments the applications 122 may be on a single common computing platform (e.g., in a common VR environment). In other embodiments, one or more of the applications may be on different, unique computing platforms.
  • a user may interact with the system 100 by way of a user interface 124 that interfaces via the applications 122 and/or the ready room VR 120. The user interface 124 may be or operate on a user computing device.
  • a user of the system 100 may include electronic users, such as a bot or an AI application, in addition to human users.
  • the system 100 can provide the ability to create and/or maintain a persistent virtual identity and/or a corresponding 3D asset(s), and to enable transport of such between applications (e.g., different games) and/or platforms (e.g., different augmented reality (AR) or VR systems).
  • the persistent virtual identity can include a base 3D asset (e.g., an avatar model and modifications thereto), following 3D assets (e.g., clothing, accessories, etc.), history associated with a user of the system, social reputations, social standing, inventory, wardrobe (e.g., additional clothing following 3D assets, which may include pre-saved outfits), and/or trophies, among other items associated with the persistent virtual identity.
  • a virtual identity may include multiple 3D assets, which can include one or more base 3D assets (e.g., multiple avatars) and one or more following 3D assets.
  • the 3D asset(s) can be at least partially defined using geometric data.
  • the 3D asset(s) can further be presented as an avatar associated with the persistent virtual identity.
  • a "3D asset" referenced hereafter may be a base 3D asset, a following 3D asset, or a combination of one or more of these.
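The composition just listed (one or more base 3D assets plus following 3D assets, history, reputation, wardrobe, and trophies) can be sketched as a simple data model. The field names here are illustrative assumptions, not the patent's schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative data model for a persistent virtual identity (hypothetical names).

@dataclass
class Asset3D:
    name: str
    geometry: Dict = field(default_factory=dict)  # geometric data defining the asset

@dataclass
class PersistentVirtualIdentity:
    base_assets: List[Asset3D] = field(default_factory=list)       # e.g., avatars
    following_assets: List[Asset3D] = field(default_factory=list)  # clothing, accessories
    wardrobe: List[Asset3D] = field(default_factory=list)          # pre-saved outfits
    history: List[str] = field(default_factory=list)
    reputation: Dict[str, float] = field(default_factory=dict)
    trophies: List[str] = field(default_factory=list)

pvi = PersistentVirtualIdentity()
pvi.base_assets.append(Asset3D("avatar"))
pvi.following_assets.append(Asset3D("jacket"))
assert len(pvi.base_assets) == 1 and pvi.following_assets[0].name == "jacket"
```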
  • the applications 122 can be VR applications.
  • the applications 122 can be independent of each other.
  • the applications 122 can be gaming applications, social media applications, instructional applications, business applications, and/or any other type of application employing VR techniques.
  • the brand modules 112 can provide conformity standards for the applications 122. That is, a 3D asset generated in the system 100 can conform to the standards defined by the brand modules 112 to be compatible with the respective applications 122.
  • the applications 122 may all be on a single platform (e.g., HTC Vive®, Oculus Rift®, PlayStation VR®), or may be on different platforms.
  • the applications 122 and/or the brand modules 112 can be external to the system 100.
  • the applications 122 and/or the brand modules 112 can be implemented independent of the system 100 (e.g., separate and distinct from the ready room VR 120, the asset lookup and delivery service 110, and the content standards 114, although interfacing or otherwise communicating, such as through an API).
  • the applications 122 and the brand modules 112 are correlated. Stated differently, the brand modules 112 correspond to and provide standards, rules, protocols, and/or the like for the applications 122.
  • the application 122a is associated with the brand module 112a, the application 122b with the brand module 112b, the application 122c with the brand module 112c, and the application 122d with the brand module 112d.
  • the system 100 can enable a portable, persistent virtual identity to exist and be transported between the applications 122.
  • a developer and/or a user can integrate or otherwise interconnect with the system 100 (e.g., via applications 122 and/or user interface 124, and generally over a network) to both create a persistent virtual identity, and potentially to interact with other persistent virtual identities created by and corresponding to other users.
  • the user and/or the application developer can exercise control over the created persistent virtual identity.
  • the user can, through the user interface 124, interconnect with the ready room VR 120 to manipulate the virtual identity.
  • the user can also manipulate the virtual identity through applications 122.
  • FIG. 1 shows the user interface 124 interconnected with the system 100 through the ready room VR 120 and the application 122b.
  • the system 100 can include a three-dimensional (3D) character software development kit (SDK) 118 (e.g., an MCS Plugin).
  • the 3D character SDK 118 may be a library that can be implemented in an application 122.
  • the 3D character SDK 118 includes functionality to perform operations such as creating 3D assets (e.g., avatars in a scene), shaping them, adding/removing clothing and other following 3D meshes, etc.
  • the 3D character SDK 118 also includes functionality to obtain 3D models (for base 3D assets) and accompanying information from the local cache (and if a 3D model and/or accompanying information isn't in the local cache, the 3D character SDK 118 can transparently fetch the 3D model and/or accompanying information from the cloud).
  • the 3D character SDK 118 can also transform 3D models into game-ready objects, namely 3D assets.
  • the 3D character SDK 118 can also provide other asynchronous operations, served by an event or task queue.
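The local-cache behavior described above is a standard cache-through lookup: serve the 3D model from the local cache, and transparently fetch from the cloud on a miss. A minimal sketch, assuming dictionary-backed stores and hypothetical names:

```python
# Stand-ins for the SDK's local cache and the remote (cloud) asset store.
local_cache = {"base_figure": {"vertices": 1024}}
cloud_store = {"base_figure": {"vertices": 1024}, "jacket": {"vertices": 256}}

def get_model(asset_id):
    """Return a 3D model, fetching from the cloud transparently on a cache miss."""
    model = local_cache.get(asset_id)
    if model is None:
        model = cloud_store[asset_id]   # transparent fetch from the cloud
        local_cache[asset_id] = model   # populate the cache for next time
    return model

assert get_model("jacket")["vertices"] == 256
assert "jacket" in local_cache  # cached after the first fetch
```

The caller never needs to know whether the model came from disk or the cloud, which is what makes the fetch "transparent".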
  • the system 100 can also include an asset transfer client 116 (e.g., ready room plugin) and an asset lookup and delivery service 110 (and/or library).
  • the asset transfer client 116 and the asset lookup and delivery service 110 can be local and/or remote to the system 100. That is, the asset transfer client 116 and/or the asset lookup and delivery service 110 can be executed (e.g., hosted) on a network computing device remote to the system 100 that can be accessed by the
  • the asset lookup and delivery service 110 allows the asset transfer client 116 to request a specific 3D asset, with permutations on the request for a specific level of detail, material, and texture variation.
  • the asset lookup and delivery service 110 can also provide (e.g., stream) the 3D asset to the asset transfer client 116.
  • a material may be a combination of texture files, shaders, and different maps that shaders use (normal map, occlusion map) and other data such as specularity and metallic levels depending on the material type.
  • a material may be a visual layer that makes something within a 3D asset look like more than just polygons.
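The request described above (a specific 3D asset plus permutations for level of detail, material, and texture variation) might be shaped roughly as follows; the parameter names are assumptions for illustration, not the service's actual API.

```python
# Hypothetical request builder for the asset lookup and delivery service.
def build_asset_request(asset_id, lod=0, material="default", texture_variation=None):
    request = {"asset_id": asset_id, "lod": lod, "material": material}
    if texture_variation is not None:
        request["texture_variation"] = texture_variation
    return request

# Request the "jacket" asset at LOD 2 with a worn-leather look.
req = build_asset_request("jacket", lod=2, material="leather", texture_variation="worn")
assert req == {"asset_id": "jacket", "lod": 2, "material": "leather",
               "texture_variation": "worn"}
```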
  • the asset transfer client 116 can include two or more components remotely located from each other and communicating together, such as over a network.
  • a first component of the asset transfer client 116 can be implemented in the user interface 124 and/or the applications 122.
  • a second component of the asset transfer client 116 can be implemented in the system 100. The first component of the asset transfer client 116 can communicate with the second component of the asset transfer client 116, for example to request a 3D asset.
  • the component of the asset transfer client 116 implemented in the system 100 can request or otherwise obtain the 3D asset from the asset lookup and delivery service 110.
  • the system 100 can also include the content standards 114, which include standards for the brand modules 112 and/or the applications 122.
  • the content standards 114 can specify types of content or groups of content based upon the creator of the asset, the genre of the asset, or the art style of the asset.
  • the content standards 114 can specify types of content or groups of content through the use of filters.
  • the filters can operate on metadata associated with the 3D assets comprising the content or groups of content.
  • the metadata can identify a vendor from which the 3D asset originated, a genre of the 3D asset, and an artistic style of the 3D asset, among other types of data included in the metadata.
  • a genre can include, for example, a fantasy genre, a science fiction (sci-fi) genre, a comic book genre, and/or a contemporary genre, among other genres.
  • An artistic style can be defined by vendors who create new artistic styles.
  • the system 100 can have a default artistic style, such as a Nikae artistic style and a Minecraft-esque artistic style.
  • the content standards 114 can also specify what types of 3D assets are allowed in respective applications 122 and/or what types of 3D assets are not allowed in respective applications 122.
  • the content standards 114 can define that 3D assets with a default artistic style and a fantasy genre are allowed in a given corresponding application 122c and that 3D assets of a different artistic style and a different genre are not allowed in the application 122c.
  • the content standards 114 can also specify that 3D assets originating from a particular vendor are allowed in a corresponding application from the applications 122.
  • the content standards 114 can restrict the transfer of 3D assets to the application 122d to 3D assets that were originated by a vendor of the application 122d.
  • the content standards 114 can define 3D assets that are restricted from specific brand modules 112 and/or applications 122 to maintain consistent or inconsistent visual effect.
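The metadata filters described above can be sketched as a predicate over asset metadata: an application's content standards name the acceptable vendors, genres, and art styles, and only matching 3D assets are admitted. All keys and values below are hypothetical.

```python
# Hypothetical asset metadata records (vendor, genre, art style).
assets = [
    {"name": "sword",   "vendor": "acme",  "genre": "fantasy", "style": "default"},
    {"name": "blaster", "vendor": "acme",  "genre": "sci-fi",  "style": "default"},
    {"name": "cloak",   "vendor": "other", "genre": "fantasy", "style": "toon"},
]

def allowed(asset, standards):
    """Admit an asset only if every filtered field matches the standards."""
    return all(asset.get(key) in values for key, values in standards.items())

# A content standard admitting only default-style fantasy assets.
app_standards = {"genre": {"fantasy"}, "style": {"default"}}
admitted = [a["name"] for a in assets if allowed(a, app_standards)]
assert admitted == ["sword"]
```

Adding a `"vendor"` key to the standards would implement the vendor restriction described for application 122d.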
  • the content standards 114 can be implemented in the asset lookup and delivery service 110 to regulate content provided to the
  • the content standards 114 can also be implemented in
  • the artist tools 108, the content conversion system 106, and the ready room VR 120 may be supporting systems to the system 100. Stated otherwise, they may be supplemental and/or ancillary to the system 100, such that they could be implemented separately and distinctly (e.g., on a different computing device, network, or the like) from other elements of the system 100.
  • the artist tools 108 can modify 3D assets to make the 3D assets compatible with the 3D character SDK 118.
  • the content conversion system 106 can convert the 3D content 102 to be performant (e.g., to perform well, such as within performance metrics) for run time applications.
  • the content conversion system 106 can also convert the 3D content 102 to be compatible with the 3D character SDK 118.
  • the 3D content 102 can include, for example, high fidelity photo-real assets and/or low fidelity game-ready assets.
  • the 3D content 102 can be created, for example, by a 3D content creator such as a Daz® 3D application.
  • the ready room VR 120 can be an application.
  • the ready room VR 120 can be a hub and/or a starting point for persistent virtual identity creation.
  • the ready room VR 120 can also be a default process for moving persistent virtual identities between applications.
  • the 3D character SDK 118 can enable a base figure (e.g., a base 3D asset representing an avatar that is part of a persistent virtual identity, or other base 3D asset) to be changed into any shape and/or size and retain full functionality for fitting clothing, animating, and/or customizing.
  • the base 3D asset can be extendable to a potentially unlimited number of variations for creation of a unique avatar. Stated otherwise, characteristics of a 3D asset can be modified in a potentially unlimited number of combinations of variations.
  • the system 100 can enable the resultant unique avatar to retain a visual identity across artistic stylings (e.g., if the application 122a implements a first styling, for example a cartoon styling, and the application 122b implements a second styling, for example a realistic styling, then the unique avatar can retain a visual identity as the avatar is shown in a cartoon styling in the application 122a and a realistic styling in the application 122b).
  • the 3D character SDK 118 can include a number of modules and/or services for performing specific operations to modify or otherwise configure characteristics of 3D assets.
  • the 3D character SDK 118 can include a morphing module, a joint center transform (JCT) bone module, a standard shape module, a projection module, a head scanning to a dynamic mesh fitting module, a heterogeneous mesh behavior module, a hair module, and a smart props module.
  • the artist tools 108 are one or more standalone modules, potentially including computer-readable instructions, configured to convert 3D assets to a form/format compatible with the system 100.
  • the artist tools 108 can receive a 3D asset (e.g., geometry), which may be configured in a number of different formats.
  • the artist tools 108 can be configured to group the geometry into items; set up the levels of detail (LODs) for an item; generate geographical maps (geomaps); add self-defining behavioral information to objects for runtime simulation; set up materials and generate materials for different platforms; configure the geometries' multilayered characteristics for runtime-optimized multilayer depth and volumetric preservation between meshes; and/or set up zones on items for heterogeneous mesh behavior.
  • a geomap comprises geometry, a vertex index, and a map outlining an optimized correlation between a following mesh and a base mesh, to be used for real-time calculation and generation of multilayer depth solutions and/or projection solutions.
  • Projection references the act of projecting a deformer from one mesh to another.
  • the artist tools 108 also set up the custom shaping of a base 3D asset and set up the 3D assets in specific art styles to allow automatic avatar
  • the output of the artist tools 108 can be a single 3D asset and/or a collection of 3D assets, which can be compatible with the 3D character SDK 118.
  • the 3D assets modified by the artist tools 108 can be uploaded to the asset lookup and delivery service 110.
  • the 3D assets can further be configured at the asset lookup and delivery service 110 for user-specified distribution based upon rules and conditions associated with the 3D asset and as provided by the brand modules 112.
  • the ready room VR 120 can be a base application that facilitates interaction between a user and the system 100.
  • a base application can be different from the applications 122, such that the base application is a standalone application that can be executed independently from the applications 122.
  • the user can create and customize a 3D asset via the ready room VR 120 using additional content (e.g., following 3D assets) converted with the artist tools 108, made available through the asset lookup and delivery service 110, delivered through the asset transfer client library 116, and passed to the 3D character SDK 118.
  • the user can, via the user interface 124, access the ready room VR 120 to create and/or customize a 3D asset and launch at least one of the applications 122 through the ready room VR 120.
  • the user can create and customize an avatar (or otherwise configure a base 3D asset) via the application 122b.
  • the user can access the application 122b and through the application 122b access the ready room VR 120, or functionality of the ready room VR 120 (e.g., to create and customize a 3D asset).
  • functionality of the ready room VR 120 may be implemented or otherwise integrated with the application 122b, such that a user of the application 122b can create and/or customize an avatar or other 3D assets within the context of the application 122b.
  • the ready room VR 120 can showcase the core functionality of the system 100 from an end user's perspective.
  • the ready room VR 120 can provide both a place to customize a 3D asset, including an avatar, a shape, and/or clothing associated with the 3D asset, and a place to demonstrate the process and/or standards of "walking between applications."
  • the ready room VR 120 provides multiple means to transfer an identity between applications 122, interconnect between multiple open VR applications 122, and incorporate face scan data onto the avatar.
  • the ready room VR 120 can provide different example implementations of a user interface (UI) for shopping, previewing, and/or checking out of stores, among different types of checkout processes.
  • a user can use and customize the 3D asset.
  • a persistent virtual identity for the user can be created, and then the user can activate a mechanism to allow an acquired and/or created 3D asset (e.g., avatar) to transport (e.g., transfer) or step into any one of the applications 122. That is, a 3D asset associated with a user can retain an identity as the 3D asset transitions from the ready room VR 120 into one of the applications 122, and then provide end points for the user to return to the ready room VR 120.
  • the virtual identity of a user can be maintained consistent across multiple applications 122, and as the virtual identity is transported from one application 122, to the ready room VR 120, and/or to another application 122.
  • the 3D asset can also extract and retain items (e.g., a virtual weapon, or other object 3D asset) from the applications 122 that can persist in the ready room VR 120 as the 3D asset transitions from one of the applications 122 into the ready room VR 120 and then to another of the applications 122.
  • the persistent virtual identity can be associated with, and representative of a user that is external to the system 100.
  • a user can be a human user and/or an automated user.
  • transitioning a 3D asset from a first application (e.g., application 122a) to a second application (e.g., application 122b) can include conforming to standards set by the second application.
  • the standards can include a specific art style and/or theme.
  • Transitioning a 3D asset from a first application to a second application can include placing the 3D asset in a VR room (e.g., lobby ) of the first application 122a where the user and/or the 3D character SDK can initiate the required changes to the 3D asset before fully transitioning the 3D asset to the second application.
• the transfer of 3D assets between applications includes configuring a 3D asset so that the 3D asset's customizations are retained as the 3D asset transitions from one application to a different application, such that the settings and status of the 3D asset remain the same.
  • the transfer of 3D assets is one example of a persistent virtual identity.
  • the transfer of 3D assets can be accomplished by utilizing a local backdoor module and/or a remote restore module. These modules enable the transfer of an identity between applications 122.
• the local backdoor module can include an application 122a calling the asset transfer client 116 to export a 3D asset (e.g., a 3D asset file) and/or a persistent virtual identity (e.g., an identity file) comprising geometry, skinning, rig, textures, materials, and shaders of the current 3D asset with associated items in use, and/or any additional metadata describing the 3D asset and/or persistent virtual identity.
  • the application 122a launches the application 122b with reference to the local identity file, and then shuts itself down.
• the application 122b can access the identity and request the local identity definition from the asset transfer client 116 and load the identity into the application 122b.
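By way of illustration only, the local backdoor handoff described above can be sketched as follows (Python; the file layout and function names are hypothetical assumptions, not the actual asset transfer client API):

```python
import json
import os
import tempfile

def export_identity(asset):
    """Write the current 3D asset definition (geometry references,
    textures, metadata) to a local identity file and return its path."""
    fd, path = tempfile.mkstemp(suffix=".identity.json")
    with os.fdopen(fd, "w") as f:
        json.dump(asset, f)
    return path

def launch_with_identity(app_name, identity_path):
    """Stand-in for launching the destination application with a reference
    to the local identity file; the real system would spawn the destination
    process and the source application would then shut itself down."""
    with open(identity_path) as f:
        identity = json.load(f)
    return {"app": app_name, "identity": identity}

avatar = {"figure": "base_01", "items": ["helmet", "sword"], "textures": ["skin.png"]}
path = export_identity(avatar)
launched = launch_with_identity("application_122b", path)
assert launched["identity"] == avatar  # the identity survives the handoff
```

The key property sketched here is that the identity definition round-trips through local storage unchanged, so the destination application loads exactly what the source application exported.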
• the remote restore module can be configured to cause the application 122a to call the asset transfer client 116 to push the identity definition metadata to the asset lookup and delivery service 110.
  • the application 122a can then launch the application 122b with an identity string, and then shut itself down.
• the application 122b can request that the asset transfer client 116 call the asset lookup and delivery service 110 with the identity string.
  • the application 122b can likewise retrieve metadata associated with the persistent virtual identity.
  • the application 122b can use either local 3D assets (e.g., locally stored) or remote 3D assets (e.g., streamed or otherwise provided or accessed from a remote location) to render the avatar.
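For illustration, the remote restore path can be sketched as a push of identity metadata keyed by an identity string, which the destination application later retrieves (Python; the in-memory store and names are hypothetical stand-ins for the asset lookup and delivery service 110):

```python
# In-memory stand-in for the remote asset lookup and delivery service.
_remote_identities = {}

def push_identity(identity_string, metadata):
    """Source application pushes the identity definition metadata up
    before launching the destination application and shutting down."""
    _remote_identities[identity_string] = metadata

def restore_identity(identity_string):
    """Destination application requests the metadata by identity string
    and can then render the avatar from local or remote 3D assets."""
    return _remote_identities[identity_string]

push_identity("user42#avatar1", {"figure": "base_01", "items": ["helmet"]})
restored = restore_identity("user42#avatar1")
assert restored["items"] == ["helmet"]
```

The identity string is the only state that needs to be passed between the two applications; everything else is resolved remotely.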
• the asset transfer client 116 can comprise one or more components.
• the asset transfer client 116 can comprise a client and a server.
  • the client can be implemented in the applications 122 and/or computing devices on which the applications 122 are executing.
• the server of the asset transfer client 116 can be implemented in the system 100.
  • the client can communicate with the server to transfer a 3D asset from the system 100 to the computing device of the applications 122.
  • the user can select a destination application 122b from a source application 122a.
  • a gate or portal may be generated within the source application 122a.
• the source application may portray the gate and/or portal with a visual appearance branded for the destination application 122b.
• the gate and/or portal may transition the 3D asset and/or persistent virtual identity from the source application 122a to a virtual space (e.g., referred to as the "airlock") that is configurable and customized by the destination application 122b (e.g., a destination application vendor and/or the corresponding brand module 112b).
  • the mechanism to trigger the transfer of a 3D asset may include walking and/or other locomotion methods within a VR environment provided by the source application 122a toward the gate or portal of the destination application 122b.
  • the transferring of the 3D asset and/or the persistent virtual identity from source application to the virtual space through the virtual portal may trigger a VR passport check.
• the VR passport check compares clothing and/or an art style associated with the 3D asset and/or the persistent virtual identity with vendor-specific standards of the destination application 122b. If the 3D asset and/or persistent virtual identity does not conform to the standards of the destination application 122b, the user is given an opportunity to change clothing, art style, or any other aspect associated with the 3D asset and/or persistent virtual identity to meet those standards. Once the standards are met, a launch mechanism (e.g., passing through another virtual portal, pressing a button, or simply the act of meeting the standards) initiates a transfer of the 3D asset and/or the persistent virtual identity from the source application 122a to the destination application 122b.
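A VR passport check of this kind can be sketched as a simple conformance function (Python; the standards schema shown — an art style plus a banned-items list — is an illustrative assumption, not the disclosed data model):

```python
def vr_passport_check(asset, standards):
    """Return the list of aspects of the 3D asset that fail the
    destination application's vendor-specific standards."""
    violations = []
    if standards.get("art_style") and asset.get("art_style") != standards["art_style"]:
        violations.append("art_style")
    banned = set(standards.get("banned_items", []))
    if banned & set(asset.get("items", [])):
        violations.append("items")
    return violations

asset = {"art_style": "realistic", "items": ["sword", "helmet"]}
standards = {"art_style": "cartoon", "banned_items": ["sword"]}
assert vr_passport_check(asset, standards) == ["art_style", "items"]

# After the user conforms the asset, the check passes and launch proceeds.
asset = {"art_style": "cartoon", "items": ["helmet"]}
assert vr_passport_check(asset, standards) == []
```

Returning the list of violations, rather than a bare boolean, lets the airlock UI tell the user exactly which aspects to change before the launch mechanism is enabled.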
  • a set of standards between applications 122 and vendors can be defined.
  • the standards can foster an increased level of persistence and transfer to exist between different applications 122.
  • the standards enable enhanced functionality to allow standard behavior and transfer of assets or mechanics between disparate applications 122.
  • an application agnostic content interchange can be defined to facilitate the association between 3D assets and/or persistent virtual identities and a given application 122a (e.g., a source application 122a) and the transfer of the persistent virtual identity to other applications 122 (e.g., a destination application 122b).
  • Transferring the persistent virtual identity and/or 3D asset can include losing permanence in the source application 122a and creating permanence in the destination application 122b with a conforming set of behaviors, mechanics, and appearances.
  • face scan data can be associated with a dynamic mesh.
  • Associating scan data with a dynamic mesh can include taking face scan data and changing a base figure, associated with the 3D asset and/or persistent virtual identity, to incorporate the face scan data such that the base figure retains the same mesh topology while retaining functionality for further shaping of the mesh (e.g., making the face narrower, nose larger, ears pointed, etc.).
  • the face scan data can be placed on a dynamic mesh.
  • the face scan data can be 3D scanner generated and/or photogrammetry generated (e.g., mesh and texture).
  • the face scan data can also be generated using various images and/or other means. Placing the face scan data on a dynamic mesh can deform the base figure associated with a 3D asset and/or persistent virtual identity to match the visual appearance of the face scan data. Placing the face scan data on a dynamic mesh can generate texture to match the face scan data on the base figure associated with the 3D assets and/or persistent virtual identity.
  • the face scan data can be compared with the base figure to identify where key facial and head landmarks are on both sets of data (e.g., face scan data and base figure and/or base 3D asset).
  • the base mesh associated with the base figure is deformed to the same shape as the face scan data using automated adjustments of existing blend shapes for each key region of the face.
  • a new blend shape can be generated for the base figure to match the face scan data.
  • the face scan generated texture can be analyzed and, using key face and head landmarks, the texture can be rebuilt to fit the base figure's UV map.
  • the face scan generated texture can comprise multiple texture files. The multiple texture files can be combined into a single texture file for the head of a base figure.
  • Fitting face scan data to a base figure can be performed using a custom rig and geomap technology to compare and match the base figure mesh to the face scan data.
• blend shaping, morphing, and deforming reference a set of data attached to a mesh that contains positional deltas on geometry and bones, allowing the mesh to change shape without changing its fundamental geometry and/or rig.
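The positional-delta mechanism can be illustrated with a minimal sketch (Python; the tuple-based vertex representation is a simplifying assumption — a real implementation would operate on engine-native mesh buffers):

```python
def apply_blend_shape(base_vertices, deltas, weight):
    """Apply per-vertex positional deltas to a mesh at a given weight
    (0..1). Vertex count and ordering are unchanged, so the fundamental
    geometry and rig are preserved."""
    return [
        (x + weight * dx, y + weight * dy, z + weight * dz)
        for (x, y, z), (dx, dy, dz) in zip(base_vertices, deltas)
    ]

base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
nose_wider = [(0.0, 0.0, 0.0), (0.2, 0.0, 0.0)]  # one delta per vertex
morphed = apply_blend_shape(base, nose_wider, 0.5)
assert morphed == [(0.0, 0.0, 0.0), (1.1, 0.0, 0.0)]
assert len(morphed) == len(base)  # topology retained
```

Because only deltas are stored, several blend shapes can be layered at different weights on the same base mesh, which is what allows a face-scan fit to coexist with further shaping (narrower face, larger nose, etc.).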
  • the ready room VR 120 can associate the face scan data with the 3D asset such that the 3D asset retains the full customization and compatibility of the base figure without any scanned data.
  • a 3D asset can be configured with the face scan data.
  • the face scan data can be provided, by the user, by uploading a mesh to a server associated with system 100 through at least one of a web form or mobile application.
  • the system 100 can be implemented in a single computing device and/or over a plurality of computing devices.
  • each of the components of the system 100 can be implemented using independent computing devices coupled via a network.
  • FIG. 2 is a block diagram of an asset transfer client 216 according to one embodiment.
  • the asset transfer client 216 can receive a 3D asset from the asset lookup and delivery service 210.
  • the asset transfer client 216 can comprise an asset query module 230, filters 232, persona definition module 234, assets 236, and an asset fetch module 238.
• the asset transfer client 216 can process the communications (e.g., messages) between remote services and the local handling of assets (e.g., local to the applications 122 in FIG. 1).
• the asset transfer client 216 can operate independently of the 3D character SDK 118 in FIG. 1, as the asset transfer client 216 is generic, is not tied to any specific 3D engine, and provides an application program interface (API) and/or local service for the 3D character SDK 118 to call against to obtain 3D assets. That is, a client of the asset transfer client 216 can communicate with a server of the asset transfer client 216 to obtain 3D assets that can then be passed to the 3D character SDK 118.
• the 3D character SDK 118 can also include a client implemented in the computing device of the applications 122 and a server implemented in the system 100 in FIG. 1.
  • the logical paths in the asset transfer client 216 can perform an asset query via the asset query module 230 and/or an asset filter via the filters 232.
  • the logical paths in the asset transfer client 216 can also access a list of details associated with a 3D asset (e.g., an avatar) via the persona definition module 234.
  • the asset transfer client 216 can also access a specific 3D asset via the asset module 236 and decrypt and/or cache the 3D asset locally.
• decrypting and/or caching a 3D asset locally can include receiving an encrypted 3D asset from the identity system (e.g., system 100 in FIG. 1), decrypting the 3D asset, re-encrypting the 3D asset using a key unique to the user, application, and/or associated computing device, and storing the re-encrypted 3D asset.
  • Examples of the functionality built into those two paths include, but are not limited to, the following: authentication of accounts globally and per vendor channel; encryption and decryption of 3D assets; querying of assets for what is available for vendor channels or for what matches user inputted criteria; reading and updating identity definitions; caching and handling local caching of 3D assets; and shopping and purchasing 3D assets.
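The decrypt-and-recache step can be sketched as follows (Python; the XOR keystream is a deliberately simplified placeholder cipher for illustration — a real client would use an authenticated cipher, and the function names are hypothetical):

```python
import hashlib

def _xor_cipher(data: bytes, key: bytes) -> bytes:
    # Placeholder symmetric cipher: XOR against a SHA-256-derived
    # keystream. Illustrative only; not cryptographically sound.
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

def cache_asset(encrypted_asset, delivery_key, user_key, cache):
    """Decrypt a delivered 3D asset, re-encrypt it against a key unique
    to the user/application/device, and store it in the local cache."""
    plaintext = _xor_cipher(encrypted_asset, delivery_key)
    cache["asset"] = _xor_cipher(plaintext, user_key)
    return plaintext

def load_cached_asset(cache, user_key):
    """Later loads only need the local user key, not the delivery key."""
    return _xor_cipher(cache["asset"], user_key)

asset = b"geometry+textures"
delivered = _xor_cipher(asset, b"delivery-key")  # as received from the service
cache = {}
cache_asset(delivered, b"delivery-key", b"user-key", cache)
assert load_cached_asset(cache, b"user-key") == asset
```

Re-encrypting against a per-user key means the cached copy is useless if copied to another account or device, while still allowing a single download to be reused by all local applications.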
• the asset transfer client 216 can run as a compiled-against library as part of the application and/or as a local service that handles calls.
  • the asset transfer client 216 can process communications, across all applications, with the asset lookup and delivery service 210, local download and storage of 3D assets, and local caching strategies to allow a single download of a 3D asset which is then useable by all applications that request the 3D asset.
  • the asset transfer client 216 can register a new account, create a new identity, update avatar settings, update social rating and reputation data, retrieve a list of content that is associated to an avatar, retrieve a list of potentially wearable content (e.g., following 3D asset) that is available through that application, store requests, download content requests, stream content requests, and any other remote calls that might need to be made.
  • the asset transfer client 216 may include two logical paths of communication for querying for assets.
• the query module 230 can be used for listing and retrieving metadata and details about 3D assets that are available through the brand modules 112 in FIG. 1, and queries that are specific to the avatar and/or a persistent virtual identity comprising a persona definition as defined by the persona definition module 234.
  • the asset query module 230 provides a list of queries that then get passed to the filter module 232.
  • the filter module 232 allows the 3D assets to be sorted and filtered by various data types.
  • the various data types can include, for example, genre, base figure, clothing type, vendor, etc.
  • the persona definition module 234 can store data about a persistent virtual identity and specifically what virtual objects an avatar, associated with the 3D asset, can use and is currently using.
  • the persona definition module 234 can define, for example, a username, clothing associated with the 3D asset, and/or social reputation associated with the persistent virtual identity.
  • the asset module 236 can also pass the request to the asset fetch module 238.
  • the asset fetch module 238 can identify local content versus remote content, and as necessary download, decrypt and re-encrypt content against the downloading account.
• the asset fetch module 238 can also provide the content to the requesting plugin (e.g., the 3D character SDK 118) and run any cache cleanup routines as necessary.
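The query → filter → fetch flow through these modules can be sketched as a small pipeline (Python; the catalog layout and field names such as `genre` and `clothing_type` follow the data types listed above, but the concrete schema is an assumption):

```python
# Illustrative in-memory catalog standing in for the brand modules' content.
catalog = [
    {"id": "a1", "genre": "fantasy", "clothing_type": "armor", "vendor": "v1"},
    {"id": "a2", "genre": "sci-fi", "clothing_type": "suit", "vendor": "v2"},
    {"id": "a3", "genre": "fantasy", "clothing_type": "robe", "vendor": "v1"},
]

def asset_query(vendor):
    """Asset query module: list assets available through a brand module."""
    return [a for a in catalog if a["vendor"] == vendor]

def asset_filter(assets, **criteria):
    """Filter module: narrow by data types such as genre or clothing type."""
    return [a for a in assets if all(a.get(k) == v for k, v in criteria.items())]

def asset_fetch(asset_id):
    """Asset fetch module: resolve local vs. remote content (stubbed here)."""
    return next(a for a in catalog if a["id"] == asset_id)

results = asset_filter(asset_query("v1"), genre="fantasy", clothing_type="robe")
assert [a["id"] for a in results] == ["a3"]
assert asset_fetch("a3")["clothing_type"] == "robe"
```

Each stage is independent, mirroring the description above: queries produce candidate lists, the filter module sorts and narrows them, and only the final selection is passed to the fetch module for download or cache lookup.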
  • FIG. 3 is a block diagram of an asset lookup and delivery service 310 according to one embodiment.
  • the asset lookup and delivery service 310 can include an admin module 342, a ready room cloud module 348, a store module 350, and a content delivery network 352.
  • the asset lookup and delivery service 310 can be in communication with the asset transfer client 316.
  • the asset lookup and delivery service 310 can be a remote hypertext transfer protocol (HTTP)/2.0 over a secure socket layer (SSL) based application.
  • the asset lookup and delivery service 310 can include needed end points for the asset transfer client 316 to enable communication between the asset lookup and delivery service 310 and the asset transfer client 316.
  • the asset lookup and delivery service 310 can include an admin module 342 comprising administrative graphical user interfaces (GUIs) for managing the identity system 100 and for individual vendors to manage content and stores within or associated with the system 100.
  • the admin module 342 can include an internal admin module 344 to provide vendor setup capabilities and reporting capabilities.
  • the admin module can also include a vendor admin module 346 to provide vendor-to-vendor capabilities and vendor-to-user capabilities.
  • the vendor-to-vendor capabilities can include dress code enforcement and restricted internet protocol (IP) enforcement.
  • the vendor-to-user capabilities can include criteria based access.
• the asset lookup and delivery service 310 provides the data collection point for a cross-application social and reputation system, additionally adding a level of persistence to the actions associated with a user and/or a persistent virtual identity.
  • the asset lookup and delivery service 310 can also define an access that users and/or vendors have to a data collection point including the reputation data.
  • the asset lookup and delivery service 310 can also include a rating system, for user ratings of vendors, with configurable vendor settings to give input and/or weight to social reputation.
  • the asset lookup and delivery service 310 can be divided into three subsystems: store module 350 (e.g., store API), content delivery network (CDN) 352, and ready room cloud module 348.
  • the ready room cloud module 348 can comprise an API for the asset lookup and delivery service 310.
• the ready room cloud module 348 can authorize the manipulation of 3D assets, can save and/or load the state of a 3D asset, can determine a visual description of a 3D asset, and can rate interactions with the user interface 124 and/or applications 122 (see FIG. 1).
  • Each of the subsystems can be accessed independently from each other.
  • the store module 350 provides shopping and purchasing functionality such as a vendor restricted list and a checkout module.
  • the checkout module can provide checkout functionalities associated with a financial transaction.
  • the CDN 352 provides a fast and optimized asset lookup and delivery of assets, along with streaming support.
• the ready room cloud module 348 provides the functionality to support the asset transfer client library 316 and/or the admin module 342.
  • the asset lookup and delivery service 310 can be a remote service to upload content prepared by the artist tools.
  • the asset lookup and delivery service 310 can be configured to stream the content prepared by the artist tools to the clients (e.g., applications 122).
  • the asset lookup and delivery service 310 can provide the internal admin module 344 and the vendor admin module 346 that interact with the admin module 342.
  • Data and assets can be passed from the admin module 342 to the ready room cloud module 348, which marshals the data to be sent either directly down to clients as requested or passed onto the store module 350 and/or the CDN 352 to pass down to end clients including the asset transfer client library/service 316 implemented in associated applications.
  • the internal admin module 344 initiates and monitors systems and vendors.
  • the internal admin module 344 can provide insight into metrics such as number of users and load across the system, set up and manage vendors, configure global shopping parameters, and/or generate reports across the system.
  • the vendor admin module 346 uploads and manages 3D assets.
• the vendor admin module 346 provides a user the ability to upload 3D assets, configure the 3D assets with any additional data needed, set up 3D assets (for purchase or free) as default associations for users that connect to their brand module, and set up access control lists (ACLs) (e.g., restricted IP) for 3D assets and the dress code standards that their brand module enforces.
• ACLs allow a vendor to set up what content from other brand modules is allowed in, and in which brand modules the content for this brand module is allowed to be consumed. This allows a high degree of control for a vendor to make sure items to be loaded into their application are appropriate, stylistically fitting, and on brand as desired.
  • the dress code standards (e.g., VR passport) additionally allow a vendor to put restrictions on what can be used or worn in their application and if there are required artistic styles.
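For illustration, a vendor's ACL and dress code configuration can be sketched as a single admission predicate (Python; the configuration schema is an assumed example, not the vendor admin module's actual format):

```python
# Hypothetical per-vendor configuration as the vendor admin module
# might store it: which source brands are admitted, and a dress code.
vendor_config = {
    "allowed_source_brands": {"brand_a", "brand_b"},
    "dress_code": {"required_style": "cartoon"},
}

def item_admitted(item, config):
    """Check an incoming item against a vendor's ACL and dress code."""
    if item["brand"] not in config["allowed_source_brands"]:
        return False  # ACL: source brand module not admitted
    required = config["dress_code"].get("required_style")
    return required is None or item["style"] == required

assert item_admitted({"brand": "brand_a", "style": "cartoon"}, vendor_config)
assert not item_admitted({"brand": "brand_x", "style": "cartoon"}, vendor_config)
assert not item_admitted({"brand": "brand_b", "style": "realistic"}, vendor_config)
```

The ACL gate and the dress code gate are evaluated separately, which matches the description above: one controls which brand modules' content is admitted at all, and the other restricts what admitted content may be worn or used.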
  • the admin module 342 is the data store and logical component that handles the processing and prepping of the content (e.g., 3D assets), serving largely as the systemic point of authority.
  • the admin module 342 provides 3D assets and associated data for distribution.
• the admin module 342 may provide a 3D asset over a network, such as a content delivery network (CDN), for delivery to the ready room VR 120 or an application 122, facilitated by the asset transfer client 116.
• the admin module 342 takes uploaded 3D assets from the artist tools 108, reprocesses and breaks each 3D asset into a more streaming-optimal format, and makes it available for download (e.g., on a public-facing URL) that a ready room 120 or an application 122 can authenticate against to download.
  • the ready room cloud module 348 can receive data in a per vendor view from the admin module 342 to allow fast queries for data and asset sets specific to what a vendor allows.
  • the ready room cloud module 348 passes data specific to store functionality (e.g., shopping, previewing, and/or checkout) to the store module 350.
  • the ready room cloud module 348 also provides the data to the CDN 352 to handle asset resolution and streaming of assets.
• the CDN 352 downloads assets for different level of detail (LOD) levels or different material presets programmatically.
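Programmatic LOD selection of this kind can be illustrated with a minimal sketch (Python; the distance thresholds and variant file names are invented for the example and are not part of the disclosed CDN):

```python
# Available variants for one asset, keyed by LOD level (0 = highest
# detail), as a CDN might expose them.
variants = {0: "asset_lod0.bin", 1: "asset_lod1.bin", 2: "asset_lod2.bin"}

def pick_lod(distance):
    """Map a viewing distance to an LOD level; thresholds are illustrative."""
    if distance < 5.0:
        return 0
    if distance < 20.0:
        return 1
    return 2

assert variants[pick_lod(2.0)] == "asset_lod0.bin"   # close-up: full detail
assert variants[pick_lod(50.0)] == "asset_lod2.bin"  # distant: lightest variant
```

Because the client chooses the variant programmatically, only the bytes actually needed for the current view are streamed down.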
  • FIG. 4 is a block diagram of artist tools 408 according to one embodiment.
  • the artist tools 408 includes an import module 460, a materials module 462, a geometry module 464, an application module 468, and a preflight module 470.
  • a user can generate content 456 in an external or third- party 3D content creation application (e.g., Maya, Blender, and/or 3DS Max) and export the content 456 to a filmbox format (e.g., FBX), a portable network graphic (PNG) format, and/or a joint photographic expert group (JPEG) format (e.g., JPG format), among other possible formats.
  • the 3D artist can also export textures and materials, in standard formats, associated with the content created using the 3D content creation application.
  • the content 456 can then be manipulated and/or otherwise configured using the artist tools 408.
  • the artist tools 408 can be a standalone application that imports content 456 (e.g., in a standard format as exported from a 3D content creation application) and configures the content 456 to be compatible with the 3D character SDK.
  • the content 456 can be imported via the import module 460.
  • the content 456 can include user created geometry and textures.
  • the user can then create a new project and import the content 456 (e.g., FBX file) into the project.
  • the content 456 can include geometry, materials, rig, and/or textures.
  • the content 456 can be parsed.
  • the organize geometry module 464 enables a user to organize the geometry into items.
  • the user can also create metadata and associate the metadata with the items to allow the items to be identified.
  • the metadata can provide an automatic behavior associated with the items.
  • the metadata can also provide a level of detail groups for the items.
  • the artist tools 408 can also configure new materials or select existing materials via the materials module 462.
  • a material may be a combination of texture files, shaders, and different maps.
• the artist tools 408 can also apply the material to the items via the application module 468 by associating the materials with the corresponding items. Once set up, the items are prepped and tested to ensure that the items function properly. The items can be tested via the preflight module 470. The test can include geomap generation, alpha injection mapping using multilayer depth preservation techniques, auto skinning of the asset if needed, setting up heterogeneous mesh behavior, and/or more.
• Once the tests are complete, a user (e.g., 3D artist), via the artist tools 408, can drive the content through a "fitting" operation to fit the content to a base figure, and test out the deformation and animation to ensure that the content functions properly. The user can make adjustments as necessary. The artist tools 408 can then export the items in standard formats to other applications, and can also export the items in formats accepted/recognized by the 3D character SDK.
  • FIG. 5 is a block diagram of a 3D character SDK 518 according to one embodiment.
  • the 3D character SDK 518 comprises a physics module 574, a miscellaneous module 576, a network module 578, and a body module 580.
  • the 3D character SDK 518 provides a list of independent functionalities and calls to drive the content in specific ways.
  • the 3D character SDK 518 can be used to fit clothes on a 3D asset, generate deformations of the 3D asset, transition between art styles, transform bone structures associated with the 3D asset, and/or affect heterogeneous mesh behavior, rigidity, and/or hair.
  • rigidity and/or hair associated with a 3D asset can be configured via the physics module 574.
  • Resource management and props associated with the 3D asset can be configured via the miscellaneous module 576.
  • the streaming support, the avatar loading/exporting, the ready room fetch, the ready room asset queue, and/or the pre-population filters can be configured via the network module 578.
  • the morphing, the bone structure, the head/body scanning, the standard shapes, and/or the projection associated with a 3D asset can be configured via the body module 580.
  • the 3D character SDK 518 can perform specific tasks without a specific flow or process across the 3D character SDK 518.
  • the network module 578 includes a subset of network calls to pull down or push up data.
  • Streaming support is a custom 3D file format optimized for streaming data and is platform agnostic.
  • Avatar load / export includes calls for loading in an avatar definition, saving an avatar definition, and exporting an avatar to a different application.
• RR Fetch and the RR asset queue go hand in hand: RR Fetch fetches assets by putting them into an asynchronous RR asset queue for streaming down, caching, and loading assets from the web to a local location so that, once downloaded, they can be accessed quickly from a hot or warm cache.
• Pre-population filters may include application filters that different queries are automatically filtered by, so that only content that an application allows to come in will get sent from the network SDK.
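The fetch/queue/cache interaction can be sketched as follows (Python; synchronous for clarity — the real queue is asynchronous — and all names are illustrative assumptions):

```python
from collections import deque

# Stand-ins for the remote store, the local cache, and the RR asset queue.
remote_store = {"helmet": b"helmet-bytes", "sword": b"sword-bytes"}
local_cache = {}
fetch_queue = deque()

def rr_fetch(asset_id):
    """Queue an asset for download unless it is already cached locally."""
    if asset_id not in local_cache:
        fetch_queue.append(asset_id)

def drain_queue():
    """Worker stand-in: download queued assets into the local cache."""
    while fetch_queue:
        asset_id = fetch_queue.popleft()
        local_cache[asset_id] = remote_store[asset_id]

rr_fetch("helmet")
rr_fetch("sword")
drain_queue()
rr_fetch("helmet")  # warm cache: nothing is re-queued
assert len(fetch_queue) == 0
assert local_cache["helmet"] == b"helmet-bytes"
```

Checking the cache before queueing is what makes repeat accesses hit the hot or warm cache instead of the network.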
  • the body module 580 includes operations to modify an appearance of a 3D asset.
• Morphs, also known as blend shapes, are delta changes of the vertices that shape the geometry into different orientations; going from skinny to fat, for example, is accomplished through morphs.
• Bones are pieces of 3D data, analogous to a skeleton, with which vertex influence levels are associated and which allow the movement and animation of a 3D asset (e.g., a figure or avatar).
• Head/body scanning is an example of new technology that takes scan data and generates morphs and materials to match it, so that the result fits within the existing system and rig instead of generating all new 3D data.
  • Standard shapes may enable theme swapping of a standard set of shaping.
  • Projection relates to runtime projection and mapping.
• FIG. 6 is a block diagram of an identity system according to one embodiment.
  • the mobile device identity system 681 can generate a persistent virtual identity that can be transferred between applications, potentially on different application systems 622.
  • the identity system 681 can include a memory 620, one or more processors 693, a network interface 694, an input/output interface 695, and a system bus 696.
• the identity system 681 may be similar to or analogous to the system 100 in FIG. 1.
  • the identity system 681 may interface with one or more VR application systems 622 via a communication network 12.
  • the identity system 681 may provide persistent virtual identity for the VR application systems 622.
  • the identity system 681 may also interface with one or more content creation application system 656 to obtain 3D assets.
  • the one or more processors 693 may include one or more general purpose devices, such as an Intel®, AMD®, or other standard microprocessor.
• the one or more processors 693 may include a special purpose processing device, such as an ASIC, SoC, SiP, FPGA, PAL, PLA, FPLA, PLD, or other customized or programmable device.
  • the one or more processors 693 can perform distributed (e.g., parallel) processing to execute or otherwise implement functionalities of the presently disclosed embodiments.
• the one or more processors 693 may run a standard operating system and perform standard operating system functions. It is recognized that any standard operating system may be used, such as, for example, Microsoft® Windows®, Apple® MacOS®, Disk Operating System (DOS), UNIX, IRIX, Solaris, SunOS, FreeBSD, Linux®, IBM® OS/2® operating systems, and so forth.
  • the memory 620 may include static RAM, dynamic RAM, flash memory, one or more flip-flops, ROM, CD-ROM, DVD, disk, tape, or magnetic, optical, or other computer storage medium.
  • the memory 620 may include a plurality of program engines 682 (and/or modules) and program data 688.
• the memory 620 may be local to the identity system 681, as shown, or may be distributed and/or remote relative to the identity system 681.
• the program engines 682 may include all or portions of other elements of the system 681.
  • the program engines 682 may run multiple operations concurrently or in parallel with/on the one or more processors 693.
  • portions of the disclosed modules, components, and/or facilities are embodied as executable instructions embodied in hardware or in firmware, or stored on a non- transitory, machine-readable storage medium, such as the memory 620.
• the instructions may comprise computer program code that, when executed by a processor and/or computing device, causes a computing system (such as the processors 693 and/or the identity system 681) to implement certain processing steps, procedures, and/or operations, as disclosed herein.
  • the engines, modules, components, and/or facilities disclosed herein may be implemented and/or embodied as a driver, a library, an interface, an API, FPGA configuration data, firmware (e.g., stored on an EEPROM), and/or the like.
  • portions of the engines, modules, components, and/or facilities disclosed herein are embodied as machine components, such as general and/or application-specific devices, including, but not limited to: circuits, integrated circuits, processing components, interface components, hardware controller(s), storage controller(s), programmable hardware, FPGAs, ASICs, and/or the like.
  • the modules disclosed herein may be referred to as controllers, layers, services, engines, facilities, drivers, circuits, and/or the like.
  • the memory 620 may also include program data 688.
  • Data generated by the system 681 such as by the program engines 682 or other modules, may be stored on the memory 620, for example, as stored program data 688.
  • the stored program data 688 may be organized as one or more databases.
  • the program data 688 may be stored in a database system.
  • the database system may reside within the memory 620.
  • the program data 688 may be remote, such as in a distributed computing and/or storage environment.
  • the program data 688 may be stored in a database system on a remote computing device.
  • the input/output interface 695 may facilitate interfacing with one or more input devices and/or one or more output devices.
  • the input device(s) may include a keyboard, mouse, touch screen, light pen, tablet, microphone, sensor, or other hardware with accompanying firmware and/or software.
  • the output device(s) may include a monitor or other display, printer, speech or text synthesizer, switch, signal line, or other hardware with accompanying firmware and/or software.
  • the network interface 694 may facilitate communication with other computing devices and/or networks and/or other computing and/or communications networks.
• the network interface 694 may be equipped with conventional network connectivity, such as, for example, Ethernet (IEEE 802.3), Token Ring (IEEE 802.5), Fiber Distributed Data Interface (FDDI), or Asynchronous Transfer Mode (ATM).
• the network interface 694 may be configured to support a variety of network protocols such as, for example, Internet Protocol (IP), Transmission Control Protocol (TCP), Network File System over UDP/TCP, Server Message Block (SMB), Common Internet File System (CIFS), Hypertext Transfer Protocol (HTTP), Direct Access File System (DAFS), File Transfer Protocol (FTP), Real-Time Publish Subscribe (RTPS), Open Systems Interconnection (OSI) protocols, Simple Mail Transfer Protocol (SMTP), Secure Shell (SSH), Secure Socket Layer (SSL), and so forth.
  • the system bus 696 may facilitate communication and/or interaction between the other components of the system, including the one or more processors 693, the memory 620, the input/output interface 695, and the network interface 694.
  • the interface system 681 also includes various program engines 682 (or modules, elements, or components) to implement functionalities of the system 681, including an asset transfer client engine 683, an asset lookup and delivery service engine 684, an artist tools engine 685, a 3D character SDK engine 686, and/or a ready room VR engine 687.
  • These elements may be embodied, for example, at least partially in the program engines 682. In other embodiments, these elements may be embodied or otherwise implemented in hardware of the system 681.
  • the system 681 also includes identity data 689 and 3D asset data 690 that may be stored in the program data 688 which may be generated, accessed, and/or manipulated by the program engines 682.
  • FIG. 7 illustrates a flow chart 700 for determining the neighbor vertices index, to determine how an item (e.g., item asset or item 3D asset) correlates to a base figure (e.g., a base figure asset, base 3D asset).
  • Neighboring vertices can be numbered between 1 and N, where N is the number of vertices in the base figure. For practical purposes, an upper limit of 10 vertices is generally imposed on the base figure, with a default of 4 vertices. However, N can be any number desired to create a more accurate neighboring vertices index and can be changed as needed.
  • the artist tools 108, 408 loads 702 the base figure and the item.
  • the item is placed on top of the base figure as it would normally function or be fitted onto a 3D asset.
  • the neighboring vertices are determined 704 using a k-dimensional (k-d) tree algorithm and also determined 706 by traversing the geometry using a geodesic algorithm on a combined mesh of the base figure and the item.
  • the results of the k-d tree algorithm and the geodesic algorithm are combined 708 to determine a more accurate result of neighboring vertices.
  • only a single algorithm, either a k-d tree algorithm or a geodesic algorithm is run to determine the neighboring vertices, and a combination of the algorithms is omitted.
  • An example of using a combination of a k-d tree algorithm and a geodesic algorithm would be using the geodesic algorithm in all directions to determine the nearest neighboring vertices in the base figure and then using the k-d tree algorithm to determine the straight path nearest neighbors.
  • the one or more modules of the artist tools 108 or 408 compare the geodesic distances between the geodesic neighbors and the k-d tree neighbors to determine the best nearest-neighbor vertices of the base figure. This generates the initial neighboring vertices index.
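The two neighbor searches described above can be sketched as follows. This is an illustrative simplification, not the patent's implementation: `euclidean_neighbors` stands in for the k-d tree query with a brute-force search, and `geodesic_neighbors` uses plain Dijkstra over the mesh edge graph in place of an exact geodesic algorithm such as MMP. All function and variable names are hypothetical.

```python
import heapq
import math

def euclidean_neighbors(item_point, base_vertices, k=4, cap=10):
    """Straight-line (k-d tree style) nearest base-figure vertices.

    Brute force stands in for the k-d tree query; the default of 4
    neighbors and the upper cap of 10 follow the text above.
    """
    k = min(k, cap)
    ranked = sorted(
        (math.dist(item_point, v), idx)
        for idx, v in enumerate(base_vertices)
    )
    return [idx for _, idx in ranked[:k]]

def geodesic_neighbors(start, adjacency, coords, k=4):
    """Surface-path nearest vertices via Dijkstra over mesh edges."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    done = set()
    while heap:
        d, v = heapq.heappop(heap)
        if v in done:
            continue
        done.add(v)
        for n in adjacency[v]:
            nd = d + math.dist(coords[v], coords[n])
            if nd < dist.get(n, float("inf")):
                dist[n] = nd
                heapq.heappush(heap, (nd, n))
    ranked = sorted((d, v) for v, d in dist.items() if v != start)
    return [v for _, v in ranked[:k]]
```

Comparing the two result lists, as the text describes, lets the tools keep the candidates both searches agree on and resolve the rest by geodesic distance.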
  • the artist tools 108 or 408 receives 710 a manual selection from a creator or user.
  • the manual selection designates a point on the item selected by the creator or user, who then hand-paints the vertices of the base figure that influence that point on the item. This allows the user to modify the neighboring vertices index (in effect overriding the generated neighboring vertices index) to achieve the effect desired by the creator or artist.
  • the neighboring vertices index data is saved 712 into the index based on a key value designating portions of the base figure or item.
  • the neighboring vertices index can be saved into the index based on base figure vertex, base figure polygon, item vertex, or influencing bones, or grouped by regions such as head, body, arms, legs, hands, etc.
  • a k-d tree (or k-dimensional tree) is a binary tree in which every node is a k-dimensional point.
  • Each internal node implicitly generates a splitting hyperplane that divides the space into two parts, known as half-spaces. Points to the left of this hyperplane are represented by the left subtree of the selected internal node and points right of the hyperplane are represented by the right subtree of the selected internal node.
  • the hyperplane direction is chosen by: each node in the tree is associated with one of the k-dimensions, the hyperplane being perpendicular to the associated dimension's axis.
  • Geodesic algorithms can determine a shortest path between a source on a mesh and one or more destinations of the mesh. For example, an algorithm by Mitchell, Mount, and Papadimitriou (MMP) partitions each mesh edge into a set of intervals (windows) over which the exact distance computation can be performed atomically. These windows are propagated in a "continuous Dijkstra"-like manner.
  • FIG. 8 illustrates a flow chart 800 for generating the occlusion index and the alpha injection map.
  • the occlusion index represents which polygons in the base figure the item is fully occluding and partially occluding.
  • the alpha injection map is a per pixel representation of the area that the item covers of the base figure as represented in a UV space.
  • the occlusion index and the alpha injection map are used by algorithms that fix multilayered depth issues in a 3D asset and aid in combining meshes and textures into a single geometry.
  • the artist tools 108 or 408 loads 802 the base figure and the item.
  • a bounding box optimization 804 may be performed.
  • the bounding box optimization 804 includes directly overlaying a bounding box of the non-edge polygons of the item on the base figure. Everything in the base figure that is within the bounding box is marked as fully occluded. This reduces the number of polygons from which rays must be shot, as discussed in more detail below, because rays do not need to be shot from points already marked as fully occluded.
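A minimal sketch of that pre-pass: mark a base polygon fully occluded when every one of its vertices falls inside the item's axis-aligned bounding box. The function names and data shapes (polygons as lists of (x, y, z) vertex tuples) are assumptions for illustration; the remaining polygons would still go through the ray test described below.

```python
def bounding_box(points):
    """Axis-aligned bounding box of a set of 3D points."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def mark_fully_occluded(base_polygons, item_points):
    """Indices of base polygons whose every vertex lies inside the
    item's bounding box; these are skipped by the later ray tracing."""
    lo, hi = bounding_box(item_points)

    def inside(p):
        return all(lo[i] <= p[i] <= hi[i] for i in range(3))

    return [
        i for i, poly in enumerate(base_polygons)
        if all(inside(v) for v in poly)
    ]
```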
  • the polygons of the item are traversed, and using ray tracing, rays are shot 806 from each vertex and center points of the polygon of the item from both sides of the polygon at a 90-degree angle from the plane of the polygon. If one of the two rays shot 806 from each point of the polygon of the item collides with a polygon in the base figure, the relationship between the item polygon occluding the base figure is recorded. Then, the same ray trace method is performed on the collided polygons of the base figure, and hits and misses are recorded. If there are no misses of the ray, that polygon is considered fully occluded.
  • if some rays hit and others miss, the polygon is marked as partially occluded. This is then saved into a polygon occlusion index for quick lookup. Each occlusion can be keyed in the index, as discussed above with respect to FIG. 7.
  • an alpha injection image map is generated 808 that mimics the UV map of the base figure but represents the part of the base figure UV map where the polygons of the item overlay.
  • the UV map and the overlaid projected image are used to generate a new image consisting of only the overlaid parts of the base figure, producing a pixel representation of the polygons occluded on the base figure. This is then keyed 810 to the index for quick lookup.
  • FIG. 9 illustrates a flow chart 900 for generating a heterogeneous mesh behavior index.
  • Normal geometry for form fitting materials will deform differently on a 3D asset than metal or flowing dynamic cloth.
  • When an underlying figure, or 3D asset, is made taller or more muscular, normal geometry will deform in the X, Y, and Z axes the same amount as the base figure.
  • the heterogeneous mesh index is a representation of polygons and vertices, or groups of polygons and vertices, and what kind of real world material the polygons and vertices should behave like; what pivot points scale, rotate, or transform from; and in which directions along the X, Y, and Z axes the polygons or vertices behave.
  • the artist tools 108 loads 902 the item.
  • the artist tools 108 or 408 receives 904 a manual selection of vertices or polygons of the item, which can include a singular vertex or polygon, or a group of vertices and polygons that scale together.
  • the artist tools 108 or 408 also assigns 906 the type of behavior to the selected vertices or polygons of the item for scaling based on another manual selection by a creator or user.
  • the assignment 906 of the behavior is saved 908 within the heterogeneous mesh index. This is done for each desired vertex or polygon, or group of vertices and polygons, to create the entire heterogeneous mesh index.
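One possible shape for the heterogeneous mesh index, keyed by vertex groups as steps 904-908 describe. The class and field names are hypothetical; the text only requires that each group record a real-world material behavior, a pivot to scale/rotate/transform from, and the axes along which the group follows the base figure.

```python
from dataclasses import dataclass, field

@dataclass
class MeshBehavior:
    material: str                       # e.g. "cloth", "metal"
    pivot: tuple                        # point to scale/rotate/transform from
    axes: tuple = (True, True, True)    # which of X, Y, Z the group follows

@dataclass
class HeterogeneousMeshIndex:
    entries: dict = field(default_factory=dict)

    def assign(self, vertex_ids, behavior):
        # step 906: record the behavior for a vertex or group of vertices
        self.entries[frozenset(vertex_ids)] = behavior
```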
  • FIG. 10 is a flow chart 1000 illustrating a process for creating and setting up the new art styles in the system 100.
  • the base figures (e.g., base 3D assets) are loaded first.
  • the new art style is then sculpted 1004 onto the base figure.
  • An artist or vendor sculpts 1004 the new art style onto the base figure by reshaping the mesh of the female base figure and the male base figure according to the desired art style. While the mesh is reshaped, the vertices and polygon count and order are maintained for the base figures.
  • new materials and shaders are created for the new art style shape.
  • key landmarks of the anatomy may be assigned scaling values for the length, width, and depth, for each landmark of the anatomy, if there are bone transformations involved for the new art style. If there are only mesh changes, then scaling values are assigned along X, Y, and Z axes for each landmark of the anatomy.
  • the key landmarks of the anatomy may include, for example, eyes, mouth, nose, chin, jaw, cheek bone, ears, neck, upper arm, forearm, hands, pectorals, stomach, waist, hips, thighs, calves, feet, etc.
  • scale, translation, and rotation values are also provided to direct how additional deformations could be applied and scaled for 3D assets. The modifications that are made to the base figure are stored in memory as the deformations to the base figure for the new art style.
  • Standard deformations for identifying characteristics of a realistic avatar are provided for the base male figure and the base female figure. These standard deformations define the ways to shape and manipulate specific key landmarks of the anatomy to unique shapes, as desired by a user, to create identifying characteristics.
  • the standard deformations are used by a user in the ready room VR 120 to design a unique avatar.
  • a user selects a desired deformation of the base figure to create their unique 3D asset.
  • One example is a head shape.
  • the standard deformations shape the upper half and lower half of the head.
  • a user may modify the upper portion of the head of the base figure using a standard deformation to create a desired look for the 3D asset.
  • eyebrows of the 3D asset may be deformed to shape the inner point position, inner arch curve and thickness, outer arch curve and thickness, as well as the outer point. This allows a user to create their unique avatar.
  • standard deformations for unique characteristics may be overridden 1006 to create specific expressions of those shapes in the new art style.
  • the new override deformation may apply the same visual intention as the standard deformation on the same general anatomy of the avatar, but fit within the new art style. If a standard deformation is not overridden, then the standard deformation is applied during the transition to the new art style, as discussed in more detail below.
  • an override deformation for a wider nose in a comic book art style would affect the same vertices and polygons of the base figure with the wider nose standard deformation but with different degrees of offsets to get a different visual effect from the standard deformation.
  • the new style is imported 1008 to the artist tools 108.
  • the new style is imported via the import module 460, discussed above.
  • the deformation data for the new style is saved to a new deformation data file with metadata to link it back to the base figure as well as the overriding deformations.
  • the overriding deformations are named and tagged for quick reference as an art style deformation type and are ready to be used on a 3D asset, as discussed in detail with respect to FIG. 2.
  • FIG. 11 illustrates a flow chart 1100 for changing an art style of a unique avatar (e.g., 3D asset) created by a user.
  • a user creates and/or customizes 1102 a 3D asset to create the unique avatar, using the user interface 124 to interact with the ready room VR 120.
  • the 3D character SDK 118, 518 comprises computer-readable instructions configured to effect the changes on the base figure (e.g., a base 3D asset), as discussed above with respect to FIG. 3.
  • a user can customize 1102 the 3D asset by applying standard deformations, applying identifying marks with a decal system for visual effects such as scars, tattoos, birth marks, and makeup, and activating various assets for clothing or accessories such as glasses, earrings, facial piercings, etc.
  • the user may decide to then change the art style of the original 3D asset.
  • the art style is selected 1104 via the user interface 124, and the 3D character SDK 118 transitions the current style of the 3D asset to the selected art style.
  • the 3D character SDK 118 removes all previous modifications (e.g., customizing deformations, identifying characteristics, etc.) of the unique avatar, returning 1106 the avatar to the default base figure (e.g., base 3D asset).
  • the 3D character SDK 118 loads 1108 the chosen art style deformations from the asset transfer client 116.
  • the 3D character SDK 118 applies 1110 the chosen art style deformations to the base figure to begin creating a 3D asset that corresponds to the original 3D asset, but in a different style, along with any custom materials and shaders of the chosen art style. Then, customizing deformations are applied 1112 to the base figure in an additive manner, layering each change to the base figure on top of a previous change. The customizing deformations are the deformations originally chosen by the user to create the original 3D asset. The customizing deformations chosen for the original 3D asset are scanned to check if there is an art style override.
  • Standard deformations chosen for the original 3D asset are applied to their affected vertices if there is no art style override deformation. If there is an art style override deformation, then the art style override deformation is applied rather than the chosen standard deformation. If any scaling options were chosen by a user for a customized deformation to given anatomical landmarks, then the standard deformation gets its vertex offsets adjusted by a ratio multiplier associated with that landmark if there is no art style override scaling deformation. If there is an art style override scaling deformation, then the art style override scaling deformation is applied.
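The override rule above can be sketched as a small resolver: each chosen deformation prefers its art-style override when one exists, otherwise the standard deformation, optionally scaled by a per-landmark ratio. The data shapes (deformations as vertex-id to (dx, dy, dz) offset maps) and all names are illustrative assumptions, not the patent's data model.

```python
def apply_customizations(base_vertices, chosen, standard, overrides,
                         scale_ratios=None):
    """Additively apply the user's customizing deformations, preferring
    an art-style override deformation when one exists."""
    out = dict(base_vertices)
    for name in chosen:
        # override wins over the standard deformation of the same name
        offsets = overrides.get(name, standard[name])
        ratio = (scale_ratios or {}).get(name, 1.0)
        for vid, (dx, dy, dz) in offsets.items():
            x, y, z = out[vid]
            out[vid] = (x + dx * ratio, y + dy * ratio, z + dz * ratio)
    return out
```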
  • identifying markings that were originally chosen for the original 3D asset are also applied 1114 to the new 3D asset that corresponds to the original 3D asset in the different art style. Identifying marks, such as tattoos or birth marks, are applied to the same positions and scaled to fit on the same coordinate space or on the same average spacing between nearest vertices on the new 3D asset as the original 3D asset.
  • the new 3D asset is then provided to or otherwise available to the user in the ready room VR 120.
  • a belt item consists of a belt strap and a square belt buckle.
  • the polygons and/or vertices of the belt strap may be selected and configured by the creator as normal geometry. Then, if an underlying base figure is scaled, the belt strap scales normally with the underlying 3D asset.
  • the polygons and/or vertices of the belt buckle may be selected and configured as metal.
  • the belt buckle scales along its X, Y, and Z axes in a 1:1:1 ratio so that the belt buckle does not become rectangular or trapezoidal.
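The belt example can be sketched as material-aware scaling: "cloth" geometry follows the base figure's per-axis scale, while "metal" geometry collapses it to a single uniform factor so the buckle keeps its 1:1:1 aspect ratio. Using the mean of the three axis scales as that uniform factor is an assumption for illustration; the patent does not specify how the single factor is chosen.

```python
def scale_item_group(vertices, pivot, scale, material):
    """Scale a group of item vertices about a pivot.

    'cloth' follows the per-axis scale of the underlying base figure;
    'metal' (like the belt buckle) uses one uniform factor so it never
    becomes rectangular or trapezoidal.
    """
    if material == "metal":
        s = sum(scale) / 3.0          # assumed choice of uniform factor
        scale = (s, s, s)
    return [
        tuple(pivot[i] + (v[i] - pivot[i]) * scale[i] for i in range(3))
        for v in vertices
    ]
```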
  • FIG. 12 illustrates a flow diagram 1200 of a method for adjusting a following 3D asset based on the deformation of a related base 3D asset.
  • the 3D character SDK 118 of FIG. 1 may implement this method to enable a base 3D asset (e.g., a 3D asset representing an avatar that is part of a persistent virtual identity) to be changed into any shape and/or size and automatically and near instantly alter the shapes of following 3D assets (e.g., a separate 3D asset associated with the avatar, such as clothing, weapons, jewelry, and other virtual objects).
  • the automatic and near instantaneous altering of following 3D assets allows the following 3D assets to retain full functionality (e.g., clothing remains fitted to the avatar) as the base 3D asset changes.
  • the avatar and the accessories may be categorized as a base 3D asset and following 3D assets, respectively.
  • the base 3D asset and the following 3D assets may be any related 3D assets.
  • deformation to the base 3D asset may be proportionally applied to the following 3D asset. This may allow a base 3D asset to be changed in its fundamental shape to any potential shape (e.g., grow taller, shorter, fatter, skinnier, or more muscular), and allow the following 3D assets to continue to work properly in fitting or otherwise associating with the avatar.
  • a 3D character SDK may implement the method for adjusting a following 3D asset based on the deformation of a related base 3D asset by first loading 1202 a base 3D asset and a following 3D asset.
  • Loading 1202 the base 3D asset may include receiving vertex index, polygon data, bone data, skinning data, and UV map for the base 3D asset.
  • Loading 1202 the following 3D asset may include receiving vertex index, polygon data, bone data, skinning data, UV map, and Geomap data for the following 3D asset.
  • All of the loaded data may inform the 3D character SDK of relationships between the base 3D asset and the following 3D asset.
  • the Geomap data correlates and indexes key values and relationships between the following 3D asset and the base 3D asset.
  • the Geomap may be generated and sent to the 3D character SDK by another module, such as the artist tools 108 of FIG. 1 .
  • the 3D character SDK may activate a deformation on the base 3D asset by determining 1204 new vertex coordinates of the base asset for the activated deformation.
  • a user and/or the system may select a deformation from a list of available deformations already created for the base 3D asset.
  • the list of available deformations may include a height option. If the height option is selected, the 3D character SDK may deform the base 3D asset a preset amount along the Z axis.
  • the user or the system may procedurally apply new deformations to the mesh of the base 3D asset.
  • a user interface to an application or the ready room VR may allow a user to manually deform a base 3D asset by selecting a portion of the base 3D asset and modifying the portion with an input tool (e.g., drag and drop with a mouse, stylus, or other input implement) to create the deformation.
  • the deformation of the base 3D asset creates new vertex coordinates on the X, Y, and Z axes for the base 3D asset.
  • the new vertex coordinates may be determined by the system based on a selection of an available deformation or based on a manual deformation.
  • the new and the original vertex coordinates may be stored in memory before moving the vertex coordinates to apply the deformation.
  • the original vertex coordinates may become a selectable deformation on a list of available deformations (e.g., revert).
  • the deformation may also define a bone transformation. If bone transformation data exists for the deformation, the new and the original bone structure coordinates may be stored in memory.
  • new X, Y, and Z coordinates for the new bone positions may be generated based on the deformation to the vertices.
  • the 3D character SDK may use the average distance from nearby vertices and a weight map to determine the new bone structure coordinates.
  • the weight map may include data that describe the influence of vertices on each bone structure.
  • the 3D character SDK may calculate an average change in X, Y, Z axes for each vertex and use the weight map to gradate the influence of the vertex in relation to the new bone structure coordinates.
  • the new bone structure coordinates may be an averaged offset based on the gradated average.
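That weighted-average bone repositioning can be sketched as follows, under assumed data shapes: vertex movements as a vertex-id to (dx, dy, dz) map, and a weight map giving each vertex's influence on the bone. The function name and normalization are illustrative, not the patent's exact computation.

```python
def new_bone_position(old_bone, vertex_deltas, weights):
    """Move a bone joint by the weight-map-gradated average of the
    movement of its influencing vertices."""
    total = sum(weights.values()) or 1.0
    offset = [0.0, 0.0, 0.0]
    for vid, delta in vertex_deltas.items():
        w = weights.get(vid, 0.0) / total   # gradate by influence
        for i in range(3):
            offset[i] += delta[i] * w
    return tuple(old_bone[i] + offset[i] for i in range(3))
```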
  • the 3D character SDK may perform a 3D asset crawl 1206 on the following 3D asset.
  • the crawl 1206 of the 3D asset may index the polygon data associated with the following 3D asset.
  • the polygon data may define a polygon that models a portion of the 3D asset.
  • a tree data structure may order the polygon data according to location. The tree data structure may allow for quick access to polygon data concerning each polygon as well as the neighbors of each polygon. Thus the indexed polygon data may be organized by spatial relationships between polygons.
  • the 3D character SDK may generate 1208 a point map.
  • the 3D character SDK may generate 1208 the point map based on the geomap, the base 3D asset, and the following 3D asset.
  • the point map represents the relationship between the vertices of the base 3D asset and the vertices of the following 3D asset. Specifically, the point map defines the influence each base 3D asset vertex has on each following 3D asset vertex.
  • the relationship between the base 3D asset vertices and the following 3D asset vertices may be a weighted average based on the distance between the vertices and influence data from the base 3D asset's weight map.
  • one or more vertices of the base 3D asset may correspond to an ankle of an avatar, and one or more vertices of the following 3D asset may correspond to a pant cuff.
  • When the 3D character SDK generates 1208 the point map, the point map will establish that the one or more ankle vertices have a significant influence on the one or more pant cuff vertices.
  • Other vertices of the base 3D asset may have little to no influence on the pant cuff vertices based on distance and a weight map.
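A simplified point-map builder: for each following vertex, keep the k nearest base vertices and normalize inverse-distance weights as their influence. This omits the base asset's weight map for brevity, so it is a sketch of the distance component only; names and the inverse-distance weighting scheme are assumptions.

```python
import math

def build_point_map(base_verts, follow_verts, k=4):
    """Map each following-vertex id to {base-vertex id: influence},
    with influences normalized to sum to 1."""
    point_map = {}
    for fid, fv in enumerate(follow_verts):
        ranked = sorted(
            (math.dist(fv, bv), bid) for bid, bv in enumerate(base_verts)
        )[:k]
        inv = [(1.0 / (d + 1e-9), bid) for d, bid in ranked]
        total = sum(w for w, _ in inv)
        point_map[fid] = {bid: w / total for w, bid in inv}
    return point_map
```

With this map, a distant base vertex contributes a near-zero influence, matching the ankle/pant-cuff example above.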
  • the 3D character SDK may calculate 1210 offsets created by the deformation for the following 3D asset.
  • the 3D character SDK may determine which base 3D asset vertices are affected by the deformation. The 3D SDK may then use the point map to determine which vertices of the following 3D asset are influenced by the affected base 3D asset vertices.
  • the amount each base 3D asset vertex influences each following 3D asset vertex may be extracted from the geomap. Based on the influence scores and the distance between the new and original vertex coordinates of the base 3D asset (deformation difference), a set of following vertex offsets and a set of following bone offsets may be calculated.
  • the 3D character SDK may calculate the set of following vertex offsets by gradating each deformation difference based on the influence score and aggregating each gradated deformation difference. For example, a first following vertex offset may be calculated by (1) determining which base 3D asset vertices influence the first following vertex, (2) determining the deformation difference for each influencing base 3D asset vertex, (3) gradating the deformation difference for each influencing vertex based on the influence score, and (4) aggregating the gradated deformation differences.
  • the set of following vertex offsets may be found by calculating the aggregated gradated deformation difference for each following 3D asset vertex affected by the deformation.
  • the 3D character SDK may calculate the set of following vertex offsets by finding a weighted (based on influence score) average of the deformation of influencing 3D asset vertices.
  • the 3D character SDK may calculate the set of following bone offsets by gradating each deformation difference based on the influence score and aggregating each gradated deformation difference.
  • a first following bone offset may be calculated by (1) determining which base 3D asset vertex influences the first following bone offset, (2) determining the deformation difference for each influencing 3D asset vertex, (3) gradating the deformation difference for each influencing 3D asset vertex based on the influence score, and (4) aggregating and averaging the gradated deformation difference.
  • the set of following bone offsets may be found by calculating the aggregated gradated deformation difference for each following 3D asset bone affected by the deformation and averaging the aggregated gradated deformation difference.
  • the 3D character SDK may calculate the set of following bone offsets by finding a weighted (based on influence score) average of the deformation of influencing 3D asset vertices.
  • the 3D character SDK may calculate the set of following bone offsets based on the set of following vertex offsets. For example, the 3D character SDK may use the average distance from nearby following 3D asset vertices and a weight map to determine the set of following bone offsets.
  • the weight map may include data that describe the influence of vertices on each bone structure.
  • the 3D character SDK may calculate an average change in X, Y, Z axes for each vertex and use the weight map to gradate the influence of the vertex in relation to the set of following bone offsets.
  • a file system may save 1212 a new deformation profile of the following 3D asset by storing the set of following vertex offsets and the set of following bone offsets.
  • the deformation profile may be distinct from the following 3D asset.
  • the deformation profile may be programmatically related to the following 3D asset.
  • the name of the deformation profile may be a unique identifying number procedurally generated from the corresponding base 3D asset's deformation, a unique identifier of the base 3D asset, and a unique identifier of the following 3D asset.
  • the unique identifying number may allow the 3D character SDK to identify the deformation profile using a hash-based search.
  • the deformation profile can be reused rather than recalculated when the same deformation is applied again.
  • the deformation profile can be a locally stored deformation file.
  • the file system may store the deformation profile until the base 3D asset, following 3D asset, or geomap data changes.
  • the 3D character SDK may reuse the deformation profile for future deformations of the same nature, thereby saving processing resources.
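One way to realize that naming and hash-based lookup, under assumed identifiers: derive the profile key deterministically from the three values named above, then store and retrieve profiles in a hash table. The SHA-256 scheme and the in-memory dict standing in for the file system are illustrative assumptions.

```python
import hashlib

def profile_key(deformation_id, base_asset_id, following_asset_id):
    """Procedurally derive a unique, repeatable profile name from the
    deformation and the two asset identifiers."""
    raw = f"{deformation_id}:{base_asset_id}:{following_asset_id}"
    return hashlib.sha256(raw.encode()).hexdigest()

profiles = {}   # stand-in for the file system's profile store

def save_profile(key, vertex_offsets, bone_offsets):
    profiles[key] = {"vertices": vertex_offsets, "bones": bone_offsets}

def lookup_profile(key):
    return profiles.get(key)    # O(1) hash-based search
```

Because the key is deterministic, any component holding the same three identifiers can recompute it and find the cached profile without a scan.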
  • the 3D character SDK may inject 1214 the deformation profile into the following 3D asset as a following blend shape.
  • the process of injecting 1214 the deformation profile may form a new object with a unity identity number.
  • the file system may also store a source blend shape.
  • the blend shape may be the deformation applied to the base 3D asset.
  • the unity identity number may map the following blend shape to a source blend shape at runtime.
  • the blend shapes may define maximum offsets for vertices, and deformations may be a percentage of the maximum offsets.
  • the 3D character SDK may drive 1216 the deformation of the following 3D asset.
  • the deformation may begin by calculating the percentage that the base 3D asset has been offset. For example, the percentage may be calculated by comparing the actual deformation offsets with the maximum offsets. Based on that percentage, the 3D character SDK may move the vertices of the following 3D asset. Moving the vertices of the following 3D asset by the same percentage as the base 3D asset may ensure the proper positioning of the following 3D asset.
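The percentage-driven step 1216 can be sketched as follows: the blend shape stores maximum per-vertex offsets, and driving the deformation moves each vertex that fraction of its maximum. Names and data shapes are assumptions; the point is that applying the same percentage to base and following assets keeps them aligned.

```python
def drive_blend_shape(rest, max_offsets, percent):
    """Move each vertex `percent` (0.0-1.0) of its blend shape's
    maximum offset from its rest position."""
    return {
        vid: tuple(
            rest[vid][i] + max_offsets.get(vid, (0.0, 0.0, 0.0))[i] * percent
            for i in range(3)
        )
        for vid in rest
    }
```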
  • the bones of the following 3D asset may be deformed.
  • the deformation may begin by calculating the new position of each end point of the bones for the following 3D asset and adjusting the end points to line them up with the same offsets as the base 3D asset through the JCT service.
  • although the steps of the flow diagram 1200 have been described as being performed by the 3D character SDK, other components of the system 100 of FIG. 1 may be used to perform one or more of the steps.
  • the other components may include or reference the 3D character SDK for tools or engines to propagate base 3D asset deformations to the following 3D asset.
  • the asset transfer client may load 1202 the base 3D asset and the following 3D asset, and the artist tools may determine 1204 the deformation of the base 3D asset.
  • the content conversion system may crawl 1206 the following 3D asset to convert the following asset to be compatible with the 3D character SDK.
  • the content conversion system may index the polygon data associated with the following 3D asset into a hierarchical tree that represents spatial relationships between polygons.
  • the asset lookup and delivery service may allow the asset transfer client to ask for any stored deformation profiles.
  • the asset lookup and delivery service may find a corresponding deformation profile of the base 3D asset and the following 3D asset by using a hash-based search.
  • FIG. 13 is a flow diagram of a method 1300 for stacking multiple meshes according to one embodiment.
  • the method 1300 may be performed by a 3D character SDK, such as described above.
  • the method 1300 may also be performed in any rendering system configured to render 3D content comprising a base 3D asset and one or more following 3D assets configured as layers (e.g., stacked meshes).
  • the rendering system may repeatedly perform the method 1300, such as during and/or throughout an animation process, to limit "poke-through" of layers (that should be occluded) during a rendering of movement of a 3D asset (e.g., a unique figure or avatar comprising a base 3D asset and one or more following 3D assets as layers, such as clothing, on the base 3D asset).
  • multiple meshes of a 3D asset can be stacked on top of each other in layers while preserving the relative volume and ordering of the polygons to prevent poke-through of one mesh past another mesh.
  • the multiple meshes of the 3D asset can also be stacked on top of each other to preserve relative depth of the meshes along every point in the overall mesh.
  • "Overall mesh" refers to the combination of the multiple meshes.
  • "Stacking meshes on top of each other" refers to placing the meshes next to each other.
  • the stack of meshes can include at least an inner mesh and an outer mesh.
  • the outer mesh can be referred to as a top mesh.
  • the inner mesh can be a mesh that is closest to the middle of a 3D asset.
  • reference to a mesh can include a reference to an associated 3D asset
  • reference to a 3D asset can include a reference to the associated mesh.
  • the 3D asset comprises a human figure (e.g., a base 3D asset) wearing a dress shirt, a vest, and a suit coat (e.g., following assets)
  • an inner mesh can describe the human figure
  • a next mesh can describe the dress shirt
  • a next to outer mesh can describe the vest
  • an outer mesh can describe the suit coat.
  • the dress shirt mesh can be prevented from poking through and being visible through the vest mesh and/or the suit coat mesh.
  • the suit coat mesh can also be prevented from poking into the vest mesh, the shirt mesh, and/or the human figure mesh. That is, visual discrepancies can be prevented where one mesh is visually going through a different mesh.
  • a geomap of a 3D asset can be used to identify what portions of the stacked meshes are occluded and what portions of the stacked meshes are visible.
  • the geomap can define a plurality of meshes and an order of the meshes.
  • a mesh can define polygons that compose the 3D asset.
  • each of the meshes can define a single 3D asset, or all of the meshes can define a single 3D asset.
  • An occlusion index of a top mesh of a 3D asset can identify polygons of the base mesh which are fully occluded, partially occluded, and/or non-occluded (e.g., visible) in view of an outer mesh (e.g., mesh that is next to and on top of the base mesh).
  • the occlusion index can be used to generate a list of polygons that are not rendered (e.g., fully occluded), a list of rendered polygons that are available for additional processing (e.g., partially occluded), and a list of polygons that are rendered and do not require additional processing (e.g., non-occluded).
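The three occlusion states above lend themselves to a simple partitioning step. The following sketch shows how such an index could be split into the three render lists; the dictionary-based index, state names, and polygon IDs are illustrative assumptions, not the patent's data model:

```python
# Hypothetical sketch: splitting an occlusion index into the three
# render lists described above. All names are assumptions.
FULL, PARTIAL, NONE = "fully_occluded", "partially_occluded", "non_occluded"

def partition_polygons(occlusion_index):
    """Split a {polygon_id: state} index into the three render lists."""
    skip, shade, draw = [], [], []
    for poly_id, state in occlusion_index.items():
        if state == FULL:
            skip.append(poly_id)      # not rendered at all
        elif state == PARTIAL:
            shade.append(poly_id)     # rendered, needs fragment-shader work
        else:
            draw.append(poly_id)      # rendered with no extra processing
    return skip, shade, draw

index = {0: FULL, 1: PARTIAL, 2: NONE, 3: FULL}
skip, shade, draw = partition_polygons(index)
```

A renderer could then skip the first list entirely, route the second through the custom fragment shader, and draw the third unchanged.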
  • the method of FIG. 13 provides and describes a top-down approach to stacking multiple meshes.
  • the 3D assets and associated meshes can be loaded 1341 from a character definition.
  • the character definition can be accessed at the applications 122 and/or the ready room VR 120 using the asset transfer client 116 of FIG. 1 (and/or the asset transfer client 216 in FIG. 2).
  • the character definition can also be accessed at the applications 122 and/or the ready room VR 120 using the 3D character SDK 118 of FIG. 1 (and/or the 3D character SDK 418 in FIG. 4).
  • the applications and/or the ready room VR can request a character definition of a 3D asset and/or a virtual identity from a client asset transfer client that is part of the applications and/or the ready room VR.
  • the client asset transfer client can request the character definition from the server asset transfer client.
  • the server asset transfer client can request the character definition from the asset lookup and delivery service.
  • the asset lookup and delivery service can provide an identity, a 3D asset, and/or a character definition of the 3D asset.
  • the character definition can describe the 3D assets and associated meshes that comprise a character.
  • the 3D assets and/or the meshes can be ordered.
  • the character definition can comprise a base 3D asset, a middle 3D asset, and an outer 3D asset.
  • Each 3D asset and/or associated mesh can have a depth index that represents a relative depth of the 3D asset and/or the mesh as compared to the other 3D assets and/or meshes.
  • the depth values are specific to the 3D assets and/or meshes described in the character definition.
  • the number of 3D assets and/or meshes that can be stacked on top of each other is unlimited. Stacking the meshes of the 3D assets can include iterating over the 3D asset and/or meshes in a linear fashion to stack the meshes.
  • the base 3D asset and/or base mesh can have a depth of zero.
  • the middle 3D asset and/or middle mesh can have a depth of 1.
  • the top 3D asset and/or the top mesh can have a depth of 2.
  • the assigning of depth values can occur when the meshes and/or 3D assets are loaded to the applications and/or ready room VR and does not need to happen every frame render at the applications and/or the ready room VR.
  • the 3D assets and/or meshes can be sorted 1343 by depth.
  • the highest depth value represents a topmost layer, and the lowest depth value can represent a base layer.
  • the base 3D asset and/or the base mesh has a depth value equal to zero.
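The depth ordering described above amounts to a sort on the depth index, with the highest value first. A minimal sketch, assuming each mesh is a (name, depth) pair; the layer names are hypothetical:

```python
# Illustrative sketch of sorting stacked meshes by depth index,
# topmost (highest depth) first. Names and tuples are assumptions.
def sort_by_depth(assets):
    """Sort (name, depth) meshes so the topmost layer comes first."""
    return sorted(assets, key=lambda a: a[1], reverse=True)

stack = [("base_figure", 0), ("suit_coat", 3), ("dress_shirt", 1), ("vest", 2)]
ordered = sort_by_depth(stack)  # suit coat first, base figure last
```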
  • a highest depth of an unprocessed mesh can be identified 1345.
  • the top mesh is identified as the highest mesh that is unprocessed.
  • the occlusion map of the top mesh can be compared with the occlusion map of the middle mesh.
  • the occlusion map can describe whether the polygons that comprise a mesh are occluded, partly occluded, or non-occluded.
  • Shared polygons between the top mesh and the middle mesh can be identified 1347.
  • the shared polygons of the middle mesh can be hidden 1349.
  • Hiding 1349 a polygon can include setting a value of the occlusion map to fully occluded.
  • Setting an occlusion value of an occlusion map of a mesh constitutes processing the mesh. As such, the middle mesh is processed to prevent polygons from poking through the top mesh.
  • a fragment shader can be applied 1353.
  • Partially occluded polygons can be rendered with a custom fragment shader.
  • the fragment shader can utilize alpha injection masks to render the parts of a polygon that are covered by such an alpha injection mask and to make transparent the parts of the polygon that are not covered by such an alpha injection mask. This allows for an effective partial rendering of polygons for edge polygons so that the polygons stop and start on the edges of the following 3D asset or the base 3D asset that do not line up with the edge of a polygon on the higher-depth following 3D asset.
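The loop of steps 1345 through 1349 can be sketched as follows, assuming each mesh carries a set of polygon IDs and a per-polygon occlusion map; this dict-of-sets layout is an assumption for illustration, not the patent's representation:

```python
# Top-down stacking sketch: walk meshes from the topmost layer down,
# hiding polygons of each lower mesh that are shared with the mesh
# above it. Data layout is hypothetical.
def stack_top_down(meshes):
    """meshes: list of dicts with 'depth', 'polys', 'occlusion' keys."""
    # Sort so the highest depth (topmost layer) is processed first.
    ordered = sorted(meshes, key=lambda m: m["depth"], reverse=True)
    for upper, lower in zip(ordered, ordered[1:]):
        shared = upper["polys"] & lower["polys"]   # identify shared (1347)
        for poly in shared:                        # hide them (1349)
            lower["occlusion"][poly] = "fully_occluded"
    return meshes

# Hypothetical figure / shirt / coat stack.
meshes = [
    {"depth": 0, "polys": {1, 2, 3}, "occlusion": {}},
    {"depth": 1, "polys": {2, 3, 4}, "occlusion": {}},
    {"depth": 2, "polys": {3, 4, 5}, "occlusion": {}},
]
stack_top_down(meshes)
```

After the walk, the top mesh is left untouched while each lower mesh has its covered polygons marked fully occluded.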
  • FIG. 14 is a block diagram for a method 1400 of stacking multiple meshes according to one embodiment.
  • the method 1400 may be performed by a 3D character SDK, such as described above.
  • the method 1400 may also be performed in any rendering system configured to render 3D content comprising a base 3D asset and one or more following 3D assets configured as layers (e.g., stacked meshes).
  • the rendering system may repeatedly perform the method 1400, such as throughout an animation process, to limit "poke-through" of layers (that should be occluded) during a movement of a 3D asset, such as a unique figure or avatar.
  • the 3D assets and/or associated meshes can be loaded 1455 from a character definition.
  • the character definition can comprise a base 3D asset, a middle 3D asset, and an outer 3D asset.
  • Each 3D asset and/or associated mesh can have a depth index that represents a relative depth of the 3D asset and/or the mesh as compared to the other 3D assets and/or meshes.
  • the depth values are specific to the 3D assets and/or meshes described in the character definition.
  • the base 3D asset and/or base mesh can have a depth of zero.
  • the middle 3D asset and/or middle mesh can have a depth of 1.
  • the top 3D asset and/or the top mesh can have a depth of 2.
  • the 3D assets and/or meshes can be sorted 1457 by depth.
  • the highest depth value represents a topmost layer, and the lowest depth value can represent a base layer.
  • the base 3D asset and/or the base mesh has a depth value equal to zero.
  • Each polygon in the base 3D asset and/or base mesh can be iterated 1459 over to determine the occlusion index for each following asset where there is a shared polygon.
  • the shared polygons between a base mesh, the middle mesh, and/or the top mesh can be identified 1461.
  • Shared polygons in the base mesh and/or the middle mesh can be hidden 1463 by marking the shared polygons in the base mesh and/or the middle mesh as fully occluded. If the shared polygons are marked as partially occluded, then the shared polygons are placed in a list for a given asset.
  • Partially occluded polygons can be rendered with a custom fragment shader.
  • the fragment shader can be applied 1465 to the meshes with an alpha injection map.
  • the fragment shader can utilize alpha injection masks to render the parts of a polygon that are covered by such a mask and to make transparent the parts of the polygon that are not covered by such a mask. This allows for an effective partial rendering of polygons for edge polygons so that the polygons stop and start on the edges of the following 3D asset or the base 3D asset that do not line up with the edge of a polygon on the higher-depth following 3D asset.
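In contrast with the top-down walk of FIG. 13, steps 1459 through 1463 iterate over the base polygons directly. A condensed sketch under the same assumed set-of-polygon-IDs layout (all names are illustrative):

```python
# Bottom-up sketch: iterate base polygons and mark those shared with
# any higher-depth mesh as fully occluded. Layout is hypothetical.
def occlude_base(base_polys, following_meshes):
    """Return {polygon_id: 'fully_occluded'} for shared base polygons."""
    index = {}
    for poly in base_polys:                      # iterate 1459
        if any(poly in mesh for mesh in following_meshes):  # shared 1461
            index[poly] = "fully_occluded"       # hide 1463
    return index

base = {1, 2, 3, 4}
index = occlude_base(base, [{2, 5}, {3, 6}])
```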
  • FIG. 15 is a graphical user interface 1500 of a persistent virtual identity system, according to one embodiment.
  • the graphical user interface 1500 is presented in a display, such as a desktop computer monitor or a viewer (e.g., headset) for a virtual reality system or augmented reality system.
  • the graphical user interface 1500 shows a ready room 1510 in virtual reality and portals 1522, 1532 to a source application 1520 and a destination application 1530.
  • a 3D asset 1502 is shown in the ready room 1510.
  • the 3D asset 1502 is an avatar in the form of a bipedal humanoid character that has been customized according to user input.
  • the avatar 1502 comprises a base 3D asset and following 3D assets in the form of clothing.
  • the avatar 1502 could proceed to the console 1504 to initiate a deformation to any of the base 3D asset and/or the following 3D assets to further customize the avatar 1502.
  • a user could direct movement of the avatar 1502 to the console 1504 in VR and further customize characteristics of the avatar 1502.
  • the avatar 1502 may have moved from the source application 1520 (and/or platform) through the source portal 1522 and into the ready room 1510.
  • the ready room 1510 facilitates transport of the avatar 1502 from the source application 1520 to the destination application 1530.
  • the avatar 1502 may move into the destination application 1530 (and/or platform) by movement through the destination portal 1532.
  • the systems and methods disclosed herein may enable persistent virtual identity, including transport of the avatar between applications.
  • Example 1 A persistent virtual identity system for transporting a virtual identity between a plurality of applications, the system comprising: an artist tools engine to receive a three dimensional (3D) asset in an electronic format from a 3D content creation application; an asset lookup and delivery service engine to enforce one or more standards on the 3D asset, the one or more standards corresponding to an application of the plurality of applications; an asset transfer client engine to receive the 3D asset from the asset lookup and delivery service engine and associate a persistent virtual identity with the 3D asset; a 3D character software development kit (SDK) engine to facilitate configuring at least a characteristic of the 3D asset according to an input of a user of the persistent virtual identity system; and a ready room engine to transition the 3D asset from a first application of the plurality of applications to a second application of the plurality of applications.
  • the ready room engine can transition the 3D asset by removing the 3D asset from the first application and instantiating the 3D asset in the second application.
  • Example 2 The persistent virtual identity system of Example 1, wherein the artist tools engine is further to configure the 3D asset to be compatible with the persistent virtual identity system.
  • Example 3 The persistent virtual identity system of Example 1, wherein the artist tools engine is further to configure the 3D asset to be compatible with the 3D character SDK engine.
  • Example 4 The persistent virtual identity system of Example 2, wherein the artist tools engine configures the 3D asset to be compatible by one or more of: grouping the geometry into one or more items; defining the levels of details for each of the one or more items; generating geographical maps (geomaps); adding self-defining behavioral information to objects for runtime simulation; configuring materials of the 3D asset; configuring multilayered characteristics of the geometry for runtime-optimized multilayer depth and volumetric preservation between meshes; and setting up zones on items for heterogeneous mesh deformation.
  • Example 5 The persistent virtual identity system of Example 1, wherein the artist tools engine is further to upload the 3D asset to the asset lookup and delivery service to be stored for future access by the 3D character SDK engine.
  • Example 6 The persistent virtual identity system of Example 1, wherein the persistent virtual identity comprises one or more of: a set of one or more following 3D assets, history associated with a user of the system, social reputation, social standing, inventory, wardrobe, and trophies.
  • Example 7 The persistent virtual identity system of Example 1, further comprising: one or more brand modules each corresponding to an application of the plurality of applications and providing one or more of standards, rules, and protocols for the corresponding application, wherein the asset lookup and delivery service is further to configure the 3D asset according to user specified distribution based upon rules and conditions associated with the 3D asset and as provided by the one or more brand modules.
  • Example 8 The persistent virtual identity system of Example 1, wherein the 3D character SDK engine includes one or more of: a morphing module, a joint center transform (JCT) bone module, a standard shape module, a projection module, a head scanning to a dynamic mesh fitting module, a heterogeneous mesh behavior module, a hair module, and a smart props module.
  • Example 9 The persistent virtual identity system of Example 1, wherein the first application is configured for a first computing platform and the second application is configured for a second computing platform.
  • Example 10 The persistent virtual identity system of Example 9, wherein the first computing platform is a virtual reality (VR) platform and the second computing platform is a VR platform.
  • Example 11 The persistent virtual identity system of Example 1, wherein, as part of the ready room engine transitioning the 3D asset into the second application: the asset lookup and delivery service engine transitions from enforcing the one or more standards corresponding to the first application to enforcing one or more standards corresponding to the second application; and the 3D character SDK conforms the 3D asset to the one or more standards corresponding to the second application.
  • Example 12 The persistent virtual identity system of Example 11, wherein the ready room engine transitions the 3D asset into the second application in a manner such that settings and status of the 3D asset remain the same through the transition to begin in the second application as they were in the first application. Stated otherwise, the ready room engine can transition the 3D asset such that the settings and status of the 3D asset are the same in the second application as they were in the first application, following the transition.
  • Example 13 The persistent virtual identity system of Example 11, wherein the one or more standards the 3D character SDK conforms the 3D asset to include one of: an art style of the second application; a theme of the second application.
  • Example 14 The persistent virtual identity system of Example 1, further comprising an application interface to enable electronic communications with the plurality of applications.
  • Example 15 An apparatus for providing a ready room in virtual reality (VR) to transition an avatar representing a persistent virtual identity between a plurality of applications, comprising: memory to store 3D assets; and one or more processing units configured to: process a 3D asset received from a source application and stored in the memory; determine a plurality of standards associated with the 3D asset; compare the plurality of standards associated with the 3D asset with a plurality of standards enforced by a destination application; configure the plurality of standards associated with the 3D asset to conform with the plurality of standards enforced by the destination application based on a determination that the plurality of standards associated with the 3D asset do not conform to the plurality of standards enforced by the destination application; and transfer the 3D asset to the destination application based on a determination that the plurality of standards associated with the 3D asset conform to the plurality of standards enforced by the destination application.
  • Example 16 The apparatus of Example 15, wherein the plurality of standards include a standard associated with an artistic style of the 3D asset.
  • Example 17 The apparatus of Example 15, wherein the plurality of standards include a standard associated with a following 3D asset of the 3D asset.
  • Example 18 The apparatus of Example 17, wherein the following 3D asset is a clothing 3D asset associated with the 3D asset.
  • Example 19 The apparatus of Example 15, wherein the plurality of standards include a standard associated with a theme of the 3D asset.
  • Example 20 The apparatus of Example 15, wherein the memory is further configured to store one or more persistent virtual identities, and wherein the one or more processing units is further configured to associate a persistent virtual identity stored in memory with the 3D asset.
  • Example 21 The apparatus of Example 15, wherein the 3D asset is received via a network message.
  • Example 22 A computer-implemented method for maintaining a persistent 3D asset between a plurality of applications, comprising: receiving at a computing device a 3D asset in an electronic format from a source application of the plurality of applications; determining, by one or more processors of the computing device, one or more features of the 3D asset corresponding to standards associated with the 3D asset; comparing, by the one or more processors, the one or more features of the 3D asset that correspond to the plurality of standards associated with the 3D asset to a plurality of standards enforced by a destination application of the plurality of applications; configuring, by the one or more processors, the one or more features of the 3D asset to conform with the plurality of standards enforced by the destination application based on a determination that the one or more features of the 3D asset do not conform to the plurality of standards enforced by the destination application; and transferring the 3D asset to the destination application based on a determination that the plurality of standards associated with the 3D asset conform to the plurality of standards enforced by the destination application.
  • Example 23 The computer-implemented method of Example 22, wherein the plurality of standards include a standard associated with an artistic style of the 3D asset.
  • Example 24 The computer-implemented method of Example 22, further comprising associating a persistent virtual identity with the 3D asset.
  • Example 25 A persistent virtual identity system to transport a persistent virtual identity between a plurality of applications, comprising: an artist tools engine to configure a 3D asset to be compatible with the persistent virtual identity system by one or more of, grouping the geometry into one or more items, defining the levels of details for each of the one or more items, and generating geographical maps (geomaps);
  • an asset lookup and delivery service engine to enforce one or more standards on the 3D asset, the one or more standards corresponding to an application of the plurality of applications; an asset transfer client engine to associate a persistent virtual identity with the 3D asset; a 3D character software development kit (SDK) engine to modify one or more features of the 3D asset; and a ready room engine to: receive a 3D asset in an electronic format from a source application of the plurality of applications, the 3D asset including a base 3D model and one or more modifications thereto to define a unique figure for presentation in the source application; configure, by the artist tools engine, the 3D asset to be compatible with the persistent virtual identity system; update the one or more standards from corresponding to the source application to corresponding to a destination application of the plurality of applications; enforce, by the asset lookup and delivery service engine, one or more standards on the 3D asset; modify, by the 3D character SDK engine, one or more features of the 3D asset to conform to the one or more standards; and
  • transition the 3D asset from the source application to the destination application including removing the 3D asset from the source application, wherein the ready room engine transitions the 3D asset into the destination application in a manner such that settings and status of the 3D asset remain the same through the transition to begin in the destination application as they were in the source application.
  • Example 26 The persistent virtual identity system of Example 25, wherein the artist tools are further configured to receive a following 3D asset and configure the following 3D asset to be compatible with the 3D character SDK engine; the asset lookup and delivery service engine to enforce one or more standards on the following 3D asset and store the following 3D asset for access by the 3D character SDK; the 3D character SDK to receive input from a user to add the following 3D asset to the unique figure; the asset transfer client engine to receive the 3D asset from the asset lookup and delivery service engine and associate the following 3D asset with the persistent virtual identity of the 3D asset of the unique figure.
  • Example 27 An apparatus for processing face scan data, comprising: memory to store face scan data; and one or more processing units configured to: identify facial and head landmarks on the face scan data and the base figure;
  • Example 28 is a system for generating a nearest neighboring vertices index for rendering assistance, the system comprising a memory and one or more processors.
  • the memory is configured to store and retrieve the nearest neighboring vertices index, a base figure asset and an item asset.
  • One or more processors configured to: load the base figure asset and the item asset from the memory, select an item vertex of the item asset, and generate, using a k-dimensional tree algorithm or a geodesic algorithm, a set of nearest neighbor vertices between the item vertex and vertices of the base figure asset, the set of nearest neighbor vertices limited to a maximum threshold.
  • One or more processors configured to: create the nearest neighboring vertices index based at least in part on the set of nearest neighbor vertices, present, in a user interface, an option to override or add vertices to the nearest neighboring vertices index, generate one or more influence indexes relating to a portion of a derived mesh derived from the base figure asset and the item asset, and render the derived mesh based at least in part on the one or more influence indexes.
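The nearest-neighbor lookup of Example 28 can be illustrated with a brute-force stand-in: for one item vertex, find its closest base-figure vertices, capped at a maximum threshold. In practice a k-dimensional tree or geodesic search would replace the linear scan at scale; all names and coordinates below are illustrative assumptions:

```python
# Brute-force sketch of the capped nearest-neighbor query described
# in Example 28. A k-d tree would replace the linear scan in practice.
import math

def nearest_neighbors(item_vertex, base_vertices, max_count):
    """Return indices of the closest base vertices, nearest first,
    limited to max_count (the maximum threshold)."""
    dists = [
        (math.dist(item_vertex, v), i) for i, v in enumerate(base_vertices)
    ]
    dists.sort()
    return [i for _, i in dists[:max_count]]

# Hypothetical base-figure vertices and one item vertex.
base = [(0, 0, 0), (1, 0, 0), (0, 2, 0), (5, 5, 5)]
idx = nearest_neighbors((0.1, 0, 0), base, max_count=2)
```

The resulting index could then be stored, and a user interface could offer to override or add vertices before the influence indexes are generated.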
  • Example 29 is the system of Example 28, wherein an influence index from the one or more influence indexes is a region.
  • Example 30 is the system of Example 29, wherein the region is defined as a head, body, arms, legs, or hands.
  • Example 31 is the system of Example 28, wherein the maximum threshold is a number of vertices.
  • Example 32 is the system of Example 28, wherein a size of the set of nearest neighbor vertices is set to a default number of vertices.
  • Example 33 is the system of Example 28, wherein the one or more processors further comprise a graphics processor.
  • Example 34 is the system of Example 28, wherein to generate the set of nearest neighbor vertices between the item vertex and vertices of the base figure asset further comprises to use both the k-dimensional tree algorithm and the geodesic algorithm.
  • Example 35 is the system of Example 28, further comprising a virtual reality interface configured to transmit the derived mesh for display.
  • Example 36 is the system of Example 28, wherein the base figure asset is an avatar.
  • Example 37 is a computer program product comprising a computer-readable storage medium that stores instructions for execution by a processor to perform operations of a polygon occlusion portion of a rendering system, the operations, when executed by the processor, to perform a method. The method comprising: loading a base asset and an item asset from memory, wherein the base asset comprises a base set of polygons and the item asset comprises an item set of polygons, overlaying a bounding box of non-edge polygons within the item asset on the figure asset, and indicating, within a polygon occlusion index, polygons of the base asset within the bounding box as occluded.
  • the method comprising: for vertex and center points of item polygons in the item set of polygons, tracing item rays from both sides of an item polygon at a 90-degree angle from a plane defined by the item polygon and determining whether the rays collide with a polygon from the base set of polygons, when an item ray collides with a base polygon from the base set of polygons, store a relationship set of relationships between the item polygon and the base polygon, the item polygon associated with the item ray, and reducing the relationship set to form a subset of the relationship set by removing relationships that include base polygons within the polygon occlusion index from the overlaying the bounding box operation.
  • the method comprising: for vertex and center points of base polygons in the subset of the relationship set, tracing item rays from both sides of the base polygon at a 90-degree angle from a plane defined by the base polygon and determining whether the rays collide with a polygon from the item set of polygons, when at least one ray from each of the vertex and center points of the base polygon in the subset of the relationship set intersect one or more item polygons, add the base polygon to the polygon occlusion index as occluded, and when one or more rays from each of the vertex and center points of the base polygon in the subset of the relationship set intersect one or more item polygons, but less than a ray from each of the vertex and center points of the base polygon, add the base polygon to the polygon occlusion index as partially occluded.
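The occluded versus partially occluded decision in Example 37 reduces to counting which sample-point rays hit an item polygon. In the sketch below, the ray/polygon hit test itself is abstracted to one boolean per sample point (vertices plus center); that abstraction, and the state names, are assumptions for illustration:

```python
# Condensed sketch of Example 37's classification rule: all sample
# rays hit -> occluded; some hit -> partially occluded; none -> visible.
def classify_base_polygon(sample_hits):
    """sample_hits: list of bools, one per vertex/center ray test."""
    if all(sample_hits):
        return "occluded"
    if any(sample_hits):
        return "partially_occluded"
    return "visible"

full = classify_base_polygon([True, True, True, True])
part = classify_base_polygon([True, False, True, True])
vis = classify_base_polygon([False, False, False, False])
```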
  • Example 38 is the computer program product of Example 37, wherein the method further comprises: generate an image map that represents portions of a base UV map that is covered by polygons of the item asset based at least in part on the polygon occlusion index; and overlay the image map on the base UV map to form an image that identifies a pixel representation of occluded areas on the base asset.
  • Example 39 is the computer program product of Example 38, wherein the occluded areas are represented by pixels in the image.
  • Example 40 is the computer program product of Example 38, wherein the image is an alpha injection map used to modify the base UV map with an item UV map using the representation of occluded areas on the base asset.
  • Example 41 is the computer program product of Example 38, wherein the image is used to form a combined image map for a combined mesh, the combined mesh including the base asset and the item asset.
  • Example 42 is a method for heterogeneous mesh behavior in a rendering system, the method comprising: loading an item asset, processing a selection of a set of vertices or polygons from an input, defining a set of behavior types for the selection, the behavior types defining rules including scaling rules, transform rules, or rotation rules, and storing an indication of the selection and the behavior types in a heterogeneous behavior index.
  • Example 43 is the method of Example 42, wherein the method further comprises to: process one or more deformation instructions of one or more of the vertices or polygons from the selection of the set of vertices or polygons; and apply the behavior types to the one or more deformation instructions with the behavior types having a priority over a conflicting deformation instruction included in the one or more deformation instructions.
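The priority rule of Example 43, where a stored behavior type wins over a conflicting deformation instruction, can be sketched as follows. The vertex IDs, the "lock_scale" rule, and the dict-based behavior index are hypothetical names for illustration only:

```python
# Sketch of a heterogeneous behavior index (Examples 42-43): rules
# stored per vertex take priority over conflicting deformations.
behavior_index = {
    "collar_v1": {"lock_scale": True},   # e.g. a stiff-material rule
}

def apply_deformation(vertex_id, instruction, scale):
    """Apply a scale deformation unless a behavior rule overrides it."""
    rules = behavior_index.get(vertex_id, {})
    if instruction == "scale" and rules.get("lock_scale"):
        return 1.0          # behavior rule has priority: no scaling
    return scale            # no conflicting rule: deform normally

s1 = apply_deformation("collar_v1", "scale", 1.5)
s2 = apply_deformation("sleeve_v9", "scale", 1.5)
```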
  • Example 44 is the method of Example 42, wherein defining the set of behavior types further comprises selecting a grouping of behavior types.
  • Example 45 is the method of Example 44, wherein the grouping is defined by a material type.
  • Example 46 is the method of Example 42, wherein the rules comprise constraints.
  • Example 47 is the method of Example 42, wherein processing the selection of the set of vertices or polygons from the input further comprises receiving the input from an artist tools system.
  • Example 48 A system for switching a 3D asset between art styles, comprising: a user interface to receive an art style selection; and a 3D character software development kit (SDK) engine configured to: receive a selected art style from the user interface, remove customizing deformations from the 3D asset to return the 3D asset to a base 3D asset, receive style deformations based on the selected art style, apply the style deformations to the base figure to form a new 3D asset based on the selected art style, and apply the customizing deformations to the new 3D asset based on the selected art style.
  • SDK 3D character software development kit
  • Example 49 The system of example 48, wherein the 3D character SDK engine is configured to apply the customizing deformations to the new 3D asset based on the selected style by: determining whether the selected art style includes an override deformation for a customizing deformation; if there is an override deformation, applying the override deformation for the customizing deformation; and if there is not an override deformation, applying a standard deformation for the customizing deformation.
  • Example 50 The system of example 48, wherein the 3D character SDK engine is further configured to: determine identifying characteristics of the 3D asset; remove the identifying characteristics from the 3D asset to return the 3D asset to a base figure; and apply the identifying characteristics to the new 3D asset based on the selected art style.
  • Example 51 The system of example 50, wherein the 3D character SDK engine is configured to apply the identifying characteristics to the new 3D asset based on the selected style by: determining whether the selected art style includes an override deformation for an identifying characteristic, if there is an override deformation, applying the override deformation for the identifying characteristic and if there is not an override deformation, applying a standard deformation for the identifying characteristic.
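The override logic of Examples 49 and 51, where a style-specific override deformation is preferred and a standard deformation is the fallback, can be sketched as a simple lookup. The style and deformation names are assumptions for illustration:

```python
# Sketch of the override-vs-standard deformation choice made when
# reapplying characteristics after an art-style switch.
def pick_deformation(style, characteristic, standard_deforms):
    """Prefer the style's override deformation; fall back to standard."""
    overrides = style.get("overrides", {})
    if characteristic in overrides:
        return overrides[characteristic]      # style-specific version
    return standard_deforms[characteristic]   # standard fallback

toon_style = {"overrides": {"scar": "toon_scar_deform"}}
standard = {"scar": "std_scar_deform", "tattoo": "std_tattoo_deform"}
d1 = pick_deformation(toon_style, "scar", standard)
d2 = pick_deformation(toon_style, "tattoo", standard)
```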
  • Example 52 The system of example 48, wherein the identifying characteristics are at least one of scars, tattoos, birth marks, makeup, glasses, earrings, and facial piercings.
  • Example 53 The system of example 48, wherein the 3D character SDK engine is configured to compare the new 3D asset to standards of a destination application.
  • Example 54 The system of example 48, wherein the 3D character SDK engine is configured to modify the new 3D asset to the standards of the destination application when the new 3D asset does not conform to the standards of the destination application.
  • Example 55 The system of example 6, wherein, when the new 3D asset does not conform to the standards of the destination application, the 3D character SDK engine is configured to transmit options to a user through the user interface.
  • Example 56 An apparatus for switching a 3D asset between art styles, comprising: memory to store a 3D asset; and one or more baseband processing units configured to: receive a selected art style from a user interface, determine identifying characteristics of the 3D asset, remove the identifying characteristics from the 3D asset to return the 3D asset to a base figure, receive style deformations based on the selected art style, apply the style deformations to the base figure to form a new 3D asset based on the selected style, and apply the identifying characteristics to the new 3D asset based on the selected style.
  • Example 57 The apparatus of example 56, wherein the one or more baseband processing units are further configured to determine whether the selected artistic style includes an override deformation for an identifying characteristic; if there is an override deformation, applying the override deformation for the identifying characteristic; and if there is not an override deformation, applying a standard deformation for the identifying characteristic.
  • Example 57 A method for defining an artistic style of a 3D asset, comprising: receiving a base figure of a 3D asset; modifying the base figure based on an artistic style; receiving standard deformations of identifying characteristics for the base figure; modifying at least a portion of the standard deformations of the identifying characteristics to override deformations based on the artistic style and the identifying characteristic; and importing the modified base figure and override deformations into an artistic tools engine.
  • Example 58 The method of example 57, further comprising customizing the modified base figure with the artistic tools engine.
  • Example 59 The method of example 57, further comprising: receiving a different artistic style for the modified base figure; determining the identifying characteristics of the modified base figure; removing the identifying characteristics from the modified base figure to return the modified base figure to the base figure; receiving style deformations based on the selected artistic style; applying the style deformations to the base figure to form a new 3D asset based on the selected artistic style; and applying the identifying characteristics to the new 3D asset based on the selected artistic style.
  • Example 60 The method of example 59, further comprising determining whether the selected artistic style includes an override deformation for an identifying characteristic; if there is an override deformation, applying the override deformation for the identifying characteristic; and if there is not an override deformation, applying a standard deformation for the identifying characteristic.
  • Example 61 The method of example 59, wherein the identifying characteristics are at least one of scars, tattoos, birth marks, make up, glasses, earrings, and facial piercings.
  • Example 62 The method of example 59, wherein the received different artistic style is inputted by a user.
  • Example 63 The method of example 59, wherein the received different artistic style is received from a destination application.
  • Example 64 The method of example 57, further comprising comparing the modified base figure to standards of a destination application.
  • Example 65 The method of example 64, wherein when the modified base figure does not conform to the standards of the application, providing options to a user to enable the new 3D asset to conform to the standards of the destination application.
  • Example 66 The method of example 64, wherein when the modified base figure does not conform to the standards of the application, modifying the new 3D asset to conform to the standards of the destination application.
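Examples 51, 57, and 60 all recite the same selection step when re-applying identifying characteristics after a style switch: use the art style's override deformation if it defines one, otherwise fall back to the standard deformation. A minimal Python sketch of that branch follows; the dictionary layouts are purely hypothetical stand-ins, since the claims do not define how deformations are stored.

```python
# Illustrative sketch only: dictionaries stand in for the SDK's
# deformation records, which the claims leave unspecified.

# Hypothetical standard deformations for identifying characteristics
# (example 52 lists scars, tattoos, birth marks, make up, glasses,
# earrings, and facial piercings).
STANDARD_DEFORMATIONS = {
    "scar": {"region": "cheek", "strength": 0.2},
    "glasses": {"region": "nose_bridge", "strength": 0.1},
}

def apply_identifying_characteristics(base_figure, characteristics, art_style):
    """Re-apply each identifying characteristic to a restyled base figure,
    preferring the art style's override deformation when one exists and
    falling back to the standard deformation otherwise."""
    new_asset = dict(base_figure)
    overrides = art_style.get("override_deformations", {})
    # For each characteristic: override if the style defines one, else standard.
    new_asset["deformations"] = {
        c: overrides.get(c, STANDARD_DEFORMATIONS[c]) for c in characteristics
    }
    return new_asset

# A cartoon style that exaggerates scars but has no opinion on glasses.
toon = {"override_deformations": {"scar": {"region": "cheek", "strength": 0.8}}}
asset = apply_identifying_characteristics({"figure": "base"}, ["scar", "glasses"], toon)
```

Applying the characteristics after the style deformations, rather than before, matches the ordering recited in example 56 (strip characteristics, restyle the base figure, then restore characteristics).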
  • Example 67 An apparatus for a comparative virtual asset deformation system, comprising: memory to store at least two three-dimensional (3D) assets, a geomap that correlates and indexes relationships between the 3D assets, and a 3D character software developer kit (SDK) engine, the two 3D assets including at least a base asset and a following asset; and one or more processing units to deform the following asset based on a deformation to the base asset, using the software developer kit (SDK) engine, the processing units to: determine new vertex coordinates of the base asset for an activated deformation; generate a point map defining an influence that vertices of the base asset assert on vertices of the following asset, wherein the point map indicates which vertices of the following asset are influenced by vertices of the base asset with new vertex coordinates for the activated deformation; calculate offsets for the vertices of the following asset that are influenced by determining a weighted average of a difference between the new vertex coordinates and old vertex coordinates of the base asset; and drive the activated deformation to the following asset based on the calculated offsets.
  • Example 68 The apparatus of example 67, wherein the one or more processing units are further configured to generate new bone coordinates by calculating an average of a difference between the new vertex coordinates and old vertex coordinates of the base asset gradated by influence on affected bone polygons.
  • Example 69 The apparatus of example 67, wherein the memory further stores a geomap representing relationships between vertices of the base asset and the following asset, and wherein the one or more processing units generate the point map by assigning an influence score to the relationships in the geomap.
  • Example 70 The apparatus of example 69, wherein the influence score is based on a distance between vertices and data from a weight map associated with the base asset and the following asset.
  • Example 71 The apparatus of example 69, wherein to calculate the offsets, the one or more processing units: determine a deformation difference for each influencing base asset vertex; gradate the deformation differences for each influencing base asset vertex based on the influence score; and aggregate and average the gradated deformation differences.
  • Example 72 The apparatus of example 69, wherein the weighted average is weighted based on the influence score.
  • Example 73 The apparatus of example 67, wherein the one or more processing units are further to index polygon data associated with the following asset into a hierarchical tree ordered by spatial relations between the polygon data.
  • Example 74 The apparatus of example 67, wherein the one or more processing units are further to save a deformation profile representing the offsets for the vertices of the following assets.
  • Example 75 The apparatus of example 74, wherein the deformation profile defines maximum offsets for the vertices of the following asset and the base asset; and wherein the one or more processing units drive the activated deformation by: calculating a percentage comparing offsets from the activated deformation on the base asset from the maximum offsets, and moving the vertices of the following asset by an equivalent percentage.
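The percentage-based drive in example 75 can be sketched as: express the activated deformation on the base asset as a fraction of the stored maximum base offset, then move each following-asset vertex by the same fraction of its own stored maximum offset. The profile layout below, with scalar offsets for simplicity, is a hypothetical stand-in; the claims leave the storage format unspecified.

```python
# Sketch of the percentage-driven deformation in example 75. Offsets are
# modeled as scalars here for clarity; a real profile would hold per-axis
# vertex offsets.

def drive_by_percentage(profile, base_offset):
    """Scale each following-asset maximum offset by the fraction of the
    base asset's maximum offset that the activated deformation reaches."""
    percent = base_offset / profile["base_max_offset"]
    return {vertex: max_off * percent
            for vertex, max_off in profile["following_max_offsets"].items()}

# Hypothetical saved deformation profile (example 74): the base asset's
# maximum offset and each following-asset vertex's maximum offset.
profile = {"base_max_offset": 2.0,
           "following_max_offsets": {10: 1.0, 11: 0.5}}

# The activated deformation reaches half the base maximum, so every
# following vertex moves by half its own maximum.
moved = drive_by_percentage(profile, base_offset=1.0)
```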
  • Example 75 The apparatus of example 74, wherein the deformation profile comprises a following asset blend shape corresponding to a source blend shape.
  • Example 76 The apparatus of example 74, wherein the following asset blend shape is mapped to the source blend shape.
  • Example 77 A non-transitory computer-readable medium with instructions stored thereon that, when executed by a processor, cause a virtual identity system to perform operations for propagating a deformation between disparate three-dimensional (3D) assets, the operations comprising: determining new vertex coordinates of a base asset for an activated deformation; generating a point map defining an influence that vertices of the base asset assert on vertices of a following asset; determining, via the point map, which vertices of the following asset are influenced by vertices of the base asset with new vertex coordinates for the activated deformation; calculating offsets for the vertices of the following asset that are influenced by determining a weighted average of a difference between the new vertex coordinates and old vertex coordinates of the base asset; and driving the activated deformation to the following asset based on the calculated offsets.
  • Example 78 The non-transitory computer-readable medium of example 77, wherein generating the point map comprises assigning an influence score to relationships in a geomap.
  • Example 79 The non-transitory computer-readable medium of example 78, wherein the influence score is based on a distance between vertices and data from a weight map associated with the base asset and the following asset.
  • Example 80 The non-transitory computer-readable medium of example 78, wherein calculating the offsets comprises: determining a deformation difference for each influencing base asset vertex; gradating the deformation differences for each influencing base asset vertex based on the influence score; and aggregating and averaging the gradated deformation differences.
  • Example 81 The non-transitory computer-readable medium of example 78, wherein the weighted average is weighted based on the influence score.
  • Example 82 A method for adjusting virtual assets, the method comprising: loading at least two 3D assets, including at least a base 3D asset and a following 3D asset; activating a deformation on the base 3D asset; generating a point map comprising: relationships between vertices of the base 3D asset and vertices of the following 3D asset, and an influence score for each relationship that indicates an amount that the vertices of the base 3D asset influence the vertices of the following 3D asset; determining a first set of offsets representing the deformation to the base 3D asset; calculating a second set of offsets corresponding to the first set of offsets, the second set of offsets based on the point map and representing a following deformation to be applied to the following 3D asset when the deformation is applied to the base 3D asset; and driving the following deformation to the following 3D asset.
  • Example 83 The method of example 82, further comprising saving a deformation profile representing the second set of offsets.
  • Example 84 The method of example 82, further comprising injecting the deformation profile into the following 3D asset as a blend shape.
  • Example 85 The method of example 82, further comprising crawling the following 3D asset to index polygon data of the following 3D asset into a tree structure representative of spatial relationships between polygon data on the following 3D asset.
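The offset calculation recited in examples 67, 77, and 82 is an influence-weighted average of base-vertex deltas: each following-asset vertex moves by the weighted mean of the differences between the new and old coordinates of the base-asset vertices that influence it. The sketch below illustrates this; the point-map layout (a following vertex mapped to (base vertex, influence score) pairs) is an assumption for illustration, not the disclosed format.

```python
# Sketch of point-map deformation propagation (examples 67/77/82).
# Vertices are (x, y, z) tuples keyed by integer id; the point map is an
# assumed layout: following vertex -> [(base vertex, influence score), ...].

def propagate_deformation(old_base, new_base, point_map, following):
    """Return new coordinates for the following asset's vertices: each
    influenced vertex moves by the influence-weighted average of the
    deltas (new minus old coordinates) of its influencing base vertices."""
    deformed = {}
    for f_vertex, influences in point_map.items():
        total = sum(score for _, score in influences)
        # Weighted average of base-vertex deltas on each axis.
        dx = sum((new_base[b][0] - old_base[b][0]) * s for b, s in influences) / total
        dy = sum((new_base[b][1] - old_base[b][1]) * s for b, s in influences) / total
        dz = sum((new_base[b][2] - old_base[b][2]) * s for b, s in influences) / total
        x, y, z = following[f_vertex]
        deformed[f_vertex] = (x + dx, y + dy, z + dz)
    return deformed

old_base = {0: (0.0, 0.0, 0.0), 1: (1.0, 0.0, 0.0)}
new_base = {0: (0.0, 1.0, 0.0), 1: (1.0, 0.0, 0.0)}  # base vertex 0 raised by 1
point_map = {10: [(0, 3.0), (1, 1.0)]}               # vertex 10 mostly follows 0
following = {10: (0.5, 0.0, 0.0)}
result = propagate_deformation(old_base, new_base, point_map, following)
```

Per examples 69 and 70, the influence scores themselves would come from the geomap relationships, gradated by vertex distance and weight-map data; they are given literal values here only to keep the sketch self-contained.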
  • Example 86 A 3D content rendering system, comprising: electronic memory to store a 3D asset including a first mesh and a second mesh; and one or more processing units configured to: load the first mesh and the second mesh from a character definition; identify a lowest depth mesh of the first mesh and the second mesh, the lowest depth mesh being closer to a center of the 3D asset; identify shared polygons from the first mesh and the second mesh; and hide the shared polygons of the lowest depth mesh, such that hidden shared polygons are not presented to be perceived by a user during rendering of the 3D asset.
  • Example 87 The 3D content rendering system of example 86, wherein the first mesh is a mesh of a base 3D asset and the second mesh is a mesh of a following 3D asset.
  • Example 88 The 3D content rendering system of example 86, wherein the first mesh is a mesh of a following 3D asset that is associated with a base 3D asset and the second mesh is a mesh of another following 3D asset of the base 3D asset.
  • Example 89 The 3D content rendering system of example 87 or 88, wherein the base 3D asset is a bipedal humanoid character, and the following 3D asset is an article of clothing.
  • Example 90 The 3D content rendering system of example 86, wherein shared polygons from the first mesh and the second mesh are identified using a geomap that identifies a plurality of meshes and an order of the meshes, including the first mesh and the second mesh.
  • Example 91 The 3D content rendering system of example 86, wherein the first mesh defines a first set of polygons and the second mesh defines a second set of polygons and shared polygons are at least partially overlapping.
  • Example 92 The 3D content rendering system of example 86, wherein the one or more processing units are further configured to electronically provide the 3D asset, including the first mesh and the second mesh, for presentation on a display.
  • Example 93 The 3D content rendering system of example 86, further comprising an electronic display for rendering 3D content, wherein the one or more processing units are further configured to electronically provide the 3D asset, including the first mesh and the second mesh, to the electronic display for presentation to a viewer.
  • Example 94 A 3D content rendering system, comprising: electronic memory to store a 3D asset including a plurality of meshes; and one or more processing units configured to: load the plurality of meshes from a character definition; identify a lower depth mesh and a top depth mesh from the plurality of meshes, the lower depth mesh being closer to a center of the 3D asset, and the top depth mesh being further from the center of the 3D asset; iterate over a plurality of polygons of the lower depth mesh to identify polygons that the top depth mesh and the lower depth mesh share; and hide the shared polygons of the lower depth mesh, such that hidden shared polygons are not presented to be perceived by a user during rendering of the 3D asset.
  • Example 95 The 3D content rendering system of example 94, wherein the lower mesh is a mesh of a base 3D asset and the top depth mesh is a mesh of a following 3D asset.
  • Example 96 The 3D content rendering system of example 94, wherein the lower mesh is a mesh of a following 3D asset that is associated with a base 3D asset and the top depth mesh is a mesh of another following 3D asset of the base 3D asset.
  • Example 97 The 3D content rendering system of example 95 or 96, wherein the base 3D asset is a bipedal humanoid character, and the following 3D asset is an article of clothing.
  • Example 98 The 3D content rendering system of example 94, wherein shared polygons from the lower mesh and the top depth mesh are identified using a geomap that identifies a plurality of meshes and an order of the meshes, including the lower mesh and the top depth mesh.
  • Example 99 The 3D content rendering system of example 94, wherein the lower mesh defines a first set of polygons and the top depth mesh defines a second set of polygons and shared polygons are at least partially overlapping.
  • Example 100 The 3D content rendering system of example 94, wherein the one or more processing units are further configured to electronically provide the 3D asset, including the lower mesh and the top depth mesh, for presentation on a display.
  • Example 101 The 3D content rendering system of example 94, further comprising an electronic display for rendering 3D content, wherein the one or more processing units are further configured to electronically provide the 3D asset, including the lower mesh and the top depth mesh, to the electronic display for presentation to a viewer.
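The multilayer hiding step recited in examples 86 and 94 amounts to a set intersection: any polygon that the lower depth mesh (nearer the asset's center) shares with the top depth mesh is flagged hidden on the lower mesh, so occluded geometry under stacked clothing layers is never rendered. A sketch under the assumption that meshes track polygons by integer id:

```python
# Sketch of shared-polygon hiding (examples 86/94). Meshes are modeled as
# dicts holding a set of polygon ids plus a set of hidden polygon ids;
# this layout is an assumption, not the disclosed character definition.

def hide_shared_polygons(lower_mesh, top_mesh):
    """Hide on the lower depth mesh every polygon it shares with the top
    depth mesh, and return the set of shared polygon ids."""
    shared = lower_mesh["polygons"] & top_mesh["polygons"]
    lower_mesh["hidden"] = lower_mesh.get("hidden", set()) | shared
    return shared

def visible(mesh):
    """Polygons that survive to rendering: all polygons minus hidden ones."""
    return mesh["polygons"] - mesh["hidden"]

# A base bipedal figure occluded by a shirt (examples 89/97).
body = {"name": "base figure", "polygons": {1, 2, 3, 4}, "hidden": set()}
shirt = {"name": "shirt", "polygons": {2, 3, 9}, "hidden": set()}
shared = hide_shared_polygons(body, shirt)
```

In the claimed system the shared polygons would be found via the geomap's mesh ordering (examples 90 and 98) rather than by literal id equality; the intersection here simply stands in for that lookup.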
  • Embodiments may include various steps, which may be embodied in machine-executable instructions to be executed by a general-purpose or special-purpose computer (or other electronic device). Alternatively, the steps may be performed by hardware components that include specific logic for performing the steps, or by a combination of hardware, software, and/or firmware.
  • Embodiments may also be provided as a computer program product including a computer-readable storage medium having stored instructions thereon that may be used to program a computer (or other electronic device) to perform processes described herein.
  • The computer-readable storage medium may include, but is not limited to: hard drives, floppy diskettes, optical disks, CD-ROMs, DVD-ROMs, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, solid-state memory devices, or other types of media/machine-readable media suitable for storing electronic instructions.
  • A software module or component may include any type of computer instruction or computer-executable code located within a memory device and/or computer-readable storage medium.
  • A software module may, for instance, comprise one or more physical or logical blocks of computer instructions, which may be organized as a routine, program, object, component, data structure, etc., that performs one or more tasks or implements particular abstract data types.
  • A particular software module may comprise disparate instructions stored in different locations of a memory device, which together implement the described functionality of the module.
  • A module may comprise a single instruction or many instructions, and may be distributed over several different code segments, among different programs, and across several memory devices.
  • Some embodiments may be practiced in a distributed computing environment where tasks are performed by a remote processing device linked through a communications network.
  • Software modules may be located in local and/or remote memory storage devices.
  • Data tied or rendered together in a database record may be resident in the same memory device, or across several memory devices, and may be linked together in fields of a record in a database across a network.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Systems and methods for generating and porting a persistent virtual identity are disclosed. Generating and porting a persistent virtual identity may include processing a 3D asset received from a source application; determining a plurality of standards associated with the 3D asset; comparing the plurality of standards associated with the 3D asset to a plurality of standards implemented by a destination application; configuring the plurality of standards associated with the 3D asset to match the plurality of standards implemented by the destination application based on a determination that the plurality of standards associated with the 3D asset does not conform to the plurality of standards implemented by the destination application; and transferring the 3D asset to the destination application based on a determination that the plurality of standards associated with the 3D asset conforms to the plurality of standards implemented by the destination application.
PCT/US2017/059083 2016-10-31 2017-10-30 Systèmes et procédés pour identité virtuelle portable et persistante WO2018081732A1 (fr)

Applications Claiming Priority (10)

Application Number Priority Date Filing Date Title
US201662415029P 2016-10-31 2016-10-31
US201662415023P 2016-10-31 2016-10-31
US62/415,023 2016-10-31
US62/415,029 2016-10-31
US201662415881P 2016-11-01 2016-11-01
US201662415835P 2016-11-01 2016-11-01
US201662415808P 2016-11-01 2016-11-01
US62/415,808 2016-11-01
US62/415,881 2016-11-01
US62/415,835 2016-11-01

Publications (1)

Publication Number Publication Date
WO2018081732A1 true WO2018081732A1 (fr) 2018-05-03

Family

ID=62025528

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/059083 WO2018081732A1 (fr) 2016-10-31 2017-10-30 Systèmes et procédés pour identité virtuelle portable et persistante

Country Status (1)

Country Link
WO (1) WO2018081732A1 (fr)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120190458A1 (en) * 2009-02-09 2012-07-26 AltEgo, LLC Computational Delivery System For Avatar and Background Game Content
US8522330B2 (en) * 2011-08-18 2013-08-27 Brian Shuster Systems and methods of managing virtual world avatars
US20130275886A1 (en) * 2012-04-11 2013-10-17 Myriata, Inc. System and method for transporting a virtual avatar within multiple virtual environments

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190035131A1 (en) * 2017-07-28 2019-01-31 Baobab Studios Inc. Systems and methods for real-time complex character animations and interactivity
US10796469B2 (en) * 2017-07-28 2020-10-06 Baobab Studios Inc. Systems and methods for real-time complex character animations and interactivity
US10810780B2 (en) 2017-07-28 2020-10-20 Baobab Studios Inc. Systems and methods for real-time complex character animations and interactivity
US10818061B2 (en) 2017-07-28 2020-10-27 Baobab Studios Inc. Systems and methods for real-time complex character animations and interactivity
US10937219B2 (en) 2017-07-28 2021-03-02 Baobab Studios Inc. Systems and methods for real-time complex character animations and interactivity
CN111143627A (zh) * 2019-12-27 2020-05-12 北京百度网讯科技有限公司 用户身份数据确定方法、装置、设备和介质
CN111143627B (zh) * 2019-12-27 2023-08-15 北京百度网讯科技有限公司 用户身份数据确定方法、装置、设备和介质
EP4194067A4 (fr) * 2020-08-07 2024-05-08 Xiamen Yaji Software Co., Ltd. Procédé et appareil de traitement de ressources de moteur de jeu, dispositif électronique et support d'enregistrement lisible par ordinateur

Similar Documents

Publication Publication Date Title
US11354877B2 (en) Comparative virtual asset adjustment systems and methods
US10878627B2 (en) Multilayer depth and volume preservation of stacked meshes
US11494980B2 (en) Virtual asset map and index generation systems and methods
GB2564745B (en) Methods for generating a 3D garment image, and related devices, systems and computer program products
US20200320766A1 (en) Portable and persistent virtual identity systems and methods
KR102623730B1 (ko) 허위 가상 오브젝트의 검출
US11238667B2 (en) Modification of animated characters
WO2018081732A1 (fr) Systèmes et procédés pour identité virtuelle portable et persistante
JP2016122297A (ja) モデリングシステム、モデリングプログラム、及びモデリング方法
US10769860B2 (en) Transferrable between styles virtual identity systems and methods
US20230298297A1 (en) Layered clothing that conforms to an underlying body and/or clothing layer

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17865167

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17865167

Country of ref document: EP

Kind code of ref document: A1