WO2016177290A1 - Method and system for generating and using expressions for avatars created by free combination - Google Patents


Info

Publication number
WO2016177290A1
WO2016177290A1 PCT/CN2016/080036 CN2016080036W WO2016177290A1 WO 2016177290 A1 WO2016177290 A1 WO 2016177290A1 CN 2016080036 W CN2016080036 W CN 2016080036W WO 2016177290 A1 WO2016177290 A1 WO 2016177290A1
Authority
WO
WIPO (PCT)
Prior art keywords
avatar
information
expression
face
organ
Prior art date
Application number
PCT/CN2016/080036
Other languages
English (en)
French (fr)
Inventor
陈容海
刘永健
张以纬
Original Assignee
北京蓝犀时空科技有限公司
Priority date
Filing date
Publication date
Application filed by 北京蓝犀时空科技有限公司
Publication of WO2016177290A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings

Definitions

  • the present invention relates to the field of computer graphics, and in particular, to a method and system for generating and using an avatar for free combination creation.
  • the human facial expression musculature includes at least the following 18 muscles (most occurring in left and right pairs): auricularis anterior, buccinator, corrugator supercilii, depressor anguli oris, depressor labii inferioris, depressor septi nasi, frontalis, levator anguli oris, levator labii superioris, mentalis, nasalis, orbicularis oculi, orbicularis oris, platysma, procerus, risorius, zygomaticus major, and zygomaticus minor; their combined effects are extremely complex and subtle.
  • the Computer Generated Imagery (CGI) approach is to capture a real person's face by motion capture or image capture and then reproduce the expression on a fixed animated character face.
  • alternatively, facial muscle tissue model algorithms are used to generate facial expressions.
  • the CGI community does not need to generate expressions for an unlimited number of faces that are not fixed in advance, such as generating facial expression images for arbitrary faces in a random crowd.
  • Computer avatars are increasingly used in online social environments, becoming an important part of creating user inspiration and intimacy.
  • computer avatar technology can create a large number of different avatar faces by freely combining different avatar component organs, but these avatar faces lack the function of making different facial expressions.
  • in better-known user-customizable avatar systems such as The Sims, World of Warcraft, and Tencent QQ, the freely combined avatars cannot make different facial expressions.
  • the traditional police facial-composite sketch system generates only an expressionless face. None of these precedents can generate multiple facial expressions for freely combined avatar faces, even though multiple facial expressions enhance the user experience of the product and the user's social experience.
  • the object of the present invention is to overcome the defect that the prior art cannot flexibly generate a plurality of facial expressions for a freely combined avatar face, by providing a method and system capable of generating a plurality of facial expressions for a freely combined avatar face.
  • the present invention provides a method, implemented on at least one computing device comprising at least one processor, a memory and a communication platform interface, for generating an expression for an avatar; the avatar includes, but is not limited to, a two-dimensional or three-dimensional character, animal, cartoon figure, or abstract emotion face, and the facial organs or partial faces obtained by the method come from a set used to generate different avatars representing a plurality of users.
  • the method includes the following steps:
  • Step 1: Receive first information, the first information being related to an avatar representing a user;
  • Step 2: Acquire one or more facial organs or partial faces from the set based on the first information;
  • Step 3: Generate a basic synthetic face of the avatar representing the user based on the facial organs or partial faces;
  • Step 4: Receive second information, the second information being related to a specified expression to be displayed by the avatar;
  • Step 5: Acquire one or more facial organs or partial faces from the set based on the second information;
  • Step 6: Based on the basic synthetic face and the facial organs or partial faces obtained in Step 5, generate a synthetic face displaying the specified expression for the avatar.
  • the first information includes at least one of the following: selection information specifying a required basic synthetic face from among a plurality of preset basic synthetic faces, or one or more items of attribute information related to the avatar.
  • the one or more attribute information related to the avatar included in the first information includes at least one of the following: gender, age, race, occupation.
  • the acquired facial organ or partial face has associated metadata.
  • the metadata associated with the facial organ or the partial face includes at least one of the following: expression information, identity information, perspective information, contour information, and location information.
  • the facial organ or partial face is acquired based on its associated metadata.
  • the second information includes at least one of the following:
  • one or more parameters related to the application environment in which the avatar is located;
  • Information from a social network that can be applied to an avatar.
  • the method further includes storing a free combination avatar or an expression-compositing face capable of displaying a specified expression, and the stored information is used by other applications.
  • the avatar's expression synthetic face is rendered and stored as a picture, the picture being a static picture or a dynamic picture with expression animation.
  • the avatar's expression synthetic face is stored as data information capable of reproducing the avatar, the data information including identification information of the facial organs or partial faces used by the avatar.
  • the avatar further includes a limb organ, and the limb motion of the avatar conforms to the specified expression.
  • the set is an avatar organ database.
  • the user includes an avatar that can make an expression in the information sent to other users.
  • the user creates an avatar that can make an expression for other users, and sends the avatar to the social network for sharing.
  • the second information received in step 4 is expression sequence information;
  • synthetic faces displaying the expressions in the expression sequence are generated for the avatar.
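  • As a minimal illustration of the six steps above (a sketch under assumptions, not the claimed implementation), the following Python fragment supposes a hypothetical organ_db collection of dict records carrying organ_type, identity, expression, and other metadata fields, plus a caller-supplied compose function that overlays component images; it builds the basic synthetic face from the first information and then swaps in the expression-bearing components selected by the second information.

    # Sketch only: organ_db, compose() and the field names are illustrative assumptions.
    def pick(organ_db, **conditions):
        """Return the first component whose metadata matches every condition."""
        return next(c for c in organ_db
                    if all(c.get(k) == v for k, v in conditions.items()))

    def generate_expression_face(organ_db, first_info, second_info, compose):
        organ_types = ("eyebrows", "eyes", "nose", "mouth", "face")
        # Steps 1-2: pick one neutral component per organ type using the first information
        # (for example gender or age attributes, or explicit component selections).
        base = {t: pick(organ_db, organ_type=t, expression="neutral", **first_info)
                for t in organ_types}
        base_face = compose(base.values())              # Step 3: basic synthetic face
        # Steps 4-5: fetch same-identity components that carry the specified expression.
        expr = second_info["expression"]
        expr_parts = {t: pick(organ_db, organ_type=t, expression=expr,
                              identity=base[t]["identity"])
                      for t in ("eyebrows", "eyes", "mouth")}
        # Step 6: replace the expression-bearing organs and re-compose the face.
        return base_face, compose({**base, **expr_parts}.values())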
  • the present invention also provides a method, implemented on at least one computing device comprising at least one processor and a memory, for establishing or expanding an avatar organ database with expressions, the database being used to combine different avatars representing multiple users; the method includes the following steps:
  • Step 1: Obtain a set of original images of avatars or partial avatars with expressions, and identify for each original image its expression and the avatar it belongs to; the format of the original images includes, but is not limited to, one or more graphic description formats such as a bitmap format, a vector format, or a three-dimensional model format;
  • Step 2: Generate images and metadata of facial organs or partial faces based on the original images, the metadata including expression information.
  • the metadata of the facial organ or the partial face has identity information, and the identity information is based on an avatar to which the original image of the component part belongs.
  • the facial organ or partial face metadata includes at least one of the following:
  • perspective information of a two-dimensional image, or a default perspective assigned to a two-dimensional facial organ or partial face whose metadata carries no perspective information;
  • contour information of a three-dimensional model, or a default contour assigned to a three-dimensional facial organ or partial face whose metadata carries no contour information.
  • the facial organ or partial face metadata further includes location information, and the location information is used for positioning the facial organ or a partial face in the entire avatar.
  • the above technical solution further includes marking at least one attribute information for the facial organ or a partial face, and the attribute information includes but is not limited to identification information, category information, color information, age information, gender information, ethnic information, occupation information.
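  • A rough sketch of how such database entries could be produced from the labeled original images follows; the Python code is illustrative only, and the crop regions, helper names, and metadata field names are assumptions rather than the patent's actual format.

    # Illustrative only: organ_regions, the crop() call and the field names are assumed;
    # each image is assumed to be a Pillow Image already normalized to a common face frame.
    def build_organ_database(original_images, organ_regions, database):
        """original_images: iterable of (image, avatar_id, expression, perspective) tuples,
        already identified as required by Step 1; organ_regions: organ_type -> bounding box."""
        for image, avatar_id, expression, perspective in original_images:
            for organ_type, box in organ_regions.items():
                component = image.crop(box)            # cut the organ out of the original face
                database.append({
                    "image": component,
                    "category": organ_type,            # e.g. "mouth"
                    "identity": avatar_id,             # avatar the original image belongs to
                    "expression": expression,          # e.g. "smile"; may be None for ears
                    "perspective": perspective,        # may be omitted if it equals the default
                    "location": {"alignment_point": "eye_center",
                                 "offset": (box[0], box[1])},
                })
        return database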
  • the present invention further provides a method, implemented on at least one computing device comprising a processor, a memory, and a communication platform interface, for controlling the expression of a user's avatar, the avatar being an avatar generated by free combination:
  • An expression that satisfies the trigger condition is set as an expression of the user's avatar based on one or more parameters related to the application environment in which the avatar is located.
  • the present invention also provides a method, implemented on at least one computing device comprising a processor, a memory, and a communication platform interface, wherein at least one device further includes a pointer or a touch screen, for controlling the expression of a user's avatar, the avatar being an avatar generated by free combination; the method includes:
  • Step 1: Receive a specified expression sequence, the expression sequence being one expression, or a plurality of expressions selected by the user by sliding the pointer or swiping across the touch screen;
  • Step 2: Set the expression in the expression sequence that satisfies a trigger condition as the expression of the user's avatar.
  • the present invention further provides a method for finding a user by means of an avatar, the avatar being an avatar generated by free combination; the method includes:
  • Step 1: Receive specified avatar information;
  • Step 2: Return information of at least one user whose avatar component metadata and the specified avatar's component metadata satisfy a matching condition.
  • the present invention also provides a system, implemented on at least one computing device comprising at least one processor, a memory and a communication platform interface, for generating a specified expression for an avatar; the avatar includes, but is not limited to, a two-dimensional or three-dimensional character, animal, cartoon figure, or abstract emotion face, and the facial organs or partial faces acquired by the system come from a set used to generate different avatars representing multiple users.
  • the system includes:
  • a first information receiving module, configured to receive first information, the first information being related to an avatar representing a user;
  • a first data acquisition module, configured to acquire one or more facial organs or partial faces from the set based on the first information;
  • a basic synthetic face generation module, configured to generate a basic synthetic face of the avatar representing the user based on the facial organs or partial faces;
  • a second information receiving module, configured to receive second information, the second information being related to a specified expression to be displayed by the avatar;
  • a second data acquisition module, configured to acquire one or more facial organs or partial faces from the set based on the second information;
  • an expression synthetic face generation module, configured to generate a synthetic face displaying the specified expression for the avatar, based on the basic synthetic face and the facial organs or partial faces obtained by the second data acquisition module.
  • the first information includes at least one of the following: selection information specifying a required basic synthetic face from among a plurality of preset basic synthetic faces, or one or more items of attribute information related to the avatar.
  • the one or more attribute information related to the avatar included in the first information includes at least one of the following: gender, age, race, occupation.
  • the acquired facial organ or partial face has associated metadata.
  • the metadata associated with the facial organ or the partial face includes at least one of the following: expression information, identity information, perspective information, contour information, and location information.
  • the facial organ or partial face is acquired based on its associated metadata.
  • the second information includes at least one of the following:
  • one or more parameters related to the application environment in which the avatar is located;
  • Information from a social network that can be applied to an avatar.
  • a storage module is further included, configured to store a freely combined avatar or an expression synthetic face capable of displaying a specified expression, so that other applications using the stored information do not need the set for generating different avatars to represent the multiple users.
  • the storage module renders and stores the avatar's expression synthetic face as a picture, the picture being a static picture or a dynamic picture with expression animation.
  • the storage module stores the avatar's expression synthetic face as data information capable of reproducing the avatar, the data information including identification information of the facial organs or partial faces used by the avatar.
  • the avatar further includes a limb organ, and the limb motion of the avatar conforms to the specified expression.
  • the set is an avatar organ database.
  • a module for establishing or expanding an avatar organ database is also included.
  • the user includes an avatar that can make an expression in the information sent to other users.
  • the user creates an avatar that can make an expression for other users, and sends the avatar to the social network for sharing.
  • the second information received by the second information receiving module is expression sequence information;
  • the expression synthetic face generation module generates, for the avatar, synthetic faces displaying the expressions in the expression sequence.
  • the present invention further provides a system, implemented on at least one computing device comprising at least one processor and a memory, for creating or expanding an avatar organ database with expressions, the database being used to combine different avatars representing a plurality of users; the system comprises:
  • an original image acquisition module, configured to acquire a set of original images of avatars or partial avatars with expressions and to identify for each original image its expression and the avatar it belongs to, where the format of the original images includes, but is not limited to, one or more graphic description formats such as a bitmap format, a vector format, or a three-dimensional model format;
  • an image and metadata generation module, configured to generate images and metadata of facial organs or partial faces based on the original images, the metadata including expression information.
  • the metadata of the facial organ or the partial face has identity information, and the identity information is based on an avatar to which the original image of the component part belongs.
  • the facial organ or partial face metadata includes at least one of the following:
  • perspective information of a two-dimensional image, or a default perspective assigned to a two-dimensional facial organ or partial face whose metadata carries no perspective information;
  • contour information of a three-dimensional model, or a default contour assigned to a three-dimensional facial organ or partial face whose metadata carries no contour information.
  • the facial organ or partial face metadata further includes location information, and the location information is used for positioning the facial organ or a partial face in the entire avatar.
  • the above technical solution further includes marking at least one attribute information for the facial organ or a partial face, and the attribute information includes but is not limited to identification information, category information, color information, age information, gender information, ethnic information, occupation information.
  • the present invention further provides a system, implemented on at least one computing device comprising a processor, a memory, and a communication platform interface, for controlling the expression of a user's avatar.
  • the avatar is an avatar generated by free combination, and the system includes:
  • an expression setting module, configured to set an expression satisfying a trigger condition as the expression of the user's avatar based on one or more parameters related to the application environment in which the avatar is located.
  • the present invention also provides a system, implemented on at least one computing device comprising a processor, a memory, and a communication platform interface, wherein at least one device further includes a pointer or a touch screen, for controlling the expression of a user's avatar, the avatar being an avatar generated by free combination; the system includes:
  • an expression receiving module, configured to receive a specified expression sequence, the expression sequence being one expression, or a plurality of expressions selected by the user by sliding the pointer or swiping across the touch screen;
  • an expression setting module, configured to set the expression in the expression sequence that satisfies a trigger condition as the expression of the user's avatar.
  • the present invention also provides a system, implemented on at least one computing device comprising a processor, a memory, and a communication platform interface, for finding a user by means of an avatar, the avatar being an avatar generated by free combination; the system includes:
  • an avatar receiving module, configured to receive specified avatar information;
  • an avatar search module, configured to return information of at least one user whose avatar component metadata and the specified avatar's component metadata satisfy a matching condition.
  • the method and system of the present invention do not require motion capture or image capture of individual users' faces, and can generate a large number of freely combined avatars from a small number of image resources, allowing a user to select an avatar representing himself or herself and to have the avatar's face make expressions that convey personal moods or emotions.
  • Figure 1 is a schematic diagram of the division of parts of a human face;
  • FIG. 2 is a schematic diagram of image information and metadata
  • FIG. 3 is a schematic diagram of the basic process by which the expression-capable avatar generation method of the present invention generates an avatar displaying a specified expression;
  • FIG. 4 is a flow chart of a method for generating an avatar with an expression according to the present invention.
  • Figure 5 is a schematic illustration of a facial organ selection interface employed in one embodiment
  • FIG. 6 is a schematic diagram of an example of expression organs and avatars with expressions;
  • Figure 7 is a schematic view of a three-dimensional organ model
  • Figure 8 is a schematic diagram of generating a three-dimensional model with expressions
  • FIG. 9 is a flow chart of a method of establishing or augmenting an avatar organ database of the present invention.
  • FIG. 10 is a schematic diagram of an avatar emoticon device implemented by the method of the present invention in one embodiment;
  • FIG. 11 is a schematic diagram of a backend system of a system for generating a specified expression for a freely combined avatar as a social data server or game server in one embodiment
  • FIG. 12 is a schematic diagram of an embodiment of the present invention for generating a specified expression for a freely combined avatar as an independent service provider in one embodiment
  • FIG. 13 is a schematic diagram of a system for generating a specified expression for a freely combined avatar and an avatar organ creation system as independent service providers, respectively, in one embodiment
  • Figure 14 is a schematic illustration of an avatar expression private message implemented in accordance with the method of the present invention in one embodiment.
  • the client-side device may be a terminal device such as a personal computer, a notebook computer, a tablet computer, a mobile phone, or a smartphone, or a client module in the terminal device, for example a web browser client, an instant messenger application client, etc.
  • Alignment point An alignment point is a common origin used when overlaying multiple images to ensure the correct relative orientation between the images; the similar concept in the printing industry is the registration mark used in four-color printing.
  • Virtual image An avatar is a graphic image used to represent an individual user in Internet applications and displayed to other users. It may contain only a head, or also include the body, hairstyle, dress, and accessories. An avatar may also be a cartoon figure, an animal figure, or a more abstract approximate image. In the embodiments of the present application, a human avatar is taken as an example for illustration.
  • Facial organ refers to an organ on the avatar's face. Taking a human face as an example, the facial organs on the human face include the face, eyebrows, eyes, nose, mouth, ears, and the like.
  • Partial face A partial face is a part of the avatar's intact face. It is a combination of several facial organs, such as a partial face composed of the mouth, the nose, and the skin around them.
  • Face components Facial organs and partial faces are collectively referred to as face components.
  • Limb organs Limb organs are the organs of an avatar other than the face. In the case of a person, the trunk, arms, hands, legs, feet, etc. are limb organs.
  • Free-combination avatar A free-combination avatar is an avatar created by selecting facial organs, partial faces, or limb organs (such as eyebrows, eyes, nose, mouth, and face shape) and then combining them.
  • Expression An expression is the facial state used to display emotions and moods, sometimes in combination with body movements or tone of voice.
  • Virtual image organ database refers to a set of components and related information for combining different avatars.
  • the content components can be image information of facial organs, partial faces, or limb organs.
  • the related information may be metadata of each component, attribute information, and the like.
  • the facial organs or partial faces acquired by the method of the present invention are derived from a collection for combining different avatars to represent a plurality of users.
  • the collection is an avatar organ database; the database is first described in detail, and then the method of the present invention is used to illustrate how the database is used.
  • the avatar organ database contains information about the facial organs, the partial faces, and the limb organs.
  • the information generally includes two types, one is image information, the other is metadata; there is a corresponding relationship between image information and metadata.
  • Figure 2 shows a schematic of image information and metadata.
  • the image information describes the shape of a facial organ or a partial face or a limb organ.
  • the image information may be in a two-dimensional image format, such as a bitmap format, or a vector graphics format, or other available graphical description format, or may be a three-dimensional model format.
  • the metadata is related information for describing corresponding image information.
  • the metadata includes expression information, and the expression information reflects the expression of the corresponding image. For example, the expression corresponding to an open mouth is "laughing", and the expression corresponding to a mouth with slightly upturned corners is "smile". Some facial organs or partial faces may have multiple expressions; for example, a closed mouth may be "sad" or "tense".
  • the expression information can also be empty, which means that the facial organ, partial face, or limb organ represented by the corresponding image is not associated with a specific expression; for example, an "ear" is usually not associated with a specific expression.
  • the metadata may further include identity information for identifying which avatar the facial organ, partial face, or limb organ represented by the corresponding image belongs to. Facial organs, partial faces, or limb organs whose identity information is the same or matches can replace one another without changing the avatar's identity. If a partial face contains a combination of organs from different avatars, its identity information contains the identity information of its individual organs, i.e., the avatars to which those organs belong.
  • the metadata may also include perspective information of a two-dimensional image, describing the observation angle of the corresponding two-dimensional image. Obviously, the two-dimensional images of the same organ observed head-on, from the side, and from other angles are completely different; images with the same or similar viewing angles are easier to combine. If there is a default perspective, the metadata of an image that conforms to the default perspective may omit perspective information.
  • the metadata may also include contour information of a three-dimensional model: the 3D face model is hollowed out according to standard organ contours, and each 3D organ model conforms to the standard contour of its organ type, ensuring that it can be implanted into the 3D face model and interchanged with 3D organ models of the same type (see Figure 7). If there is a default combination of standard organ contours, the metadata of a model that conforms to the default standard contours may omit contour information.
  • the metadata may further include location information for describing a location of the corresponding image in the entire avatar, and may also include a scaling scale, a rotation angle, and the like.
  • the location information includes the type of alignment point and the position of that alignment point inside the image or outside the image boundary (typically, the offset of the alignment point from the upper-left corner of the image).
  • the applicant has observed that, after body length and head size are normalized, there are several key positions on the human body, such as the center between the eyes on a face and the navel on a body, whose relative orientation to the various organs does not change much across people of different looks and body types; in a preferred implementation, these key positions are taken as alignment points.
  • the positioning of the facial organs is based on the contour of the face and the cross formed by a vertical line through the nose and mouth and a horizontal line through the left and right eyes, which is substantially fixed; taking the intersection of this cross (i.e., the center between the two eyes) as the alignment point, the relative orientation of the facial organs of people of different appearances to this alignment point is also substantially fixed. Therefore, for different face shapes and facial organs, using the alignment point as the registration point when displaying the images helps to combine different facial organs or partial faces. Similarly, using the navel as the alignment point for the body helps to combine different limb organs.
  • therefore, the position information of an image of a facial organ or partial face (collectively referred to as a face component), or of a limb organ, needs to include the type of alignment point that has a relative orientation relationship with the face component or limb organ, and the position of that alignment point inside the image or outside the image boundary.
  • in this way the alignment points can be unified, so that multiple organ images can be aligned and overlaid.
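  • A minimal sketch of this registration idea, assuming each component's metadata records one alignment point as an offset from the image's upper-left corner (the field names are invented for illustration):

    # Compute where each component image must be pasted so that all alignment
    # points of the assumed type ("eye_center") coincide at one canvas position.
    def paste_offsets(components, canvas_anchor=(128, 160)):
        offsets = {}
        for comp in components:
            ax, ay = comp["location"]["offset"]     # alignment point, relative to image corner
            offsets[comp["category"]] = (canvas_anchor[0] - ax, canvas_anchor[1] - ay)
        return offsets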
  • the metadata may further include attribute information including but not limited to one or more of the following information: identification information, category information, color information, age information, gender information, ethnic information, occupation information.
  • identification information is used to identify the corresponding image, and the identifier is unique, such as identifying an image representing "nose” as "100001".
  • category information is used to describe the type of the face component or the limb organ (hereinafter collectively referred to as an organ component) represented by the corresponding image, such as the type of the face organ labeled "nose”.
  • Color information used to describe the color of the organ parts represented by the corresponding image, such as "brown", “black”, “blue” (eye), and the like.
  • the age information is used to describe the age of the avatar to which the organ component represented by the corresponding image belongs, to reflect the difference in the appearance of the avatar of different age groups.
  • the gender information is used to describe the gender of the avatar to which the organ component represented by the corresponding image belongs, so as to reflect differences in the appearance of organ components of different genders; for example, males and females have fairly obvious differences in facial organs such as the eyes and mouth; of course, the gender information of certain organ components is also allowed to be neutral, meaning the organ is suitable for both men and women.
  • the race information is used to describe the race of the avatar to which the organ component represented by the corresponding image belongs, to reflect differences in the appearance of organ components of avatars of different races.
  • Occupational information is used to describe the occupation of the avatar to which the organ components represented by the corresponding images belong, to reflect the difference in the appearance of the avatars of different occupations, such as track and field athletes, which are usually relatively thin.
  • the avatar organ database contains information on multiple types of organ components, the types being "eyes", "nose", "mouth", and so on; for each type it contains the images and associated metadata of many organs. Taking the "mouth" as an example, there may be hundreds of different "mouths" in the avatar organ database; these "mouths" may have different identities (such as a thick-lipped or thin-lipped appearance), different colors (such as lipstick of different colors), or different expressions (such as smiling or laughing). When the identity information is the same or matches, when the two-dimensional perspective information or three-dimensional contour information is the same or matches, and when the expression information conforms to the expression required by the application, these images can be combined to generate the avatar expression required by the application. When the attribute information satisfies the query or filter conditions set by the application, these images can provide the application with candidate components that can be combined into an avatar.
  • shape information (which is externally represented as a graphic file) and other information may be stored separately in different data sets.
  • the relationship between data in multiple data sets can be related by means of identification information, indexes, pointers, and the like. The advantage of this is that it is easy to do data query, index, update, transfer and storage.
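  • By way of illustration, the separation of image files from queryable metadata could look like the following SQLite sketch; the table name, columns, and example values are assumptions, not the patent's schema.

    import sqlite3

    def find_components(db_path, category, expression, identity=None, **attributes):
        """Return (id, image_file) rows whose metadata satisfies the filter conditions."""
        conn = sqlite3.connect(db_path)
        sql = "SELECT id, image_file FROM components WHERE category=? AND expression=?"
        params = [category, expression]
        if identity is not None:                    # keep the avatar's identity unchanged
            sql += " AND identity=?"
            params.append(identity)
        for column, value in attributes.items():    # e.g. gender='female', color='brown'
            sql += f" AND {column}=?"               # column names assumed trusted, values bound
            params.append(value)
        rows = conn.execute(sql, params).fetchall()
        conn.close()
        return rows

    # e.g. find_components("organs.db", "mouth", "smile", identity="A", gender="female")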
  • the information of the facial organ or partial face (collectively referred to as a face component) included in the avatar organ database includes at least information such as the shape, location, category, identity, perspective, expression, and the like of the face component.
  • in one embodiment, an avatar organ database contains, from the same perspective, the five types of facial organs (eyebrows, eyes, nose, mouth, and face shape) of the basic faces (without considering expressions) of 6 male identities and 6 female identities (i.e., each type of organ has 6 male and 6 female variants, all different from one another); only the eyebrows, eyes, and mouth take different shapes for different expressions, and these three types of organs all have the same 18 preset expressions.
  • for a combined avatar identity, the five organs (eyebrows, eyes, nose, mouth, face shape) of the combined avatar are each taken from the above basic facial organs of the same sex (without considering expressions); this organ combination constitutes the combined avatar's identity information.
  • each combined avatar identity, together with its basic synthetic face, has 18 expressions (because each of the expression organs has the same 18 preset expressions).
  • the above avatar organ database allows a social network user to select an avatar representing himself and to have his avatar display the 18 expressions.
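  • As a rough estimate of the scale this example affords: each gender offers 6 choices for each of the 5 organ types, i.e. 6^5 = 7,776 possible combined identities per gender, and each identity can display the 18 preset expressions, so on the order of 7,776 × 18 = 139,968 distinct expression faces per gender can be produced from this small set of organ images.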
  • based on the avatar organ database, the expression-capable avatar implementation method of the present invention generates an avatar displaying a specified expression.
  • the basic principle is: generate a basic synthetic face for the user, then, according to the expression to be displayed, select from the avatar organ database face components that meet certain conditions to replace the corresponding face components of the basic synthetic face, and thus generate an avatar that displays the specified expression.
  • "alternative" refers to displaying a face component with a specified expression without displaying a basic face component of the same type.
  • the implementation may be, but not necessarily, the basic face component data on the generated synthetic face.
  • “Replace” is the data with the specified face component.
  • the step of generating the basic synthetic face representing the user's avatar is performed whenever the user's basic face (e.g., "expressionless") is to be displayed, which may be when the user selects an avatar representing himself, or whenever any user's avatar should appear "expressionless"; the step of generating a face displaying a specified expression is performed whenever any user's avatar displays the specified expression.
  • the visual effect of the avatar “making expressions” with different expression changes before and after is the most vivid and effective.
  • Fig. 6 is a schematic diagram showing an example of expression organs and avatars with expressions, whose data form part of the avatar organ database.
  • A, B, C, D, E, and F are six completely different avatars. They have different expressions such as normal, smug, angry, sad, kiss, helpless, excited, disappointed, hesitant, and relieved.
  • taking avatar A as an example, when the various expressions are displayed, A's eyebrows, eyes, and mouth are each replaced with the organ image carrying that expression.
  • the two avatars G and H are composed of facial organs of the aforementioned six avatars A-F.
  • the metadata identity information of the combined avatar G is: the eyebrows are from A, the eyes are from D, the nose is from E, the mouth is from B, the face is from A, and the hair is from F;
  • the metadata identity information of the combined avatar H is: the eyebrows are from C, the eyes are from C, the nose is from F, the mouth is from E, the face is from C, and the hair is from A.
  • in this example, A-eyebrow-normal, D-eye-normal, E-nose (no expression), B-mouth-normal, and the A face shape are acquired from the avatar organ database.
  • in step S104, the specified expression is, for example, "smug"; in step S105, A-eyebrow-smug, D-eye-smug, and B-mouth-smug are acquired; in step S106, they replace A-eyebrow-normal, D-eye-normal, and B-mouth-normal in the basic synthetic face, finally generating the synthetic face G-smug with a smug expression.
  • the processing principle of the three-dimensional avatar is basically the same as that of the two-dimensional avatar, but it is implemented by replacing the organ model implanted into the face model.
  • the avatar is a three dimensional model.
  • Figure 7 contains a 3D face model (with ears) hollowed out according to standard organ contours, and two sets of 3D organ models of the facial features with different appearances (left and right eyebrows, left and right eyes, nose, mouth); implanted into the face model, the two sets of organ models form two different avatars, Image 1 and Image 2.
  • an avatar's 3D organ model set includes three-dimensional models of a variety of different expressions (for example, "laugh" and "kiss" as two further sets of 3D organ models), which replace the avatar's existing 3D organ models so that the avatar displays those expressions.
  • three-dimensional models of the above two avatars displaying the "laugh" and "kiss" expressions are included (the screenshots are from the front, the side, and an oblique 45-degree angle). Because the voids in the face model are hollowed out according to the standard organ contours, 3D organ models of the same organ type can be implanted interchangeably, and three-dimensional avatars of different identities can be combined to generate new three-dimensional avatars of different identities (for example, taking the mouth of Image 2 to replace the mouth of Image 1 forms a new combined Image 3).
  • such a three-dimensional avatar organ database allows a social network user to select a three-dimensional avatar representing himself and to have that three-dimensional avatar display a plurality of expressions.
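  • A simple data-structure sketch of this socket-and-contour idea is shown below; the socket names, the contour attribute, and the mesh contents are placeholders for illustration and do not correspond to an actual 3D file format.

    class Avatar3D:
        # one socket per standard organ contour hollowed out of the 3D face model
        SOCKETS = ("left_eyebrow", "right_eyebrow", "left_eye", "right_eye", "nose", "mouth")

        def __init__(self, face_mesh, organ_models):
            self.face_mesh = face_mesh               # face model with standard-contour voids
            self.organs = dict(organ_models)         # socket name -> implanted 3D organ model

        def implant(self, socket, organ_model):
            """Swap in any organ model that conforms to the socket's standard contour,
            e.g. take Image 2's mouth model and implant it into Image 1."""
            if getattr(organ_model, "contour", None) != socket:
                raise ValueError("model does not fit this socket's standard contour")
            self.organs[socket] = organ_model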
  • the method specifically includes the following steps:
  • Step S101 Receive first information, where the first information is related to a virtual image representing a user;
  • the first information includes at least one of the following: selection information specifying a required basic synthetic face from among a plurality of preset basic synthetic faces, or one or more items of attribute information related to the avatar.
  • the first information includes one or more attribute information related to the avatar, including at least one of the following: gender, age, race, occupation.
  • in one embodiment, the user can select facial organs or partial faces (collectively referred to as face components) one by one through the interface of FIG. 5: the type of face component (such as eyebrows, eyes, nose, mouth, face shape) is selected with the part buttons, and a specific face component is then selected from the list that displays the different components of that type (in FIG. 5 the eyes are being selected while the other types of organs remain unchanged).
  • the user's unselected facial organ type can use a preset default organ.
  • the selection information set of each type of face member described above is used as the basic synthetic face selection information, that is, the first information.
  • in another embodiment, the basic synthetic face to be generated is selected according to instructions and in conjunction with the physiological structure of the face.
  • the instructions are used to describe the avatar attribute screening conditions to be selected, such as screening conditions for gender, age, race, occupation, or other screening conditions, such as eye size values.
  • the instructions may be input by a user or may be preset default instructions. If the instructions give screening conditions for only part of the face components of the whole face, the screening conditions for the remaining parts may in this step be filled in automatically according to the physiological structure of the face, and these screening conditions may be set to default attribute values.
  • the above attribute screening condition instruction is used as the attribute information related to the avatar, that is, the first information described above.
  • the selection information specifying the basic synthetic face and the attribute information for screening the avatar may both be contained in the first information; for example, the attribute information can supplement selection information that is missing.
  • in another embodiment, a user can create an expression-capable avatar for another user and send the avatar to the social network for sharing; in this case the first information indicates that the avatar represents the user for whom it was created.
  • creating and sharing an expression-capable avatar for a friend is illustrated separately below.
  • Step S102: Based on the first information, acquire one or more facial organs or partial faces from a set of facial organs or partial faces that can be combined to generate different avatars.
  • the facial organ or partial face has associated metadata.
  • the metadata associated with the facial organ or the partial face includes at least one of the following: expression information, identity information, perspective information, and location information.
  • the facial organ or partial face is acquired based on its associated metadata.
  • one implementation of the set of facial organs or partial faces that can be combined to generate different avatars representing a plurality of users is the avatar organ database.
  • the component options of each type of face component selected in the foregoing step S101 may come from the organ database, with their identification information used as part of the selection information, so that this step can acquire the corresponding face components based on the identification information.
  • the avatar organ database includes images and metadata of the face component, the metadata including expression information, identity information, perspective information, location information, and the like.
  • the attribute information (ie, the screening condition) related to the avatar in the above step S101 may be matched with the attribute information metadata of the organ database, and the metadata satisfying the screening condition and the corresponding avatar face component are acquired.
  • step S102 further includes verifying the reasonableness and legitimacy of all acquired facial organs or partial faces, such as ensuring there is no missing nose, no nudity, and the like.
  • Step S103 Generate an avatar basic synthetic face representing the user based on the facial organ or the partial face.
  • in this process, the position information of the facial organs or partial faces may be used.
  • when combining the images of the plurality of face components, the images are first normalized according to the scaling scale and rotation angle contained in their position information; then the images that have not yet been positioned are aligned one by one, matching the positions of their alignment points with alignment points of the same type in the images that have already been positioned (the images may carry alignment points of different types), until all the images have been positioned.
  • for example, the eye layer should be above the face layer (assuming an upper layer occludes the layer below it); the synthetic face is then displayed in the correct layer occlusion order.
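  • The layer-ordered composition can be pictured with the following Pillow sketch; the layer order, canvas size, and offsets are illustrative assumptions.

    from PIL import Image

    LAYER_ORDER = ["face", "nose", "mouth", "eyes", "eyebrows"]       # back to front

    def compose_face(parts, offsets, size=(256, 320)):
        """parts: organ_type -> RGBA image; offsets: organ_type -> paste position."""
        canvas = Image.new("RGBA", size, (0, 0, 0, 0))
        for organ_type in LAYER_ORDER:               # later pastes occlude earlier layers
            layer = parts[organ_type]
            canvas.paste(layer, offsets[organ_type], layer)   # third argument = alpha mask
        return canvas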
  • Step S104 Receive second information, where the second information is related to a specified expression to be displayed by the avatar;
  • the second information includes, but is not limited to, one or more of the following: user input, specifying an expression to be displayed by the avatar; one or more parameters related to an application environment in which the avatar is located; sending from a social network The information that can be applied to the avatar.
  • a specific item of second information may simultaneously come from the user, be sent through the social network, and be a parameter related to the application environment.
  • the specified expression input by the user can be matched with the expression information metadata in the avatar organ database to filter out the face components displaying that expression.
  • the above user input may be a text label describing a specific expression chosen from a drop-down menu, such as "happy" or "sad"; a graphic identifier of a specific expression selected by clicking; an emoticon or other string entered through the keyboard, such as ":-)" or ":-("; or other user interactions.
  • the user selects a sequence of emoticons by sliding the pointer or across the touch screen, wherein the emoticon that satisfies the trigger condition becomes the second information.
  • the trigger condition may be that, when the pointer or touch-screen operation stops (such as releasing the mouse button or the finger leaving the touch screen), the pointer or finger is located at an expression identifier; or that the pointer or finger stays at an expression identifier long enough; or other user interactions, such as double-clicking a mouse or a touch gesture.
  • the expression sequence may trigger a plurality of expressions to be displayed, and display them in order, such as "laugh-cry-smile-cry-smile", which means "between tears and laughter".
  • in one embodiment, the user who sends a private message selects an expression from a drop-down menu or by other means as the second information, and the second information is received by the user receiving the private message.
  • the user who sends the private message and the user who receives it are social network users, and the second information comes from the social network.
  • the second information is also an application-environment-related parameter, meaning that a private message with the avatar expression is available for reading.
  • the avatar expression private message, as a complete embodiment, is illustrated separately below.
  • the environment parameter information can also reflect the expression to be displayed by the avatar at a given moment; for example, in a social network game, depending on how the game develops, the avatar of the user or of a friend can display expressions such as "excited" and "frustrated". Here the second information is an environment parameter acting on the avatar representing a user.
  • an expression that satisfies the trigger condition is set as the avatar expression of the user or of another user.
  • the user is provided with a set of default expression strategies, such as a list of expression trigger conditions, and an interface through which the user can customize expression strategies.
  • the automatic expression strategy, as a complete embodiment, is illustrated separately below.
  • information from the social network can also reflect the expression that the user's avatar or another user's avatar should display; for example, when a friend logs in to the social network, the avatars of the friend and of the user can display corresponding expressions.
  • Step S105 Acquire one or more facial organs or partial faces from the set based on the second information
  • the manner of acquiring the facial organs or partial faces is similar to that described in step S102 for acquiring the face components of the basic synthetic face, except that this step uses the second information, i.e., the expression, as the filtering condition on the expression metadata of the face components.
  • the identity information and perspective information in the metadata of the replacing and replaced components must be the same or match, and the expression information in the metadata of the face component used for the replacement must conform to the expression to be displayed.
  • other metadata attribute information filtering conditions such as category information, color information, age information, ethnic information, occupation information, and the like may be added.
  • Step S106 Generate a synthetic face displaying the specified expression for the avatar based on the basic synthetic face and the facial organ or the partial face.
  • a new synthetic face displaying the specified expression is regenerated based on the previously acquired face component in a manner substantially consistent with step S103, rather than "replacement" based on the previously generated synthetic face.
  • the synthetic face displaying the specified expression may also be generated by replacing the face component.
  • the face component to be replaced needs to be removed from the existing synthetic face, and the replacing face component is then inserted; the position information and shape information of the face components are used for this operation. It may be necessary to normalize the face component image according to the scaling scale and rotation angle in the position information.
  • This face component replacement operation may also change the occlusion relationship of each face component in the composite face. Therefore, after replacement, it may be necessary to adjust the layer relationship of each face component in the synthesized face to ensure correct display.
  • the image and related information of a face component can also be modified to change the shape, color, skin tone, and other attributes of the face component, so that the avatar's synthetic face shows the effect desired by the application or the user;
  • one example is "enlarging the eyes": the width of the eye image stays the same but its height is multiplied by a magnification factor (such as 1.2);
  • another example is "applying lipstick": changing the color of the lips in the image;
  • another example is changing the apparent viewing angle of an image through an affine transformation.
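  • These attribute changes could be realized, for instance, with Pillow as sketched below; the magnification factor, lip color, and affine coefficients are example values only.

    from PIL import Image

    def enlarge_eyes(eye_img, factor=1.2):
        w, h = eye_img.size
        return eye_img.resize((w, int(h * factor)))          # same width, taller eyes

    def apply_lipstick(mouth_img, color=(200, 30, 60)):
        base = mouth_img.convert("RGBA")
        tint = Image.new("RGBA", base.size, color + (90,))   # translucent colour layer over the mouth image
        return Image.alpha_composite(base, tint)             # shifts the lip colour

    def change_view(img, coeffs=(1.0, 0.2, 0.0, 0.0, 1.0, 0.0)):
        # six affine coefficients (a, b, c, d, e, f); this shear slightly alters the view
        return img.transform(img.size, Image.AFFINE, coeffs, Image.BICUBIC)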
  • the synthetic face generated in the above step S106 can be stored as the avatar's synthetic face displaying the specified expression, so that it can later be displayed without needing the complete avatar organ database.
  • one way is to render and store the avatar's expression synthetic face as a picture, the picture being a static picture or a dynamic picture with expression animation.
  • another way is to store the avatar's expression synthetic face as data information capable of reproducing the avatar expression; the data information may include the set of identification information of each face component used by the avatar, such as the character string "1001, 2001, 3001, 4001, 5001, 6001; expression: 1". Later, after parsing the data and obtaining the pictures of the face components it points to and the related position information, the pictures can be composited by the method described in step S103, so that the data information is reproduced as an avatar with an expression.
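  • One possible reading of that record format, parsed in Python (the string syntax and the lookup structures are assumptions for illustration):

    def parse_avatar_record(record):
        """'1001, 2001, 3001, 4001, 5001, 6001; expression: 1'
           -> ([1001, 2001, 3001, 4001, 5001, 6001], 1)"""
        ids_part, expr_part = record.split(";")
        component_ids = [int(x) for x in ids_part.split(",")]
        expression_id = int(expr_part.split(":")[1])
        return component_ids, expression_id

    def reproduce_avatar(record, organ_db, compose):
        component_ids, expression_id = parse_avatar_record(record)
        parts = [organ_db[cid] for cid in component_ids]   # component images + positions
        return compose(parts, expression_id)               # re-composite as in step S103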
  • in another embodiment, the expression-capable avatar implementation method of the present invention further includes generating avatar limb motions that match an expression.
  • the avatar organ database on which the method is based further includes information on limb organs, the limb organ information including shape information, location information, category information, expression information, and identity information, and possibly also color information, age information, gender information, ethnic information, occupation information, etc.
  • when the method is implemented, a basic synthetic avatar including a face and limbs is first generated; a specific expression is then selected; then, according to the selected expression, face components corresponding to the expression are selected from the avatar organ database to replace the corresponding face components of the synthetic avatar, and limb organs corresponding to the expression are selected from the avatar organ database to replace the corresponding limb organs of the synthetic avatar, so that the resulting avatar conforms to the previously selected expression.
  • an avatar expression private message can be sent between different users.
  • user A, who sends the private message, selects an expression ("excited") from a drop-down menu as the second information of step S104 and then sends the private message to user B.
  • when user B opens the private message, B's client first receives the first information, i.e., user A's avatar description information, and through steps S101-S103 user B first sees user A's avatar before it makes the expression; user B then receives the second information and, through steps S104-S106, sees user A's avatar making the expression ("excited") specified by the second information.
  • the user can create and share an avatar representing a friend or celebrity.
  • user C selects the organ components to be combined into an avatar representing a friend or celebrity D and sends the avatar as the first information to the social network; user E receives the first information, and user E may, but need not, be D. The expression in the second information about the avatar may be specified by user C, by user E, or by the system; through steps S101-S106, the avatar representing user D making the specified expression is displayed to user E.
  • the present invention further provides a method, implemented on at least one computing device comprising a processor, a memory, and a communication platform interface, for controlling the expression of a user's avatar, the avatar being an avatar generated by free combination; the expression may, but need not, be displayed through steps S101-S106.
  • the method sets an expression satisfying a trigger condition as the expression of the user's avatar based on one or more parameters related to the application environment in which the avatar is located.
  • the expression state of the expression-capable freely combined avatar is associated with environment parameters, and a change in an environment parameter can trigger a change in the avatar's expression.
  • the association strategy between environment parameters and avatar expressions is configurable; for example, one user sets his own expression strategy so that his avatar displays a "sad" expression when he gets a bad hand in a game, while another user sets her expression strategy so that her avatar shows an "angry" look when she gets a bad hand in the game.
  • in this way the expression-capable avatar becomes a robot that represents the user, reacting to environment changes and changing its expression according to the user's personality.
  • the user can also set the avatar expression strategy for the other users' avatars that he sees, for example, making other users' avatars, as seen by him, always display a "crying" expression when they get bad hands in the game.
  • the user is provided with a set of default expression strategies, such as a list of expression trigger conditions, and an interface through which the user can customize expression strategies; through these, an expression strategy is generated or adjusted. Under the expression strategy, when the avatar's environment reaches a specific trigger condition, the trigger causes the user's avatar to display the associated expression.
  • the associated expression may also be displayed automatically according to the user's application or game operations or other behavior, and may include, but is not limited to, facial expressions, postures, gestures, modal particles, sounds, animations, and messages carrying expressions.
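  • A small sketch of such a configurable strategy follows; the event names, the default rules, and the avatar interface are invented for illustration.

    DEFAULT_STRATEGY = {"bad_hand": "sad", "good_hand": "excited", "friend_login": "happy"}

    class ExpressionStrategy:
        def __init__(self, overrides=None):
            self.rules = dict(DEFAULT_STRATEGY)
            if overrides:
                self.rules.update(overrides)         # user-customized associations

        def on_environment_event(self, event, avatar):
            """If the event satisfies a trigger condition, set the avatar's expression."""
            expression = self.rules.get(event)
            if expression is not None:
                avatar.set_expression(expression)    # e.g. realized through steps S101-S106
            return expression

    # one user keeps the default "sad" on a bad hand; another overrides it to "angry":
    # ExpressionStrategy({"bad_hand": "angry"}).on_environment_event("bad_hand", her_avatar)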
  • the above method for controlling the expression of the user's avatar acts on the avatar generated by the free combination, and the manner in which the expression is displayed may be, but not necessarily, through steps S101-S106.
  • the present invention also provides an emoticon device, implemented on at least one computing device comprising a processor, a memory, and a communication platform interface, wherein at least one device further includes a pointer or a touch screen, for the user to control the expression of a freely combined avatar;
  • the avatar is an avatar generated by free combination, and the expression may, but need not, be displayed through steps S101-S106.
  • the method includes the following steps:
  • Step 1) Receive a specified expression sequence, the sequence being a single expression, or several expressions selected by the user by sliding the pointing device or swiping across the touch screen;
  • Step 2) Set the expression in the sequence that satisfies the trigger condition as the expression of the user's avatar.
  • FIG. 10 is a schematic illustration of the aforementioned emoticon device.
  • In one embodiment, as illustrated, the central portion of the emoticon device dynamically displays the user's avatar with its expression, and expression identifiers are arranged around the central portion; the identifiers provide a human-machine interface through which the user inputs an expression (clicking an identifier or sliding over it both count as input through the identifier: sliding over an identifier invokes the device's avatar expression preview module, clicking it invokes the avatar expression switching module, and sliding over several identifiers in succession produces an expression sequence); the device then shows the freely combined avatar with that expression in its central portion, according to steps S101-S106 of the present invention or another way of displaying expressions.
  • In one embodiment, in addition to the above functions, the emoticon device can also output the modal particle corresponding to the expression the user inputs, such as "hehe" for the expression "happy", rendered as text or played back as sound.
  • In addition, other user interaction functions, such as application or game operations, can be integrated into the emoticon device.
  • In a preferred emoticon device embodiment, the user selects a sequence of expression identifiers by sliding the pointing device or swiping across the touch screen, and the expression that satisfies the trigger condition becomes the expression displayed by the user's avatar.
  • For example, the trigger condition may be that the pointing device or finger is located on an expression identifier when the operation stops (such as a mouse-up or the finger leaving the touch screen); or that the pointing device or finger rests on an expression identifier long enough; or another user interaction, such as a double click or a touch gesture.
  • In another embodiment, the expression sequence may trigger several expressions to be displayed in order, such as "laugh-cry-laugh-cry-laugh", meaning "between tears and laughter".
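A minimal sketch of how such trigger conditions might be evaluated, assuming hypothetical event names ("hover", "mouse_up") and a dwell threshold; this is not the device's actual event model.

```python
# Hypothetical sketch: decide which expression in a swiped-over sequence is
# finally displayed, using either a dwell trigger or a release trigger.
from typing import List, Optional, Tuple

DWELL_THRESHOLD = 0.8   # seconds; assumed value

def triggered_expression(trace: List[Tuple[str, str, float]]) -> Optional[str]:
    """trace: (event, expression_id, seconds) tuples in time order.
    Events are assumed: "hover" while sliding over an identifier, "mouse_up" on release."""
    if not trace:
        return None
    for event, expression, seconds in trace:
        if event == "hover" and seconds >= DWELL_THRESHOLD:
            return expression                 # dwelled on an identifier long enough
    last_event, last_expression, _ = trace[-1]
    return last_expression if last_event == "mouse_up" else None

trace = [("hover", "laugh", 0.2), ("hover", "cry", 0.3), ("mouse_up", "smile", 0.0)]
print(triggered_expression(trace))            # smile
```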
  • The above emoticon device acts on avatars generated by free combination; the expression may be displayed through steps S101-S106, though not necessarily.
  • The present invention also provides a method, implemented on at least one computing device containing a processor, a memory, and a communication platform interface, for finding users by avatar, where the avatar is an avatar generated by free combination; the method includes:
  • Step 1) Receive specified avatar information;
  • Step 2) Return information on at least one user whose avatar component metadata and the specified avatar component metadata satisfy the matching condition.
  • Because the avatars generated by the foregoing method correspond to users, users can conversely be found through a known avatar.
  • In one embodiment, a user can use an avatar to look for other users, for example to find friends, someone who looks like a certain star, or someone who matches his expectations.
  • This embodiment builds an inverted index over the user group, from which the list of users whose avatars use a given organ component can be obtained. Because the identity description of an avatar is a combination of avatar organ components, after the metadata of each component of the target avatar is matched against the metadata in the avatar organ database, the user lists of the organ components in the database that satisfy the matching condition are intersected, yielding the users whose avatars meet the condition.
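By way of illustration, a minimal sketch of the inverted index and intersection described above; the component identifiers and toy data are hypothetical.

```python
# Hypothetical sketch: inverted index from organ-component id -> users,
# intersected across the target avatar's components to find matching users.
from collections import defaultdict
from functools import reduce

# user -> avatar identity description (organ-component ids); toy data
users = {
    "user1": {"brow": "A-brow", "eye": "D-eye", "mouth": "B-mouth"},
    "user2": {"brow": "C-brow", "eye": "D-eye", "mouth": "B-mouth"},
    "user3": {"brow": "A-brow", "eye": "D-eye", "mouth": "E-mouth"},
}

# Build the inverted index: component id -> set of users whose avatar uses it.
index = defaultdict(set)
for user, components in users.items():
    for component_id in components.values():
        index[component_id].add(user)

def find_users(target: dict) -> set:
    """Intersect the user lists of all components of the target avatar."""
    user_lists = [index.get(cid, set()) for cid in target.values()]
    return reduce(set.intersection, user_lists) if user_lists else set()

print(find_users({"brow": "A-brow", "eye": "D-eye", "mouth": "B-mouth"}))  # {'user1'}
```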
  • In the preceding embodiments, the avatar organ database is an existing database; in another embodiment there is no ready-made avatar organ database, or an existing one needs to be expanded, so the method further includes, before step S101, a step of building or expanding the avatar organ database.
  • The handling of a 3D avatar differs from that of a 2D avatar mainly in that the 3D face model is hollowed out along standard organ contours, and each 3D organ model conforms to the standard contour of its organ type, ensuring that it can be implanted into the 3D face model and interchanged with 3D organ models of the same type.
  • The viewing-angle information of the two-dimensional avatar is replaced by the three-dimensional standard organ contour information; the location information describes the 3D model, relating the alignment-point information to three-dimensional spatial orientation and ensuring correct implantation or replacement of a 3D organ model; expression information, identity information, and the various attribute information are not substantially different from the two-dimensional case.
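The sketch below only illustrates the idea of swapping same-contour organ models in and out of a hollowed face model; the slot names and the dictionary-based representation are assumptions, not the patent's data format.

```python
# Hypothetical sketch: a 3D face model hollowed along standard organ contours
# is represented as named slots; any organ model declaring the same standard
# contour can be implanted into the matching slot and swapped out later.
face_model = {"shell": "mesh:face1", "slots": {"mouth": None, "left_eye": None}}

organ_models = {
    "mouth-smile-avatar1": {"contour": "mouth", "mesh": "mesh:mouth_smile_1"},
    "mouth-kiss-avatar2":  {"contour": "mouth", "mesh": "mesh:mouth_kiss_2"},
}

def implant(face: dict, organ_id: str) -> None:
    organ = organ_models[organ_id]
    slot = organ["contour"]
    if slot not in face["slots"]:
        raise ValueError(f"no slot for contour {slot!r}")
    face["slots"][slot] = organ["mesh"]      # safe: same standard contour

implant(face_model, "mouth-smile-avatar1")   # the avatar shows "smile"
implant(face_model, "mouth-kiss-avatar2")    # replacing the mouth yields "kiss"
print(face_model["slots"]["mouth"])          # mesh:mouth_kiss_2
```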
  • Below, a two-dimensional avatar is taken as an example to describe in detail the method of building or expanding the avatar organ database.
  • Referring to FIG. 9, the steps of building or expanding the avatar organ database include:
  • Step S401 Acquire an original image of the avatar with an expression, and label the perspective information, the identity information, the expression information, the location information, and the category information;
  • In this embodiment, the acquired avatar original images are electronic-format image files drawn by an artist on a computing device with drawing software, following the specification.
  • In other embodiments, the avatar original images may also be captured by video or by photographing a model, and saved in any file format.
  • When drawing these images, the face orientation of the avatar model is fixed and unified, and the distance is adjusted based on the size of the model's face, ensuring that the perspective effect is consistent with the specific perspective cross box described above; only when the viewing angle of the drawn avatar and the size and position of its organs conform to the standard can organs of the same type, from different avatars or with different expressions, be interchanged and combined to generate different but reasonable faces.
  • The above method can also be used for video or photography: when capturing a model's facial expression, the perspective effect is kept consistent with the specific perspective cross box, ensuring the replaceability of the facial organs for combination into different but reasonable faces.
  • Each avatar original image obtained in this step should be labeled with its category information, that is, all the organ types it contains; with its viewing-angle information, such as a description of the specific perspective cross box used to generate the image; and with its location information, for example using the intersection of the above perspective cross as the alignment point and recording the position of that alignment point inside the image or beyond the image boundary (typically, the distance between the alignment point and the upper-left corner of the image).
  • the location information may also include information such as scaling scale, rotation angle, etc., if desired. If there is a default location, the original image that matches the default location may have no location information. If there is a default perspective, the original image that matches the default perspective may have no perspective information. If there is a default category, the original image that matches the default category may have no category information.
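A minimal sketch of an annotation record for one original image, with hypothetical field names and values; fields with defaults may be omitted, as noted above.

```python
# Hypothetical sketch: the labels attached to one original avatar image.
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class OriginalImageAnnotation:
    image_file: str
    identity: str                                   # which avatar the image belongs to
    expression: str                                 # e.g. "smile", "cry"
    categories: List[str]                           # all organ types the image contains
    perspective: Optional[str] = None               # description of the perspective cross box used
    alignment_point: Optional[Tuple[float, float]] = None  # offset from the upper-left corner
    scale: float = 1.0
    rotation: float = 0.0


annotation = OriginalImageAnnotation(
    image_file="avatarA_smile.png", identity="A", expression="smile",
    categories=["brow", "eye", "nose", "mouth", "face"],
    alignment_point=(128.0, 96.0),
)
```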
  • As a preferred implementation, the acquired avatar original image is one, or a set, of multi-layer files that can be aligned: facial organs such as hair, eyebrows, eyes, nose, mouth, and ears, and body parts such as skin, arms, thighs, and torso, are located on different layers, and when all layers are shown together the relative orientations of the organs are correct.
  • The data of each layer contain the shape information of the facial organ or body part located on that layer, its category information (which can be omitted if there is a default, for example if a layer corresponds to a given organ type by default), and the offset distance of the layer relative to the full image (if not zero).
  • Each layer can be processed independently.
  • The original image set obtained in this step may cover multiple avatars, and the original images of each avatar should be labeled with that avatar's identity information to distinguish them; different avatars may each have multiple expressions, such as smiling, depressed, crying, shy, surprised, or sad, and each original image with an expression should be labeled with its expression information to distinguish it.
  • These expressions of the avatar relate to the shapes of facial organs such as the eyes, eyebrows, nose, and mouth and to their positions in the face, and also to the postures and movements of body parts such as the arms and thighs (for example, when excited, the posture or movement of the arms can reflect the expression).
  • When acquiring avatar original images, this step should collect as diverse a set of expressions as possible, to facilitate the later generation of combined avatars with expressions.
  • Step S402) Process the avatar original images acquired in step S401, extract graphic files of facial organs or partial faces, and label the viewing-angle information, identity information, expression information, location information, and category information.
  • In this embodiment, individual layers of the multi-layer avatar original image are extracted as graphic files of facial organs or partial faces (collectively, face components), each labeled with viewing-angle information (taken from the original image's viewing-angle information), identity information (from the original image's identity information), expression information (from the original image's expression information), location information (from the original image's location information, adjusted by the layer's offset distance), and category information (the layer's category information).
  • Information with default values can be omitted.
  • In other embodiments, original images whose organs are not separated into layers need to be processed to extract individual face components; this can be done with face component templates and organ contour shape recognition, for example identifying the nose and then cropping or blanking the area outside it.
  • As an optional implementation, when extracting the graphic file of a facial organ or partial face (collectively, a face component) from the avatar original image, note that the face component occupies only part of the whole original image; the surrounding blank portion is therefore cropped (or otherwise removed) during extraction, keeping only an image the size of the face component's bounding rectangle as the graphic file, and the alignment-point type and the alignment point's position inside the image or beyond the image boundary are computed from the cropping information and labeled as location information, so that the image can be displayed correctly. If needed, the location information may also include scaling scale, rotation angle, and similar information, likewise computed from the cropping information.
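A minimal sketch of the cropping step, assuming a Pillow RGBA image whose blank surroundings are transparent; the helper name is illustrative only.

```python
# Hypothetical sketch: crop an extracted face component to its bounding
# rectangle and recompute the alignment point's position relative to the
# cropped image, so the component can still be placed correctly later.
from PIL import Image

def crop_component(path: str, alignment_point: tuple) -> tuple:
    """Returns (cropped_image, new_alignment_point); assumes transparent padding."""
    img = Image.open(path).convert("RGBA")
    bbox = img.getbbox()                     # bounding box of the non-blank pixels
    if bbox is None:                         # completely blank image
        return img, alignment_point
    left, top, _, _ = bbox
    cropped = img.crop(bbox)
    ax, ay = alignment_point                 # original offset from the upper-left corner
    # After cropping, the alignment point may lie outside the new image border;
    # its coordinates are simply shifted by the cropping offset.
    return cropped, (ax - left, ay - top)
```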
  • As an optional implementation, when the acquired avatar original image is a multi-layer file, i.e. facial organs such as hair, eyebrows, eyes, nose, mouth, and ears are located on different layers, the layer of each facial organ is cut out of the avatar original image separately, and the surrounding blank portion is then removed as above.
  • As an optional implementation, the extracted face component graphic file is not generated or saved separately but points to the image resource in another manner, for example by giving identification information that specifies an image file, a layer within it, and the cropping within that layer.
  • A partial face is a combination of several facial organs, so extracting a partial face graphic file is not substantially different from extracting a facial organ graphic file.
  • Each face component graphic file obtained includes the shape information of the face component, category information (one or more organ types), identity information (one or more avatar identities), and so on; in addition, it should include identification information for distinguishing it.
  • Step S403) Normalize the facial organ or partial face graphic files of each category, and update their location information.
  • As one implementation, the extracted face components are processed separately by organ type; the type of a partial face is the set of its constituent organ types.
  • For example, all "mouth" graphic files are grouped into one category for unified processing.
  • As a preferred approach, the smallest common bounding rectangle of all face component pictures of the same category can be taken as the normalized picture size for that category, producing same-category face component graphic files that are easy to interchange; during normalization, the location information of each picture should be correctly applied, computed, and labeled.
  • In other examples, the graphic files of different expressions of the same category of face component may also be optimized separately, obtaining bounding rectangles that may be smaller for individual types and expressions.
  • As an optional implementation, the normalized face component graphic file is not generated or saved separately but points to the image resource in another manner, for example by giving identification information specifying an image file, a layer within it, and the cropping within that layer; for example, multiple face component graphic files of the same category may be merged into one file.
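As an illustration, the sketch below computes the smallest common bounding rectangle for one category, assuming each component already records its bounding box in a shared coordinate frame; it is not the invention's normalization code.

```python
# Hypothetical sketch: take the union of the bounding rectangles of all face
# components of one category (e.g. "mouth") as the normalized picture size,
# so same-category graphic files can be swapped without re-layout.
def common_bounding_rect(boxes):
    """boxes: iterable of (left, top, right, bottom) in a shared coordinate frame."""
    lefts, tops, rights, bottoms = zip(*boxes)
    return (min(lefts), min(tops), max(rights), max(bottoms))

mouth_boxes = [(40, 120, 88, 150), (36, 118, 90, 156), (42, 121, 85, 149)]
print(common_bounding_rect(mouth_boxes))   # (36, 118, 90, 156)
```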
  • Step S404 adjusting information or labeling additional information for the facial organs or partial faces in each category, including shape information, category information, expression information, identity information, perspective information, and the like.
  • As a preferred approach, some expression-bearing face component graphic files can be used to display a variety of expressions; the nose, for example, usually does not change much with expression. Such graphic files can be given a special label, or their expression information can be omitted.
  • Optionally, this step may also label the face component graphic files with one or more items of attribute information, including but not limited to identification information, color information, age information, gender information, ethnicity information, and occupation information.
  • In one embodiment, the information of a face component can be adjusted to change its range of application, for example by adjusting its expression information, thereby allowing face components whose identity information or other attribute information meets certain conditions to be used to display expressions beyond those labeled in steps S402-S403. One example of such a condition is a special identity whose facial muscle tissue has been injured so that expressions display abnormally; another example is the identity of an actor or user who can make, or wishes to make, an unusual face.
  • In another embodiment, the image and related information of a face component can also be modified by computation to change the component's shape, color, skin color, and similar attributes. One example is "widening the eyes", where the width of the eye image is unchanged but its height is multiplied by a factor (e.g. 1.2); another example is "applying lipstick", adjusting the color of the lips; another is changing the perspective viewing angle through an affine transformation of the image.
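A minimal sketch of the "widening the eyes" example, assuming a Pillow image of the eye component; the factor of 1.2 follows the text, everything else is illustrative.

```python
# Hypothetical sketch: "widen the eyes" by keeping the eye image's width
# and multiplying its height by a factor (1.2 in the text's example).
from PIL import Image

def widen_eyes(eye_image: Image.Image, factor: float = 1.2) -> Image.Image:
    width, height = eye_image.size
    return eye_image.resize((width, int(height * factor)))

# A perspective change could similarly be approximated with an affine
# transform, e.g. eye_image.transform(eye_image.size, Image.AFFINE, coeffs).
```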
  • Step S405 Store information of the previously obtained facial organs or partial faces to generate a virtual image organ database.
  • In one embodiment, the information stored in the organ database for a facial organ or partial face includes an image (i.e., shape information) and metadata; the metadata may include location information, expression information, identity information, viewing-angle information, category information, identification information, color information, age information, gender information, ethnicity information, occupation information, and so on.
  • As a preferred implementation, the shape information of a face component (i.e., its graphic file) may be stored in one data set and its other information in another data set, and the data in the two data sets can be associated through identification information; this makes data querying, indexing, updating, transmission, and storage easier.
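A minimal sketch of the two-data-set layout, with hypothetical identifiers and paths; a real deployment would use database tables or object storage rather than dictionaries.

```python
# Hypothetical sketch: shape information (graphic files) in one data set,
# the remaining metadata in another, associated through identification info.
shapes = {
    "100001": "images/nose_A.png",        # identification info -> graphic file
    "100002": "images/mouth_B_smile.png",
}

metadata = {
    "100001": {"category": "nose", "identity": "A", "expression": None},
    "100002": {"category": "mouth", "identity": "B", "expression": "smile"},
}

def load_component(component_id: str) -> dict:
    record = dict(metadata[component_id])
    record["image"] = shapes[component_id]   # join the two data sets by id
    return record

print(load_component("100002")["image"])     # images/mouth_B_smile.png
```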
  • In use, when the category information or other attribute information satisfies the query or filter conditions set by an application, these face component images can provide the application with candidate components that can be combined into an avatar.
  • When the identity information is the same or matches, when the two-dimensional viewing-angle information or the three-dimensional contour information is the same or matches, and when the expression information conforms to the expression the application requires, these face component images can be combined to generate the avatar with the required expression.
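The following sketch shows one way the three matching conditions above could be checked when picking replacement components; the metadata fields and toy records are illustrative only.

```python
# Hypothetical sketch: select components whose identity and viewing angle
# match the base face and whose expression matches the requested one.
def matching_components(components, base_identity, base_view, expression):
    return [
        c for c in components
        if c["identity"] == base_identity
        and c["view"] == base_view
        and c["expression"] == expression
    ]

components = [
    {"id": "A-brow-proud", "identity": "A", "view": "front", "expression": "proud"},
    {"id": "A-brow-sad",   "identity": "A", "view": "front", "expression": "sad"},
    {"id": "D-eye-proud",  "identity": "D", "view": "front", "expression": "proud"},
]
print([c["id"] for c in matching_components(components, "A", "front", "proud")])
# ['A-brow-proud']
```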
  • In one embodiment, an artist draws, according to step S401, original face images of 6 male and 6 female avatars from the same viewing angle.
  • The eyebrows, eyes, noses, mouths, and face shapes of these 12 avatars all differ from one another; only the eyebrows, eyes, and mouths have different shapes for different expressions, each avatar has 18 expressions, and there are 216 original face images with expressions in total.
  • Male and female organs cannot be interchanged, but organs of the same type can be interchanged among the male avatars and among the female avatars, because the viewing angle is kept consistent. The organ database is then generated following the remaining steps.
  • Without considering other avatar appearance attributes such as hairstyle and skin color, this database can combine to generate 2×6×6×6×6×6 = 15552 different male and female combined avatar identities, where the five organs of each combined identity come from the six original avatars of the same gender, and each combined avatar identity has 18 expressions.
  • The above avatar organ database allows social network users to choose from it an avatar to represent themselves and to have that avatar display these 18 expressions.
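As a check on the arithmetic in this example, a short sketch of the counts that follow from the stated numbers.

```python
# 2 genders x 6 choices for each of the 5 organ types (brow, eye, nose,
# mouth, face shape) gives the number of combined avatar identities.
identities = 2 * 6 ** 5
print(identities)        # 15552
print(identities * 18)   # 279936 expression faces, if every identity shows all 18 expressions
```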
  • Corresponding to the above methods, the present invention further provides a system, implemented on at least one computing device comprising at least one processor, a memory, and a communication platform interface, for generating a specified expression for a freely combined avatar; the avatar includes but is not limited to two-dimensional or three-dimensional characters, animals, cartoon shapes, or abstract emoticon faces, and the facial organs or partial faces acquired by the system come from a set that can be combined to generate different avatars.
  • the system includes:
  • a first information receiving module, configured to receive first information related to an avatar representing a user;
  • a first data acquisition module, which acquires one or more facial organs or partial faces from the set based on the first information;
  • a basic synthetic face generating module, which generates, based on the facial organs or partial faces, a basic synthetic avatar face representing the user;
  • a second information receiving module, configured to receive second information related to the specified expression the avatar is to display;
  • a second data acquisition module, which acquires one or more facial organs or partial faces from the set based on the second information;
  • an expression-bearing synthetic face generating module, which generates for the avatar, based on the basic synthetic face and the facial organs or partial faces obtained by the second data acquisition module, a synthetic face displaying the specified expression.
  • The present invention also provides a system, implemented on at least one computing device comprising at least one processor and a memory, for building or expanding an avatar organ database used to generate expressions for freely combined avatars, the system including:
  • an original image acquisition module, configured to acquire a set of original images of avatars, or partial avatars, with expressions, and to label each original image with its expression and the avatar it belongs to, where the format of the original images includes but is not limited to one or more graphic description formats such as bitmap or vector formats, or a 3D model format;
  • an image and metadata generation module, which generates images and metadata of facial organs or partial faces based on the original images, the metadata including expression information.
  • The present invention also provides another system, implemented on at least one computing device comprising at least one processor and a memory, for building or expanding an avatar organ database used to generate expressions for freely combined avatars, the system including:
  • an original image acquisition module, which acquires original images of avatars with expressions and labels the viewing-angle information, identity information, expression information, location information, and category information;
  • a component extraction module, which processes the avatar original images, extracts graphic files of facial organs or partial faces, and labels the viewing-angle information, identity information, expression information, location information, and category information;
  • a normalization processing module, which normalizes the facial organ or partial face graphic files of each category and updates their location information;
  • an information adjusting or labeling module, which adjusts or labels additional information for the facial organs or partial faces in each category, including shape information, category information, expression information, identity information, viewing-angle information, etc.;
  • an image and metadata generation module, which stores the information of the previously obtained facial organs or partial faces to generate the avatar organ database.
  • The present invention further provides a system, implemented on at least one computing device containing a processor, a memory, and a communication platform interface, for controlling the expression of a user's avatar, the avatar being an avatar generated by free combination, the system including:
  • an expression setting module, which, based on one or more parameters related to the application environment in which the avatar is located, sets an expression satisfying a trigger condition as the expression of the user's avatar.
  • The present invention also provides a system, implemented on at least one computing device containing a processor, a memory, and a communication platform interface, where at least one device further includes a pointing device or a touch screen, for controlling the expression of a user's avatar, the avatar being an avatar generated by free combination, the system including:
  • an expression receiving module, configured to receive a specified expression sequence, the sequence being a single expression, or several expressions selected by the user by sliding the pointing device or swiping across the touch screen;
  • an expression setting module, configured to set the expression in the sequence that satisfies the trigger condition as the expression of the user's avatar.
  • The present invention also provides a system, implemented on at least one computing device containing a processor, a memory, and a communication platform interface, for finding users by avatar, the avatar being an avatar generated by free combination, the system including:
  • an avatar receiving module, which receives specified avatar information;
  • an avatar search module, which returns information on at least one user whose avatar component metadata and the specified avatar component metadata satisfy the matching condition.
  • The present invention has wide applicability: the avatar generation system (i.e., the system for generating a specified expression for freely combined avatars mentioned above) can serve as the back end of a social data server or game server as shown in FIG. 11; the avatar generation system can also act as an independent service provider as shown in FIG. 12; and the avatar generation system and the avatar organ creation system can each act as independent service providers as shown in FIG. 13.
  • Where program code executes on a programmable computer, the computing device generally includes a processor, a storage medium readable by the processor (including volatile and nonvolatile memory and/or storage elements), at least one input device, and at least one output device.
  • One or more programs that can make use of the creation and/or implementation aspects of the domain-specific programming model of the present invention, for example by using a data processing API or the like, are preferably implemented in a high-level procedural or object-oriented programming language to communicate with a computer system.
  • the program can be implemented in assembly or machine language if desired. In any case, the language can be a compiled or interpreted language and combined with a hardware implementation.

Abstract

A method for generating and using expressions for avatars created by free combination, including: selecting facial organs or partial faces as components from an avatar organ set to form a basic avatar; selecting a specific expression; and then replacing organ components of the basic avatar with organ components that satisfy the conditions, so that the avatar matches the selected specific expression. Also provided is a method for generating an avatar organ database that meets the needs of the above expression implementation. The method can use a relatively small number of image resources to produce an extremely large number of freely combined avatars from which users can choose an avatar to represent themselves, and personal feelings or moods can be expressed through the avatar's facial expressions. The method is applicable to different kinds of two-dimensional and three-dimensional avatars, including human faces, full bodies, animals, cartoon shapes, and abstract emoticon faces.

Description

为自由组合创作的虚拟形象生成及使用表情的方法和系统 技术领域
本发明涉及计算机图形领域,特别涉及一种为自由组合创作的虚拟形象生成及使用表情的方法和系统。
背景技术
达尔文在他1872年出版的《人类和动物的表情》一书中提到,人类具有以下特殊表情:痛苦与哭泣、意志消沉、忧虑、悲哀、沮丧、失望、快乐、精神奋发、爱情、温情、崇拜、回想、默想、恶劣情绪、愠怒、决心、憎恨与愤怒、鄙视、轻蔑、厌恶、自觉有罪、骄傲、孤立无援、忍耐、肯定和否定、惊奇、吃惊、恐惧、大惊、自己注意、羞惭、谦虚、脸红(北京大学出版社,周邦立译)。达尔文相信人类(和动物)进化出表达情绪和识别表情的能力,是使人类(动物)能够生成社会组织从而加大物种生存力的重要手段;表情和语言文字一并作用于人类社会之存在、发展的基础。心理学家、生理学家普遍认为面部表情的机制与不同部位的肌肉变化相关,但是迄今为止仍然无法完全解析其中奥秘并建立完整模型,因为面部表情肌肉组织系统至少有以下18种肌肉(有的分左右):耳前肌、颊肌、降眉肌、降口角肌、降下唇肌、降鼻中隔肌、额肌、提口角肌、提上唇肌、颏肌、鼻肌、眼轮匝肌、口轮匝肌、颈阔肌、降眉间肌、笑肌、颧大肌、颧小肌;其组合效应极其复杂微妙。基于此,作为一个比较成熟并且商业化程度极高的领域,计算机生成影像界(CGI)的做法是,通过动作捕捉、图像捕捉真人替身的表情,然后把表情重现在固定的动画人物造型脸孔上,而不是经过表情肌肉组织系统模型算法来生成人物脸孔表情。CGI界并不需要为不限量的,非预先固定的脸孔生成表情,譬如为真人随机群众里众多个脸孔生成表情图像。在互联网诞生之初,以文本作为在线交流的时代就存在“情感符”(emoticon,文字标点符号绘制的面部表情,比如:-)亦称“笑容符”smiley)。今日情感符更加图形化和个性化,但其多样性和仿真性还远远不足以区分个别用户,让用户拿来当作自己的个人虚拟形象。
计算机虚拟形象越来越多地用于在线社交环境中,成为形成用户带入感和亲密感的一个重要组成部分。计算机虚拟形象技术作为一个市场相对成熟的领域能够通过自由组合不同的虚拟形象部件器官来创造极大数量的不同虚拟形象脸孔,但是这些虚拟形象脸孔缺少做不同面部表情的功能。比较知名的用户自定义虚拟形象系统比如模拟人生(Sims)、魔兽世界(World of Warcraft)、腾讯QQ等,他们的自由组合虚拟形象都不 能做不同的面部表情。除此之外,传统的警方脸孔组合素描图册系统所生成的也是固定面无表情的脸孔。这些先例没有一个能为自由组合的虚拟形象脸孔生成多种面部表情,尽管多种面部表情能够加强产品的用户体验和用户的社交体验。
发明内容
本发明的目的在于现有技术无法为自由组合的虚拟形象脸孔灵活生成多种面部表情的缺陷,从而提供一种能够为自由组合的虚拟形象脸孔生成多种面部表情的方法与系统。
为了实现上述目的,本发明提供了一种实现于至少一个计算装置,所述计算装置至少包含一个处理器、存储器和通讯平台接口,为虚拟形象生成指定表情的方法,所述虚拟形象包括但不限于二维或三维的人物、动物、卡通造型、或者抽象情感符脸孔,该方法所获取的脸部器官或局部脸孔来自于一个供组合生成不同虚拟形象来代表多个用户的集合,该方法包括以下步骤:
步骤1、接收第一信息,所述第一信息与代表用户的虚拟形象相关;
步骤2、基于第一信息从上述集合获取一个或多个脸部器官或局部脸孔;
步骤3、基于上述脸部器官或局部脸孔,生成代表用户的虚拟形象基础合成脸孔;
步骤4、接收第二信息,所述第二信息关于虚拟形象所欲显示的指定表情;
步骤5、基于第二信息从上述集合获取一个或多个脸部器官或局部脸孔;
步骤6、基于基础合成脸孔和步骤5所得到的脸部器官或局部脸孔,为虚拟形象生成显示上述指定表情的合成脸孔。
上述技术方案中,所述第一信息包括以下至少一个,从多个预设基础合成脸孔中指定所需的基础合成脸孔的选择信息,或者一个或多个与虚拟形象有关的属性信息。
上述技术方案中,所述第一信息所包括的一个或多个与虚拟形象有关的属性信息,包括以下至少一个,性别、年龄、种族、职业。
上述技术方案中,所述获取的脸部器官或局部脸孔具有关联的元数据。
上述技术方案中,所述与脸部器官或局部脸孔关联的元数据包括以下至少一个:表情信息、身份信息、视角信息、轮廓信息、位置信息。
上述技术方案中,所述获取的脸部器官或局部脸孔,基于其相关联的元数据。
上述技术方案中,所述第二信息包括以下至少一个:
用户输入,指定其虚拟形象所欲显示的表情;
一个或多个与虚拟形象所处应用环境相关的参数;
来自社交网络发送的能够作用于虚拟形象的信息。
上述技术方案中,还包括存储能显示指定表情的自由组合虚拟形象或带表情合成脸孔,该存储信息供其它应用使用。
上述技术方案中,将所述虚拟形象带表情合成脸孔渲染并存储为图片,所述图片为静态图片或带有表情动画的动态图片。
上述技术方案中,将所述虚拟形象带表情合成脸孔存储为能够重现该虚拟形象的数据信息,所述数据信息包括虚拟形象所用脸部器官或局部脸孔的标识信息。
上述技术方案中,所述虚拟形象还包括肢体器官,所述虚拟形象的肢体动作符合所述指定表情。
上述技术方案中,所述集合为虚拟形象器官数据库。
上述技术方案中,还包括建立或扩充虚拟形象器官数据库的步骤。
上述技术方案中,用户在给其他用户发送的信息中包括能做表情的虚拟形象。
上述技术方案中,用户为其他用户制作能做表情的虚拟形象,并且将该虚拟形象发送到社交网络上作为分享。
上述技术方案中,在步骤4中所接收的第二信息为一表情序列信息,在步骤6中为虚拟形象生成显示该表情序列中表情的合成脸孔。
本发明还提供了一种实现于至少一个计算装置,所述计算装置至少包含一个处理器和存储器,建立或扩充带表情的虚拟形象器官数据库的方法,该数据库供组合生成不同虚拟形象来代表多个用户,该方法步骤包括:
步骤1、获取带有表情的虚拟形象或者部分虚拟形象的原始图像集合,为所述原始图像标识其表情和所属虚拟形象,所述原始图像采用的格式包括但不限于一或多包括位图格式、矢量图格式在内的图形描述格式,或三维模型格式;
步骤2、基于所述原始图像生成脸部器官或局部脸孔的图像和元数据,元数据包括表情信息。
上述技术方案中,所述脸部器官或局部脸孔的元数据具有身份信息,所述身份信息基于其组成部分原始图像所属虚拟形象。
上述技术方案中,所述脸部器官或局部脸孔元数据包括以下至少一个:
二维图像视角信息,或者赋予元数据不含视角信息的所述二维脸部器官或局部脸孔一个默认视角信息;
三维图像轮廓信息,或者赋予元数据不含轮廓信息的所述三维脸部器官或局部脸孔一个默认轮廓信息。
上述技术方案中,所述脸部器官或局部脸孔元数据还包括位置信息,所述位置信息用于该脸部器官或局部脸孔在整个虚拟形象中的定位。
上述技术方案中,还包括为所述脸部器官或局部脸孔标注至少一个属性信息,所述属性信息包括但不限于标识信息、类别信息、颜色信息、年龄信息、性别信息、种族信息、职业信息。
本发明又提供了一种实现于至少一个含处理器、存储器、通讯平台接口的计算装置,控制用户虚拟形象的表情的方法,所述虚拟形象是自由组合生成的虚拟形象:
基于一个或多个与虚拟形象所处应用环境相关的参数,将满足触发条件的表情设定为用户虚拟形象的表情。
本发明还提供了一种实现于至少一个含处理器、存储器、通讯平台接口的计算装置,其中至少一个装置还包含指向器或触控屏,控制用户虚拟形象的表情的方法,所述虚拟形象是自由组合生成的虚拟形象,包括:
步骤1、接收指定表情序列,所述表情序列为一个表情,或用户通过滑动指向器或者划过触控屏来选择的多个表情;
步骤2、将上述表情序列中满足触发条件的表情设定为用户虚拟形象的表情。
本发明又提供了一种实现于至少一个含处理器、存储器、通讯平台接口的计算装置,以虚拟形象来查找用户的方法,所述虚拟形象是自由组合生成的虚拟形象,包括:
步骤1、接收指定虚拟形象信息;
步骤2、返回至少一个用户信息,其虚拟形象部件元数据与指定虚拟形象部件元数据满足匹配条件。
本发明还提供了一种实现于至少一个计算装置,所述计算装置至少包含一个处理器、存储器和通讯平台接口,为虚拟形象生成指定表情的系统,所述虚拟形象包括但不限于二维或三维的人物、动物、卡通造型、或者抽象情感符脸孔,该系统所获取的脸部器官或局部脸孔来自于一个供组合生成不同虚拟形象来代表多个用户的集合,该系统包括:
第一信息接收模块,用于接收第一信息,所述第一信息与代表用户的虚拟形象相关;
第一数据获取模块,基于第一信息从上述集合获取一个或多个脸部器官或局部脸孔;
基础合成脸孔生成模块,基于上述脸部器官或局部脸孔,生成代表用户的虚拟形象基础合成脸孔;
第二信息接收模块,用于接收第二信息,所述第二信息关于虚拟形象所欲显示的指定表情;
第二数据获取模块,基于第二信息从上述集合获取一个或多个脸部器官或局部脸孔;
带表情合成脸孔生成模块,基于基础合成脸孔和所述第二数据获取模块所得到的脸部器官或局部脸孔,为虚拟形象生成显示上述指定表情的合成脸孔。
上述技术方案中,所述第一信息包括以下至少一个,从多个预设基础合成脸孔中指定所需的基础合成脸孔的选择信息,或者一个或多个与虚拟形象有关的属性信息。
上述技术方案中,所述第一信息所包括的一个或多个与虚拟形象有关的属性信息,包括以下至少一个,性别、年龄、种族、职业。
上述技术方案中,所述获取的脸部器官或局部脸孔具有关联的元数据。
上述技术方案中,所述与脸部器官或局部脸孔关联的元数据包括以下至少一个:表情信息、身份信息、视角信息、轮廓信息、位置信息。
上述技术方案中,所述获取的脸部器官或局部脸孔基于其相关联的元数据。
上述技术方案中,所述第二信息包括以下至少一个:
用户输入,指定其虚拟形象所欲显示的表情;
一个或多个与虚拟形象所处应用环境相关的参数;
来自社交网络发送的能够作用于虚拟形象的信息。
上述技术方案中,还包括一存储模块,用于存储能显示指定表情的自由组合虚拟形象或带表情合成脸孔,其它应用使用该存储信息时不需要所述供组合生成不同虚拟形象来代表多个用户的集合。
上述技术方案中,所述存储模块将所述虚拟形象带表情合成脸孔渲染并存储为图片,所述图片为静态图片或带有表情动画的动态图片。
上述技术方案中,所述存储模块将所述虚拟形象带表情合成脸孔存储为能够重现该虚拟形象的数据信息,所述数据信息包括虚拟形象所用脸部器官或局部脸孔的标识信息。
上述技术方案中,所述虚拟形象还包括肢体器官,所述虚拟形象的肢体动作符合所述指定表情。
上述技术方案中,所述集合为虚拟形象器官数据库。
上述技术方案中,还包括建立或扩充虚拟形象器官数据库的模块。
上述技术方案中,用户在给其他用户发送的信息中包括能做表情的虚拟形象。
上述技术方案中,用户为其他用户制作能做表情的虚拟形象,并且将该虚拟形象发送到社交网络上作为分享。
上述技术方案中,第二信息接收模块所接收的第二信息为一表情序列信息,带表情合成脸孔生成模块为虚拟形象生成显示该表情序列中表情的合成脸孔。
本发明又提供了一种实现于至少一个计算装置,所述计算装置至少包含一个处理器 和存储器,建立或扩充带表情的虚拟形象器官数据库的系统,该数据库供组合生成不同虚拟形象来代表多个用户,该系统包括:
原始图像获取模块,用于获取带有表情的虚拟形象或者部分虚拟形象的原始图像集合,为所述原始图像标识其表情和所属虚拟形象,所述原始图像采用的格式包括但不限于一或多包括位图格式、矢量图格式在内的图形描述格式,或三维模型格式;
图像与元数据生成模块,基于所述原始图像生成脸部器官或局部脸孔的图像和元数据,元数据包括表情信息。
上述技术方案中,所述脸部器官或局部脸孔的元数据具有身份信息,所述身份信息基于其组成部分原始图像所属虚拟形象。
上述技术方案中,所述脸部器官或局部脸孔元数据包括以下至少一个:
二维图像视角信息,或者赋予元数据不含视角信息的所述二维脸部器官或局部脸孔一个默认视角信息;
三维图像轮廓信息,或者赋予元数据不含轮廓信息的所述三维脸部器官或局部脸孔一个默认轮廓信息。
上述技术方案中,所述脸部器官或局部脸孔元数据还包括位置信息,所述位置信息用于该脸部器官或局部脸孔在整个虚拟形象中的定位。
上述技术方案中,还包括为所述脸部器官或局部脸孔标注至少一个属性信息,所述属性信息包括但不限于标识信息、类别信息、颜色信息、年龄信息、性别信息、种族信息、职业信息。
本发明再提供了一种实现于至少一个含处理器、存储器、通讯平台接口的计算装置,控制用户虚拟形象的表情的系统,所述虚拟形象是自由组合生成的虚拟形象,该系统包括:
表情设定模块,基于一个或多个与虚拟形象所处应用环境相关的参数,将满足触发条件的表情设定为用户虚拟形象的表情。
本发明还提供了一种实现于至少一个含处理器、存储器、通讯平台接口的计算装置,其中至少一个装置还包含指向器或触控屏,控制用户虚拟形象的表情的系统,所述虚拟形象是自由组合生成的虚拟形象,该系统包括:
表情接收模块,用于接收指定表情序列,所述表情序列为一个表情,或用户通过滑动指向器或者划过触控屏来选择的多个表情;
表情设定模块,用于将上述表情序列中满足触发条件的表情设定为用户虚拟形象的表情。
本发明还提供了一种实现于至少一个含处理器、存储器、通讯平台接口的计算装置, 以虚拟形象来查找用户的系统,所述虚拟形象是自由组合生成的虚拟形象,该系统包括:
虚拟形象接收模块,接收指定虚拟形象信息;
虚拟形象查找模块,返回至少一个用户信息,其虚拟形象部件元数据与指定虚拟形象部件元数据满足匹配条件。
本发明的优点在于:
本发明的方法和系统不需要动作捕捉、图像捕捉个别用户的脸孔,能够利用较少数量的图像资源产生极大数量的自由组合虚拟形象,供用户选择代表个人的虚拟形象并且以虚拟形象面部表情来表达个人情感或情绪。
附图说明
图1是人脸中对位点划分的示意图;
图2是图像信息与元数据的示意图;
图3是本发明的带有表情的虚拟形象生成方法生成显示指定表情的虚拟形象的基本过程的示意图;
图4是本发明的带有表情的虚拟形象生成方法的流程图;
图5是一个实施例中所采用的脸部器官选择界面的示意图;
图6是虚拟形象带表情器官图像替换的一个实例示意图;
图7是三维器官模型的示意图;
图8是生成带表情的三维模型的示意图;
图9是本发明的建立或扩充虚拟形象器官数据库的方法的流程图;
图10是一个实施例中采用本发明的方法所实现的虚拟形象表情仪的示意图;
图11是在一个实施例中,本发明的为自由组合虚拟形象生成指定表情的系统作为社交数据服务器或游戏服务器的后端系统的示意图;
图12是在一个实施例中,本发明的为自由组合虚拟形象生成指定表情的系统作为独立的服务提供者的示意图;
图13是在一个实施例中,本发明的为自由组合虚拟形象生成指定表情的系统与虚拟形象器官创建系统各自作为独立的服务提供者的示意图;
图14是一个实施例中采用本发明的方法所实现的虚拟形象表情私信的示意图。
具体实施方式
现结合附图对本发明作进一步的描述。
概念定义
为了便于理解,首先对本发明中所涉及的一些概念做统一的说明。
客户端:客户端(client-side device)可以是诸如个人计算机PC(Personal Computer)、笔记本电脑、平板电脑、手机、智能手机在内的终端设备;也可以是终端设备中的客户端模块,例如:网页浏览器(web browser)客户端、即时通信(instant messenger)应用客户端等。
对位点:对位点(alignment origin)指用来将多张图像重叠在一起而确保各图之间相对方位一致的共同原点;在印刷界类似概念是对位标记(registration mark)尤其用于四色印刷。
虚拟形象:虚拟形象(avatar)是在互联网、互联网应用中用来代表个人用户,并展示给其他用户的图形形象,可以仅含头像或兼含身体、肢体、发型、着装、配饰。虚拟形象亦包括卡通造型虚拟形象、动物虚拟形象、及更为抽象近似情感符的虚拟形象;在本申请的实施例中,以人类虚拟形象为例进行说明。
脸部器官:脸部器官(facial feature)是指虚拟形象脸部上的某一器官,以人脸为例,人脸上的脸部器官包括脸型、眉毛、眼睛、鼻子、嘴巴、耳朵等。
局部脸孔:局部脸孔(partial face)是指虚拟形象完整脸部的一部分,它是若干个脸部器官的组合,如由嘴巴、鼻子及两者周围的皮肤组合而成的局部脸孔。
脸孔部件:脸部器官和局部脸孔统称为脸孔部件(facial components)。
肢体器官:肢体器官(body parts)是指虚拟形象除脸部之外的某一器官,以人为例,躯干、胳膊、手、腿、脚等均属于肢体器官。
自由组合虚拟形象:自由组合虚拟形象(free-combination avatar)是指分别选择脸部器官、局部脸孔、或肢体器官(譬如分别选择眉毛、眼睛、鼻子、嘴巴、脸型),然后组合而成的虚拟形象。
表情:表情(expression)指用来显示情感(emotion)、情绪(mood)的脸孔面部状态,有时配合肢体动作或者口气声调。
虚拟形象器官数据库:虚拟形象器官数据库(avatar features database)指一个用于组合生成不同的虚拟形象的部件集合和相关信息,其内容部件可以是脸部器官、局部脸孔、或者肢体器官的图像信息,其相关信息可以是各个部件的元数据、属性信息等。
以上是对本发明中所涉及的相关概念的说明。在以下的实施例中都以人类为例,对本发明的方法如何生成带有表情的虚拟形象进行描述。采用本发明的方法同样可以生成其他类型的带有表情的虚拟形象,譬如卡通造型或者动物虚拟形象。
本发明的方法所获取的脸部器官或局部脸孔来自于一个供组合生成不同虚拟形象来代表多个用户的集合。在一个优选实施例中,该集合是一个虚拟形象器官数据库;首先对该数据库做详细描述,然后再说明本发明的方法如何使用该数据库。
在一个实施例中,虚拟形象器官数据库包含了所述脸部器官、所述局部脸孔与所述肢体器官的信息。这些信息总体包括两类,一类是图像信息,另一类是元数据;图像信息与元数据之间存在对应的关系。图2给出了图像信息与元数据的示意图。
所述图像信息描述了脸部器官或局部脸孔或肢体器官的形状。图像信息可以是二维图像格式,如位图格式,或矢量图格式,或其他可用的图形描述格式,也可以是三维模型格式。
所述元数据是用于描述对应图像信息的相关信息。所述元数据包括表情信息,表情信息能够反映所对应图像的表情,如一个张开的嘴巴所对应的表情为“大笑”,一个微微向上翘起的嘴巴所对应的表情为“微笑”。某些脸部器官或局部脸孔所对应的表情可以有多个,如一紧闭的嘴巴可以是“悲伤”或“紧张”。表情信息也可以为空,这代表着所对应图像所代表的脸部器官或局部脸孔或肢体器官不与特定的表情相关联,如“耳朵”通常就不会与某一特定的表情相关联。
所述元数据还可包括身份信息,所述身份信息用于标识对应图像所代表的脸部器官或局部脸孔或肢体器官属于哪一个虚拟形象。身份信息相同或者相匹配的脸部器官或局部脸孔或肢体器官,更能够替换着使用而不改变虚拟形象的身份。如果一个局部脸孔包含来自不同虚拟形象的器官组合,其身份信息包含其个别器官的身份信息,即标识其个别器官所属的虚拟形象。
所述元数据也可包括二维图像的视角信息,所述视角信息用于描述所对应的二维图像的观测角度。显然,同一个器官仰视、侧面、与正视所观测到的二维图像截然不同。相同或近似视角的图像更能够组合在一起。如果存在一个默认视角,符合该默认视角的图像,其元数据可以不带视角信息。
所述元数据也可包括三维图像模型的轮廓信息,3D脸型模型按照器官标准轮廓挖空,各个3D器官模型符合该类型器官标准轮廓,确保能够植入到3D脸型模型中,并且能够与同类型3D器官模型互换(参见图7)。如果存在一个默认器官标准轮廓组合,符合该默认标准轮廓的模型,其元数据可以不带轮廓信息。
所述元数据还可包括位置信息,所述位置信息用于描述所对应的图像在整个虚拟形象中所处的位置,也可能包含缩放尺度和旋转角度等。作为一种优选实现方式,所述位置信息包含对位点的类型以及该类型对位点在图像里,或者图像边界之外,的位置(通常,该对位点跟图像左上角的距离)。具有同类型对位点的多个图像,按上述缩放尺度 和上述旋转角度归一后,重叠在一起并且使该类型对位点对齐时,这些图像的相对方位关系正确,能够相容地一并展示。
根据人体的生理机构,本申请人观察到:将身长和头大小归一后,在人体上有若干个关键位置,如人脸的两眼正中、人体的肚脐,对于不同长相、不同体型的人,各个器官与这些关键位置之间的相对方位关系变化不大;在一种优选实现方式中拿这些关键位置作为对位点。参见图1,脸部器官的定位,基于脸部轮廓和一个十字,该十字由通过鼻子-嘴和通过左眼-右眼的两条直线所形成,是基本固定的;以上述十字的交叉点(即两眼正中)作为对位点,不同长相的人的脸部器官对于此对位点的相对方位也是基本固定的。所以对于不同的脸型、脸部器官,以此对位点作为将图像对齐展示的对位点有助于组合不同脸部器官或局部脸孔。同样的,以肚脐作为人体的对位点,有助于组合不同肢体器官。所述脸部器官或局部脸孔(统称为脸孔部件)图像,或肢体器官图像,的相关位置信息需包含与该脸孔部件或肢体器官具有相对方位关系的对位点的类型,以及该对位点在图像里或图像边界之外的位置。对于拟人化的其他虚拟形象,如动物,也可以采用类似的方法统一其对位点,从而将多个器官图像对齐后重叠展示。
所述元数据还可包括属性信息,属性信息包括但不限于以下信息中的一种或多种:标识信息、类别信息、颜色信息、年龄信息、性别信息、种族信息、职业信息。其中的标识信息用于标识所对应的图像,所述标识具有唯一性,如将一代表“鼻子”的图像标识为“100001”。类别信息用于描述对应图像所代表的脸孔部件或肢体器官(以下统称为器官部件)的类型,如标注某一脸部器官的类型为“鼻子”。颜色信息,用于描述对应图像所代表的器官部件的颜色,如“棕色”、“黑色”、“蓝色”(眼睛)等。年龄信息用于描述对应图像所代表的器官部件所从属的虚拟形象的年龄,以体现不同年龄段的虚拟形象在器官部件外观上的差别。性别信息用于描述对应图像所代表的器官部件所从属的虚拟形象的性别,以体现不同性别的虚拟形象在器官部件外观上的差别;如男、女在眼睛、嘴巴等脸部器官上有着较为明显的差异;当然某些器官部件的性别信息也允许是中性,这就意味着该器官男女都适用。种族信息用于描述对应图像所代表的器官部件所从属的虚拟形象的种族,以体现不同种族的虚拟形象在器官部件外观上的差别。职业信息用于描述对应图像所代表的器官部件所从属的虚拟形象的职业,以体现不同职业的虚拟形象在器官部件外观上的差别,如田径运动员通常比较瘦。
所述虚拟形象器官数据库中包含有多种类型的器官部件的信息,类型指“眼睛”、“鼻子”、“嘴巴”等;所述虚拟形象器官数据库中包含有同一类型的器官部件的多个图像和关联的元数据。以“嘴巴”为例,在所述虚拟形象器官数据库中可能包含有几百种不同的“嘴巴”的信息,这些“嘴巴”或者有不同的身份(如外观体现为厚唇或薄唇), 或者有不同的颜色(如外观体现为抹了不同颜色唇膏),或者有不同的表情(如微笑或大笑)。当身份信息相同或者相匹配,当二维图像视角信息或三维图像轮廓信息相同或者相匹配,当表情信息符合应用所需要的表情,这些图像可以用来组合生成应用所需要的带有表情的虚拟形象。当属性信息满足由应用设定的查询或筛选条件,这些图像可以为应用提供可以组合为虚拟形象的候选部件。
所述虚拟形象器官数据库中的各种信息在存储时,作为一种优选实现方式,可将形状信息(其对外表现为图形文件)和其它信息分开来存储在不同的数据集合中。多个数据集合中的数据之间的关系可通过标识信息、索引、指针等方式进行关联。这样做的优点是易于做数据查询、索引、更新、传输与存储。
以上是虚拟形象器官数据库的描述;其如何建立或扩充的方法将在下面另外做说明。首先,说明如何使用虚拟形象器官数据库来生成带有表情的虚拟形象。
在一个实施例中,虚拟形象器官数据库所包含的脸部器官或局部脸孔(统称为脸孔部件)的信息至少包括该脸孔部件的形状、位置、类别、身份、视角、表情等信息。举例,一个虚拟形象器官数据库包含相同视角的6个男性身份以及6个女性身份的基础脸孔(先不考虑表情)5种脸部器官,眉毛、眼睛、鼻子、嘴、和脸型(即每个类型器官有6个男性以及6个女性),互相都不同;其中仅眉毛、眼睛、嘴对于不同的表情有不同的形状,眉毛、眼睛、嘴这三个类型的器官各有相同的18种预设表情。男性和女性的器官不能互换,但男性脸部器官之间、女性脸部器官之间的上述同类型器官都可以互换,因为其视角统一。不考虑其它虚拟形象外观属性譬如发型、肤色等,此数据库能够组合生成2×6×6×6×6×6=15552个不同的男、女预设基础合成脸孔,各对应一个不同的组合虚拟形象身份,该组合虚拟形象的5种器官(眉毛、眼睛、鼻子、嘴、脸型)分别来自于上述同性别的基础脸孔器官(先不考虑表情);这个器官组合等同于该组合虚拟形象的身份信息。并且,每个组合虚拟形象身份,算上其基础合成脸孔,各有18种表情(因为每个带表情器官有相同的18种预设表情)。上述虚拟形象器官数据库,供社交网络用户从中选择代表自己的虚拟形象并且使其虚拟形象能够显示此18种表情。
基于该虚拟形象器官数据库,本发明的带有表情的虚拟形象实现方法生成显示指定表情的虚拟形象,参考图3,其基本原理为:为用户生成一个基础合成脸孔,然后根据所欲显示的表情从虚拟形象器官数据库中选取符合某些条件的脸孔部件来替代基础合成脸孔中的对应脸孔部件,生成显示指定表情的虚拟形象。(此处,“替代”指显示带指定表情的脸孔部件而不显示同类型的基础脸孔部件,实现方式可以是,但不一定是,在生成的合成脸孔上将基础脸孔部件数据“替换”为带指定表情脸孔部件的数据。)通常, 生成代表用户虚拟形象基础合成脸孔的步骤在显示该用户基础脸孔(如“面无表情”)时完成,这可以是让用户选择代表自己的虚拟形象的时候,也可以是,当任何用户的虚拟形象应该“面无表情”的时候;生成显示指定表情合成脸孔的步骤在让任何用户虚拟形象显示指定表情时完成。具有之前和之后不同表情转变的虚拟形象“做表情”视觉效果,才是最为鲜明有效的。
图6为虚拟形象带表情器官图像替换的一个实例示意图,其数据为上述虚拟形象器官数据库的部分数据。其中A,B,C,D,E,F为6个完全不同虚拟形象的身份,她们各有普通、得意、愤怒、哀愁、嘲笑、亲、无奈、兴奋、失望、犹豫、放心等不同的表情;以虚拟形象A为例,在显示多种不同的表情时A的眉、眼、嘴均做了器官图像替换。而G和H两个虚拟形象是由前述A-F这6个虚拟形象脸部器官组合而成。组合虚拟形象G的元数据身份信息为:眉来自A,眼来自D,鼻来自E,嘴来自B,脸型来自A,头发来自F;虚拟形象H的元数据身份信息为:眉来自C,眼来自C,鼻来自F,嘴来自E,脸型来自C,头发来自A。要显示虚拟形象G时,在步骤S101-步骤S103,从虚拟形象器官数据库中获取A-眉-普通,D-眼-普通,E-鼻(无表情),B-嘴-普通,A-脸型(无表情),F-头发(无表情),从而生成基础合成脸孔G-普通;在步骤S104,举例指定表情为“得意”;在步骤S105,获取A-眉-得意、D-眼-得意、B-嘴-得意;在步骤S106,替换掉基础合成脸孔里的A-眉-普通、D-眼-普通、B-嘴-普通,最终生成显示得意表情的合成脸孔G-得意。
三维虚拟形象的处理原理与二维虚拟形象基本相同,但以替换植入到脸型模型中的器官模型来实现。在一个实施例中,虚拟形象为三维模型。如图7,包含了一个按照器官标准轮廓挖空了器官后的3D脸型模型(含耳朵),和两组不同的脸部五官3D器官模型集合(左和右眉毛、左和右眼睛、鼻子、嘴巴);植入到脸型模型中,上述两组器官模型将分别形成两个不同的虚拟形象,形象1和形象2。一个虚拟形象的3D器官模型集合是包含多种表情的(如“笑”和“亲”为两组3D器官模型),以替换3D器官模型方式形成该虚拟形象的不同表情三维模型。如图8,包含了上述两个虚拟形象的“笑”和“亲”两种表情三维模型(截图为正面、侧面、斜45度角)。因为脸型模型中的空洞均按照器官标准轮廓挖空,同类型器官的3D器官模型可以互换地植入,跟上述二维虚拟形象组合方式一样能够组合生成不同身份的三维虚拟形象(譬如拿上述形象2的嘴巴来替代形象1的嘴巴形成新的组合形象3)。在此实施例中,一个三维虚拟形象器官数据库,供社交网络用户从中选择代表自己的三维虚拟形象并且使其三维虚拟形象能够显示多种表情。
现在,以二维虚拟形象为例,详细说明为虚拟形象生成指定表情的方法。参考图4,该方法具体包括以下步骤:
步骤S101、接收第一信息,所述第一信息与代表用户的虚拟形象相关;
在步骤S101中,所述第一信息包括以下至少一个,从多个预设基础合成脸孔中指定所需的基础合成脸孔的选择信息,或者一或多与虚拟形象有关的属性信息。所述第一信息所包括的一或多与虚拟形象有关的属性信息,包括以下至少一个:性别、年龄、种族、职业。
在一个实施例中,虽然用户无法现实地遍历所有他或她可以选择的预设基础合成脸孔,用户可以通过如图5的界面逐个选择脸部器官或者局部脸孔(统称为脸孔部件)。通过部位按钮选择脸孔部件的类型(如眉毛、眼睛、鼻子、嘴巴、脸型),然后再从展示该类型不同的脸孔部件的列表中选择特定脸孔部件(如图5中的选眼睛,其它类型器官不变)。用户未选的脸部器官类型,可以使用预先设定的默认器官。上述各类型脸孔部件的选择信息集合,作为基础合成脸孔选择信息,即上述第一信息。
在另一个实施例中,根据指令,结合脸部的生理结构选择待生成的基础合成脸孔。其中,所述指令用于描述所要选择的虚拟形象属性筛选条件,如关于性别、年龄、种族、职业的筛选条件,也可以是其它筛选条件,譬如眼睛大小值。所述指令可由用户输入,也可采用一预先设定的默认指令。若所述指令中仅仅给出了整个脸孔中的一部分脸孔部件的筛选条件,则在本步骤中还可根据人脸的生理结构自动补齐其余部分的筛选条件,这些筛选条件可以是预先设定的属性默认值。上述属性筛选条件指令,作为与虚拟形象有关属性信息,即上述第一信息。
在另一个实施例中,上述用于指定基础合成脸孔的选择信息,和上述用于筛选虚拟形象的属性信息,并存于上述第一信息之中,譬如以筛选方式补充选择信息之不足。
在一个优选实施例中(“为好友制作和分享带表情虚拟形象”),用户可以为其他用户制作能做表情的虚拟形象,并且将该虚拟形象发送到社交网络上作为分享;第一信息为,该虚拟形象,代表被制作虚拟形象的用户。为好友制作和分享带表情虚拟形象,作为一个完整的实施例,将另外单独说明。
步骤S102、基于第一信息从一个可供组合生成不同虚拟形象的脸部器官或局部脸孔的集合中获取一个或多个脸部器官或局部脸孔。
所述脸部器官或局部脸孔具有关联的元数据。所述与脸部器官或局部脸孔关联的元数据包括以下至少一个:表情信息、身份信息、视角信息、位置信息。所述获取的脸部 器官或局部脸孔基于其相关联的元数据。
在一个实施例中,所述可供组合生成不同虚拟形象来代表多个用户的脸部器官或局部脸孔的集合,其一种实现方式为虚拟形象器官数据库。上述步骤S101实施例中所选择各类型脸孔部件的部件选项可以来自于该器官数据库并带标识信息,该标识信息作为上述选择信息的一部分,使得本步骤能够基于该标识信息获取对应的脸孔部件。虚拟形象器官数据库包括所述脸孔部件的图像和元数据,该元数据包括表情信息、身份信息、视角信息、位置信息等。上述步骤S101实施例中与虚拟形象有关的属性信息(即筛选条件),可以与器官数据库的属性信息元数据进行匹配,获取满足筛选条件的元数据以及对应的虚拟形象脸孔部件。
在另一个实施例中,步骤S102还包括对所获取的所有脸部器官或局部脸孔的信息做合理性与合法性校验,譬如不能没有鼻子,不能裸体等。
步骤S103、基于上述脸部器官或局部脸孔,生成代表用户的虚拟形象基础合成脸孔;
其中,在将脸部器官或局部脸孔的形状信息组合起来时,可依据脸部器官或局部脸孔的位置信息。
在一个实施例中,以位置信息中所包含的对位点、缩放尺度、以及旋转角度相关信息为例,在将多个脸孔部件的图像进行组合时,首先将缩放尺度和旋转角度归一化,然后逐个将尚未定位的图像按照其对位点的位置把该对位点与已定位的图像的相同类型对位点对齐(对位点可以有不同类型的),直到所有的图像都完成定位。
在将脸孔部件的图像组合起来时还需要考虑待组合的脸孔部件间的图层遮挡关系。以眼睛和脸型为例,在组合时,眼睛图层应当在脸型图层上面(假设上面的图层遮挡下面的图层),然后按照正确的图层遮挡顺序显示合成脸孔。
步骤S104、接收第二信息,所述第二信息关于虚拟形象所欲显示的指定表情;
所述第二信息包括但不限于以下信息中的一个或多个:用户输入,指定其虚拟形象所欲显示的表情;一个或多个与虚拟形象所处应用环境相关的参数;来自社交网络发送的可以作用于虚拟形象的信息。
上述三种第二信息并不互斥,一个具体第二信息可能既来自用户输入,亦通过社交网络发送,亦是反映应用环境相关的参数。
在一个实施例中,用户输入所指定的表情(上述第二信息)能够与上述虚拟形象器官数据库里的表情信息元数据相匹配,从而筛选出显示该表情的脸孔部件。上述用户输 入可以是通过下拉菜单选择描述了某一具体表情的文字标签,如“开心”、“伤心”等,也可以是通过点击选择某一具体表情的图样标识,如
Figure PCTCN2016080036-appb-000001
等,也可以是通过键盘输入情感符或其它字符串如“:-)”、“:-(”等,或者其它用户交互方式。
在一个优选实施例中(参见图10中的“虚拟形象表情仪”),用户通过滑动指向器或划过触控屏来选择一序列表情标识,其中满足触发条件的表情成为所述第二信息,该用户虚拟形象所欲显示的表情。上述触发条件可以是,停止指向器或触控屏操作时(如松开鼠标mouse-up、手指离开触控屏)指向器或手指位于某个表情标识;也可以是,指向器或手指停留在某个表情标识足够长时间;也可以是其它用户交互,如双击鼠标或触控手势等。在另一个实施例中,上述表情序列可以触发多个待显示的表情,按顺序排列显示,如“笑-哭-笑-哭-笑”,其含义为“啼笑皆非”(between tears and laughter)。
在一个优选实施例中(参见图14中的“虚拟形象表情私信”),发私信的用户以下拉菜单或其它方式选择表情,作为第二信息,由收私信的用户接收该第二信息。在一个实施例中,发私信和收私信的用户均为社交网络用户,上述第二信息来自社交网络。在一个实施例中,上述第二信息为应用环境相关的参数,其含义为,有带此虚拟形象表情的私信可供阅读。虚拟形象表情私信,作为一个完整的实施例,将另外单独说明。
在另一个实施例中,在展示虚拟形象的应用里,环境参数信息在一定时候也能反映虚拟形象所欲显示的表情,例如在一社交网络游戏里,根据游戏的发展,用户自己或好友的虚拟形象都可以显示出“兴奋”、“沮丧”等表情;上述第二信息为表情参数,作用于,代表某一个用户的虚拟形象。
在另一个优选实施例中(“自动表情策略”),进一步的,基于上述环境参数信息,将满足触发条件的表情设定为用户自己或其他用户的虚拟形象表情。作为一种优选实现方式,为用户提供一套默认的表情策略如表情触发条件列表,和一个用户可自定义表情策略的界面。自动表情策略,作为一个完整的实施例,将另外单独说明。
在另一个实施例中,来自社交网络的信息也能反映用户自己或好友的虚拟形象所欲显示的表情,例如,当好友登录该社交网路,该好友和用户自己的虚拟形象均可显示能反映二人之间关系的表情,如“微笑”、“微笑加挥手”、“鄙视加掉头”等;上述第二信息为来自社交网络的虚拟形象表情信息。
步骤S105、基于第二信息从上述集合获取一个或多个脸部器官或局部脸孔;
在一个实施例中,所述脸部器官或局部脸孔的获取方式类似于步骤S102所述方式,该方式获取基础合成脸孔的脸孔部件,而本步骤将第二信息即表情,作为脸孔部件表情信息元数据筛选条件。在一个优选实施例中,基于步骤S102所获取的脸孔部件,获取 用以替代的脸孔部件,两者之间元数据身份信息和视角信息相同或者相匹配,用以替代的脸孔部件元数据表情信息符合上述欲显示的表情。
在一个实施例中,当符合上述筛选条件的脸孔部件集合还需要进一步的筛选,可以添加其它元数据属性信息筛选条件如类别信息、颜色信息、年龄信息、种族信息、职业信息等。
步骤S106、基于基础合成脸孔和上述脸部器官或局部脸孔,为虚拟形象生成显示上述指定表情的合成脸孔。
在一个实施例中,基于之前获取的脸孔部件重新生成一个新的,显示指定表情的合成脸孔,其方式与步骤S103基本一致,而不是在之前已生成的合成脸孔基础上“替换”某些脸孔部件数据。
在另一个实施例中,也可以通过替换脸孔部件,来生成显示上述指定表情的合成脸孔。在实现脸孔部件替换时,需要从已有合成脸孔里去除将被替代的脸孔部件,然后插入用以替代的脸孔部件,该操作用到所述脸孔部件的位置信息与形状信息,可能需要基于位置信息中的缩放尺度和旋转角度对脸孔部件图像先进行归一化。此脸孔部件替换操作亦可能会改变合成脸孔中的各个脸孔部件遮挡关系,因此在替换后可能还需要调整合成脸孔中各个脸孔部件的图层关系,确保正确展示。
在另一个实施例中,脸孔部件的图像和相关信息亦可以通过计算,改变该脸孔部件的形状、颜色、肤色等属性,使虚拟形象的合成脸孔显示应用或者用户所需要的效果;一个实例是“睁大眼睛”,眼睛图像的宽度不变但高度乘以一个倍率(如1.2倍);另一个实例是“涂唇膏”,改变图像中嘴唇的颜色;另一个实例是,通过图像的仿射变换(affine transformation)改变其透视视角。
以上完成为虚拟形象生成指定表情方法的详细说明。现在将描述几种该方法的衍生应用,和几种与虚拟形象表情相关,但不依赖于上述步骤S101-S106的应用。
上述步骤S106生成的合成脸孔,可以存储为显示指定表情的虚拟形象合成脸孔,之后没有完整的虚拟形象器官数据库依然能展示。一种方式是将所述虚拟形象带表情合成脸孔渲染并存储为图片,所述图片为静态图片或带有表情动画的动态图片。另外一种方式是将所述虚拟形象带表情合成脸孔存储为能够重现该虚拟形象表情的数据信息,所述数据信息可以包括虚拟形象所用各脸孔部件的标识信息集合,如字符串“1001,2001,3001,4001,5001,6001;表情:1”;之后,解析这份数据并获取其指向的脸孔部件图片以及相关位置信息后,通过步骤S103所描述的方法可以把上述图片合成,将所述数 据信息重现为带有表情的虚拟形象。
人的表情除了可以通过面部表现外,在某些情况下,也可通过肢体动作表现出来。因此,在前述多个实施例的基础上所实现的另一个实施例中,本发明的带有表情的虚拟形象实现方法还包括生成带有表情的虚拟形象肢体动作。该方法所基于的虚拟形象器官数据库还包括有肢体器官的信息,所述肢体器官的信息包括形状信息、位置信息、类别信息、表情信息、身份信息;还可包括颜色信息、年龄信息、性别信息、种族信息、职业信息等。该方法在实现时,首先生成一个包括脸孔与肢体的基础合成虚拟形象;然后选择特定表情;接着根据所选择的特定表情,从虚拟形象器官数据库中选择与该表情所对应的脸孔部件,替代所述合成虚拟形象的对应脸孔部件,从虚拟形象器官数据库中选择与该表情对应的肢体器官,替代所述合成虚拟形象的对应肢体器官,使得最终得到的虚拟形象符合之前所选择的特定表情。
本发明的为虚拟形象生成指定表情的方法具有广泛的应用前景。例如,不同用户之间可以发送虚拟形象表情私信。在一个优选实施例中,参见图14,发私信的用户A以下拉菜单选择表情(“兴奋”),作为步骤S104的第二信息,然后将私信发给用户B。当用户B打开此私信时,首先接收第一信息即用户A的虚拟形象描述信息,通过步骤S101-S103首先看到尚未做表情的用户A虚拟形象;用户B接收上述第二信息,通过步骤S104-106看到用户A虚拟形象作出第二信息所指定的表情(“兴奋”)。
在另一个实施例中,采用本发明的为虚拟形象生成指定表情的方法,用户可以创造和分享代表好友或名人的虚拟形象。例如,用户C选择器官部件组合为代表好友或名人D的虚拟形象,并且将该虚拟形象,作为第一信息,发送到社交网络上,由用户E接收该第一信息,用户E可以是,但不一定是D;第二信息关于该虚拟形象所欲显示的表情,可以由上述用户C或用户E指定,也可以由系统指定;通过步骤S101-S106,显示给用户E,代表用户D的带该指定表情的虚拟形象。
本发明又提供了一种实现于至少一个含处理器、存储器、通讯平台接口的计算装置,控制用户虚拟形象的表情的方法,所述虚拟形象是自由组合生成的虚拟形象,其显示表情的实现方式可以是,但不一定是,通过步骤S101-S106;该方法,基于一个或多个与虚拟形象所处应用环境相关的参数,将满足触发条件的表情设定为用户虚拟形象的表情。
在一个实施例中,带表情的自由组合虚拟形象的表情状态与环境参数相关联,环境 参数改变可以触发虚拟形象表情改变。环境参数和虚拟形象表情的关联策略是可设置的;例如,一个用户将他自己的表情策略设置为,在游戏中得到一张坏牌的时候,他的虚拟形象显示“悲哀”的表情;而另一个用户将她自己的表情策略设置为,在游戏中得到一张坏牌的时候,她的虚拟形象显示“愤怒”的表情。在设置多种环境参数与虚拟形象的表情关联策略之后,带表情的虚拟形象就成为一个能够反映环境改变,按用户性格而改变表情的,代表用户的机器人。在另一个实施例中,用户亦可以设置他所看到的其他用户的虚拟形象表情策略,譬如使他看到,其他用户的虚拟形象在游戏中得到坏牌的时候总显示“大哭”的表情。
作为一种优选实现方式,为用户提供一套默认的表情策略如表情触发条件列表,和一个用户可自定义表情策略的界面。更详细的,通过设置不同的触发条件和其触发器,生成或调整表情策略;在表情策略的作用下,当虚拟形象所处环境达成特定的触发条件后,其触发器使用户的虚拟形象显示该触发条件所指定的表情。在一个实施例中,还可以根据用户的应用或游戏操作或其它行为自动显示关联的表情,该表情包括但不限于面部表情、手势、姿势、语气词、声音、动画播放、发送带表情的消息。上述控制用户虚拟形象的表情的方法作用于自由组合生成的虚拟形象,其显示表情的实现方式可以是,但不一定是,通过步骤S101-S106。
本发明还提供了一种表情仪,该表情仪实现于至少一个含处理器、存储器、通讯平台接口的计算装置,其中至少一个装置还包含指向器或触控屏,供用户控制其自由组合虚拟形象的表情的方法,所述虚拟形象是自由组合生成的虚拟形象,其显示表情的实现方式可以是,但不一定是,通过步骤S101-S106。该方法包括以下步骤:
步骤1)接收指定表情序列,所述表情序列为一个表情,或用户通过滑动指向器或者划过触控屏来选择的多个表情;
步骤2)将上述表情序列中满足触发条件的表情设定为用户虚拟形象的表情。
图10为前述表情仪的示意图。在一个实施例中,如图所示,该表情仪的中心部分用于动态展示用户的带表情的虚拟形象,在中心部分的周围包括有表情标识,所述表情标识为用户提供了一个人机接口,用户可通过对表情标识进行操作来输入表情(点击表情标识或在表情标识上滑过都可视为通过表情标识输入表情,其中,滑过表情标识会调用表情仪中的虚拟形象表情预览模块,点击表情标识会调用表情仪中的虚拟形象表情切换模块,通过连续滑过表情标识可以产生表情序列),然后,表情仪根据本发明的步骤S101-S106,或者根据其它显示表情的实现方式,在表情仪的中心部分展示带该表情的上述自由组合虚拟形象。在一个实施例中,除了上述功能外,该表情仪还可根据用户所 输入的表情输出相应的语气词,如与“开心”这一表情所对应的语气词“呵呵”,该语气词以文字或以声音播放来实现。此外,表情仪中还可集成应用或游戏操作等其他用户交互功能。
在一个优选表情仪实施例中,用户通过滑动指向器或划过触控屏来选择一序列表情标识,其中满足触发条件的表情成为该用户虚拟形象所显示的表情。举例,上述触发条件可以是,停止指向器或触控屏操作时(如松开鼠标mouse-up、手指离开触控屏)指向器或手指位于某个表情标识;也可以是,指向器或手指停留在某个表情标识足够长时间;也可以是其它用户交互,如双击鼠标或触控手势等。在另一个实施例中,上述表情序列可以触发多个待显示的表情,按顺序排列显示,如“笑-哭-笑-哭-笑”,其含义为“啼笑皆非”(between tears and laughter)。上述表情仪作用于自由组合生成的虚拟形象,其显示表情的实现方式可以是,但不一定是,通过步骤S101-S106。
本发明还提供了一种实现于至少一个含处理器、存储器、通讯平台接口的计算装置,以虚拟形象来查找用户的方法,所述虚拟形象是自由组合生成的虚拟形象,包括:
步骤1、接收指定虚拟形象信息
步骤2、返回至少一个用户信息,其虚拟形象部件元数据与指定虚拟形象部件元数据满足匹配条件。
由于采用前述为虚拟形象生成指定表情的方法所生成的虚拟形象与用户之间存在对应关系,因此,也可反过来通过已知的虚拟形象查找用户。在一个实施例中,用户可以用虚拟形象来查找其他用户,譬如寻找朋友,或者长得像某位明星,或着符合自己心中期望的人。此实施例为用户群建立一个反向索引,从虚拟形象器官部件能获取使用该器官部件作为虚拟形象的用户列表。因为虚拟形象的身份描述是一个虚拟形象器官部件的组合,将目标虚拟形象各个部件的元数据与虚拟形象器官数据库中的元数据做匹配后,将数据库内满足匹配条件的器官部件的用户列表一并做交集处理,即可找到符合该虚拟形象条件的用户。
在之前的实施例中,所述虚拟形象器官数据库均为已有的数据库。在另一个实施例中,不具有现成的虚拟形象器官数据库,或者需要扩充已有的虚拟形象器官数据库,因此本发明的带有表情的虚拟形象实现方法在步骤S101之前还包括有建立或扩充虚拟形象器官数据库的步骤。
三维虚拟形象与二维虚拟形象的不同处理方式,主要在于,3D脸型模型按照器官标准轮廓挖空,各个3D器官模型符合该类型器官标准轮廓,确保能够植入到3D脸型 模型中,并且能够与同类型3D器官模型互换(参见图7和图8)。二维虚拟形象的视角信息被三维器官标准轮廓信息以替代;位置信息所描述的是3D模型,其对位点信息与三维空间方位相关,确保3D器官模型的正确植入或替换;表情信息、身份信息、各种属性信息等则与二维没有基本上的不同。
下面,以二维虚拟形象为例,详细说明建立或扩充虚拟形象器官数据库的方法。参考图9,所述建立或扩充虚拟形象器官数据库的步骤包括:
步骤S401、获取带有表情的虚拟形象原始图像,并标注视角信息、身份信息、表情信息、位置信息和类别信息;
在本实施例中,所获取的虚拟形象原始图像为美工人员在计算装置上通过制图软件按照规范绘制而成的电子格式图像文件。在其它实施例中亦可以通过摄像或摄影模特来捕捉虚拟形象原始图像,并以任何文件格式保存。
为了能够让同类型脸部器官,但不同虚拟形象、不同表情的图像能够替换着使用,美工人员绘制虚拟形象原始图像时需要保证图像脸部视角和缩放比例是一致的。作为一种优选实现方式,如图1所示,不同表情的不同虚拟形象均通过一个带有鼻子-嘴直线、左眼-右眼直线的特定透视十字方盒,来确定其脸型轮廓和各个脸部器官的视角、大小和位置,从而确定整个虚拟形象脸孔的视角和缩放比例。绘制这些图片时(相对于美工人员)固定并且统一虚拟形象模特的脸部朝向,和基于模特人脸大小调整距离,从而保证其透视效果与上述特定透视十字方盒一致,所绘制虚拟形象的视角和器官大小、位置均符合特定标准,才能使来自不同虚拟形象或者显示不同表情的同类型器官能够替换着使用,供组合生成不同但合理的脸孔。上述方法亦可用于摄像或摄影,捕捉模特人脸表情时确定其透视效果与特定透视十字方盒一致,从而保证脸部器官的可替换性,供组合生成不同但合理的脸孔。
由本步骤获取的各个虚拟形象原始图像,应当标注其类别信息,即其包含的所有器官类型;应当标注其视角信息,譬如生成图像所用特定透视十字方盒的描述信息;应当标注其位置信息,譬如以上述透视十字的交叉点作为对位点,标注该对位点在图像里或者图像边界之外的位置(通常,该对位点跟图像左上角的距离)。如果需要,位置信息还可以包含缩放尺度、旋转角度等信息。如果存在一个默认位置,符合该默认位置的原始图像可以不带位置信息。如果存在一个默认视角,符合该默认视角的原始图像可以不带视角信息。如果存在一个默认类别,符合该默认类别的原始图像可以不带类别信息。
作为一种优选实现方式,在本实施例中,所获取的虚拟形象原始图像为一个或一组能够对齐的多图层文件,头发、眉、眼、鼻、口、耳等脸部器官以及皮肤、手臂、大腿、躯干等肢体器官位于不同的图层上,所有图层一并展示时各个器官之间的相对方位正 确。每一图层的相关数据包含有位于该图层的脸部器官或肢体器官的形状信息,以及类别信息(如果有默认类别信息,譬如某个图层默认为某个对应器官类型,可以省略),以及该图层相对于全图像的偏移距离(如果不为零)。每一图层可被独立地处理。
本步骤中所获取的原始图像集合可以有多个虚拟形象,各个虚拟形象的原始图像应当标注该虚拟形象的身份信息以作区分;不同的虚拟形象可以有各自的多个表情,如微笑、沮丧、哭泣、害羞、吃惊、悲哀等,各个带有表情的原始图像应当标注其表情信息以作区分。虚拟形象的这些表情与眼、眉、鼻、口等脸部器官的形状、在脸孔中所在的位置有关,也与手臂、大腿等肢体器官的姿态、动作有关(如兴奋时,手臂的姿态或动作能够反映这一表情)。本步骤在获取虚拟形象原始图像时应当采集尽可能多样的表情,以利于后续的带有表情的组合虚拟形象的生成。
步骤S402、对步骤S401获取的虚拟形象原始图像进行加工,从中提取出脸部器官或局部脸孔的图形文件,并标注视角信息、身份信息、表情信息、位置信息和类别信息。
在本实施例中,从上述多图层文件的虚拟形象原始图像,提取个别图层作为脸部器官或局部脸孔(统称为脸孔部件)的图形文件,并标注其视角信息(为该原始图像视角信息)、身份信息(为该原始图像身份信息)、表情信息(为该原始图像表情信息)、位置信息(为该原始图像位置信息,按该图层的偏移距离做调整)、和类别信息(为该图层的类别信息)。存在默认值的信息可以省略。
在其它实施例中,未分器官图层的原始图像需要经过处理来提取个别脸孔部件,此处理可以通过脸孔部件模板和器官轮廓形状识别来实现,譬如识别鼻子然后将鼻子之外的区域裁切或变为空白。
作为一种可选实现方式,从虚拟形象原始图像中提取脸部器官或局部脸孔(统称为脸孔部件)的图形文件时,考虑到脸孔部件在整个虚拟形象原始图像中只占其中的一部分,因此在提取时裁切(或以其它方式去除)周围的空白部分,只保留该脸孔部件外接矩形大小的图片作为图形文件,并依据上述裁切信息计算并标注对位点类型和该对位点在图片里或者图片边界之外的位置,作为位置信息,供正确展示该图片。如果需要,位置信息还可以包含缩放尺度、旋转角度等信息,根据上述裁切信息计算。作为一种可选实现方式,当所采集的虚拟形象原始图像为多图层文件时,即头发、眉、眼、鼻、口、耳等脸部器官位于不同的图层上,将头发、眉、眼、鼻、口、耳等人脸器官各自所在的图层从虚拟形象原始图像分别截取出来,然后参照上述方法去除周围的空白部分。
作为一种可选实现方式,所提取的脸孔部件图形文件并不单独生成或保存,而是以其它方式指向图像资源,譬如给出标识信息指定图像文件、其中图层信息、再其中裁切 信息。
所述局部脸孔是若干个脸部器官的组合,因此局部脸孔图形文件的提取与脸部器官图形文件的提取没有本质上的区别。所得到的一个或多个脸孔部件图形文件包括有所述脸孔部件的形状信息、类别信息(一个或多个器官类型)、身份信息(一个或多个虚拟形象身份)等,此外,还应当包括有一个用于区分的标识信息。
步骤S403、为各个类别的脸部器官或局部脸孔图形文件做归一化处理,其并更新其位置信息。
以下作为一种实现方式,将提取得到的脸孔部件按器官类型分别处理;局部脸孔的类型为其组成器官类型的集合。举例,将“嘴”的图形文件归为一类统一处理。作为一种优选方式,可拿同一类别的所有脸孔部件图片的最小共同外接矩形作为归一化后的该类别脸孔部件图片尺寸,生成便于替换的同类别脸孔部件图形文件。在做归一化处理时,应当确保图片的位置信息被正确地应用、计算和标注。进一步的,在另外的实例中也可以对同一类别脸孔部件的不同表情图形文件再分别做优化处理,为个别类型和表情获得可能更小的外接矩形。
作为一种可选实现方式,归一化后的脸孔部件图形文件并不单独生成或保存,而是以其它方式指向图像资源,譬如给出标识信息指定图像文件、其中图层信息、再其中裁切信息。举例,多个同类别的脸孔部件图形文件合并为一个文件。
步骤S404、为各个类别中的脸部器官或局部脸孔调整信息或标注额外信息,包括形状信息、类别信息、表情信息、身份信息、视角信息等。
作为一种优选方式,某些带表情的脸孔部件图形文件可以用来展示多种表情,譬如鼻子通常并不因为做表情而有巨大改变;为此类图形文件可以做出特别标注,或者省略其表情信息。
可选的,在本步骤中还可包括对脸孔部件图形文件标注一个或多个属性信息,包括但不限于标识信息、颜色信息、年龄信息、性别信息、种族信息、职业信息。
在一个实施例中,脸孔部件的信息可以通过调整,改变该脸孔部件的应用范围,譬如调整表情信息,从而允许身份信息或其它属性信息符合条件的脸孔部件用于显示步骤S402-S403所标注的表情信息之外的表情;上述条件的一个实例是脸部肌肉组织受过损伤所以表情显示异常的特殊身份;另一个实例是能做出或希望能做出特异脸孔的演员或用户身份。
在另一个实施例中,脸孔部件的图像和相关信息亦可以通过计算,改变该脸孔部件的形状、颜色、肤色等属性;一个实例是“睁大眼睛”,眼睛图像的宽度不变但高度乘以一个倍率(如1.2倍);另一个实例是“涂唇膏”,调整嘴唇的颜色;另一个实例是,通过图像的仿射变换(affine transformation)改变其透视视角。
步骤S405、存储之前所得到的脸部器官或局部脸孔的信息,生成虚拟形象器官数据库。
在一个实施例中,为一个脸部器官或局部脸孔在器官数据库中所存储的信息包括图像(即形状信息)和元数据,该元数据可以包括位置信息、表情信息、身份信息、视角信息、类别信息、标识信息、颜色信息、年龄信息、性别信息、种族信息、职业信息等。作为一种优选实现方式,上述信息在存储时可将脸孔部件的形状信息(即图形文件)单独存储在一个数据集合中,将脸孔部件的其他信息存储在另一个数据集合中,两个数据集合中的数据之间可通过标识信息进行关联。这样做的优点是易于做数据查询、索引、更新、传输与存储。
以上完成建立器官数据库方法的详细说明。在已存在的器官数据库基础上做扩充,可以使用基本一样的步骤,获取符合特定表情信息、身份信息、视角信息等要求的虚拟形象原始图像,加工并提取脸孔部件的图形文件,可选的归一化处理,标注相关信息,存储信息并生成扩充后的器官数据库。
使用时,当类别信息或其它属性信息满足由应用设定的查询或筛选条件,这些脸孔部件图像可以为应用提供可以组合为虚拟形象的候选部件。当身份信息相同或者相匹配,当二维图像视角信息或三维图像轮廓信息相同或者相匹配,当表情信息符合应用所需要的表情,这些脸孔部件图像可以用来组合生成应用所需要的带有表情的虚拟形象。
在一个实施例中,美工人员按照步骤S401绘制相同视角的6个男性以及6个女性虚拟形象的脸孔原始图像,这12个虚拟形象的眉毛、眼睛、鼻子、嘴、和脸型互相都不同,其中仅眉毛、眼睛、嘴对于不同的表情有不同的形状,每个虚拟形象各有18种表情,总共216个带有表情的脸孔原始图像。男性和女性的器官不能互换,但男性虚拟形象之间、女性虚拟形象之间的同类型器官都可以互换,因为视角保持一致。然后,按照剩余步骤生成器官数据库。不考虑其它虚拟形象外观属性譬如发型、肤色等,此数据库能够组合生成2×6×6×6×6×6=15552个不同的男、女组合虚拟形象身份,其中每个组合虚拟形象身份的5种器官分别来自于同性别的6个原始虚拟形象,并且每个组合虚拟形象身份各有18种表情。上述虚拟形象器官数据库,供社交网络用户从中选择代 表自己的虚拟形象并且使其虚拟形象能够显示此18种表情。
与上述方法相对应的,本发明还提供了一种实现于至少一个计算装置,所述计算装置至少包含一个处理器、存储器和通讯平台接口,为自由组合虚拟形象生成指定表情的系统,所述虚拟形象包括但不限于二维或三维的人物、动物、卡通造型、或者抽象情感符脸孔,该系统所获取的脸部器官或局部脸孔来自于一个可供组合生成不同虚拟形象的集合,该系统包括:
第一信息接收模块,用于接收第一信息,所述第一信息与代表用户的虚拟形象相关;
第一数据获取模块,基于第一信息从上述集合获取一个或多个脸部器官或局部脸孔;
基础合成脸孔生成模块,基于上述脸部器官或局部脸孔,生成代表用户的虚拟形象基础合成脸孔;
第二信息接收模块,用于接收第二信息,所述第二信息关于虚拟形象所欲显示的指定表情;
第二数据获取模块,基于第二信息从上述集合获取一个或多个脸部器官或局部脸孔;
带表情合成脸孔生成模块,基于基础合成脸孔和所述第二数据获取模块所得到的脸部器官或局部脸孔,为虚拟形象生成显示上述指定表情的合成脸孔。
本发明还提供了一种实现于至少一个计算装置,所述计算装置至少包含一个处理器、存储器,建立或扩充为自由组合虚拟形象生成表情的虚拟形象器官数据库的系统,包括:
原始图像获取模块,用于获取带有表情的虚拟形象或者部分虚拟形象的原始图像集合,为所述原始图像标识其表情和所属虚拟形象,所述原始图像采用的格式包括但不限于一或多包括位图格式、矢量图格式在内的图形描述格式,或三维模型格式;
图像与元数据生成模块,基于所述原始图像生成脸部器官或局部脸孔的图像和元数据,元数据包括表情信息。
本发明还提供了一种实现于至少一个计算装置,所述计算装置至少包含一个处理器、存储器,建立或扩充为自由组合虚拟形象生成表情的虚拟形象器官数据库的系统,包括:
原始图像获取模块,获取带有表情的虚拟形象原始图像,并标注视角信息、身份信 息、表情信息、位置信息和类别信息;
部件提取模块,对虚拟的虚拟形象原始图像进行加工,从中提取出脸部器官或局部脸孔的图形文件,并标注视角信息、身份信息、表情信息、位置信息和类别信息;
归一化处理模块,为各个类别的脸部器官或局部脸孔图形文件做归一化处理,并更新其位置信息;
调整或标注信息模块,为各个类别中的脸部器官或局部脸孔调整或标注额外信息,包括形状信息、类别信息、表情信息、身份信息、视角信息等;
图像与元数据生成模块,存储之前所得到的脸部器官或局部脸孔的信息,生成虚拟形象器官数据库。
本发明又提供了一种实现于至少一个含处理器、存储器、通讯平台接口的计算装置,控制用户虚拟形象的表情的系统,所述虚拟形象是自由组合生成的虚拟形象,
表情设定模块,基于一个或多个与虚拟形象所处应用环境相关的参数,将满足触发条件的表情设定为用户虚拟形象的表情。
本发明还提供了一种实现于至少一个含处理器、存储器、通讯平台接口的计算装置,其中至少一个装置还包含指向器或触控屏,控制用户虚拟形象的表情的系统,所述虚拟形象是自由组合生成的虚拟形象,该系统包括:
表情接收模块,用于接收指定表情序列,所述表情序列为一个表情,或用户通过滑动指向器或者划过触控屏来选择的多个表情;
表情设定模块,用于将上述表情序列中满足触发条件的表情设定为用户虚拟形象的表情。
本发明还提供了一种实现于至少一个含处理器、存储器、通讯平台接口的计算装置,以虚拟形象来查找用户的系统,所述虚拟形象是自由组合生成的虚拟形象,该系统包括:
虚拟形象接收模块,接收指定虚拟形象信息;
虚拟形象查找模块,返回至少一个用户信息,其虚拟形象部件元数据与指定虚拟形象部件元数据满足匹配条件。
本发明具有广泛的适应性,既可以如图11所示的那样将虚拟形象生成系统(即前文中提到的为自由组合虚拟形象生成指定表情的系统)作为社交数据服务器或游戏服务器的后端系统,也可以如图12所示的那样将虚拟形象生成系统作为独立的服务提供者, 还可以如图13所示的那样将虚拟形象生成系统与虚拟形象器官创建系统各自作为独立的服务提供者。
最后,应当注意,此处描述的各种技术可以结合硬件或软件,或在适当时以两者的组合来实现。因此,当前所公开的主题的方法、计算机可读介质、以及系统或其特定方面或部分可采取包含在诸如软盘、CD-ROM、硬盘驱动器或任何其它机器可读存储介质等有形介质中的程序代码(即,指令)的形式,其中当程序代码被加载到诸如计算机等机器内并由其执行时,该机器成为用于实现本主题的装置。
在程序代码在可编程计算机上执行的情况下,计算设备通常可以包括处理器、该处理器可读的存储介质(包括易失性和非易失性的存储器和/或存储元件)、至少一个输入设备、以及至少一个输出设备。可例如通过使用数据处理API等来利用本发明的域专用编程模型的创建和/或实现的各方面的一个或多个程序较佳地用高级过程语言或面向对象的编程语言来实现以与计算机系统通信。如果需要,该程序可以用汇编语言或机器语言来实现。在任何情形中,语言可以是编译语言或解释语言,且与硬件实现相结合。
最后所应说明的是,以上实施例仅用以说明本发明的技术方案而非限制。尽管参照实施例对本发明进行了详细说明,本领域的普通技术人员应当理解,对本发明的技术方案进行修改或者等同替换,都不脱离本发明技术方案的精神和范围,其均应涵盖在本发明的权利要求范围当中。

Claims (48)

  1. 一种实现于至少一个计算装置,所述计算装置至少包含一个处理器、存储器和通讯平台接口,为虚拟形象生成指定表情的方法,所述虚拟形象包括但不限于二维或三维的人物、动物、卡通造型、或者抽象情感符脸孔,该方法所获取的脸部器官或局部脸孔来自于一个供组合生成不同虚拟形象来代表多个用户的集合,该方法包括以下步骤:
    步骤1、接收第一信息,所述第一信息与代表用户的虚拟形象相关;
    步骤2、基于第一信息从上述集合获取一个或多个脸部器官或局部脸孔;
    步骤3、基于上述脸部器官或局部脸孔,生成代表用户的虚拟形象基础合成脸孔;
    步骤4、接收第二信息,所述第二信息关于虚拟形象所欲显示的指定表情;
    步骤5、基于第二信息从上述集合获取一个或多个脸部器官或局部脸孔;
    步骤6、基于基础合成脸孔和步骤5所得到的脸部器官或局部脸孔,为虚拟形象生成显示上述指定表情的合成脸孔。
  2. 根据权利要求1所述为虚拟形象生成指定表情的方法,其特征在于,所述第一信息包括以下至少一个,从多个预设基础合成脸孔中指定所需的基础合成脸孔的选择信息,或者一个或多个与虚拟形象有关的属性信息。
  3. 根据权利要求2所述为虚拟形象生成指定表情的方法,其特征在于,所述第一信息所包括的一个或多个与虚拟形象有关的属性信息,包括以下至少一个,性别、年龄、种族、职业。
  4. 根据权利要求1所述为虚拟形象生成指定表情的方法,其特征在于,所述获取的脸部器官或局部脸孔具有关联的元数据。
  5. 根据权利要求4所述为虚拟形象生成指定表情的方法,其特征在于,所述与脸部器官或局部脸孔关联的元数据包括以下至少一个:表情信息、身份信息、视角信息、轮廓信息、位置信息。
  6. 根据权利要求4所述为虚拟形象生成指定表情的方法,其特征在于,所述获取 的脸部器官或局部脸孔,基于其相关联的元数据。
  7. 根据权利要求1所述为虚拟形象生成指定表情的方法,其特征在于,所述第二信息包括以下至少一个:
    用户输入,指定其虚拟形象所欲显示的表情;
    一个或多个与虚拟形象所处应用环境相关的参数;
    来自社交网络发送的能够作用于虚拟形象的信息。
  8. 根据权利要求1所述为虚拟形象生成指定表情的方法,其特征在于,还包括存储能显示指定表情的自由组合虚拟形象或带表情合成脸孔,该存储信息供其它应用使用。
  9. 根据权利要求8所述为虚拟形象生成指定表情的方法,其特征在于,将所述虚拟形象带表情合成脸孔渲染并存储为图片,所述图片为静态图片或带有表情动画的动态图片。
  10. 根据权利要求8所述为虚拟形象生成指定表情的方法,其特征在于,将所述虚拟形象带表情合成脸孔存储为能够重现该虚拟形象的数据信息,所述数据信息包括虚拟形象所用脸部器官或局部脸孔的标识信息。
  11. 根据权利要求1所述为虚拟形象生成指定表情的方法,其特征在于,所述虚拟形象还包括肢体器官,所述虚拟形象的肢体动作符合所述指定表情。
  12. 根据权利要求1所述为虚拟形象生成指定表情的方法,其特征在于,所述集合为虚拟形象器官数据库。
  13. 根据权利要求12所述为虚拟形象生成指定表情的方法,其特征在于,还包括建立或扩充虚拟形象器官数据库的步骤。
  14. 根据权利要求1所述为虚拟形象生成指定表情的方法,其特征在于,用户在给其他用户发送的信息中包括能做表情的虚拟形象。
  15. 根据权利要求1所述为虚拟形象生成指定表情的方法,其特征在于,用户为其他用户制作能做表情的虚拟形象,并且将该虚拟形象发送到社交网络上作为分享。
  16. 根据权利要求1所述为虚拟形象生成指定表情的方法,其特征在于,在步骤4中所接收的第二信息为一表情序列信息,在步骤6中为虚拟形象生成显示该表情序列中表情的合成脸孔。
  17. 一种实现于至少一个计算装置,所述计算装置至少包含一个处理器和存储器,建立或扩充带表情的虚拟形象器官数据库的方法,该数据库供组合生成不同虚拟形象来代表多个用户,该方法步骤包括:
    步骤1、获取带有表情的虚拟形象或者部分虚拟形象的原始图像集合,为所述原始图像标识其表情和所属虚拟形象,所述原始图像采用的格式包括但不限于一或多包括位图格式、矢量图格式在内的图形描述格式,或三维模型格式;
    步骤2、基于所述原始图像生成脸部器官或局部脸孔的图像和元数据,元数据包括表情信息。
  18. 根据权利要求17所述建立或扩充虚拟形象器官数据库的方法,其特征在于,所述脸部器官或局部脸孔的元数据具有身份信息,所述身份信息基于其组成部分原始图像所属虚拟形象。
  19. 根据权利要求17所述建立或扩充虚拟形象器官数据库的方法,其特征在于,所述脸部器官或局部脸孔元数据包括以下至少一个:
    二维图像视角信息,或者赋予元数据不含视角信息的所述二维脸部器官或局部脸孔一个默认视角信息;
    三维图像轮廓信息,或者赋予元数据不含轮廓信息的所述三维脸部器官或局部脸孔一个默认轮廓信息。
  20. 根据权利要求17所述建立或扩充虚拟形象器官数据库的方法,其特征在于,所述脸部器官或局部脸孔元数据还包括位置信息,所述位置信息用于该脸部器官或局部脸孔在整个虚拟形象中的定位。
  21. 根据权利要求17所述建立或扩充虚拟形象器官数据库的方法,其特征在于,还包括为所述脸部器官或局部脸孔标注至少一个属性信息,所述属性信息包括但不限于标识信息、类别信息、颜色信息、年龄信息、性别信息、种族信息、职业信息。
  22. 一种实现于至少一个含处理器、存储器、通讯平台接口的计算装置,控制用户 虚拟形象的表情的方法,所述虚拟形象是自由组合生成的虚拟形象,其特征在于:
    基于一个或多个与虚拟形象所处应用环境相关的参数,将满足触发条件的表情设定为用户虚拟形象的表情。
  23. 一种实现于至少一个含处理器、存储器、通讯平台接口的计算装置,其中至少一个装置还包含指向器或触控屏,控制用户虚拟形象的表情的方法,所述虚拟形象是自由组合生成的虚拟形象,包括:
    步骤1、接收指定表情序列,所述表情序列为一个表情,或用户通过滑动指向器或者划过触控屏来选择的多个表情;
    步骤2、将上述表情序列中满足触发条件的表情设定为用户虚拟形象的表情。
  24. 一种实现于至少一个含处理器、存储器、通讯平台接口的计算装置,以虚拟形象来查找用户的方法,所述虚拟形象是自由组合生成的虚拟形象,包括:
    步骤1、接收指定虚拟形象信息;
    步骤2、返回至少一个用户信息,其虚拟形象部件元数据与指定虚拟形象部件元数据满足匹配条件。
  25. 一种实现于至少一个计算装置,所述计算装置至少包含一个处理器、存储器和通讯平台接口,为虚拟形象生成指定表情的系统,所述虚拟形象包括但不限于二维或三维的人物、动物、卡通造型、或者抽象情感符脸孔,该系统所获取的脸部器官或局部脸孔来自于一个供组合生成不同虚拟形象来代表多个用户的集合,该系统包括:
    第一信息接收模块,用于接收第一信息,所述第一信息与代表用户的虚拟形象相关;
    第一数据获取模块,基于第一信息从上述集合获取一个或多个脸部器官或局部脸孔;
    基础合成脸孔生成模块,基于上述脸部器官或局部脸孔,生成代表用户的虚拟形象基础合成脸孔;
    第二信息接收模块,用于接收第二信息,所述第二信息关于虚拟形象所欲显示的指定表情;
    第二数据获取模块,基于第二信息从上述集合获取一个或多个脸部器官或局部脸孔;
    带表情合成脸孔生成模块,基于基础合成脸孔和所述第二数据获取模块所得到的脸部器官或局部脸孔,为虚拟形象生成显示上述指定表情的合成脸孔。
  26. 根据权利要求25所述的为虚拟形象生成指定表情的系统,其特征在于,所述第一信息包括以下至少一个,从多个预设基础合成脸孔中指定所需的基础合成脸孔的选择信息,或者一个或多个与虚拟形象有关的属性信息。
  27. 根据权利要求26所述的为虚拟形象生成指定表情的系统,其特征在于,所述第一信息所包括的一个或多个与虚拟形象有关的属性信息,包括以下至少一个,性别、年龄、种族、职业。
  28. 根据权利要求25所述的为虚拟形象生成指定表情的系统,其特征在于,所述获取的脸部器官或局部脸孔具有关联的元数据。
  29. 根据权利要求28所述的为虚拟形象生成指定表情的系统,其特征在于,所述与脸部器官或局部脸孔关联的元数据包括以下至少一个:表情信息、身份信息、视角信息、轮廓信息、位置信息。
  30. 根据权利要求28所述的为虚拟形象生成指定表情的系统,其特征在于,所述获取的脸部器官或局部脸孔基于其相关联的元数据。
  31. 根据权利要求25所述的为虚拟形象生成指定表情的系统,其特征在于,所述第二信息包括以下至少一个:
    用户输入,指定其虚拟形象所欲显示的表情;
    一个或多个与虚拟形象所处应用环境相关的参数;
    来自社交网络发送的能够作用于虚拟形象的信息。
  32. 根据权利要求25所述的为虚拟形象生成指定表情的系统,其特征在于,还包括一存储模块,用于存储能显示指定表情的自由组合虚拟形象或带表情合成脸孔,其它应用使用该存储信息时不需要所述供组合生成不同虚拟形象来代表多个用户的集合。
  33. 根据权利要求32所述的为虚拟形象生成指定表情的系统,其特征在于,所述 存储模块将所述虚拟形象带表情合成脸孔渲染并存储为图片,所述图片为静态图片或带有表情动画的动态图片。
  34. 根据权利要求32所述的为虚拟形象生成指定表情的系统,其特征在于,所述存储模块将所述虚拟形象带表情合成脸孔存储为能够重现该虚拟形象的数据信息,所述数据信息包括虚拟形象所用脸部器官或局部脸孔的标识信息。
  35. 根据权利要求25所述的为虚拟形象生成指定表情的系统,其特征在于,所述虚拟形象还包括肢体器官,所述虚拟形象的肢体动作符合所述指定表情。
  36. 根据权利要求25所述的为虚拟形象生成指定表情的系统,其特征在于,所述集合为虚拟形象器官数据库。
  37. 根据权利要求36所述的为虚拟形象生成指定表情的系统,其特征在于,还包括建立或扩充虚拟形象器官数据库的模块。
  38. 根据权利要求25所述的为虚拟形象生成指定表情的系统,其特征在于,用户在给其他用户发送的信息中包括能做表情的虚拟形象。
  39. 根据权利要求25所述的为虚拟形象生成指定表情的系统,其特征在于,用户为其他用户制作能做表情的虚拟形象,并且将该虚拟形象发送到社交网络上作为分享。
  40. 根据权利要求25所述的为虚拟形象生成指定表情的系统,其特征在于,第二信息接收模块所接收的第二信息为一表情序列信息,带表情合成脸孔生成模块为虚拟形象生成显示该表情序列中表情的合成脸孔。
  41. 一种实现于至少一个计算装置,所述计算装置至少包含一个处理器和存储器,建立或扩充带表情的虚拟形象器官数据库的系统,该数据库供组合生成不同虚拟形象来代表多个用户,该系统包括:
    原始图像获取模块,用于获取带有表情的虚拟形象或者部分虚拟形象的原始图像集合,为所述原始图像标识其表情和所属虚拟形象,所述原始图像采用的格式包括但不限于一或多包括位图格式、矢量图格式在内的图形描述格式,或三维模型格式;
    图像与元数据生成模块,基于所述原始图像生成脸部器官或局部脸孔的图像和元数据,元数据包括表情信息。
  42. 根据权利要求41所述的建立或扩充虚拟形象器官数据库的系统,其特征在于,所述脸部器官或局部脸孔的元数据具有身份信息,所述身份信息基于其组成部分原始图像所属虚拟形象。
  43. 根据权利要求41所述的建立或扩充虚拟形象器官数据库的系统,其特征在于,所述脸部器官或局部脸孔元数据包括以下至少一个:
    二维图像视角信息,或者赋予元数据不含视角信息的所述二维脸部器官或局部脸孔一个默认视角信息;
    三维图像轮廓信息,或者赋予元数据不含轮廓信息的所述三维脸部器官或局部脸孔一个默认轮廓信息。
  44. 根据权利要求41所述的建立或扩充虚拟形象器官数据库的系统,其特征在于,所述脸部器官或局部脸孔元数据还包括位置信息,所述位置信息用于该脸部器官或局部脸孔在整个虚拟形象中的定位。
  45. 根据权利要求41所述的建立或扩充虚拟形象器官数据库的系统,其特征在于,还包括为所述脸部器官或局部脸孔标注至少一个属性信息,所述属性信息包括但不限于标识信息、类别信息、颜色信息、年龄信息、性别信息、种族信息、职业信息。
  46. 一种实现于至少一个含处理器、存储器、通讯平台接口的计算装置,控制用户虚拟形象的表情的系统,所述虚拟形象是自由组合生成的虚拟形象,其特征在于,该系统包括:
    表情设定模块,基于一个或多个与虚拟形象所处应用环境相关的参数,将满足触发条件的表情设定为用户虚拟形象的表情。
  47. 一种实现于至少一个含处理器、存储器、通讯平台接口的计算装置,其中至少一个装置还包含指向器或触控屏,控制用户虚拟形象的表情的系统,所述虚拟形象是自由组合生成的虚拟形象,该系统包括:
    表情接收模块,用于接收指定表情序列,所述表情序列为一个表情,或用户通过滑动指向器或者划过触控屏来选择的多个表情;
    表情设定模块,用于将上述表情序列中满足触发条件的表情设定为用户虚拟形象的表情。
  48. 一种实现于至少一个含处理器、存储器、通讯平台接口的计算装置,以虚拟形象来查找用户的系统,所述虚拟形象是自由组合生成的虚拟形象,该系统包括:
    虚拟形象接收模块,接收指定虚拟形象信息;
    虚拟形象查找模块,返回至少一个用户信息,其虚拟形象部件元数据与指定虚拟形象部件元数据满足匹配条件。
PCT/CN2016/080036 2015-05-06 2016-04-22 为自由组合创作的虚拟形象生成及使用表情的方法和系统 WO2016177290A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510227900.4A CN106204698A (zh) 2015-05-06 2015-05-06 为自由组合创作的虚拟形象生成及使用表情的方法和系统
CN201510227900.4 2015-05-06

Publications (1)

Publication Number Publication Date
WO2016177290A1 true WO2016177290A1 (zh) 2016-11-10

Family

ID=57218058

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/080036 WO2016177290A1 (zh) 2015-05-06 2016-04-22 为自由组合创作的虚拟形象生成及使用表情的方法和系统

Country Status (2)

Country Link
CN (1) CN106204698A (zh)
WO (1) WO2016177290A1 (zh)

Families Citing this family (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106485774B (zh) * 2016-12-30 2019-11-15 当家移动绿色互联网技术集团有限公司 基于语音实时驱动人物模型的表情和姿态的方法
CN107146275B (zh) * 2017-03-31 2020-10-27 北京奇艺世纪科技有限公司 一种设置虚拟形象的方法及装置
CN107169872A (zh) * 2017-05-09 2017-09-15 北京龙杯信息技术有限公司 用于生成虚拟礼物的方法、存储设备和终端
CN107272884A (zh) * 2017-05-09 2017-10-20 聂懋远 一种基于虚拟现实技术的控制方法及其控制系统
CN107551549A (zh) * 2017-08-09 2018-01-09 广东欧珀移动通信有限公司 游戏形象调整方法及其装置
CN109410299B (zh) * 2017-08-15 2022-03-11 腾讯科技(深圳)有限公司 一种信息处理方法、装置和计算机存储介质
CN109427083B (zh) * 2017-08-17 2022-02-01 腾讯科技(深圳)有限公司 三维虚拟形象的显示方法、装置、终端及存储介质
CN107527033A (zh) * 2017-08-25 2017-12-29 歌尔科技有限公司 摄像头模组和社交系统
CN109472851A (zh) * 2017-09-06 2019-03-15 蒋铁骐 3d人体虚拟形象的构成方法及使用
CN108171770B (zh) * 2018-01-18 2021-04-06 中科视拓(北京)科技有限公司 一种基于生成式对抗网络的人脸表情编辑方法
CN110135226B (zh) * 2018-02-09 2023-04-07 腾讯科技(深圳)有限公司 表情动画数据处理方法、装置、计算机设备和存储介质
CN110300049B (zh) * 2018-03-23 2022-05-27 阿里巴巴集团控股有限公司 一种基于即时通讯的消息屏蔽方法、设备以及系统
CN108510437B (zh) * 2018-04-04 2022-05-17 科大讯飞股份有限公司 一种虚拟形象生成方法、装置、设备以及可读存储介质
CN108305309B (zh) * 2018-04-13 2021-07-20 腾讯科技(成都)有限公司 基于立体动画的人脸表情生成方法和装置
CN108846881B (zh) * 2018-05-29 2023-05-12 珠海格力电器股份有限公司 一种表情图像的生成方法及装置
CN108854074B (zh) * 2018-06-15 2021-08-24 北京奇虎科技有限公司 电子宠物的配置方法及装置
CN109126136B (zh) * 2018-07-27 2020-09-15 腾讯科技(深圳)有限公司 三维虚拟宠物的生成方法、装置、设备及存储介质
CN109358923B (zh) * 2018-08-29 2024-04-12 华为技术有限公司 一种虚拟机器人形象的呈现方法及装置
CN109345616A (zh) * 2018-08-30 2019-02-15 腾讯科技(深圳)有限公司 三维虚拟宠物的二维渲染图的生成方法、设备及存储介质
CN109353078B (zh) * 2018-10-09 2020-07-28 乐米智拓(北京)科技有限公司 折纸模型生成方法、装置、介质及电子设备
CN110148191B (zh) * 2018-10-18 2023-02-28 腾讯科技(深圳)有限公司 视频虚拟表情生成方法、装置及计算机可读存储介质
CN109523604A (zh) * 2018-11-14 2019-03-26 珠海金山网络游戏科技有限公司 一种虚拟脸型生成方法、装置、电子设备以及存储介质
CN109603151A (zh) 2018-12-13 2019-04-12 腾讯科技(深圳)有限公司 虚拟角色的皮肤显示方法、装置及设备
CN109683784A (zh) * 2018-12-25 2019-04-26 河北微幼趣教育科技有限公司 数据处理方法和装置
CN111383308B (zh) 2018-12-29 2023-06-23 华为技术有限公司 生成动画表情的方法和电子设备
CN109919016B (zh) * 2019-01-28 2020-11-03 武汉恩特拉信息技术有限公司 一种在无脸部器官的对象上生成人脸表情的方法及装置
TWI720438B (zh) * 2019-03-18 2021-03-01 國立勤益科技大學 多國語時裝遊戲系統
CN109922355B (zh) * 2019-03-29 2020-04-17 广州虎牙信息科技有限公司 虚拟形象直播方法、虚拟形象直播装置和电子设备
CN110209283A (zh) * 2019-06-11 2019-09-06 北京小米移动软件有限公司 数据处理方法、装置、系统、电子设备及存储介质
CN113646733A (zh) * 2019-06-27 2021-11-12 苹果公司 辅助表情
CN110717974B (zh) * 2019-09-27 2023-06-09 腾讯数码(天津)有限公司 展示状态信息的控制方法、装置、电子设备和存储介质
CN112785681B (zh) * 2019-11-07 2024-03-08 杭州睿琪软件有限公司 宠物的3d形象生成方法及装置
CN111124231B (zh) * 2019-12-26 2021-02-12 维沃移动通信有限公司 图片生成方法及电子设备
CN113763531B (zh) * 2020-06-05 2023-11-28 北京达佳互联信息技术有限公司 三维人脸重建方法、装置、电子设备及存储介质
CN112800365A (zh) * 2020-09-01 2021-05-14 腾讯科技(深圳)有限公司 表情包的处理方法、装置及智能设备
CN112614212B (zh) * 2020-12-16 2022-05-17 上海交通大学 联合语气词特征的视音频驱动人脸动画实现方法及系统
CN112598785B (zh) * 2020-12-25 2022-03-25 游艺星际(北京)科技有限公司 虚拟形象的三维模型生成方法、装置、设备及存储介质
CN113223121B (zh) * 2021-04-30 2023-10-10 北京达佳互联信息技术有限公司 视频生成方法、装置、电子设备及存储介质
CN113763518A (zh) * 2021-09-09 2021-12-07 北京顺天立安科技有限公司 基于虚拟数字人的多模态无限表情合成方法及装置
CN113838159B (zh) * 2021-09-14 2023-08-04 上海任意门科技有限公司 用于生成卡通图像的方法、计算设备和存储介质
CN114972652B (zh) * 2022-06-14 2023-11-10 深圳市固有色数码技术有限公司 一种虚拟形象建模方法

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20010056965A (ko) * 1999-12-17 2001-07-04 박희완 부분 이미지 합성에 의한 인물 캐릭터 생성 방법
CN1328908C (zh) * 2004-11-15 2007-07-25 北京中星微电子有限公司 一种视频通信的方法
US8111281B2 (en) * 2007-06-29 2012-02-07 Sony Ericsson Mobile Communications Ab Methods and terminals that control avatars during videoconferencing and other communications
JP2009100823A (ja) * 2007-10-19 2009-05-14 Sega Corp ゲーム装置、ゲームプログラム、及びその記憶媒体
CN103207745B (zh) * 2012-01-16 2016-04-13 上海那里信息科技有限公司 虚拟化身交互系统和方法
CN103366782B (zh) * 2012-04-06 2014-09-10 腾讯科技(深圳)有限公司 在虚拟形象上自动播放表情的方法和装置
WO2014153689A1 (en) * 2013-03-29 2014-10-02 Intel Corporation Avatar animation, social networking and touch screen applications
CN103218844B (zh) * 2013-04-03 2016-04-20 腾讯科技(深圳)有限公司 虚拟形象的配置方法、实现方法、客户端、服务器及系统

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101504774A (zh) * 2009-03-06 2009-08-12 暨南大学 一种基于虚拟现实的动漫设计引擎
CN101931621A (zh) * 2010-06-07 2010-12-29 上海那里网络科技有限公司 一种借助虚拟形象进行情感交流的装置和方法
CN102157007A (zh) * 2011-04-11 2011-08-17 北京中星微电子有限公司 一种表演驱动的制作人脸动画的方法和装置
WO2014178044A1 (en) * 2013-04-29 2014-11-06 Ben Atar Shlomi Method and system for providing personal emoticons

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107992822B (zh) * 2017-11-30 2020-04-10 Oppo广东移动通信有限公司 图像处理方法和装置、计算机设备、计算机可读存储介质
CN107992822A (zh) * 2017-11-30 2018-05-04 广东欧珀移动通信有限公司 图像处理方法和装置、计算机设备、计算机可读存储介质
US10824901B2 (en) 2017-11-30 2020-11-03 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing of face sets utilizing an image recognition method
CN110390705A (zh) * 2018-04-16 2019-10-29 北京搜狗科技发展有限公司 一种生成虚拟形象的方法及装置
CN110390705B (zh) * 2018-04-16 2023-11-10 北京搜狗科技发展有限公司 一种生成虚拟形象的方法及装置
CN108989705A (zh) * 2018-08-31 2018-12-11 百度在线网络技术(北京)有限公司 一种虚拟形象的视频制作方法、装置和终端
US11354844B2 (en) 2018-10-26 2022-06-07 Soul Machines Limited Digital character blending and generation system and method
CN109741415B (zh) * 2019-01-02 2023-08-08 中国联合网络通信集团有限公司 图层整理方法、装置及终端设备
CN109741415A (zh) * 2019-01-02 2019-05-10 中国联合网络通信集团有限公司 图层整理方法、装置及终端设备
CN113099150A (zh) * 2020-01-08 2021-07-09 华为技术有限公司 图像处理的方法、设备及系统
CN113099150B (zh) * 2020-01-08 2022-12-02 华为技术有限公司 图像处理的方法、设备及系统
CN111541950A (zh) * 2020-05-07 2020-08-14 腾讯科技(深圳)有限公司 表情的生成方法、装置、电子设备及存储介质
CN111541950B (zh) * 2020-05-07 2023-11-03 腾讯科技(深圳)有限公司 表情的生成方法、装置、电子设备及存储介质
CN113691833B (zh) * 2020-05-18 2023-02-03 北京搜狗科技发展有限公司 虚拟主播换脸方法、装置、电子设备及存储介质
CN113691833A (zh) * 2020-05-18 2021-11-23 北京搜狗科技发展有限公司 虚拟主播换脸方法、装置、电子设备及存储介质
CN113613048A (zh) * 2021-07-30 2021-11-05 武汉微派网络科技有限公司 虚拟形象表情驱动方法和系统

Also Published As

Publication number Publication date
CN106204698A (zh) 2016-12-07

Similar Documents

Publication Publication Date Title
WO2016177290A1 (zh) 为自由组合创作的虚拟形象生成及使用表情的方法和系统
US11615592B2 (en) Side-by-side character animation from realtime 3D body motion capture
US11734894B2 (en) Real-time motion transfer for prosthetic limbs
US11763481B2 (en) Mirror-based augmented reality experience
US11670059B2 (en) Controlling interactive fashion based on body gestures
US11734866B2 (en) Controlling interactive fashion based on voice
WO2022108805A1 (en) Personalized avatar real-time motion capture
WO2022108806A1 (en) Body animation sharing and remixing
US11900506B2 (en) Controlling interactive fashion based on facial expressions
US20230066179A1 (en) Interactive fashion with music ar
US11636662B2 (en) Body normal network light and rendering control
KR20180118669A (ko) 디지털 커뮤니케이션 네트워크에 기반한 지능형 채팅
US20240096040A1 (en) Real-time upper-body garment exchange
US20230196602A1 (en) Real-time garment exchange
US11651572B2 (en) Light and rendering of garments
US20230196712A1 (en) Real-time motion and appearance transfer
US20230316666A1 (en) Pixel depth determination for object
US20230316665A1 (en) Surface normals for pixel-aligned object
WO2023196387A1 (en) Pixel depth determination for object
CN117940962A (zh) 基于面部表情控制交互时尚

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16789284

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16789284

Country of ref document: EP

Kind code of ref document: A1