CN109919016B - Method and device for generating facial expression on object without facial organs - Google Patents

Method and device for generating facial expression on object without facial organs

Info

Publication number
CN109919016B
Authority
CN
China
Prior art keywords
facial
organ
region
model
processed
Prior art date
Legal status
Active
Application number
CN201910081812.6A
Other languages
Chinese (zh)
Other versions
CN109919016A (en)
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee
Wuhan Entela Information Technology Co ltd
Original Assignee
Wuhan Entela Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Entela Information Technology Co ltd filed Critical Wuhan Entela Information Technology Co ltd
Priority to CN201910081812.6A priority Critical patent/CN109919016B/en
Publication of CN109919016A publication Critical patent/CN109919016A/en
Application granted granted Critical
Publication of CN109919016B publication Critical patent/CN109919016B/en

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a method and a device for generating facial expressions on an object without facial organs, wherein the method comprises the following steps: selecting a region to be processed on the surface of a data model representing the object; generating a facial organ model characterizing a facial organ in the region to be processed; and migrating facial expressions to the generated facial organ models. The technical scheme provided by the embodiments of the invention forms facial expressions on a data model representing an object without facial organs, thereby giving such an object an anthropomorphic expression, broadening the application scenarios of facial expression simulation, and enhancing the user experience.

Description

Method and device for generating facial expression on object without facial organs
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a method and a device for generating a facial expression on an object without facial organs.
Background
Facial expression recognition and facial expression migration are now common technologies, and on this basis applications that give expressions to animals, or migrate expressions onto them, have appeared in pursuit of an anthropomorphic effect. What these technologies have in common is that the expression is generated on existing facial features through expression recognition, expression migration, and similar techniques; their application scenarios are therefore limited and the user experience is poor.
Disclosure of Invention
To solve the technical problems of limited application scenarios and poor user experience caused by the need to generate expressions on existing facial features, embodiments of the invention provide a method and a device for generating facial expressions on an object without facial organs.
In one aspect, a method for generating a facial expression on an object without facial organs is provided, comprising:
selecting a region to be processed on the surface of a data model representing an object;
generating a facial organ model characterizing a facial organ in the region to be processed;
migrating facial expressions to the generated facial organ models.
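As an illustration only, the three claimed steps can be read as the following minimal Python sketch; every function and data structure here is a hypothetical stand-in for the components described later in this disclosure, not an API defined by the patent.

```python
import numpy as np

# Hypothetical sketch of the three claimed steps on a toy "data model"
# (a height field standing in for the object's surface). None of these
# names come from the patent; they only illustrate the flow.

def select_region_to_process(surface, size=64):
    """Step 1: select a region on the surface (here: the flattest window)."""
    best, best_var = (0, 0), np.inf
    for i in range(0, surface.shape[0] - size, size):
        for j in range(0, surface.shape[1] - size, size):
            var = surface[i:i + size, j:j + size].var()
            if var < best_var:
                best, best_var = (i, j), var
    return best, size

def generate_facial_organ_model(region, size):
    """Step 2: place organ landmarks (eyes, nose) at canonical positions in the region."""
    i, j = region
    return {"left_eye": (i + size // 3, j + size // 3),
            "right_eye": (i + size // 3, j + 2 * size // 3),
            "nose": (i + 2 * size // 3, j + size // 2)}

def migrate_facial_expression(organ_model, expression_offsets):
    """Step 3: apply expression features as displacements of the organ landmarks."""
    return {name: (p[0] + expression_offsets.get(name, (0, 0))[0],
                   p[1] + expression_offsets.get(name, (0, 0))[1])
            for name, p in organ_model.items()}

surface = np.random.rand(256, 256)              # toy object without facial organs
region, size = select_region_to_process(surface)
organs = generate_facial_organ_model(region, size)
print(migrate_facial_expression(organs, {"left_eye": (0, -2), "right_eye": (0, -2)}))
```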
In certain embodiments, the method further comprises:
acquiring a data model representing the surface shape of an object;
acquiring initial data characteristics of a face organ model representing a face organ to be generated;
the initial data features comprise shape features of facial organs, position features of the facial organs and facial expression features, wherein the shape features of the facial organs represent the three-dimensional shapes of the facial organs, the position features of the facial organs represent the positions of the facial organs, and the facial expression features represent facial expressions.
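For illustration, the initial data features described above could be carried in a structure such as the following hypothetical sketch; the field names and types are assumptions, not definitions from the patent.

```python
from dataclasses import dataclass, field
import numpy as np

# Hypothetical container for the "initial data features" described above;
# field names and types are illustrative assumptions, not patent definitions.

@dataclass
class InitialDataFeatures:
    organ_shape: np.ndarray   # shape feature: the three-dimensional shape of the organ
    organ_position: tuple     # position feature: where the organ sits, e.g. (row, col)
    expression: dict = field(default_factory=dict)  # facial expression feature
```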
In some embodiments, the selecting of a region to be processed on the surface of the data model representing the object includes:
traversing the shapes on the surface of the acquired data model and determining whether a shape conforming to the shape features of the facial organ exists;
if such a shape exists, selecting the region to be processed based on the region where that shape is located;
if no such shape exists, selecting a region to be processed on the surface of the data model;
the generating of a facial organ model characterizing a facial organ in the region to be processed includes:
generating a facial organ model in the region to be processed based on the shape features of the facial organ and the position features of the facial organ, wherein, if a shape conforming to the shape features of the facial organ exists, the corresponding facial organ model is generated in the region where that shape is located.
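A minimal sketch of the shape traversal just described, under the assumption that the surface and the organ's shape feature are represented as height-field arrays; the error measure and threshold are illustrative choices, not taken from the patent.

```python
import numpy as np

# Sketch of the traversal above: slide the organ's shape feature (a small
# height-field template) over the model surface and keep the best-matching
# window if its mean-centred squared error stays under a threshold; otherwise
# fall back to an arbitrary region. Representation and threshold are assumptions.

def find_shape_matching_region(surface, template, max_error=0.05):
    th, tw = template.shape
    best_pos, best_err = (0, 0), np.inf
    for i in range(surface.shape[0] - th):
        for j in range(surface.shape[1] - tw):
            window = surface[i:i + th, j:j + tw]
            # Compare shapes irrespective of absolute height.
            err = np.mean(((window - window.mean()) - (template - template.mean())) ** 2)
            if err < best_err:
                best_pos, best_err = (i, j), err
    if best_err <= max_error:
        return best_pos, True    # a shape conforming to the organ's shape feature exists
    return (0, 0), False         # no match: select a region to be processed anyway
```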
In certain embodiments, the method further comprises:
acquiring a data model representing a surface pattern of an object;
acquiring initial data characteristics of a face organ model representing a face organ to be generated;
the initial data features comprise contour features of facial organs, position features of the facial organs and facial expression features, wherein the contour features of the facial organs represent the plane contours of the facial organs, the position features of the facial organs represent the positions of the facial organs, and the facial expression features represent facial expressions.
In some embodiments, the selecting of a region to be processed on the surface of the data model representing the object includes:
traversing the patterns on the surface of the acquired data model and determining whether a pattern conforming to the contour features of the facial organ exists;
if such a pattern exists, selecting the region to be processed based on the region where that pattern is located;
if no such pattern exists, selecting a region to be processed on the surface of the data model;
the generating of a facial organ model characterizing a facial organ in the region to be processed includes:
generating a facial organ model in the region to be processed based on the contour features of the facial organ and the position features of the facial organ, wherein, if a pattern conforming to the contour features of the facial organ exists, the corresponding facial organ model is generated in the region where that pattern is located.
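A comparable sketch for the two-dimensional case, using OpenCV's Hu-moment contour comparison as one plausible way to test whether a surface pattern conforms to the organ's contour features; the binarization step and the similarity threshold are assumptions.

```python
import cv2

# Sketch of the two-dimensional case: test whether any surface pattern conforms
# to the organ's contour features via OpenCV's Hu-moment comparison (OpenCV 4
# API). The binarization step and the similarity threshold are assumptions.

def find_contour_matching_region(surface_image, organ_contour, threshold=0.15):
    """Return the bounding box of a pattern matching the organ contour, or None."""
    _, binary = cv2.threshold(surface_image, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for cnt in contours:
        # Lower score = more similar planar contours (tolerant to scale/rotation).
        score = cv2.matchShapes(cnt, organ_contour, cv2.CONTOURS_MATCH_I1, 0.0)
        if score < threshold:
            return cv2.boundingRect(cnt)   # region where the matching pattern lies
    return None                            # caller selects an arbitrary region instead
```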
In certain embodiments, the data model additionally characterizes one or more of texture, gloss, and color of the object surface.
In some embodiments, the initial data features further include color features of a facial organ, and the facial organ model is generated based additionally on the color features of the facial organ when the facial organ model is generated for the region to be processed.
In another aspect, an apparatus for generating a facial expression on an object without a facial organ is provided, including:
a region-to-be-processed selection component, configured to select a region to be processed on the surface of a data model representing an object;
a facial organ model generating part for generating a facial organ model representing a facial organ in the region to be processed;
and the facial expression migration component is used for migrating the facial expression to the generated facial organ model.
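The three components could be composed as in the following hypothetical sketch; the class and attribute names are illustrative only, not taken from the patent.

```python
# Hypothetical composition of the three components listed above; class and
# attribute names are illustrative, not taken from the patent.

class ExpressionGenerationDevice:
    def __init__(self, select_region, generate_organ_model, migrate_expression):
        self.select_region = select_region                # region-to-be-processed selection component
        self.generate_organ_model = generate_organ_model  # facial organ model generating component
        self.migrate_expression = migrate_expression      # facial expression migration component

    def run(self, data_model, initial_features, expression):
        region = self.select_region(data_model, initial_features)
        organ_model = self.generate_organ_model(region, initial_features)
        return self.migrate_expression(organ_model, expression)
```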
In certain embodiments, the apparatus further comprises:
a data model acquisition unit configured to acquire a data model representing a surface shape or a pattern of an object;
the initial data feature acquisition component is used for acquiring initial data features of a face organ model representing a face organ to be generated;
the initial data features comprise the shape or contour feature of the facial organ, the position feature of the facial organ, and the facial expression feature, wherein the shape or contour feature of the facial organ represents the three-dimensional shape or plane contour of the facial organ, the position feature of the facial organ represents the position of the facial organ, and the facial expression feature represents a facial expression.
In certain embodiments, the apparatus implements a method as described in any of the preceding embodiments.
In some embodiments, the selecting of the region to be processed on the surface of the data model representing the object by the region-to-be-processed selection component includes:
traversing the shapes on the surface of the acquired data model and determining whether a shape conforming to the shape features of the facial organ exists;
if such a shape exists, selecting the region to be processed based on the region where that shape is located;
if no such shape exists, selecting a region to be processed on the surface of the data model;
the generating, by the facial organ model generating part, of a facial organ model characterizing a facial organ in the region to be processed includes:
generating a facial organ model in the region to be processed based on the shape features of the facial organ and the position features of the facial organ, wherein, if a shape conforming to the shape features of the facial organ exists, the corresponding facial organ model is generated in the region where that shape is located.
In some embodiments, the selecting of the region to be processed on the surface of the data model representing the object by the region-to-be-processed selection component includes:
traversing the patterns on the surface of the acquired data model and determining whether a pattern conforming to the contour features of the facial organ exists;
if such a pattern exists, selecting the region to be processed based on the region where that pattern is located;
if no such pattern exists, selecting a region to be processed on the surface of the data model;
the generating, by the facial organ model generating part, of a facial organ model characterizing a facial organ in the region to be processed includes:
generating a facial organ model in the region to be processed based on the contour features of the facial organ and the position features of the facial organ, wherein, if a pattern conforming to the contour features of the facial organ exists, the corresponding facial organ model is generated in the region where that pattern is located.
In certain embodiments, the data model additionally characterizes one or more of texture, gloss, and color of the object surface.
In some embodiments, the initial data features further include color features of a facial organ, and the facial organ model is generated based additionally on the color features of the facial organ when the facial organ model is generated for the region to be processed.
In the method and device for generating a facial expression on an object without facial organs provided by the embodiments of the invention, a facial organ model representing facial organs is simulated on the data model representing the object without facial organs, and an existing facial expression migration technology is then used to give a facial expression to the simulated facial organ model, so that an anthropomorphic effect is achieved by means of the data model and the facial organ model and human emotion is expressed. The technical scheme provided by the embodiments of the invention thus forms a facial expression on a data model representing an object without facial organs, thereby giving the object an anthropomorphic expression, broadening the application scenarios of facial expression simulation, and enhancing the user experience.
Drawings
FIG. 1 is a flow chart of a method for generating a facial expression on an object without facial organs according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an apparatus for generating a facial expression on an object without a facial organ according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings. Those skilled in the art will appreciate that the present invention is not limited to the drawings and the following examples.
The embodiment of the invention provides a method for generating facial expressions on an object without facial organs, which comprises the following steps:
selecting a region to be processed on the surface of a data model representing an object;
generating a facial organ model characterizing a facial organ in the region to be processed;
migrating facial expressions to the generated facial organ models.
In the embodiments of the present invention, the object is an object without facial organs. The method for generating a facial expression on an object without facial organs provided by the embodiments of the invention simulates a facial organ model representing facial organs on the data model representing the object, and then uses an existing facial expression migration technology to give a facial expression to the simulated facial organ model, so that an anthropomorphic effect is achieved by means of the data model and the facial organ model and human emotion is expressed. The technical scheme provided by the embodiments of the invention thus forms a facial expression on a data model representing an object without facial organs, thereby giving the object an anthropomorphic expression, broadening the application scenarios of facial expression simulation, and enhancing the user experience.
In an embodiment, the object may be an object without a facial organ having a three-dimensional structure (including a regular three-dimensional structure and an irregular three-dimensional structure), and the data model may characterize a shape of a surface of the object.
In another embodiment, the object may be an object without a facial organ having a two-dimensional structure (including a regular two-dimensional structure and an irregular two-dimensional structure), and the data model may characterize a pattern of the object surface.
In an embodiment, the data model may also represent texture of the object surface to represent texture of the object, further enhancing user experience.
In one embodiment, the data model can also represent the gloss of the object surface, enhancing the liveliness of the expression and improving the user's visual perception.
In one embodiment, the data model may also characterize the color of the object surface, improving fidelity and the user's visual perception.
In an embodiment, the surface of the data model may also be rendered with colors according to the needs of the user, for example, coloring the face organ model, or coloring other areas except the face organ, so as to enrich the expressive power of the data model and enhance the user experience.
In one embodiment, the object may be one whose data model has already been built and can be selected from a database.
In another embodiment, the object may be a real object, and an existing modeling technique, such as 3DMAX, SoftImage, Maya, UG, or AutoCAD, is used to construct a data model characterizing the object; the specific details are not repeated here.
In an embodiment, a region to be processed is selected on a surface of a data model representing an object, and the region to be processed may be selected at any position of the surface of the data model.
In another embodiment, when the region to be processed is selected on the surface of the data model representing the object, a region matching a facial organ may be searched for on the surface of the data model and the region to be processed selected on that basis, which improves the matching degree between the subsequently generated facial organ model and the surface of the data model and increases the harmony between the facial expression and the object.
In an embodiment, the surface comprises a plane, and the region to be processed is provided based on a plane of the data model. In an embodiment, the plane comprises all planes with a curvature of less than 2 m⁻¹. In one embodiment, the plane comprises planes whose pixel data amount is 200 × 200 or more.
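One way to implement this plane test is sketched below for a height-field patch, using a standard discrete curvature approximation; the representation (a height field in metres) is an assumption, while the 2 m⁻¹ and 200 × 200 pixel thresholds come from the text.

```python
import numpy as np

# Sketch of the plane test above for a height-field patch z (in metres, with
# pixel_size metres between samples). The discrete curvature formula is a
# standard approximation; only the 2 m^-1 and 200 x 200 thresholds come from
# the text, the rest is an assumption.

def is_usable_plane(z, pixel_size, curvature_limit=2.0, min_pixels=200):
    if z.shape[0] < min_pixels or z.shape[1] < min_pixels:
        return False                         # fewer than 200 x 200 pixels of data
    zy, zx = np.gradient(z, pixel_size)      # first derivatives along each axis
    zyy, _ = np.gradient(zy, pixel_size)
    _, zxx = np.gradient(zx, pixel_size)
    # Curvature of the graph z(x, y) along each axis, combined conservatively.
    kx = np.abs(zxx) / (1.0 + zx ** 2) ** 1.5
    ky = np.abs(zyy) / (1.0 + zy ** 2) ** 1.5
    return float(np.mean(np.maximum(kx, ky))) < curvature_limit
```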
In an embodiment, a facial organ model is generated on the region to be processed; it may be generated based on a human facial organ, an animal facial organ, or the facial organ of an imaginary character. The imaginary character may include, but is not limited to, any character having facial organs, such as a cartoon character.
In one embodiment, the facial organ includes at least eyes, and the facial organ may include one or more of a nose, a mouth, eyebrows, and ears in addition to the eyes. In one embodiment, the facial organ includes eyes and a nose.
In one embodiment, the facial expression may be selected from a database of facial expressions.
In another embodiment, the facial expression may be obtained from a real face using an existing facial expression recognition technology; the specific details are not repeated here.
In the embodiments of the present invention, the migration of facial expressions may adopt an existing facial expression migration technology; the specific details are not repeated here.
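Since the disclosure defers to existing migration techniques, one common realization is a blendshape-style transfer, sketched here as an assumption rather than the method prescribed by the patent.

```python
import numpy as np

# One common realization of expression migration: blendshape-style transfer.
# Expression features arrive as per-expression weights, and the generated organ
# model carries delta shapes that the weights blend. Offered as an assumption;
# the patent only defers to existing migration techniques.

def migrate_expression(neutral_vertices, delta_shapes, weights):
    """neutral_vertices: (N, 3) array; delta_shapes: name -> (N, 3); weights: name -> float."""
    out = neutral_vertices.astype(float).copy()
    for name, w in weights.items():
        out += w * delta_shapes[name]        # add the weighted expression offset
    return out

# e.g. migrate_expression(eye_mesh, {"smile": smile_delta}, {"smile": 0.8})
```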
An embodiment of the present invention provides an apparatus for performing expression simulation on an object without a facial organ, as shown in fig. 2, including:
a region-to-be-processed selection component, configured to select a region to be processed on the surface of a data model representing an object;
a facial organ model generating part for generating a facial organ model representing a facial organ in the region to be processed;
and the facial expression migration component is used for migrating the facial expression to the generated facial organ model.
In the embodiments of the present invention, the object is an object without facial organs. The device for simulating an expression on an object without facial organs provided by the embodiments of the invention simulates a facial organ model representing facial organs on the data model representing the object, and then uses an existing facial expression migration technology to give a facial expression to the simulated facial organ model, so that an anthropomorphic effect is achieved by means of the data model and the facial organ model and human emotion is expressed. The technical scheme provided by the embodiments of the invention thus forms a facial expression on a data model representing an object without facial organs, thereby giving the object an anthropomorphic expression, broadening the application scenarios of facial expression simulation, and enhancing the user experience.
For the device for simulating an expression on an object without facial organs according to the embodiments of the present invention, reference may be made to the description of the corresponding method; it is not repeated here for brevity.
The following exemplarily describes embodiments of the present invention, taking the method for generating a facial expression on an object without facial organs as an example, without limiting the scope of the claims of the present application. From the following description, a person skilled in the art can derive corresponding implementations of the apparatus for generating a facial expression on an object without facial organs; for brevity, the related contents of the apparatus are not repeated.
Example 1:
This embodiment exemplarily describes the method for generating a facial expression on an object without facial organs, taking an object having a three-dimensional structure as an example.
The embodiment provides a method for generating a facial expression on an object without facial organs, which comprises the following steps:
acquiring a data model representing the surface shape of an object;
acquiring initial data characteristics of a facial organ model representing facial organs to be generated, wherein the initial data characteristics comprise shape characteristics of the facial organs, position characteristics of the facial organs and facial expression characteristics, the shape characteristics of the facial organs represent the three-dimensional shapes of the facial organs, the position characteristics of the facial organs represent the positions of the facial organs, and the facial expression characteristics represent facial expressions;
traversing the shape of the surface of the acquired data model, and determining whether a shape which conforms to the shape characteristics of the facial organ exists;
if so, selecting a region to be processed based on the region where the shape conforming to the shape characteristics of the facial organ is located; generating a face organ model in the region to be processed based on the shape feature of the face organ and the position feature of the face organ, wherein the corresponding face organ model is generated in the region where the shape conforming to the shape feature of the face organ is located; migrating the facial expression features to the generated facial organ models;
if no such shape exists, selecting a region to be processed on the surface of the data model; generating a facial organ model in the region to be processed based on the shape features of the facial organ and the position features of the facial organ; and migrating the facial expression features to the generated facial organ model.
In an embodiment, the initial data features further include color features of the facial organ, and when the facial organ model is generated in the region to be processed, the facial organ model is additionally generated based on the color features of the facial organ, so that the expressiveness of the data model is enriched, and the user experience is enhanced.
In the method for generating a facial expression on an object without a facial organ provided by this embodiment, a region matched with the shape of the facial organ is searched on the surface of the data model, and the region to be processed is selected based on the region, so that the shape of the surface of the data model is reasonably utilized, which is beneficial to improving the matching degree of the subsequently generated facial organ model and the surface of the data model, and increasing the harmony between the facial expression and the data model representing the object.
Example 2:
This embodiment exemplarily describes the method for generating a facial expression on an object without facial organs, taking an object having a two-dimensional structure as an example.
The embodiment provides a method for generating a facial expression on an object without facial organs, which comprises the following steps:
acquiring a data model representing a surface pattern of an object;
acquiring initial data characteristics of a facial organ model representing a facial organ to be generated, wherein the initial data characteristics comprise contour characteristics of the facial organ, position characteristics of the facial organ and facial expression characteristics, the contour characteristics of the facial organ represent a plane contour of the facial organ, the position characteristics of the facial organ represent the position of the facial organ, and the facial expression characteristics represent facial expressions;
traversing the patterns on the surface of the acquired data model, and determining whether the patterns which conform to the contour characteristics of the facial organs exist;
if so, selecting a region to be processed based on the region where the pattern which conforms to the contour characteristics of the facial organ is located; generating a face organ model in the region to be processed based on the contour features of the face organ and the position features of the face organ, wherein the corresponding face organ model is generated in the region where the pattern which conforms to the contour features of the face organ is located; migrating the facial expression features to the generated facial organ models;
if no such pattern exists, selecting a region to be processed on the surface of the data model; generating a facial organ model in the region to be processed based on the contour features of the facial organ and the position features of the facial organ; and migrating the facial expression features to the generated facial organ model.
In an embodiment, the initial data features further include color features of the facial organ, and when the facial organ model is generated in the region to be processed, the facial organ model is additionally generated based on the color features of the facial organ, so that the expressiveness of the data model is enriched, and the user experience is enhanced.
In the method for generating a facial expression on an object without a facial organ provided by this embodiment, a region matched with a pattern of the facial organ is searched on the surface of the data model, and the region to be processed is selected based on the region, so that the pattern on the surface of the data model is reasonably utilized, which is beneficial to improving the matching degree between a subsequently generated facial organ model and the surface of the data model, and increasing the harmony between the facial expression and the data model representing the object.
Example 3:
This embodiment exemplarily describes the method for generating a facial expression on an object without facial organs, taking an object having a three-dimensional structure whose surface has a material texture as an example.
The embodiment provides a method for generating a facial expression on an object without facial organs, which comprises the following steps:
acquiring a data model representing the surface shape and the texture of the object;
acquiring initial data characteristics of a facial organ model representing facial organs to be generated, wherein the initial data characteristics comprise shape characteristics of the facial organs, position characteristics of the facial organs and facial expression characteristics, the shape characteristics of the facial organs represent the three-dimensional shapes of the facial organs, the position characteristics of the facial organs represent the positions of the facial organs, and the facial expression characteristics represent facial expressions;
traversing the shape of the surface of the acquired data model, and determining whether a shape which conforms to the shape characteristics of the facial organ exists;
if so, selecting a region to be processed based on the region where the shape conforming to the shape characteristics of the facial organ is located; generating a face organ model in the region to be processed based on the shape feature of the face organ and the position feature of the face organ, wherein the corresponding face organ model is generated in the region where the shape conforming to the shape feature of the face organ is located; migrating the facial expression features to the generated facial organ models;
if no such shape exists, selecting a region to be processed on the surface of the data model; generating a facial organ model in the region to be processed based on the shape features of the facial organ and the position features of the facial organ; and migrating the facial expression features to the generated facial organ model.
In an alternative embodiment, in addition to the shape of the object surface, the data model may also characterize the gloss and/or the color of the object surface.
In an embodiment, the data model may represent the texture and the color of the material of the object surface, or the texture, the gloss, and the color of the material of the object surface, in addition to the shape of the object surface.
According to the method for generating a facial expression on an object without facial organs provided by this embodiment, the material texture and/or the gloss of the object surface are further integrated into the data model on the basis of the shape of the object surface, so that the texture of the object is vividly expressed, the liveliness of the expression is enhanced, the user's visual perception is improved, and the user experience is enhanced.
Example 4:
This embodiment exemplarily describes the method for generating a facial expression on an object without facial organs, taking an object having a two-dimensional structure whose surface has a material texture as an example.
The embodiment provides a method for generating a facial expression on an object without facial organs, which comprises the following steps:
acquiring a data model representing the surface pattern and the texture of the object;
acquiring initial data characteristics of a facial organ model representing a facial organ to be generated, wherein the initial data characteristics comprise contour characteristics of the facial organ, position characteristics of the facial organ and facial expression characteristics, the contour characteristics of the facial organ represent a plane contour of the facial organ, the position characteristics of the facial organ represent the position of the facial organ, and the facial expression characteristics represent facial expressions;
traversing the patterns on the surface of the acquired data model, and determining whether the patterns which conform to the contour characteristics of the facial organs exist;
if so, selecting a region to be processed based on the region where the pattern which conforms to the contour characteristics of the facial organ is located; generating a face organ model in the region to be processed based on the contour features of the face organ and the position features of the face organ, wherein the corresponding face organ model is generated in the region where the pattern which conforms to the contour features of the face organ is located; migrating the facial expression features to the generated facial organ models;
if no such pattern exists, selecting a region to be processed on the surface of the data model; generating a facial organ model in the region to be processed based on the contour features of the facial organ and the position features of the facial organ; and migrating the facial expression features to the generated facial organ model.
In an alternative embodiment, in addition to the pattern of the object surface, the data model may also characterize the gloss and/or the color of the object surface.
In an embodiment, the data model may represent the texture and the color of the material of the object surface, or the texture, the gloss, and the color of the material of the object surface, in addition to the pattern of the object surface.
According to the method for generating a facial expression on an object without facial organs provided by this embodiment, the material texture and/or the gloss of the object surface are further integrated into the data model on the basis of the pattern of the object surface, so that the texture of the object is vividly expressed, the liveliness of the expression is enhanced, the user's visual perception is improved, and the user experience is enhanced.
Example 5:
This embodiment exemplarily describes the method for generating a facial expression on an object without facial organs, taking the case where the facial organs include eyes and a nose as an example.
The embodiment provides a method for generating a facial expression on an object without facial organs, which comprises the following steps:
traversing the surface of the data model representing the object, searching for a shape or pattern matching the eyes or nose;
if a shape or pattern matching the eyes is found, for example two small planes of similar size each covering more than 30 × 30 pixels, or two leaves of similar size each covering more than 30 × 30 pixels, selecting the region to be processed based on the region where the matching shape or pattern is located, generating an eye model in that region, and generating a nose model in the corresponding relative position according to the position of the eye model; and generating, as necessary, other facial organ models (e.g., one or more of a mouth, ears, and eyebrows) in their relative positions based on the eye model and the nose model;
if a shape or pattern matching the nose is found, for example a protruding corner of an irregularly shaped cube whose shape matches the nose, or a protruding ridge on an ellipsoid whose shape matches the nose, selecting the region to be processed based on the region where the matching shape or pattern is located, generating a nose model in that region, and generating an eye model in the corresponding relative position according to the position of the nose; and generating, as necessary, other facial organ models (e.g., one or more of a mouth, ears, and eyebrows) in their relative positions based on the nose model and the eye model;
if shapes or patterns matching both the eyes and the nose are found, for example two flower leaves of similar size each covering more than 30 × 30 pixels with a thorn on the branch below them whose shape matches the nose, selecting the region to be processed based on the regions where the matching shapes or patterns are located, generating an eye model in the region matching the eyes and a nose model in the region matching the nose; and generating, as necessary, other facial organ models (e.g., one or more of a mouth, ears, and eyebrows) in their relative positions based on the eye model and the nose model;
and migrating the facial expression features to the generated facial organ models: for example, migrating the eye-related features among the facial expression features to the generated eye model and the nose-related features to the generated nose model, and, if a mouth model has also been generated, migrating the mouth-related features to the generated mouth model.
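As an illustrative sketch of the eye search in this example, the following assumes candidate regions (small planes or leaves) have already been marked in a binary mask; blob labelling, the 30 × 30 pixel minimum from the example, and a distance threshold then suffice to propose an eye pair and a relative nose position. All names and thresholds beyond the 30 × 30 figure are assumptions.

```python
import numpy as np
from scipy import ndimage

# Hypothetical sketch of the eye search in this example: find two candidate
# blobs of comparable size (each at least 30 x 30 = 900 pixels) lying close
# together in a binary mask of candidate patches, then place the nose below
# the midpoint of the pair. Only the 30 x 30 figure comes from the example.

def find_eye_pair_and_nose(candidate_mask, min_area=30 * 30, max_dist=120):
    labels, n = ndimage.label(candidate_mask)
    blobs = [ndimage.center_of_mass(candidate_mask, labels, i)
             for i in range(1, n + 1)
             if (labels == i).sum() >= min_area]
    for a in range(len(blobs)):
        for b in range(a + 1, len(blobs)):
            ca, cb = blobs[a], blobs[b]
            if np.hypot(ca[0] - cb[0], ca[1] - cb[1]) <= max_dist:
                eyes = (ca, cb)
                # Nose sits below the midpoint of the eyes, in relative position.
                nose = ((ca[0] + cb[0]) / 2 + max_dist / 2, (ca[1] + cb[1]) / 2)
                return eyes, nose
    return None   # no eye pair found: fall back to the nose-first search
```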
In one embodiment, the data model may characterize one or more of texture, gloss, and color of the object surface in addition to the shape or pattern of the object surface.
In an embodiment, the method further comprises coloring the generated face organ model, which may be before or after the migration step, so as to enrich the expressiveness of the data model and enhance the user experience.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program for executing the foregoing method.
An embodiment of the present invention further provides a computer device, which includes a processor and the above computer-readable storage medium operatively connected to the processor, where the processor executes a computer program in the computer-readable storage medium.
Those of skill in the art will understand that the logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be viewed as implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the description of the present specification, the term "include" and its variants are to be understood as open-ended terms meaning "include, but are not limited to". The term "based on" may be understood as "based at least in part on". Reference to the description of the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The embodiments of the present invention have been described above. However, the present invention is not limited to the above embodiment. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. A method of generating a facial expression on an object without facial organs, comprising:
selecting a region to be processed on the surface of a data model representing an object;
generating a facial organ model characterizing a facial organ in the region to be processed;
migrating facial expressions to the generated facial organ models;
the method further comprises the following steps:
acquiring a data model representing the surface shape of an object;
acquiring initial data characteristics of a face organ model representing a face organ to be generated;
the initial data characteristics comprise shape characteristics of facial organs, position characteristics of the facial organs and facial expression characteristics, wherein the shape characteristics of the facial organs represent the three-dimensional shapes of the facial organs, the position characteristics of the facial organs represent the positions of the facial organs, and the facial expression characteristics represent facial expressions; wherein the selecting of a region to be processed on the surface of the data model representing the object comprises:
traversing the shape of the surface of the acquired data model, and determining whether a shape which conforms to the shape characteristics of the facial organ exists;
if so, selecting a region to be processed based on the region where the shape conforming to the shape characteristics of the facial organ is located;
if no such shape exists, selecting a region to be processed on the surface of the data model;
the generation of a facial organ model characterizing facial organs in the region to be processed comprises:
generating a face organ model in the region to be processed based on the shape feature of the face organ and the position feature of the face organ, wherein if a shape conforming to the shape feature of the face organ exists, a corresponding face organ model is generated in a region where the shape conforming to the shape feature of the face organ exists;
alternatively,
the method further comprises the following steps:
acquiring a data model representing a surface pattern of an object;
acquiring initial data characteristics of a face organ model representing a face organ to be generated;
the initial data features comprise contour features of facial organs, position features of the facial organs and facial expression features, wherein the contour features of the facial organs represent plane contours of the facial organs, the position features of the facial organs represent positions of the facial organs, and the facial expression features represent facial expressions; wherein the selecting of a region to be processed on the surface of the data model representing the object comprises:
traversing the patterns on the surface of the acquired data model, and determining whether the patterns which conform to the contour characteristics of the facial organs exist;
if so, selecting a region to be processed based on the region where the pattern which conforms to the contour characteristics of the facial organ is located;
if no such pattern exists, selecting a region to be processed on the surface of the data model;
the generation of a facial organ model characterizing facial organs in the region to be processed comprises:
and generating a face organ model in the region to be processed based on the contour features of the face organ and the position features of the face organ, wherein if a pattern which conforms to the contour features of the face organ exists, a corresponding face organ model is generated in a region where the pattern which conforms to the contour features of the face organ is located.
2. The method of claim 1, wherein selecting the region to be processed on the surface of the data model characterizing the object comprises: and searching a region matched with the facial organ on the surface of the data model, and selecting the region to be processed on the basis of the region.
3. The method of claim 1, wherein the surface comprises a plane, and the area to be processed is provided based on the plane of the data model.
4. The method of claim 3, wherein the plane comprises all planes with a curvature of less than 2 m⁻¹.
5. The method of claim 1, wherein generating a facial organ model characterizing a facial organ in the region to be processed comprises: the face organ model is generated based on a human face organ, an animal face organ, or an imaginary character face organ.
6. The method of claim 1, wherein the data model additionally characterizes one or more of texture, gloss, and color of the object surface.
7. The method of claim 1, wherein the initial data features further comprise color features of the facial organ, and wherein the face organ model is generated based additionally on the color features of the facial organ when the facial organ model is generated in the region to be processed.
8. An apparatus for generating a facial expression on an object without facial organs, implementing the method of any of claims 1 to 7, comprising:
a region-to-be-processed selection component, configured to select a region to be processed on the surface of a data model representing an object;
a facial organ model generating part for generating a facial organ model representing a facial organ in the region to be processed;
a facial expression migration section for migrating a facial expression to the generated facial organ model;
the device further comprises:
a data model acquisition unit configured to acquire a data model representing a surface shape or a pattern of an object;
the initial data feature acquisition component is used for acquiring initial data features of a face organ model representing a face organ to be generated;
the initial data features comprise the shape or contour feature of the facial organ, the position feature of the facial organ, and the facial expression feature, wherein the shape or contour feature of the facial organ represents the three-dimensional shape or plane contour of the facial organ, the position feature of the facial organ represents the position of the facial organ, and the facial expression feature represents a facial expression.
CN201910081812.6A 2019-01-28 2019-01-28 Method and device for generating facial expression on object without facial organs Active CN109919016B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910081812.6A CN109919016B (en) 2019-01-28 2019-01-28 Method and device for generating facial expression on object without facial organs

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910081812.6A CN109919016B (en) 2019-01-28 2019-01-28 Method and device for generating facial expression on object without facial organs

Publications (2)

Publication Number Publication Date
CN109919016A CN109919016A (en) 2019-06-21
CN109919016B true CN109919016B (en) 2020-11-03

Family

ID=66961028

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910081812.6A Active CN109919016B (en) 2019-01-28 2019-01-28 Method and device for generating facial expression on object without facial organs

Country Status (1)

Country Link
CN (1) CN109919016B (en)


Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4569670B2 (en) * 2008-06-11 2010-10-27 ソニー株式会社 Image processing apparatus, image processing method, and program
JP5966657B2 (en) * 2012-06-22 2016-08-10 カシオ計算機株式会社 Image generating apparatus, image generating method, and program
CN104157001A (en) * 2014-08-08 2014-11-19 中科创达软件股份有限公司 Method and device for drawing head caricature
CN105374055B (en) * 2014-08-20 2018-07-03 腾讯科技(深圳)有限公司 Image processing method and device
KR102146398B1 (en) * 2015-07-14 2020-08-20 삼성전자주식회사 Three dimensional content producing apparatus and three dimensional content producing method thereof
CN108171789B (en) * 2017-12-21 2022-01-18 迈吉客科技(北京)有限公司 Virtual image generation method and system

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104205171A (en) * 2012-04-09 2014-12-10 英特尔公司 System and method for avatar generation, rendering and animation
WO2015057733A1 (en) * 2013-10-14 2015-04-23 Fuhu, Inc. Widgetized avatar and a method and system of creating and using same
CN106104633A (en) * 2014-03-19 2016-11-09 英特尔公司 Facial expression and/or the mutual incarnation apparatus and method driving
CN106204698A (en) * 2015-05-06 2016-12-07 北京蓝犀时空科技有限公司 Virtual image for independent assortment creation generates and uses the method and system of expression
WO2017003031A1 (en) * 2015-06-29 2017-01-05 김영자 Method for providing lifelike avatar emoticon-based ultralight data animation creation system, and terminal device providing lifelike avatar emoticon for implementing same
CN106599817A (en) * 2016-12-07 2017-04-26 腾讯科技(深圳)有限公司 Face replacement method and device
WO2019005182A1 (en) * 2017-06-30 2019-01-03 Tobii Ab Systems and methods for displaying images in a virtual world environment
CN109173263A (en) * 2018-08-31 2019-01-11 腾讯科技(深圳)有限公司 A kind of image processing method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Data-driven portrait cartoon and expression animation generation technology (数据驱动的人像卡通与表情动画生成技术); Yu Jiajun; China Master's Theses Full-text Database, Information Science and Technology; 2017-02-15; full text *

Also Published As

Publication number Publication date
CN109919016A (en) 2019-06-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant