CN113658307A - Image processing method and device


Info

Publication number: CN113658307A
Application number: CN202110970276.2A
Authority: CN (China)
Prior art keywords: bone, target, face image, skeleton, image
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 郭昊, 李想, 赵慧斌
Applicant and current assignee: Beijing Baidu Netcom Science and Technology Co Ltd


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings


Abstract

The disclosure provides an image processing method and device, relating to the field of image processing and in particular to computer vision. The scheme is as follows: a first face image to be processed is displayed, the first face image corresponding to a plurality of bones; a deformation operation input on the first face image is received; at least one target bone corresponding to the deformation operation is determined among the plurality of bones, and a target position is determined for each target bone according to the deformation operation; the first face image is then updated according to the target positions to obtain a second face image, which is displayed. Because the target positions of the target bones are determined from the deformation operation on the first face image, and the virtual character is then driven to present the corresponding second face image according to those target positions, the style of the virtual character can be custom-designed according to the user's operation, effectively improving the flexibility of control over the virtual character.

Description

Image processing method and device
Technical Field
The present disclosure relates to the field of computer vision in image processing, and in particular, to an image processing method and apparatus.
Background
With the continuous development of image processing technology, digital virtual characters are widely used, and there now exist virtual characters that express expressions synchronously with a user's facial expressions.
In the prior art, a virtual character that follows a user's facial expression is typically implemented by pre-configuring multiple blend shape (BS) bases, capturing the user's facial expression, matching the corresponding BS bases, and controlling the virtual character to express the corresponding expression through a combination of those bases.
However, BS bases cannot satisfy the user's need to custom-design the virtual character, so control of the virtual character lacks flexibility.
Disclosure of Invention
The present disclosure provides an image processing method and apparatus.
According to a first aspect of the present disclosure, there is provided an image processing method including:
displaying a first facial image to be processed, wherein the first facial image corresponds to a plurality of bones;
receiving a deformation operation input on the first face image;
determining at least one target bone corresponding to the deformation operation in the plurality of bones, and determining a target position corresponding to each target bone according to the deformation operation;
and updating the first facial image to obtain a second facial image according to the target position corresponding to each target bone, and displaying the second facial image.
According to a second aspect of the present disclosure, there is provided an image processing apparatus comprising:
the display module is used for displaying a first face image to be processed, and the first face image corresponds to a plurality of bones;
the receiving module is used for receiving deformation operation input on the first face image;
a determining module, configured to determine at least one target bone corresponding to the deformation operation from among the multiple bones, and determine a target position corresponding to each target bone according to the deformation operation;
and the processing module is used for updating the first face image to obtain a second face image according to the target position corresponding to each target bone and displaying the second face image.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product, comprising: a computer program stored in a readable storage medium, from which at least one processor of an electronic device can read the computer program; execution of the computer program by the at least one processor causes the electronic device to perform the method of the first aspect.
The technique according to the present disclosure solves the problem of lack of flexibility in the control of virtual characters.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a schematic diagram of an implementation of a virtual role provided in an embodiment of the present disclosure;
fig. 2 is a flowchart of an image processing method provided by an embodiment of the present disclosure;
fig. 3 is a second flowchart of an image processing method provided in the embodiment of the present disclosure;
FIG. 4 is a schematic diagram of an implementation of displaying a first face image according to an embodiment of the disclosure;
fig. 5 is a schematic diagram illustrating implementation of face key points according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of an implementation of a skeletal system provided by an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of one possible implementation of a deformation operation provided by an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of another possible implementation of the deformation operation provided by the embodiments of the present disclosure;
FIG. 9 is a schematic diagram of an implementation of moving a bone in response to a deformation operation provided by an embodiment of the present disclosure;
fig. 10 is a third flowchart of an image processing method provided in the embodiment of the present disclosure;
fig. 11 is a fourth flowchart of an image processing method provided by the embodiment of the present disclosure;
fig. 12 is a schematic diagram illustrating an implementation of synchronously expressing expressions by virtual characters according to an embodiment of the present disclosure;
fig. 13 is a first schematic diagram illustrating an implementation of the eye-closing effect of a virtual character according to the embodiment of the disclosure;
fig. 14 is a second schematic diagram illustrating an implementation of the eye-closing effect of a virtual character according to the embodiment of the disclosure;
fig. 15 is a first schematic diagram illustrating an implementation of the mouth-opening effect of a virtual character provided by the embodiment of the disclosure;
fig. 16 is a second schematic diagram illustrating an implementation of the mouth-opening effect of a virtual character provided by the embodiment of the present disclosure;
fig. 17 is a fifth flowchart of an image processing method provided by the embodiment of the present disclosure;
FIG. 18 is a schematic diagram of a face-pinching implementation performed while the virtual character synchronously expresses expressions;
FIG. 19 is a model of a realistic style provided by embodiments of the present disclosure;
FIG. 20 is a cartoon style model provided by an embodiment of the present disclosure;
fig. 21 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure;
fig. 22 is a block diagram of an electronic device to implement the image processing method of the embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In order to better understand the technical solution of the present disclosure, the related art to which the present disclosure relates is described below.
At present, with the continuous development of image technology, virtual characters have found a great number of applications. For example, there currently exists a PTA (Photo to avatar) system that can automatically generate a virtual character corresponding to a captured image; the generated virtual character can also show expressions synchronized with the user's facial expressions.
For example, fig. 1 may be understood in conjunction with fig. 1, where fig. 1 is a schematic diagram of an implementation of a virtual role provided in the embodiment of the present disclosure.
As shown in fig. 1, assume that 101 in fig. 1 is a face image with the mouth open. Processing this face image yields the virtual character shown at 102 in fig. 1, that is, the panda head in fig. 1; the panda head can also show a mouth-opening movement following the user's expression.
In an actual implementation process, when the expression of the virtual character is controlled, the user's expression may, for example, be captured in real time, and the virtual character is then controlled to show the corresponding expression in real time; this may be understood by reference to dumoji in the Baidu input method, for example.
The implementation of the current PTA system requires a large number of BS bases, as described below.
To some extent, a face with an expression can be split into two parts: an identity part and an expression part. The identity is the essence of a particular face, what distinguishes one face from another, and it does not change over time; the expression component, by contrast, varies constantly, and a single face can take on many expressions.
The face at a given moment can then be regarded as the essential identity superimposed with the expression at that moment. The great advantage of this split is that the expression component of one face can be extracted and superimposed on the identity of another face, so that the other face shows the same expression.
Based on this, if the expression component is superimposed on the virtual character, the virtual character can be controlled to express the expression consistent with the human face.
However, there seem to be infinitely many possible expressions. To make expressions computable, they may be realized through blend shape (BS) bases, where a BS base is a reference for a group of overall expressions. The number of BS bases may be, for example, 10, 50, 100, or 200; this embodiment is not limited thereto. The larger the number of BS bases, the finer the expressions the corresponding virtual character can represent.
For example, the overall expression may be calculated from a set of BS bases through linear combination. In the actual implementation process, a certain number of BS bases may be preset; a corresponding set of BS bases is then determined by capturing the facial expression, and the virtual character is controlled to realize the corresponding expression based on a linear combination of that set.
The preset BS bases may include, for example, blinking the left eye (eyeBlinkLeft), looking down with the left eye (eyeLookDownLeft), blinking the right eye (eyeBlinkRight), closing the mouth (mouthClose), and the like.
In short, a corresponding face model can be prepared in advance for each expression, and the degree of each expression, from unexpressed to fully expressed, can be controlled by a coefficient from 0 to 1. The applicable BS bases and their coefficients are determined by tracking and capturing the human face, and the virtual character can then be controlled to show various expressions through their linear combination.
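To make the linear combination concrete, the following is a minimal sketch of the standard blend-shape formula this passage describes; the array shapes and function name are illustrative assumptions, not code from the patent.

```python
# Illustrative blend-shape combination (assumed shapes, not the patent's code).
import numpy as np

def blend_face(neutral, bs_deltas, coeffs):
    """neutral: (V, 3) neutral-face vertices; bs_deltas: (K, V, 3) per-vertex
    offsets of K prepared BS bases; coeffs: (K,) coefficients in [0, 1]
    (0 = expression absent, 1 = fully expressed)."""
    coeffs = np.clip(np.asarray(coeffs, dtype=float), 0.0, 1.0)
    # Neutral face plus the weighted sum of the base offsets.
    return neutral + np.tensordot(coeffs, bs_deltas, axes=1)
```

Face tracking supplies the coefficient vector; the engine evaluates this sum every frame to pose the model.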
However, in the prior-art approach of controlling the virtual character to express the corresponding expression through BS bases, a relatively fine expression requires a large number of BS bases to be set, and a large number of BS bases causes a significant energy-consumption problem when deployed in an engine to drive expressions.
Meanwhile, BS bases can only show certain preset combined expressions. For example, if the user currently wants the virtual character's eyes to be larger, only a few preset eye presentation forms can be realized based on BS bases, and these may not be what the user wants; the user may wish to adjust the size of the virtual character's eyes freely, to match the mood of the moment. The BS bases therefore cannot satisfy the user's need to custom-design the virtual character, and control of the virtual character lacks flexibility.
Aiming at the problems in the prior art, the technical concept of the present disclosure is as follows: a skeleton system is provided that comprises a plurality of bones, each of which can move correspondingly in response to a user operation; the user can then realize custom design of the virtual character based on the skeleton system, effectively improving the flexibility of control over the virtual character.
Based on the above description, the image processing method provided by the present disclosure is described below with reference to specific embodiments. It should be noted that the execution subject of each embodiment may be, for example, a terminal device such as a computer, a tablet computer, or a mobile phone (also called a "cellular" phone); the terminal device may also be a portable, pocket-sized, handheld, or computer-embedded mobile device. The terminal device is not particularly limited here, as long as it can acquire and process images.
Fig. 2 is a flowchart of an image processing method according to an embodiment of the present disclosure.
As shown in fig. 2, the method includes:
s201, displaying a first face image to be processed, wherein the first face image corresponds to a plurality of bones.
In this embodiment, a first face image to be processed may be displayed, for example, on the screen of a terminal device. The first face image may be the image currently presented by a pre-created 3D model; for example, the panda head illustrated in fig. 1 may serve as the first face image. The presentation style of the first face image is designed and implemented during modeling; in the actual implementation process, the specific presentation style may be selected according to actual requirements, which is not limited in this embodiment.
In this embodiment, the created bone system is applied to each 3D model obtained through modeling, so that the first face image corresponds to a plurality of bones. The plurality of bones may include, for example, a bone corresponding to the left eyebrow, a bone corresponding to the right eyebrow, a bone corresponding to the mouth, a bone corresponding to the eyes, and the like.
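As a purely illustrative picture of what such a plurality of bones may look like in code, a minimal sketch follows; the two-end-point segment representation and the bone names are assumptions consistent with the embodiments described below.

```python
# Minimal illustrative skeleton representation (names are assumptions):
# each bone is a segment with two end points, keyed by facial part.
from dataclasses import dataclass

@dataclass
class Bone:
    name: str                  # e.g. "left_eyebrow" or "mouth_corner_right"
    head: tuple[float, float]  # first end point, screen-space (x, y)
    tail: tuple[float, float]  # second end point

face_bones = {
    "left_eyebrow": Bone("left_eyebrow", (80.0, 60.0), (120.0, 55.0)),
    "mouth":        Bone("mouth", (95.0, 160.0), (145.0, 160.0)),
}
```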
S202, receiving a deformation operation input on the first face image.
In this embodiment, the first face image may be displayed on the terminal device; thus, in one possible implementation, the deformation operation input on the first face image may be received by the terminal device.
The deformation operation may be, for example, a sliding operation on the first face image; through the sliding operation, the corresponding deformation of the first face image can be controlled, enabling user-defined design of the virtual character.
S203, determining, among the plurality of bones, at least one target bone corresponding to the deformation operation, and determining a target position corresponding to each target bone according to the deformation operation.
After the deformation operation input on the first face image is received, at least one target bone corresponding to the deformation operation may be determined among the plurality of bones. For example, if the current deformation operation is a sliding operation performed on an eyebrow of the first face image, the bone corresponding to the eyebrow may be determined as the target bone; if it is a sliding operation performed on the mouth of the first face image, the bone corresponding to the mouth may be determined as the target bone.
It is understood that in the present embodiment the presentation style of the first face image is being customized; therefore, after the at least one target bone is determined, the target position corresponding to each target bone may also be determined according to the deformation operation. The target position corresponding to a target bone may be understood as the operation position of the deformation operation.
And S204, updating the first face image to obtain a second face image according to the target position corresponding to each target bone, and displaying the second face image.
After the target position corresponding to each target bone is determined, the presentation form of the virtual character desired by the current user is known, and the first face image may be updated according to those target positions to obtain the second face image. Because the corresponding 3D model can be driven to present the corresponding style according to the positions of its bones, the second face image can be determined from the target position corresponding to each target bone. The specific implementation of bone-driven models may follow any possible implementation and is not described again here.
After the second facial image is obtained, the second facial image can be displayed on a screen of the terminal device, for example, so that the virtual character designed by the user in a customized manner can be rapidly displayed to the user.
The image processing method provided by the embodiment of the disclosure thus includes: displaying a first face image to be processed, the first face image corresponding to a plurality of bones; receiving a deformation operation input on the first face image; determining, among the plurality of bones, at least one target bone corresponding to the deformation operation, and determining a target position corresponding to each target bone according to the deformation operation; and updating the first face image according to the target positions to obtain a second face image, which is displayed. By applying the designed bones to the first face image, determining the target positions of the target bones from the deformation operation, and then driving the virtual character to present the corresponding second face image according to those positions, the style of the virtual character can be custom-designed according to the user's operation, effectively improving the flexibility of control over the virtual character.
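This flow can be sketched end to end on a toy 2-D skeleton. Everything below is an illustrative assumption (a circular action region and a fixed slide-to-bone ratio); the embodiments that follow refine both choices.

```python
# Toy end-to-end sketch of S201-S204 (illustrative assumptions throughout).
def apply_deformation(bones, slide_start, slide_end, radius=40.0, ratio=0.5):
    """bones: {name: ((x1, y1), (x2, y2))}. End points within `radius` of
    the slide's start position move by `ratio` times the slide vector."""
    dx = (slide_end[0] - slide_start[0]) * ratio
    dy = (slide_end[1] - slide_start[1]) * ratio
    def moved(p):
        near = (p[0] - slide_start[0]) ** 2 + (p[1] - slide_start[1]) ** 2 <= radius ** 2
        return (p[0] + dx, p[1] + dy) if near else p
    return {name: (moved(head), moved(tail)) for name, (head, tail) in bones.items()}

# A leftward slide near the right cheek thins that side of the face.
bones = {"cheek_right": ((120.0, 200.0), (150.0, 240.0))}
print(apply_deformation(bones, slide_start=(125.0, 205.0), slide_end=(105.0, 205.0)))
```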
On the basis of the foregoing embodiments, the image processing method provided by the embodiment of the present disclosure is described in further detail with reference to fig. 3 to 9. Fig. 3 is the second flowchart of the image processing method provided by the embodiment of the present disclosure; fig. 4 is an implementation schematic diagram of displaying a first face image; fig. 5 is an implementation schematic diagram of face key points; fig. 6 is an implementation schematic diagram of a bone system; fig. 7 is one possible implementation schematic diagram of the deformation operation; fig. 8 is another possible implementation schematic diagram of the deformation operation; and fig. 9 is an implementation schematic diagram of moving a bone in response to the deformation operation.
As shown in fig. 3, the method includes:
s301, displaying a first face image to be processed, wherein the first face image corresponds to a plurality of bones.
The implementation of S301 is similar to that of S201 described above.
Further, an implementation of displaying the first face image may be described with reference to fig. 4. Referring to fig. 4, assume that the first face image 402, that is, the panda head image in fig. 4, is currently displayed on the screen of the terminal device. The panda head image is in fact the image of a 3D model of a panda head; in the actual implementation process, the specific implementation of the first face image may be selected according to actual requirements, which is not limited in this embodiment.
In this embodiment, the first face image corresponds to a plurality of bones. In a possible implementation, a set of bone systems may be preset and then applied to each 3D model, so that the first face image corresponding to a plurality of bones is obtained.
In a possible implementation, the bone system may be constructed according to the PTA operating principle, based on the model required by the PTA algorithm; for example, it may be designed around the facial contours and facial features involved in the landmark functions that PTA depends on, giving 5 types of face-pinching deformation directions: face shape, eyebrow shape, eye shape, nose shape, and mouth shape.
The landmark points may be understood with reference to fig. 5. When performing expression tracking based on a face image, the face key points (landmark points) may be detected, and the corresponding model is then driven to exhibit the corresponding expression based on those key points. Referring to fig. 5, the landmarks are the points illustrated in the figure; for example, 501 in fig. 5 represents a point on an eyebrow. In a specific implementation, a face image may be acquired and processed by a corresponding algorithm to obtain the face key points, and the corresponding face image may then be driven according to the positions of the detected key points.
Based on the landmark points introduced in fig. 5, a skeleton system as shown in fig. 6 may be designed in this embodiment. Referring to fig. 6, the system includes a plurality of bones; combining fig. 6 and fig. 5, the positions, numbers, and types of the bones in fig. 6 correspond to the landmark points in fig. 5, so the skeleton system in this embodiment can drive the model to exhibit the expression corresponding to a face image according to the detected face key points.
Moreover, each bone in the skeleton system of this embodiment can move correspondingly in response to user operations, so the user can perform customized face pinching on the virtual character. Based on the above description, 5 types of face-pinching deformation directions may be designed for the skeleton system: face shape, eyebrow shape, eye shape, nose shape, and mouth shape.
Each pinching direction may be understood in more detail with reference to the following description; an illustrative mapping of the categories is also sketched after the list.
The face shape category may include, for example: changes in the overall width and length of the head, changes in the head shape above the eyebrows, changes in the cheekbone area, cheek fatness and thinness, and changes in the lower jaw. Such deformation corresponds to the landmark points around the face contour.
The eyebrow shape category may include, for example: changes in the overall eyebrow height, changes in eyebrow width and thickness, and changes in the relative position and thickness of the eyebrow head, peak, and tail. Such deformation corresponds to the eyebrow landmark points.
The eye shape category may include, for example: changes in the overall position and size of the eye, changes in the overall rotation and height of the eyelid and its inner and outer sides, and deformation of the inner canthus, the upper eyelid, the outer canthus, and the lower eyelid. Such deformation corresponds to the eye landmark points.
The nose shape category may include, for example: changes in the overall nose position and size, deformation of the nasal alae, deformation of the nasal tip, changes in the position of the nasal base, and deformation of the nasal bridge. Such deformation corresponds to the nose landmark points.
The mouth shape category may include, for example: changes in the overall mouth position, changes in mouth-corner height, and changes and deformation in the thickness of the upper and lower lips. Such deformation corresponds to the mouth landmark points.
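The sketch below records this taxonomy as a plain mapping from pinching category to the deformation groups listed above; all names are illustrative assumptions, not identifiers from the patent.

```python
# Illustrative mapping (assumed names) of the five face-pinching categories
# to the landmark/bone groups each one deforms.
PINCH_CATEGORIES = {
    "face":    ["head_width_length", "forehead", "cheekbone", "cheek", "jaw"],
    "eyebrow": ["brow_height", "brow_width_thickness", "brow_head", "brow_peak", "brow_tail"],
    "eye":     ["eye_position_size", "eyelid", "inner_canthus", "upper_lid", "outer_canthus", "lower_lid"],
    "nose":    ["nose_position_size", "nasal_alae", "nose_tip", "nose_base", "nose_bridge"],
    "mouth":   ["mouth_position", "mouth_corner", "upper_lip", "lower_lip"],
}
```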
S302, receiving a deformation operation input to the first face image, wherein the deformation operation comprises at least one sliding operation.
The implementation of S302 is similar to that of S202 described above; further, in this embodiment, the deformation operation may include at least one sliding operation.
In a possible implementation, the input medium of the sliding operation is, for example, the user's finger: when one finger presses and slides on one part, one sliding operation is performed; when two fingers press and slide on two parts, two sliding operations are performed; and so on. The deformation operation in this embodiment may thus include at least one sliding operation. In the actual implementation process, the specific input medium of the deformation operation may be selected according to actual requirements, which is not limited in this embodiment.
For example, this can be understood in conjunction with fig. 7. Referring to 701 in fig. 7, the first face image, that is, the panda head in fig. 7, is currently displayed on the screen of the terminal device. Assume the deformation operation currently received for the first face image is the operation illustrated at 703 in fig. 7. As the figure shows, this deformation operation is a sliding operation on one part, so it includes one sliding operation; with the sliding direction indicated by arrow A in fig. 7, the current user evidently wants the right side of the avatar's face to be a little thinner.
For another example, referring to 801 in fig. 8, the first face image, that is, the panda head in fig. 8, is currently displayed on the screen of the terminal device. Assume the deformation operation currently received is the operation indicated at 803 in fig. 8. As the figure shows, this deformation operation consists of sliding operations on two parts, so it includes two sliding operations; with the sliding directions indicated by arrows B and C in fig. 8, the user evidently wants both sides of the avatar's face to be thinner.
And S303, acquiring the starting position and the ending position of each sliding operation in the first face image.
After the deformation operation is received, the start position and end position of each sliding operation in the first face image may be acquired. For example, referring to fig. 7, assuming the sliding trajectory of the operation shown at 703 is the trajectory indicated by arrow A, the start position of the sliding operation may be determined as the position indicated at 704 and its end position as the position indicated at 705.
In the actual implementation process there may be a plurality of sliding operations; the specific start and end position of each sliding operation may be determined according to actual requirements, and this embodiment does not particularly limit this.
S304, for any sliding operation, determining a deformation region in the first face image according to the start position and end position of the sliding operation in the first face image.
After the start position and end position of a sliding operation are determined, the action region corresponding to the sliding operation can be determined based on them.
Based on the above description, the deformation operation in this embodiment may include at least one sliding operation. The following takes any one sliding operation as an example; the implementation corresponding to each sliding operation is the same and is not separately described.
Specifically, for any one sliding operation, the deformation region may be determined in the first face image according to the start and end positions of that sliding operation in the first face image; the deformation region in this embodiment is in effect the action region of the sliding operation.
In one possible implementation, an elliptical region within a preset range around the start position or the end position of the sliding operation may be determined as the deformation region; for example, referring to fig. 7, 706 may be the deformation region corresponding to the sliding operation shown at 703. Alternatively, the midpoint of the operation trajectory of the sliding operation may be taken as the center point, and a region of arbitrary shape (for example, a circle, a rectangle, or an irregular shape) within a preset range of that center point may be determined as the deformation region.
In alternative implementations, the deformation region may be determined in the same way for every position of the first face image, for example always the elliptical region described above, or always a rectangular region; or it may differ by position, for example an elliptical region for the face of the first face image and a circular region for the eyes, and so on.
And S305, determining the bones in the deformation region as the at least one target bone.
Here, too, any one sliding operation is described. Following the description of S304 above, after the deformation region corresponding to the sliding operation is determined, the bones within the deformation region may be determined as the at least one target bone.
Based on the above description, the first face image in the present embodiment corresponds to a plurality of bones, and the deformation region has now been determined; the bones in the deformation region can therefore be identified, obtaining at least one target bone. For example, the deformation region determined for the sliding operation in fig. 7 is the region indicated at 706; combined with the bone system described above, the bones in that region are the bones corresponding to the right facial edge, and the corresponding deformation is the cheek fatness deformation in the face shape category.
In the actual implementation process, the specific implementation of the deformation region, of the bone system, and of the target bones determined within the deformation region can all be selected according to actual requirements.
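A minimal sketch of S304-S305 follows, under the stated assumption that the deformation region is an axis-aligned ellipse centered on the slide's start position (the embodiment equally allows circles, rectangles, irregular shapes, and other center points).

```python
# Illustrative target-bone selection (assumed elliptical deformation region).
def in_ellipse(point, center, rx, ry):
    """True if `point` lies inside the axis-aligned ellipse of radii (rx, ry)."""
    nx = (point[0] - center[0]) / rx
    ny = (point[1] - center[1]) / ry
    return nx * nx + ny * ny <= 1.0

def select_target_bones(bones, start_pos, rx=50.0, ry=30.0):
    """bones: {name: ((x1, y1), (x2, y2))}. A bone counts as a target bone
    when either of its two end points lies inside the deformation region."""
    return [name for name, (head, tail) in bones.items()
            if in_ellipse(head, start_pos, rx, ry)
            or in_ellipse(tail, start_pos, rx, ry)]
```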
S306, for any one target bone, determining a target sliding operation corresponding to the target bone among the at least one sliding operation.
Based on the above description, at least one target bone can be determined according to the deformation operation, and the deformation operation in this embodiment may include at least one sliding operation; therefore, for any one target bone, the target sliding operation corresponding to that target bone can be determined among the at least one sliding operation.
For example, referring to fig. 7, the deformation operation in fig. 7 includes only one sliding operation, so in that example the target sliding operation determined for each target bone is that one sliding operation.
Referring to fig. 8, the deformation operation in fig. 8 includes two sliding operations, namely a sliding operation on the left face and a sliding operation on the right face; the sliding operation on the left face may be determined as the target sliding operation corresponding to the target bones of the left face, and the sliding operation on the right face as the target sliding operation corresponding to the target bones of the right face.
S307, determining, according to the start position and end position of the target sliding operation in the first face image, a first position of a first end point of the target bone and a second position of a second end point of the target bone, the target position comprising the first position and the second position.
After the target sliding operation corresponding to each target bone is determined, the target position of the target bone may be determined according to the start and end positions of that target sliding operation in the first face image. Each bone has two end points; thus, in one possible implementation, a first position of the first end point of the target bone and a second position of the second end point may be determined from the start and end positions of the target sliding operation, and the target position of the target bone in this embodiment may include the first position and the second position.
In a possible implementation, the sliding direction and sliding distance of the target sliding operation may be obtained from its start and end positions in the first face image. A mapping relationship between sliding distance and bone moving distance may be preset, for example as a preset ratio such as 1:0.5, meaning that for every 1 distance unit the sliding operation covers, the bone end point correspondingly moves 0.5 distance units.
The same mapping relationship may be set for all bones, or a separate mapping relationship may be set for each part; the specific implementation of the mapping is not limited in this embodiment, as long as the bone end points can be controlled to move to the corresponding degree according to the sliding operation.
Then, according to the sliding direction and sliding distance of the target sliding operation, the moving direction and moving distance of the corresponding target bone's target end point can be determined. The end point of the target bone within the deformation region is taken as the target end point and moved accordingly, yielding the first position of the first end point and the second position of the second end point of the moved target bone. The mapping relationship between the sliding distance of the sliding operation and the moving distance of the corresponding bone can be selected according to actual requirements.
For example, as can be understood in conjunction with fig. 9, referring to fig. 9, assuming that a sliding operation is currently performed with respect to the first face image, the start position of the sliding operation is the position indicated by 902 in fig. 9, and the end position of the sliding operation is the position indicated by 903 in fig. 9, the sliding direction and the sliding distance indicated by the arrows in fig. 9 can be obtained according to the start position and the end position of the sliding operation.
Then, following the implementation described above, the moving direction and moving distance of the target end point of a target bone can be determined from the sliding direction and sliding distance. Referring to fig. 9, bone 904, bone 905, and bone 906 are all located in the deformation region 901, and all three can be regarded as target bones. Taking bone 904 as an example: it includes two end points, end point a and end point b. End point a lies within the deformation region, so it is determined to be the target end point that currently needs to move; it may then be moved according to the moving direction and moving distance described above, giving the first position of the first end point a of target bone 904. In this example, end point b of bone 904 does not need to move, so its original position may be directly determined as the second position of the second end point b, yielding the target position of target bone 904. The above is an exemplary description for target bone 904; the target position of each bone is determined similarly and is not described again here.
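The fig. 9 behavior can be sketched for one target bone as follows; the 1:0.5 slide-to-bone mapping and the 2-D coordinates are illustrative assumptions.

```python
# Sketch of S306-S307 for one target bone, following the fig. 9 example:
# only end points inside the deformation region move.
def move_target_bone(head, tail, head_in_region, tail_in_region,
                     slide_start, slide_end, ratio=0.5):
    dx = (slide_end[0] - slide_start[0]) * ratio
    dy = (slide_end[1] - slide_start[1]) * ratio
    first_pos  = (head[0] + dx, head[1] + dy) if head_in_region else head
    second_pos = (tail[0] + dx, tail[1] + dy) if tail_in_region else tail
    return first_pos, second_pos  # the bone's target position

# End point a (head) lies in the region and moves; end point b keeps its place.
print(move_target_bone((100, 150), (100, 190), True, False, (98, 148), (88, 148)))
```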
And S308, updating the first face image to obtain a second face image according to the target position corresponding to each target bone, and displaying the second face image.
After the target position corresponding to each target bone is determined, and because the bones can drive the presentation and deformation of the model, the first face image can be updated according to those target positions to obtain the second face image, and the second face image is displayed.
For example, referring to fig. 7, a first face image is displayed at 701; after the deformation operation indicated at 703, the second face image displayed at 702 is obtained. Comparing 701 and 702, the right face of the virtual character's panda head is a little thinner than in the first face image.
Similarly, referring to fig. 8, a first face image is displayed at 801; after the deformation operation indicated at 803, the second face image displayed at 802 is obtained. Comparing 801 and 802, both sides of the face of the virtual character's panda head are slightly thinner than in the first face image.
The above drawings are exemplary implementations; in the actual implementation process, the specific implementations of the first face image and of the second face image obtained in response to the deformation operation may be selected according to actual requirements, which this embodiment does not particularly limit.
According to the image processing method provided by the embodiment of the disclosure, the first position of the first end point and the second position of the second end point of the target bone corresponding to each sliding operation are determined from the start and end positions of that sliding operation in the deformation operation, yielding the target position of the target bone. The corresponding target bone can thus be moved accurately and effectively in response to the deformation operation; the virtual character is then driven, according to the positions of the target bones, to present the corresponding image, and the second face image is obtained and displayed. The presentation style of the virtual character can therefore be adjusted according to the user's operation, realizing user-defined setting of the presentation style and effectively improving the flexibility of controlling the virtual character.
On the basis of the foregoing embodiment, it can be understood that the image processing method provided by the embodiment of the present disclosure covers three different situations: user-defined face pinching of the virtual character; tracking the user's facial expression in real time and controlling the virtual character to show the same expression; and user-defined face pinching performed while the facial expression is tracked and the virtual character shows the same expression. The three situations are introduced separately below.
First, the user-defined face-pinching implementation of the virtual character is introduced with reference to fig. 10. When the user only pinches the face, updating the first face image according to the target position corresponding to each target bone to obtain the second face image may include the steps shown in fig. 10. Fig. 10 is the third flowchart of the image processing method according to the embodiment of the present disclosure.
As shown in fig. 10, the method includes:
s1001, acquiring a first skeleton structure corresponding to the first face image, wherein the first skeleton structure comprises a plurality of skeletons and position relations among the skeletons.
In this embodiment, the first image corresponds to a first bone structure, wherein the first bone structure may be, for example, the predetermined bone system described in the above embodiment, and the first bone structure includes a plurality of bones and a positional relationship between the plurality of bones. In an actual implementation process, the number of bones, the types of bones, the position relationship between the bones, and the like specifically included in the first bone framework may be selected according to actual requirements, which is not limited in this embodiment.
S1002, updating the positions of the target bones in the first bone framework according to the target positions corresponding to the target bones to obtain a second bone framework.
Based on the above description, it can be determined that, in the present embodiment, the target bones are determined according to the deformation operation, and the target positions of the respective target bones are determined, where the target positions of the target bones are actually the bone positions after the bones are correspondingly moved in response to the deformation operation, so that the positions of the target bones in the first bone architecture can be updated according to the target position corresponding to each target bone, and the second bone architecture can be obtained, and it can be understood that the target bones in the second bone architecture are located at the corresponding target positions.
And S1003, generating a second face image according to the second skeleton architecture.
After the second skeleton architecture is determined, the second face image may be generated according to it. In a possible implementation, the bones drive the presentation style of the virtual character; the virtual character is driven according to the second skeleton architecture to present the corresponding style, thereby generating the second face image. The presentation style of the virtual character is thus adjusted correspondingly according to the deformation operation input by the user, that is, face pinching of the virtual character is enabled.
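A minimal sketch of S1002 under assumed dictionary-based structures follows; generating and rendering the second face image from the result (S1003) is engine-specific and omitted here.

```python
# Sketch of S1002 (assumed structures): write each target bone's target
# position into a copy of the first skeleton architecture.
import copy

def update_skeleton(first_architecture, target_positions):
    """first_architecture: {bone_name: (head, tail)}; target_positions:
    {bone_name: (first_position, second_position)} for the target bones."""
    second_architecture = copy.deepcopy(first_architecture)
    second_architecture.update(target_positions)
    return second_architecture
```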
In this embodiment, the target bones in the first skeleton architecture are moved according to the deformation operation to obtain the second skeleton architecture, and the virtual character is then driven according to the second skeleton architecture to generate the second face image. The presentation style of the virtual character can thus be set in a user-defined manner in response to user operations, realizing personalized presentation of the virtual character and effectively improving the flexibility of its control.
Fig. 10 describes an implementation of merely pinching the face of a virtual character. In another possible implementation, the image processing method provided by the present disclosure may further track the user's facial expression in real time and control the virtual character to show the same expression. An implementation of controlling the virtual character to show, in real time, the expression in the corresponding style is described below with reference to figs. 11 to 16.
Fig. 11 is the fourth flowchart of the image processing method provided by the embodiment of the present disclosure; fig. 12 is a schematic diagram illustrating implementation of synchronous expression by a virtual character; fig. 13 is a first schematic diagram illustrating implementation of the eye-closing effect of a virtual character; fig. 14 is a second schematic diagram illustrating implementation of the eye-closing effect of a virtual character; fig. 15 is a first schematic diagram illustrating implementation of the mouth-opening effect of a virtual character; and fig. 16 is a second schematic diagram illustrating implementation of the mouth-opening effect of a virtual character.
As shown in fig. 11, the method provided in this embodiment may further include, after displaying the second face image, the following steps:
and S1101, acquiring a reference image acquired by a camera in the electronic equipment.
In this embodiment, the electronic device may include, for example, an image capturing device, where the image capturing device of the electronic device may capture an image, and after the image capturing device of the electronic device captures a reference image, the reference image may be obtained in this embodiment, for example.
In a possible implementation manner, the electronic device may be, for example, the terminal device described above, and the terminal device may acquire the captured reference image after capturing an image by using the image capturing device, for example, as can be understood with reference to fig. 12, which may be referred to as 1201 in fig. 12, where the terminal device 1203 may capture the reference image by using a front camera, for example.
In an actual implementation process, a specific image capturing device and a specific implementation manner of the captured reference image may be selected according to actual requirements, which is not limited in this embodiment.
And S1102, if the reference image comprises a face image, acquiring the second skeleton architecture corresponding to the second face image.
In this embodiment, because the user's expression needs to be tracked and expressed synchronously, it is necessary to determine whether the reference image currently collected by the camera includes a face image. If it does not, the subsequent expression-tracking processing is unnecessary; when the reference image is determined to include a face image, corresponding processing may be performed to control the virtual object to express the expression corresponding to the collected reference image.
In a possible implementation, if the reference image is determined to include a human face image, the second skeleton architecture corresponding to the second face image may be acquired. It can be understood that the tracking and synchronized expression described in this embodiment happens after the second face image is displayed, that is, after the user has finished pinching the face of the virtual character; the virtual character is controlled to express the corresponding expression based on the second face image obtained after pinching. The second skeleton architecture corresponding to the second face image can therefore currently be obtained; the implementation of the second skeleton architecture is similar to that described in the above embodiment.
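The per-frame control flow of S1101-S1102 can be sketched as follows; the detector, parameter, and driver callables are hypothetical placeholders, since the disclosure does not fix them.

```python
# Illustrative per-frame gate (hypothetical callables, control flow only).
def on_camera_frame(frame, skeleton, detect_face, movement_params, drive_model):
    """detect_face(frame) -> face key points or None; movement_params(...)
    -> per-bone movement parameters; drive_model renders the expression."""
    landmarks = detect_face(frame)   # S1102: does the reference image contain a face?
    if landmarks is None:
        return                       # no face image: skip expression tracking
    drive_model(skeleton, movement_params(skeleton, landmarks))  # S1103 onward
```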
And S1103, determining second bone movement parameters of the second skeleton architecture according to the face image, wherein the second bone movement parameters comprise the movement distance and rotation angle of each bone in the second skeleton architecture.
The face image in this embodiment may be, for example, a face image of the user, which reflects the user's facial expression; the second bone movement parameters of the second skeleton architecture may be determined according to it. Based on the above description, the second skeleton architecture includes a plurality of bones, so the second bone movement parameters may include, for example, the movement distance and rotation angle of each bone in the second skeleton architecture.
In one possible implementation, the face image may be processed to determine the positions of a plurality of face key points, and the movement distance and rotation angle of each bone are then determined based on those positions. For example, if the user's current expression is an open mouth, the positions of the mouth key points change from their original positions; the moved positions of the mouth bones are determined from the key-point positions, and from the moved and pre-movement bone positions the movement distance and rotation angle of the mouth bones can be determined, so as to control the virtual character to express the corresponding open-mouth expression.
In the actual implementation process, the specific way of determining the bone movement parameters from the face image can be selected according to actual requirements, as long as the bone positions indicated by the movement distances and rotation angles conform to the expression of the face image.
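For a single bone, the movement distance and rotation angle can be derived by comparing the key-point-derived bone position before and after the expression change; the midpoint-and-orientation convention used here is an illustrative assumption.

```python
# Illustrative derivation of one bone's movement parameters (S1103).
import math

def bone_movement_params(old_head, old_tail, new_head, new_tail):
    """Return (movement_distance, rotation_angle_degrees) for one bone,
    comparing its position before and after the key points moved."""
    # Movement distance: displacement of the bone segment's midpoint.
    old_mid = ((old_head[0] + old_tail[0]) / 2, (old_head[1] + old_tail[1]) / 2)
    new_mid = ((new_head[0] + new_tail[0]) / 2, (new_head[1] + new_tail[1]) / 2)
    distance = math.hypot(new_mid[0] - old_mid[0], new_mid[1] - old_mid[1])
    # Rotation angle: change in the segment's orientation.
    old_angle = math.atan2(old_tail[1] - old_head[1], old_tail[0] - old_head[0])
    new_angle = math.atan2(new_tail[1] - new_head[1], new_tail[0] - new_head[0])
    return distance, math.degrees(new_angle - old_angle)
```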
In a possible implementation manner, for example, a moving manner and a moving range of each bone in the bone architecture may be preset in the present embodiment, where the moving manner may include at least one of the following: translation, zooming, and rotation.
As described above, in the PTA system the expression of the virtual character may be generated based on a BS (blend shape) base, and in this embodiment the BS base and the skeleton drive may act simultaneously to control the virtual character to express the corresponding expression. When the BS base and the skeleton drive act simultaneously, for example, the corresponding BS base may first be matched according to the face image in the reference image, the corresponding bone parameters may then be determined in the manner described above, and the expression of the virtual character may be synthesized from the matched BS base and the determined bone parameters.
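One simple way such a synthesis could be expressed is sketched below: blend-shape deltas and bone-driven offsets are summed per vertex. The sketch simplifies skinning to translations only, and all names (base_mesh, bs_deltas, skin_weights, and so on) are assumptions for illustration rather than the PTA system's actual interfaces:

```python
import numpy as np

def synthesize_expression(base_mesh, bs_deltas, bs_weights,
                          bone_offsets, skin_weights):
    """
    base_mesh:    (V, 3) neutral vertex positions
    bs_deltas:    dict name -> (V, 3) per-vertex displacement of one BS base
    bs_weights:   dict name -> weight in [0, 1], matched from the face image
    bone_offsets: dict bone -> (3,) offset derived from the bone parameters
    skin_weights: dict bone -> (V,) per-vertex skinning weight of that bone
    """
    out = base_mesh.astype(float).copy()
    for name, w in bs_weights.items():        # blend-shape term
        out += w * bs_deltas[name]
    for bone, off in bone_offsets.items():    # bone-driven term
        out += skin_weights[bone][:, None] * np.asarray(off, dtype=float)[None, :]
    return out
```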
In a possible implementation manner, by predefining the positions, movement modes, and movement ranges of the bones as described above, the virtual character can still express expressions such as blinking, opening the mouth, and smiling normally after the user has pinched the face of the virtual model.
This can be understood with reference to fig. 13 and fig. 14 in conjunction with the eye-closing action, which is in essence the lowering of the upper eyelid. Referring to fig. 13, if the lowering of the upper eyelid is realized by translating the bone, pattern 1301 in fig. 13 shows the state before the upper eyelid is lowered and pattern 1302 the state after. It can be seen from 1302 that the resulting blinking effect is unnatural; specifically, the upper eyelid appears too tight.
Based on this, the implementation in this embodiment is, for example, that shown in fig. 14: the deformation bone of the upper eyelid is placed at the eyelid overlap of the blinking BS expression, and the movement mode of the bone is set to zooming. Referring to fig. 14, pattern 1401 shows the state before the upper eyelid is lowered and pattern 1402 the state after. As fig. 14 shows, zooming the bone moves the upper eyelid slightly downward to realize the blinking effect, and comparing fig. 13 with fig. 14, the blinking effect in fig. 14 is clearly more natural.
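A toy numeric comparison (with invented coordinates) makes the difference between the two movement modes concrete: zooming the lid bone about its attachment point moves only the free edge of the lid, while translation drags the attachment down with it, which is what produces the "too tight" look of fig. 13:

```python
# Invented rest positions: lid attachment at y = 1.0, free edge at y = 0.6,
# with smaller y meaning lower on the face.
attachment_y, edge_y = 1.0, 0.6

# Translation: the whole lid drops rigidly, attachment included (fig. 13).
dy = -0.3
translated = (attachment_y + dy, edge_y + dy)  # -> (0.7, 0.3)

# Zooming about the attachment point: only the free edge travels (fig. 14).
s = 1.75  # vertical scale factor applied to the offset from the attachment
zoomed = (attachment_y, attachment_y + s * (edge_y - attachment_y))  # -> (1.0, 0.3)

print(translated, zoomed)  # the lid edge ends at the same height in both cases
```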
Similarly, the mouth-opening action can be understood with reference to fig. 15 and fig. 16. Referring to fig. 15, if the opening of the mouth is realized by translating the bones up and down, pattern 1501 in fig. 15 shows the state before the mouth is opened and pattern 1502 the state after. It can be seen from 1502 that the resulting mouth-opening effect is unnatural; specifically, the mouth corners take on a gourd shape.
Based on this, the implementation in this embodiment is, for example, that shown in fig. 16: the mouth-corner bones still use up-and-down displacement, but the deformation of the other mouth-corner positions is instead carried by their parent bone, i.e., applied to the mouth as a whole, so that if a mouth corner becomes wider, the whole mouth becomes wider, which ensures that the open-mouth smiling action appears normal. Referring to fig. 16, pattern 1601 shows the state before the mouth is opened and pattern 1602 the state after. Comparing fig. 15 with fig. 16, the mouth corners in fig. 16 do not exhibit the unnatural gourd shape seen in fig. 15.
The mouth bones and eye bones above are merely examples. In an actual implementation, a movement mode and a movement range may be preset for every bone in the skeleton architecture, so that expressions can still be expressed normally after the virtual character's face has been pinched. The specific movement mode and movement range for each bone may be selected according to actual requirements, which is not limited in this embodiment.
S1104, updating the second skeleton architecture according to the second bone movement parameters to obtain a fifth skeleton architecture.
After the second bone movement parameters are determined, the second skeleton architecture can be updated accordingly, that is, each bone to be moved is moved according to its determined movement distance and rotation angle, so as to obtain the fifth skeleton architecture.
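A minimal sketch of this update step, assuming a simple two-dimensional dictionary layout for the skeleton architecture (the layout and names are illustrative only, not the disclosed data structures):

```python
def update_architecture(architecture, movement_params):
    """architecture:    dict bone -> {"pos": (x, y), "angle": degrees}
    movement_params: dict bone -> {"offset": (dx, dy), "rotate": degrees}
    Bones without an entry in movement_params are left unchanged."""
    updated = {}
    for bone, state in architecture.items():
        p = movement_params.get(bone, {"offset": (0.0, 0.0), "rotate": 0.0})
        x, y = state["pos"]
        dx, dy = p["offset"]
        updated[bone] = {
            "pos": (x + dx, y + dy),                # apply the movement distance
            "angle": state["angle"] + p["rotate"],  # apply the rotation angle
        }
    return updated
```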
S1105, generating a third face image according to the fifth skeleton architecture.
After the fifth skeleton architecture is obtained, a third face image may be generated based on it; in a possible implementation, the virtual character is driven to present the corresponding appearance according to the fifth skeleton architecture, thereby generating the third face image. As described above, in the PTA system the corresponding BS parameters may also be matched according to the face image, and the third face image may then be generated based on the BS parameters together with the currently determined fifth skeleton architecture. The third face image in this embodiment is generated on the basis of the second face image; that is, after the virtual character's face has been pinched, the pinched virtual character is controlled to express the expression corresponding to the user's facial expression.
For example, as shown in fig. 12, the image capturing device of the electronic device may collect a face image, and the above analysis may then be performed on it so that the virtual character on the electronic device is controlled to present the corresponding appearance. Referring to 1201 in fig. 12, assuming that at the time corresponding to 1201 the user shows no particular expression, the virtual character displayed on the screen of the electronic device likewise shows an initial, neutral state. Assuming that the user makes an open-mouth expression at the next moment, referring to 1202 in fig. 12, the virtual character displayed on the screen correspondingly shows an open-mouth expression, which corresponds to the third face image.
In the image processing method provided by this embodiment of the present disclosure, after the user has finished pinching the face of the virtual character, the user's facial expression is collected to obtain a face image; the movement distance and rotation angle of each bone in the pinched second skeleton architecture are then determined based on the expression in the face image; the second skeleton architecture is updated based on these movement distances and rotation angles to obtain the fifth skeleton architecture; and the third face image is generated based on the fifth skeleton architecture. Tracking and synchronous expression of the user's expression can thus be effectively realized on the virtual character after face pinching. Moreover, by presetting the movement mode and movement range of each bone in the skeleton architecture, the corresponding expressions can still be expressed normally after the virtual character's face has been pinched, which effectively improves the stability and correctness of control over the virtual character.
On the basis of the above embodiments, an implementation that supports user-defined face pinching of the virtual character while simultaneously tracking the user's facial expression and controlling the virtual character to show the same expression is described below with reference to fig. 17 and fig. 18.
In an implementation in which face pinching is performed while the expression of the virtual character is tracked and controlled, the step of updating the first face image to obtain the second face image according to the target position corresponding to each target bone may include the steps shown in fig. 17. Fig. 17 is a fifth flowchart of the image processing method provided by the embodiment of the present disclosure, and fig. 18 is a schematic diagram of performing face pinching while the virtual character simultaneously expresses an expression.
As shown in fig. 17, the method includes:
S1701, acquiring a reference image captured by a camera device in the electronic device.
The implementation of S1701 is similar to that of S1101, and is not described again here.
S1702, if the reference image includes a face image, acquiring a first skeleton architecture corresponding to the first face image, wherein the first skeleton architecture includes a plurality of bones and the positional relationships among the plurality of bones.
The implementation of S1702 is similar to that of S1102, except that in S1102 the expression control is performed after face pinching is completed, so the pinched second skeleton architecture is acquired, whereas in this embodiment face pinching has not yet been performed, so the initial first skeleton architecture is acquired; the first skeleton architecture likewise includes a plurality of bones and the positional relationships among them.
S1703, determining first bone movement parameters of the first skeleton architecture according to the face image, wherein the first bone movement parameters include the movement distance and rotation angle of each bone in the first skeleton architecture.
S1704, updating the first skeleton architecture according to the first bone movement parameters to obtain a third skeleton architecture.
The implementations of S1703 and S1704 are similar to those of S1103 and S1104 above, except that in this embodiment the first bone movement parameters of the first skeleton architecture are determined and the positions in the first skeleton architecture are updated accordingly.
S1705, updating the positions of the target bones in the third skeleton architecture according to the target position corresponding to each target bone, so as to obtain a fourth skeleton architecture.
After the above steps are completed, the tracking of the user's facial expression and the corresponding updating of the bone positions have been realized, yielding the third skeleton architecture. On the basis of the third skeleton architecture, face pinching of the virtual character may then be performed according to the deformation operation; specifically, the positions of the target bones in the third skeleton architecture are updated according to the target position corresponding to each target bone, so as to obtain the fourth skeleton architecture.
S1706, generating a second face image according to the fourth skeleton architecture.
It can be understood that the fourth skeleton architecture in this embodiment is determined from both the bone positions tracking the user's facial expression and the bone positions resulting from the face-pinching deformation operation. The second face image generated from the fourth skeleton architecture is therefore the image of the virtual character obtained from the user-defined face pinching performed while the user's facial expression is tracked and the virtual character is controlled to show the same expression.
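Reusing the illustrative update_architecture helper sketched earlier, the ordering of S1703 to S1706 could be expressed as follows; this is again an assumption-laden sketch, not the disclosed implementation:

```python
def pinch_while_tracking(first_architecture, movement_params, target_positions):
    # S1703/S1704: expression tracking moves the bones first.
    third_architecture = update_architecture(first_architecture, movement_params)
    # S1705: the face-pinch deformation then overrides the target bones.
    fourth_architecture = {b: dict(s) for b, s in third_architecture.items()}
    for bone, pos in target_positions.items():
        fourth_architecture[bone]["pos"] = pos
    # S1706: the second face image would be rendered from this architecture.
    return fourth_architecture
```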
For example, as can be understood with reference to fig. 18, the electronic device 1801 may capture a face image of the user and control the virtual object to express the same expression synchronously based on the captured image; while this expression tracking and synchronous control is in progress, the user may also, as shown in fig. 18, perform a customized face pinch on the virtual character through the deformation operation 1802.
Therefore, in the image processing method provided by this embodiment, the bone positions are first moved according to the expression tracking to obtain the third skeleton architecture; the third skeleton architecture is then updated according to the deformation operation to obtain the fourth skeleton architecture; and the second face image is generated from the fourth skeleton architecture. User-defined face pinching of the virtual character can thus be realized while the user's facial expression is tracked and the virtual character is controlled to show the same expression, which effectively expands the applicable scenarios of virtual-character face pinching and improves the flexibility of control over the virtual character.
On the basis of the above embodiments, it is worth further explaining that the skeleton architecture provided in the present disclosure is applicable to a variety of different 3D models. It can be understood that 3D models may have various styles, such as a realistic style or a cartoon style, and that models of different styles differ in the positions of the facial features and the appearance of the face; for example, the eyes of a cartoon-style model sit lower on the face, while the eyes of a realistic-style model sit higher. Therefore, in this embodiment, when the skeleton architecture is associated with a particular 3D model, the positions of the bones in the skeleton architecture may be adjusted accordingly.
For example, this can be understood with reference to fig. 19 and fig. 20: fig. 19 shows a realistic-style model provided by the embodiment of the present disclosure, and fig. 20 shows a cartoon-style model provided by the embodiment of the present disclosure.
As can be seen by comparing fig. 19 and fig. 20, the positions of the facial features and the appearance of the face differ between the realistic style and the cartoon style, so when the skeleton system is associated with different 3D models, the positions of the bones can be adjusted accordingly. It should be understood that this adjustment of the bone positions and the association of the skeleton system with the model are carried out before the embodiments described above are performed.
In one possible implementation, a parent bone may be configured for each major category to manage the deformation bones of its children; for example, a parent bone is set for the eyeballs, one for the face, one for the ears, and so on. The parent bone mainly serves a positioning function: when the parent bone moves, the deformation bones of its children move with it.
Therefore, when switching between models of different styles, the parent bones can simply be adjusted to the appropriate positions, and the child deformation bones corresponding to each parent bone move accordingly, so the position of every bone is adapted automatically; for example, the eyeball parent node can be adjusted to the center of the eyeball, and the Y axis of the whole-head zoom node can be adjusted to the glabella. Furthermore, the position data of the bones can be stored in diff (difference) form rather than as absolute coordinates, so that the same set of data can be reused as far as possible across models of multiple styles.
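As an illustrative sketch of this parent/diff scheme (the names and the two-dimensional layout are assumptions), child bone positions can be resolved recursively from per-style parent placements plus the shared, stored diffs:

```python
def world_position(bone, parents, diffs, root_world, cache=None):
    """parents:    dict bone -> parent bone name, or None for a parent bone
    diffs:      dict bone -> (dx, dy) stored offset (diff) from its parent
    root_world: dict parent bone -> (x, y), placed per model style, e.g. the
                eyeball parent bone at the eyeball centre of this model"""
    if cache is None:
        cache = {}
    if bone in cache:
        return cache[bone]
    if parents[bone] is None:              # a positioning (parent) bone
        cache[bone] = root_world[bone]
    else:
        px, py = world_position(parents[bone], parents, diffs, root_world, cache)
        dx, dy = diffs[bone]
        cache[bone] = (px + dx, py + dy)   # child = parent + stored diff
    return cache[bone]
```

Under this sketch, switching from the realistic-style model to the cartoon-style model would only require re-placing the entries of root_world; the diffs, i.e., the stored bone data, are reused unchanged.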
Therefore, by adjusting the positions of the bones, the skeleton system in this embodiment can be mounted on models of different styles; that is, the skeleton system of the present disclosure has good applicability to different models, and its extensibility is effectively ensured on the basis of satisfying the face-pinching and/or expression-tracking functions introduced above.
Fig. 21 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure. As shown in fig. 21, the image processing apparatus 2100 of this embodiment may include: a display module 2101, a receiving module 2102, a determining module 2103, and a processing module 2104.
A display module 2101 configured to display a first facial image to be processed, where the first facial image corresponds to a plurality of bones;
a receiving module 2102 configured to receive a morphing operation input on the first face image;
a determining module 2103, configured to determine at least one target bone corresponding to the deformation operation from among the multiple bones, and determine a target position corresponding to each target bone according to the deformation operation;
and a processing module 2104 for updating the first facial image to obtain a second facial image according to the target position corresponding to each target bone, and displaying the second facial image.
In a possible implementation, the deformation operation comprises at least one sliding operation; the determining module 2103 is specifically configured to:
acquiring a starting position and an ending position of each sliding operation in the first face image;
and determining a target bone corresponding to each sliding operation in the plurality of bones according to the starting position and the ending position of each sliding operation in the first face image.
In one possible implementation, for any one sliding operation, the determining module 2103 is specifically configured to:
determining a deformation area in the first face image according to the starting position and the ending position of the sliding operation in the first face image;
determining a bone within the deformation region as the at least one target bone.
In a possible implementation, the deformation operation comprises at least one sliding operation; for any one target bone, the determining module 2103 is specifically configured to:
determining a target sliding operation corresponding to the target bone in the at least one sliding operation;
determining a first position of a first end point of the target bone and a second position of a second end point of the target bone according to a starting position and an ending position of the target sliding operation in the first facial image, wherein the target positions comprise the first position and the second position.
In one possible implementation, the processing module 2104 is specifically configured to:
acquiring a first skeleton framework corresponding to the first face image, wherein the first skeleton framework comprises a plurality of skeletons and position relations among the skeletons;
updating the positions of the target bones in the first bone framework according to the target positions corresponding to the target bones to obtain a second bone framework;
generating the second facial image from the second bone architecture.
In one possible implementation, the processing module 2104 is specifically configured to:
acquiring a reference image acquired by a camera device in the electronic equipment;
and if the reference image comprises a face image, updating the first face image according to the face image and the target position corresponding to each target skeleton to obtain a second face image.
In one possible implementation, the processing module 2104 is specifically configured to:
acquiring a first skeleton framework corresponding to the first face image, wherein the first skeleton framework comprises a plurality of skeletons and position relations among the skeletons;
determining first bone movement parameters of the first bone architecture according to the face image, wherein the first bone movement parameters comprise movement distances and rotation angles of all bones in the first bone architecture;
updating the first skeleton architecture according to the first skeleton movement parameter to obtain a third skeleton architecture;
updating the position of the target skeleton in the third skeleton framework according to the target position corresponding to each target skeleton to obtain a fourth skeleton framework;
generating the second facial image according to the fourth bone architecture.
In one possible implementation, the processing module 2104 is further configured to:
after the second face image is displayed, acquiring a reference image acquired by a camera in the electronic equipment;
and if the reference image comprises a face image, updating the second face image according to a skeleton framework corresponding to the face image to obtain a third face image, and displaying the third face image.
In one possible implementation, the processing module 2104 is specifically configured to:
acquiring a second skeleton architecture corresponding to the second face image;
determining second bone movement parameters of the second bone architecture from the face image, wherein the second bone movement parameters comprise movement distance and rotation angle of each bone in the second bone architecture;
updating the second skeleton architecture according to the second skeleton movement parameter to obtain a fifth skeleton architecture;
generating the third facial image according to the fifth skeletal architecture.
In a possible implementation, before determining at least one target bone corresponding to the deformation operation from among the plurality of bones, the processing module 2104 is further configured to:
pre-configuring a movement mode and a movement range of each bone, wherein the movement mode comprises at least one of the following: translation, zooming, and rotation.
The present disclosure provides an image processing method and apparatus, applied to the field of computer vision in image processing, with the aim of improving the flexibility of control over virtual characters.
It should be noted that the head model in this embodiment is not the head model of any specific user and cannot reflect the personal information of any specific user, and that the two-dimensional face image in this embodiment comes from a public data set.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, and other handling of the personal information of the users involved all comply with the provisions of relevant laws and regulations and do not violate public order and good morals.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
According to an embodiment of the present disclosure, the present disclosure also provides a computer program product, comprising: a computer program stored in a readable storage medium, from which at least one processor of the electronic device can read the computer program, the at least one processor executing the computer program to cause the electronic device to perform the solution provided by any of the embodiments described above.
Fig. 22 illustrates a schematic block diagram of an example electronic device 2200 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 22, the device 2200 includes a computing unit 2201, which can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 2202 or a computer program loaded from a storage unit 2208 into a Random Access Memory (RAM) 2203. In the RAM 2203, various programs and data required for the operation of the device 2200 may also be stored. The computing unit 2201, the ROM 2202, and the RAM 2203 are connected to each other via a bus 2204. An input/output (I/O) interface 2205 is also connected to the bus 2204.
A number of components in the device 2200 are connected to the I/O interface 2205, including: an input unit 2206 such as a keyboard, a mouse, or the like; an output unit 2207 such as various types of displays, speakers, and the like; a storage unit 2208 such as a magnetic disk, an optical disk, or the like; and a communication unit 2209 such as a network card, modem, wireless communication transceiver, etc. The communication unit 2209 allows the device 2200 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 2201 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 2201 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The calculation unit 2201 performs the respective methods and processes described above, such as an image processing method. For example, in some embodiments, the image processing method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 2208. In some embodiments, some or all of the computer programs may be loaded and/or installed onto device 2200 via ROM 2202 and/or communications unit 2209. When the computer program is loaded into the RAM 2203 and executed by the computing unit 2201, one or more steps of the image processing method described above may be performed. Alternatively, in other embodiments, the computing unit 2201 may be configured to perform the image processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system that addresses the drawbacks of high management difficulty and weak service scalability in traditional physical hosts and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (23)

1. An image processing method comprising:
displaying a first facial image to be processed, wherein the first facial image corresponds to a plurality of bones;
receiving a deformation operation input on the first face image;
determining at least one target bone corresponding to the deformation operation in the plurality of bones, and determining a target position corresponding to each target bone according to the deformation operation;
and updating the first facial image to obtain a second facial image according to the target position corresponding to each target bone, and displaying the second facial image.
2. The method of claim 1, wherein the deforming operation comprises at least one sliding operation; determining at least one target bone among the plurality of bones to which the deformation operation corresponds, including:
acquiring a starting position and an ending position of each sliding operation in the first face image;
and determining a target bone corresponding to each sliding operation in the plurality of bones according to the starting position and the ending position of each sliding operation in the first face image.
3. The method of claim 2, wherein, for any one sliding operation, determining a target bone corresponding to the sliding operation in the plurality of bones according to the starting position and the ending position of the sliding operation in the first facial image comprises:
determining a deformation area in the first face image according to the starting position and the ending position of the sliding operation in the first face image;
determining a bone within the deformation region as the at least one target bone.
4. A method according to any one of claims 1-3, wherein the deforming operation comprises at least one sliding operation; for any target bone, determining a target position corresponding to the target bone according to the deformation operation, including:
determining a target sliding operation corresponding to the target bone in the at least one sliding operation;
determining a first position of a first end point of the target bone and a second position of a second end point of the target bone according to a starting position and an ending position of the target sliding operation in the first facial image, wherein the target positions comprise the first position and the second position.
5. The method according to any one of claims 1-4, wherein updating the first facial image to obtain a second facial image according to the target location corresponding to each target bone comprises:
acquiring a first skeleton framework corresponding to the first face image, wherein the first skeleton framework comprises a plurality of skeletons and position relations among the skeletons;
updating the positions of the target bones in the first bone framework according to the target positions corresponding to the target bones to obtain a second bone framework;
generating the second facial image from the second bone architecture.
6. The method according to any one of claims 1-4, wherein updating the first facial image to obtain a second facial image according to the target location corresponding to each target bone comprises:
acquiring a reference image acquired by a camera device in the electronic equipment;
and if the reference image comprises a face image, updating the first face image according to the face image and the target position corresponding to each target skeleton to obtain a second face image.
7. The method of claim 6, wherein updating the first facial image to obtain a second facial image according to the target position corresponding to the facial image and each target bone comprises:
acquiring a first skeleton framework corresponding to the first face image, wherein the first skeleton framework comprises a plurality of skeletons and position relations among the skeletons;
determining first bone movement parameters of the first bone architecture according to the face image, wherein the first bone movement parameters comprise movement distances and rotation angles of all bones in the first bone architecture;
updating the first skeleton architecture according to the first skeleton movement parameter to obtain a third skeleton architecture;
updating the position of the target skeleton in the third skeleton framework according to the target position corresponding to each target skeleton to obtain a fourth skeleton framework;
generating the second facial image according to the fourth bone architecture.
8. The method of any of claims 1-5, wherein after displaying the second facial image, further comprising:
acquiring a reference image acquired by a camera device in the electronic equipment;
and if the reference image comprises a face image, updating the second face image according to a skeleton framework corresponding to the face image to obtain a third face image, and displaying the third face image.
9. The method of claim 8, wherein updating the second facial image according to the bone architecture corresponding to the facial image to obtain a third facial image comprises:
acquiring a second skeleton architecture corresponding to the second face image;
determining second bone movement parameters of the second bone architecture from the face image, wherein the second bone movement parameters comprise movement distance and rotation angle of each bone in the second bone architecture;
updating the second skeleton architecture according to the second skeleton movement parameter to obtain a fifth skeleton architecture;
generating the third facial image according to the fifth skeletal architecture.
10. The method of any of claims 1-9, further comprising, prior to determining at least one target bone of the plurality of bones to which the deformation operation corresponds:
pre-configuring a movement mode and a movement range of each bone, wherein the movement mode comprises at least one of the following: translation, zooming, and rotation.
11. An image processing apparatus comprising:
the display module is used for displaying a first face image to be processed, and the first face image corresponds to a plurality of bones;
the receiving module is used for receiving deformation operation input on the first face image;
a determining module, configured to determine at least one target bone corresponding to the deformation operation from among the multiple bones, and determine a target position corresponding to each target bone according to the deformation operation;
and the processing module is used for updating the first face image to obtain a second face image according to the target position corresponding to each target bone and displaying the second face image.
12. The apparatus of claim 11, wherein the deforming operation comprises at least one sliding operation; the determining module is specifically configured to:
acquiring a starting position and an ending position of each sliding operation in the first face image;
and determining a target bone corresponding to each sliding operation in the plurality of bones according to the starting position and the ending position of each sliding operation in the first face image.
13. The apparatus of claim 12, wherein, for any one sliding operation, the determining module is specifically configured to:
determining a deformation area in the first face image according to the starting position and the ending position of the sliding operation in the first face image;
determining a bone within the deformation region as the at least one target bone.
14. The device according to any one of claims 11-13, wherein the deformation operation comprises at least one sliding operation; for any one target bone, the determining module is specifically configured to:
determining a target sliding operation corresponding to the target bone in the at least one sliding operation;
determining a first position of a first end point of the target bone and a second position of a second end point of the target bone according to a starting position and an ending position of the target sliding operation in the first facial image, wherein the target positions comprise the first position and the second position.
15. The apparatus according to any one of claims 11-14, wherein the processing module is specifically configured to:
acquiring a first skeleton framework corresponding to the first face image, wherein the first skeleton framework comprises a plurality of skeletons and position relations among the skeletons;
updating the positions of the target bones in the first bone framework according to the target positions corresponding to the target bones to obtain a second bone framework;
generating the second facial image from the second bone architecture.
16. The apparatus according to any one of claims 11-14, wherein the processing module is specifically configured to:
acquiring a reference image acquired by a camera device in the electronic equipment;
and if the reference image comprises a face image, updating the first face image according to the face image and the target position corresponding to each target skeleton to obtain a second face image.
17. The apparatus of claim 16, wherein the processing module is specifically configured to:
acquiring a first skeleton framework corresponding to the first face image, wherein the first skeleton framework comprises a plurality of skeletons and position relations among the skeletons;
determining first bone movement parameters of the first bone architecture according to the face image, wherein the first bone movement parameters comprise movement distances and rotation angles of all bones in the first bone architecture;
updating the first skeleton architecture according to the first skeleton movement parameter to obtain a third skeleton architecture;
updating the position of the target skeleton in the third skeleton framework according to the target position corresponding to each target skeleton to obtain a fourth skeleton framework;
generating the second facial image according to the fourth bone architecture.
18. The apparatus of any of claims 11-15, wherein the processing module is further configured to:
after the second face image is displayed, acquiring a reference image acquired by a camera in the electronic equipment;
and if the reference image comprises a face image, updating the second face image according to a skeleton framework corresponding to the face image to obtain a third face image, and displaying the third face image.
19. The apparatus of claim 18, wherein the processing module is specifically configured to:
acquiring a second skeleton architecture corresponding to the second face image;
determining second bone movement parameters of the second bone architecture from the face image, wherein the second bone movement parameters comprise movement distance and rotation angle of each bone in the second bone architecture;
updating the second skeleton architecture according to the second skeleton movement parameter to obtain a fifth skeleton architecture;
generating the third facial image according to the fifth skeletal architecture.
20. The apparatus according to any one of claims 11-19, wherein, prior to determining at least one target bone of the plurality of bones to which the deformation operation corresponds, the processing module is further configured to:
pre-configuring a movement mode and a movement range of each bone, wherein the movement mode comprises at least one of the following: translation, zooming, and rotation.
21. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein the content of the first and second substances,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-10.
22. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-10.
23. A computer program product comprising a computer program which, when executed by a processor, carries out the steps of the method of any one of claims 1 to 10.
CN202110970276.2A 2021-08-23 2021-08-23 Image processing method and device Pending CN113658307A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110970276.2A CN113658307A (en) 2021-08-23 2021-08-23 Image processing method and device

Publications (1)

Publication Number Publication Date
CN113658307A true CN113658307A (en) 2021-11-16

Family

ID=78481615

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110970276.2A Pending CN113658307A (en) 2021-08-23 2021-08-23 Image processing method and device

Country Status (1)

Country Link
CN (1) CN113658307A (en)



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination