CN113986105A - Face image deformation method and device, electronic equipment and storage medium


Info

Publication number
CN113986105A
Authority
CN
China
Prior art keywords
face image
deformed
displacement
region
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010732569.2A
Other languages
Chinese (zh)
Inventor
闫鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202010732569.2A
Priority to PCT/CN2021/104093 (published as WO2022022220A1)
Publication of CN113986105A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/18

Abstract

The disclosure relates to a face image deformation method and apparatus, an electronic device, and a storage medium, and belongs to the technical field of computer vision processing. The method includes: acquiring an initial trigger position selected by a user for a face image, and determining a region to be deformed in the face image according to the initial trigger position; acquiring a trigger track taking the initial trigger position as a trigger starting point, and extracting the track length and track direction of the trigger track; respectively determining the displacement length and displacement direction of each pixel point in the region to be deformed according to the track length and track direction; and adjusting the position of each pixel point in the region to be deformed according to the displacement length and displacement direction, so as to generate a deformed face image. In this way, the face image is deformed in a personalized manner in response to the user's trigger, a face deformation function with a higher degree of freedom is realized, and user stickiness to the product is increased.

Description

Face image deformation method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer vision processing technologies, and in particular, to a method and an apparatus for transforming a face image, an electronic device, and a storage medium.
Background
With the progress of computer vision processing technology, visual processing functions for face images are becoming increasingly diversified, for example, special-effect addition and filter superimposition in beautification applications.
In the related art, the visual processing functions for a face image are preset by the system; after a user selects a function, the corresponding processing effect is rendered according to default processing effect parameters preset by the system for that function. This cannot meet users' personalized visual processing requirements, so user stickiness to the product is low.
Disclosure of Invention
The present disclosure provides a method and an apparatus for transforming a face image, an electronic device, and a storage medium, so as to at least solve the problem in the related art that the processing of the face image depends on default effect parameters preset by a system and cannot meet personalized requirements of a user.
The technical scheme of the disclosure is as follows:
According to a first aspect of the embodiments of the present disclosure, there is provided a face image deformation method, including: acquiring an initial trigger position selected by a user for a face image, and determining a region to be deformed in the face image according to the initial trigger position; acquiring a trigger track taking the initial trigger position as a trigger starting point, and extracting the track length and track direction of the trigger track; respectively determining the displacement length of each pixel point in the region to be deformed according to the track length, and determining the displacement direction of each pixel point according to the track direction; and adjusting the position of each pixel point in the region to be deformed according to the displacement length and the displacement direction, so as to generate and display the deformed face image.
In addition, the method for deforming the face image of the embodiment of the present disclosure further includes the following additional technical features:
in an embodiment of the present disclosure, the determining, according to the initial trigger position of the trigger operation, a region to be deformed in the face image includes: calculating the distance between the initial trigger position and a plurality of preset control areas; and determining a preset control area with the minimum distance from the initial trigger position as the area to be deformed.
In an embodiment of the present disclosure, the determining, according to the initial trigger position of the trigger operation, a region to be deformed in the face image includes: determining target identification information corresponding to the initial trigger position in a plurality of pieces of identification information preset on the face image, wherein each piece of identification information in the plurality of pieces of identification information corresponds to an image area on the face image; and determining an image area corresponding to the target identification information as the area to be deformed.
In an embodiment of the present disclosure, the determining, according to the initial trigger position of the trigger operation, a region to be deformed in the face image includes: determining a target face key point in the face image according to the initial trigger position; acquiring a preset deformation radius corresponding to the target face key point; and determining the region to be deformed by taking the target face key point as a circle center and the preset deformation radius as a circle radius.
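As an illustrative sketch of this circular variant (the function name and the list-of-coordinates pixel representation are assumptions for illustration, not taken from the patent), the region to be deformed can be collected with a plain Euclidean distance test against the target face key point:

```python
def region_to_deform(pixels, keypoint, radius):
    """Collect the pixels that fall inside the circle whose center is
    the target face key point and whose radius is the preset
    deformation radius (a plain Euclidean-distance test)."""
    cx, cy = keypoint
    r2 = radius * radius
    return [(x, y) for (x, y) in pixels
            if (x - cx) ** 2 + (y - cy) ** 2 <= r2]
```

For example, on a small coordinate grid, `region_to_deform(pixels, (2, 2), 1)` keeps only the key point itself and its four axis-aligned neighbors.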
In an embodiment of the present disclosure, the determining, according to the track length, a displacement length of each pixel point in the to-be-deformed region includes: calculating a first distance between each pixel point in the region to be deformed and the key point of the target face; determining a first deformation coefficient corresponding to each pixel point according to the first distance; and calculating a first product of the track length and the first deformation coefficient, and taking the first product as the displacement length of each pixel point.
In an embodiment of the present disclosure, the determining, according to the track length, a displacement length of each pixel point in the to-be-deformed region includes: calculating a second distance between each pixel point in the region to be deformed and a preset reference key point in the face image; determining a second deformation coefficient corresponding to each pixel point according to the second distance; and calculating a second product of the track length and the second deformation coefficient, and taking the second product as the displacement length of each pixel point.
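Both coefficient variants above scale the track length by a factor that depends on a pixel's distance to a key point. A minimal sketch, assuming a linear falloff inside the deformation radius (the patent only requires that the coefficient depend on the distance; the linear form and all names are illustrative):

```python
def displacement_length(track_length, dist_to_keypoint, radius):
    """Displacement of one pixel: the track length multiplied by a
    deformation coefficient that decays with the pixel's distance to
    the key point (assumed linear falloff, clamped at zero)."""
    coeff = max(0.0, 1.0 - dist_to_keypoint / radius)
    return track_length * coeff
```

With this choice, a pixel at the key point moves by the full track length, and a pixel at or beyond the radius does not move at all.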
In an embodiment of the present disclosure, the adjusting, according to the displacement length and the displacement direction, the position of each pixel point in the region to be deformed includes: responding to the fact that the displacement direction belongs to the preset direction, and determining a target adjusting position of each pixel point according to the corresponding displacement direction and displacement length of each pixel point, wherein the preset direction comprises the horizontal direction and the vertical direction of the face image; and adjusting each pixel point to the target adjusting position.
In one embodiment of the present disclosure, further comprising: responding to the fact that the displacement direction does not belong to the preset direction, and splitting the displacement direction into a horizontal direction and a vertical direction; determining a first displacement in the horizontal direction and a second displacement in the vertical direction according to the displacement length; and controlling each pixel point in the region to be deformed to move according to the first displacement in the horizontal direction and the second displacement in the vertical direction so as to generate and display a deformed face image.
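The decomposition above is ordinary vector arithmetic: given a unit displacement direction, the first (horizontal) and second (vertical) displacements are the direction's components scaled by the displacement length. A hedged sketch (names are illustrative):

```python
def split_displacement(direction, length):
    """Split a displacement along an arbitrary unit direction into its
    horizontal (first) and vertical (second) components."""
    ux, uy = direction
    return length * ux, length * uy
```

For instance, a displacement of length 10 along the unit direction (0.6, 0.8) splits into a horizontal displacement of 6 and a vertical displacement of 8.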
According to a second aspect of the embodiments of the present disclosure, there is provided a face image morphing apparatus, including: the first determining module is configured to acquire an initial trigger position selected by a user for a face image, and determine a region to be deformed in the face image according to the initial trigger position; the extraction module is configured to acquire a trigger track taking the initial trigger position as a trigger starting point, and extract the track length and the track direction of the trigger track; the second determining module is configured to respectively determine the displacement length and the displacement direction of each pixel point in the to-be-deformed region according to the track length and the track direction; and the deformation adjusting module is configured to adjust the position of each pixel point in the region to be deformed according to the displacement length and the displacement direction so as to generate a deformed face image.
In addition, the deformation device for the face image of the embodiment of the present disclosure further includes the following additional technical features:
in an embodiment of the disclosure, the first determining module is specifically configured to: calculating the distance between the initial trigger position and a plurality of preset control areas; and determining a preset control area with the minimum distance from the initial trigger position as the area to be deformed.
In an embodiment of the disclosure, the first determining module is specifically configured to: determining target identification information corresponding to the initial trigger position in a plurality of pieces of identification information preset on the face image, wherein each piece of identification information in the plurality of pieces of identification information corresponds to an image area on the face image; and determining an image area corresponding to the target identification information as the area to be deformed.
In one embodiment of the present disclosure, the first determining module includes: a first determining unit configured to determine a target face key point in the face image according to the initial trigger position; an obtaining unit configured to obtain a preset deformation radius corresponding to the target face key point; and the second determining unit is configured to determine the to-be-deformed area by taking the target face key point as a circle center and the preset deformation radius as a circle radius.
In an embodiment of the disclosure, the second determining module is specifically configured to: calculating a first distance between each pixel point in the region to be deformed and the key point of the target face; determining a first deformation coefficient corresponding to each pixel point according to the first distance; and calculating a first product of the track length and the first deformation coefficient, and taking the first product as the displacement length of each pixel point.
In an embodiment of the disclosure, the second determining module is specifically configured to: calculating a second distance between each pixel point in the region to be deformed and a preset reference key point in the face image; determining a second deformation coefficient corresponding to each pixel point according to the second distance; and calculating a second product of the track length and the second deformation coefficient, and taking the second product as the displacement length of each pixel point.
In an embodiment of the present disclosure, the deformation adjusting module specifically includes: a third determining unit, configured to determine, in response to that the displacement direction belongs to the preset direction, a target adjustment position of each pixel point according to a displacement direction and a displacement length of each pixel point corresponding to the displacement direction, where the preset direction includes a horizontal direction and a vertical direction of the face image; a first adjusting unit configured to adjust each of the pixel points to the target adjustment position.
In an embodiment of the present disclosure, the deformation adjustment module further includes: a fourth determination unit configured to split the displacement direction into a horizontal direction and a vertical direction in response to the displacement direction not belonging to the preset direction; a fifth determination unit configured to determine a first displacement in the horizontal direction and a second displacement in the vertical direction according to the displacement length; and the second adjusting unit is configured to control each pixel point in the region to be deformed to move according to the first displacement in the horizontal direction and the second displacement in the vertical direction so as to generate and display the deformed face image.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the method for morphing a face image as described above.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a storage medium, wherein, when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the face image deformation method described above.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product, which, when executed by a processor of an electronic device, enables the electronic device to perform the method of morphing a face image as described above.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
the method comprises the steps of obtaining an initial trigger position selected by a user for a face image, determining a to-be-deformed area in the face image according to the initial trigger position, further obtaining a trigger track with the initial trigger position as a trigger starting point, extracting the track length and the track direction of the trigger track, respectively determining the displacement length and the displacement direction of each pixel point in the to-be-deformed area according to the track length and the track direction, and then adjusting the position of each pixel point in the to-be-deformed area according to the displacement length and the displacement direction so as to generate and display a deformed face image. Therefore, the personalized deformation of the face image is performed in response to the triggering of the user, the face deformation function with higher degree of freedom is realized, and the viscosity of the user and the product is increased.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a flowchart illustrating a method of morphing a face image according to a first exemplary embodiment;
FIG. 2 is a schematic diagram of a trajectory direction shown in accordance with a second exemplary embodiment;
FIG. 3 is a schematic diagram of a trajectory direction shown in accordance with a third exemplary embodiment;
FIG. 4 is a schematic diagram of a distorted scene of a face image according to a fourth exemplary embodiment;
fig. 5 is a flowchart illustrating a method of morphing a face image according to a fifth exemplary embodiment;
FIG. 6-1 is a schematic illustration of the distance between an initial trigger position and a corresponding preset control area, shown in accordance with a sixth exemplary embodiment;
FIG. 6-2 is a schematic illustration of the distance between an initial trigger position and a corresponding preset control area, shown in accordance with a seventh exemplary embodiment;
fig. 7 is a flowchart illustrating a method of morphing a face image according to the eighth exemplary embodiment;
fig. 8 is a schematic diagram illustrating identification information according to a ninth exemplary embodiment;
fig. 9 is a flowchart illustrating a method of morphing a face image according to the tenth exemplary embodiment;
fig. 10 is a schematic diagram illustrating a scene for determining a region to be deformed according to the eleventh exemplary embodiment;
fig. 11 is a schematic diagram illustrating a scene for determining a to-be-deformed region according to a twelfth exemplary embodiment;
fig. 12 is a flowchart illustrating a method of morphing a face image according to the thirteenth exemplary embodiment;
fig. 13 is a flowchart illustrating a method of morphing a face image according to the fourteenth exemplary embodiment;
fig. 14 is a flowchart illustrating a method of morphing a face image according to a fifteenth exemplary embodiment;
FIG. 15-1 is a schematic diagram illustrating a target adjustment position scenario in accordance with a sixteenth exemplary embodiment;
FIG. 15-2 is a schematic diagram of a morphed face image according to a seventeenth exemplary embodiment;
FIG. 16-1 is a schematic diagram illustrating a first and second orientation scenario in accordance with an eighteenth exemplary embodiment;
fig. 16-2 is a schematic diagram illustrating a morphed face image according to a nineteenth exemplary embodiment;
fig. 17 is a schematic structural diagram of a face image morphing apparatus according to a twentieth exemplary embodiment;
fig. 18 is a schematic structural diagram of a face image morphing apparatus according to a twenty-first exemplary embodiment;
fig. 19 is a schematic structural diagram of a face image morphing apparatus according to a twenty-second exemplary embodiment;
fig. 20 is a schematic structural diagram of a face image morphing apparatus according to a twenty-third exemplary embodiment;
fig. 21 is a schematic structural diagram of a terminal device according to a twenty-fourth exemplary embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
In view of the background above, the embodiments of the present disclosure address the problem that, in the related art, the effect of visual processing is determined by processing effect parameters preset by the system and can hardly meet users' personalized processing requirements.
Of course, the face image processing method according to the embodiments of the present disclosure may be applied to deformation processing of any subject image, such as a building image, a fruit image, and the like, besides the face image.
Fig. 1 is a flowchart illustrating a method for transforming a face image according to an exemplary embodiment, where as shown in fig. 1, the method for transforming a face image is used in an electronic device, which may be a smart phone, a portable computer, or the like, and includes the following steps.
In step 101, an initial trigger position selected by a user for a face image is obtained, and a region to be deformed in the face image is determined according to the initial trigger position.
The user may select the initial trigger position with, for example, a finger or a stylus.
In the embodiment of the disclosure, an initial trigger position selected by the user for the face image is acquired; for example, the position where the user first touches with a finger is the initial trigger position. A region to be deformed in the face image is then determined according to the initial trigger position, where the region to be deformed is the face region to be deformed, such as an eye or mouth region.
In some possible embodiments, an image region which is within a preset range from the initial trigger position may be directly determined as a region to be deformed; in other possible embodiments, a corresponding relationship between each trigger position and the to-be-deformed region may be pre-constructed, and if the initial trigger position belongs to the pre-constructed trigger position 1, the to-be-deformed region a corresponding to the trigger position 1 is determined as the to-be-deformed region. How to determine the specific details of the region to be deformed in the face image according to the initial trigger position will be described in the following embodiments, and details are not described herein.
In step 102, a trigger trajectory with the initial trigger position as a trigger starting point is obtained, and a trajectory length and a trajectory direction of the trigger trajectory are extracted.
It should be understood that, in this embodiment, the face deformation display is driven by the user's drag operation. Therefore, the trigger trajectory starting at the initial trigger position is acquired, for example, by detecting capacitance changes on the terminal device's touch screen, and the trajectory length and trajectory direction of the trigger trajectory are then extracted.
In one embodiment of the present disclosure, as shown in fig. 2, the end-point trigger position of the trigger track may be identified, the direction from the initial trigger position to the end-point trigger position is constructed as the track direction, and the track length may be obtained as the distance from the initial trigger position to the end-point trigger position.
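A minimal sketch of this end-point variant (function and variable names are assumptions for illustration): the track length is the Euclidean distance between the two trigger positions, and the track direction is the normalized difference vector:

```python
import math

def trajectory_from_endpoints(start, end):
    """Derive the track length and unit track direction from the
    initial trigger position and the end-point trigger position."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    length = math.hypot(dx, dy)
    if length == 0:
        return 0.0, (0.0, 0.0)  # no drag: nothing to deform
    return length, (dx / length, dy / length)
```

For example, a drag from (100, 200) to (130, 240) yields a track length of 50 and a unit direction of (0.6, 0.8).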
In another embodiment of the present disclosure, as shown in fig. 3, a plurality of trigger points may be sampled from the current trigger trajectory at preset time intervals. The direction from each trigger point's previous adjacent trigger point to that trigger point is constructed as a trajectory direction, so there are multiple trajectory directions, and the distance between each trigger point and its previous adjacent trigger point is taken as the trajectory length in the corresponding trajectory direction.
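This piecewise variant can be sketched as follows (names are illustrative; the time-interval sampling is assumed to have already produced the list of trigger points):

```python
import math

def piecewise_trajectory(points):
    """Split a sampled trigger trajectory into per-segment track
    lengths and unit track directions, one pair per adjacent pair of
    sampled trigger points (as in the fig. 3 variant)."""
    segments = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        seg_len = math.hypot(dx, dy)
        if seg_len > 0:  # skip stationary samples
            segments.append((seg_len, (dx / seg_len, dy / seg_len)))
    return segments
```

Each resulting (length, direction) pair can then drive the pixel displacement for its portion of the drag, rather than a single global direction.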
In step 103, the displacement length and the displacement direction of each pixel point in the region to be deformed are respectively determined according to the track length and the track direction.
In the embodiment of the disclosure, in order to respond to the triggering operation of the user to perform the deformation operation on the human face, the displacement length of each pixel point in the region to be deformed is determined according to the track length, and the displacement direction of each pixel point is determined according to the track direction, so that the triggering track of the user is reflected to the movement of each pixel point in the region to be deformed.
It should be noted that the displacement length of each pixel point in the region to be deformed is directly proportional to the track length: the farther the user drags, the larger the displacement, the stronger the pull on the region to be deformed, and the more exaggerated the resulting face. The displacement direction may be the same as the track direction, or may be set differently according to the user's individual needs; for example, if the user sets the track direction to the left, the corresponding displacement direction is to the right, and so on.
In step 104, the position of each pixel point in the region to be deformed is adjusted according to the displacement length and the displacement direction, so as to generate a deformed face image.
In this embodiment, the position of each pixel point in the region to be deformed is adjusted according to the displacement length and the displacement direction, so as to generate a deformed face image. For example, as shown in the left diagram of fig. 4, when the user's initial trigger position is on the eyelid, the determined region to be deformed is the entire eye region; if the user's trigger trajectory moves downward by a certain trajectory length, the eye region of the face image is deformed as shown in the right diagram of fig. 4.
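A toy sketch of this adjustment step, assuming a dictionary-based image keyed by pixel coordinates and a single shared displacement (a real implementation would use per-pixel displacement lengths and backward mapping with interpolation to avoid holes; all names are illustrative):

```python
def deform(image, region, disp_len, direction):
    """Move every pixel of the region to be deformed by its displacement
    (forward mapping on a {(x, y): value} image; simplified to a single
    shared displacement length for brevity)."""
    ux, uy = direction
    out = dict(image)  # pixels outside the region keep their values
    for (x, y) in region:
        tx = round(x + disp_len * ux)
        ty = round(y + disp_len * uy)
        out[(tx, ty)] = image[(x, y)]  # write pixel at its new position
    return out
```

Shifting the eye region downward, as in the fig. 4 example, corresponds to calling this with direction (0.0, 1.0).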
In summary, the face image deformation method according to the embodiments of the present disclosure acquires an initial trigger position selected by a user for the face image and determines a region to be deformed in the face image according to the initial trigger position; it then acquires a trigger trajectory taking the initial trigger position as the trigger starting point, extracts the trajectory length and trajectory direction of the trigger trajectory, determines the displacement length of each pixel point in the region to be deformed according to the trajectory length and the displacement direction of each pixel point according to the trajectory direction, and then adjusts the position of each pixel point in the region to be deformed according to the displacement length and displacement direction, so as to generate and display the deformed face image. In this way, personalized deformation of the face image is achieved through interaction with the user, a face deformation function with a higher degree of freedom is realized, and user stickiness to the product is increased.
It should be noted that, in different application scenarios, the manner of determining the region to be deformed in the face image according to the initial trigger position is different, and the following is exemplified:
example one:
as shown in fig. 5, the determining the region to be deformed in the face image according to the initial trigger position of the trigger operation includes:
in step 201, the distances between the initial trigger position and a plurality of preset control areas are calculated.
In step 202, a preset control area with the minimum distance from the initial trigger position is determined as an area to be deformed.
In this embodiment, a plurality of control regions are set in the face image in advance, and each control region corresponds to one face region; for example, control region 1 corresponds to the left-eye region of the face, and control region 2 corresponds to the mouth region. The different preset control regions may be calibrated from big data or by the user according to personal needs; when calibrated according to the user's personal needs, the preset control regions may be divided according to dividing tracks the user traces on the face image, and the like.
After the initial trigger position is acquired, the distances between the initial trigger position and the plurality of preset control regions are calculated. For example, as shown in fig. 6-1, the center-point coordinates of the initial trigger position and of each preset control region are extracted, and the distance between the two center points is taken as the distance between the initial trigger position and the corresponding preset control region. For another example, as shown in fig. 6-2, the closest distance between the edge contour of the initial trigger position and the edge contour of each preset control region is calculated as the distance between the initial trigger position and the corresponding preset control region.
Then, the preset control area with the minimum distance from the initial trigger position is determined as the area to be deformed. Of course, in this embodiment, if the initial trigger position falls entirely within one preset control region, that region is directly used as the region to be deformed; if the initial trigger position overlaps a plurality of preset control regions, the ratio of the area of the initial trigger position inside each control region to the total area of the initial trigger position is counted, and the preset control region with the highest ratio is used as the region to be deformed. In this way, a single touch point can drive the deformation of a whole region.
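The nearest-region selection of steps 201-202 can be sketched as follows. The control region names, their center coordinates, and the use of a center-point distance metric are illustrative assumptions, not values from the disclosure:

```python
import math

# Hypothetical control regions, each given by the (x, y) coordinates of its
# center point; in the disclosure these would be calibrated from big data
# or from dividing tracks drawn by the user.
control_regions = {
    "left_eye": (120, 150),
    "right_eye": (220, 150),
    "mouth": (170, 260),
}

def nearest_control_region(trigger_center, regions):
    """Return the preset control region whose center point is closest
    to the center of the initial trigger position (steps 201-202)."""
    return min(regions, key=lambda name: math.dist(trigger_center, regions[name]))

region = nearest_control_region((160, 250), control_regions)  # "mouth" is closest
```

A touch near the chin would thus select the mouth region as the region to be deformed, even if the touch does not land exactly inside it.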
Example two:
as shown in fig. 7, the determining the region to be deformed in the face image according to the initial trigger position of the trigger operation includes:
in step 301, target identification information corresponding to an initial trigger position is determined among a plurality of pieces of identification information preset on a face image, wherein each piece of identification information in the plurality of pieces of identification information corresponds to an image area on the face image.
In this embodiment, as shown in fig. 8, a plurality of pieces of identification information are preset on the face image (they may or may not be displayed), wherein each piece of identification information serves as a control handle for an image area in the face image, and the correspondence between each piece of identification information and its image area is stored in a preset database. Continuing to refer to fig. 8, for identification information 1, for example, the corresponding image area is the forehead area.
In step 302, the image area corresponding to the target identification information is determined as the area to be deformed.
In this embodiment, the target identification information corresponding to the initial trigger position is determined; for example, identification information whose position overlaps the initial trigger position is taken as the target identification information. Alternatively, the distance between the center point of the initial trigger position and the center point of each piece of identification information may be calculated, and the identification information closest to the center point of the initial trigger position is taken as the target identification information.
As mentioned above, there is a correspondence between identification information and image areas; therefore, in this embodiment, the image area corresponding to the target identification information is determined as the area to be deformed, for example by querying a preset database. In this way, the image area to be deformed can be determined directly from the target identification information triggered by the user, and because the image area and the identification information are bound in advance, the deformation efficiency is improved.
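A minimal sketch of this lookup, assuming a plain dictionary stands in for the preset database and the marker center coordinates are hypothetical:

```python
import math

# Hypothetical data: the center point of each identification marker on the
# image, and the "preset database" binding each marker to an image region.
id_centers = {1: (170, 60), 2: (120, 200), 3: (170, 300)}
id_to_region = {1: "forehead", 2: "left_cheek", 3: "chin"}

def region_to_deform(trigger_center):
    """Pick the identification whose center is closest to the trigger
    position (steps 301-302), then query the mapping to obtain the
    bound image region, which becomes the region to be deformed."""
    target_id = min(id_centers, key=lambda i: math.dist(trigger_center, id_centers[i]))
    return id_to_region[target_id]
```

Because the binding is computed once in advance, determining the region to be deformed reduces to one distance comparison and one dictionary lookup per trigger.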
Example three:
as shown in fig. 9, the determining the region to be deformed in the face image according to the initial trigger position of the trigger operation includes:
step 401, determining a target face key point in the face image according to the initial trigger position.
In this embodiment, a target face key point is determined in the face image according to the initial trigger position. The target face key point may be the center point of the area corresponding to the initial trigger position, or may be obtained with a pre-trained convolutional neural network: as shown in fig. 10, after the face image is input, the convolutional neural network outputs a plurality of face key points (101 in the figure), and the face key point associated with the initial trigger position is then determined as the target face key point. When the initial trigger position overlaps exactly one face key point, that key point is used as the target face key point; when the initial trigger position overlaps a plurality of face key points, one of them is randomly selected as the target face key point; and when the initial trigger position overlaps no face key point, the key point closest to the center point or an edge point of the initial trigger position is determined as the target face key point.
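The key-point selection just described can be sketched as follows; the overlap radius and the tie-breaking choice (first overlapping point rather than a random one) are illustrative assumptions:

```python
import math

def target_key_point(trigger_center, key_points, overlap_radius=5.0):
    """Pick the target face key point for an initial trigger position:
    a key point overlapping the trigger position wins; otherwise the
    key point closest to the trigger center is used.

    key_points: list of (x, y) tuples, e.g. the output of a landmark model.
    """
    overlapping = [p for p in key_points
                   if math.dist(trigger_center, p) <= overlap_radius]
    if overlapping:
        return overlapping[0]  # the disclosure allows a random choice here
    return min(key_points, key=lambda p: math.dist(trigger_center, p))
```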
And 402, acquiring a preset deformation radius corresponding to the key point of the target face.
In some possible embodiments, a fixed preset deformation radius is set for the face image. The preset deformation radius in this embodiment may be calibrated from preset big data, or may be set according to the size of the face image, where the larger the face image, the larger the corresponding preset deformation radius. Of course, in some possible embodiments, the preset deformation radius may also be set according to the user's personal preference.
In other possible examples, the preset deformation radii in the face image may differ from region to region. For example, the face image is divided into a plurality of regions, such as a forehead region, a cheek region, and a mouth region, and a preset deformation radius is set for each region, with different regions having different preset deformation radii so that different regions produce different deformation effects. The region where the target face key point is located is then determined, and the preset deformation radius of that region is used as the preset deformation radius of the target face key point.
In this example, a corresponding preset deformation radius may also be set in advance for each face key point, with the correspondence between key point coordinates and preset deformation radii stored; querying this correspondence yields the preset deformation radius of the target face key point, which further improves the flexibility of face deformation and ensures the diversity of deformation effects.
And step 403, determining a region to be deformed by taking the key point of the target face as the center of a circle and taking the preset deformation radius as the circular radius.
In this embodiment, as shown in fig. 11, a region to be deformed is determined by taking a target face key point as a center of a circle and taking a preset deformation radius as a circular radius, where the region to be deformed may include a background region and the like in addition to a face region.
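Step 403's circular region can be sketched as a boolean mask over image coordinates; the mask representation and NumPy usage are assumptions for illustration:

```python
import numpy as np

def region_to_deform_mask(h, w, center, radius):
    """Boolean mask of the region to be deformed: all pixels within
    `radius` of the target face key point `center` = (cx, cy).
    The mask may cover background pixels as well as face pixels."""
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = center
    return (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2

mask = region_to_deform_mask(100, 100, center=(50, 50), radius=10)
```

Subsequent per-pixel displacement need only be computed where `mask` is `True`, which keeps the deformation local to the selected circle.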
In summary, according to the method for deforming a face image in the embodiment of the present disclosure, the region to be deformed corresponding to the initial trigger position is flexibly determined according to the needs of the application scenario, which improves user engagement with the product.
In order to make it clear for those skilled in the art how to determine the displacement length of each pixel point in the region to be deformed according to the track length in the embodiment of the present disclosure, the following description is made with reference to a specific example:
example one:
in this example, the method for determining the to-be-deformed region is the third example shown in fig. 9, so that, as shown in fig. 12, determining the displacement length of each pixel point in the to-be-deformed region according to the track length includes:
in step 501, a first distance between each pixel point in the region to be deformed and a key point of the target face is calculated.
In some possible embodiments, the first distance between each pixel point in the region to be deformed and the target face key point is calculated from the coordinates of the pixel point and the coordinates of the target face key point. Because the target face key point is located in the face area of the face image, deformation centered on the face area can be guaranteed.
In step 502, a first deformation coefficient of each pixel point is determined according to the first distance.
In step 503, a first product of the trajectory length and the first deformation coefficient is calculated, and the first product is used as the displacement length of each pixel.
In some possible examples, the first distance and the first deformation coefficient are inversely related: the larger the first distance, the smaller the corresponding first deformation coefficient. After the first product of the track length and the first deformation coefficient is calculated and used as the displacement length of each pixel point, displacing the pixels by these lengths produces a liquefaction effect centered on the target face key point.
In another possible example, the first distance and the first deformation coefficient are directly proportional: the larger the first distance, the larger the corresponding first deformation coefficient. After the first product of the track length and the first deformation coefficient is calculated and used as the displacement length of each pixel point, displacing the pixels by these lengths produces a focus-blurring effect.
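Steps 501-503 with both coefficient variants can be sketched as follows. The linear falloff `1 - d/radius` is an assumed, illustrative coefficient: the disclosure only fixes the inverse or direct relationship, not its exact form.

```python
import math

def displacement_length(pixel, key_point, track_length, radius, liquefy=True):
    """Displacement length of one pixel as track_length * coefficient.
    liquefy=True : coefficient falls off with the pixel's distance to the
                   target face key point (inverse relation, liquefaction).
    liquefy=False: coefficient grows with the distance (direct relation,
                   focus-blurring effect)."""
    d = min(math.dist(pixel, key_point), radius)  # first distance, capped
    coeff = (1.0 - d / radius) if liquefy else (d / radius)
    return track_length * coeff
```

With `liquefy=True`, a pixel at the key point moves by the full track length while a pixel on the circle boundary does not move at all, which is what pins the deformation to its center.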
Example two:
in this example, as shown in fig. 13, determining the displacement length of each pixel point in the to-be-deformed region according to the track length includes:
in step 601, a second distance between each pixel point in the region to be deformed and a preset reference key point in the face image is calculated.
The preset reference key point in the face image is a preset key point in a fixed face area; it may be a key point on the tip of the nose, on the forehead, or the like. Different placements of the preset reference key point inevitably produce different deformation effects.
In step 602, a second deformation coefficient of each pixel point is determined according to the second distance.
In step 603, a second product of the trajectory length and the second deformation coefficient is calculated, and the second product is used as the displacement length of each pixel point.
In some possible examples, the second distance and the second deformation coefficient are inversely related: the larger the second distance, the smaller the corresponding second deformation coefficient. After the second product of the track length and the second deformation coefficient is calculated and used as the displacement length of each pixel point, displacing the pixels by these lengths produces a liquefaction effect centered on the preset reference key point.
In another possible example, the second distance and the second deformation coefficient are directly proportional: the larger the second distance, the larger the corresponding second deformation coefficient. After the second product of the track length and the second deformation coefficient is calculated and used as the displacement length of each pixel point, displacing the pixels by these lengths produces a focus-blurring effect.
Example three:
In this example, the track length is directly used as the displacement length, so that after each pixel point moves by this displacement length, the entire region to be deformed is translated as a whole.
Of course, to guarantee the processing effect of the deformed image in actual operation, a limit value may be set for the displacement length, and whenever the displacement length exceeds the limit value, the displacement length is set to the limit value. The limit value may differ according to the user's initial trigger position and may be set according to personal preference; for example, when the initial trigger position is on the eyes, the corresponding limit value may be larger.
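The limit-value clamp can be sketched as follows; the per-region limit values and region names are hypothetical:

```python
# Hypothetical per-region limit values for the displacement length; the
# disclosure notes the limit may differ with the initial trigger position,
# e.g. a larger limit when the trigger lands on the eyes.
limits = {"eyes": 40.0, "mouth": 25.0, "default": 15.0}

def clamped_displacement(length, region="default"):
    """Cap a displacement length at the limit of the triggered region so
    that extreme drags cannot destroy the deformed image."""
    limit = limits.get(region, limits["default"])
    return min(length, limit)
```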
Therefore, in the embodiment, the displacement of each pixel point can be determined in different modes to realize different deformation effects.
Furthermore, after the displacement length is determined, the position of each pixel point in the to-be-deformed area can be adjusted by adopting different implementation modes according to the displacement length and the displacement direction.
In an embodiment of the present disclosure, as shown in fig. 14, the step of adjusting the position of each pixel point in the region to be deformed according to the displacement length and the displacement direction includes:
in step 701, in response to the displacement direction belonging to a preset direction, determining a target adjustment position of each pixel point according to the displacement direction and the displacement length of each pixel point corresponding to the displacement direction, wherein the preset direction includes a horizontal direction and a vertical direction of the face image.
In an embodiment of the present disclosure, an end point trigger position of a trigger trajectory is obtained, a direction from an initial trigger position to the end point trigger position is determined as a displacement direction, and whether the displacement direction is a horizontal direction or a vertical direction is determined, where if the displacement direction is the horizontal direction or the vertical direction, it is determined that the displacement direction belongs to a preset direction.
Of course, in different application scenarios, the preset direction may also be any one to three of the up, down, left, and right directions, or any other directions, and may be set according to the needs of the scenario.
Specifically, if the displacement direction belongs to the preset direction, the target adjustment position of each pixel point is determined according to the displacement direction and the displacement length of the pixel point in that direction; as shown in fig. 15-1, the target position of each pixel point is the position located the displacement length away along that single direction.
In step 702, each pixel point is adjusted to a target adjustment position.
For example, as shown in fig. 15-2, when the user triggers the identification information 1 on the forehead shown in fig. 8 and the corresponding displacement points toward the upper right of the face image, each pixel point of the region to be deformed is adjusted to its target adjustment position, yielding a deformed face image with a stretched forehead.
In an embodiment of the present disclosure, with continuing reference to fig. 14, after step 701 above, the method further includes:
in step 703, in response to the displacement direction not belonging to the preset direction, the displacement direction is split into a horizontal direction and a vertical direction.
It should be understood that in a trigger trajectory, the corresponding displacement direction may be roughly lower left, lower right, upper left, upper right, and so on, and each such direction can be split into a horizontal component and a vertical component.
In step 704, a first displacement in the horizontal direction and a second displacement in the vertical direction are determined based on the length of the displacement.
In the embodiment of the present disclosure, the first displacement in the horizontal direction and the second displacement in the vertical direction are determined according to the displacement length. As shown in fig. 16-1, if the displacement direction is toward the lower left, it can be split into a leftward direction and a downward direction, and the displacement length is decomposed in the rectangular coordinate system into a first displacement in the leftward direction and a second displacement in the downward direction.
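The decomposition in step 704 can be sketched with elementary vector arithmetic; the coordinate convention (x rightward, y downward, as in most image APIs) is an assumption:

```python
import math

def split_displacement(start, end, length):
    """Split a drag from `start` to `end` into a first displacement along
    the horizontal axis and a second displacement along the vertical axis,
    scaled so that together they realize the given displacement length."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    norm = math.hypot(dx, dy)
    if norm == 0:
        return 0.0, 0.0  # no drag, no displacement
    return length * dx / norm, length * dy / norm
```

For a drag along a 3-4-5 triangle, a displacement length of 10 splits into a horizontal component of 6 and a vertical component of 8, matching the rectangular-coordinate decomposition of fig. 16-1.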
In step 705, each pixel point in the region to be deformed is controlled to move according to the first displacement in the horizontal direction and the second displacement in the vertical direction, so as to generate and display the deformed face image.
In this embodiment, the deformed face image may be generated using the Frame Buffer Object (FBO) technique. First, according to the first displacement, each pixel point in the region to be deformed is moved in the horizontal direction to a first pixel position to obtain a reference face image; at this time, the reference face image is not displayed but is cached in an off-screen buffer. Then, the second pixel position of each pixel point is determined according to the second displacement in the vertical direction, and the reference face image is used as an input texture map to adjust each pixel point from its first pixel position to its second pixel position. In this way, only the face image finally adjusted to the second pixel positions is rendered and displayed, which greatly reduces the processing cost and improves the image generation efficiency.
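The FBO-style two-pass idea can be sketched on the CPU with NumPy; uniform integer shifts and wrap-around boundaries stand in for the per-pixel displacements and texture sampling a real GPU implementation would use:

```python
import numpy as np

def two_pass_shift(image, dx, dy):
    """Minimal sketch of the two-pass deformation: the horizontal pass
    writes an intermediate (off-screen) image, and only the result of the
    vertical pass over that intermediate image would be rendered."""
    # Pass 1: horizontal displacement into the off-screen buffer.
    intermediate = np.roll(image, dx, axis=1)
    # Pass 2: vertical displacement, sampling the intermediate as a texture.
    return np.roll(intermediate, dy, axis=0)
```

Because the intermediate image never needs to be presented, only the output of the second pass is rendered, which is the cost saving the FBO approach targets.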
Therefore, the method for deforming a face image in this embodiment can realize arbitrary deformation of the face according to trigger operations such as the user's dragging. After the trigger trajectory of each trigger operation ends, the deformation effect does not rebound but is retained, so that multiple trigger operations of the user can all be reflected in the face image; for example, as shown in fig. 16-2, the user can deform his or her own face by dragging the identification information 1, 4, and 5, achieving the personalized effect shown in fig. 16-2.
In summary, the method for deforming a face image according to the embodiment of the present disclosure can flexibly adjust the position of each pixel point in the region to be deformed, and meet the requirement of generating a personalized deformed face image.
In order to implement the above embodiments, the embodiments of the present disclosure further provide a face image morphing apparatus.
Fig. 17 is a schematic structural diagram illustrating a face image morphing apparatus according to an exemplary embodiment. As shown in fig. 17, the face image morphing apparatus includes: a first determining module 171, an extraction module 172, a second determining module 173, and a deformation adjusting module 174, wherein,
the first determining module 171 is configured to obtain an initial trigger position selected by the user for the face image, and determine a region to be deformed in the face image according to the initial trigger position;
the extraction module 172 is configured to acquire a trigger trajectory taking the initial trigger position as a trigger starting point, and extract a trajectory length and a trajectory direction of the trigger trajectory;
the second determining module 173 is configured to determine the displacement length of each pixel point in the region to be deformed according to the trajectory length, and to determine the displacement direction of each pixel point according to the trajectory direction;
and the deformation adjusting module 174 is configured to adjust the position of each pixel point in the region to be deformed according to the displacement length and the displacement direction, so as to generate a deformed face image.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
In summary, the face image deformation apparatus of the embodiment of the present disclosure obtains an initial trigger position selected by a user for a face image, determines the region to be deformed in the face image according to the initial trigger position, acquires a trigger trajectory taking the initial trigger position as a trigger starting point, extracts the trajectory length and trajectory direction of the trigger trajectory, determines the displacement length of each pixel point in the region to be deformed according to the trajectory length, determines the displacement direction of each pixel point according to the trajectory direction, and then adjusts the position of each pixel point in the region to be deformed according to the displacement length and the displacement direction, so as to generate and display the deformed face image. In this way, the face image is deformed in a personalized manner in response to the user's triggering, realizing a face deformation function with a higher degree of freedom and increasing user engagement with the product.
It should be noted that, in different application scenarios, the first determining module 171 determines the to-be-deformed region in the face image according to the initial trigger position in different ways, which is exemplified as follows:
example one:
in this example, the first determining module 171 is specifically configured to:
calculating the distance between the initial trigger position and a plurality of preset control areas;
and determining a preset control area with the minimum distance from the initial trigger position as an area to be deformed.
Example two:
in this example, the first determining module 171 is specifically configured to:
determining target identification information corresponding to an initial trigger position in a plurality of pieces of identification information preset on the face image, wherein each piece of identification information in the plurality of pieces of identification information corresponds to an image area on the face image;
and querying a preset database, and determining an image area corresponding to the target identification information as an area to be deformed.
Example three:
in this example, as shown in fig. 18, on the basis as shown in fig. 17, the first determination module 171 includes: a first determining unit 1711, a first obtaining unit 1712, a second determining unit 1713, wherein,
a first determining unit 1711 configured to determine a target face key point in the face image according to the initial trigger position;
a first obtaining unit 1712 configured to obtain a preset deformation radius corresponding to the target face key point;
the second determining unit 1713 is configured to determine the region to be deformed by taking the target face key point as a center of a circle and taking the preset deformation radius as a circular radius.
In one embodiment of the present disclosure, the first determining unit 1711 is specifically configured to:
inputting the face image into a pre-trained convolutional neural network to generate a plurality of face key points;
and determining the face key points associated with the initial trigger positions as target face key points.
In summary, the face image deformation apparatus according to the embodiments of the present disclosure flexibly determines the region to be deformed corresponding to the initial trigger position according to the needs of the application scenario, thereby improving user engagement with the product.
In order to make it clear for those skilled in the art how to determine the displacement length of each pixel point in the region to be deformed according to the track length in the embodiment of the present disclosure, the following description is made with reference to a specific example:
example one:
in this example, the second determining module 173 is specifically configured to:
calculating a first distance between each pixel point in the region to be deformed and a key point of a target face;
determining a first deformation coefficient corresponding to each pixel point according to the first distance;
and calculating a first product of the track length and the first deformation coefficient, and taking the first product as the displacement length of each pixel point.
Example two:
in this example, the second determining module 173 is specifically configured to:
calculating a second distance between each pixel point in the region to be deformed and a preset reference key point in the face image;
determining a second deformation coefficient corresponding to each pixel point according to the second distance;
and calculating a second product of the track length and the second deformation coefficient, and taking the second product as the displacement length of each pixel point.
Therefore, in this embodiment, the displacement length of each pixel point can be determined in different ways to realize different deformation effects.
Furthermore, after the displacement length is determined, the position of each pixel point in the to-be-deformed area can be adjusted by adopting different implementation modes according to the displacement length and the displacement direction.
In one embodiment of the present disclosure, as shown in fig. 19, on the basis of fig. 17, the deformation adjusting module 174 includes: a third determining unit 1741, a first adjusting unit 1742, wherein,
a third determining unit 1741 configured to determine, in response to that the displacement direction belongs to a preset direction, a target adjustment position of each pixel point according to the corresponding displacement direction and displacement length of each pixel point, where the preset direction includes a horizontal direction and a vertical direction of the face image;
a first adjusting unit 1742 configured to adjust each pixel point to a target adjustment position.
In one embodiment of the present disclosure, referring to fig. 20, on the basis as shown in fig. 19, the deformation adjustment module 174 further includes: a fourth determining unit 1743, a fifth determining unit 1744, and a second adjusting unit 1745, wherein,
a fourth determining unit 1743 configured to split the displacement direction into a horizontal direction and a vertical direction in response to the displacement direction not belonging to the preset direction;
a fifth determining unit 1744 configured to determine a first displacement in the horizontal direction and a second displacement in the vertical direction according to the displacement length;
and a second adjusting unit 1745 configured to control each pixel point in the region to be deformed to move according to the first displacement in the horizontal direction and the second displacement in the vertical direction, so as to generate and display the deformed face image.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
In summary, the face image deformation apparatus according to the embodiment of the present disclosure can flexibly adjust the position of each pixel point in the region to be deformed, meeting the requirements of personalized deformed face image generation.
In order to realize the above embodiment, the present disclosure further provides an electronic device. Fig. 21 is a block diagram of an electronic device proposed according to the present disclosure.
As shown in fig. 21, the electronic device 200 includes:
a memory 210 and a processor 220, a bus 230 connecting different components (including the memory 210 and the processor 220), wherein the memory 210 stores a computer program, and when the processor 220 executes the program, the method for deforming the face image according to the embodiment of the present disclosure is implemented.
Bus 230 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Electronic device 200 typically includes a variety of electronic device readable media. Such media may be any available media that is accessible by electronic device 200 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 210 may also include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)240 and/or cache memory 250. The electronic device 200 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 260 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 21, and commonly referred to as a "hard drive"). Although not shown in FIG. 21, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 230 by one or more data media interfaces. Memory 210 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the disclosure.
A program/utility 280 having a set (at least one) of program modules 270, including but not limited to an operating system, one or more application programs, other program modules, and program data, each of which or some combination thereof may comprise an implementation of a network environment, may be stored in, for example, the memory 210. The program modules 270 generally perform the functions and/or methodologies of the embodiments described in this disclosure.
Electronic device 200 may also communicate with one or more external devices 290 (e.g., keyboard, pointing device, display 291, etc.), with one or more devices that enable a user to interact with electronic device 200, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 200 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 292. Also, the electronic device 200 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 293. As shown in FIG. 21, the network adapter 293 communicates with the other modules of the electronic device 200 via the bus 230. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 200, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processor 220 executes various functional applications and data processing by executing programs stored in the memory 210.
It should be noted that, for the implementation process and the technical principle of the electronic device of this embodiment, reference is made to the foregoing explanation of the method for transforming a face image according to the embodiment of the present disclosure, and details are not repeated here.
In summary, the electronic device according to the embodiment of the present disclosure obtains an initial trigger position selected by a user for a face image, determines the region to be deformed in the face image according to the initial trigger position, acquires a trigger trajectory taking the initial trigger position as a trigger starting point, extracts the trajectory length and trajectory direction of the trigger trajectory, determines the displacement length of each pixel point in the region to be deformed according to the trajectory length, determines the displacement direction of each pixel point according to the trajectory direction, and then adjusts the position of each pixel point in the region to be deformed according to the displacement length and the displacement direction, so as to generate and display the deformed face image. In this way, based on interaction with the user, personalized deformation of the face image is achieved, a face deformation function with a higher degree of freedom is realized, and user engagement with the product is increased.
In order to implement the above embodiments, the present disclosure also provides a storage medium.
When the instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the method for deforming a face image as described above.
In order to implement the above embodiments, the present disclosure also provides a computer program product which, when executed by a processor of an electronic device, enables the electronic device to perform the method for deforming a face image as described above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method for deforming a face image, comprising:
acquiring an initial trigger position selected by a user for a face image, and determining a region to be deformed in the face image according to the initial trigger position;
acquiring a trigger track taking the initial trigger position as a trigger starting point, and extracting the track length and the track direction of the trigger track;
respectively determining the displacement length and the displacement direction of each pixel point in the region to be deformed according to the track length and the track direction;
and adjusting the position of each pixel point in the region to be deformed according to the displacement length and the displacement direction so as to generate a deformed face image.
2. The method as claimed in claim 1, wherein determining the region to be deformed in the face image according to the initial trigger position comprises:
calculating the distance between the initial trigger position and a plurality of preset control areas;
and determining a preset control area with the minimum distance from the initial trigger position as the area to be deformed.
3. The method as claimed in claim 1, wherein determining the region to be deformed in the face image according to the initial trigger position comprises:
determining target identification information corresponding to the initial trigger position in a plurality of pieces of identification information preset on the face image, wherein each piece of identification information in the plurality of pieces of identification information corresponds to an image area on the face image;
and determining an image area corresponding to the target identification information as the area to be deformed.
4. The method as claimed in claim 1, wherein determining the region to be deformed in the face image according to the initial trigger position comprises:
determining a target face key point in the face image according to the initial trigger position;
acquiring a preset deformation radius corresponding to the target face key point;
and determining the region to be deformed as a circle taking the target face key point as the center and the preset deformation radius as the radius.
5. The method of claim 4, wherein determining the displacement length of each pixel point in the region to be deformed according to the track length comprises:
calculating a first distance between each pixel point in the region to be deformed and the target face key point;
determining a first deformation coefficient corresponding to each pixel point according to the first distance;
and calculating a first product of the track length and the first deformation coefficient, and taking the first product as the displacement length of each pixel point.
6. The method of claim 1, wherein determining the displacement length of each pixel point in the region to be deformed according to the track length comprises:
calculating a second distance between each pixel point in the region to be deformed and a preset reference key point in the face image;
determining a second deformation coefficient corresponding to each pixel point according to the second distance;
and calculating a second product of the track length and the second deformation coefficient, and taking the second product as the displacement length of each pixel point.
7. The method according to any one of claims 1 to 6, wherein the adjusting the position of each pixel point in the region to be deformed according to the displacement length and the displacement direction comprises:
in response to determining that the displacement direction belongs to a preset direction, determining a target adjustment position of each pixel point according to the displacement direction and the displacement length corresponding to the pixel point, wherein the preset direction comprises a horizontal direction and a vertical direction of the face image;
and adjusting each pixel point to the target adjustment position.
8. An apparatus for deforming a face image, comprising:
the first determining module is configured to acquire an initial trigger position selected by a user for a face image, and determine a region to be deformed in the face image according to the initial trigger position;
the extraction module is configured to acquire a trigger track taking the initial trigger position as a trigger starting point, and extract the track length and the track direction of the trigger track;
the second determining module is configured to respectively determine the displacement length and the displacement direction of each pixel point in the to-be-deformed region according to the track length and the track direction;
and the deformation adjusting module is configured to adjust the position of each pixel point in the region to be deformed according to the displacement length and the displacement direction so as to generate a deformed face image.
9. An electronic device, comprising:
a processor;
a memory configured to store the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method for deforming a face image according to any one of claims 1 to 7.
10. A storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method for deforming a face image according to any one of claims 1 to 7.
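Claims 4, 5, and 7 above can be read together as the following sketch. All names are hypothetical; in particular, the linear falloff of the deformation coefficient and the snap of the displacement direction to the nearest axis are illustrative assumptions, since the claims only require that the coefficient be determined from the distance to the key point and that only horizontal or vertical directions be applied:

```python
import numpy as np

def region_from_keypoint(keypoint, radius, shape):
    """Claim 4: circular region to be deformed, centered on the target face key point."""
    ys, xs = np.mgrid[:shape[0], :shape[1]]
    dist = np.hypot(ys - keypoint[0], xs - keypoint[1])
    return dist <= radius, dist

def displacement_lengths(track_length, dist, radius):
    """Claim 5: per-pixel displacement length = track length * deformation coefficient.

    A linear falloff (1 at the key point, 0 at the rim) is one plausible choice
    of coefficient; the claim only requires it to be a function of the distance.
    """
    coeff = np.clip(1.0 - dist / radius, 0.0, 1.0)
    return track_length * coeff

def snap_direction(track_direction):
    """Claim 7: keep only horizontal or vertical displacement directions."""
    dy, dx = track_direction
    return (0.0, float(np.sign(dx))) if abs(dx) >= abs(dy) else (float(np.sign(dy)), 0.0)
```

With this choice, a pixel at the key point moves by the full trajectory length, pixels at the rim of the circle do not move, and a diagonal drag is resolved to whichever axis dominates.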
CN202010732569.2A 2020-07-27 2020-07-27 Face image deformation method and device, electronic equipment and storage medium Pending CN113986105A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010732569.2A CN113986105A (en) 2020-07-27 2020-07-27 Face image deformation method and device, electronic equipment and storage medium
PCT/CN2021/104093 WO2022022220A1 (en) 2020-07-27 2021-07-01 Morphing method and morphing apparatus for facial image

Publications (1)

Publication Number Publication Date
CN113986105A true CN113986105A (en) 2022-01-28

Family

ID=79731505


Country Status (2)

Country Link
CN (1) CN113986105A (en)
WO (1) WO2022022220A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107154030A (en) * 2017-05-17 2017-09-12 腾讯科技(上海)有限公司 Image processing method and device, electronic equipment and storage medium
CN109087239A (en) * 2018-07-25 2018-12-25 腾讯科技(深圳)有限公司 A kind of face image processing process, device and storage medium
CN109242765A (en) * 2018-08-31 2019-01-18 腾讯科技(深圳)有限公司 A kind of face image processing process, device and storage medium
CN110069191A (en) * 2019-01-31 2019-07-30 北京字节跳动网络技术有限公司 Image based on terminal pulls deformation implementation method and device
CN110502993A (en) * 2019-07-18 2019-11-26 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104965654B (en) * 2015-06-15 2019-03-15 广东小天才科技有限公司 A kind of method and system of head portrait adjustment
US9940753B1 (en) * 2016-10-11 2018-04-10 Disney Enterprises, Inc. Real time surface augmentation using projected light

Also Published As

Publication number Publication date
WO2022022220A1 (en) 2022-02-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination