WO2022022220A1 - Method and device for deforming a face image (人脸图像的变形方法和变形装置) - Google Patents

Method and device for deforming a face image (人脸图像的变形方法和变形装置)

Info

Publication number
WO2022022220A1
WO2022022220A1 (PCT/CN2021/104093; CN2021104093W)
Authority
WO
WIPO (PCT)
Prior art keywords
displacement
deformed
face image
area
pixel
Prior art date
Application number
PCT/CN2021/104093
Other languages
English (en)
French (fr)
Inventor
闫鑫
Original Assignee
北京达佳互联信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京达佳互联信息技术有限公司 filed Critical 北京达佳互联信息技术有限公司
Publication of WO2022022220A1 publication Critical patent/WO2022022220A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/18 Image warping, e.g. rearranging pixels individually

Definitions

  • the present disclosure relates to the technical field of computer vision processing, and in particular, to a face image deformation method and deformation device.
  • in the related art, the functions for visually processing face images are preset by the system: after the user selects a function, the corresponding effect is rendered according to default processing parameters preset by the system for that function. This cannot satisfy users' demand for personalized visual processing, which leads to low stickiness between users and the product.
  • the present disclosure provides a deformation method and deformation device of a face image.
  • the technical solutions of the present disclosure are as follows:
  • a method for deforming a face image, including: acquiring an initial trigger position selected by a user for a face image, and determining the area to be deformed in the face image according to the initial trigger position; acquiring a trigger trajectory with the initial trigger position as the trigger start point, and extracting the trajectory length and trajectory direction of the trigger trajectory; determining the displacement length of each pixel in the to-be-deformed area according to the trajectory length, and the displacement direction of each pixel according to the trajectory direction; and adjusting the position of each pixel in the to-be-deformed area according to the displacement length and the displacement direction, so as to generate and display the deformed face image.
  • the method for deforming a face image according to the embodiment of the present disclosure also includes the following additional technical features:
  • the determining the area to be deformed in the face image according to the initial trigger position of the trigger operation includes: calculating the distance between the initial trigger position and each of a plurality of preset control areas; and determining the preset control area with the smallest distance from the initial trigger position as the to-be-deformed area.
  • the determining the region to be deformed in the face image according to the initial trigger position of the trigger operation includes: determining, among multiple pieces of identification information preset on the face image, the target identification information corresponding to the initial trigger position, wherein each piece of identification information corresponds to an image area on the face image; and determining the image area corresponding to the target identification information as the region to be deformed.
  • the determining the area to be deformed in the face image according to the initial trigger position of the trigger operation includes: determining a target face key point in the face image according to the initial trigger position; obtaining the preset deformation radius corresponding to the target face key point; and determining the area to be deformed by taking the target face key point as the circle center and the preset deformation radius as the circle radius.
  • the determining the displacement length of each pixel in the to-be-deformed area according to the trajectory length includes: calculating the first distance between each pixel in the to-be-deformed area and the target face key point; determining the first deformation coefficient corresponding to each pixel according to the first distance; and calculating the first product of the trajectory length and the first deformation coefficient, with the first product used as the displacement length of that pixel.
  • the determining the displacement length of each pixel in the to-be-deformed area according to the trajectory length includes: calculating the second distance between each pixel in the to-be-deformed area and a preset reference key point in the face image; determining the second deformation coefficient corresponding to each pixel according to the second distance; and calculating the second product of the trajectory length and the second deformation coefficient, with the second product used as the displacement length of that pixel.
  • the adjusting the position of each pixel in the region to be deformed according to the displacement length and the displacement direction includes: in response to the displacement direction belonging to a preset direction, determining the target adjustment position of each pixel according to that pixel's displacement direction and displacement length, wherein the preset directions include the horizontal and vertical directions of the face image; and adjusting each pixel to its target adjustment position.
  • the method further includes: in response to the displacement direction not belonging to a preset direction, splitting the displacement direction into a horizontal direction and a vertical direction; determining, according to the displacement length, a first displacement in the horizontal direction and a second displacement in the vertical direction; and controlling each pixel in the to-be-deformed area to move by the first displacement in the horizontal direction and the second displacement in the vertical direction, so as to generate and display the deformed face image.
  • an apparatus for deforming a face image, comprising: a first determination module configured to acquire an initial trigger position selected by a user for a face image and determine the area to be deformed in the face image according to the initial trigger position; an extraction module configured to acquire the trigger trajectory with the initial trigger position as the trigger start point and extract the trajectory length and trajectory direction of the trigger trajectory; a second determination module configured to determine the displacement length and displacement direction of each pixel in the region to be deformed according to the trajectory length and the trajectory direction, respectively; and a deformation adjustment module configured to adjust the position of each pixel in the area to be deformed according to the displacement length and the displacement direction, so as to generate a deformed face image.
  • the device for deforming a face image also includes the following additional technical features:
  • the first determining module is specifically configured to: calculate the distance between the initial trigger position and each of a plurality of preset control areas; and determine the preset control area with the smallest distance from the initial trigger position as the to-be-deformed area.
  • the first determining module is specifically configured to: determine the target identification information corresponding to the initial trigger position from a plurality of pieces of identification information preset on the face image, wherein each piece of identification information corresponds to an image area on the face image; and determine the image area corresponding to the target identification information as the to-be-deformed area.
  • the first determination module includes: a first determination unit configured to determine a target face key point in the face image according to the initial trigger position; an acquisition unit configured to obtain the preset deformation radius corresponding to the target face key point; and a second determination unit configured to determine the region to be deformed by taking the target face key point as the circle center and the preset deformation radius as the circle radius.
  • the second determining module is specifically configured to: calculate the first distance between each pixel in the area to be deformed and the target face key point; determine the first deformation coefficient corresponding to each pixel according to the first distance; and calculate the first product of the trajectory length and the first deformation coefficient, using the first product as the displacement length of that pixel.
  • the second determining module is specifically configured to: calculate the second distance between each pixel in the region to be deformed and a preset reference key point in the face image; determine the second deformation coefficient corresponding to each pixel according to the second distance; and calculate the second product of the trajectory length and the second deformation coefficient, using the second product as the displacement length of that pixel.
  • the deformation adjustment module specifically includes: a third determination unit configured to, in response to the displacement direction belonging to a preset direction, determine the target adjustment position of each pixel according to that pixel's displacement direction and displacement length, wherein the preset directions include the horizontal and vertical directions of the face image; and a first adjustment unit configured to adjust each pixel to its target adjustment position.
  • the deformation adjustment module further includes: a fourth determination unit configured to, in response to the displacement direction not belonging to a preset direction, split the displacement direction into a horizontal direction and a vertical direction; a fifth determination unit configured to determine, according to the displacement length, a first displacement in the horizontal direction and a second displacement in the vertical direction; and a second adjustment unit configured to control each pixel in the area to be deformed to move by the first displacement in the horizontal direction and the second displacement in the vertical direction, so as to generate and display a deformed face image.
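The split described by the fourth and fifth determination units can be sketched with elementary trigonometry. This is an illustrative sketch, not the patent's implementation; the function name and the use of an angle in radians are assumptions:

```python
import math

def split_displacement(length, direction_rad):
    """Split a displacement of the given length along an arbitrary
    direction into a horizontal (first) and a vertical (second)
    component, as the fourth and fifth determination units describe.

    direction_rad: angle of the displacement direction in radians,
    measured from the horizontal axis of the face image.
    """
    first_displacement = length * math.cos(direction_rad)   # horizontal component
    second_displacement = length * math.sin(direction_rad)  # vertical component
    return first_displacement, second_displacement
```

Each pixel can then be moved by the two components independently, which is what the second adjustment unit does.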
  • an electronic device, comprising: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the aforementioned method for deforming a face image.
  • a non-volatile computer-readable storage medium: when the instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute the aforementioned method for deforming a face image.
  • a computer program product that, when executed by a processor of an electronic device, enables the electronic device to execute the aforementioned method for deforming a face image.
  • Obtain the initial trigger position selected by the user for the face image, determine the area to be deformed, determine the displacement length and displacement direction of each pixel in that area, and adjust the position of each pixel according to the displacement length and direction to generate and display the deformed face image.
  • personalized deformation of the face image is performed, and a face deformation function with a higher degree of freedom is realized.
  • FIG. 1 is a flowchart of a method for deforming a face image according to an exemplary embodiment
  • FIG. 2 is a schematic diagram of a trajectory direction according to an exemplary embodiment
  • FIG. 3 is a schematic diagram of a trajectory direction according to an exemplary embodiment
  • FIG. 4 is a schematic diagram of a deformation scene of a human face image according to an exemplary embodiment
  • FIG. 5 is a flowchart of a method for deforming a face image according to an exemplary embodiment
  • FIG. 6-1 is a schematic diagram showing the distance between the initial trigger position and the corresponding preset control area according to an exemplary embodiment
  • FIG. 6-2 is a schematic diagram showing the distance between the initial trigger position and the corresponding preset control area according to an exemplary embodiment
  • FIG. 7 is a flowchart of a method for deforming a face image according to an exemplary embodiment
  • FIG. 8 is a schematic diagram of identification information according to an exemplary embodiment
  • FIG. 9 is a flowchart of a method for deforming a face image according to an exemplary embodiment
  • FIG. 10 is a schematic diagram of a scene for determining a region to be deformed according to an exemplary embodiment
  • FIG. 11 is a schematic diagram of a scene for determining a region to be deformed according to an exemplary embodiment
  • FIG. 12 is a flowchart of a method for deforming a face image according to an exemplary embodiment
  • FIG. 13 is a flowchart of a method for deforming a face image according to an exemplary embodiment
  • FIG. 14 is a flowchart of a method for deforming a face image according to an exemplary embodiment
  • FIG. 15-1 is a schematic diagram of a target adjustment position scene according to an exemplary embodiment
  • FIG. 15-2 is a schematic diagram of a deformed face image according to an exemplary embodiment
  • FIG. 16-1 is a schematic diagram of a scene in a first direction and a second direction according to an exemplary embodiment
  • FIG. 16-2 is a schematic diagram of a deformed face image according to an exemplary embodiment
  • FIG. 17 is a schematic structural diagram of a device for deforming a face image according to an exemplary embodiment
  • FIG. 18 is a schematic structural diagram of a device for deforming a face image according to an exemplary embodiment
  • FIG. 19 is a schematic structural diagram of a device for deforming a face image according to an exemplary embodiment
  • FIG. 20 is a schematic structural diagram of a device for deforming a face image according to an exemplary embodiment
  • FIG. 21 is a schematic structural diagram of a terminal device according to an exemplary embodiment
  • to address this problem, the present disclosure proposes a method for realizing personalized deformation of a face image through interaction with the user. In this method, different deformation effects can be displayed on the face image depending on how far the user drags, which satisfies the user's personalized needs and increases interest.
  • the face image processing methods of the embodiments of the present disclosure can also be applied to the deformation of any subject image, such as building images, fruit images, etc. The following embodiments mainly focus on the deformation of face images for illustration.
  • FIG. 1 is a flowchart of a method for deforming a face image according to an exemplary embodiment. As shown in FIG. 1, the method for deforming a face image is used in an electronic device, such as a computer, and includes the following steps.
  • step 101 an initial trigger position selected by the user for the face image is acquired, and the area to be deformed in the face image is determined according to the initial trigger position.
  • the user may select the initial trigger position with a finger, a stylus, or the like.
  • the initial trigger position selected by the user on the face image is obtained; for example, the position where the user first taps with a finger is the initial trigger position. The area to be deformed in the face image is then determined according to the initial trigger position, where the region to be deformed includes the face region that needs to be deformed.
  • in some possible embodiments, the image area within a preset range of the initial trigger position may be directly determined as the area to be deformed; in other possible embodiments, a correspondence between trigger positions and areas to be deformed may be pre-built. For example, if the initial trigger position belongs to pre-built trigger position 1, the area a corresponding to trigger position 1 is determined as the to-be-deformed area.
  • the specific details of how to determine the region to be deformed in the face image according to the initial trigger position will be described in subsequent embodiments, and will not be repeated here.
  • step 102 a trigger trajectory with an initial trigger position as a trigger start point is acquired, and the trajectory length and trajectory direction of the trigger trajectory are extracted.
  • the face deformation display is driven by the user's drag operation. Therefore, the trigger trajectory starting at the initial trigger position is obtained, for example by detecting it based on capacitance values in the terminal device, and the track length and track direction of the trigger track are then extracted.
  • the end trigger position of the trigger track can be identified; the direction from the initial trigger position to the end trigger position is taken as the track direction, and the track length is obtained as the distance between the initial trigger position and the end trigger position.
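The start-to-end computation above is simple vector arithmetic. A minimal sketch, assuming 2D screen coordinates; the function and parameter names are illustrative, not taken from the patent:

```python
import math

def track_length_and_direction(start, end):
    """Compute the trigger track's length and direction from its
    initial and end trigger positions.

    start, end: (x, y) screen coordinates.
    Returns (length, direction), where direction is the angle of the
    drag vector in radians.
    """
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    length = math.hypot(dx, dy)      # Euclidean distance start -> end
    direction = math.atan2(dy, dx)   # angle of the drag vector
    return length, direction
```

The piecewise variant described next (sampling trigger points at time intervals) would simply apply the same computation to each pair of adjacent sampled points.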
  • multiple trigger points may be selected along the current trigger trajectory at preset time intervals, and the direction from each trigger point's previous adjacent trigger point to that trigger point is taken as a track direction; in this case there are multiple track directions, and the distance between each trigger point and its previous adjacent trigger point is used as the track length in the corresponding track direction.
  • step 103 the displacement length and displacement direction of each pixel point in the area to be deformed are respectively determined according to the track length and track direction.
  • the displacement length of each pixel in the to-be-deformed area is determined according to the trajectory length, and the displacement direction of each pixel according to the trajectory direction, so that the user's trigger trajectory is reflected in the movement of each pixel in the area to be deformed.
  • the length of the trajectory is proportional to the displacement length of each pixel in the area to be deformed. Therefore, the farther the user drags, the greater the offset and the stronger the pull on the area to be deformed; visually, the face is pulled more exaggeratedly.
  • the displacement direction can be the same as the trajectory direction, or set differently according to the user's individual needs; for example, if the user sets the trajectory direction to be left, the corresponding displacement direction is right.
  • step 104 the position of each pixel in the region to be deformed is adjusted according to the displacement length and the displacement direction, so as to generate a deformed face image.
  • the position of each pixel in the region to be deformed is adjusted according to the displacement length and the displacement direction, so as to generate a deformed face image. For example, as shown in the left image of Fig. 4, when the user's initial trigger position is on the eyelid, the determined area to be deformed is the entire eye area; when the user's trigger trajectory moves down by one trajectory length, the eye region of the face image is deformed as shown in the right image of Fig. 4.
  • the method for deforming a face image obtains the initial trigger position selected by the user for the face image and determines the area to be deformed in the face image according to the initial trigger position. It then obtains the trigger trajectory with the initial trigger position as the trigger start point, extracts the trajectory length and trajectory direction of the trigger trajectory, determines the displacement length of each pixel in the area to be deformed according to the trajectory length and the displacement direction of each pixel according to the trajectory direction, and adjusts the position of each pixel in the area to be deformed according to the displacement length and displacement direction to generate and display the deformed face image. Therefore, through interaction with the user, personalized deformation of the face image is satisfied, a face deformation function with a higher degree of freedom is realized, and the stickiness between user and product is increased.
  • the above-mentioned determination of the area to be deformed in the face image according to the initial trigger position of the trigger operation includes:
  • step 201 the distance between the initial trigger position and a plurality of preset control areas is calculated.
  • step 202 a preset control area with the smallest distance from the initial trigger position is determined as the area to be deformed.
  • for example, control region 1 corresponds to the left eye region of the face, and control region 2 corresponds to the mouth area of the face.
  • different preset control areas can be calibrated from big data, or divided by the user according to personal needs; when calibrated according to the user's personal needs, the division can be based on dividing tracks the user traces on the face image.
  • the preset control area with the smallest distance from the initial trigger position is determined as the to-be-deformed area.
  • if the initial trigger position belongs to a single preset control area, that control area is directly used as the area to be deformed; if the initial trigger position belongs to multiple preset control areas, the ratio of the initial trigger position's area within each control area to its overall area is counted, and the preset control area with the highest ratio is used as the area to be deformed. In this way, a "point drives surface" deformation effect can be achieved.
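Steps 201-202 amount to a nearest-area selection. A minimal sketch under the assumption that each control area is summarized by a center point (the patent does not fix how area-to-position distance is measured, so that choice and the names below are illustrative):

```python
def nearest_control_area(trigger_pos, control_areas):
    """Return the name of the preset control area closest to the
    initial trigger position (steps 201-202).

    trigger_pos: (x, y) initial trigger position.
    control_areas: dict mapping area name -> (x, y) center point.
    """
    def dist2(p, q):
        # squared Euclidean distance; the minimizer is the same as
        # for the true distance, so the sqrt can be skipped
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    return min(control_areas,
               key=lambda name: dist2(trigger_pos, control_areas[name]))
```

For example, a tap near the left eye would resolve to the left-eye control area rather than the mouth area.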
  • the above-mentioned determination of the region to be deformed in the face image according to the initial trigger position of the trigger operation includes:
  • in step 301, target identification information corresponding to the initial trigger position is determined from a plurality of pieces of identification information preset on the face image, wherein each piece of identification information corresponds to an image area on the face image.
  • the identification information may be one or a combination of icon form, text form, and dynamic effect.
  • a plurality of pieces of identification information are preset on the face image (they can be displayed or hidden), where each piece of identification information serves as a control for an image area in the face image, and the correspondence between each identification information and its image area is stored in a preset database. Continuing to refer to Figure 8, for example, the image area corresponding to identification information 1 is the forehead area.
  • step 302 the image area corresponding to the target identification information is determined as the area to be deformed.
  • the target identification information corresponding to the initial trigger position is determined; for example, identification information whose position overlaps the initial trigger position is used as the target identification information. Alternatively, the distance between the center point of the initial trigger position and the center point of each piece of identification information can be calculated, and the identification information closest to the center of the initial trigger position is used as the target identification information.
  • the image area corresponding to the target identification information is determined as the area to be deformed, for example by querying a preset database. Thus, the image area to be deformed can be determined directly from the identification information triggered by the user, and since image areas and identification information are pre-bound, deformation efficiency is improved.
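Because identification information and image areas are pre-bound, the lookup in step 302 is a single dictionary query. The mapping contents below are hypothetical placeholders standing in for the "preset database" the text mentions:

```python
# Hypothetical pre-built binding between identification information and
# image areas, mirroring the preset database described above.
ID_TO_REGION = {
    "id_1": "forehead_region",
    "id_2": "mouth_region",
}

def region_for_identification(target_id):
    """Resolve the to-be-deformed area from the triggered target
    identification information (step 302). Returns None when the
    identification information is unknown."""
    return ID_TO_REGION.get(target_id)
```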
  • the above-mentioned determination of the area to be deformed in the face image according to the initial trigger position of the trigger operation includes:
  • Step 401 Determine the key points of the target face in the face image according to the initial trigger position.
  • the target face key point is determined in the face image according to the initial trigger position.
  • the target face key point may be the center point of the area corresponding to the initial trigger position, or may be obtained with a pre-trained convolutional neural network. As shown in Figure 10, after the face image is input, the convolutional neural network outputs a plurality of face key points (101 in the figure), and the face key point associated with the initial trigger position is then determined as the target face key point.
  • when the initial trigger position overlaps one face key point, that key point is used as the target face key point; when the initial trigger position overlaps multiple face key points, one of them is randomly selected as the target face key point.
  • the key point closest to the center point or edge point of the initial trigger position is determined as the target face key point.
  • Step 402 Acquire a preset deformation radius corresponding to the target face key point.
  • a fixed preset deformation radius is set for the face image.
  • the preset deformation radius in this embodiment may be calibrated from big data, or set according to the size of the face image: the larger the face image, the larger the corresponding preset deformation radius.
  • the preset deformation radius can also be set by the user's personal preference.
  • the preset radii in different parts of the face image may differ; for example, the face image is divided into multiple areas, such as the forehead, cheek, and mouth areas, and a preset deformation radius is set for each area.
  • the preset deformation radius differs between regions, ensuring that different regions yield different deformation effects. The region where the target face key point is located is then determined, and the preset deformation radius corresponding to that region is used as the preset deformation radius of the target face key point.
  • alternatively, a corresponding preset deformation radius can be preset for each key point in the face image, and the correspondence between key-point coordinates and preset deformation radii stored; querying this correspondence yields the preset deformation radius of the target face key point, thereby further improving the flexibility of face deformation and ensuring diverse deformation effects.
  • Step 403 taking the target face key point as the center of the circle, and taking the preset deformation radius as the circle radius, determine the area to be deformed.
  • the target face key point is used as the circle center and the preset deformation radius as the circle radius to determine the to-be-deformed area; in addition to part of the face region, the to-be-deformed area can also include background areas, etc.
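Step 403 defines the to-be-deformed area as a disc around the key point. A brute-force sketch (a real implementation would vectorize this, e.g. with a mask; names and the pixel-grid parameters are illustrative):

```python
def circular_region(key_point, radius, width, height):
    """Collect the pixels inside the circle centered on the target
    face key point with the preset deformation radius (step 403).

    key_point: (cx, cy) coordinates of the target face key point.
    radius: preset deformation radius in pixels.
    width, height: dimensions of the face image.
    """
    cx, cy = key_point
    r2 = radius * radius
    # keep a pixel when its squared distance to the center is within r^2
    return [(x, y)
            for y in range(height)
            for x in range(width)
            if (x - cx) ** 2 + (y - cy) ** 2 <= r2]
```

Note that pixels near the image border or outside the face (background) are included whenever they fall inside the circle, matching the remark above.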
  • In summary, the method for deforming a face image flexibly determines the area to be deformed corresponding to the initial trigger position according to the needs of the application scenario, which increases user engagement with the product.
  • In some embodiments, the area to be deformed is determined as in the embodiment shown in FIG. 9 above. In that case, as shown in FIG. 12, determining the displacement length of each pixel in the area to be deformed according to the track length includes:
  • Step 501: A first distance between each pixel in the area to be deformed and the target face key point is calculated.
  • In some possible embodiments, this first distance is computed from the coordinates of each pixel and the coordinates of the target face key point. Since the target face key point lies in the face region of the face image, the deformation is guaranteed to take the face region as its main subject.
  • Step 502: A first deformation coefficient of each pixel is determined according to the first distance.
  • Step 503: The first product of the track length and the first deformation coefficient is calculated, and the first product is used as the displacement length of each pixel.
  • In some possible examples, the first distance and the first deformation coefficient are inversely related: the larger the first distance, the smaller the corresponding first deformation coefficient. After the first product of the track length and the first deformation coefficient is calculated as each pixel's displacement length, displacing by that length produces a liquify effect centered on the target face key point.
  • In another possible example, the first distance and the first deformation coefficient are proportional: the larger the first distance, the larger the corresponding first deformation coefficient. After the first product is calculated as each pixel's displacement length, displacing by that length produces a focus-blur effect.
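The relationship between the first distance, the first deformation coefficient, and the displacement length can be sketched as below. This is only an illustrative Python sketch: the disclosure fixes the monotonicity (inverse relation for the liquify-style effect, proportional relation for the focus-blur-style effect) but not a concrete formula, so the linear fall-off used here is an assumption:

```python
def first_deformation_coefficient(distance, radius, inverse=True):
    """Illustrative coefficient in [0, 1]: inverse=True falls off with
    distance (liquify-style), inverse=False grows with distance
    (focus-blur-style). The exact formula is an assumption."""
    t = min(distance / radius, 1.0)  # normalize the first distance to [0, 1]
    return (1.0 - t) if inverse else t

def displacement_length(track_length, distance, radius, inverse=True):
    # first product = track length * first deformation coefficient
    return track_length * first_deformation_coefficient(distance, radius, inverse)
```

Under this sketch, pixels at the key point move by the full track length in the inverse scheme, and pixels at the circle boundary do not move at all.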
  • In some embodiments, as shown in FIG. 13, determining the displacement length of each pixel in the area to be deformed according to the track length includes:
  • Step 601: A second distance between each pixel in the area to be deformed and a preset reference key point in the face image is calculated.
  • The preset reference key point in the face image is a preset key point in a fixed face area, which may be a key point on the tip of the nose or on the forehead, among others; different positions of the preset reference key point necessarily yield different deformation effects.
  • Step 602: A second deformation coefficient of each pixel is determined according to the second distance.
  • Step 603: The second product of the track length and the second deformation coefficient is calculated, and the second product is used as the displacement length of each pixel.
  • In some possible examples, the second distance and the second deformation coefficient are inversely related: the larger the second distance, the smaller the corresponding second deformation coefficient. After the second product of the track length and the second deformation coefficient is calculated as each pixel's displacement length, displacing by that length produces a liquify effect centered on the preset reference key point.
  • In another possible example, the second distance and the second deformation coefficient are proportional: the larger the second distance, the larger the corresponding second deformation coefficient. After the second product is calculated as each pixel's displacement length, displacing by that length produces a focus-blur effect.
  • In some embodiments, the track length is directly used as the displacement length, so that after each pixel moves by the displacement length, the area to be deformed moves as a whole.
  • In practice, to guarantee the quality of the deformed image, a limit value for the displacement length can also be set: once the displacement length exceeds the limit, the displacement length is set to the limit. The limit may differ according to the user's initial trigger position and can be set by personal preference; for example, when the initial trigger position is at the eyes, the corresponding limit may be larger.
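Clamping the displacement length to a preset limit, as described above, amounts to a simple cap; the helper below is a hypothetical illustration:

```python
def clamp_displacement(length, limit):
    """Cap the displacement length at a preset limit value.
    The limit itself may depend on the initial trigger position
    (e.g. a larger limit around the eyes) — that lookup is omitted here."""
    return min(length, limit)
```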
  • Thus, in this embodiment the displacement of each pixel can be determined in different ways to achieve different deformation effects.
  • Further, after the displacement length is determined, different implementations may be used to adjust the position of each pixel in the area to be deformed according to the displacement length and the displacement direction.
  • In one embodiment of the present disclosure, as shown in FIG. 14, adjusting the position of each pixel in the area to be deformed according to the displacement length and the displacement direction includes:
  • Step 701: In response to the displacement direction belonging to a preset direction, the target adjustment position of each pixel is determined according to that pixel's displacement direction and displacement length, where the preset direction includes the horizontal and vertical directions of the face image.
  • In one embodiment, the end trigger position of the trigger track is obtained, the direction from the initial trigger position to the end trigger position is taken as the displacement direction, and it is judged whether the displacement direction is horizontal or vertical; if so, the displacement direction belongs to the preset direction. In other words, this embodiment checks whether the displacement direction is one of the four cardinal directions (up, down, left, right).
  • Of course, in different application scenarios the preset direction may instead be any one to three of up, down, left, and right, or any other directions, set according to the needs of the scene.
  • Specifically, if the displacement direction belongs to the preset direction, the target adjustment position of each pixel is determined according to that pixel's displacement direction and displacement length. As shown in Figure 15-1, the target position of each pixel is the position at the displacement length along the corresponding single direction.
  • Step 702: Each pixel is adjusted to the target adjustment position.
  • For example, as shown in Figure 15-2, when the user triggers the identification information 1 on the forehead shown in Figure 8 and the corresponding displacement points straight up in the face image, each pixel in the area to be deformed is adjusted to its target position, yielding a deformed face image with an elongated forehead.
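Moving a pixel along one of the four preset cardinal directions can be sketched as follows; the Python names and the screen coordinate convention (y grows downward) are assumptions made for illustration:

```python
# Unit vectors for the four preset (cardinal) directions,
# in screen coordinates where y increases downward.
CARDINALS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def target_position(pixel, direction, length):
    """Return the target adjustment position: the point at `length`
    along the single preset direction from the pixel's position."""
    dx, dy = CARDINALS[direction]
    x, y = pixel
    return (x + dx * length, y + dy * length)
```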
  • In one embodiment of the present disclosure, continuing with FIG. 14, after step 701 the method further includes:
  • Step 703: In response to the displacement direction not belonging to the preset direction, the displacement direction is split into a horizontal direction and a vertical direction.
  • It should be understood that in a single trigger track the displacement direction may be roughly lower-left, lower-right, upper-left, upper-right, and so on, and each such direction can be decomposed into a horizontal component and a vertical component; the lower-left direction, for example, splits into left and down.
  • Step 704: The first displacement in the horizontal direction and the second displacement in the vertical direction are determined according to the displacement length.
  • As shown in Figure 16-1, if the displacement direction is lower-left, it splits into left and down, and the displacement length is decomposed in the Cartesian coordinate system into a first displacement in the left direction and a second displacement in the down direction.
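The Cartesian decomposition described in step 704 (splitting an off-axis displacement into a horizontal first displacement and a vertical second displacement) can be sketched as follows; this is an illustrative Python sketch, not code from the disclosure:

```python
import math

def split_displacement(direction_vector, length):
    """Split a displacement of `length` along `direction_vector` into
    a horizontal first displacement and a vertical second displacement.
    The direction vector need not be normalized."""
    dx, dy = direction_vector
    norm = math.hypot(dx, dy)
    if norm == 0:
        return 0.0, 0.0
    first = length * dx / norm   # horizontal component
    second = length * dy / norm  # vertical component
    return first, second
```

For a purely horizontal drag the second displacement is zero, so the cardinal-direction case of step 701 is recovered as a special case.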
  • Step 705: Each pixel in the area to be deformed is controlled to move according to the first displacement in the horizontal direction and the second displacement in the vertical direction, so as to generate and display the deformed face image.
  • In this embodiment, the idea of a frame buffer object (FBO) can be used to generate the deformed face image. First, each pixel in the area to be deformed is moved horizontally by the first displacement to a first pixel position, and a reference face image is obtained; this reference face image is not displayed but is cached off-screen. Next, each pixel's corresponding second pixel position is determined by moving vertically by the second displacement; with the reference face image as the input texture, each pixel in the reference face image is adjusted from the first pixel position to the second pixel position. Only the face image finally adjusted to the second pixel positions is rendered and displayed, which greatly reduces the image-processing cost and improves generation efficiency.
  • The method for deforming a face image in this embodiment can realize arbitrary face deformation according to triggering operations such as dragging by the user. After the trigger track of each operation ends, the deformation effect is retained rather than rebounded, so that multiple trigger operations by the user accumulate in the face image. For example, in Figure 16-2, the user can drag the identification information 1, 4, and 5 to deform the face and achieve the personalized effect shown in Figure 16-2.
  • In summary, the method for deforming a face image can flexibly adjust the position of each pixel in the area to be deformed, supporting personalized generation of deformed face images.
  • To implement the above embodiments, the embodiments of the present disclosure further provide an apparatus for deforming a face image.
  • Fig. 17 is a schematic structural diagram of an apparatus for deforming a face image according to an exemplary embodiment.
  • As shown in Fig. 17, the face image deformation apparatus includes: a first determination module 171, an extraction module 172, a second determination module 173, a third determination module 174, and a deformation adjustment module 175, wherein,
  • the first determination module 171 is configured to obtain the initial trigger position selected by the user on the face image, and determine the area to be deformed in the face image according to the initial trigger position;
  • the extraction module 172 is configured to acquire the trigger trajectory with the initial trigger position as the trigger start point, and extract the trajectory length and trajectory direction of the trigger trajectory;
  • the second determination module 173 is configured to determine the displacement length of each pixel in the area to be deformed according to the trajectory length;
  • the third determining module 174 is configured to determine the displacement direction of each pixel point according to the trajectory direction;
  • the deformation adjustment module 175 is configured to adjust the position of each pixel in the region to be deformed according to the displacement length and the displacement direction, so as to generate a deformed face image.
  • The apparatus for deforming a face image obtains the initial trigger position selected by the user on the face image, determines the area to be deformed in the face image according to the initial trigger position, obtains the trigger trajectory starting from the initial trigger position, and extracts the trajectory length and trajectory direction. It determines the displacement length of each pixel in the area to be deformed according to the trajectory length and the displacement direction of each pixel according to the trajectory direction, and then adjusts the position of each pixel according to the displacement length and displacement direction to generate and display the deformed face image.
  • Thus, in response to the user's trigger, personalized deformation of the face image is performed, a face deformation function with a higher degree of freedom is realized, and user engagement with the product is increased.
  • It should be noted that in different application scenarios the first determination module 171 determines the area to be deformed in the face image in different ways according to the initial trigger position, illustrated as follows:
  • In some embodiments, the first determination module 171 is specifically configured to: calculate distances between the initial trigger position and multiple preset control areas; and determine the preset control area with the smallest distance from the initial trigger position as the area to be deformed.
  • In some embodiments, the first determination module 171 is specifically configured to: determine, among multiple pieces of identification information preset on the face image, the target identification information corresponding to the initial trigger position, where each piece of identification information corresponds to an image area on the face image; and query a preset database to determine the image area corresponding to the target identification information as the area to be deformed.
  • In some embodiments, the first determination module 171 includes: a first determination unit 1711, an acquisition unit 1712, and a second determination unit 1713, wherein,
  • the first determination unit 1711 is configured to determine the target face key point in the face image according to the initial trigger position;
  • the acquisition unit 1712 is configured to obtain the preset deformation radius corresponding to the target face key point;
  • the second determination unit 1713 is configured to determine the area to be deformed with the target face key point as the center and the preset deformation radius as the radius.
  • In one embodiment of the present disclosure, the first determination unit 1711 is specifically configured to: input the face image into a pre-trained convolutional neural network to generate multiple face key points; and determine the face key point associated with the initial trigger position as the target face key point.
  • In summary, the apparatus for deforming a face image flexibly determines the area to be deformed corresponding to the initial trigger position according to the needs of the application scenario, which increases user engagement with the product.
  • In some embodiments, the second determination module 173 is specifically configured to: calculate a first distance between each pixel in the area to be deformed and the target face key point; determine a first deformation coefficient of each pixel according to the first distance; and calculate the first product of the trajectory length and the first deformation coefficient as the displacement length of each pixel.
  • In some embodiments, the second determination module 173 is specifically configured to: calculate a second distance between each pixel in the area to be deformed and a preset reference key point in the face image; determine a second deformation coefficient of each pixel according to the second distance; and calculate the second product of the trajectory length and the second deformation coefficient as the displacement length of each pixel.
  • Thus, in this embodiment the displacement of each pixel can be determined in different ways to achieve different deformation effects.
  • Further, after the displacement length is determined, different implementations may be used to adjust the position of each pixel in the area to be deformed according to the displacement length and the displacement direction.
  • In one embodiment of the present disclosure, the deformation adjustment module 175 includes: a third determination unit 1741 and a first adjustment unit 1742, wherein,
  • the third determination unit 1741 is configured to, in response to the displacement direction belonging to the preset direction, determine the target adjustment position of each pixel according to that pixel's displacement direction and displacement length, where the preset direction includes the horizontal and vertical directions of the face image;
  • the first adjustment unit 1742 is configured to adjust each pixel to the target adjustment position.
  • In one embodiment of the present disclosure, the deformation adjustment module 175 further includes: a fourth determination unit 1743, a fifth determination unit 1744, and a second adjustment unit 1745, wherein,
  • the fourth determination unit 1743 is configured to split the displacement direction into a horizontal direction and a vertical direction in response to the displacement direction not belonging to the preset direction;
  • the fifth determination unit 1744 is configured to determine the first displacement in the horizontal direction and the second displacement in the vertical direction according to the displacement length;
  • the second adjustment unit 1745 is configured to control each pixel in the area to be deformed to move according to the first displacement in the horizontal direction and the second displacement in the vertical direction, so as to generate and display the deformed face image.
  • In summary, the apparatus for deforming a face image can flexibly adjust the position of each pixel in the area to be deformed, supporting personalized generation of deformed face images.
  • FIG. 21 is a block diagram of an electronic device according to the present disclosure.
  • As shown in FIG. 21, the electronic device 200 includes:
  • a memory 210, a processor 220, and a bus 230 connecting different components (including the memory 210 and the processor 220).
  • The memory 210 stores a computer program; when the processor 220 executes the program, the method for deforming a face image of the embodiments of the present disclosure is implemented.
  • Bus 230 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
  • Electronic device 200 typically includes a variety of electronic device-readable media. These media can be any available media that can be accessed by electronic device 200, including volatile and non-volatile media, removable and non-removable media.
  • Memory 210 may also include computer system readable media in the form of volatile memory, such as random access memory (RAM) 240 and/or cache memory 250 .
  • Electronic device 200 may further include other removable/non-removable, volatile/non-volatile computer system storage media.
  • A storage system 260 may be used to read and write non-removable, non-volatile magnetic media (not shown in FIG. 21, commonly referred to as a "hard drive"). Although not shown in FIG. 21, a magnetic disk drive for reading and writing removable non-volatile magnetic disks (e.g., "floppy disks") and an optical disk drive for reading and writing removable non-volatile optical disks (e.g., CD-ROM, DVD-ROM, or other optical media) may also be provided. In these cases, each drive may be connected to the bus 230 through one or more data media interfaces.
  • the memory 210 may include at least one program product having a set (eg, at least one) of program modules configured to perform the functions of various embodiments of the present disclosure.
  • Program modules 270 generally perform the functions and/or methods in the embodiments described in this disclosure.
  • the electronic device 200 may also communicate with one or more external devices 290 (eg, keyboard, pointing device, display 291, etc.), with one or more devices that enable a user to interact with the electronic device 200, and/or with Any device (eg, network card, modem, etc.) that enables the electronic device 200 to communicate with one or more other computing devices. Such communication may take place through input/output (I/O) interface 292 . Also, the electronic device 200 may communicate with one or more networks (eg, a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 293 . As shown in FIG. 21 , the network adapter 293 communicates with other modules of the electronic device 200 through the bus 230 . It should be understood that, although not shown, other hardware and/or software modules may be used in conjunction with electronic device 200, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives and data backup storage systems.
  • the processor 220 executes various functional applications and data processing by executing programs stored in the memory 210 .
  • The electronic device obtains the initial trigger position selected by the user on the face image, determines the area to be deformed in the face image according to the initial trigger position, obtains the trigger track starting from the initial trigger position, and extracts the track length and track direction of the trigger track. It determines the displacement length of each pixel in the area to be deformed according to the track length and the displacement direction of each pixel according to the track direction, and adjusts the position of each pixel in the area to be deformed accordingly to generate and display the deformed face image. Thus, based on interaction with the user, personalized deformation of the face image is achieved, a face deformation function with a higher degree of freedom is realized, and user engagement with the product is increased.
  • the present disclosure also proposes a non-volatile computer-readable storage medium.
  • the electronic device when the instructions in the non-volatile computer-readable storage medium are executed by the processor of the electronic device, the electronic device can execute the aforementioned method for deforming a face image.
  • the present disclosure also provides a computer program product, which, when executed by a processor of an electronic device, enables the electronic device to execute the aforementioned method for deforming a face image.


Abstract

The present disclosure relates to a method and apparatus for deforming a face image, an electronic device, and a storage medium, belonging to the technical field of computer vision processing. The method includes: obtaining an initial trigger position selected by a user on a face image, and determining an area to be deformed in the face image according to the initial trigger position; obtaining a trigger track starting from the initial trigger position, and extracting the track length and track direction of the trigger track; determining the displacement length and displacement direction of each pixel in the area to be deformed according to the track length and track direction; and adjusting the position of each pixel in the area to be deformed according to the displacement length and displacement direction, so as to generate a deformed face image.

Description

METHOD AND APPARATUS FOR DEFORMING A FACE IMAGE
CROSS-REFERENCE TO RELATED APPLICATIONS
This disclosure claims priority to Chinese patent application No. 202010732569.2, filed on July 27, 2020.
TECHNICAL FIELD
The present disclosure relates to the technical field of computer vision processing, and in particular to a method and apparatus for deforming a face image.
BACKGROUND
With the progress of computer vision processing technology, the functions for visually processing face images have become increasingly diverse, for example, adding special effects and superimposing filters in beauty applications.
In the related art, the visual-processing functions applied to a face image are preset by the system. After the user selects a function, the corresponding effect is produced according to the system's default effect parameters for that function, which cannot satisfy the user's personalized visual-processing needs and results in low user engagement with the product.
SUMMARY
The present disclosure provides a method and apparatus for deforming a face image. The technical solutions of the present disclosure are as follows:
According to some embodiments of the present disclosure, a method for deforming a face image is provided, including: obtaining an initial trigger position selected by a user on a face image, and determining an area to be deformed in the face image according to the initial trigger position; obtaining a trigger track starting from the initial trigger position, and extracting a track length and a track direction of the trigger track; determining a displacement length of each pixel in the area to be deformed according to the track length, and determining a displacement direction of each pixel according to the track direction; and adjusting the position of each pixel in the area to be deformed according to the displacement length and the displacement direction, so as to generate a deformed face image.
In addition, the method for deforming a face image of the embodiments of the present disclosure further includes the following additional technical features:
In one embodiment of the present disclosure, determining the area to be deformed in the face image according to the initial trigger position of the trigger operation includes: calculating distances between the initial trigger position and multiple preset control areas; and determining the preset control area with the smallest distance from the initial trigger position as the area to be deformed.
In one embodiment of the present disclosure, determining the area to be deformed in the face image according to the initial trigger position of the trigger operation includes: determining, among multiple pieces of identification information preset on the face image, target identification information corresponding to the initial trigger position, where each piece of identification information corresponds to an image area on the face image; and determining the image area corresponding to the target identification information as the area to be deformed.
In one embodiment of the present disclosure, determining the area to be deformed in the face image according to the initial trigger position of the trigger operation includes: determining a target face key point in the face image according to the initial trigger position; obtaining a preset deformation radius corresponding to the target face key point; and determining the area to be deformed with the target face key point as the center and the preset deformation radius as the radius.
In one embodiment of the present disclosure, determining the displacement length of each pixel in the area to be deformed according to the track length includes: calculating a first distance between each pixel in the area to be deformed and the target face key point; determining a first deformation coefficient of each pixel according to the first distance; and calculating a first product of the track length and the first deformation coefficient, the first product being used as the displacement length of each pixel.
In one embodiment of the present disclosure, determining the displacement length of each pixel in the area to be deformed according to the track length includes: calculating a second distance between each pixel in the area to be deformed and a preset reference key point in the face image; determining a second deformation coefficient of each pixel according to the second distance; and calculating a second product of the track length and the second deformation coefficient, the second product being used as the displacement length of each pixel.
In one embodiment of the present disclosure, adjusting the position of each pixel in the area to be deformed according to the displacement length and the displacement direction includes: in response to the displacement direction belonging to a preset direction, determining a target adjustment position of each pixel according to that pixel's displacement direction and displacement length, where the preset direction includes the horizontal and vertical directions of the face image; and adjusting each pixel to the target adjustment position.
In one embodiment of the present disclosure, the method further includes: in response to the displacement direction not belonging to the preset direction, splitting the displacement direction into a horizontal direction and a vertical direction; determining a first displacement in the horizontal direction and a second displacement in the vertical direction according to the displacement length; and controlling each pixel in the area to be deformed to move according to the first displacement in the horizontal direction and the second displacement in the vertical direction, so as to generate and display the deformed face image.
According to some embodiments of the present disclosure, an apparatus for deforming a face image is provided, including: a first determination module configured to obtain an initial trigger position selected by a user on a face image and determine an area to be deformed in the face image according to the initial trigger position; an extraction module configured to obtain a trigger track starting from the initial trigger position and extract a track length and a track direction of the trigger track; a second determination module configured to determine a displacement length and a displacement direction of each pixel in the area to be deformed according to the track length and the track direction; and a deformation adjustment module configured to adjust the position of each pixel in the area to be deformed according to the displacement length and the displacement direction, so as to generate a deformed face image.
In addition, the apparatus for deforming a face image of the embodiments of the present disclosure further includes the following additional technical features:
In one embodiment of the present disclosure, the first determination module is specifically configured to: calculate distances between the initial trigger position and multiple preset control areas; and determine the preset control area with the smallest distance from the initial trigger position as the area to be deformed.
In one embodiment of the present disclosure, the first determination module is specifically configured to: determine, among multiple pieces of identification information preset on the face image, target identification information corresponding to the initial trigger position, where each piece of identification information corresponds to an image area on the face image; and determine the image area corresponding to the target identification information as the area to be deformed.
In one embodiment of the present disclosure, the first determination module includes: a first determination unit configured to determine a target face key point in the face image according to the initial trigger position; an acquisition unit configured to obtain a preset deformation radius corresponding to the target face key point; and a second determination unit configured to determine the area to be deformed with the target face key point as the center and the preset deformation radius as the radius.
In one embodiment of the present disclosure, the second determination module is specifically configured to: calculate a first distance between each pixel in the area to be deformed and the target face key point; determine a first deformation coefficient of each pixel according to the first distance; and calculate a first product of the track length and the first deformation coefficient, the first product being used as the displacement length of each pixel.
In one embodiment of the present disclosure, the second determination module is specifically configured to: calculate a second distance between each pixel in the area to be deformed and a preset reference key point in the face image; determine a second deformation coefficient of each pixel according to the second distance; and calculate a second product of the track length and the second deformation coefficient, the second product being used as the displacement length of each pixel.
In one embodiment of the present disclosure, the deformation adjustment module specifically includes: a third determination unit configured to, in response to the displacement direction belonging to the preset direction, determine a target adjustment position of each pixel according to that pixel's displacement direction and displacement length, where the preset direction includes the horizontal and vertical directions of the face image; and a first adjustment unit configured to adjust each pixel to the target adjustment position.
In one embodiment of the present disclosure, the deformation adjustment module further includes: a fourth determination unit configured to split the displacement direction into a horizontal direction and a vertical direction in response to the displacement direction not belonging to the preset direction; a fifth determination unit configured to determine a first displacement in the horizontal direction and a second displacement in the vertical direction according to the displacement length; and a second adjustment unit configured to control each pixel in the area to be deformed to move according to the first displacement in the horizontal direction and the second displacement in the vertical direction, so as to generate and display the deformed face image.
According to some embodiments of the present disclosure, an electronic device is provided, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the method for deforming a face image described above.
According to some embodiments of the present disclosure, a non-volatile computer-readable storage medium is provided; when instructions in the non-volatile computer-readable storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute the method for deforming a face image described above.
According to some embodiments of the present disclosure, a computer program product is provided; when the computer program is executed by a processor of an electronic device, the electronic device is enabled to execute the method for deforming a face image described above.
After the initial trigger position selected by the user on the face image is obtained and the displacement length and displacement direction of each pixel in the area to be deformed are determined, the position of each pixel in the area to be deformed is adjusted according to the displacement length and displacement direction, so as to generate and display the deformed face image. Thus, in response to the user's trigger, personalized deformation of the face image is performed, realizing a face deformation function with a higher degree of freedom.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure; they do not unduly limit the present disclosure.
FIG. 1 is a flowchart of a method for deforming a face image according to an exemplary embodiment;
FIG. 2 is a schematic diagram of a track direction according to an exemplary embodiment;
FIG. 3 is a schematic diagram of a track direction according to an exemplary embodiment;
FIG. 4 is a schematic diagram of a face-image deformation scenario according to an exemplary embodiment;
FIG. 5 is a flowchart of a method for deforming a face image according to an exemplary embodiment;
FIG. 6-1 is a schematic diagram of the distance between an initial trigger position and a corresponding preset control area according to an exemplary embodiment;
FIG. 6-2 is a schematic diagram of the distance between an initial trigger position and a corresponding preset control area according to an exemplary embodiment;
FIG. 7 is a flowchart of a method for deforming a face image according to an exemplary embodiment;
FIG. 8 is a schematic diagram of identification information according to an exemplary embodiment;
FIG. 9 is a flowchart of a method for deforming a face image according to an exemplary embodiment;
FIG. 10 is a schematic diagram of a scenario for determining an area to be deformed according to an exemplary embodiment;
FIG. 11 is a schematic diagram of a scenario for determining an area to be deformed according to an exemplary embodiment;
FIG. 12 is a flowchart of a method for deforming a face image according to an exemplary embodiment;
FIG. 13 is a flowchart of a method for deforming a face image according to an exemplary embodiment;
FIG. 14 is a flowchart of a method for deforming a face image according to an exemplary embodiment;
FIG. 15-1 is a schematic diagram of a target adjustment position scenario according to an exemplary embodiment;
FIG. 15-2 is a schematic diagram of a deformed face image according to an exemplary embodiment;
FIG. 16-1 is a schematic diagram of a first-direction and second-direction scenario according to an exemplary embodiment;
FIG. 16-2 is a schematic diagram of a deformed face image according to an exemplary embodiment;
FIG. 17 is a schematic structural diagram of an apparatus for deforming a face image according to an exemplary embodiment;
FIG. 18 is a schematic structural diagram of an apparatus for deforming a face image according to an exemplary embodiment;
FIG. 19 is a schematic structural diagram of an apparatus for deforming a face image according to an exemplary embodiment;
FIG. 20 is a schematic structural diagram of an apparatus for deforming a face image according to an exemplary embodiment;
FIG. 21 is a schematic structural diagram of a terminal device according to an exemplary embodiment.
DETAILED DESCRIPTION
In order to enable those of ordinary skill in the art to better understand the technical solutions of the present disclosure, the technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings.
It should be noted that the terms "first", "second", and the like in the specification, claims, and above drawings of the present disclosure are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present disclosure described herein can be implemented in orders other than those illustrated or described herein. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
In view of the problem mentioned in the background above that, in the related art, the effect of visual processing is determined by effect parameters preset by the system and can hardly satisfy users' personalized processing needs, the present disclosure proposes a method for personalized deformation of a face image through interaction with the user. In this method, the face image presents different deformation effects according to the degree of the user's dragging, which satisfies personalized needs and adds interest.
Of course, besides face images, the image processing method of the embodiments of the present disclosure can also be applied to the deformation of images of any subject, such as building images and fruit images. For ease of description, the following embodiments focus on the deformation of face images.
FIG. 1 is a flowchart of a method for deforming a face image according to an exemplary embodiment. As shown in FIG. 1, the method is used in an electronic device, which may be a smartphone, a portable computer, or the like, and includes the following steps.
In step 101, an initial trigger position selected by the user on the face image is obtained, and the area to be deformed in the face image is determined according to the initial trigger position.
The subject with which the user applies the initial trigger may be a finger or a stylus, among others.
In the embodiments of the present disclosure, the initial trigger position selected by the user on the face image is obtained; for example, the position initially tapped by the user's finger is the initial trigger position. The area to be deformed in the face image is then determined according to the initial trigger position, and includes the face area that needs to be deformed, among others.
In some possible embodiments, the image area within a preset range of the initial trigger position may be directly determined as the area to be deformed. In other possible embodiments, a correspondence between each trigger position and an area to be deformed may be constructed in advance; if the initial trigger position belongs to a pre-constructed trigger position 1, the area a corresponding to trigger position 1 is determined as the area to be deformed. The specific details of how the area to be deformed is determined according to the initial trigger position are described in subsequent embodiments and are not repeated here.
In step 102, a trigger track starting from the initial trigger position is obtained, and the track length and track direction of the trigger track are extracted.
It should be understood that this embodiment realizes deformed-face display based on the user's drag operation. Accordingly, the trigger track starting at the initial trigger position is obtained, for example, detected from capacitance values in the terminal device, and then the track length and track direction of the trigger track are extracted.
In one embodiment of the present disclosure, as shown in FIG. 2, the end trigger position of the trigger track can be identified, the direction from the initial trigger position to the end trigger position is taken as the track direction, and the track length is obtained as the distance from the initial trigger position to the end trigger position.
In another embodiment of the present disclosure, as shown in FIG. 3, multiple trigger points can be selected from the current trigger track at preset time intervals, and the direction from each trigger point to the previous adjacent trigger point is taken as a track direction. In this case there are multiple track directions, and the distance between each trigger point and the previous adjacent trigger point serves as the track length in the corresponding track direction.
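As a rough illustration of the first scheme (track direction from the initial trigger position to the end trigger position, track length as the distance between them), the following Python sketch uses hypothetical names:

```python
import math

def track_length_and_direction(start, end):
    """From the initial trigger position to the end trigger position,
    return the track length and a unit direction vector."""
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    length = math.hypot(dx, dy)
    if length == 0:
        return 0.0, (0.0, 0.0)
    return length, (dx / length, dy / length)
```

The second scheme (multiple trigger points sampled at preset time intervals) would simply apply the same helper to each pair of adjacent trigger points.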
In step 103, the displacement length and displacement direction of each pixel in the area to be deformed are determined according to the track length and track direction, respectively.
In the embodiments of the present disclosure, in order to deform the face in response to the user's trigger operation, the displacement length of each pixel in the area to be deformed is determined according to the track length, and the displacement direction of each pixel is determined according to the track direction, so that the user's trigger track is reflected in the movement of each pixel in the area to be deformed.
It should be noted that the track length is proportional to the displacement length of each pixel in the area to be deformed: the farther the finger is dragged, the larger the offset, the stronger the pull on the area to be deformed, and the more exaggerated the stretching of the face. The displacement direction may be the same as the track direction, or may be set differently according to the user's personal needs; for example, if the user sets the track direction to the left, the corresponding displacement direction is to the right, and so on.
In step 104, the position of each pixel in the area to be deformed is adjusted according to the displacement length and displacement direction, so as to generate a deformed face image.
In this embodiment, the position of each pixel in the area to be deformed is adjusted according to the displacement length and displacement direction to generate the deformed face image. For example, as shown in the left part of FIG. 4, when the user's initial trigger position is at the eyelid, the determined area to be deformed is the whole eye area; if the user's trigger track moves downward by a track length a, then, as shown in the right part of FIG. 4, the eye area of the face image is deformed.
In summary, the method for deforming a face image of the embodiments of the present disclosure obtains the initial trigger position selected by the user on the face image, determines the area to be deformed in the face image according to the initial trigger position, obtains the trigger track starting from the initial trigger position, extracts the track length and track direction of the trigger track, determines the displacement length of each pixel in the area to be deformed according to the track length and the displacement direction of each pixel according to the track direction, and adjusts the position of each pixel according to the displacement length and displacement direction to generate and display the deformed face image. Thus, based on interaction with the user, personalized deformation of the face image is satisfied, a face deformation function with a higher degree of freedom is realized, and user engagement with the product is increased.
It should be noted that in different application scenarios the area to be deformed is determined from the initial trigger position in different ways, illustrated as follows:
In some embodiments, as shown in FIG. 5, determining the area to be deformed in the face image according to the initial trigger position of the trigger operation includes:
In step 201, distances between the initial trigger position and multiple preset control areas are calculated.
In step 202, the preset control area with the smallest distance from the initial trigger position is determined as the area to be deformed.
In this embodiment, it can be understood that multiple control areas are set in the face image in advance, each corresponding to a face area; for example, control area 1 corresponds to the left-eye area and control area 2 to the mouth area. Different preset control areas may be calibrated by big data or by the user according to personal needs; in the latter case, the different preset control areas may be divided according to division tracks triggered by the user on the face image, among others.
After the initial trigger position is obtained, the distances between the initial trigger position and the multiple preset control areas are calculated. For example, as shown in FIG. 6-1, the center-point coordinates of the initial trigger position and of each preset control area are extracted, and the distance between the two center points is used as the distance between the initial trigger position and the corresponding preset control area. As another example, as shown in FIG. 6-2, the closest distance between the edge contour of the initial trigger position and the edge contour of each preset control area is calculated as the distance between the initial trigger position and the corresponding preset control area.
Then, the preset control area with the smallest distance from the initial trigger position is determined as the area to be deformed. Of course, in this embodiment, if the initial trigger position belongs to a certain preset control area, that preset control area is directly used as the area to be deformed; if the initial trigger position belongs to multiple preset control areas, the proportion of the initial trigger position's area within each control area to its total area is computed, and the preset control area with the highest proportion is used as the area to be deformed. In this way, a "point controls area" deformation effect is achieved.
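The center-point-distance scheme of FIG. 6-1 (pick the preset control area whose center is closest to the initial trigger position) can be sketched as follows; the names are illustrative assumptions:

```python
import math

def nearest_control_area(trigger_center, area_centers):
    """Return the name of the preset control area whose center point is
    closest to the center point of the initial trigger position.
    `area_centers` maps area names to (x, y) center coordinates."""
    return min(
        area_centers,
        key=lambda name: math.hypot(
            area_centers[name][0] - trigger_center[0],
            area_centers[name][1] - trigger_center[1],
        ),
    )
```

The edge-contour variant of FIG. 6-2 would replace the center-to-center distance with the minimum distance between the two contours.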
In some embodiments, as shown in FIG. 7, determining the area to be deformed in the face image according to the initial trigger position of the trigger operation includes:
In step 301, among multiple pieces of identification information preset on the face image, the target identification information corresponding to the initial trigger position is determined, where each piece of identification information corresponds to an image area on the face image.
The identification information may be one or a combination of an icon, text, and a dynamic effect. In this embodiment, as shown in FIG. 8, multiple pieces of identification information are preset on the face image (they may or may not be displayed). Each piece of identification information serves as the control module for one image area in the face image, and the correspondence between each piece of identification information and its image area is stored in a preset database. Continuing with FIG. 8, for identification information 1, for example, the corresponding image area is the forehead area.
In step 302, the image area corresponding to the target identification information is determined as the area to be deformed.
In this embodiment, the target identification information corresponding to the initial trigger position is determined; for example, identification information whose position overlaps the initial trigger position is taken as the target identification information, or the distances between the center point of the initial trigger position and the center points of the pieces of identification information are calculated and the identification information closest to that center point is taken as the target identification information.
As mentioned above, there is a correspondence between identification information and image areas; therefore, in this embodiment, the image area corresponding to the target identification information is determined as the area to be deformed by querying the preset database or the like. In this way, an image area to be deformed can be determined directly from the identification information triggered by the user, and since image areas and identification information are bound in advance, the efficiency of deformation is improved.
In some embodiments, as shown in FIG. 9, determining the area to be deformed in the face image according to the initial trigger position of the trigger operation includes:
Step 401: A target face key point is determined in the face image according to the initial trigger position.
In this embodiment, the target face key point is determined in the face image according to the initial trigger position. The target face key point may be the center point of the area corresponding to the initial trigger position, or a convolutional neural network may be trained in advance; as shown in FIG. 10, after the face image is input, the network outputs multiple face key points (101 in the figure), and the face key point associated with the initial trigger position is determined as the target face key point. When the initial trigger position overlaps one face key point, that key point is used as the target face key point; when it overlaps multiple face key points, one of them is selected at random as the target face key point; when it overlaps no face key point, the key point closest to the center point or an edge point of the initial trigger position is determined as the target face key point.
Step 402: A preset deformation radius corresponding to the target face key point is obtained.
In some possible embodiments, a fixed preset deformation radius is set for the face image. The preset deformation radius in this embodiment may be calibrated from preset big data, or set according to the size of the face image: the larger the face image, the larger the corresponding preset deformation radius. Of course, in some possible embodiments, the preset deformation radius may also be set according to the user's personal preference.
In other possible examples, the preset radii in the face image may differ. For example, the face image is divided into multiple areas, such as a forehead area, a cheek area, and a mouth area, and a preset deformation radius is set for each area, with different areas having different preset deformation radii to ensure different deformation effects. The area where the target face key point is located is then determined, and the preset deformation radius of that area is used as the preset deformation radius of the target face key point.
In this example, a corresponding preset deformation radius may also be preset for each key point in the face image, with the correspondence between key-point coordinates and preset deformation radii stored; this correspondence is queried to determine the preset deformation radius of the target face key point, further improving the flexibility of face deformation and ensuring diverse deformation effects.
Step 403: With the target face key point as the center and the preset deformation radius as the radius, the area to be deformed is determined.
In this embodiment, as shown in FIG. 11, the area to be deformed is determined with the target face key point as the center and the preset deformation radius as the circle radius; in addition to part of the face area, the area to be deformed may also include background areas and the like.
In summary, the method for deforming a face image of the embodiments of the present disclosure flexibly determines the area to be deformed corresponding to the initial trigger position according to the needs of the application scenario, which increases user engagement with the product.
To give those skilled in the art a clearer understanding of how the displacement length of each pixel in the area to be deformed is determined according to the track length in the embodiments of the present disclosure, specific examples are described below:
In some embodiments, the area to be deformed is determined as in the embodiment shown in FIG. 9 above. In that case, as shown in FIG. 12, determining the displacement length of each pixel in the area to be deformed according to the track length includes:
In step 501, a first distance between each pixel in the area to be deformed and the target face key point is calculated.
In some possible embodiments, this first distance is computed from the coordinates of each pixel and the coordinates of the target face key point. Since the target face key point lies in the face region of the face image, the deformation is guaranteed to take the face region as its main subject.
In step 502, a first deformation coefficient of each pixel is determined according to the first distance.
In step 503, the first product of the track length and the first deformation coefficient is calculated, and the first product is used as the displacement length of each pixel.
In some possible examples, the first distance and the first deformation coefficient are inversely related: the larger the first distance, the smaller the corresponding first deformation coefficient. After the first product of the track length and the first deformation coefficient is calculated as each pixel's displacement length, displacing by that length produces a liquify effect centered on the target face key point.
In another possible example, the first distance and the first deformation coefficient are proportional: the larger the first distance, the larger the corresponding first deformation coefficient. After the first product is calculated as each pixel's displacement length, displacing by that length produces a focus-blur effect.
In some embodiments, as shown in FIG. 13, determining the displacement length of each pixel in the area to be deformed according to the track length includes:
In step 601, a second distance between each pixel in the area to be deformed and a preset reference key point in the face image is calculated.
The preset reference key point in the face image is a preset key point in a fixed face area, which may be a key point on the tip of the nose or on the forehead, among others; different positions of the preset reference key point necessarily yield different deformation effects.
In step 602, a second deformation coefficient of each pixel is determined according to the second distance.
In step 603, the second product of the track length and the second deformation coefficient is calculated, and the second product is used as the displacement length of each pixel.
In some possible examples, the second distance and the second deformation coefficient are inversely related: the larger the second distance, the smaller the corresponding second deformation coefficient. After the second product of the track length and the second deformation coefficient is calculated as each pixel's displacement length, displacing by that length produces a liquify effect centered on the preset reference key point.
In another possible example, the second distance and the second deformation coefficient are proportional: the larger the second distance, the larger the corresponding second deformation coefficient. After the second product is calculated as each pixel's displacement length, displacing by that length produces a focus-blur effect.
In some embodiments, the track length is directly used as the displacement length, so that after each pixel moves by the displacement length, the area to be deformed moves as a whole.
Of course, in practice, to guarantee the quality of the deformed image, a limit value for the displacement length can also be set: once the displacement length exceeds the limit, the displacement length is set to the limit. The limit may differ according to the user's initial trigger position and can be set by personal preference; for example, when the initial trigger position is at the eyes, the corresponding limit may be larger.
Thus, in this embodiment the displacement of each pixel can be determined in different ways to achieve different deformation effects.
Further, after the displacement length is determined, different implementations may be used to adjust the position of each pixel in the area to be deformed according to the displacement length and the displacement direction.
In one embodiment of the present disclosure, as shown in FIG. 14, adjusting the position of each pixel in the area to be deformed according to the displacement length and the displacement direction includes:
In step 701, in response to the displacement direction belonging to a preset direction, the target adjustment position of each pixel is determined according to that pixel's displacement direction and displacement length, where the preset direction includes the horizontal and vertical directions of the face image.
In one embodiment of the present disclosure, the end trigger position of the trigger track is obtained, the direction from the initial trigger position to the end trigger position is taken as the displacement direction, and it is judged whether the displacement direction is horizontal or vertical; if so, the displacement direction belongs to the preset direction. In other words, in this embodiment it is determined whether the displacement direction is one of the four cardinal directions (up, down, left, right) to decide whether it satisfies the preset direction.
Of course, in different application scenarios the preset direction may instead be any one to three of up, down, left, and right, or any other directions, set according to the needs of the scene.
Specifically, if the displacement direction belongs to the preset direction, the target adjustment position of each pixel is determined according to that pixel's displacement direction and displacement length. As shown in FIG. 15-1, the target position of each pixel is the position at the displacement length along the corresponding single direction.
In step 702, each pixel is adjusted to the target adjustment position.
For example, as shown in FIG. 15-2, when the user triggers the identification information 1 on the forehead shown in FIG. 8 and the corresponding displacement points straight up in the face image, each pixel in the area to be deformed is adjusted to its target position, yielding a deformed face image with an elongated forehead.
In an embodiment of the present disclosure, still referring to FIG. 14, after step 701 above, the method further includes:

In step 703, in response to the displacement direction not belonging to the preset directions, the displacement direction is decomposed into a horizontal direction and a vertical direction.

It should be understood that, within one trigger trajectory, the corresponding displacement direction may point roughly lower-left, lower-right, upper-left, upper-right, and so on, and each such direction can be decomposed into a horizontal component and a vertical component; for example, lower-left can be decomposed into left and down.

In step 704, a first displacement in the horizontal direction and a second displacement in the vertical direction are determined according to the displacement length.

In an embodiment of the present disclosure, the first displacement in the horizontal direction and the second displacement in the vertical direction are determined according to the displacement length. As shown in FIG. 16-1, if the displacement direction points lower-left, it can be decomposed into left and down, and the displacement length is decomposed in a Cartesian coordinate system into a first displacement to the left and a second displacement downward.
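Step 704 is ordinary vector resolution. A sketch, assuming the displacement direction is given as an angle measured from the positive x-axis (this angle representation is an assumption; the description only speaks of a direction):

```python
import math

def split_displacement(length, direction_rad):
    """Decompose a displacement of the given length along an arbitrary
    direction into a horizontal component (first displacement) and a
    vertical component (second displacement) in a Cartesian frame."""
    dx = length * math.cos(direction_rad)  # first displacement (horizontal)
    dy = length * math.sin(direction_rad)  # second displacement (vertical)
    return dx, dy
```

For a lower-left drag, both components come out negative, matching the left-plus-down decomposition in FIG. 16-1.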
In step 705, each pixel in the to-be-deformed area is controlled to move according to the first displacement in the horizontal direction and the second displacement in the vertical direction, so as to generate and display the deformed face image.

In this embodiment, the deformed face image may be generated using a framebuffer object (FBO). First, each pixel in the to-be-deformed area is moved horizontally by the first displacement to a first pixel position, yielding a reference face image of the face image; this reference face image is not displayed at this point but is cached off-screen. Next, the second pixel position of each pixel is determined by moving vertically by the second displacement; with the reference face image as the input texture, each pixel of the reference face image is adjusted from its first pixel position to its second pixel position. In this way, only the face image finally adjusted to the second pixel positions is rendered for display, which greatly reduces image-processing cost and improves image-generation efficiency.
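The two-pass idea can be illustrated on a plain pixel array: shift horizontally into an intermediate buffer that is never shown, then shift that buffer vertically and display only the result. This is a CPU stand-in for the FBO-based GPU pipeline described above, and the wrap-around behavior of `np.roll` is just a simplification, not how a renderer would treat image borders.

```python
import numpy as np

def two_pass_shift(image, dx, dy):
    """Two-pass integer shift: pass 1 moves pixels horizontally into an
    off-screen buffer (not displayed); pass 2 moves that buffer
    vertically, and only this final result would be rendered.
    Shifts wrap around via np.roll."""
    buffer = np.roll(image, dx, axis=1)   # pass 1: horizontal, kept off-screen
    result = np.roll(buffer, dy, axis=0)  # pass 2: vertical, this is rendered
    return result
```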
Thus, the face-image deformation method of this embodiment can perform arbitrary face deformation in response to the user's drag and other trigger operations. After the trigger trajectory of each trigger operation ends, the deformation effect is retained rather than rebounded, so the face image can reflect multiple trigger operations by the user; for example, in FIG. 16-2, the user can deform the face by dragging identification information 1, 4, and 5 to achieve the personalized effect shown in FIG. 16-2.

In summary, the face-image deformation method of the embodiments of the present disclosure can flexibly adjust the position of each pixel in the to-be-deformed area, supporting the generation of personalized deformed face images.

To implement the above embodiments, the embodiments of the present disclosure further provide a face-image deformation apparatus.

FIG. 17 is a structural schematic diagram of a face-image deformation apparatus according to an exemplary embodiment. As shown in FIG. 17, the apparatus includes a first determination module 171, an extraction module 172, a second determination module 173, and a deformation adjustment module 174, where:

the first determination module 171 is configured to obtain an initial trigger position selected by the user on a face image, and determine a to-be-deformed area in the face image according to the initial trigger position;

the extraction module 172 is configured to obtain a trigger trajectory taking the initial trigger position as a trigger starting point, and extract a trajectory length and a trajectory direction of the trigger trajectory;

the second determination module 173 is configured to determine, according to the trajectory length and the trajectory direction, a displacement length and a displacement direction of each pixel in the to-be-deformed area respectively;

the deformation adjustment module 174 is configured to adjust the position of each pixel in the to-be-deformed area according to the displacement length and the displacement direction, so as to generate the deformed face image.

The specific operations of each module of the apparatus in the above embodiments have been described in detail in the method embodiments and will not be elaborated here.

In summary, the face-image deformation apparatus of the embodiments of the present disclosure obtains the initial trigger position selected by the user on the face image, determines the to-be-deformed area in the face image according to the initial trigger position, then obtains the trigger trajectory starting from the initial trigger position, extracts the trajectory length and trajectory direction of the trigger trajectory, determines the displacement length of each pixel in the to-be-deformed area according to the trajectory length and the displacement direction of each pixel according to the trajectory direction, and adjusts the position of each pixel in the to-be-deformed area according to the displacement length and the displacement direction, so as to generate and display the deformed face image. In this way, personalized deformation of the face image is performed in response to the user's triggers, a face-deformation function with a higher degree of freedom is realized, and user engagement with the product is increased.
It should be noted that, in different application scenarios, the first determination module 171 determines the to-be-deformed area in the face image according to the initial trigger position in different ways; examples are described below.

In some embodiments, the first determination module 171 is specifically configured to:

calculate distances between the initial trigger position and a plurality of preset control areas;

determine the preset control area with the smallest distance to the initial trigger position as the to-be-deformed area.
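The nearest-control-area selection can be sketched as follows, assuming each preset control area is represented by a named center point (this representation, and the use of the center for the distance, are assumptions for illustration):

```python
def nearest_region(touch, region_centers):
    """Pick the preset control area whose center is closest to the
    initial trigger position. region_centers is a list of
    (name, (x, y)) pairs; squared Euclidean distance suffices
    for choosing the minimum."""
    return min(
        region_centers,
        key=lambda item: (item[1][0] - touch[0]) ** 2
                       + (item[1][1] - touch[1]) ** 2,
    )[0]
```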
In some embodiments, the first determination module 171 is specifically configured to:

determine, among a plurality of pieces of identification information preset on the face image, target identification information corresponding to the initial trigger position, wherein each piece of identification information corresponds to an image area on the face image;

query a preset database, and determine the image area corresponding to the target identification information as the to-be-deformed area.

In some embodiments, as shown in FIG. 18 on the basis of FIG. 17, the first determination module 171 includes a first determination unit 1711, a first obtaining unit 1712, and a second determination unit 1713, where:

the first determination unit 1711 is configured to determine a target face key point in the face image according to the initial trigger position;

the first obtaining unit 1712 is configured to obtain a preset deformation radius corresponding to the target face key point;

the second determination unit 1713 is configured to determine the to-be-deformed area with the target face key point as the circle center and the preset deformation radius as the circle radius.

In an embodiment of the present disclosure, the first determination unit 1711 is specifically configured to:

input the face image into a pre-trained convolutional neural network to generate a plurality of face key points;

determine the face key point associated with the initial trigger position as the target face key point.
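A minimal sketch of this association step, assuming "associated with" means the detected key point nearest the initial trigger position (one plausible reading; the description does not fix the association rule):

```python
def target_keypoint(touch, keypoints):
    """Among the face key points produced by the CNN, select the one
    associated with the initial trigger position; here 'associated'
    is taken to mean closest in squared Euclidean distance."""
    return min(
        keypoints,
        key=lambda kp: (kp[0] - touch[0]) ** 2 + (kp[1] - touch[1]) ** 2,
    )
```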
In summary, the face-image deformation apparatus of the embodiments of the present disclosure flexibly determines the to-be-deformed area corresponding to the initial trigger position according to the needs of the application scenario, increasing user engagement with the product.
To make it clearer to those skilled in the art how the displacement length of each pixel in the to-be-deformed area is determined according to the trajectory length in the embodiments of the present disclosure, specific examples are described below.

In some embodiments, the second determination module 173 is specifically configured to:

calculate a first distance between each pixel in the to-be-deformed area and the target face key point;

determine a first deformation coefficient corresponding to each pixel according to the first distance;

calculate a first product of the trajectory length and the first deformation coefficient, and take the first product as the displacement length of each pixel.

In some embodiments, the second determination module 173 is specifically configured to:

calculate a second distance between each pixel in the to-be-deformed area and a preset reference key point in the face image;

determine a second deformation coefficient corresponding to each pixel according to the second distance;

calculate a second product of the trajectory length and the second deformation coefficient, and take the second product as the displacement length of each pixel.

Thus, in this embodiment, the displacement of each pixel can be determined in different ways to achieve different deformation effects.

Further, after the displacement length is determined, different implementations may be used to adjust the position of each pixel in the to-be-deformed area according to the displacement length and the displacement direction.
In an embodiment of the present disclosure, as shown in FIG. 19 on the basis of FIG. 17, the deformation adjustment module 174 includes a third determination unit 1741 and a first adjustment unit 1742, where:

the third determination unit 1741 is configured to, in response to the displacement direction belonging to a preset direction, determine a target adjusted position of each pixel according to the displacement direction and the displacement length of each pixel, wherein the preset direction includes the horizontal direction and the vertical direction of the face image;

the first adjustment unit 1742 is configured to adjust each pixel to the target adjusted position.

In an embodiment of the present disclosure, referring to FIG. 20 on the basis of FIG. 19, the deformation adjustment module 174 further includes a fourth determination unit 1743, a fifth determination unit 1744, and a second adjustment unit 1745, where:

the fourth determination unit 1743 is configured to, in response to the displacement direction not belonging to the preset direction, decompose the displacement direction into a horizontal direction and a vertical direction;

the fifth determination unit 1744 is configured to determine a first displacement in the horizontal direction and a second displacement in the vertical direction according to the displacement length;

the second adjustment unit 1745 is configured to control each pixel in the to-be-deformed area to move according to the first displacement in the horizontal direction and the second displacement in the vertical direction, so as to generate and display the deformed face image.

The specific operations of each module of the apparatus in the above embodiments have been described in detail in the method embodiments and will not be elaborated here.

In summary, the face-image deformation apparatus of the embodiments of the present disclosure can flexibly adjust the position of each pixel in the to-be-deformed area, supporting the generation of personalized deformed face images.
To implement the above embodiments, the present disclosure further provides an electronic device. FIG. 21 is a block diagram of an electronic device according to the present disclosure.

As shown in FIG. 21, the electronic device 200 includes:

a memory 210 and a processor 220, and a bus 230 connecting different components (including the memory 210 and the processor 220). The memory 210 stores a computer program, and when the processor 220 executes the program, the face-image deformation method of the embodiments of the present disclosure is implemented.

The bus 230 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.

The electronic device 200 typically includes a variety of electronic-device-readable media. These media may be any available media accessible by the electronic device 200, including volatile and non-volatile media, and removable and non-removable media.

The memory 210 may also include computer-system-readable media in the form of volatile memory, such as a random access memory (RAM) 240 and/or a cache memory 250. The electronic device 200 may further include other removable/non-removable, volatile/non-volatile computer-system storage media. By way of example only, the storage system 260 may be used to read and write non-removable, non-volatile magnetic media (not shown in FIG. 21, commonly referred to as a "hard drive"). Although not shown in FIG. 21, a disk drive for reading and writing removable non-volatile magnetic disks (e.g., a "floppy disk") and an optical disk drive for reading and writing removable non-volatile optical disks (e.g., CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to the bus 230 through one or more data-media interfaces. The memory 210 may include at least one program product having a set of (e.g., at least one) program modules configured to perform the functions of the embodiments of the present disclosure.

A program/utility 280 having a set of (at least one) program modules 270 may be stored, for example, in the memory 210. Such program modules 270 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment. The program modules 270 generally perform the functions and/or methods of the embodiments described in the present disclosure.

The electronic device 200 may also communicate with one or more external devices 290 (e.g., a keyboard, a pointing device, a display 291, etc.), with one or more devices that enable a user to interact with the electronic device 200, and/or with any device (e.g., a network card, a modem, etc.) that enables the electronic device 200 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 292. Moreover, the electronic device 200 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 293. As shown in FIG. 21, the network adapter 293 communicates with the other modules of the electronic device 200 through the bus 230. It should be understood that, although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 200, including but not limited to microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
The processor 220 executes various functional applications and data processing by running the programs stored in the memory 210.

It should be noted that, for the implementation process and technical principles of the electronic device of this embodiment, reference is made to the foregoing explanation of the face-image deformation method of the embodiments of the present disclosure, which is not repeated here.

In summary, the electronic device of the embodiments of the present disclosure obtains the initial trigger position selected by the user on the face image, determines the to-be-deformed area in the face image according to the initial trigger position, then obtains the trigger trajectory of the initial trigger position, extracts the trajectory length and trajectory direction of the trigger trajectory, determines the displacement length of each pixel in the to-be-deformed area according to the trajectory length and the displacement direction of each pixel according to the trajectory direction, and adjusts the position of each pixel in the to-be-deformed area according to the displacement length and the displacement direction, so as to generate and display the deformed face image. In this way, through interaction with the user, personalized deformation of the face image is achieved, a face-deformation function with a higher degree of freedom is realized, and user engagement with the product is increased.

To implement the above embodiments, the present disclosure further provides a non-volatile computer-readable storage medium. When the instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the face-image deformation method described above.

To implement the above embodiments, the present disclosure further provides a computer program product. When the computer program is executed by a processor of an electronic device, the electronic device is enabled to perform the face-image deformation method described above.

All embodiments of the present disclosure may be carried out independently or in combination with other embodiments, and all fall within the scope of protection claimed by the present disclosure.

Other implementations of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the disclosure. The present disclosure is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.

It should be understood that the present disclosure is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (18)

  1. A face image deformation method, characterized by comprising:
    obtaining an initial trigger position selected by a user on a face image, and determining a to-be-deformed area in the face image according to the initial trigger position;
    obtaining a trigger trajectory taking the initial trigger position as a trigger starting point, and extracting a trajectory length and a trajectory direction of the trigger trajectory;
    determining, according to the trajectory length and the trajectory direction, a displacement length and a displacement direction of each pixel in the to-be-deformed area respectively;
    adjusting a position of each pixel in the to-be-deformed area according to the displacement length and the displacement direction, so as to generate a deformed face image.
  2. The method according to claim 1, wherein the determining the to-be-deformed area in the face image according to the initial trigger position comprises:
    calculating distances between the initial trigger position and a plurality of preset control areas;
    determining the preset control area with the smallest distance to the initial trigger position as the to-be-deformed area.
  3. The method according to claim 1, wherein the determining the to-be-deformed area in the face image according to the initial trigger position comprises:
    determining, among a plurality of pieces of identification information preset on the face image, target identification information corresponding to the initial trigger position, wherein each piece of identification information among the plurality of pieces of identification information corresponds to an image area on the face image;
    determining the image area corresponding to the target identification information as the to-be-deformed area.
  4. The method according to claim 1, wherein the determining the to-be-deformed area in the face image according to the initial trigger position comprises:
    determining a target face key point in the face image according to the initial trigger position;
    obtaining a preset deformation radius corresponding to the target face key point;
    determining the to-be-deformed area with the target face key point as a circle center and the preset deformation radius as a circle radius.
  5. The method according to claim 4, wherein the determining the displacement length of each pixel in the to-be-deformed area according to the trajectory length comprises:
    calculating a first distance between each pixel in the to-be-deformed area and the target face key point;
    determining a first deformation coefficient corresponding to each pixel according to the first distance;
    calculating a first product of the trajectory length and the first deformation coefficient, and taking the first product as the displacement length of each pixel.
  6. The method according to claim 1, wherein the determining the displacement length of each pixel in the to-be-deformed area according to the trajectory length comprises:
    calculating a second distance between each pixel in the to-be-deformed area and a preset reference key point in the face image;
    determining a second deformation coefficient corresponding to each pixel according to the second distance;
    calculating a second product of the trajectory length and the second deformation coefficient, and taking the second product as the displacement length of each pixel.
  7. The method according to any one of claims 1-6, wherein the adjusting the position of each pixel in the to-be-deformed area according to the displacement length and the displacement direction comprises:
    in response to the displacement direction belonging to a preset direction, determining a target adjusted position of each pixel according to the displacement direction and the displacement length of each pixel, wherein the preset direction comprises a horizontal direction and a vertical direction of the face image;
    adjusting each pixel to the target adjusted position.
  8. The method according to claim 7, further comprising:
    in response to the displacement direction not belonging to the preset direction, decomposing the displacement direction into a horizontal direction and a vertical direction;
    determining a first displacement in the horizontal direction and a second displacement in the vertical direction according to the displacement length;
    controlling each pixel in the to-be-deformed area to move according to the first displacement in the horizontal direction and the second displacement in the vertical direction, so as to generate and display the deformed face image.
  9. A face image deformation apparatus, characterized by comprising:
    a first determination module configured to obtain an initial trigger position selected by a user on a face image, and determine a to-be-deformed area in the face image according to the initial trigger position;
    an extraction module configured to obtain a trigger trajectory taking the initial trigger position as a trigger starting point, and extract a trajectory length and a trajectory direction of the trigger trajectory;
    a second determination module configured to determine, according to the trajectory length and the trajectory direction, a displacement length and a displacement direction of each pixel in the to-be-deformed area respectively;
    a deformation adjustment module configured to adjust a position of each pixel in the to-be-deformed area according to the displacement length and the displacement direction, so as to generate a deformed face image.
  10. The apparatus according to claim 9, wherein the first determination module is specifically configured to:
    calculate distances between the initial trigger position and a plurality of preset control areas;
    determine the preset control area with the smallest distance to the initial trigger position as the to-be-deformed area.
  11. The apparatus according to claim 9, wherein the first determination module is specifically configured to:
    determine, among a plurality of pieces of identification information preset on the face image, target identification information corresponding to the initial trigger position, wherein each piece of identification information among the plurality of pieces of identification information corresponds to an image area on the face image;
    determine the image area corresponding to the target identification information as the to-be-deformed area.
  12. The apparatus according to claim 9, wherein the first determination module comprises:
    a first determination unit configured to determine a target face key point in the face image according to the initial trigger position;
    an obtaining unit configured to obtain a preset deformation radius corresponding to the target face key point;
    a second determination unit configured to determine the to-be-deformed area with the target face key point as a circle center and the preset deformation radius as a circle radius.
  13. The apparatus according to claim 12, wherein the second determination module is specifically configured to:
    calculate a first distance between each pixel in the to-be-deformed area and the target face key point;
    determine a first deformation coefficient corresponding to each pixel according to the first distance;
    calculate a first product of the trajectory length and the first deformation coefficient, and take the first product as the displacement length of each pixel.
  14. The apparatus according to claim 9, wherein the second determination module is specifically configured to:
    calculate a second distance between each pixel in the to-be-deformed area and a preset reference key point in the face image;
    determine a second deformation coefficient corresponding to each pixel according to the second distance;
    calculate a second product of the trajectory length and the second deformation coefficient, and take the second product as the displacement length of each pixel.
  15. The apparatus according to any one of claims 9-14, wherein the deformation adjustment module comprises:
    a third determination unit configured to, in response to the displacement direction belonging to a preset direction, determine a target adjusted position of each pixel according to the displacement direction and the displacement length of each pixel, wherein the preset direction comprises a horizontal direction and a vertical direction of the face image;
    a first adjustment unit configured to adjust each pixel to the target adjusted position.
  16. The apparatus according to claim 15, wherein the deformation adjustment module further comprises:
    a fourth determination unit configured to, in response to the displacement direction not belonging to the preset direction, decompose the displacement direction into a horizontal direction and a vertical direction;
    a fifth determination unit configured to determine a first displacement in the horizontal direction and a second displacement in the vertical direction according to the displacement length;
    a second adjustment unit configured to control each pixel in the to-be-deformed area to move according to the first displacement in the horizontal direction and the second displacement in the vertical direction, so as to generate and display the deformed face image.
  17. An electronic device, characterized by comprising:
    a processor;
    a memory configured to store instructions executable by the processor;
    wherein the processor is configured to execute the instructions to implement the face image deformation method according to any one of claims 1-8.
  18. A non-volatile computer-readable storage medium, wherein, when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the face image deformation method according to any one of claims 1-8.
PCT/CN2021/104093 2020-07-27 2021-07-01 Face image deformation method and deformation apparatus WO2022022220A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010732569.2 2020-07-27
CN202010732569.2A CN113986105B (zh) 2020-07-27 Face image deformation method and apparatus, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2022022220A1 true WO2022022220A1 (zh) 2022-02-03

Family

ID=79731505

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/104093 WO2022022220A1 (zh) 2020-07-27 2021-07-01 Face image deformation method and deformation apparatus

Country Status (2)

Country Link
CN (1) CN113986105B (zh)
WO (1) WO2022022220A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118015685A * 2024-04-09 2024-05-10 湖北楚天龙实业有限公司 All-in-one card identification method and system

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN118096870B * 2024-03-27 2024-07-09 深圳大学 Measurement camera calculation method, storage medium, and electronic apparatus

Citations (4)

Publication number Priority date Publication date Assignee Title
CN104965654A * 2015-06-15 2015-10-07 广东小天才科技有限公司 Avatar adjustment method and system
US20180101987A1 * 2016-10-11 2018-04-12 Disney Enterprises, Inc. Real time surface augmentation using projected light
CN109087239A * 2018-07-25 2018-12-25 腾讯科技(深圳)有限公司 Face image processing method, apparatus, and storage medium
CN109242765A * 2018-08-31 2019-01-18 腾讯科技(深圳)有限公司 Face image processing method, apparatus, and storage medium

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN107154030B * 2017-05-17 2023-06-09 腾讯科技(上海)有限公司 Image processing method and apparatus, electronic device, and storage medium
CN110069191B * 2019-01-31 2021-03-30 北京字节跳动网络技术有限公司 Terminal-based image drag deformation implementation method and apparatus
CN110502993B * 2019-07-18 2022-03-25 北京达佳互联信息技术有限公司 Image processing method and apparatus, electronic device, and storage medium


Also Published As

Publication number Publication date
CN113986105A (zh) 2022-01-28
CN113986105B (zh) 2024-05-31


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application. Ref document number: 21849195; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase. Ref country code: DE
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established. Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25.05.2023)
122 Ep: pct application non-entry in european phase. Ref document number: 21849195; Country of ref document: EP; Kind code of ref document: A1