WO2020019664A1 - Face-based deformed image generation method and apparatus - Google Patents

Face-based deformed image generation method and apparatus (基于人脸的形变图像生成方法和装置)

Info

Publication number
WO2020019664A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
deformation
generating
feature points
face
Prior art date
Application number
PCT/CN2018/123640
Other languages
English (en)
French (fr)
Inventor
林鑫
袁芳
Original Assignee
北京微播视界科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京微播视界科技有限公司
Publication of WO2020019664A1 publication Critical patent/WO2020019664A1/zh


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/18 Image warping, e.g. rearranging pixels individually
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map

Definitions

  • The present disclosure relates to the field of image processing, and in particular to a face-based deformed image generation method, apparatus, electronic device, and computer-readable storage medium.
  • With the development of computer technology, smart terminals can be used to listen to music, play games, chat online, take photos, and so on.
  • Camera resolution has reached more than ten million pixels, with high definition comparable to that of professional cameras.
  • Additional functions can be obtained by downloading an application (Application, abbreviated APP) from the network, for example APPs that implement functions such as low-light detection, beauty camera, and super pixel.
  • The beauty functions of smart terminals usually include effects such as skin tone adjustment, skin smoothing, eye enlargement, and face slimming, which apply the same degree of beautification to all faces recognized in the image.
  • There are also APPs that can deform human faces.
  • However, the current face deformation functions include only preset effects: the user can only select a deformation effect directly and cannot edit it flexibly.
  • In a first aspect, an embodiment of the present disclosure provides a face-based deformed image generation method, including: obtaining a standard face image, where the standard face image includes a plurality of feature points located at default positions; selecting at least one of the plurality of feature points; dragging the selected feature point from its default position to a first position; and generating a first deformed image of the standard face image according to the first position.
  • Further, the feature points have mirrored feature points, and selecting at least one of the plurality of feature points includes: selecting at least one of the plurality of feature points, with the mirrored feature point of that feature point selected at the same time.
  • Further, dragging the selected feature point from the default position to the first position includes: moving the selected feature point to the first position, while the mirrored feature point of that feature point is simultaneously moved to the mirror position of the first position.
  • Further, generating the first deformed image of the standard face image according to the first position includes: generating the deformed image of the standard face image according to the first position and the mirror position of the first position.
  • Further, generating the first deformed image of the standard face image according to the first position includes: generating a deformation coefficient from the first position and the default position, and moving the pixels around the selected feature point according to the deformation coefficient to generate the deformed image of the standard face image.
  • Further, the method includes: selecting a deformation intensity, where the deformation intensity indicates the degree of deformation.
  • Further, the method includes: receiving a command to increase the deformation, and continuing to drag feature points on the basis of the deformed image to generate a second deformed image.
  • Further, the method includes: setting a serial number for the standard face image.
  • Further, the method includes: identifying a human face in an image collected by an image sensor, and generating a deformed image of that face according to the first deformed image.
  • Further, the method includes: identifying a plurality of faces in an image collected by the image sensor, numbering the faces in recognition order, and generating deformed images of the faces according to the numbers, the serial number, and the first deformed image.
  • In a second aspect, an embodiment of the present disclosure provides a face-based deformed image generation device, including: an obtaining module for obtaining a standard face image, where the standard face image includes multiple feature points located at default positions; a feature point selection module for selecting at least one of the plurality of feature points; a feature point dragging module for dragging the selected feature point from its default position to a first position; and a deformation module configured to generate a first deformed image of the standard face image according to the first position.
  • Further, the feature point selection module is configured to select at least one of the plurality of feature points, with the mirrored feature point of that feature point selected at the same time.
  • Further, the feature point dragging module is configured to move the selected feature point to a first position, while the mirrored feature point of that feature point is simultaneously moved to the mirror position of the first position.
  • Further, the deformation module is configured to generate the deformed image of the standard face image according to the first position and the mirror position of the first position.
  • Further, the deformation module is configured to generate a deformation coefficient from the first position and the default position, and to move the pixels around the selected feature point according to the deformation coefficient to generate the deformed image of the standard face image.
  • Further, the device includes: a deformation intensity selection module configured to select a deformation intensity, where the deformation intensity indicates the degree of deformation.
  • Further, the device includes: a deformation increasing module configured to receive a command to increase the deformation and to continue dragging feature points on the basis of the deformed image to generate a second deformed image.
  • Further, the device includes: a serial number setting module for setting a serial number for the standard face image.
  • Further, the device includes a first face recognition module and a first deformation mapping module, where the first face recognition module is used to recognize a human face in an image collected by an image sensor, and the first deformation mapping module is used to generate a deformed image of that face according to the first deformed image.
  • Further, the device includes a second face recognition module, a numbering module, and a second deformation mapping module, where the second face recognition module is used to identify multiple faces in an image collected by the image sensor, the numbering module is configured to number the faces in recognition order, and the second deformation mapping module is configured to generate deformed images of the faces according to the numbers, the serial number, and the first deformed image.
  • In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute any of the face-based deformed image generation methods of the first aspect.
  • In a fourth aspect, an embodiment of the present disclosure provides a non-transitory computer-readable storage medium storing computer instructions, where the computer instructions are used to cause a computer to execute any of the face-based deformed image generation methods of the first aspect.
  • Embodiments of the present disclosure provide a face-based deformed image generation method, apparatus, electronic device, and computer-readable storage medium.
  • The face-based deformed image generation method includes: obtaining a standard face image, where the standard face image includes a plurality of feature points located at default positions; selecting at least one of the plurality of feature points; dragging the selected feature point from its default position to a first position; and generating a first deformed image of the standard face image according to the first position.
  • FIG. 1a is a flowchart of Embodiment 1 of the face-based deformed image generation method provided by an embodiment of the present disclosure;
  • FIG. 1b is a schematic diagram of a standard face image provided by an embodiment of the present disclosure;
  • FIG. 2 is a flowchart of Embodiment 2 of the face-based deformed image generation method provided by an embodiment of the present disclosure;
  • FIG. 3 is a schematic structural diagram of Embodiment 1 of the face-based deformed image generation device provided by an embodiment of the present disclosure;
  • FIG. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
  • FIG. 5 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present disclosure;
  • FIG. 6 is a schematic structural diagram of a face-based deformed image generation terminal according to an embodiment of the present disclosure.
  • FIG. 1a is a flowchart of Embodiment 1 of the face-based deformed image generation method provided by an embodiment of the present disclosure.
  • The deformed image generation method provided by this embodiment may be performed by a face-based deformed image generation device, which may be implemented as software or as a combination of software and hardware, and which may be integrated in a device of an image processing system, such as an image processing server or an image processing terminal device. As shown in FIG. 1a, the method includes the following steps:
  • Step S101: Obtain a standard face image, where the standard face image includes a plurality of feature points located at default positions.
  • The standard face image is a preset face image; generally, it is a frontal face image with preset feature points. The number of feature points is configurable, and the user can freely set the number of feature points required.
  • The feature points of an image are points that have distinctive characteristics, effectively reflect the essential features of the image, and identify the target object in the image. If the target object is a human face, the key points of the face need to be obtained; if the target image is a house, the key points of the house are needed. The human face is taken as an example to illustrate how key points are obtained.
  • The face contour mainly includes five parts: the eyebrows, eyes, nose, mouth, and cheeks, and sometimes also the pupils and nostrils. A reasonably complete description of the face contour generally requires about 60 key points. If only the basic structure is described, without detailing each part or describing the cheeks, the number of key points can be reduced accordingly; if the pupils, nostrils, or finer facial features need to be described, the number of key points can be increased.
  • Extracting face key points from the image amounts to finding the position coordinates of each face contour key point in the face image, that is, key point positioning. This process is based on the features corresponding to the key points: after obtaining image features that clearly identify a key point, the image is searched and compared against those features to accurately locate the key point's position.
  • Since feature points occupy only a very small area of the image (usually only a few to a few dozen pixels), the region occupied by their corresponding features is also usually very limited and local.
  • Two feature extraction approaches are currently used: (1) one-dimensional range image feature extraction along the direction perpendicular to the contour; (2) two-dimensional range image feature extraction over a square neighborhood of the feature point.
  • These can be implemented in many ways, such as ASM and AAM methods, statistical energy function methods, regression analysis methods, deep learning methods, classifier methods, and batch extraction methods.
  • The above implementations differ in the number of key points used, accuracy, and speed, and are suitable for different application scenarios.
  • In a typical scenario, the face image includes 106 feature points.
  • The user can prune feature points as needed; for example, if 10 feature points originally represented an eye, the user can remove some of them and represent the eye with 4 main feature points.
  • A feature point can be represented by its number and position, such as 1 (0, 0), which indicates that the default position of feature point 1 is at coordinate (0, 0).
  • In one embodiment, the user may retain only the feature points of a certain image region, such as only the eye feature points or only the nose feature points, in order to process that region alone.
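As a minimal illustration of the numbered feature points just described, the following sketch shows one way such points might be represented in code. It is not taken from the patent; the class and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class FeaturePoint:
    """A numbered facial feature point with a default and a current position."""
    number: int                  # e.g. 1..106 in the typical scenario above
    default_pos: tuple           # (x, y) default position, e.g. (0, 0)
    current_pos: tuple = None    # set when the user drags the point

    def __post_init__(self):
        if self.current_pos is None:
            self.current_pos = self.default_pos

# Feature point 1 with default position at coordinate (0, 0), as in the text.
p1 = FeaturePoint(number=1, default_pos=(0.0, 0.0))
```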
  • Step S102: Select at least one of the plurality of feature points.
  • One feature point may be selected, or several.
  • The user may issue a selection command through an input device such as a mouse or keyboard, for example clicking a feature point displayed on the display device with the mouse, or typing a feature point number on the keyboard to select the corresponding feature point.
  • Multiple feature points can be circled by dragging a selection box with the mouse, or selected through a multi-select command.
  • The selected state of a feature point can be shown through a color change; for example, the default state of a feature point is white, and when selected it turns blue to indicate that it has been selected.
  • When multiple feature points are selected, a selection box is displayed around them, which may be the smallest rectangle containing all of the selected feature points, as in the sketch below.
  • The above methods for selecting feature points and displaying their state are merely examples, not limitations; in fact, any method of selecting feature points and displaying their state that is suitable for the present disclosure may be used.
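The "smallest rectangle containing the selected points" mentioned above is an axis-aligned bounding box. A minimal sketch, assuming points are (x, y) tuples:

```python
def selection_box(points):
    """Smallest axis-aligned rectangle containing all selected feature points.

    Returns (x_min, y_min, x_max, y_max).
    """
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), min(ys), max(xs), max(ys))

print(selection_box([(2, 3), (5, 1), (4, 6)]))  # (2, 1, 5, 6)
```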
  • Step S103: Move the selected feature point from its default position to a first position.
  • The user may move the feature point selected in step S102 to the first position through human-computer interaction; typically, the user drags the selected feature point to the first position with the mouse.
  • The first position may lie inside or outside the face image.
  • The user may also type the coordinates of the first position on the keyboard, so that the selected feature point moves directly to the first position.
  • The above methods of moving feature points are merely examples, not limitations; in fact, any method of moving feature points that is suitable for the present disclosure may be used.
  • Step S104: Generate a first deformed image of the standard face image according to the first position.
  • After the feature points are moved, the standard face image deforms because of their movement, forming a deformed standard face image, that is, the first deformed image.
  • In this embodiment, each feature point has a correlation coefficient with the other feature points.
  • The correlation coefficient defines the proportion by which other feature points follow the movement of the selected feature point: the higher the coefficient, the larger the following movement; the lower the coefficient, the smaller the following movement.
  • For example, if the correlation coefficient between feature point 1 and feature point 2 is 0.5, then when feature point 1 moves 1 cm, feature point 2 moves 0.5 cm; if the coefficient is 0.1, feature point 2 moves 0.1 cm when feature point 1 moves 1 cm. The coefficient may also be negative, in which case the associated feature point moves by that proportion in the opposite direction.
  • In one embodiment, the standard face image is triangulated according to the feature points, and each feature point is a vertex of a triangle in the triangulation.
  • Feature points on the same triangle have the highest correlation coefficient; typically their coefficient can be set to 1. For feature points on different triangles, the coefficient decreases with distance, where the distance can be the number of feature points apart: for example, the coefficient of a feature point one point away is set to 0.5, that of a feature point two points away is 0.25, and so on. That is, when a feature point is moved, the other feature points on the same triangle keep their positions relative to it unchanged, feature points one point away move by 0.5 of the distance, and feature points two points away move by 0.25 of the distance.
  • The pixels in the standard face image move according to rules similar to those of the feature points to generate the first deformed image. Specifically, in the standard face image triangulated as described above, if a pixel lies in the triangle containing the selected feature point, it follows the selected feature point at a 1:1 ratio; if it lies in a triangle one feature point away, it follows at a ratio of 0.5; and if it lies in a triangle two feature points away, it follows at a ratio of 0.25.
  • The above following ratios may be set as required; the embodiments in the present disclosure are merely examples and do not constitute limitations. A sketch of this follow-movement rule appears below.
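The following is a minimal sketch of the follow-movement rule, assuming the correlation coefficients (1.0, 0.5, 0.25, 0 for fixed points, negative for opposite movement) have already been assigned from the triangulation; the function name and array layout are illustrative, not from the patent.

```python
import numpy as np

def follow_move(points, selected_idx, displacement, correlation):
    """Move the selected feature point and let the others follow.

    points       : (N, 2) array of feature point positions
    selected_idx : index of the dragged point
    displacement : (dx, dy) drag vector of the selected point
    correlation  : (N,) array; correlation[i] is the follow ratio of point i
                   (1.0 = same triangle, 0.5 = one point apart, 0.25 = two
                   apart, 0.0 = fixed point, negative = opposite direction)
    """
    points = np.asarray(points, dtype=float)
    d = np.asarray(displacement, dtype=float)
    moved = points + np.outer(correlation, d)   # each point moves ratio * d
    moved[selected_idx] = points[selected_idx] + d
    return moved

pts = [(0, 0), (1, 0), (2, 0)]
# Point 0 is dragged 1 unit right; point 1 follows at 0.5, point 2 at 0.25.
print(follow_move(pts, 0, (1, 0), np.array([1.0, 0.5, 0.25])))
```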
  • In one embodiment, a deformation coefficient is generated from the first position and the default position, and the pixels around the selected feature point are moved according to the deformation coefficient to generate the deformed image of the standard face image.
  • The deformation coefficient includes a deformation direction and a deformation range. Specifically, the direction from the default position to the first position determines the deformation direction, and the distance between the default position and the first position determines the deformation range: the larger the distance, the larger the range of deformation; the smaller the distance, the smaller the range.
  • The range can be a circle with the distance as its radius or diameter, or a rectangle with the distance as its side length, and so on, or the size of the range can be computed from the distance according to some calculation rule. In short, the size of the range is related to the distance, and the present disclosure does not limit this relationship.
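A minimal sketch of deriving such a deformation coefficient, assuming the circular-range option mentioned above (drag distance as radius); the function is illustrative, not the patent's implementation.

```python
import math

def deformation_coefficient(default_pos, first_pos):
    """Derive a deformation direction and range from the drag.

    The direction is the unit vector from the default position to the first
    position; the range here is a circle whose radius is the drag distance.
    """
    dx = first_pos[0] - default_pos[0]
    dy = first_pos[1] - default_pos[1]
    distance = math.hypot(dx, dy)
    if distance == 0:
        return (0.0, 0.0), 0.0            # no drag, no deformation
    direction = (dx / distance, dy / distance)
    radius = distance                     # larger drag -> larger affected region
    return direction, radius

print(deformation_coefficient((0, 0), (3, 4)))  # ((0.6, 0.8), 5.0)
```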
  • Further, before or after step S102, the method may also include the step of: selecting at least one fixed feature point.
  • One or more fixed feature points are selected, where a fixed feature point is a feature point that does not follow the movement of other feature points.
  • In a typical application, multiple fixed feature points are selected to form a fixed region, in which the pixels do not move, and therefore do not deform, when other feature points move. For example, if all the feature points of the eyes are selected as fixed feature points, the image of the eye region will not deform no matter how the other feature points move. It should be noted that setting a feature point's correlation coefficients with all other feature points to 0 achieves the same function as a fixed feature point; providing a way to select fixed feature points simply makes it more convenient to set up the fixed region without configuring the correlation coefficient between the feature point and every other feature point.
  • Further, before step S104, the method may also include: selecting a deformation intensity, where the deformation intensity indicates the degree of deformation.
  • The deformation intensity refers to the degree of deformation produced by moving the same distance, and it can be set through various human-machine interfaces, such as an input box, a pull-down menu, or a slider, which the present disclosure does not limit.
  • The method includes a preset deformation intensity: if no deformation intensity is selected, all deformed images are generated at the preset intensity; after the user selects a deformation intensity, the deformed images are generated at the new intensity.
  • The deformation intensity can also be set in stages. For example, a distance threshold can be set: when the moving distance is at or below the threshold, the deformed image is generated at a first deformation intensity; when it exceeds the threshold, at a second deformation intensity. Multiple thresholds may be set so that deformed images are generated at several staged intensities, as sketched below.
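A minimal sketch of the staged-intensity selection, assuming ascending thresholds; the function and parameter names are illustrative assumptions.

```python
def staged_intensity(distance, thresholds, intensities):
    """Pick a deformation intensity by comparing the drag distance with
    one or more thresholds, as described above.

    thresholds  : ascending list, e.g. [10.0, 25.0]
    intensities : one more entry than thresholds, e.g. [0.5, 1.0, 1.5]
    """
    for threshold, intensity in zip(thresholds, intensities):
        if distance <= threshold:
            return intensity
    return intensities[-1]

# At or below the first threshold -> first intensity; above all -> last one.
print(staged_intensity(8.0, [10.0, 25.0], [0.5, 1.0, 1.5]))   # 0.5
print(staged_intensity(30.0, [10.0, 25.0], [0.5, 1.0, 1.5]))  # 1.5
```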
  • Further, after step S104, the method may also include: receiving a command to increase the deformation, and continuing to move feature points on the basis of the deformed image to generate a second deformed image.
  • The added deformation here can take two forms.
  • The first form continues, on the basis of the original deformation, to move the feature points that were moved before; a second partial deformation can be formed according to a new deformation coefficient, so that the same feature point or the same group of feature points produces a combined two-part deformation. The other form selects new feature points and moves them to form a new deformed image, which is likewise a combined two-part deformation.
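A minimal sketch of how the two deformations compose: the second drag starts from the already-deformed positions rather than the defaults. The helper reuses the follow-movement rule from the earlier sketch and is illustrative only.

```python
import numpy as np

def apply_drag(points, idx, displacement, correlation):
    """One drag step: the same follow-movement rule as the earlier sketch."""
    points = np.asarray(points, dtype=float)
    d = np.asarray(displacement, dtype=float)
    moved = points + np.outer(correlation, d)
    moved[idx] = points[idx] + d
    return moved

pts = np.array([(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)])
corr = np.array([1.0, 0.5, 0.25])

first = apply_drag(pts, 0, (1.0, 0.0), corr)      # first deformed image
# "Increase deformation": keep dragging from the deformed state, so the
# second deformation is composed on top of the first.
second = apply_drag(first, 0, (0.5, 0.0), corr)   # second deformed image
print(second)
```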
  • Further, before step S102, the method also includes: setting a serial number for the standard face image.
  • Since deformation effects may be generated for multiple standard face images, the serial numbers of the standard face images can be set in order to number the different deformation effects.
  • Further, after step S104, the method also includes: identifying a human face in an image collected by an image sensor, and generating a deformed image of that face according to the first deformed image.
  • In this step, a face image recognized from a camera is obtained. The face image may be a face recognized from a real person, or a face recognized from a picture or video containing a face taken with the camera; the present disclosure does not limit this. In short, this face image is distinct from the standard face image.
  • Recognizing a face image mainly means detecting the face in the image. Face detection is the process of searching any given image or set of image sequences with a certain strategy to determine the positions and regions of all faces, establishing whether faces are present and determining their number and spatial distribution.
  • Face detection methods can be divided into four categories: (1) knowledge-based methods, which encode typical face formation rules into a rule base and locate faces through the relationships between facial features; (2) feature-invariant methods, which find features that remain stable under changes of pose, viewing angle, or illumination and then use these features to determine faces; (3) template matching methods, which store several standard face patterns describing the whole face and the facial features separately, and then compute the correlation between the input image and the stored patterns for detection; (4) appearance-based methods, which, in contrast to template matching methods, learn models from a set of training images and use these models for detection.
  • An implementation of method (4) is used here to explain the face detection process. First, features must be extracted to build a model. This embodiment uses Haar features as the key features for judging faces: Haar features are simple rectangular features that are fast to extract, and the feature template used to compute ordinary Haar features is a simple combination of two or more congruent rectangles, containing black and white rectangles. The AdaBoost algorithm is then used to find, from the large pool of Haar features, the subset that plays a key role, and these features are used to build an effective classifier, which can detect faces in the image. In this embodiment there may be one or more faces in the image.
  • Since each face detection algorithm has its own strengths and range of applicability, multiple detection algorithms can be configured and switched automatically for different environments. For example, in images with a relatively simple background, an algorithm with a lower detection rate but higher speed can be used; in images with a more complex background, an algorithm with a higher detection rate but lower speed can be used; and for the same image, several algorithms can be applied multiple times to improve the detection rate.
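The Haar-plus-AdaBoost cascade described above is available ready-made in OpenCV. The sketch below is illustrative rather than the patent's implementation; the input path "input.jpg" and the detection parameters are assumptions.

```python
import cv2

# Load OpenCV's pretrained Haar cascade (built from AdaBoost-selected
# Haar features, the approach described above).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("input.jpg")            # assumed input image path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Each detection is (x, y, w, h); there may be one or more faces.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for i, (x, y, w, h) in enumerate(faces):
    print(f"face {i}: position ({x}, {y}), size {w}x{h}")
```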
  • In this step, based on the selected feature points and the first deformed image generated in step S104, a face deformation identical to that on the standard face image is generated on the face image recognized from the camera.
  • Because the deformation on the standard face image must be mapped onto the face image collected by the image sensor, and different mapping methods are possible, the deformation can be divided into fixed deformation and tracking deformation.
  • In one embodiment, fixed deformation is used. This kind of deformation is relatively simple: it only requires setting the absolute position of the entire deformation range within the image sensor. One implementation maps the pixels of the display device one-to-one to the pixels of the image sensor's acquisition window, determines the position of the deformation on the display device, and then applies the corresponding deformation processing at the corresponding position of the image collected through the acquisition window.
  • The advantage of this deformation processing method is that it is simple and easy to operate; all the parameters it uses are relative to the position of the acquisition window.
  • In another embodiment, when generating the deformed image, the feature points of the standard face image from step S101 are first obtained, and the position of the deformation within the standard face image is determined from these feature points; the face image corresponding to the standard face image is then recognized in the image collected by the image sensor; the position determined on the standard face image is mapped onto the face image; and the face image is deformed to generate the deformed image.
  • In this approach, the relative position of the deformation within the face image is determined, so that no matter how the face image moves or changes, the deformation always stays at that relative position, achieving the goal of tracking the deformation.
  • In a typical application, the standard face image is triangulated and has 106 feature points. The relative position of the deformation in the face image is determined from the deformation and the relative positions of the feature points, and the face image collected by the camera is triangulated in the same way.
  • When the face in the camera moves or turns, the deformation stays fixed at the same relative position on the face, achieving the tracking effect.
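One simple way to pin a deformation to a relative position on a triangulated face, in the spirit of the tracking deformation described above, is to express the deformation anchor in barycentric coordinates of its triangle on the standard face and re-evaluate them on the corresponding triangle of the detected face. This is an illustrative sketch, not the patent's method; the triangle coordinates are made up.

```python
import numpy as np

def barycentric(p, tri):
    """Barycentric coordinates of point p in triangle tri (3 vertices)."""
    a, b, c = np.asarray(tri, dtype=float)
    m = np.column_stack([b - a, c - a])
    u, v = np.linalg.solve(m, np.asarray(p, dtype=float) - a)
    return np.array([1 - u - v, u, v])

def track(p_std, tri_std, tri_live):
    """Map a deformation anchor from the standard face to the live face.

    The anchor keeps the same relative position inside the corresponding
    triangle, so it follows the face as it moves or rotates.
    """
    w = barycentric(p_std, tri_std)
    return w @ np.asarray(tri_live, dtype=float)

tri_std = [(0, 0), (10, 0), (0, 10)]      # triangle on the standard face
tri_live = [(5, 5), (25, 7), (6, 28)]     # same triangle on the camera face
print(track((2, 2), tri_std, tri_live))   # anchor mapped to the live face
```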
  • It can be understood that multiple faces in the image collected by the image sensor are identified, the faces are numbered in recognition order, and deformed images of the faces are generated according to the numbers, the serial numbers, and the first deformed image.
  • When multiple face images are recognized in the collected image, the user can select a single face image to deform, or select several face images for the same or different processing.
  • For example, when generating deformed images, the standard faces can be numbered, such as ID1 and ID2, and deformed images generated for the ID1 and ID2 standard face images respectively; the deformed images may be the same or different. When multiple face images are recognized from the camera, deformation effects are added to them in the order in which they were recognized: the deformation effect of the ID1 standard face image is added to face No. 1, and the deformation effect of the ID2 standard face image is added to face No. 2. If only the ID1 standard face image deformation was created, the ID1 deformation can be added to both the No. 1 and No. 2 face images, or to face No. 1 only.
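A minimal sketch of pairing numbered faces with numbered effects under the fallback just described (reusing ID1 when there are more faces than effects); the function, the effect names, and the fallback choice are illustrative assumptions.

```python
def assign_effects(detected_faces, effects_by_id):
    """Pair faces (numbered in recognition order) with numbered effects.

    detected_faces : list of face regions in the order they were recognized
    effects_by_id  : dict like {"ID1": effect1, "ID2": effect2}; when fewer
                     effects than faces exist, remaining faces reuse "ID1"
    """
    ids = sorted(effects_by_id)
    assignments = {}
    for number, face in enumerate(detected_faces, start=1):
        key = ids[number - 1] if number - 1 < len(ids) else "ID1"
        assignments[number] = (face, effects_by_id[key])
    return assignments

faces = ["face_region_1", "face_region_2"]
print(assign_effects(faces, {"ID1": "pinch_nose", "ID2": "widen_eyes"}))
```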
  • Embodiments of the present disclosure provide a face-based deformed image generation method, apparatus, electronic device, and computer-readable storage medium.
  • The face-based deformed image generation method includes: obtaining a standard face image, where the standard face image includes a plurality of feature points located at default positions; selecting at least one of the plurality of feature points; dragging the selected feature point from its default position to a first position; and generating a first deformed image of the standard face image according to the first position.
  • By generating the deformed face image through moving the positions of feature points, the embodiments of the present disclosure solve the prior-art technical problem of deforming face images only with preset effects, and improve the flexibility and convenience of generating deformed face images.
  • FIG. 2 is a flowchart of Embodiment 2 of the face-based deformed image generation method according to an embodiment of the present disclosure. As shown in FIG. 2, the method may include the following steps:
  • Step S201: Obtain a standard face image, where the standard face image includes multiple feature points located at default positions;
  • Step S202: Select at least one of the plurality of feature points, with the mirrored feature point of that feature point selected at the same time;
  • Step S203: Move the selected feature point to a first position, while the mirrored feature point of that feature point is simultaneously moved to the mirror position of the first position;
  • Step S204: Generate the deformed image of the standard face image according to the first position and the mirror position of the first position.
  • In practice, since the face is symmetric, users often apply symmetric deformation effects to both sides of the face; this embodiment addresses that case. Normally the effect would have to be created separately on each side, which duplicates work and risks mismatches between the two sides.
  • In this embodiment, except for the feature points on the face midline, every feature point has a mirrored feature point, that is, the feature point symmetric to it about the face midline as the axis of symmetry.
  • A mirror selection function can be set up in advance so that when a feature point is selected, its mirrored feature point is selected with it.
  • In a concrete implementation, mirrored pairs can be preconfigured; for example, feature points 1 and 10 are mirrored feature points of each other, so that whether feature point 1 or feature point 10 is selected, both are selected at the same time.
  • When the selected feature point is moved, its mirrored feature point makes the corresponding mirrored movement: when the feature point is moved to the first position, its mirrored feature point is moved to the mirror position of the first position, and the deformed image of the standard face image is generated according to the first position and the mirror position of the first position.
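A minimal sketch of the mirror behavior described above, assuming a vertical face midline at x = midline_x and a preconfigured mirror-pair table; the names are illustrative, not from the patent.

```python
def mirror_position(pos, midline_x):
    """Reflect a position across the vertical face midline x = midline_x."""
    x, y = pos
    return (2 * midline_x - x, y)

# Points 1 and 10 are configured as mirror partners in advance, as above.
mirror_pairs = {1: 10, 10: 1}

def move_with_mirror(positions, idx, first_pos, midline_x):
    """Move a point to the first position and its mirror partner to the
    mirrored position, so both sides of the face deform symmetrically."""
    positions = dict(positions)
    positions[idx] = first_pos
    positions[mirror_pairs[idx]] = mirror_position(first_pos, midline_x)
    return positions

pts = {1: (40, 50), 10: (60, 50)}
print(move_with_mirror(pts, 1, (35, 48), midline_x=50))
# {1: (35, 48), 10: (65, 48)}
```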
  • FIG. 3 is a schematic structural diagram of Embodiment 1 of the face-based deformed image generation device according to an embodiment of the present disclosure.
  • As shown in FIG. 3, the device includes: an obtaining module 31, a feature point selection module 32, a feature point moving module 33, and a deformation module 34.
  • The obtaining module 31 is configured to obtain a standard face image, where the standard face image includes multiple feature points located at default positions;
  • the feature point selection module 32 is configured to select at least one of the plurality of feature points;
  • the feature point moving module 33 is configured to drag the selected feature point from its default position to a first position;
  • the deformation module 34 is configured to generate a first deformed image of the standard face image according to the first position.
  • Further, the device includes: a deformation intensity selection module, used to select the deformation intensity, where the deformation intensity indicates the degree of deformation.
  • Further, the device includes: a deformation increasing module, configured to receive a command to increase the deformation and to continue dragging feature points on the basis of the deformed image to generate a second deformed image.
  • Further, the device includes: a serial number setting module, used to set a serial number for the standard face image.
  • Further, the device includes a first face recognition module, configured to recognize a face in an image collected by an image sensor, and a first deformation mapping module, configured to generate a deformed image of that face according to the first deformed image.
  • Further, the device includes a second face recognition module, configured to recognize multiple faces in an image collected by an image sensor, a numbering module, configured to number the faces in recognition order, and a second deformation mapping module, configured to generate deformed images of the faces according to the numbers, the serial numbers, and the first deformed image.
  • The device shown in FIG. 3 can execute the method of the embodiment shown in FIG. 1a; for parts not described in detail in this embodiment, refer to the related description of that embodiment. The implementation process and technical effects of this technical solution are described in the embodiment shown in FIG. 1a and are not repeated here.
  • In a second embodiment of the face-based deformed image generation device provided by an embodiment of the present disclosure:
  • an obtaining module is configured to obtain a standard face image, where the standard face image includes multiple feature points located at default positions;
  • a feature point selection module is configured to select at least one of the plurality of feature points, with the mirrored feature point of that feature point selected at the same time;
  • a feature point moving module is configured to move the selected feature point to a first position, while the mirrored feature point of that feature point is simultaneously moved to the mirror position of the first position;
  • a deformation module is configured to generate the deformed image of the standard face image according to the first position and the mirror position of the first position.
  • The device of the foregoing second embodiment can execute the method of the embodiment shown in FIG. 2; for parts not described in detail in this embodiment, refer to the related description of the embodiment shown in FIG. 2. The implementation process and technical effects of this technical solution are described in the embodiment shown in FIG. 2 and are not repeated here.
  • FIG. 4 is a hardware block diagram illustrating an electronic device according to an embodiment of the present disclosure.
  • the electronic device 40 according to an embodiment of the present disclosure includes a memory 41 and a processor 42.
  • the memory 41 is configured to store non-transitory computer-readable instructions.
  • The memory 41 may include one or more computer program products, and the computer program products may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
  • The volatile memory may include, for example, random access memory (RAM) and/or cache memory.
  • The non-volatile memory may include, for example, read-only memory (ROM), hard disks, flash memory, and the like.
  • The processor 42 may be a central processing unit (CPU) or another form of processing unit with data processing capability and/or instruction execution capability, and can control other components in the electronic device 40 to perform desired functions.
  • In one embodiment of the present disclosure, the processor 42 is configured to execute the computer-readable instructions stored in the memory 41, so that the electronic device 40 performs all or part of the steps of the face-based deformed image generation method of the foregoing embodiments of the present disclosure.
  • Those skilled in the art should understand that, to solve the technical problem of how to obtain a good user experience, this embodiment may also include well-known structures such as a communication bus and interfaces, and these well-known structures should also fall within the protection scope of the present invention.
  • FIG. 5 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure.
  • a computer-readable storage medium 50 according to an embodiment of the present disclosure stores non-transitory computer-readable instructions 51 thereon.
  • When the non-transitory computer-readable instructions 51 are executed by a processor, all or part of the steps of the face-based deformed image generation method of the foregoing embodiments of the present disclosure are performed.
  • The computer-readable storage medium 50 includes, but is not limited to: optical storage media (such as CD-ROM and DVD), magneto-optical storage media (such as MO), magnetic storage media (such as magnetic tape or removable hard disks), media with built-in rewritable non-volatile memory (such as memory cards), and media with built-in ROM (such as ROM cartridges).
  • FIG. 6 is a schematic diagram of the hardware structure of a terminal device according to an embodiment of the present disclosure.
  • As shown in FIG. 6, the face-based deformed image generation terminal 60 includes the embodiments of the face-based deformed image generation device described above.
  • The terminal device may be implemented in various forms. The terminal device in the present disclosure may include, but is not limited to, mobile terminal devices such as mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), navigation devices, vehicle-mounted terminal devices, vehicle-mounted display terminals, and vehicle-mounted electronic rear-view mirrors, as well as fixed terminal devices such as digital TVs, desktop computers, and the like.
  • As an equivalent alternative, the terminal may also include other components.
  • As shown in FIG. 6, the face-based deformed image generation terminal 60 may include a power supply unit 61, a wireless communication unit 62, an A/V (audio/video) input unit 63, a user input unit 64, a sensing unit 65, an interface unit 66, a controller 67, an output unit 68, a storage unit 69, and so on.
  • FIG. 6 shows a terminal with various components, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead.
  • the wireless communication unit 62 allows radio communication between the terminal 60 and a wireless communication system or network.
  • the A / V input unit 63 is used to receive audio or video signals.
  • the user input unit 64 may generate key input data according to a command input by the user to control various operations of the terminal device.
  • The sensing unit 65 detects the current state of the terminal 60, the position of the terminal 60, the presence or absence of the user's touch input to the terminal 60, the orientation of the terminal 60, the acceleration or deceleration movement and direction of the terminal 60, and the like, and generates commands or signals for controlling the operation of the terminal 60.
  • the interface unit 66 functions as an interface through which at least one external device can be connected to the terminal 60.
  • The output unit 68 is configured to provide output signals in a visual, audio, and/or tactile manner.
  • the storage unit 69 may store software programs and the like for processing and control operations performed by the controller 67, or may temporarily store data that has been output or is to be output.
  • the storage unit 69 may include at least one type of storage medium.
  • the terminal 60 may cooperate with a network storage device that performs a storage function of the storage unit 69 through a network connection.
  • the controller 67 generally controls the overall operation of the terminal device.
  • the controller 67 may include a multimedia module for reproducing or playing back multimedia data.
  • the controller 67 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
  • the power supply unit 61 receives external power or internal power under the control of the controller 67 and provides appropriate power required to operate each element and component.
  • The various embodiments of the face-based deformed image generation method proposed by the present disclosure may be implemented using a computer-readable medium, such as computer software, hardware, or any combination thereof.
  • For hardware implementation, the various embodiments of the face-based deformed image generation method proposed by the present disclosure may be implemented using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, these embodiments may be implemented in the controller 67.
  • For software implementation, the various embodiments of the face-based deformed image generation method proposed by the present disclosure may be implemented with separate software modules that allow at least one function or operation to be performed.
  • The software code may be implemented by a software application (or program) written in any suitable programming language, and may be stored in the storage unit 69 and executed by the controller 67.
  • an "or” used in an enumeration of items beginning with “at least one” indicates a separate enumeration such that, for example, an "at least one of A, B, or C” enumeration means A or B or C, or AB or AC or BC, or ABC (ie A and B and C).
  • the word "exemplary” does not mean that the described example is preferred or better than other examples.
  • each component or each step can be disassembled and / or recombined.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A face-based deformed image generation method and apparatus, an electronic device, and a computer-readable storage medium. The face-based deformed image generation method includes: obtaining a standard face image, where the standard face image includes a plurality of feature points located at default positions (S101); selecting at least one of the plurality of feature points (S102); dragging the selected feature point from its default position to a first position (S103); and generating a first deformed image of the standard face image according to the first position (S104). This method solves the prior-art technical problem that images can only be deformed with preset effects, and improves the flexibility of generating deformed images.

Description

Face-based deformed image generation method and apparatus
Cross-reference
The present disclosure claims priority to the Chinese patent application No. 201810838359.4, entitled "Face-based deformed image generation method and apparatus" and filed on July 27, 2018, which is incorporated herein by reference in its entirety.
Technical field
The present disclosure relates to the field of image processing, and in particular to a face-based deformed image generation method, apparatus, electronic device, and computer-readable storage medium.
Background
With the development of computer technology, the range of applications of smart terminals has expanded greatly; for example, they can be used to listen to music, play games, chat online, take photos, and so on. The cameras of smart terminals have reached resolutions of more than ten million pixels, with high definition and photographic results comparable to professional cameras.
When taking photos with a smart terminal, not only can the built-in camera software provide traditional photographic effects, but applications (Application, abbreviated APP) downloaded from the network can also provide photographic effects with additional functions, for example APPs implementing functions such as low-light detection, beauty camera, and super pixel. The beauty functions of smart terminals usually include effects such as skin tone adjustment, skin smoothing, eye enlargement, and face slimming, and can apply the same degree of beautification to all faces recognized in an image. There are also APPs that can deform human faces.
However, the current face deformation functions include only some preset deformation effects: the user can only select a deformation effect directly and cannot edit it flexibly.
Summary
In a first aspect, an embodiment of the present disclosure provides a face-based deformed image generation method, including: obtaining a standard face image, where the standard face image includes a plurality of feature points located at default positions; selecting at least one of the plurality of feature points; dragging the selected feature point from its default position to a first position; and generating a first deformed image of the standard face image according to the first position.
Further, the feature points have mirrored feature points, and selecting at least one of the plurality of feature points includes: selecting at least one of the plurality of feature points, with the mirrored feature point of that feature point selected at the same time.
Further, dragging the selected feature point from the default position to the first position includes: moving the selected feature point to the first position, while the mirrored feature point of that feature point is simultaneously moved to the mirror position of the first position.
Further, generating the first deformed image of the standard face image according to the first position includes: generating the deformed image of the standard face image according to the first position and the mirror position of the first position.
Further, generating the first deformed image of the standard face image according to the first position includes: generating a deformation coefficient from the first position and the default position, and moving the pixels around the selected feature point according to the deformation coefficient to generate the deformed image of the standard face image.
Further, before generating the deformed image of the standard face image according to the first position, the method includes: selecting a deformation intensity, where the deformation intensity indicates the degree of deformation.
Further, after generating the deformed image of the standard face image according to the first position, the method includes: receiving a command to increase the deformation, and continuing to drag feature points on the basis of the deformed image to generate a second deformed image.
Further, before selecting at least one of the plurality of feature points, the method includes: setting a serial number for the standard face image.
Further, after generating the first deformed image of the standard face image according to the first position, the method includes: identifying a human face in an image collected by an image sensor; and generating a deformed image of that face according to the first deformed image.
Further, after generating the first deformed image of the standard face image according to the first position, the method includes: identifying a plurality of faces in an image collected by the image sensor; numbering the faces in recognition order; and generating deformed images of the faces according to the numbers, the serial number, and the first deformed image.
In a second aspect, an embodiment of the present disclosure provides a face-based deformed image generation device, including: an obtaining module for obtaining a standard face image, where the standard face image includes multiple feature points located at default positions; a feature point selection module for selecting at least one of the plurality of feature points; a feature point dragging module for dragging the selected feature point from its default position to a first position; and a deformation module for generating a first deformed image of the standard face image according to the first position.
Further, the feature point selection module is used to select at least one of the plurality of feature points, with the mirrored feature point of that feature point selected at the same time.
Further, the feature point dragging module is used to move the selected feature point to the first position, while the mirrored feature point of that feature point is simultaneously moved to the mirror position of the first position.
Further, the deformation module is used to generate the deformed image of the standard face image according to the first position and the mirror position of the first position.
Further, the deformation module is used to generate a deformation coefficient from the first position and the default position, and to move the pixels around the selected feature point according to the deformation coefficient to generate the deformed image of the standard face image.
Further, the device includes: a deformation intensity selection module for selecting the deformation intensity, where the deformation intensity indicates the degree of deformation.
Further, the device includes: a deformation increasing module for receiving a command to increase the deformation and continuing to drag feature points on the basis of the deformed image to generate a second deformed image.
Further, the device includes: a serial number setting module for setting a serial number for the standard face image.
Further, the device includes a first face recognition module and a first deformation mapping module, where: the first face recognition module is used to recognize a human face in an image collected by an image sensor; the first deformation mapping module is used to generate a deformed image of that face according to the first deformed image.
Further, the device includes a second face recognition module, a numbering module, and a second deformation mapping module, where: the second face recognition module is used to identify multiple faces in an image collected by the image sensor; the numbering module is used to number the faces in recognition order;
the second deformation mapping module is used to generate deformed images of the faces according to the numbers, the serial number, and the first deformed image.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor; and
a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute any of the face-based deformed image generation methods of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a non-transitory computer-readable storage medium, where the non-transitory computer-readable storage medium stores computer instructions, and the computer instructions are used to cause a computer to execute any of the face-based deformed image generation methods of the first aspect.
Embodiments of the present disclosure provide a face-based deformed image generation method, apparatus, electronic device, and computer-readable storage medium. The face-based deformed image generation method includes: obtaining a standard face image, where the standard face image includes a plurality of feature points located at default positions; selecting at least one of the plurality of feature points; dragging the selected feature point from its default position to a first position; and generating a first deformed image of the standard face image according to the first position. By adopting this technical solution, the embodiments of the present disclosure solve the prior-art technical problem that images can only be deformed with preset effects, and improve the flexibility of generating deformed images.
The above description is only an overview of the technical solutions of the present disclosure. In order that the technical means of the present disclosure may be understood more clearly and implemented according to the contents of this specification, and to make the above and other objects, features, and advantages of the present disclosure more evident and comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
To explain the technical solutions of the embodiments of the present disclosure or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present disclosure; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1a is a flowchart of Embodiment 1 of the face-based deformed image generation method provided by an embodiment of the present disclosure;
FIG. 1b is a schematic diagram of a standard face image provided by an embodiment of the present disclosure;
FIG. 2 is a flowchart of Embodiment 2 of the face-based deformed image generation method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of Embodiment 1 of the face-based deformed image generation device provided by an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of an electronic device provided according to an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of a computer-readable storage medium provided according to an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of a face-based deformed image generation terminal provided according to an embodiment of the present disclosure.
Detailed description
The embodiments of the present disclosure are described below through specific examples, and those skilled in the art can easily understand other advantages and effects of the present disclosure from the contents disclosed in this specification. Obviously, the described embodiments are only some of the embodiments of the present disclosure, not all of them. The present disclosure can also be implemented or applied through other different specific embodiments, and the details in this specification can be modified or changed in various ways based on different viewpoints and applications without departing from the spirit of the present disclosure. It should be noted that, in the absence of conflict, the following embodiments and the features in the embodiments may be combined with each other. Based on the embodiments in the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.
It should be noted that various aspects of embodiments within the scope of the appended claims are described below. It should be apparent that the aspects described herein can be embodied in a wide variety of forms, and that any specific structure and/or function described herein is merely illustrative. Based on the present disclosure, those skilled in the art should understand that an aspect described herein can be implemented independently of any other aspect, and that two or more of these aspects can be combined in various ways. For example, a device may be implemented and/or a method may be practiced using any number of the aspects set forth herein. In addition, such a device may be implemented and/or such a method may be practiced using structures and/or functionality other than one or more of the aspects set forth herein.
It should also be noted that the illustrations provided in the following embodiments only illustrate the basic concept of the present disclosure in a schematic way. The drawings show only the components related to the present disclosure rather than the number, shape, and size of components in actual implementation; in actual implementation, the form, quantity, and proportion of each component can be changed arbitrarily, and the component layout may also be more complex.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, those skilled in the art will understand that the aspects can be practiced without these specific details.
FIG. 1a is a flowchart of Embodiment 1 of the face-based deformed image generation method provided by an embodiment of the present disclosure. The deformed image generation method provided by this embodiment may be performed by a face-based deformed image generation device, which may be implemented as software or as a combination of software and hardware, and which may be integrated in a device of an image processing system, such as an image processing server or an image processing terminal device. As shown in FIG. 1a, the method includes the following steps:
Step S101: Obtain a standard face image, where the standard face image includes a plurality of feature points located at default positions.
A standard face image is obtained and displayed on a display device. The standard face image is a preset face image; generally, it is a frontal face image with preset feature points, where the number of feature points is configurable and the user can freely set the number of feature points required. The feature points of an image are points that have distinctive characteristics, effectively reflect the essential features of the image, and identify the target object in the image. If the target object is a human face, the key points of the face need to be obtained; if the target image is a house, the key points of the house are needed. Taking the human face as an example of how key points are obtained: the face contour mainly includes five parts, the eyebrows, eyes, nose, mouth, and cheeks, and sometimes also the pupils and nostrils. A reasonably complete description of the face contour generally requires about 60 key points. If only the basic structure is described, without detailing each part or describing the cheeks, the number of key points can be reduced accordingly; if the pupils, nostrils, or finer facial features need to be described, the number of key points can be increased. Extracting face key points from the image amounts to finding the position coordinates of each face contour key point in the face image, that is, key point positioning. This process is based on the features corresponding to the key points: after obtaining image features that clearly identify a key point, the image is searched and compared against those features to accurately locate the key point's position. Since feature points occupy only a very small area of the image (usually only a few to a few dozen pixels), the region occupied by their corresponding features is also usually very limited and local. Two feature extraction approaches are currently used: (1) one-dimensional range image feature extraction along the direction perpendicular to the contour; (2) two-dimensional range image feature extraction over a square neighborhood of the feature point. These can be implemented in many ways, such as ASM and AAM methods, statistical energy function methods, regression analysis methods, deep learning methods, classifier methods, and batch extraction methods. These implementations differ in the number of key points used, accuracy, and speed, and are suitable for different application scenarios.
In a typical scenario, the face image includes 106 feature points. The user can prune the feature points as needed; for example, if 10 feature points originally represented an eye, the user can remove some of them and represent the eye with 4 main feature points. Each feature point has a number and a default position; see FIG. 1b for an example of a standard face image with feature points. In actual use, the number of feature points may be larger or smaller than in the example. A feature point can be represented by its number and position, such as 1 (0, 0), which indicates that the default position of feature point 1 is at coordinate (0, 0).
In one embodiment, the user may retain only the feature points of a certain image region, such as only the eye feature points or only the nose feature points, in order to process that region alone.
Step S102: Select at least one of the plurality of feature points.
In this embodiment, one feature point or several feature points may be selected. The user can issue a selection command through an input device such as a mouse or keyboard, for example clicking a feature point shown on the display device with the mouse, or typing a feature point's number on the keyboard to select the corresponding feature point. When selecting multiple feature points, the selected points can be circled with a selection box dragged out with the mouse, or selected through a multi-select command. The selected state of a feature point can be indicated by a color change; for example, the default state of a feature point is white, and when selected it turns blue to show that it has been selected. When multiple feature points are selected, a selection box can be displayed around them, which may be the smallest rectangle containing all of the selected points. The above methods of selecting feature points and displaying their state are merely examples, not limitations; in fact, any method of selecting feature points and displaying their state that is suitable for the present disclosure may be used.
Step S103: Move the selected feature point from its default position to a first position.
In this embodiment, the user can move the feature point selected in step S102 to the first position through human-computer interaction. Typically, the user drags the selected feature point to the first position with the mouse; the first position may lie inside or outside the face image. The user can also type the coordinates of the first position on the keyboard, so that the selected feature point moves directly to the first position. When multiple feature points are selected, the selection box can be dragged with the mouse to move all of the selected points at once. The above methods of moving feature points are merely examples, not limitations; in fact, any method of moving feature points that is suitable for the present disclosure may be used.
Step S104: Generate a first deformed image of the standard face image according to the first position.
In this embodiment, after the feature points are moved in step S103, the standard face image deforms because of their movement, forming a deformed standard face image, that is, the first deformed image.
In one embodiment, one or more feature points are selected and dragged, and the standard face image deforms according to the drag distance and the dragged position; when a feature point is dragged, the feature points associated with it also move. Specifically, each feature point has a correlation coefficient with every other feature point. The correlation coefficient defines the proportion by which other feature points follow the movement of the selected feature point: the higher the coefficient, the larger the following movement; the lower the coefficient, the smaller the following movement. For example, if the correlation coefficient between feature point 1 and feature point 2 is 0.5, then when feature point 1 moves 1 cm, feature point 2 moves 0.5 cm; if the coefficient is 0.1, feature point 2 moves 0.1 cm when feature point 1 moves 1 cm. The coefficient may also be negative, in which case the associated feature point moves by that proportion in the opposite direction.
In one embodiment, the standard face image is triangulated according to the feature points, and each feature point is a vertex of a triangle in the triangulation. Feature points on the same triangle have the highest correlation coefficient; typically, their coefficient can be set to 1. For feature points on different triangles, the coefficient decreases with distance, where the distance can be the number of feature points apart: for example, the coefficient of a feature point one point away is set to 0.5, that of a feature point two points away is 0.25, and so on. That is, when a feature point is moved, the other feature points on the same triangle keep their positions relative to it unchanged, feature points one point away move by 0.5 of the distance, and feature points two points away move by 0.25 of the distance.
The pixels in the standard face image move according to rules similar to those of the feature points to produce the first deformed image. Specifically, in the standard face image triangulated as described above, if a pixel lies in the triangle containing the selected feature point, it follows the selected feature point at a 1:1 ratio; if it lies in a triangle one feature point away, it follows at a ratio of 0.5; and if it lies in a triangle two feature points away, it follows at a ratio of 0.25. The above following ratios can be set as needed; the embodiments in the present disclosure are merely examples and do not constitute limitations.
In one embodiment, a deformation coefficient is generated from the first position and the default position, and the pixels around the selected feature point are moved according to the deformation coefficient to generate the deformed image of the standard face image. The deformation coefficient includes a deformation direction and a deformation range. Specifically, the direction from the default position to the first position determines the deformation direction, and the distance between the default position and the first position determines the deformation range: the larger the distance, the larger the range of deformation; the smaller the distance, the smaller the range. The range can be a circle with the distance as its radius or diameter, a rectangle with the distance as its side length, and so on, or the size of the range can be computed from the distance according to some calculation rule. In short, the size of the range is related to the distance, and the present disclosure does not limit this relationship here.
Further, before or after step S102, the method may also include the step of:
selecting at least one fixed feature point.
In this step, one or more fixed feature points are selected, where a fixed feature point is a feature point that does not follow the movement of other feature points. In a typical application, multiple fixed feature points are selected to form a fixed region, in which the pixels do not move, and therefore do not deform, when other feature points move. For example, if all the feature points of the eyes are selected as fixed feature points, the image of the eye region will not deform no matter how the other feature points move. It should be noted that setting a feature point's correlation coefficients with the other feature points to 0 can achieve the same function as a fixed feature point; providing a way to select fixed feature points makes it more convenient to set up the fixed region without configuring the correlation coefficient between the feature point and every other feature point.
Further, before step S104, the method may also include:
selecting a deformation intensity, where the deformation intensity indicates the degree of deformation.
The deformation intensity refers to the degree of deformation produced by moving the same distance, and it can be set through various human-machine interfaces, such as an input box, a pull-down menu, or a slider; the present disclosure does not limit this. The method includes a preset deformation intensity: if no deformation intensity is selected, all deformed images are generated at the preset intensity; after the user selects a deformation intensity, the deformed images are generated at the new intensity. The deformation intensity can be set in stages; for example, a distance threshold can be set so that when the moving distance is at or below the threshold, the deformed image is generated at a first deformation intensity, and when the moving distance exceeds the threshold, it is generated at a second deformation intensity. It can be understood that multiple thresholds can be set, so that deformed images are generated at multiple staged intensities.
Further, after step S104, the method may also include:
receiving a command to increase the deformation, and continuing to move feature points on the basis of the deformed image to generate a second deformed image.
The added deformation here can take two forms. The first continues, on the basis of the original deformation, to move the feature points that were moved before; a second partial deformation can be formed according to a new deformation coefficient, so that the same feature point or the same group of feature points produces a combined two-part deformation. The other form selects new feature points and moves them to form a new deformed image, which is likewise a combined two-part deformation.
Further, before step S102, the method also includes:
setting a serial number for the standard face image.
It can be understood that, since deformation effects may be generated for multiple standard face images, the serial number of the standard face image can be set after the standard face image is obtained, so as to number the different deformation effects.
Further, after step S104, the method also includes:
identifying a human face in an image collected by an image sensor; and generating a deformed image of that face according to the first deformed image.
In this step, a face image recognized from a camera is obtained. The face image may be a face recognized from a real person, or a face recognized from a picture or video containing a face taken with the camera; the present disclosure does not limit this. In short, this face image is distinct from the standard face image.
Recognizing a face image mainly means detecting the face in the image. Face detection is the process of searching any given image or set of image sequences with a certain strategy to determine the positions and regions of all faces; it determines whether faces are present in various images or image sequences and establishes their number and spatial distribution. Face detection methods can generally be divided into four categories: (1) knowledge-based methods, which encode typical face formation rules into a rule base and locate faces through the relationships between facial features; (2) feature-invariant methods, which find features that remain stable under changes of pose, viewing angle, or illumination and then use these features to determine faces; (3) template matching methods, which store several standard face patterns describing the whole face and the facial features separately, and then compute the correlation between the input image and the stored patterns for detection; (4) appearance-based methods, which, in contrast to template matching methods, learn models from a set of training images and use these models for detection. An implementation of method (4) is used here to explain the face detection process. First, features must be extracted to build a model. This embodiment uses Haar features as the key features for judging faces. Haar features are simple rectangular features that are fast to extract; the feature template used to compute ordinary Haar features is a simple combination of two or more congruent rectangles, containing black and white rectangles. The AdaBoost algorithm is then used to find, from the large pool of Haar features, the subset that plays a key role, and these features are used to build an effective classifier, which can detect the faces in the image. In this embodiment there may be one or more faces in the image.
It can be understood that, since each face detection algorithm has its own strengths and range of applicability, multiple detection algorithms can be configured and switched automatically for different environments. For example, in images with a relatively simple background, an algorithm with a lower detection rate but higher speed can be used; in images with a more complex background, an algorithm with a higher detection rate but lower speed can be used; and for the same image, several algorithms can be applied multiple times to improve the detection rate.
In this step, based on the selected feature points and the first deformed image generated in step S104, a face deformation identical to that on the standard face image is generated on the face image recognized from the camera.
Since the deformation on the standard face image must be mapped onto the face image collected by the image sensor, and different mapping methods are possible, the deformation can be divided into fixed deformation and tracking deformation. One embodiment uses fixed deformation, which is relatively simple: it only requires setting the absolute position of the entire deformation range within the image sensor. One implementation maps the pixels of the display device one-to-one to the pixels of the image sensor's acquisition window, determines the position of the deformation on the display device, and then applies the corresponding deformation processing to the corresponding position of the image collected through the acquisition window. The advantage of this approach is that it is simple and easy to operate; all the parameters it uses are relative to the position of the acquisition window. In another embodiment, when generating the deformed image, the feature points of the standard face image from step S101 are first obtained, and the position of the deformation within the standard face image is determined from these feature points; the face image corresponding to the standard face image is recognized in the image collected by the image sensor; the position determined on the standard face image is mapped onto the face image; and the face image is deformed to generate the deformed image. In this approach, the relative position of the deformation within the face image is determined, so that no matter how the face image moves or changes, the deformation always stays at that relative position, achieving the goal of tracking the deformation. In a typical application, the standard face image is triangulated and has 106 feature points; the relative position of the deformation in the face image is determined from the deformation and the relative positions of the feature points, and the face image collected by the camera is triangulated in the same way. When the face in the camera moves or turns, the deformation stays fixed at the same relative position on the face, achieving the tracking effect.
It can be understood that multiple faces in the image collected by the image sensor are identified, the faces are numbered in recognition order, and deformed images of the faces are generated according to the numbers, the serial numbers, and the first deformed image. When multiple face images are recognized in the image collected by the image sensor, the user can select one face image to deform, or select several face images for the same or different processing. For example, when generating deformed images, the standard faces can be numbered, such as ID1 and ID2, and deformed images generated for the ID1 and ID2 standard face images respectively; the deformed images may be the same or different. When multiple face images are recognized from the camera, deformation effects are added to them in the order in which they were recognized: if face No. 1 is recognized first, the deformation effect of the ID1 standard face image is added to face No. 1; when face No. 2 is then recognized, the deformation effect of the ID2 standard face image is added to face No. 2. If only the ID1 standard face image deformation was created, the ID1 deformation can be added to both the No. 1 and No. 2 face images, or only to face No. 1.
Embodiments of the present disclosure provide a face-based deformed image generation method, apparatus, electronic device, and computer-readable storage medium. The face-based deformed image generation method includes: obtaining a standard face image, where the standard face image includes a plurality of feature points located at default positions; selecting at least one of the plurality of feature points; dragging the selected feature point from its default position to a first position; and generating a first deformed image of the standard face image according to the first position. By moving the positions of feature points to generate the deformed face image, the embodiments of the present disclosure solve the prior-art technical problem that face images can only be deformed with preset effects, and improve the flexibility and convenience of generating deformed face images.
FIG. 2 is a flowchart of Embodiment 2 of the face-based deformed image generation method provided by an embodiment of the present disclosure. As shown in FIG. 2, the method may include the following steps:
S201: Obtain a standard face image, where the standard face image includes multiple feature points located at default positions;
S202: Select at least one of the plurality of feature points, with the mirrored feature point of that feature point selected at the same time;
S203: Move the selected feature point to a first position, while the mirrored feature point of that feature point is simultaneously moved to the mirror position of the first position;
S204: Generate the deformed image of the standard face image according to the first position and the mirror position of the first position.
In practice, since the face is symmetric, users often apply symmetric deformation effects to both sides of the face, and this embodiment addresses that case. Normally, the deformation effect would have to be created separately on each side, which first of all repeats the work and is inefficient, and secondly may introduce discrepancies between the two attempts, producing a poor result.
In this embodiment, except for the feature points on the face midline, every feature point has a mirrored feature point, that is, the feature point symmetric to it with the face midline as the axis of symmetry. A mirror selection function can be set up in advance so that when a feature point is selected, its mirrored feature point is selected with it. In a concrete implementation, the mirrored feature point of each feature point can be preconfigured; for example, feature points 1 and 10 are mirrored feature points of each other, so that whether feature point 1 or feature point 10 is selected, both are selected at the same time. When the selected feature point is moved, its mirrored feature point makes the corresponding mirrored movement: when the feature point is moved to the first position, its mirrored feature point is moved to the mirror position of the first position, and the deformed image of the standard face image is generated according to the first position and the mirror position of the first position.
FIG. 3 is a schematic structural diagram of Embodiment 1 of the face-based deformed image generation device provided by an embodiment of the present disclosure. As shown in FIG. 3, the device includes: an obtaining module 31, a feature point selection module 32, a feature point moving module 33, and a deformation module 34.
The obtaining module 31 is used to obtain a standard face image, where the standard face image includes multiple feature points located at default positions;
the feature point selection module 32 is used to select at least one of the plurality of feature points;
the feature point moving module 33 is used to drag the selected feature point from its default position to a first position;
the deformation module 34 is used to generate a first deformed image of the standard face image according to the first position.
Further, the device also includes:
a deformation intensity selection module, used to select the deformation intensity, where the deformation intensity indicates the degree of deformation.
Further, the device also includes:
a deformation increasing module, used to receive a command to increase the deformation and continue dragging feature points on the basis of the deformed image to generate a second deformed image.
Further, the device also includes:
a serial number setting module, used to set a serial number for the standard face image.
Further, the device also includes a first face recognition module and a first deformation mapping module, where:
the first face recognition module is used to recognize a human face in an image collected by an image sensor;
the first deformation mapping module is used to generate a deformed image of that face according to the first deformed image.
Further, the device also includes a second face recognition module, a numbering module, and a second deformation mapping module, where:
the second face recognition module is used to identify multiple faces in an image collected by the image sensor;
the numbering module is used to number the faces in recognition order;
the second deformation mapping module is used to generate deformed images of the faces according to the numbers, the serial numbers, and the first deformed image.
The device shown in FIG. 3 can execute the method of the embodiment shown in FIG. 1a; for parts not described in detail in this embodiment, refer to the related description of the embodiment shown in FIG. 1a. The implementation process and technical effects of this technical solution are described in the embodiment shown in FIG. 1a and are not repeated here.
In Embodiment 2 of the face-based deformed image generation device provided by an embodiment of the present disclosure:
an obtaining module is used to obtain a standard face image, where the standard face image includes multiple feature points located at default positions;
a feature point selection module is used to select at least one of the plurality of feature points, with the mirrored feature point of that feature point selected at the same time;
a feature point moving module is used to move the selected feature point to a first position, while the mirrored feature point of that feature point is simultaneously moved to the mirror position of the first position;
a deformation module is used to generate the deformed image of the standard face image according to the first position and the mirror position of the first position.
The device of the foregoing Embodiment 2 can execute the method of the embodiment shown in FIG. 2; for parts not described in detail in this embodiment, refer to the related description of the embodiment shown in FIG. 2. The implementation process and technical effects of this technical solution are described in the embodiment shown in FIG. 2 and are not repeated here.
FIG. 4 is a hardware block diagram of an electronic device according to an embodiment of the present disclosure. As shown in FIG. 4, the electronic device 40 according to an embodiment of the present disclosure includes a memory 41 and a processor 42.
The memory 41 is used to store non-transitory computer-readable instructions. Specifically, the memory 41 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory; the non-volatile memory may include, for example, read-only memory (ROM), hard disks, flash memory, and the like.
The processor 42 may be a central processing unit (CPU) or another form of processing unit with data processing capability and/or instruction execution capability, and can control other components in the electronic device 40 to perform desired functions. In one embodiment of the present disclosure, the processor 42 is used to run the computer-readable instructions stored in the memory 41, so that the electronic device 40 executes all or part of the steps of the face-based deformed image generation method of the foregoing embodiments of the present disclosure.
Those skilled in the art should understand that, to solve the technical problem of how to obtain a good user experience, this embodiment may also include well-known structures such as a communication bus and interfaces, and these well-known structures should also fall within the protection scope of the present invention.
For a detailed description of this embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated here.
图5是图示根据本公开的实施例的计算机可读存储介质的示意图。如图5所示,根据本公开实施例的计算机可读存储介质50,其上存储有非暂时性计算机可读指令51。当该非暂时性计算机可读指令51由处理器运行时,执行前述的本公开各实施例的基于人脸的形变图像生成方法的全部或部分步骤。
上述计算机可读存储介质50包括但不限于:光存储介质(例如:CD-ROM和DVD)、磁光存储介质(例如:MO)、磁存储介质(例如:磁带或移动硬盘)、具有内置的可重写非易失性存储器的媒体(例如:存储卡)和具有内置ROM的媒体(例如:ROM盒)。
有关本实施例的详细说明可以参考前述各实施例中的相应说明,在此不再赘述。
FIG. 6 is a schematic diagram illustrating the hardware structure of a terminal device according to an embodiment of the present disclosure. As shown in FIG. 6, the face-based deformed image generation terminal 60 includes the apparatus of the face-based deformed image generation apparatus embodiments described above.
The terminal device may be implemented in various forms. The terminal device in the present disclosure may include, but is not limited to, mobile terminal devices such as mobile phones, smartphones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), navigation devices, vehicle-mounted terminal devices, vehicle-mounted display terminals and vehicle-mounted electronic rearview mirrors, as well as fixed terminal devices such as digital TVs and desktop computers.
As an equivalent alternative implementation, the terminal may also include other components. As shown in FIG. 6, the face-based deformed image generation terminal 60 may include a power supply unit 61, a wireless communication unit 62, an A/V (audio/video) input unit 63, a user input unit 64, a sensing unit 65, an interface unit 66, a controller 67, an output unit 68 and a storage unit 69, among others. FIG. 6 shows a terminal having various components, but it should be understood that not all of the illustrated components are required to be implemented; more or fewer components may be implemented instead.
The wireless communication unit 62 allows radio communication between the terminal 60 and a wireless communication system or network. The A/V input unit 63 is configured to receive audio or video signals. The user input unit 64 may generate key input data according to commands input by a user to control various operations of the terminal device. The sensing unit 65 detects the current state of the terminal 60, the position of the terminal 60, the presence or absence of a user's touch input to the terminal 60, the orientation of the terminal 60, the acceleration or deceleration movement and direction of the terminal 60, and so on, and generates commands or signals for controlling the operation of the terminal 60. The interface unit 66 serves as an interface through which at least one external device can connect to the terminal 60. The output unit 68 is configured to provide output signals in a visual, audio and/or tactile manner. The storage unit 69 may store software programs for the processing and control operations executed by the controller 67, or may temporarily store data that has been output or is to be output. The storage unit 69 may include at least one type of storage medium. Moreover, the terminal 60 may cooperate with a network storage device that performs the storage function of the storage unit 69 over a network connection. The controller 67 typically controls the overall operation of the terminal device. In addition, the controller 67 may include a multimedia module for reproducing or playing back multimedia data. The controller 67 may perform pattern recognition processing to recognize handwriting input or picture drawing input performed on a touch screen as characters or images. The power supply unit 61 receives external power or internal power under the control of the controller 67 and provides the appropriate power required to operate the elements and components.
The various implementations of the face-based deformed image generation method proposed in the present disclosure may be implemented using a computer-readable medium, such as computer software, hardware, or any combination thereof. For hardware implementation, the various implementations of the face-based deformed image generation method proposed in the present disclosure may be implemented by using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, the various implementations of the face-based deformed image generation method proposed in the present disclosure may be implemented in the controller 67. For software implementation, the various implementations of the face-based deformed image generation method proposed in the present disclosure may be implemented together with separate software modules that allow at least one function or operation to be performed. The software code may be implemented by a software application (or program) written in any suitable programming language, and the software code may be stored in the storage unit 69 and executed by the controller 67.
For a detailed description of this embodiment, reference may be made to the corresponding descriptions in the foregoing embodiments, which are not repeated here.
The basic principles of the present disclosure have been described above in conjunction with specific embodiments. However, it should be noted that the merits, advantages, effects and the like mentioned in the present disclosure are merely examples and not limitations; these merits, advantages and effects must not be considered indispensable to each embodiment of the present disclosure. In addition, the specific details disclosed above are provided only for the purpose of illustration and ease of understanding, not limitation; the above details do not limit the present disclosure to being implemented with these specific details.
The block diagrams of the devices, apparatuses, equipment and systems involved in the present disclosure are merely illustrative examples and are not intended to require or imply that they must be connected, arranged or configured in the manner shown in the block diagrams. As those skilled in the art will recognize, these devices, apparatuses, equipment and systems may be connected, arranged or configured in any manner. Words such as "including", "comprising" and "having" are open-ended words meaning "including but not limited to", and may be used interchangeably with it. The words "or" and "and" as used herein mean the word "and/or" and may be used interchangeably with it, unless the context clearly indicates otherwise. The word "such as" as used herein means the phrase "such as but not limited to", and may be used interchangeably with it.
In addition, as used herein, an "or" used in an enumeration of items beginning with "at least one of" indicates a disjunctive enumeration, so that, for example, an enumeration of "at least one of A, B or C" means A or B or C, or AB or AC or BC, or ABC (i.e., A and B and C). Furthermore, the word "exemplary" does not mean that the described example is preferred or better than other examples.
It should also be noted that, in the systems and methods of the present disclosure, components or steps may be decomposed and/or recombined. Such decomposition and/or recombination should be regarded as equivalent solutions of the present disclosure.
Various changes, substitutions and alterations to the techniques described herein may be made without departing from the teachings defined by the appended claims. Furthermore, the scope of the claims of the present disclosure is not limited to the specific aspects of the processes, machines, manufacture, compositions of matter, means, methods and acts described above. Processes, machines, manufacture, compositions of matter, means, methods or acts presently existing or to be developed later that perform substantially the same functions or achieve substantially the same results as the corresponding aspects described herein may be utilized. Accordingly, the appended claims include within their scope such processes, machines, manufacture, compositions of matter, means, methods or acts.
The above description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other aspects without departing from the scope of the present disclosure. Therefore, the present disclosure is not intended to be limited to the aspects shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The above description has been given for the purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the present disclosure to the forms disclosed herein. Although multiple example aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, changes, additions and sub-combinations thereof.

Claims (13)

  1. A method for generating a deformed image based on a face, comprising:
    obtaining a standard face image, the standard face image including a plurality of feature points, the feature points being located at default positions;
    selecting at least one feature point among the plurality of feature points;
    moving the selected feature point from the default position to a first position; and
    generating a first deformed image of the standard face image according to the first position.
  2. The method for generating a deformed image based on a face according to claim 1, wherein the feature point has a mirrored feature point, and the selecting at least one feature point among the plurality of feature points comprises:
    selecting at least one feature point among the plurality of feature points, the mirrored feature point of the feature point being selected at the same time.
  3. The method for generating a deformed image based on a face according to claim 2, wherein the moving the selected feature point from the default position to the first position comprises:
    moving the selected feature point to the first position, the mirrored feature point of the feature point being moved at the same time to the mirror position of the first position.
  4. The method for generating a deformed image based on a face according to claim 3, wherein the generating a first deformed image of the standard face image according to the first position comprises:
    generating the deformed image of the standard face image according to the first position and the mirror position of the first position.
  5. The method for generating a deformed image based on a face according to claim 1, wherein the generating a first deformed image of the standard face image according to the first position comprises:
    generating a deformation coefficient according to the first position and the default position, and moving pixels around the selected feature point according to the deformation coefficient, to generate the deformed image of the standard face image.
  6. The method for generating a deformed image based on a face according to claim 1, further comprising, before the generating the deformed image of the standard face image according to the first position:
    selecting a strength of the deformation, the deformation strength indicating the degree of the deformation.
  7. The method for generating a deformed image based on a face according to claim 1, further comprising, after the generating the deformed image of the standard face image according to the first position:
    receiving a command to add a deformation, and continuing to drag feature points on the basis of the deformed image, to generate a second deformed image.
  8. The method for generating a deformed image based on a face according to claim 1, further comprising, before the selecting at least one feature point among the plurality of feature points:
    setting a serial number of the standard face image.
  9. The method for generating a deformed image based on a face according to claim 1, further comprising, after the generating a first deformed image of the standard face image according to the first position:
    recognizing a face in an image captured by an image sensor; and
    generating a deformed image of the face according to the first deformed image.
  10. The method for generating a deformed image based on a face according to claim 8, further comprising, after the generating a first deformed image of the standard face image according to the first position:
    recognizing a plurality of faces in an image captured by an image sensor;
    numbering the plurality of faces according to the recognition order; and
    generating deformed images of the faces according to the numbers, the serial number and the first deformed image.
  11. An apparatus for generating a deformed image based on a face, comprising:
    an obtaining module, configured to obtain a standard face image, the standard face image including a plurality of feature points located at default positions;
    a feature point selection module, configured to select at least one feature point among the plurality of feature points;
    a feature point moving module, configured to drag the selected feature point from the default position to a first position; and
    a deformation module, configured to generate a first deformed image of the standard face image according to the first position.
  12. An electronic device, comprising:
    a memory, configured to store non-transitory computer-readable instructions; and
    a processor, configured to run the computer-readable instructions, such that the processor, when executing the instructions, implements the method for generating a deformed image based on a face according to any one of claims 1-10.
  13. A computer-readable storage medium, configured to store non-transitory computer-readable instructions which, when executed by a computer, cause the computer to perform the method for generating a deformed image based on a face according to any one of claims 1-10.