WO2020134558A1 - Image processing method and apparatus, electronic device, and storage medium - Google Patents

Image processing method and apparatus, electronic device, and storage medium

Info

Publication number
WO2020134558A1
WO2020134558A1 (application PCT/CN2019/114886)
Authority
WO
WIPO (PCT)
Prior art keywords
image
facial
grid
preset
facial features
Prior art date
Application number
PCT/CN2019/114886
Other languages
English (en)
French (fr)
Inventor
侯沛宏
Original Assignee
北京达佳互联信息技术有限公司 (Beijing Dajia Internet Information Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京达佳互联信息技术有限公司 (Beijing Dajia Internet Information Technology Co., Ltd.)
Priority to EP19902558.6A (published as EP3905662A4)
Publication of WO2020134558A1
Priority to US17/098,066 (published as US11030733B2)
Priority to US17/306,340 (published as US20210256672A1)

Classifications

    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; cameras specially adapted for the electronic generation of special effects
    • H04N5/2621 Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • H04N5/2628 Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • H04N5/265 Mixing
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N23/632 Graphical user interfaces [GUI] specially adapted for displaying or modifying preview images prior to image capturing
    • H04N23/80 Camera processing pipelines; components thereof
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T11/00 2D [two-dimensional] image generation
    • G06T2200/24 Indexing scheme for image data processing involving graphical user interfaces [GUIs]
    • G06T2207/20221 Image fusion; image merging
    • G06T2207/30201 Subject of image: face
    • G06V40/161 Human faces: detection; localisation; normalisation
    • G06V40/171 Local features and components; facial parts, e.g. eyes, nose, mouth; geometrical relationships
    • G11B27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals

Definitions

  • The present application relates to the field of computer technology, and in particular to an image processing method and apparatus, an electronic device, and a storage medium.
  • In the related art, after receiving a preset special-effect-adding command, an electronic device can take a certain frame of a video shot by the user as the image to be processed. The electronic device then uses a preset face recognition algorithm to extract from the image to be processed the first positions of the face, eyes, and mouth; next, it obtains the second positions of the reference face, reference eyes, and reference mouth in a pre-stored reference mask, and establishes a combination matrix based on the first positions and the second positions. Finally, the electronic device maps the reference mask onto the image to be processed according to the combination matrix, obtaining a target image in which the reference mask is added to the user's face; a sketch of this related-art scheme follows.
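  • As a minimal sketch, the "combination matrix" of this related-art scheme can be modeled as an affine transform estimated from the three point pairs; the function name, inputs, and compositing rule below are illustrative assumptions, not taken from the patent.

```python
import cv2
import numpy as np

def overlay_reference_mask(frame, mask_img, face_pts, mask_pts):
    """Warp a reference mask onto a face from three point pairs.

    face_pts / mask_pts: (left eye, right eye, mouth) pixel coordinates
    detected in the frame and annotated on the mask (hypothetical inputs).
    """
    h, w = frame.shape[:2]
    # Model the "combination matrix" as the affine transform defined by
    # the three mask-to-face landmark correspondences.
    M = cv2.getAffineTransform(np.float32(mask_pts), np.float32(face_pts))
    warped = cv2.warpAffine(mask_img, M, (w, h))
    # Composite the warped mask over the frame wherever it is non-empty.
    covered = warped.sum(axis=2) > 0
    out = frame.copy()
    out[covered] = warped[covered]
    return out
```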
  • However, the electronic device can only add special effects by attaching a reference mask to the user's face.
  • The resulting effect is monotonous and the user experience is poor.
  • To address this, the present application provides an image processing method and apparatus, an electronic device, and a storage medium that add a fly-away special effect to an image and improve the user experience.
  • According to a first aspect, an image processing method is provided, including:
  • establishing a face grid and a facial-features grid in the image to be processed when a preset fly-away special effect instruction is received; determining the facial image in the image area covered by the face grid; setting the pixel value of each pixel in the facial image to a target pixel value; extracting, from the image to be processed, the image area covered by the facial-features grid to obtain a facial-features image; and mapping the facial-features image onto the facial image according to a preset triangle mapping algorithm and a preset offset to obtain the processed target image.
  • According to a second aspect, an image processing apparatus is provided, including:
  • an establishing unit configured to establish a face grid and a facial-features grid in the image to be processed when a preset fly-away special effect instruction is received;
  • a first determining unit configured to determine the facial image in the image area covered by the face grid;
  • a setting unit configured to set the pixel value of each pixel in the facial image to a target pixel value;
  • an extraction unit configured to extract, from the image to be processed, the image area covered by the facial-features grid to obtain a facial-features image;
  • a mapping unit configured to map the facial-features image onto the facial image according to a preset triangle mapping algorithm and a preset offset to obtain the processed target image.
  • According to a third aspect, an electronic device is provided, including:
  • a memory for storing a computer program, as well as candidate intermediate data and result data generated by executing the computer program;
  • a processor configured to: establish a face grid and a facial-features grid in the image to be processed when a preset fly-away special effect instruction is received; determine the facial image in the image area covered by the face grid; set the pixel value of each pixel in the facial image to a target pixel value; extract, from the image to be processed, the image area covered by the facial-features grid to obtain a facial-features image; and map the facial-features image onto the facial image according to a preset triangle mapping algorithm and a preset offset to obtain the processed target image.
  • According to a fourth aspect, a computer-readable storage medium is provided which carries one or more computer instruction programs; when the computer instruction programs are executed by one or more processors, the one or more processors implement the method steps of any one of the first aspect.
  • According to a fifth aspect, a computer program product containing instructions is provided which, when run on a computer, causes the computer to execute any of the image processing methods described above.
  • The technical solutions provided by the embodiments of the present application may include the following beneficial effects:
  • the facial-features image and the facial image are determined from the image to be processed, and the position of the facial-features image relative to the facial image is changed according to a preset offset. The user's facial features thereby appear to fly away, which improves the user experience.
  • Fig. 1 is a flowchart of an image processing method according to an exemplary embodiment;
  • Fig. 2a is a schematic diagram of an image to be processed according to an exemplary embodiment;
  • Fig. 2b is a schematic diagram of a face grid according to an exemplary embodiment;
  • Fig. 2c is a schematic diagram of a left-eye facial-features grid according to an exemplary embodiment;
  • Fig. 2d is a schematic diagram of a right-eye facial-features grid according to an exemplary embodiment;
  • Fig. 2e is a schematic diagram of a nose facial-features grid according to an exemplary embodiment;
  • Fig. 2f is a schematic diagram of a mouth facial-features grid according to an exemplary embodiment;
  • Fig. 2g is a schematic diagram of a facial image according to an exemplary embodiment;
  • Fig. 2h is a schematic diagram of a target image according to an exemplary embodiment;
  • Fig. 2i is a schematic diagram of a first face template according to an exemplary embodiment;
  • Fig. 2j is a schematic diagram of a second face template according to an exemplary embodiment;
  • Fig. 3 is a flowchart of an image processing method according to an exemplary embodiment;
  • Fig. 4 is a block diagram of an image processing apparatus according to an exemplary embodiment;
  • Fig. 5 is a block diagram of an electronic device according to an exemplary embodiment;
  • Fig. 6 is a block diagram of another electronic device according to an exemplary embodiment.
  • Fig. 1 is a flowchart of an image processing method according to an exemplary embodiment. As shown in Fig. 1, the image processing method is used in a terminal, which may be an electronic device with a shooting function, such as a mobile phone. The specific processing flow includes the following steps.
  • In step S11, when a preset fly-away special effect instruction is received, a face grid and a facial-features grid are established in the image to be processed.
  • In implementation, the terminal's preset display interface may display multiple special-effect icons; when shooting video or photos, the user may select one of them to add the corresponding effect to the video or photo.
  • When the user taps the preset fly-away special effect icon, the terminal receives the fly-away special effect instruction.
  • When the user takes a photo, the terminal can use a preset face recognition algorithm to determine whether the photo includes a face. If it does, the photo is taken as the image to be processed; if it does not, no further processing is performed. The terminal may then establish a face grid and a facial-features grid based on the image to be processed.
  • When the user shoots a video, the terminal can run the preset face recognition algorithm on each frame. If a frame includes a face, that frame is taken as the image to be processed; otherwise, the next frame is examined.
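  • A minimal sketch of this frame-selection step, assuming OpenCV's stock Haar cascade stands in for the unspecified preset face recognition algorithm (all names below are illustrative):

```python
import cv2

# Stock frontal-face Haar cascade shipped with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def frames_to_process(video_path):
    """Yield only the frames that contain a face; other frames are skipped."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:      # recognition result is "yes"
            yield frame         # this frame becomes the image to be processed
    cap.release()
```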
  • The terminal can then establish the face grid and the facial-features grid in the image to be processed.
  • The facial features include the eyes, the mouth, and the nose.
  • Fig. 2a is a schematic diagram of an image to be processed provided by an embodiment of the present application; Figs. 2b-2f are schematic diagrams of the face grid, left-eye facial-features grid, right-eye facial-features grid, nose facial-features grid, and mouth facial-features grid established by the terminal.
  • In step S12, a facial image is determined in the image area covered by the face grid.
  • In implementation, a facial-area mask may be pre-stored in the terminal; the facial-area mask is a human face mask that contains no facial-features images.
  • The terminal may use the image to be processed as the first layer. After establishing the face grid, the terminal determines from the image to be processed the image area covered by the face grid, acquires the pre-stored facial-area mask, and maps it onto that image area through a preset mapping algorithm to obtain the facial image. The terminal may use the image to be processed containing this facial image as the second layer.
  • In step S13, the pixel value of each pixel in the facial image is set to the target pixel value.
  • In implementation, the target pixel value may be stored in the terminal in advance, or the terminal may determine it from the image to be processed.
  • The terminal sets the pixel values of the pixels contained in the facial image to the target pixel value, so that the user's face in the second layer appears smoothed and its skin color becomes more uniform.
  • Fig. 2g shows an example of such a facial image provided by an embodiment of the present application.
  • Optionally, the present application provides an embodiment in which the terminal determines the target pixel value from the image to be processed and sets the pixel value of each pixel in the face area to that value, specifically including the following steps.
  • Step 1: extract the pixel values of pixels at multiple preset positions in the image to be processed.
  • The preset positions include the user's forehead and/or cheeks; they may also be other positions on the user's facial image, such as the temples.
  • In implementation, the terminal may identify the preset positions in the image to be processed using the preset face recognition algorithm and, for each preset position, extract the pixel values of one or more pixels there.
  • For example, the terminal can recognize the user's forehead, temples, and cheeks using the preset face recognition algorithm, and then extract the pixel value of one pixel at each of these positions.
  • When the terminal stores pixel values in the red-green-blue (RGB) color mode with an alpha channel, the pixel value at a preset position may be, for example, (252, 224, 203, 255).
  • Step 2: calculate the average of the multiple pixel values and use the average as the target pixel value.
  • In implementation, the terminal calculates the average of the pixel values extracted at the preset positions and takes that average as the target pixel value.
  • Setting every pixel in the face area to the target pixel value fills the facial image, from which the facial-features images have been removed, with a uniform flesh color, giving it a smoothed visual effect; a sketch follows.
  • In another feasible implementation, a variety of skin texture samples may be pre-stored in the terminal, with different samples representing different skin textures, and a particular sample may be preset as the target skin texture sample.
  • After setting the target pixel value for the facial image, the terminal can select the target skin texture sample and give the facial image the corresponding skin texture, so that the visual effect of the user's facial image in the second layer is closer to the user's actual face.
  • The specific process by which the terminal applies the skin texture of a sample to the face area is prior art and is not repeated here.
  • In step S14, the image area covered by the facial-features grid is extracted from the image to be processed to obtain the facial-features image.
  • In implementation, for each of the multiple facial-features grids in the image to be processed, the terminal determines the image area the grid contains and extracts it, obtaining the facial-features images as the third layer.
  • In step S15, the facial-features image is mapped onto the facial image according to a preset triangle mapping algorithm and a preset offset, obtaining the processed target image.
  • In implementation, multiple offsets may be stored in the terminal in advance; an offset comprises the offset direction and offset distance of a facial-features image relative to its position in the image to be processed.
  • The offset differs depending on the shooting mode in which the image to be processed was captured.
  • When the image to be processed is a photo, the terminal may take a pre-stored first offset as the offset for the image.
  • When the image to be processed is a frame of a video, the terminal may use a pre-stored correspondence between offsets and times: according to the time at which the fly-away special effect instruction was received, it determines a second offset from this correspondence and uses it as the offset for the image to be processed.
  • After determining the offset of each facial-features image, the terminal maps the facial-features images onto the facial image according to the preset triangle mapping algorithm and the offsets, obtaining the processed target image; a sketch of one plausible triangle-mapping step follows.
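  • The patent does not spell out the triangle mapping algorithm; a common realization is a piecewise-affine warp in which each triangle of the facial-features grid is affinely mapped to its offset destination. The sketch below is written under that assumption, with all names illustrative:

```python
import cv2
import numpy as np

def warp_triangle(src, dst, tri_src, tri_dst):
    """Affinely map one triangle of `src` into `dst` (modified in place)."""
    r1 = cv2.boundingRect(np.float32([tri_src]))
    r2 = cv2.boundingRect(np.float32([tri_dst]))
    # Express both triangles in the coordinates of their bounding boxes.
    t1 = [(p[0] - r1[0], p[1] - r1[1]) for p in tri_src]
    t2 = [(p[0] - r2[0], p[1] - r2[1]) for p in tri_dst]
    M = cv2.getAffineTransform(np.float32(t1), np.float32(t2))
    patch = cv2.warpAffine(src[r1[1]:r1[1] + r1[3], r1[0]:r1[0] + r1[2]],
                           M, (r2[2], r2[3]))
    mask = np.zeros((r2[3], r2[2]), dtype=np.uint8)
    cv2.fillConvexPoly(mask, np.int32(t2), 1)
    roi = dst[r2[1]:r2[1] + r2[3], r2[0]:r2[0] + r2[2]]
    roi[mask == 1] = patch[mask == 1]

def map_features(face_img, feature_img, triangles, offset):
    """Apply one (dx, dy) offset to every triangle of a facial-features grid."""
    dx, dy = offset
    for tri in triangles:                       # tri: three (x, y) key points
        tri_dst = [(x + dx, y + dy) for (x, y) in tri]
        warp_triangle(feature_img, face_img, tri, tri_dst)
    return face_img
```

  • Warping triangle by triangle keeps the key points of the grid aligned through the move, which is what lets the displaced features deform naturally rather than shift as rigid cut-outs.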
  • For example, the offsets stored in the terminal may be: left eye shifted 1-5 cm in any direction, right eye shifted 1-5 cm in any direction, nose shifted 1-7 cm in any direction, and mouth shifted 1-7 cm in any direction, where "any direction" means any direction centered on the facial feature concerned.
  • When the image to be processed is a photo, the terminal can obtain a pre-stored first offset: left eye shifted 1 cm to the right, right eye shifted 1 cm to the left, nose shifted 1 cm upward, and mouth shifted 1 cm upward. Mapping the facial-features images onto the facial image according to the preset triangle mapping algorithm and this first offset yields a target image in which the user's left eye is shifted 1 cm to the right, the right eye 1 cm to the left, the nose 1 cm upward, and the mouth 1 cm upward, so that the user's facial features appear to fly away.
  • When the image to be processed is a frame of a video, the terminal can first determine that the fly-away special effect instruction was received at, say, 1 s, and then look up the second offset corresponding to 1 s in the pre-stored offset-time correspondence:
  • left eye shifted 1 cm to the left, right eye shifted 1 cm to the right, nose shifted 1 cm upward, and mouth shifted 1 cm downward.
  • The terminal then maps the facial-features images onto the facial image according to the preset triangle mapping algorithm and the second offset;
  • in the resulting target image, the user's left eye is shifted 1 cm to the left, the right eye 1 cm to the right, the nose 1 cm upward, and the mouth 1 cm downward.
  • Fig. 2h shows an example, provided by an embodiment of the present application, of a target image obtained after the terminal adds the fly-away special effect to a user's face.
  • The offsets corresponding to different times also differ.
  • Within a preset processing cycle, the terminal determines different offset directions and distances for the image to be processed according to the time, which makes the user's facial features appear to fly away and then come back, enhancing the user experience.
  • The terminal may also execute the above steps periodically, cycle by cycle, while the user is shooting video, so that the user's facial features periodically fly away and return, further enhancing the user experience; a sketch of one such offset schedule follows.
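  • One plausible offset schedule for such a processing cycle is sinusoidal, so the features drift out and return smoothly; the profile, magnitudes, and feature directions below are assumptions, since the patent only states that different times map to different offsets:

```python
import math

def offsets_at(t, cycle=2.0, max_shift_px=40):
    """Return per-feature (dx, dy) offsets for time t seconds into a cycle.

    A sinusoidal "fly away, then return" profile; cycle length and
    maximum shift are assumed parameters.
    """
    phase = math.sin(2 * math.pi * (t % cycle) / cycle)
    s = phase * max_shift_px
    return {
        "left_eye":  (-s, 0),   # drifts left
        "right_eye": ( s, 0),   # drifts right
        "nose":      ( 0, -s),  # drifts up (image y grows downward)
        "mouth":     ( 0,  s),  # drifts down
    }
```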
  • In the embodiments of the present application, upon receiving the fly-away special effect instruction, the terminal establishes a face grid and a facial-features grid in the image to be processed, determines the facial image from the face grid and the facial-features image from the facial-features grid, and then maps the facial-features image onto the facial image through the preset triangle mapping algorithm and the preset offset to obtain the target image.
  • The terminal changes the position of the facial-features image relative to the facial image according to the preset offset, so that the user's facial features appear to fly away.
  • In the prior art, when mapping a mask onto a user's face area, the terminal performs the mask mapping and matching based on only three feature points: the user's two eyes and mouth.
  • In the present application, by contrast, the terminal builds the user's face grid and facial-features grid from many key points and maps the facial-features image onto the facial image based on the triangle mapping algorithm and the multiple key points composing the grids, which ensures that the mapped facial-features image changes more naturally with respect to the user's real facial features.
  • Mapping the facial-features image onto the facial image through the triangle mapping algorithm is equivalent to mapping the third layer onto the second layer.
  • The target image is thus composed of the first layer (that is, the image to be processed), the second layer, and the third layer.
  • In another feasible implementation, the terminal can set the display transparency of the second and third layers to realize a variety of display effects and enhance the user experience, as in the compositing sketch below.
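  • A minimal compositing sketch under that reading, with per-layer alpha values as assumed parameters:

```python
import numpy as np

def composite(base, overlays):
    """Blend overlay layers over the base image, back to front.

    base: the image to be processed (first layer).
    overlays: list of (layer, alpha) pairs for the face layer and the
    facial-features layer; the alpha values are assumptions, since the
    patent only says the transparency of these layers is adjustable.
    """
    out = base.astype(np.float64)
    for layer, alpha in overlays:
        out = (1.0 - alpha) * out + alpha * layer.astype(np.float64)
    return out.astype(np.uint8)
```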
  • Optionally, the terminal creates the face grid and the facial-features grid as follows.
  • Step 1: through a preset mapping algorithm, map the first number of key points in the pre-stored first face template onto the user's facial image area in the image to be processed.
  • In implementation, a first face template may be pre-stored in the terminal.
  • Fig. 2i is a schematic diagram of a first face template provided by an embodiment of the present application.
  • The first face template stores a first number of key points, evenly distributed over the face, the facial features, and the image area within a preset range around the face in the template.
  • Each key point belongs to one of several preset key-point types.
  • The preset key-point types include face type, mouth type, nose type, and eye type.
  • The type of a key point indicates the category of grid that the key point is used to construct, that is, whether the key point is used to build the face grid or a facial-features grid.
  • In one feasible implementation, the first number may be 200.
  • After acquiring the image to be processed, the terminal maps the first number of key points in the pre-stored first face template onto the user's facial image area in the image to be processed through the preset mapping algorithm, obtaining the corresponding first number of key points on the facial image area; the type of each key point does not change.
  • The mapping algorithm may be any algorithm with a mapping function.
  • Step 2: based on the first number of key points and the preset type of each key point, establish the face grid and the facial-features grid.
  • In implementation, the terminal may classify the first number of key points by their preset types to obtain the key points of each type, and then build a grid from the key points of each type, obtaining the face grid and the facial-features grids.
  • In one feasible implementation, the terminal may number each key point; when the first number is 200, the key points are numbered 1-200. The terminal may store a correspondence between key-point numbers and grids, for example that the eye grid is composed of the key points numbered 67-80.
  • When building a grid, the terminal determines from this correspondence the numbers of the key points belonging to that grid and builds the grid from those key points, thereby obtaining the face grid and the facial-features grids; a sketch of one way to build such grids follows.
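  • A minimal sketch of grid construction from typed key points; Delaunay triangulation via cv2.Subdiv2D is one plausible way to form the triangles, whereas the patent instead fixes each grid's member key points by number:

```python
import cv2
import numpy as np

def build_grids(keypoints, types, frame_size):
    """Group key points by type and triangulate each group.

    keypoints: list of (x, y); types: parallel list with labels such as
    "face", "left_eye", "right_eye", "nose", "mouth" (labels assumed).
    Returns, per label, an array of triangles, each triangle being
    three (x, y) vertices.
    """
    w, h = frame_size
    grids = {}
    for label in set(types):
        pts = [p for p, t in zip(keypoints, types) if t == label]
        subdiv = cv2.Subdiv2D((0, 0, w, h))
        for p in pts:
            subdiv.insert((float(p[0]), float(p[1])))
        tris = subdiv.getTriangleList().reshape(-1, 3, 2)
        # Keep only triangles whose vertices all lie inside the frame.
        inside = (tris[:, :, 0] >= 0) & (tris[:, :, 0] < w) & \
                 (tris[:, :, 1] >= 0) & (tris[:, :, 1] < h)
        grids[label] = tris[inside.all(axis=1)]
    return grids
```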
  • As shown in Fig. 2b, 210 is the face grid, which covers the user's facial image area; in Fig. 2c, 221 is the left-eye facial-features grid; in Fig. 2d, 222 is the right-eye facial-features grid; in Fig. 2e, 223 is the nose facial-features grid; and in Fig. 2f, 224 is the mouth facial-features grid. Each facial-features grid and the face grid are composed of multiple key points.
  • Optionally, as shown in Fig. 3, before processing the image to be processed, the terminal may determine the first face template based on a pre-stored second face template.
  • The specific processing procedure is as follows.
  • In step S31, a pre-stored second face template is obtained; the second face template includes a second number of key points.
  • In implementation, the terminal acquires the pre-stored second face template, which includes a second number of key points.
  • Fig. 2j is a schematic diagram of a second face template provided by an embodiment of the present application.
  • The second number of key points are distributed along the facial contour and the contours of the facial features in the user's facial image, and each key point belongs to one of the preset key-point types.
  • In one feasible implementation, the second number may be 101;
  • correspondingly, when the terminal numbers each key point, the second number of key points are numbered 1-101.
  • In step S32, a third number of key points and their types are determined according to the second number of key points and the preset type of each key point.
  • In implementation, the terminal may determine the coordinate information of each of the second number of key points in a preset coordinate system;
  • then, according to a preset geometric formula and these coordinates, the terminal computes the coordinates of the third number of key points, obtaining the third number of key points.
  • The terminal may then determine the types of the third number of key points according to their coordinate information.
  • An embodiment of the present application gives the following example of the geometric formula:
  • XC = XB + (XB - XA) * λ
  • where A and B are two of the second number of key points included in the second face template, C is the key point determined by the terminal from the coordinates of key points A and B, λ is a preset weight,
  • and XA, XB, and XC are the coordinates of key points A, B, and C, respectively, in the preset electronic-device screen coordinate system.
  • In one feasible implementation, the terminal determines, from the coordinates of two adjacent key points in the preset coordinate system, the straight line through them, computes via the preset geometric formula the coordinates of a point on that line, and takes that point as a new key point. The terminal then finds the existing key point closest to the new one in the preset coordinate system and assigns its type to the new key point. Repeating this procedure over the second number of key points yields the third number of key points and their types; a sketch follows.
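  • A minimal sketch of this key-point derivation, directly implementing XC = XB + (XB - XA) * λ together with the nearest-neighbour type assignment (function names are illustrative):

```python
import numpy as np

def extrapolate_keypoint(A, B, lam):
    """Place a new key point C on the line through A and B, beyond B."""
    A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
    return B + (B - A) * lam          # XC = XB + (XB - XA) * lambda

def type_of_new_point(C, keypoints, types):
    """Assign C the type of its nearest existing key point."""
    dists = [np.hypot(C[0] - x, C[1] - y) for (x, y) in keypoints]
    return types[int(np.argmin(dists))]
```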
  • In another feasible implementation, the terminal may determine the type of a key point according to a received type instruction issued by a technician.
  • In the embodiments of the present application, the sum of the third number and the second number of key points equals the first number of key points; the first face template differs from the second face template only in the number of key points it contains.
  • In step S33, the first face template is determined based on the second face template and the third number of key points.
  • In implementation, the terminal may take the second face template containing the third number of key points as the first face template.
  • In the embodiments of the present application, based on the second number of key points of the second face template, the terminal determines the third number of key points on the facial image and on the surrounding area within a preset range of it, and builds the face grid and the facial-features grids from the original second number of key points together with the third number of key points. The resulting grids are as uniform and fine as possible, which further improves the fit between the mapped facial-features images and the user's real facial features and makes the changes of the facial features more natural.
  • Optionally, if the terminal receives a facial-features change instruction at the same time as the fly-away special effect instruction, or within a preset processing cycle after it, the terminal may perform the following steps.
  • Step 1: when the preset facial-features change instruction is received, establish a face grid and a facial-features grid in the image to be processed.
  • In implementation, when taking photos or recording video through the terminal, the user can select the preset facial-features change icon from the special-effect icons displayed on the terminal's preset display interface;
  • the terminal then receives the preset facial-features change instruction.
  • Upon receiving it, the terminal's processing is the same as Step 1 and Step 2 above.
  • Step 2: determine the facial image in the image area covered by the face grid.
  • The terminal's processing here is the same as step S12.
  • Step 3: set the pixel value of each pixel in the facial image to the target pixel value.
  • The terminal's processing here is the same as step S13.
  • Step 4: extract, from the image to be processed, the image area covered by the facial-features grid to obtain the facial-features image.
  • The terminal's processing here is the same as step S14.
  • Step 5: change the shape of the facial-features image according to the change identifier carried by the facial-features change instruction, obtaining the deformed facial-features image.
  • In implementation, the facial-features change instruction may carry a change identifier, which indicates whether to enlarge or shrink the user's facial features.
  • According to the change identifier carried by the received instruction, the terminal determines the facial-features image whose shape is to be changed, places it at the position of the corresponding facial-features grid, and then changes the shape of that grid through a preset image processing algorithm, thereby changing the shape of the corresponding facial-features image and obtaining the deformed image.
  • For example, when the change identifier indicates enlarging the user's eyes, the terminal determines that the images to be changed are the user's left-eye and right-eye images; it then enlarges the left-eye and right-eye facial-features grids by a preset ratio through the preset image processing algorithm, changing the shapes of the two eye images accordingly and obtaining the enlarged left-eye and right-eye images, as sketched below.
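  • A minimal sketch of such a grid deformation, scaling the key points of a facial-features grid about their centroid; the preset ratio is an assumed parameter:

```python
import numpy as np

def scale_grid(points, ratio):
    """Enlarge (ratio > 1) or shrink (ratio < 1) a facial-features grid
    about its centroid; e.g., an eye-enlarging change identifier would
    use some preset ratio > 1."""
    pts = np.asarray(points, dtype=np.float64)
    center = pts.mean(axis=0)
    return center + (pts - center) * ratio
```

  • The deformed grid is then fed to the same triangle mapping as before, so the pixels of the eye images follow the enlarged grid.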
  • Step 6: map the deformed facial-features image onto the facial image according to the preset triangle mapping algorithm, obtaining the processed target image.
  • In implementation, the terminal maps the deformed facial-features image onto the facial image according to the preset triangle mapping algorithm to obtain the processed target image.
  • In the embodiments of the present application, after receiving the facial-features change instruction, the terminal can change the shape of the facial-features image based on the facial-features grid, thereby changing the size of the user's facial features, increasing the diversity of the special effects and further improving the user experience.
  • The technical solutions provided by the embodiments of the present application may include the following beneficial effects: when a fly-away special effect instruction is received, a face grid and a facial-features grid are established in the image to be processed; on the one hand, the facial image is determined in the image area covered by the face grid and the pixel value of each pixel in it is set to the target pixel value; on the other hand, the image area covered by the facial-features grid is extracted from the image to be processed to obtain the facial-features image; finally, the facial-features image is mapped onto the facial image according to the preset triangle mapping algorithm and the preset offset, obtaining the processed target image.
  • In this solution, the facial-features image and the facial image are determined from the image to be processed, and the position of the facial-features image relative to the facial image is changed according to the preset offset, so that the user's facial features appear to fly away, improving the user experience.
  • Fig. 4 is a block diagram of an image processing apparatus according to an exemplary embodiment.
  • Referring to Fig. 4, the apparatus includes an establishing unit 410, a first determining unit 420, a setting unit 430, an extraction unit 440, and a mapping unit 450, wherein:
  • the establishing unit 410 is configured to establish a face grid and a facial-features grid in the image to be processed when the preset fly-away special effect instruction is received;
  • the first determining unit 420 is configured to determine the facial image in the image area covered by the face grid;
  • the setting unit 430 is configured to set the pixel value of each pixel in the facial image to the target pixel value;
  • the extraction unit 440 is configured to extract, from the image to be processed, the image area covered by the facial-features grid to obtain the facial-features image;
  • the mapping unit 450 is configured to map the facial-features image onto the facial image according to the preset triangle mapping algorithm and the preset offset, obtaining the processed target image.
  • In one implementation, the establishing unit includes:
  • a mapping subunit configured to map, through a preset mapping algorithm, the first number of key points in the pre-stored first face template onto the user's facial image area in the image to be processed;
  • an establishing subunit configured to establish the face grid and the facial-features grid based on the first number of key points and the preset type of each key point, the preset key-point types including face type, mouth type, nose type, and eye type.
  • In one implementation, the apparatus further includes:
  • an acquiring unit configured to acquire a pre-stored second face template, the second face template including a second number of key points;
  • a second determining unit configured to determine a third number of key points and their types according to the second number of key points and the preset type of each key point;
  • a third determining unit configured to determine the first face template based on the second face template and the third number of key points.
  • In one implementation, the setting unit includes:
  • an extraction subunit configured to extract the pixel values of pixels at multiple preset positions in the facial image, the preset positions including the user's forehead and/or cheeks;
  • a calculation subunit configured to calculate the average of the multiple pixel values and use the average as the target pixel value.
  • In one implementation, the apparatus further includes units configured such that, when a preset facial-features change instruction is received:
  • a face grid and a facial-features grid are established in the image to be processed; the facial image is determined in the image area covered by the face grid; the pixel value of each pixel in the facial image is set to the target pixel value; the image area covered by the facial-features grid is extracted from the image to be processed to obtain the facial-features image; and the shape of the facial-features image is changed according to the change identifier carried by the instruction to obtain the deformed facial-features image;
  • the deformed facial-features image is then mapped onto the facial image according to the preset triangle mapping algorithm to obtain the processed target image.
  • Fig. 5 is a block diagram of an electronic device 500 for image processing according to an exemplary embodiment.
  • For example, the electronic device 500 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, fitness device, personal digital assistant, or the like.
  • Referring to Fig. 5, the electronic device 500 may include one or more of the following components: a processing component 502, a memory 504, a power supply component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.
  • the processing component 502 generally controls the overall operations of the electronic device 500, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 502 may include one or more processors 520 to execute instructions to complete all or part of the steps in the above method.
  • the processing component 502 may include one or more modules to facilitate interaction between the processing component 502 and other components.
  • the processing component 502 may include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
  • the memory 504 is configured to store various types of data to support operation at the device 500. Examples of these data include instructions for any application or method operating on the electronic device 500, contact data, phone book data, messages, pictures, videos, and so on.
  • The memory 504 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • the power supply component 506 provides power to various components of the electronic device 500.
  • the power supply component 506 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 500.
  • the multimedia component 508 includes a screen that provides an output interface between the electronic device 500 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundary of the touch or sliding action, but also detect the duration and pressure related to the touch or sliding operation.
  • the multimedia component 508 includes a front camera and/or a rear camera. When the device 500 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the audio component 510 is configured to output and/or input audio signals.
  • the audio component 510 includes a microphone (MIC).
  • the microphone is configured to receive an external audio signal.
  • the received audio signal may be further stored in the memory 504 or transmitted via the communication component 516.
  • the audio component 510 further includes a speaker for outputting audio signals.
  • the I/O interface 512 provides an interface between the processing component 502 and a peripheral interface module.
  • the peripheral interface module may be a keyboard, a click wheel, or a button. These buttons may include, but are not limited to: home button, volume button, start button, and lock button.
  • the sensor component 514 includes one or more sensors for providing the electronic device 500 with status assessment in various aspects.
  • For example, the sensor component 514 can detect the on/off state of the device 500 and the relative positioning of components (for example, the display and keypad of the electronic device 500); it can also detect a change in position of the electronic device 500 or one of its components, the presence or absence of user contact with the electronic device 500, the orientation or acceleration/deceleration of the electronic device 500, and changes in its temperature.
  • the sensor assembly 514 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • the sensor component 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor assembly 514 may also include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • the communication component 516 is configured to facilitate wired or wireless communication between the electronic device 500 and other devices.
  • the electronic device 500 can access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 5G), or a combination thereof.
  • the communication component 516 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 516 further includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
  • In an exemplary embodiment, the electronic device 500 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, to perform the above method.
  • a non-transitory computer-readable storage medium including instructions is also provided, for example, a memory 504 including instructions, which can be executed by the processor 520 of the electronic device 500 to complete the above method.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, or the like.
  • FIG. 6 is a structural block diagram of another electronic device according to an exemplary embodiment.
  • The electronic device 600 may vary considerably in configuration or performance, and may include one or more processors (central processing units, CPUs) 601 and one or more memories 602, where at least one instruction is stored in the memory 602 and is loaded and executed by the processor 601 to implement the following process:
  • when receiving the preset fly-away special effect instruction, establish a face grid and a facial-features grid in the image to be processed; determine the facial image in the image area covered by the face grid; set the pixel value of each pixel in the facial image to the target pixel value; extract, from the image to be processed, the image area covered by the facial-features grid to obtain the facial-features image; and map the facial-features image onto the facial image according to the preset triangle mapping algorithm and the preset offset to obtain the processed target image.
  • In one implementation, the processor 601 is specifically configured to: map, through a preset mapping algorithm, the first number of key points in the pre-stored first face template onto the user's facial image area in the image to be processed; and establish the face grid and the facial-features grid based on the first number of key points and the preset type of each key point, the preset key-point types including face type, mouth type, nose type, and eye type.
  • In one implementation, the processor 601 is specifically configured to:
  • acquire a pre-stored second face template including a second number of key points; determine a third number of key points and their types according to the second number of key points and the preset type of each key point; and determine the first face template based on the second face template and the third number of key points.
  • In one implementation, the processor 601 is specifically configured to: extract the pixel values of pixels at multiple preset positions in the facial image, calculate the average of the multiple pixel values, and use the average as the target pixel value.
  • In one implementation, the processor 601 is specifically configured such that, when a preset facial-features change instruction is received:
  • a face grid and a facial-features grid are established in the image to be processed; the facial image is determined in the image area covered by the face grid; the pixel value of each pixel in the facial image is set to the target pixel value; the image area covered by the facial-features grid is extracted from the image to be processed to obtain the facial-features image; the shape of the facial-features image is changed according to the change identifier carried by the facial-features change instruction to obtain the deformed facial-features image; and the deformed facial-features image is mapped onto the facial image according to the preset triangle mapping algorithm to obtain the processed target image.
  • In an exemplary embodiment, a computer-readable storage medium is provided which carries one or more computer instruction programs; when the computer instruction programs are executed by one or more processors, the steps of any of the above image processing methods are implemented.
  • In an exemplary embodiment, a computer program product containing instructions is also provided which, when run on a computer, causes the computer to execute any image processing method of the above embodiments.
  • The computer program product includes one or more computer instructions.
  • The computer may be a general-purpose computer, a dedicated computer, a computer network, or another programmable device.
  • The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, microwave).
  • The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media.
  • The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid-state disk (SSD)), or the like.

Abstract

The present application relates to an image processing method and apparatus, an electronic device, and a storage medium, belonging to the field of computer technology. The method includes: when a preset fly-away special effect instruction is received, establishing a face grid and a facial-features grid in an image to be processed; determining a facial image in the image area covered by the face grid; setting the pixel value of each pixel in the facial image to a target pixel value; extracting, from the image to be processed, the image area covered by the facial-features grid to obtain a facial-features image; and mapping the facial-features image onto the facial image according to a preset triangle mapping algorithm and a preset offset to obtain a processed target image. The present application can add a fly-away special effect to an image and improve the user experience.

Description

Image processing method and apparatus, electronic device, and storage medium
CROSS-REFERENCE TO RELATED APPLICATIONS
The present application claims priority to Chinese patent application No. 201811585135.3, filed with the Chinese Patent Office on December 24, 2018 and entitled "Image processing method and apparatus, electronic device, and storage medium", the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present application relates to the field of computer technology, and in particular to an image processing method and apparatus, an electronic device, and a storage medium.
BACKGROUND
With the development of computer graphics technology, electronic devices can perform image processing on photos or videos taken by users to add special effects.
In the related art, after receiving a preset special-effect-adding command, an electronic device can take a certain frame of a video shot by the user as the image to be processed; the electronic device can then extract the first positions of the face, eyes, and mouth from the image to be processed through a preset face recognition algorithm; after that, the electronic device can obtain the second positions of the reference face, reference eyes, and reference mouth in a pre-stored reference mask, and establish a combination matrix based on the first positions and the second positions; finally, the electronic device can map the reference mask onto the image to be processed according to the combination matrix, obtaining a target image in which the reference mask is added to the user's face.
However, the inventor found that the electronic device can only add special effects by adding a reference mask to the user's face; the effect is monotonous and the user experience is poor.
SUMMARY
To overcome the problems in the related art, the present application provides an image processing method and apparatus, an electronic device, and a storage medium, so as to add a fly-away special effect to an image and improve the user experience.
According to a first aspect of the embodiments of the present application, an image processing method is provided, including:
when a preset fly-away special effect instruction is received, establishing a face grid and a facial-features grid in an image to be processed;
determining a facial image in the image area covered by the face grid;
setting the pixel value of each pixel in the facial image to a target pixel value;
extracting, from the image to be processed, the image area covered by the facial-features grid to obtain a facial-features image;
mapping the facial-features image onto the facial image according to a preset triangle mapping algorithm and a preset offset to obtain a processed target image.
According to a second aspect of the embodiments of the present application, an image processing apparatus is provided, including:
an establishing unit configured to establish a face grid and a facial-features grid in an image to be processed when a preset fly-away special effect instruction is received;
a first determining unit configured to determine a facial image in the image area covered by the face grid;
a setting unit configured to set the pixel value of each pixel in the facial image to a target pixel value;
an extraction unit configured to extract, from the image to be processed, the image area covered by the facial-features grid to obtain a facial-features image;
a mapping unit configured to map the facial-features image onto the facial image according to a preset triangle mapping algorithm and a preset offset to obtain a processed target image.
According to a third aspect of the embodiments of the present application, an electronic device is provided, including:
a memory for storing a computer program, as well as candidate intermediate data and result data generated by executing the computer program;
a processor configured to: establish a face grid and a facial-features grid in an image to be processed when a preset fly-away special effect instruction is received; determine a facial image in the image area covered by the face grid; set the pixel value of each pixel in the facial image to a target pixel value; extract, from the image to be processed, the image area covered by the facial-features grid to obtain a facial-features image; and map the facial-features image onto the facial image according to a preset triangle mapping algorithm and a preset offset to obtain a processed target image.
According to a fourth aspect of the embodiments of the present application, a computer-readable storage medium is provided, carrying one or more computer instruction programs; when the computer instruction programs are executed by one or more processors, the one or more processors implement the method steps of any one of the first aspect.
According to a fifth aspect of the embodiments of the present application, a computer program product containing instructions is provided which, when run on a computer, causes the computer to execute any of the image processing methods described above.
The technical solutions provided by the embodiments of the present application may include the following beneficial effects: in this solution, the facial-features image and the facial image are determined from the image to be processed, and the position of the facial-features image relative to the facial image is changed according to a preset offset, so that the user's facial features appear to fly away, which can improve the user experience.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a flowchart of an image processing method according to an exemplary embodiment;
Fig. 2a is a schematic diagram of an image to be processed according to an exemplary embodiment;
Fig. 2b is a schematic diagram of a face grid according to an exemplary embodiment;
Fig. 2c is a schematic diagram of a left-eye facial-features grid according to an exemplary embodiment;
Fig. 2d is a schematic diagram of a right-eye facial-features grid according to an exemplary embodiment;
Fig. 2e is a schematic diagram of a nose facial-features grid according to an exemplary embodiment;
Fig. 2f is a schematic diagram of a mouth facial-features grid according to an exemplary embodiment;
Fig. 2g is a schematic diagram of a facial image according to an exemplary embodiment;
Fig. 2h is a schematic diagram of a target image according to an exemplary embodiment;
Fig. 2i is a schematic diagram of a first face template according to an exemplary embodiment;
Fig. 2j is a schematic diagram of a second face template according to an exemplary embodiment;
Fig. 3 is a flowchart of an image processing method according to an exemplary embodiment;
Fig. 4 is a block diagram of an image processing apparatus according to an exemplary embodiment;
Fig. 5 is a block diagram of an electronic device according to an exemplary embodiment;
Fig. 6 is a block diagram of another electronic device according to an exemplary embodiment.
DETAILED DESCRIPTION
Fig. 1 is a flowchart of an image processing method according to an exemplary embodiment. As shown in Fig. 1, the image processing method is used in a terminal, which may be an electronic device with a shooting function, such as a mobile phone. The specific processing flow includes the following steps.
In step S11, when a preset fly-away special effect instruction is received, a face grid and a facial-features grid are established in the image to be processed.
In implementation, the terminal's preset display interface may display multiple special-effect icons. When shooting video or photos with the terminal, the user may select one of these icons to add the corresponding effect to the video or photo. When the user taps the preset fly-away special effect icon, the terminal receives the fly-away special effect instruction.
When the user takes a photo, the terminal can use a preset face recognition algorithm to determine whether the photo includes a face. If it does, the photo is taken as the image to be processed; if it does not, no further processing is performed. The terminal can then establish a face grid and a facial-features grid based on the image to be processed.
When the user shoots a video, the terminal can run the preset face recognition algorithm on each frame of the video. If a frame includes a face, that frame is taken as the image to be processed; otherwise, the next frame is examined.
The terminal can establish the face grid and the facial-features grid in the image to be processed, where the facial features include the eyes, the mouth, and the nose. Fig. 2a is a schematic diagram of an image to be processed provided by an embodiment of the present application; Figs. 2b-2f are schematic diagrams of the face grid, left-eye facial-features grid, right-eye facial-features grid, nose facial-features grid, and mouth facial-features grid established by the terminal.
The specific process by which the terminal establishes the face grid and the facial-features grid is described in detail later.
In step S12, a facial image is determined in the image area covered by the face grid.
In implementation, a facial-area mask may be pre-stored in the terminal; the facial-area mask is a human face mask that contains no facial-features images.
The terminal may use the image to be processed as the first layer. After determining the face grid, the terminal can determine from the image to be processed the image area covered by the face grid. The terminal can then acquire the pre-stored facial-area mask and map it onto the image area covered by the face grid through a preset mapping algorithm, obtaining the facial image. The terminal may use the image to be processed containing the facial image as the second layer.
In step S13, the pixel value of each pixel in the facial image is set to the target pixel value.
In implementation, the target pixel value may be pre-stored in the terminal, or the terminal may determine it from the image to be processed.
The terminal can set the pixel values of the pixels contained in the facial image to the target pixel value, so that the user's face in the second layer appears smoothed and its skin color becomes more uniform. Fig. 2g shows an example of a facial image provided by an embodiment of the present application.
Optionally, the present application provides an implementation in which the terminal determines the target pixel value from the image to be processed and sets the pixel value of each pixel in the face area to the target pixel value, specifically including the following steps.
Step 1: extract the pixel values of pixels at multiple preset positions in the image to be processed.
The preset positions include the user's forehead and/or cheeks; they may also be other positions on the user's facial image, such as the temples.
In implementation, the terminal can identify the preset positions in the image to be processed according to the preset face recognition algorithm and, for each preset position, extract the pixel values of one or more pixels there.
For example, the terminal can recognize preset positions such as the user's forehead, temples, and cheeks according to the preset face recognition algorithm, and then extract the pixel value of one pixel at each of these positions. When the terminal stores pixel values in the red-green-blue (RGB) color mode, the pixel value at a preset position may be (252, 224, 203, 255).
Step 2: calculate the average of the multiple pixel values and use the average as the target pixel value.
In implementation, the terminal can calculate the average of the pixel values extracted at the preset positions and take that average as the target pixel value.
In the embodiments of the present application, the terminal sets the pixel value of each pixel in the face area to the target pixel value, so that the facial image from which the facial-features images have been removed is filled with a uniform flesh color, giving it a smoothed visual effect.
In another feasible implementation, a variety of skin texture samples may be pre-stored in the terminal, with different samples representing different skin textures, and a particular sample may be preset as the target skin texture sample. After setting the target pixel value for the facial image, the terminal can select the target skin texture sample and give the facial image the corresponding skin texture, so that the visual effect of the user's facial image in the second layer is closer to the user's actual face.
In the embodiments of the present application, the specific process by which the terminal applies the skin texture of a sample to the face area is prior art and is not repeated here.
In step S14, the image area covered by the facial-features grid is extracted from the image to be processed to obtain the facial-features image.
In implementation, for each of the multiple facial-features grids in the image to be processed, the terminal can determine the image area the grid contains and extract it, obtaining the facial-features images as the third layer.
In step S15, the facial-features image is mapped onto the facial image according to the preset triangle mapping algorithm and the preset offset, obtaining the processed target image.
In implementation, multiple offsets may be pre-stored in the terminal; an offset comprises the offset direction and offset distance of a facial-features image relative to its position in the image to be processed. The offset differs depending on the shooting mode in which the image to be processed was captured. When the image to be processed is a photo, the terminal can take a pre-stored first offset as the offset for the image. The terminal may pre-store a correspondence between offsets and times; when the image to be processed is a frame of a video, the terminal can determine a second offset according to the time at which the fly-away special effect instruction was received and the offset-time correspondence, and use it as the offset for the image to be processed.
After determining the offset of each facial-features image, the terminal can map the facial-features images onto the facial image according to the preset triangle mapping algorithm and the offsets, obtaining the processed target image.
For example, the offsets stored in the terminal may be: left eye shifted 1-5 cm in any direction, right eye shifted 1-5 cm in any direction, nose shifted 1-7 cm in any direction, and mouth shifted 1-7 cm in any direction, where "any direction" means any direction centered on the facial feature concerned.
When the image to be processed is a photo, the terminal can obtain a pre-stored first offset: left eye shifted 1 cm to the right, right eye shifted 1 cm to the left, nose shifted 1 cm upward, and mouth shifted 1 cm upward. The terminal can map the facial-features images onto the facial image according to the preset triangle mapping algorithm and the first offset; in the resulting target image the user's left eye is shifted 1 cm to the right, the right eye 1 cm to the left, the nose 1 cm upward, and the mouth 1 cm upward, so that the user's facial features appear to fly away.
When the image to be processed is a frame of a video, the terminal can first determine that the fly-away special effect instruction was received at 1 s, and then determine from the pre-stored offset-time correspondence the second offset corresponding to 1 s: left eye shifted 1 cm to the left, right eye shifted 1 cm to the right, nose shifted 1 cm upward, and mouth shifted 1 cm downward.
The terminal can map the facial-features images onto the facial image according to the preset triangle mapping algorithm and the second offset; in the resulting target image the user's left eye is shifted 1 cm to the left, the right eye 1 cm to the right, the nose 1 cm upward, and the mouth 1 cm downward. Fig. 2h shows an example, provided by an embodiment of the present application, of a target image obtained after the terminal adds the fly-away special effect to the user's face.
In the embodiments of the present application, the offsets corresponding to different times differ. Within a preset processing cycle, the terminal determines different offset directions and distances for the image to be processed according to the time, which makes the user's facial features appear to fly away and then come back, enhancing the user experience. The terminal can also execute the above steps periodically, based on the processing cycle, while the user is shooting video, so that the user's facial features periodically fly away and return, further enhancing the user experience.
In the embodiments of the present application, upon receiving the fly-away special effect instruction, the terminal establishes the face grid and the facial-features grid in the image to be processed based on that image, determines the facial image from the face grid and the facial-features image from the facial-features grid, and then maps the facial-features image onto the facial image through the preset triangle mapping algorithm and the preset offset to obtain the target image. The terminal changes the position of the facial-features image relative to the facial image according to the preset offset, so that the user's facial features appear to fly away.
Compared with the prior art, in which the terminal maps a mask onto the user's face area based on only three feature points (the user's two eyes and mouth), in the present application the terminal builds the user's face grid and facial-features grid from multiple key points and maps the facial-features image onto the facial image based on the triangle mapping algorithm and the multiple key points composing the grids, which ensures that the facial-features image mapped onto the facial image changes more naturally with respect to the user's real facial features.
In addition, mapping the facial-features image onto the facial image through the triangle mapping algorithm is equivalent to mapping the third layer onto the second layer; the target image is composed of the first layer (that is, the image to be processed), the second layer, and the third layer. In another feasible implementation, the terminal can set the display transparency of the second and third layers to realize a variety of display effects and enhance the user experience.
Optionally, the specific process by which the terminal establishes the facial mesh and facial-feature meshes from the image to be processed is as follows:
Step 1: using a preset mapping algorithm, map a first number of key points in a pre-stored first face template onto the user's facial image region in the image to be processed.
In an implementation, a first face template may be pre-stored in the terminal. Fig. 2i is a schematic diagram of a first face template according to an embodiment of the present application. The first face template stores a first number of key points, distributed evenly over the face, the facial features, and the image region within a preset range around the face in the template. Each key point belongs to one of several preset key-point types, the preset types including a face type, a mouth type, a nose type, and an eye type. The type of a key point indicates the category of mesh the key point is used to build, i.e., whether it is used to build the facial mesh or a facial-feature mesh.
In a feasible implementation, the first number may be 200.
After obtaining the image to be processed, the terminal may, by the preset mapping algorithm, map the first number of key points in the pre-stored first face template onto the user's facial image region in the image to be processed, obtaining the corresponding first number of key points on the facial image region; the type of each key point is unchanged.
In the embodiments of the present application, the mapping algorithm may be any algorithm with a mapping function.
Step 2: establish the facial mesh and facial-feature meshes from the first number of key points and the preset key-point types.
In an implementation, the terminal may classify the first number of key points by the preset key-point types, obtaining the key points corresponding to each type, and then build a mesh from the key points of each type, obtaining the facial mesh and the facial-feature meshes.
In a feasible implementation, the terminal may number each key point; when the first number is 200, the key points are numbered 1-200. The terminal may store a correspondence between key-point numbers and meshes, for example: the eye mesh consists of the key points numbered 67-80. When building a mesh, the terminal may determine, from the number-mesh correspondence, the numbers of the key points corresponding to that mesh and build the mesh from the key points with those numbers, thereby obtaining the facial mesh and facial-feature meshes.
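Grouping key points into meshes by stored number ranges can be sketched as follows; apart from the 67-80 eye range quoted above, the ranges and names are assumptions for illustration:

```python
# Illustrative number-to-mesh correspondence. Only the 67-80 eye range is
# taken from the text above; the remaining ranges are assumptions.
MESH_KEYPOINT_RANGES = {
    "face":  range(1, 67),     # assumption
    "eye":   range(67, 81),    # key points 67-80, per the example above
    "nose":  range(81, 101),   # assumption
    "mouth": range(101, 121),  # assumption
}

def build_meshes(keypoints):
    """keypoints: dict mapping number (1..200) -> (x, y) coordinate.
    Returns the key points of each mesh, grouped by mesh name."""
    return {
        name: [keypoints[i] for i in ids if i in keypoints]
        for name, ids in MESH_KEYPOINT_RANGES.items()
    }
```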
As shown in Fig. 2b, 210 is the facial mesh, which covers the user's facial image region; in Fig. 2c, 221 is the left-eye mesh; in Fig. 2d, 222 is the right-eye mesh; in Fig. 2e, 223 is the nose mesh; and in Fig. 2f, 224 is the mouth mesh. The facial mesh and each facial-feature mesh are composed of multiple key points.
Optionally, as shown in Fig. 3, before performing image processing on the image to be processed, the terminal may determine the first face template from a pre-stored second face template. The specific process is as follows:
In step S31, a pre-stored second face template is obtained, the second face template including a second number of key points.
In an implementation, the terminal may obtain the pre-stored second face template, which includes a second number of key points. Fig. 2j is a schematic diagram of a second face template according to an embodiment of the present application. The second number of key points are distributed along the facial contour and the contours of the facial features in the user's facial image, and each key point belongs to one of the preset key-point types. In a feasible implementation, the second number may be 101; accordingly, when the terminal numbers the key points, the second number of key points are numbered 1-101.
In step S32, a third number of key points and the types of the third number of key points are determined from the second number of key points and the preset key-point types.
In an implementation, the terminal may determine the coordinates of each of the second number of key points in a preset coordinate system. Then, using a preset geometric formula and these coordinates, the terminal may compute the coordinates of a third number of key points, obtaining the third number of key points. Afterwards, the terminal may determine the types of the third number of key points from their coordinates.
The embodiments of the present application provide an example of such a geometric formula:
XC = XB + (XB − XA) × λ
where A and B are two of the second number of key points contained in the second face template, C is the key point determined by the terminal from the coordinates of key points A and B, λ is a preset weight, and XA, XB, and XC are the coordinates of key points A, B, and C, respectively, in the preset coordinate system of the electronic device's screen.
In a feasible implementation, the terminal may determine, from the coordinates of two adjacent key points in the preset coordinate system, the straight line passing through them, compute the coordinates of a point on that line using the preset geometric formula and the two key points' coordinates, and take that point as a new key point. The terminal may then determine, from the new key point's coordinates, the existing key point nearest to it in the preset coordinate system and assign that key point's type to the new key point. By applying this process to the second number of key points, the terminal may determine multiple new key points, obtaining the third number of key points and their types.
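Applied to each coordinate component, the formula extrapolates a new point C beyond B along the direction from A to B. A sketch of this derivation step and the nearest-neighbor type assignment; the value of λ and the data layout are assumptions:

```python
import numpy as np

def derive_keypoint(xa, xb, lam=0.5):
    """XC = XB + (XB - XA) * lambda, applied component-wise:
    extrapolates a new key point beyond B along the A -> B direction."""
    xa, xb = np.asarray(xa, dtype=float), np.asarray(xb, dtype=float)
    return xb + (xb - xa) * lam

def assign_type(new_point, keypoints, types):
    """Give the new key point the type of its nearest existing key point."""
    pts = np.asarray(keypoints, dtype=float)
    nearest = np.argmin(np.linalg.norm(pts - np.asarray(new_point), axis=1))
    return types[nearest]
```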
In another feasible implementation, the terminal may determine the type of a key point according to a type instruction issued by a technician.
In the embodiments of the present application, the third number of key points and the second number of key points together make up the first number of key points; the first face template differs from the second face template only in the number of key points it contains.
In step S33, the first face template is determined from the second face template and the third number of key points.
In an implementation, the terminal may take the second face template containing the third number of key points as the first face template.
In the embodiments of the present application, from the second number of key points contained in the second face template, the terminal determines a third number of key points on the facial image and in the surrounding region within a preset range of the facial image, and builds the facial mesh and facial-feature meshes from the original second number of key points together with the third number of key points. This makes the resulting meshes as uniform and fine as possible, which further improves how closely the facial-feature images corresponding to the facial-feature meshes fit the user's real facial features when mapped onto the facial image, making the changes to the facial features more natural.
Optionally, if the terminal receives a facial-feature change instruction at the same time as the fly-away effect instruction, or within a preset processing period after receiving the fly-away effect instruction, the terminal may perform the following steps:
Step 1: when the preset facial-feature change instruction is received, establish the facial mesh and facial-feature meshes in the image to be processed.
In an implementation, while taking a photo or recording a video with the terminal, the user may select the preset facial-feature change icon from the special-effect icons displayed on the terminal's preset display interface, whereupon the terminal receives the preset facial-feature change instruction. Upon receiving it, the terminal proceeds as in steps 1 and 2 above.
Step 2: determine the facial image in the image region covered by the facial mesh.
In an implementation, the terminal proceeds as in step S12.
Step 3: set the pixel value of each pixel in the facial image to the target pixel value.
In an implementation, the terminal proceeds as in step S13.
Step 4: extract the image regions covered by the facial-feature meshes from the image to be processed to obtain the facial-feature images.
In an implementation, the terminal proceeds as in step S14.
Step 5: change the shape of the facial-feature images according to the change identifier carried in the facial-feature change instruction, obtaining deformed facial-feature images.
In an implementation, the facial-feature change instruction may carry a change identifier, the change identifier including an identifier for enlarging or shrinking the user's facial features.
From the change identifier carried in the received facial-feature change instruction, the terminal may determine the facial-feature images whose shape is to be changed and place each such image at the position corresponding to its facial-feature mesh; then, using a preset image processing algorithm, the terminal changes the shape of the facial-feature mesh and thereby the shape of the corresponding facial-feature image, obtaining the deformed facial-feature image.
For example, when the change identifier carried in the received facial-feature change instruction indicates enlarging the user's eyes, the terminal may determine from the identifier that the images to be reshaped are the user's left-eye and right-eye images. The terminal may then enlarge the left-eye and right-eye meshes by a preset ratio using a preset image processing algorithm, thereby changing the shapes of the left-eye and right-eye images and obtaining the enlarged left-eye and right-eye images.
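Enlarging a feature mesh by a preset ratio can be realized by scaling its key points about the mesh centroid; a sketch, with the ratio as an assumption:

```python
import numpy as np

def scale_mesh(points, ratio=1.3):
    """Scale mesh key points about their centroid: ratio > 1 enlarges the
    facial feature, ratio < 1 shrinks it."""
    pts = np.asarray(points, dtype=np.float64)
    centroid = pts.mean(axis=0)
    return centroid + (pts - centroid) * ratio
```

Each original triangle is then warped onto its scaled counterpart, for example with the per-triangle warp sketched earlier, which is what actually reshapes the underlying eye image.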
Step 6: map the deformed facial-feature images onto the facial image according to the preset triangle mapping algorithm, obtaining the processed target image.
In an implementation, the terminal may map the deformed facial-feature images onto the facial image according to the preset triangle mapping algorithm, obtaining the processed target image.
In the embodiments of the present application, after receiving the facial-feature change instruction, the terminal can change the shape of the facial-feature images based on the facial-feature meshes, thereby changing the size of the user's facial features; this increases the variety of special effects and further improves the user experience.
The technical solutions provided by the embodiments of the present application may have the following beneficial effects: when the fly-away effect instruction is received, the facial mesh and facial-feature meshes are established in the image to be processed; on the one hand, the facial image can be determined in the image region covered by the facial mesh, and the pixel value of each pixel in the facial image set to the target pixel value; on the other hand, the image regions covered by the facial-feature meshes can be extracted from the image to be processed to obtain the facial-feature images; afterwards, the facial-feature images can be mapped onto the facial image according to the preset triangle mapping algorithm and preset offsets to obtain the processed target image. In this solution, the facial-feature images and the facial image are determined from the image to be processed, and the positions of the facial-feature images relative to the facial image are changed according to the preset offsets, so that the user's facial features appear to fly away, improving the user experience.
Fig. 4 is a block diagram of an image processing apparatus according to an exemplary embodiment. Referring to Fig. 4, the apparatus includes an establishing unit 410, a first determining unit 420, a setting unit 430, an extracting unit 440, and a mapping unit 450, wherein:
the establishing unit 410 is configured to establish a facial mesh and facial-feature meshes in an image to be processed when a preset fly-away effect instruction is received;
the first determining unit 420 is configured to determine a facial image in the image region covered by the facial mesh;
the setting unit 430 is configured to set the pixel value of each pixel in the facial image to a target pixel value;
the extracting unit 440 is configured to extract, from the image to be processed, the image regions covered by the facial-feature meshes to obtain facial-feature images;
the mapping unit 450 is configured to map the facial-feature images onto the facial image according to a preset triangle mapping algorithm and preset offsets, to obtain a processed target image.
In one implementation, the establishing unit includes:
a mapping subunit configured to map a first number of key points in a pre-stored first face template onto the user's facial image region in the image to be processed, using a preset mapping algorithm;
an establishing subunit configured to establish the facial mesh and facial-feature meshes from the first number of key points and preset key-point types, the preset key-point types including a face type, a mouth type, a nose type, and an eye type.
In one implementation, the apparatus further includes:
an obtaining unit configured to obtain a pre-stored second face template, the second face template including a second number of key points;
a second determining unit configured to determine a third number of key points and the types of the third number of key points from the second number of key points and the preset key-point types;
a third determining unit configured to determine the first face template from the second face template and the third number of key points.
In one implementation, the setting unit includes:
an extracting subunit configured to extract pixel values of pixels at multiple preset positions in the facial image, the preset positions including the user's forehead and/or cheeks;
a computing subunit configured to compute the average of the multiple pixel values and take the average as the target pixel value.
In one implementation, the apparatus is further configured to:
establish the facial mesh and facial-feature meshes in the image to be processed when a facial-feature change instruction is received;
determine the facial image in the image region covered by the facial mesh;
set the pixel value of each pixel in the facial image to the target pixel value;
extract, from the image to be processed, the image regions covered by the facial-feature meshes to obtain the facial-feature images;
change the shape of the facial-feature images according to the change identifier carried in the facial-feature change instruction, obtaining deformed facial-feature images;
map the deformed facial-feature images onto the facial image according to the preset triangle mapping algorithm, obtaining the processed target image.
With regard to the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method and will not be elaborated here.
Fig. 5 is a block diagram of an electronic device 500 for image processing according to an exemplary embodiment. For example, the electronic device 500 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
Referring to Fig. 5, the electronic device 500 may include one or more of the following components: a processing component 502, a memory 504, a power component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.
The processing component 502 generally controls the overall operation of the electronic device 500, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 502 may include one or more processors 520 to execute instructions, so as to complete all or some of the steps of the method described above. In addition, the processing component 502 may include one or more modules to facilitate interaction between the processing component 502 and the other components; for example, the processing component 502 may include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
The memory 504 is configured to store various types of data to support operation on the device 500. Examples of such data include instructions for any application or method operated on the electronic device 500, contact data, phone book data, messages, pictures, videos, and the like. The memory 504 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
The power component 506 provides power for the various components of the electronic device 500. The power component 506 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 500.
The multimedia component 508 includes a screen that provides an output interface between the electronic device 500 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 508 includes a front camera and/or a rear camera. When the device 500 is in an operating mode, such as shooting mode or video mode, the front camera and/or the rear camera can receive external multimedia data. Each front or rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 includes a microphone (MIC) configured to receive external audio signals when the electronic device 500 is in an operating mode, such as call mode, recording mode, or speech recognition mode. The received audio signals may be further stored in the memory 504 or sent via the communication component 516. In some embodiments, the audio component 510 also includes a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, or the like. These buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.
The sensor component 514 includes one or more sensors for providing status assessments of various aspects of the electronic device 500. For example, the sensor component 514 may detect the on/off state of the device 500 and the relative positioning of components (for example, the display and keypad of the electronic device 500); the sensor component 514 may also detect a change in the position of the electronic device 500 or of one of its components, the presence or absence of user contact with the electronic device 500, the orientation or acceleration/deceleration of the electronic device 500, and changes in its temperature. The sensor component 514 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 514 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate wired or wireless communication between the electronic device 500 and other devices. The electronic device 500 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In one exemplary embodiment, the communication component 516 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 516 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 500 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 504 including instructions executable by the processor 520 of the electronic device 500 to complete the above methods. For example, the non-transitory computer-readable storage medium may be a ROM, a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Fig. 6 is a structural block diagram of another electronic device according to an exemplary embodiment. The electronic device 600 may vary considerably in configuration or performance and may include one or more processors (central processing units, CPUs) 601 and one or more memories 602, where the memory 602 stores at least one instruction that is loaded and executed by the processor 601 to implement the following process:
when a preset fly-away effect instruction is received, establishing a facial mesh and facial-feature meshes in an image to be processed; determining a facial image in the image region covered by the facial mesh; setting the pixel value of each pixel in the facial image to a target pixel value; extracting, from the image to be processed, the image regions covered by the facial-feature meshes to obtain facial-feature images; and mapping the facial-feature images onto the facial image according to a preset triangle mapping algorithm and preset offsets, to obtain a processed target image.
Optionally, the processor 601 is specifically configured to: map a first number of key points in a pre-stored first face template onto the user's facial image region in the image to be processed, using a preset mapping algorithm; and establish the facial mesh and facial-feature meshes from the first number of key points and preset key-point types, the preset key-point types including a face type, a mouth type, a nose type, and an eye type.
Optionally, the processor 601 is specifically configured to:
obtain a pre-stored second face template, the second face template including a second number of key points; determine a third number of key points and the types of the third number of key points from the second number of key points and the preset key-point types; and determine the first face template from the second face template and the third number of key points.
Optionally, the processor 601 is specifically configured to:
extract pixel values of pixels at multiple preset positions in the facial image, the preset positions including the user's forehead and/or cheeks; and compute the average of the multiple pixel values, taking the average as the target pixel value.
Optionally, the processor 601 is specifically configured to:
when a facial-feature change instruction is received, establish the facial mesh and facial-feature meshes in the image to be processed; determine the facial image in the image region covered by the facial mesh; set the pixel value of each pixel in the facial image to the target pixel value; extract, from the image to be processed, the image regions covered by the facial-feature meshes to obtain the facial-feature images; change the shape of the facial-feature images according to the change identifier carried in the facial-feature change instruction, obtaining deformed facial-feature images; and map the deformed facial-feature images onto the facial image according to the preset triangle mapping algorithm, obtaining the processed target image.
In yet another embodiment of the present application, a computer-readable storage medium is also provided, carrying one or more computer instruction programs; when the computer instruction programs are executed by one or more processors, the one or more processors implement the steps of any of the image processing methods described above.
In yet another embodiment of the present application, a computer program product containing instructions is also provided which, when run on a computer, causes the computer to perform any of the image processing methods in the above embodiments.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device, such as a server or data center, integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), a semiconductor medium (e.g., a solid-state drive (SSD)), or the like.
It should be understood that the present application is not limited to the precise structures described above and shown in the accompanying drawings, and various modifications and changes may be made without departing from its scope. The scope of the present application is limited only by the appended claims.

Claims (12)

  1. An image processing method, comprising:
    when a preset fly-away effect instruction is received, establishing a facial mesh and facial-feature meshes in an image to be processed;
    determining a facial image in the image region covered by the facial mesh;
    setting the pixel value of each pixel in the facial image to a target pixel value;
    extracting, from the image to be processed, the image regions covered by the facial-feature meshes to obtain facial-feature images;
    mapping the facial-feature images onto the facial image according to a preset triangle mapping algorithm and preset offsets, to obtain a processed target image.
  2. The image processing method according to claim 1, wherein establishing the facial mesh and facial-feature meshes in the image to be processed comprises:
    mapping a first number of key points in a pre-stored first face template onto a user's facial image region in the image to be processed, using a preset mapping algorithm;
    establishing the facial mesh and facial-feature meshes based on the first number of key points and preset key-point types, the preset key-point types comprising a face type, a mouth type, a nose type, and an eye type.
  3. The image processing method according to claim 2, further comprising:
    obtaining a pre-stored second face template, the second face template comprising a second number of key points;
    determining a third number of key points and the types of the third number of key points according to the second number of key points and the preset key-point types;
    determining the first face template based on the second face template and the third number of key points.
  4. The image processing method according to claim 1, wherein setting the pixel value of each pixel in the facial image to the target pixel value comprises:
    extracting pixel values of pixels at multiple preset positions in the facial image, the preset positions comprising the user's forehead and/or cheeks;
    computing the average of the multiple pixel values and taking the average as the target pixel value.
  5. The image processing method according to claim 1, further comprising:
    when a facial-feature change instruction is received, establishing the facial mesh and facial-feature meshes in the image to be processed;
    determining the facial image in the image region covered by the facial mesh;
    setting the pixel value of each pixel in the facial image to the target pixel value;
    extracting, from the image to be processed, the image regions covered by the facial-feature meshes to obtain the facial-feature images;
    changing the shape of the facial-feature images according to the change identifier carried in the facial-feature change instruction, to obtain deformed facial-feature images;
    mapping the deformed facial-feature images onto the facial image according to the preset triangle mapping algorithm, to obtain the processed target image.
  6. An image processing apparatus, comprising:
    an establishing unit configured to establish a facial mesh and facial-feature meshes in an image to be processed when a preset fly-away effect instruction is received;
    a first determining unit configured to determine a facial image in the image region covered by the facial mesh;
    a setting unit configured to set the pixel value of each pixel in the facial image to a target pixel value;
    an extracting unit configured to extract, from the image to be processed, the image regions covered by the facial-feature meshes to obtain facial-feature images;
    a mapping unit configured to map the facial-feature images onto the facial image according to a preset triangle mapping algorithm and preset offsets, to obtain a processed target image.
  7. An electronic device, comprising:
    a memory configured to store a computer program, as well as candidate intermediate data and result data produced by executing the computer program;
    a processor configured to: when a preset fly-away effect instruction is received, establish a facial mesh and facial-feature meshes in an image to be processed; determine a facial image in the image region covered by the facial mesh; set the pixel value of each pixel in the facial image to a target pixel value; extract, from the image to be processed, the image regions covered by the facial-feature meshes to obtain facial-feature images; and map the facial-feature images onto the facial image according to a preset triangle mapping algorithm and preset offsets, to obtain a processed target image.
  8. The electronic device according to claim 7, wherein the processor is specifically configured to:
    map a first number of key points in a pre-stored first face template onto the user's facial image region in the image to be processed, using a preset mapping algorithm;
    establish the facial mesh and facial-feature meshes based on the first number of key points and preset key-point types, the preset key-point types comprising a face type, a mouth type, a nose type, and an eye type.
  9. The electronic device according to claim 7, wherein the processor is further configured to:
    obtain a pre-stored second face template, the second face template comprising a second number of key points;
    determine a third number of key points and the types of the third number of key points according to the second number of key points and the preset key-point types;
    determine the first face template based on the second face template and the third number of key points.
  10. The electronic device according to claim 7, wherein the processor is specifically configured to:
    extract pixel values of pixels at multiple preset positions in the facial image, the preset positions comprising the user's forehead and/or cheeks;
    compute the average of the multiple pixel values and take the average as the target pixel value.
  11. The electronic device according to claim 7, wherein the processor is further configured to:
    when a facial-feature change instruction is received, establish the facial mesh and facial-feature meshes in the image to be processed;
    determine the facial image in the image region covered by the facial mesh;
    set the pixel value of each pixel in the facial image to the target pixel value;
    extract, from the image to be processed, the image regions covered by the facial-feature meshes to obtain the facial-feature images;
    change the shape of the facial-feature images according to the change identifier carried in the facial-feature change instruction, to obtain deformed facial-feature images;
    map the deformed facial-feature images onto the facial image according to the preset triangle mapping algorithm, to obtain the processed target image.
  12. A computer-readable storage medium carrying one or more computer instruction programs, wherein when the computer instruction programs are executed by one or more processors, the one or more processors perform the method according to any one of claims 1 to 5.
PCT/CN2019/114886 2018-12-24 2019-10-31 Image processing method and apparatus, electronic device, and storage medium WO2020134558A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP19902558.6A EP3905662A4 (en) 2018-12-24 2019-10-31 IMAGE PROCESSING APPARATUS AND METHOD, ELECTRONIC DEVICE AND INFORMATION HOLDER
US17/098,066 US11030733B2 (en) 2018-12-24 2020-11-13 Method, electronic device and storage medium for processing image
US17/306,340 US20210256672A1 (en) 2018-12-24 2021-05-03 Method, electronic device and storage medium for processing image

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811585135.3 2018-12-24
CN201811585135.3A CN109672830B (zh) 2018-12-24 2019-04-23 Image processing method and apparatus, electronic device, and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/098,066 Continuation US11030733B2 (en) 2018-12-24 2020-11-13 Method, electronic device and storage medium for processing image

Publications (1)

Publication Number Publication Date
WO2020134558A1 true WO2020134558A1 (zh) 2020-07-02

Family

ID=66146102

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/114886 WO2020134558A1 (zh) Image processing method and apparatus, electronic device, and storage medium

Country Status (4)

Country Link
US (2) US11030733B2 (zh)
EP (1) EP3905662A4 (zh)
CN (1) CN109672830B (zh)
WO (1) WO2020134558A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109672830B (zh) * 2018-12-24 2020-09-04 北京达佳互联信息技术有限公司 Image processing method and apparatus, electronic device, and storage medium
CN110942426B (zh) * 2019-12-11 2023-09-29 广州酷狗计算机科技有限公司 Image processing method and apparatus, computer device, and storage medium
CN111242881B (zh) * 2020-01-07 2021-01-12 北京字节跳动网络技术有限公司 Method and apparatus for displaying special effects, storage medium, and electronic device
CN113706369A (zh) * 2020-05-21 2021-11-26 北京达佳互联信息技术有限公司 Image processing method and apparatus, electronic device, and storage medium
CN112967261B (zh) * 2021-03-17 2022-07-29 北京三快在线科技有限公司 Image fusion method and apparatus, device, and storage medium
CN114501065A (zh) * 2022-02-11 2022-05-13 广州方硅信息技术有限公司 Virtual gift interaction method and system based on face puzzle, and computer device
CN114567805A (zh) * 2022-02-24 2022-05-31 北京字跳网络技术有限公司 Method and apparatus for determining special-effect video, electronic device, and storage medium


Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002065761A2 (en) * 2001-02-12 2002-08-22 Carnegie Mellon University System and method for stabilizing rotational images
JP2005237896A (ja) * 2004-02-27 2005-09-08 Aruze Corp Gaming machine
US7737996B2 (en) * 2005-12-01 2010-06-15 Microsoft Corporation Techniques for automated animation
GB2451050B (en) * 2006-05-05 2011-08-31 Parham Aarabi Method, system and computer program product for automatic and semiautomatic modification of digital images of faces
US20080065992A1 (en) * 2006-09-11 2008-03-13 Apple Computer, Inc. Cascaded display of video media
US20100321475A1 (en) * 2008-01-23 2010-12-23 Phillip Cox System and method to quickly acquire three-dimensional images
KR101686913B1 (ko) * 2009-08-13 2016-12-16 삼성전자주식회사 Method and apparatus for providing an event service in an electronic device
CN103176684B (zh) * 2011-12-22 2016-09-07 中兴通讯股份有限公司 Method and apparatus for a multi-region switching interface
JP5984453B2 (ja) * 2012-01-10 2016-09-06 三菱電機株式会社 Indoor unit of an air conditioner
TWI499279B (zh) * 2012-01-11 2015-09-01 Chunghwa Picture Tubes Ltd Image processing apparatus and method thereof
JP5966657B2 (ja) * 2012-06-22 2016-08-10 カシオ計算機株式会社 Image generation device, image generation method, and program
KR102212209B1 (ko) * 2014-04-10 2021-02-05 삼성전자주식회사 Gaze tracking method, apparatus, and computer-readable recording medium
US9756299B2 (en) * 2014-04-14 2017-09-05 Crayola, Llc Handheld digital drawing and projection device
US9277180B2 (en) * 2014-06-30 2016-03-01 International Business Machines Corporation Dynamic facial feature substitution for video conferencing
JP6006825B1 (ja) * 2015-03-24 2016-10-12 住友不動産株式会社 Form check device
EP3311329A4 (en) * 2015-06-19 2019-03-06 Palmer Family Trust SYSTEMS AND METHODS FOR IMAGE ANALYSIS
CN105187736B (zh) * 2015-07-28 2018-07-06 广东欧珀移动通信有限公司 Method, system, and mobile terminal for converting a still face picture into a video
CN105554389B (zh) * 2015-12-24 2020-09-04 北京小米移动软件有限公司 Shooting method and apparatus
CN108229278B (zh) * 2017-04-14 2020-11-17 深圳市商汤科技有限公司 Face image processing method and apparatus, and electronic device
CN108965740B (zh) * 2018-07-11 2020-10-30 深圳超多维科技有限公司 Real-time video face-swapping method, apparatus, device, and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104063890A (zh) * 2013-03-22 2014-09-24 中国移动通信集团福建有限公司 Method and system for cartoon visualization of human faces
WO2015139231A1 (en) * 2014-03-19 2015-09-24 Intel Corporation Facial expression and/or interaction driven avatar apparatus and method
CN106415665A (zh) * 2014-07-25 2017-02-15 英特尔公司 Avatar facial expression animations with head rotation
CN107431635A (zh) * 2015-03-27 2017-12-01 英特尔公司 Avatar facial expression and/or speech driven animations
CN106919906A (zh) * 2017-01-25 2017-07-04 迈吉客科技(北京)有限公司 Image interaction method and interaction apparatus
CN109672830A (zh) * 2018-12-24 2019-04-23 北京达佳互联信息技术有限公司 Image processing method and apparatus, electronic device, and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3905662A4 *

Also Published As

Publication number Publication date
US20210256672A1 (en) 2021-08-19
CN109672830A (zh) 2019-04-23
US11030733B2 (en) 2021-06-08
EP3905662A1 (en) 2021-11-03
CN109672830B (zh) 2020-09-04
US20210065342A1 (en) 2021-03-04
EP3905662A4 (en) 2022-09-28

Similar Documents

Publication Publication Date Title
WO2020134558A1 (zh) Image processing method and apparatus, electronic device, and storage medium
US11114130B2 (en) Method and device for processing video
US20230393721A1 (en) Method and Apparatus for Dynamically Displaying Icon Based on Background Image
WO2020216025A1 (zh) Method and apparatus for displaying the face of a virtual character, computer device, and readable storage medium
US20190221041A1 (en) Method and apparatus for synthesizing virtual and real objects
CN109308205B (zh) Display adaptation method and apparatus for an application, device, and storage medium
WO2022179025A1 (zh) Image processing method and apparatus, electronic device, and storage medium
WO2018120238A1 (zh) Device, method, and graphical user interface for processing documents
WO2020007241A1 (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
JP2016531362A (ja) Skin color adjustment method, skin color adjustment apparatus, program, and recording medium
WO2022068479A1 (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
US20200312022A1 (en) Method and device for processing image, and storage medium
CN109472738B (zh) Image illumination correction method and apparatus, electronic device, and storage medium
WO2020233403A1 (zh) Personalized face display method and apparatus for a three-dimensional character, device, and storage medium
CN110782532B (zh) Image generation method, generation apparatus, electronic device, and storage medium
JP6170626B2 (ja) Composition change method, composition change apparatus, terminal, program, and recording medium
WO2022062808A1 (zh) Avatar generation method and device
CN109978996B (zh) Method, apparatus, terminal, and storage medium for generating a three-dimensional expression model
WO2023284632A1 (zh) Image display method and apparatus, and electronic device
TW202013316A (zh) Face image processing method and apparatus, electronic device, and storage medium
CN112581358A (zh) Training method for an image processing model, and image processing method and apparatus
CN111866372A (zh) Selfie method and apparatus, storage medium, and terminal
US9665925B2 (en) Method and terminal device for retargeting images
CN115702443A (zh) Applying stored digital makeup enhancements to recognized faces in digital images
CN112257594A (zh) Multimedia data display method and apparatus, computer device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 19902558; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2019902558; Country of ref document: EP; Effective date: 20210726)