WO2022193573A1 - Facial fusion method and apparatus - Google Patents


Info

Publication number: WO2022193573A1
Authority: WO (WIPO, PCT)
Prior art keywords: image, face, original, blurred, pixel
Application number: PCT/CN2021/117014
Other languages: French (fr), Chinese (zh)
Inventors: 杨丽倩, 刘晓强
Original Assignee: 北京达佳互联信息技术有限公司
Application filed by 北京达佳互联信息技术有限公司
Publication of WO2022193573A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/73
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging

Definitions

  • the present disclosure relates to the field of image processing, and in particular, to a face fusion method, apparatus, electronic device, computer-readable storage medium, and a computer program product.
  • Face-changing special effects replace the face in one image with the identity features of another face while keeping the other features of the image, such as the background and the characters' clothing, unchanged. The face-changing special effect is therefore essentially an image synthesis process.
  • Current face-changing special effects mainly detect the key points of the face contour, determine the face contour from those key points, and then replace the face with another face according to that contour. Since human skin colors may differ and only part of the face is replaced, edge color distortion appears at the junction of the fused face and the unchanged part. For example, after replacing a face with a lighter complexion with a face with a darker complexion, there are obvious color differences between the face and the forehead and neck.
  • Color transfer technology applies the overall color of one image to another image, so that the result keeps the shape of the original image but takes on the color of the other image.
  • the present disclosure provides a face fusion method, apparatus, electronic device, computer-readable storage medium, and a computer program product.
  • a face fusion method including:
  • the original face image and the target face image are respectively subjected to a blurring process to obtain a first original face blurred image and a first target face blurred image;
  • performing a blurring process on the pure white face image generated according to the original face image to obtain a pure white blurred image of the original face image, and performing a blurring process on the pure white face image generated according to the target face image to obtain a pure white blurred image of the target face image;
  • performing image decomposition on the first original face blurred image based on the pure white blurred image of the original face image to obtain a second original face blurred image, and performing image decomposition on the first target face blurred image based on the pure white blurred image of the target face image to obtain a second target face blurred image;
  • performing image decomposition on the original face image based on the second original face blurred image to obtain an original face detail image;
  • the original face detail image is fused to the second target face blur image to obtain a face fusion image of the target face image.
  • performing image decomposition on the first original blurred face image based on the pure white blurred image of the original face image to obtain a second original blurred face image including:
  • performing normalization processing on the pixel values of each pixel in the pure white blurred image of the original face image to obtain a first normalized blurred image including:
  • the pure white pixel value is the pixel value of a pure white pixel.
  • image decomposition is performed on the first target face blurred image to obtain a second target face blurred image, including:
  • the pixel values of each pixel in the pure white blur image of the target face image are normalized to obtain a second normalized blur image, including:
  • the second normalized blur image is obtained by dividing the pixel value of each pixel in the pure white blur image of the target face image by the pure white pixel value.
  • performing image decomposition on the original face image based on the second original face blur image to obtain an original face detail image including:
  • performing image decomposition on the original face image based on the second original face blur image to obtain an original face detail image including:
  • the pixel value of each pixel point in the original face detail map is added to the pixel value of each pixel point in the second target face blur map to obtain a face fusion map of the target face map.
  • a face fusion device including:
  • an acquisition unit configured to acquire the original face map and the target face map
  • a face blurring unit configured to blur the original face image and the target face image respectively to obtain a first original face blurred image and a first target face blurred image
  • a pure white blurring unit configured to perform a blurring process on the pure white face image generated according to the original face image to obtain a pure white blurred image of the original face image, and to perform a blurring process on the pure white face image generated according to the target face image to obtain a pure white blurred image of the target face image;
  • a blurring processing unit configured to perform image decomposition on the first original face blurred image based on the pure white blurred image of the original face image to obtain a second original face blurred image, and to perform image decomposition on the first target face blurred image based on the pure white blurred image of the target face image to obtain a second target face blurred image;
  • a detail decomposition unit configured to perform image decomposition on the original face map based on the second original face blur map to obtain an original face detail map
  • the face fusion unit is configured to fuse the original face detail image to the second target face fuzzy image to obtain a face fusion image of the target face image.
  • the blurring processing unit is configured to:
  • the blurring processing unit is further configured to:
  • the pure white pixel value is the pixel value of a pure white pixel.
  • the blurring processing unit is configured to:
  • the blurring processing unit is further configured to:
  • the second normalized blur image is obtained by dividing the pixel value of each pixel in the pure white blur image of the target face image by the pure white pixel value.
  • the detail decomposition unit is configured to:
  • the face fusion unit is configured as:
  • the detail decomposition unit is configured to:
  • the face fusion unit is configured as:
  • the pixel value of each pixel point in the original face detail map is added to the pixel value of each pixel point in the second target face blur map to obtain a face fusion map of the target face map.
  • an electronic device including a memory and a processor, where the memory stores a computer program, and the processor, when executing the computer program, implements the face fusion method of the first aspect or any possible implementation of the first aspect.
  • a computer-readable storage medium, where, when the instructions in the computer-readable storage medium are executed by a processor of an electronic device, the electronic device can execute the face fusion method of the first aspect or any one of its possible implementations.
  • a computer program product including a computer program, where, when the computer program is executed by a processor, the face fusion method of the first aspect or any one of its possible implementations is implemented.
  • the first original face blur image and the first target face blur image which are composed of low-frequency signals and retain the original skin color of the face are obtained.
  • the pure white face images generated from the original face image and the target face image are blurred to obtain their respective pure white blurred images, and the first original face blurred image and the first target face blurred image are processed by using these pure white blurred images.
  • the images are decomposed to eliminate the black borders at the face edges that the blurring process introduces into the first original face blurred image and the first target face blurred image, yielding the second original face blurred image and the second target face blurred image; the original face image is then decomposed through the second original face blurred image to obtain the original face detail image, and finally the original face detail image is fused into the second target face blurred image to obtain the face fusion image of the target face image.
  • The fusion of the facial features of one face image with the original skin color of another face image is thus achieved in a simple manner, without a large number of color migration operations, so the processing efficiency is high: color distortion due to blurring is avoided, and the speed of face fusion is improved at the same time.
  • FIG. 1A shows a schematic diagram of an initial human face image that has not been processed by the special effect of face-changing.
  • FIG. 1B shows a schematic diagram of a face-changing effect.
  • Fig. 2 is a flow chart of a method for face fusion according to an exemplary embodiment.
  • FIG. 3A shows a schematic diagram of a face image to be subjected to face swapping.
  • FIG. 3B shows a schematic diagram of a face blur map.
  • FIG. 3C shows a schematic diagram of black borders appearing at the edge of a human face in a blurred image of a human face.
  • FIG. 3D shows a schematic diagram of a pure white image of a human face.
  • Figure 3E shows a schematic diagram of a pure white blur map.
  • FIG. 3F shows a schematic diagram of a face detail map.
  • FIG. 3G shows a schematic diagram of a face fusion map for performing face fusion based on high and low frequency signals of an image.
  • Fig. 4 is a flowchart showing a step of processing a face blurred image by using a pure white blurred image according to an exemplary embodiment.
  • Fig. 5 is a block diagram of a face fusion apparatus according to an exemplary embodiment.
  • Fig. 6 is an internal structure diagram of an electronic device according to an exemplary embodiment.
  • the face fusion method provided by the present disclosure can be applied to the application environment of editing face-changing special effects through a terminal.
  • the terminal may be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers and portable wearable devices.
  • the face fusion method provided by the present disclosure can be applied to various application scenarios that require editing of special effects for changing faces.
  • the user can edit the face-changing special effects for two or more faces in a video while shooting it, obtain the face-changing video, and upload it to a video sharing platform for other users to watch.
  • the user can edit the face-changing special effects in real time during a live video stream.
  • the user can also photograph a face through the terminal, and edit the face-changing special effect between the photographed face and the face of another image.
  • the user can also shoot two faces through the terminal, and edit the face-changing special effects on the two faces obtained by shooting.
  • Those skilled in the art can apply the face fusion method provided by the present disclosure to various application scenarios for editing face-changing special effects according to the actual situation.
  • FIG. 1A shows a schematic diagram of an initial human face image that has not been processed by the special effect of face-changing.
  • the initial face image contains the original face on the left and the target face on the right.
  • the face-changing effect is performed on the original face and the target face, and the face-changing effect shown in Fig. 1B is obtained.
  • the skin color of the left face is different from that of the right face.
  • After covering the left face onto the right face, a color difference appears at the junction between the new face and the uncovered part of the original image, above and below the face, as shown in area 101 marked in FIG. 1B; that is, there is edge color distortion. Repairing this edge through color migration technology would seriously reduce face-changing efficiency.
  • Fig. 2 is a flow chart of a face fusion method according to an exemplary embodiment. As shown in Fig. 2 , the face fusion method of the present disclosure includes the following steps.
  • step S210 the original face map and the target face map are obtained.
  • the face map may be an image containing a human face.
  • the original face and the target face can be two face swap objects for face swap effect editing.
  • the terminal may obtain the above-mentioned original face map and target face map in corresponding different ways.
  • the user can select a video on the terminal, select two faces in the video that need face-changing, and submit a face-changing request.
  • the terminal extracts images containing the two faces in the video frame respectively according to the face-changing request, as the above-mentioned original face image and target face image respectively.
  • Those skilled in the art can determine the means for obtaining the original face image and the target face image according to the actual application scenario.
  • FIG. 3A shows a schematic diagram of a face image to be subjected to face swapping.
  • the two face images to be swapped, on the left and the right, can be extracted from one image.
  • the face image on the left is named as the original face image
  • the face image on the right is named as the target face image.
  • step S220 blurring is performed on the original face image and the target face image respectively, to obtain a first original face blurred image and a first target face blurred image.
  • blurring processing is an image processing method used to obtain low-frequency signals in an image and make the image blurred.
  • Common blur processing mainly includes Gaussian blur, a data smoothing technique based on the Gaussian distribution. The principle is to take, for each pixel in the image, the average value of its surrounding pixels, so that the image loses detail.
  • the above-mentioned Gaussian blurring can be achieved by the following two-dimensional Gaussian function: G(x, y) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²)), where (x, y) is the coordinate of a pixel relative to the kernel center, G is the Gaussian weight used to compute the blurred pixel value at (x, y), and σ represents the degree of smoothness.
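As an illustrative sketch (not the patent's implementation), the Gaussian function above can be sampled into a kernel and applied by direct convolution in numpy; note that the zero padding used here is exactly what later produces the black border at the face edge. An optimized separable filter would be used in practice.

```python
import numpy as np

def gaussian_kernel(radius, sigma):
    """Sample the 2D Gaussian function G(x, y) and normalize so weights sum to 1."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return g / g.sum()

def gaussian_blur(img, radius=2, sigma=1.0):
    """Blur a single-channel float image by direct convolution.
    Pixels outside the image are treated as 0 (zero padding), which darkens
    the border of the blurred result."""
    k = gaussian_kernel(radius, sigma)
    padded = np.pad(img, radius)  # pad with zeros
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1] * k)
    return out
```

Blurring a constant-white image with this helper leaves interior pixels unchanged but darkens the edges, which is the black-border effect discussed below.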
  • the face blur map may be an image composed of low-frequency signals and used to express the basic color of the face. It should be noted that, from the perspective of the degree of change in image brightness or grayscale, the image contains high-frequency signals and low-frequency signals.
  • the low-frequency signal of the image represents the area where the brightness or grayscale changes slowly in the image, that is, the area where the color changes less and is relatively flat in the image, and the low-frequency signal usually describes the main content of the image.
  • the high-frequency signal represents the area where the brightness or grayscale changes sharply in the image, that is, the area in the image where the color changes greatly, and the edge contour and detail features are displayed.
  • the terminal may perform Gaussian blurring on the original face image and the target face image respectively.
  • After Gaussian blurring, since the color change frequency of each color channel in the image is low, the blurred image is composed of low-frequency signals.
  • the low-frequency signal can reflect the basic skin color of the human face.
  • the face image after Gaussian blurring eliminates the high-frequency signals representing the detailed features of the facial features, but retains the low-frequency signals representing the colors of the face with obvious light-dark contrast. This image is the above-mentioned face blur map.
  • FIG. 3B shows a schematic diagram of a face blur map.
  • two blurred faces, left and right, in Fig. 3B are obtained, namely the first original face blurred image and the first target face blurred image. The color and brightness of each pixel in the blurred images change at a low frequency and are all low-frequency signals.
  • the high-frequency signals representing the detailed features of the facial features are removed, and an image composed of low-frequency signals, representing the colors in the face with obvious light-dark contrast, is obtained as the face blur map.
  • step S230 the pure white face image generated according to the original face image is blurred to obtain a pure white blurred image of the original face image, and the pure white face image generated according to the target face image is blurred to obtain the pure white blurred image of the target face image.
  • if the original face image is directly divided by the first original face blurred image to obtain the original face detail image, and the original face detail image is then multiplied with the first target face blurred image to realize face fusion, color distortion may appear at the edges of the fused face.
  • The Gaussian blur method saves the computation of color migration; however, since Gaussian blur has a certain blur radius, the pixel values generated outside the face range tend toward 0 during blurring, forming a ring of black border at the edge of the face. As shown in Figure 3C, a schematic diagram of black borders at the face edge in the blurred face image, there is a black border around the face edge in the first original face blurred image. Fusion based on a blurred face image with black borders therefore causes edge color distortion in the final fused face. This edge color distortion can be avoided by introducing the pure white image method.
  • the terminal may first generate a corresponding pure-white face image according to the original face image, and generate a corresponding pure-white face image according to the target face image.
  • FIG. 3D shows a schematic diagram of a pure white image of a human face.
  • the image on the right is a pure white face image generated from the original face image on the left.
  • the pure white face image has a solid color area filled with pure white with the same grayscale.
  • the edge of the region matches the face in the original face image on the left in terms of shape, size and other features.
  • the pure white face images of the original face image and the target face image are respectively blurred to obtain the pure white blurred image of the original face image and the pure white blurred image of the target face image.
  • Figure 3E shows a schematic diagram of a pure white blurred image. It can be seen from the figure that Gaussian blurring is performed on the pure white image of the face on the left, and the pure white blurred image on the right is obtained, and the edges in the pure white blurred image are blurred.
  • step S240 based on the pure white blurred image of the original face image, image decomposition is performed on the first original face blurred image to obtain a second original face blurred image, and, based on the pure white blurred image of the target face image, image decomposition is performed on the first target face blurred image to obtain a second target face blurred image.
  • the terminal may first perform normalization processing on the pixel values of each pixel in the pure white blurred images of the original face image and the target face image, respectively, to obtain the normalized blurred images of the original face image and the target face image. Then, the pixel value of each pixel in the first original face blurred image and the first target face blurred image is divided by the pixel value of the corresponding pixel in the respective normalized blurred image to obtain the above-mentioned second original face blurred image and second target face blurred image.
  • In this way, the pixel values of the pixels outside the face edge (the black border) in the original face blurred image can be changed to values close to those of the pixels at the face edge.
  • As a result, the black border at the face edge in the blurred face image disappears, and edge color distortion of the fused face is avoided in the subsequent face fusion processing.
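The black-border elimination of step S240 can be illustrated with a toy 1-D cross-section through a face edge (hypothetical numbers, assuming 255 as the pure white pixel value; not patent data):

```python
import numpy as np

# Blurred values along a line crossing the face edge. Inside the face the
# blur map holds the skin tone (~200); toward the edge, zero padding from
# the blur drags values down, producing the black border.
face_blur = np.array([200.0, 200.0, 150.0, 100.0, 50.0])

# The pure white face image (255 inside the face) blurred with the same
# kernel decays in exactly the same proportion.
white_blur = np.array([255.0, 255.0, 191.25, 127.5, 63.75])

# Normalize the pure white blur map to [0, 1] and divide it out.
normalized = white_blur / 255.0
second_blur = face_blur / normalized  # edge values restored to ~200
```

Because both maps decay by the same factor at the edge, the division cancels the falloff and the black border disappears.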
  • step S250 based on the second original face blurred image, image decomposition is performed on the original face image to obtain an original face detail image.
  • the face detail map may be an image composed of high-frequency signals and used to express the detailed features of the facial features of the face.
  • the terminal may perform image decomposition on the original face image by using the second original face blurred image, so as to obtain an image composed of high-frequency signals that expresses the detailed features of the facial features of the human face, that is, the original face detail map.
  • an image can be composed of a detail layer and a blur layer
  • the detail layer can be obtained by decomposing the blur layer on the image.
  • in image decomposition, for example, the blur layer can be removed from the original image to obtain the detail layer based on multiplicative decomposition or additive decomposition.
  • when image decomposition is performed by means of additive decomposition, the grayscale of each pixel in the face image can be extracted to form a grayscale matrix, and the grayscale of each pixel in the face blur layer can be extracted to form another grayscale matrix. By subtracting the two grayscale matrices element-wise, the resulting grayscale matrix constitutes the face detail layer.
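A minimal numpy sketch of this additive decomposition (illustrative grayscale values, not from the patent):

```python
import numpy as np

face = np.array([[120.0, 130.0],
                 [140.0, 150.0]])   # grayscale face image
blur = np.array([[135.0, 135.0],
                 [135.0, 135.0]])   # its blur layer (low-frequency signals)

detail = face - blur  # element-wise subtraction gives the detail layer
# The two layers recompose the original image exactly:
restored = blur + detail
```

The detail layer holds signed deviations from the local average, which is why it preserves edges and textures but not the overall complexion.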
  • FIG. 3F shows a schematic diagram of a face detail map. It can be seen from the figure that the face detail image, decomposed by removing the face blurred image, retains the texture and contour details of facial features such as the eyebrows, eyes, nose and mouth, but the original complexion of the face has been removed.
  • step S260 the original face detail image is fused to the second target face fuzzy image to obtain a face fusion image of the target face image.
  • the face fusion image may be an image obtained by merging the facial features of one face with the features of another face other than its facial features, such as its skin color.
  • the terminal may fuse the original face detail image and the second target face blurred image into the above-mentioned face fusion image based on multiplicative fusion or additive fusion. For example, when performing image fusion by means of additive fusion, the grayscale matrix formed by the grayscales of the pixels in the original face detail image can be added to the grayscale matrix formed by the pixels in the second target face blurred image, and the resulting grayscale matrix constitutes the above-mentioned face fusion map.
  • FIG. 3G shows a schematic diagram of a face fusion map for performing face fusion based on high and low frequency signals of an image.
  • the face on the right retains the basic skin color of the original target face, but the facial features of the original face image on the left are integrated into it. Even if there is a difference in skin color, the facial features of one face are fused onto the original skin color of the other face during the fusion process. Therefore, after the left face is fused onto the right face, no color difference appears in area 301 in the figure, that is, there is no edge color distortion. Moreover, by using the pure white blurred images to eliminate the black borders generated by blurring the face edges, there is no color distortion on either face in the face fusion image.
  • To sum up, the first original face blurred image and the first target face blurred image, which are composed of low-frequency signals and retain the original skin color of the face, are obtained; the pure white face images generated according to the original face image and the target face image are then blurred to obtain their respective pure white blurred images, and the first original face blurred image and the first target face blurred image are decomposed to eliminate the black borders that the blurring process introduces at the face edges, yielding the second original face blurred image and the second target face blurred image; the original face image is decomposed through the second original face blurred image to obtain the original face detail image, which is finally fused into the second target face blurred image to obtain the face fusion map of the target face map.
  • The fusion of the facial features of one face image with the original skin color of another face image is therefore achieved in a simple manner, without a large number of color migration operations, and the processing efficiency is high: color distortion due to blurring is avoided, and the speed of face fusion is improved at the same time.
  • step S240 may be implemented by the following steps:
  • step S441 normalize the pixel values of each pixel in the pure white blurred image of the original face image to obtain a first normalized blurred image;
  • the pixel value of each pixel in the first original face blurred image is divided by the pixel value of the corresponding pixel in the first normalized blur map to obtain the second original face blur map.
  • the pixel values of the pixels in the pure white blurred image are normalized to obtain the first normalized blur image. For example, the pixel value of each pixel in the pure white blurred image is divided by 255, the pure white pixel value (the theoretical highest pixel value), so that the pixel values are normalized, thereby obtaining the first normalized blur image generated for the pure white blurred image of the original face image.
  • the pixel value of each pixel in the first original face blur map is divided by the pixel value of the corresponding pixel in the first normalized blur map to obtain the above-mentioned second original face blur map.
  • step S442 normalize the pixel values of each pixel in the pure white fuzzy map of the target face map to obtain a second normalized fuzzy map;
  • the pixel value of each pixel in the first target face blurred image is divided by the pixel value of the corresponding pixel in the second normalized blur map to obtain the second target face blur map.
  • the same steps as for the pure white blurred image of the original face image are used to obtain the second target face blurred image, which will not be repeated here.
  • A normalized blur image is obtained by normalizing each pixel in the pure white blur image, and the pixel values of the pixels in the face blur image are then divided by the pixel values of the corresponding pixels in the normalized blur image to obtain the second original face blur map and the second target face blur map. Performing the subsequent large-scale numerical operations on normalized values can effectively reduce the amount of calculation.
  • This processing eliminates the black borders in the blurred face images and thus avoids edge color distortion of the fused face, which improves the speed of face fusion while ensuring the fusion quality of the face fusion image.
  • in step S441, normalizing the pixel value of each pixel in the pure white blurred image of the original face image to obtain a first normalized blur image can be achieved through the following steps:
  • the pure white pixel value is the pixel value of a pure white pixel;
  • step S442 the pixel value of each pixel in the pure white blurred image of the target face image is normalized to obtain a second normalized blurred image, which can be achieved through the following steps:
  • the second normalized blur image is obtained by dividing the pixel value of each pixel point in the pure white blur image of the target face image by the pure white pixel value.
  • Table 1 shows the pixel points A and B on the original face map, the pixel point C at the edge of the face, and the pixels D and E outside the edge of the face.
  • three RGB channels are used to represent the pixel value of each pixel.
  • the pixel value of the pure white pixel is used to normalize the pixel value of each pixel in the pure white blur image to obtain the first normalized blur image and the second normalized blur image.
  • the normalization can be completed without complex numerical transformation processing, which improves the speed of face fusion.
  • step S250 may be implemented by the following steps:
  • step S260 may be achieved by the following steps:
  • the pixel value of each pixel in the original face image (for example, the pixel values of the three RGB channels) can be extracted first, and a pixel value matrix of the original face image can be established according to the coordinate position of each pixel in the image. Then, the pixel value of each pixel in the second original face blur map can be extracted, and a pixel value matrix of the second original face blur map established according to the coordinate positions. Then, the two pixel value matrices are divided, that is, each pixel value is divided according to the coordinate position of each pixel in the image, to obtain the original face detail map reflecting the facial features of the original face map.
  • dst = (dst_ori / dst_low) × source_low;
  • dst_low = dst_blur / dstWhite_blur;
  • source_low = source_blur / sourceWhite_blur;
  • where dst represents the final output face fusion map; dst_ori represents the original face map; dst_blur represents the first original face blur map; dstWhite_blur represents the pure white blur map obtained by blurring the pure white image of the original face map; dst_low represents the second original face blur map of the original face map; source_blur represents the first target face blur map; sourceWhite_blur represents the pure white blur map obtained by blurring the pure white image of the target face map; and source_low represents the second target face blur map.
  • face fusion is performed by means of multiplicative decomposition. Compared with the method of additive decomposition, the original face color and facial features can be preserved as much as possible, and the quality of face fusion is better.
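Under the assumption that all maps are single-channel float arrays of the same shape (the values below are illustrative toys, not patent data), the multiplicative pipeline can be sketched as:

```python
import numpy as np

dst_ori          = np.array([[100.0, 110.0], [120.0, 130.0]])  # original face map
dst_blur         = np.array([[ 90.0,  90.0], [ 90.0,  45.0]])  # first original face blur map
dstWhite_blur    = np.array([[255.0, 255.0], [255.0, 127.5]])  # pure white blur map (original)
source_blur      = np.array([[160.0, 160.0], [160.0,  80.0]])  # first target face blur map
sourceWhite_blur = np.array([[255.0, 255.0], [255.0, 127.5]])  # pure white blur map (target)

# Divide out the normalized pure white blur maps to remove the black border.
dst_low    = dst_blur / (dstWhite_blur / 255.0)        # second original face blur map
source_low = source_blur / (sourceWhite_blur / 255.0)  # second target face blur map

# Multiplicative decomposition and fusion: the detail ratio of the original
# face is reapplied onto the target's skin tone.
dst = (dst_ori / dst_low) * source_low
```

Note how the edge pixel (bottom-right), darkened by blurring in both the face and pure white maps, recovers the interior value after the division.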
  • In an exemplary embodiment, step S250 may be implemented by the following steps:
  • In an exemplary embodiment, step S260 may be implemented by the following steps:
  • the pixel value of each pixel point in the original face detail map is added to the pixel value of each pixel point in the second target face blur map to obtain a face fusion map of the target face map.
  • the pixel value of each pixel in the original face image (for example, the pixel values of the three RGB channels) can be extracted first, and a pixel value matrix of the original face image can be established according to the coordinate position of each pixel in the image. Then, the pixel value of each pixel in the second original face blur image can be extracted, and a pixel value matrix of the second original face blur image can be established according to the coordinate position of each pixel in the image. Finally, the two pixel value matrices are subtracted element-wise, that is, each pixel value is subtracted according to its coordinate position in the image, to obtain the original face detail map reflecting the facial features of the original face image.
  • In the case of using additive decomposition to fuse the images, the pixel value matrix of the original face detail map is added to the pixel value matrix of the second target face blur map, and the face fusion map can be generated from the resulting pixel value matrix.
  • dst = (dst_ori - dst_low) + source_low;
  • dst_low = dst_blur / dstWhite_blur;
  • source_low = source_blur / sourceWhite_blur;
  • where the symbols have the same meanings as in the multiplicative formulas above.
  • In this way, face fusion is carried out by means of additive decomposition. Compared with multiplicative decomposition, less computation is required, so face fusion can be completed faster and the efficiency of face fusion is higher.
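The additive variant can be sketched the same way. As above, this is a hypothetical NumPy illustration with an assumed `eps` guard; per the embodiment, the detail map is the original face minus its second (normalized) blur map, and fusion adds the target's second blur map back.

```python
import numpy as np

def additive_fuse(dst_ori, dst_blur, dst_white_blur,
                  source_blur, source_white_blur, eps=1e-6):
    """dst = (dst_ori - dst_low) + source_low: subtract the original face's
    normalized low-frequency layer, then add the target face's layer."""
    dst_low = dst_blur / (dst_white_blur + eps)           # second original face blur map
    source_low = source_blur / (source_white_blur + eps)  # second target face blur map
    return (dst_ori - dst_low) + source_low               # detail + target low frequencies
```

The subtraction and addition replace the division and multiplication of the multiplicative variant, trading some color fidelity for fewer operations per pixel.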
  • Although the steps in FIGS. 2-7 are shown in sequence as indicated by the arrows, these steps are not necessarily executed in that sequence. Unless explicitly stated herein, the execution order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in FIGS. 2-7 may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be executed at different times; their execution order is also not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
  • Fig. 5 is a block diagram of a face fusion apparatus according to an exemplary embodiment. Referring to Fig. 5, the apparatus includes an acquisition unit 502, a face blurring unit 504, a pure white blurring unit 506, a blurring processing unit 508, a detail decomposition unit 510 and a face fusion unit 512.
  • an acquisition unit 502, configured to acquire the original face image and the target face image;
  • the face blurring unit 504 is configured to blur the original face image and the target face image respectively to obtain the first original face blurred image and the first target face blurred image;
  • the pure white blurring unit 506 is configured to blur the pure white face image generated from the original face image to obtain a pure white blurred image of the original face image, and to blur the pure white face image generated from the target face image to obtain a pure white blurred image of the target face image;
  • the blurring processing unit 508 is configured to perform image decomposition on the first original face blurred image based on the pure white blurred image of the original face image to obtain a second original face blurred image, and to perform image decomposition on the first target face blurred image based on the pure white blurred image of the target face image to obtain a second target face blurred image;
  • the detail decomposition unit 510 is configured to perform image decomposition on the original face map based on the second original face blur map to obtain the original face detail map;
  • the face fusion unit 512 is configured to fuse the original face detail image into the second target face blurred image to obtain a face fusion image of the target face image.
  • the blurring processing unit 508 is configured to:
  • the detail decomposition unit 510 is configured to:
  • the face fusion unit 512 is configured to:
  • the detail decomposition unit 510 is configured to:
  • the face fusion unit 512 is configured to:
  • the pixel value of each pixel point in the original face detail map is added to the pixel value of each pixel point in the second target face blur map to obtain a face fusion map of the target face map.
  • the blurring processing unit 508 is configured to:
  • the pure white pixel value is the pixel value of a pure white pixel; the pixel value of each pixel in the pure white blurred image of the target face image is divided by the pure white pixel value to obtain the second normalized blurred image.
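This normalization can be sketched for 8-bit images, where the pure white pixel value is assumed to be 255 (a hypothetical NumPy illustration; the patent does not fix a bit depth):

```python
import numpy as np

PURE_WHITE = 255.0  # assumed pure white pixel value for 8-bit images

def second_normalized_blur(white_blur_u8):
    """Divide each pixel of the target face's pure white blurred image by the
    pure white pixel value, mapping the image into the range [0, 1]."""
    return np.asarray(white_blur_u8, dtype=np.float32) / PURE_WHITE
```

Because the pure-white image starts at the maximum pixel value everywhere, any value below 1.0 in the normalized result measures exactly how much the blur kernel darkened that location.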
  • FIG. 6 is a block diagram of an electronic device 600 for face fusion according to an exemplary embodiment.
  • electronic device 600 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, fitness device, personal digital assistant, or the like.
  • electronic device 600 may include one or more of the following components: processing component 602, memory 604, power supply component 606, multimedia component 608, audio component 610, input/output (I/O) interface 612, sensor component 614 and communication component 616 .
  • the processing component 602 generally controls the overall operation of the electronic device 600, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 602 may include one or more processors 620 to execute instructions to perform all or some of the steps of the methods described above. Additionally, processing component 602 may include one or more modules that facilitate interaction between processing component 602 and other components. For example, processing component 602 may include a multimedia module to facilitate interaction between multimedia component 608 and processing component 602.
  • Memory 604 is configured to store various types of data to support operation at electronic device 600. Examples of such data include instructions for any application or method operating on electronic device 600, contact data, phonebook data, messages, pictures, videos, and the like. Memory 604 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
  • Power supply assembly 606 provides power to various components of electronic device 600 .
  • Power supply component 606 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 600.
  • Multimedia component 608 includes a screen that provides an output interface between the electronic device 600 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP).
  • the screen may be implemented as a touch screen to receive input signals from a user.
  • the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundaries of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action.
  • the multimedia component 608 includes a front-facing camera and/or a rear-facing camera. When the electronic device 600 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data.
  • Each of the front and rear cameras may be a fixed optical lens system or have focusing and optical zoom capability.
  • Audio component 610 is configured to output and/or input audio signals.
  • audio component 610 includes a microphone (MIC) that is configured to receive external audio signals when electronic device 600 is in operating modes, such as calling mode, recording mode, and voice recognition mode.
  • the received audio signal may be further stored in memory 604 or transmitted via communication component 616 .
  • audio component 610 also includes a speaker for outputting audio signals.
  • the I/O interface 612 provides an interface between the processing component 602 and a peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to: home button, volume buttons, start button, and lock button.
  • Sensor assembly 614 includes one or more sensors for providing status assessment of various aspects of electronic device 600 .
  • the sensor assembly 614 can detect the open/closed state of the electronic device 600 and the relative positioning of components, such as the display and keypad of the electronic device 600; the sensor assembly 614 can also detect a change in position of the electronic device 600 or of a component of the electronic device 600, the presence or absence of user contact with the electronic device 600, the orientation or acceleration/deceleration of the electronic device 600, and a change in the temperature of the electronic device 600.
  • Sensor assembly 614 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact.
  • Sensor assembly 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor assembly 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 616 is configured to facilitate wired or wireless communication between electronic device 600 and other devices.
  • Electronic device 600 may access wireless networks based on communication standards, such as WiFi, carrier networks (eg, 2G, 3G, 4G, or 5G), or a combination thereof.
  • the communication component 616 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 616 also includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • electronic device 600 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
  • non-transitory computer readable storage medium including instructions, such as memory 604 including instructions, executable by the processor 620 of the electronic device 600 to perform the above method.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, and the like.
  • a computer program product including a computer program, wherein the computer program, when executed by a processor, completes the above-mentioned face fusion method.


Abstract

A facial fusion method and apparatus, and an electronic device, a storage medium and a program product, which relate to the field of image processing. The facial fusion method comprises: respectively blurring an original facial image and a target facial image to obtain a first original facial blurred image and a first target facial blurred image; performing blurring processing on facial pure-white images respectively generated according to the original facial image and the target facial image, so as to obtain respective pure-white blurred images of the original facial image and the target facial image; performing image decomposition on the first original facial blurred image on the basis of the pure-white blurred image of the original facial image, so as to obtain a second original facial blurred image, and performing image decomposition on the first target facial blurred image on the basis of the pure-white blurred image of the target facial image, so as to obtain a second target facial blurred image; performing image decomposition on the original facial image on the basis of the second original facial blurred image, so as to obtain an original facial detail image; and fusing the original facial detail image with the second target facial blurred image, so as to obtain a facial fused image.

Description

Face Fusion Method and Apparatus

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is based on, and claims priority to, Chinese patent application No. 202110290169.5 filed on March 18, 2021, the entire content of which is incorporated herein by reference.
TECHNICAL FIELD

The present disclosure relates to the field of image processing, and in particular to a face fusion method, apparatus, electronic device, computer-readable storage medium, and computer program product.

BACKGROUND

With the popularity of video applications such as short videos and live streaming, there is a growing demand for applying various special effects to faces in video. Among these, the face-swapping effect is one of the most popular. Face swapping replaces the face in one image with the identity features of another face while keeping other features of the image, such as the background and the person's clothing, unchanged; it is therefore essentially a form of image synthesis.

Current face-swapping effects mainly work by detecting key points of the face contour, determining the face contour from those key points, and then replacing the face with another face according to the contour. Because skin tones may differ between people, replacing only the face region produces edge color distortion at the boundary between the fused face and the unchanged regions. For example, after replacing a lighter-skinned face with a darker-skinned one, there is an obvious color difference between the face and the forehead and neck.

Color transfer techniques apply the overall color of one image to another, so that the resulting image has the shape of the original image but the color of the other image.
SUMMARY OF THE INVENTION

The present disclosure provides a face fusion method, apparatus, electronic device, computer-readable storage medium, and computer program product.

According to a first aspect of the embodiments of the present disclosure, a face fusion method is provided, including:

obtaining an original face image and a target face image;

blurring the original face image and the target face image respectively to obtain a first original face blurred image and a first target face blurred image;

blurring a pure white face image generated from the original face image to obtain a pure white blurred image of the original face image, and blurring a pure white face image generated from the target face image to obtain a pure white blurred image of the target face image;

performing image decomposition on the first original face blurred image based on the pure white blurred image of the original face image to obtain a second original face blurred image, and performing image decomposition on the first target face blurred image based on the pure white blurred image of the target face image to obtain a second target face blurred image;

performing image decomposition on the original face image based on the second original face blurred image to obtain an original face detail image; and

fusing the original face detail image with the second target face blurred image to obtain a face fusion image of the target face image.
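The method steps above can be sketched end to end in NumPy. This is a hypothetical illustration: the patent does not fix a blur kernel, so a separable box blur stands in for it; the multiplicative decomposition variant is shown; and the two face images are assumed to be already aligned and of the same size.

```python
import numpy as np

def box_blur(img, k):
    """Separable box blur; a stand-in for the unspecified blur kernel.
    Zero padding at the borders is what darkens the face edge, which the
    pure-white blurred image below cancels. k must not exceed the image's
    height or width."""
    kernel = np.ones(k) / k
    blur = lambda m: np.convolve(m, kernel, mode='same')
    out = np.apply_along_axis(blur, 0, img)   # blur along columns
    return np.apply_along_axis(blur, 1, out)  # blur along rows

def fuse_faces(original, target, k=5, eps=1e-6):
    """End-to-end sketch of the claimed steps (multiplicative variant)."""
    white_blur = box_blur(np.ones_like(original), k)      # pure white blurred image
    ori_low = box_blur(original, k) / (white_blur + eps)  # second original face blurred image
    tgt_low = box_blur(target, k) / (white_blur + eps)    # second target face blurred image
    detail = original / (ori_low + eps)                   # original face detail image
    return detail * tgt_low                               # face fusion image
```

On a uniform-color face region the white-blur division makes `ori_low` flat all the way to the border, which is exactly the dark-edge cancellation the second blurred images exist to provide.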
In an exemplary embodiment, performing image decomposition on the first original face blurred image based on the pure white blurred image of the original face image to obtain a second original face blurred image includes:

normalizing the pixel value of each pixel in the pure white blurred image of the original face image to obtain a first normalized blurred image; and

dividing the pixel value of each pixel in the first original face blurred image by the pixel value of the corresponding pixel in the first normalized blurred image to obtain the second original face blurred image.

In an exemplary embodiment, normalizing the pixel value of each pixel in the pure white blurred image of the original face image to obtain the first normalized blurred image includes:

dividing the pixel value of each pixel in the pure white blurred image of the original face image by the pure white pixel value to obtain the first normalized blurred image, the pure white pixel value being the pixel value of a pure white pixel.

In an exemplary embodiment, performing image decomposition on the first target face blurred image based on the pure white blurred image of the target face image to obtain a second target face blurred image includes:

normalizing the pixel value of each pixel in the pure white blurred image of the target face image to obtain a second normalized blurred image; and

dividing the pixel value of each pixel in the first target face blurred image by the pixel value of the corresponding pixel in the second normalized blurred image to obtain the second target face blurred image.

In an exemplary embodiment, normalizing the pixel value of each pixel in the pure white blurred image of the target face image to obtain the second normalized blurred image includes:

dividing the pixel value of each pixel in the pure white blurred image of the target face image by the pure white pixel value to obtain the second normalized blurred image.
In an exemplary embodiment, performing image decomposition on the original face image based on the second original face blurred image to obtain the original face detail image includes:

dividing the pixel value of each pixel in the original face image by the pixel value of the corresponding pixel in the second original face blurred image to obtain the original face detail image;

and fusing the original face detail image with the second target face blurred image to obtain the face fusion image of the target face image includes:

multiplying the pixel value of each pixel in the original face detail image by the pixel value of the corresponding pixel in the second target face blurred image to obtain the face fusion image of the target face image.

In an exemplary embodiment, performing image decomposition on the original face image based on the second original face blurred image to obtain the original face detail image includes:

subtracting the pixel value of each pixel in the second original face blurred image from the pixel value of the corresponding pixel in the original face image to obtain the original face detail image;

and fusing the original face detail image with the second target face blurred image to obtain the face fusion image of the target face image includes:

adding the pixel value of each pixel in the original face detail image to the pixel value of the corresponding pixel in the second target face blurred image to obtain the face fusion image of the target face image.
According to a second aspect of the embodiments of the present disclosure, a face fusion apparatus is provided, including:

an acquisition unit, configured to acquire an original face image and a target face image;

a face blurring unit, configured to blur the original face image and the target face image respectively to obtain a first original face blurred image and a first target face blurred image;

a pure white blurring unit, configured to blur a pure white face image generated from the original face image to obtain a pure white blurred image of the original face image, and to blur a pure white face image generated from the target face image to obtain a pure white blurred image of the target face image;

a blurring processing unit, configured to perform image decomposition on the first original face blurred image based on the pure white blurred image of the original face image to obtain a second original face blurred image, and to perform image decomposition on the first target face blurred image based on the pure white blurred image of the target face image to obtain a second target face blurred image;

a detail decomposition unit, configured to perform image decomposition on the original face image based on the second original face blurred image to obtain an original face detail image; and

a face fusion unit, configured to fuse the original face detail image with the second target face blurred image to obtain a face fusion image of the target face image.
In an exemplary embodiment, the blurring processing unit is configured to: normalize the pixel value of each pixel in the pure white blurred image of the original face image to obtain a first normalized blurred image; and divide the pixel value of each pixel in the first original face blurred image by the pixel value of the corresponding pixel in the first normalized blurred image to obtain the second original face blurred image.

In an exemplary embodiment, the blurring processing unit is further configured to divide the pixel value of each pixel in the pure white blurred image of the original face image by the pure white pixel value to obtain the first normalized blurred image, the pure white pixel value being the pixel value of a pure white pixel.

In an exemplary embodiment, the blurring processing unit is configured to: normalize the pixel value of each pixel in the pure white blurred image of the target face image to obtain a second normalized blurred image; and divide the pixel value of each pixel in the first target face blurred image by the pixel value of the corresponding pixel in the second normalized blurred image to obtain the second target face blurred image.

In an exemplary embodiment, the blurring processing unit is further configured to divide the pixel value of each pixel in the pure white blurred image of the target face image by the pure white pixel value to obtain the second normalized blurred image.

In an exemplary embodiment, the detail decomposition unit is configured to divide the pixel value of each pixel in the original face image by the pixel value of the corresponding pixel in the second original face blurred image to obtain the original face detail image; and the face fusion unit is configured to multiply the pixel value of each pixel in the original face detail image by the pixel value of the corresponding pixel in the second target face blurred image to obtain the face fusion image of the target face image.

In an exemplary embodiment, the detail decomposition unit is configured to subtract the pixel value of each pixel in the second original face blurred image from the pixel value of the corresponding pixel in the original face image to obtain the original face detail image; and the face fusion unit is configured to add the pixel value of each pixel in the original face detail image to the pixel value of the corresponding pixel in the second target face blurred image to obtain the face fusion image of the target face image.
According to a third aspect of the embodiments of the present disclosure, an electronic device is provided, including a memory and a processor, the memory storing a computer program, and the processor, when executing the computer program, implementing the face fusion method described in the first aspect or any possible implementation of the first aspect.

According to a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, wherein when instructions in the computer-readable storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the face fusion method described in the first aspect or any implementation of the first aspect.

According to a fifth aspect of the embodiments of the present disclosure, a computer program product is provided, including a computer program which, when executed by a processor, implements the face fusion method described in any implementation of the first aspect.

By blurring the original face image and the target face image respectively, a first original face blurred image and a first target face blurred image, composed of low-frequency signals and retaining the original skin color of each face, are obtained. Pure white face images generated from the original and target face images are then blurred to obtain their respective pure white blurred images, which are used to decompose the first original and first target face blurred images so as to eliminate the dark edges produced at the face boundaries by the blurring process, yielding the second original and second target face blurred images. The original face image is then decomposed using the second original face blurred image to obtain the original face detail image, and finally the original face detail image is fused into the second target face blurred image to obtain the face fusion image of the target face image. In this way, the facial detail features of one face image are fused with the original skin color of another face image in a simple manner, without a large number of color transfer operations, so processing efficiency is high: color distortion caused by the blurring process is avoided, and the speed of face fusion is improved.

It should be understood that the above general description and the following detailed description are exemplary and explanatory only and are not restrictive of the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure; they do not unduly limit the present disclosure.

FIG. 1A shows a schematic diagram of an initial face image before face-swapping processing.

FIG. 1B shows a schematic diagram of a face-swapping effect.

FIG. 2 is a flowchart of a face fusion method according to an exemplary embodiment.

FIG. 3A shows a schematic diagram of a face image to be face-swapped.

FIG. 3B shows a schematic diagram of a face blurred image.

FIG. 3C shows a schematic diagram of a face blurred image with dark edges at the face boundary.

FIG. 3D shows a schematic diagram of a pure white face image.

FIG. 3E shows a schematic diagram of a pure white blurred image.

FIG. 3F shows a schematic diagram of a face detail image.

FIG. 3G shows a schematic diagram of a face fusion image obtained by fusing faces based on high- and low-frequency image signals.

FIG. 4 is a flowchart of the step of processing a face blurred image using a pure white blurred image, according to an exemplary embodiment.

FIG. 5 is a block diagram of a face fusion apparatus according to an exemplary embodiment.

FIG. 6 is a diagram of the internal structure of an electronic device according to an exemplary embodiment.
具体实施方式Detailed description of embodiments
为了使本领域普通人员更好地理解本公开的技术方案,下面将结合附图,对本公开实施例中的技术方案进行清楚、完整地描述。In order to make those skilled in the art better understand the technical solutions of the present disclosure, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
需要说明的是,本公开的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本公开的实施例能够以除了在这里图示或描述的那些以外的顺序实施。以下示例性实施例中所描述的实施方式并不代表与本公开相一致的所有实施方式。It should be noted that the terms "first", "second" and the like in the description and claims of the present disclosure and the above drawings are used to distinguish similar objects, and are not necessarily used to describe a specific sequence or sequence. It is to be understood that the data so used may be interchanged under appropriate circumstances such that the embodiments of the disclosure described herein can be practiced in sequences other than those illustrated or described herein. The implementations described in the illustrative examples below are not intended to represent all implementations consistent with this disclosure.
本公开所提供的人脸融合方法,可以应用于通过终端进行换脸特效编辑的应用环境中。其中,终端可以是但不限于各种个人计算机、笔记本电脑、智能手机、平板电脑和便携式可穿戴设备。本公开所提供的人脸融合方法可以适用于多种需要进行换脸特效编辑的应用场景中。其中一种场景中,用户可以在拍摄视频的情况下,对视频中两个以上的人脸进行换脸特效的编辑,得到换脸后的视频并上传至视频分享平台上,供其他用户观看。另一种场景中,用户可以在进行视频直播的情况下,实时地进行换脸特效的编辑。用户还可以通过终端拍摄人脸,将拍摄得到的人脸与另一图像的人脸进行换脸特效的编辑。用户还可以通过终端对两个人脸进行拍摄,将拍摄得到的两个人脸进行换脸特效的编辑。本领域技术人员可以根据实际情况将本公开提供的人脸融合方法应用于各种进行换脸特效编辑的应用场景中。The face fusion method provided by the present disclosure can be applied to the application environment of editing face-changing special effects through a terminal. Wherein, the terminal may be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers and portable wearable devices. The face fusion method provided by the present disclosure can be applied to various application scenarios that require editing of special effects for changing faces. In one of the scenarios, the user can edit the face-changing special effects for more than two faces in the video while shooting the video, obtain the face-changing video and upload it to the video sharing platform for other users to watch. In another scenario, the user can edit the face-changing special effects in real time under the condition of live video. The user can also photograph a face through the terminal, and edit the face-changing special effect between the photographed face and the face of another image. The user can also shoot two faces through the terminal, and edit the face-changing special effects on the two faces obtained by shooting. Those skilled in the art can apply the face fusion method provided by the present disclosure to various application scenarios for editing face-changing special effects according to the actual situation.
图1A示出一种未经换脸特效处理的初始人脸图像示意图。从图中可见，初始人脸图像中包含有左侧的原始人脸和右侧的目标人脸。针对原始人脸和目标人脸进行换脸特效，得到了图1B所示的换脸效果。然而，左侧人脸的肤色与右侧人脸的肤色存在差异，将左侧人脸覆盖在右侧人脸上后，在新的人脸与原人脸未被覆盖部分之间的交界处上下两侧，产生了如图1B中所标注的区域101所示的颜色差异，即存在边缘颜色失真。通过颜色迁移技术改善边缘，会严重影响换脸效率。FIG. 1A shows a schematic diagram of an initial face image that has not been processed by the face-swapping effect. As can be seen from the figure, the initial face image contains the original face on the left and the target face on the right. A face-swapping effect is applied to the original face and the target face, yielding the result shown in FIG. 1B. However, the skin color of the left face differs from that of the right face; after the left face is overlaid on the right face, a color difference appears on the upper and lower sides of the boundary between the new face and the uncovered part of the original face, as shown in the marked area 101 in FIG. 1B, i.e., there is edge color distortion. Correcting the edges through color migration techniques would severely reduce face-swapping efficiency.
图2是根据一示例性实施例示出的一种人脸融合方法的流程图,如图2所示,本公开的人脸融合方法,包括以下步骤。Fig. 2 is a flow chart of a face fusion method according to an exemplary embodiment. As shown in Fig. 2 , the face fusion method of the present disclosure includes the following steps.
在步骤S210中,获取原始人脸图和目标人脸图。In step S210, the original face map and the target face map are obtained.
其中,人脸图可以为包含有人脸的图像。原始人脸和目标人脸可以为用于进行换脸特效编辑的两个换脸对象。The face map may be an image containing a human face. The original face and the target face can be two face swap objects for face swap effect editing.
在一些实施例中,在不同的应用场景中,终端可以通过相应的不同方式得到上述的原始人脸图和目标人脸图。例如,用户对视频中的人脸进行换脸特效编辑的场景中,用户可以在终端上选取某个视频,并选定视频中需要进行换脸的两个人脸,并提交换脸请求。终端根据换脸请求,提取视频帧中分别包含该两个人脸的图像,分别作为上述的原始人脸图和目标人脸图。本领域技术人员可以根据实际的应用场景,确定获取原始人脸图和目标人脸图的实现手段。In some embodiments, in different application scenarios, the terminal may obtain the above-mentioned original face map and target face map in corresponding different ways. For example, in a scenario where a user edits a face-changing special effect on a face in a video, the user can select a video on the terminal, select two faces in the video that need face-changing, and submit a face-changing request. The terminal extracts images containing the two faces in the video frame respectively according to the face-changing request, as the above-mentioned original face image and target face image respectively. Those skilled in the art can determine the means for obtaining the original face image and the target face image according to the actual application scenario.
图3A示出了一种待进行换脸的人脸图像的示意图。从图中可见,可以从某张图像中提取出包含有将要进行换脸的左侧和右侧的两张人脸图像。为了便于说明,左侧的人脸图像命名为原始人脸图,相应地,右侧的人脸图像命名为目标人脸图。FIG. 3A shows a schematic diagram of a face image to be subjected to face swapping. As can be seen from the figure, two face images containing the left and right sides to be swapped can be extracted from an image. For the convenience of explanation, the face image on the left is named as the original face image, and correspondingly, the face image on the right is named as the target face image.
在步骤S220中,对所述原始人脸图和所述目标人脸图分别进行模糊处理,得到第一原始人脸模糊图和第一目标人脸模糊图。In step S220, blurring is performed on the original face image and the target face image respectively, to obtain a first original face blurred image and a first target face blurred image.
其中，模糊处理是一种用于获取图像中的低频信号、使得图像变得模糊的图像处理方式。常见的模糊处理主要有高斯模糊，一种基于高斯分布的数据平滑技术(data smoothing)。其原理是针对图像中的每个像素取周边像素的平均值，从而使得图像失去细节特征。Blurring is an image processing method used to obtain the low-frequency signals in an image and make the image blurred. Common blurring methods mainly include Gaussian blur, a data smoothing technique based on the Gaussian distribution. Its principle is to take, for each pixel in the image, the average value of the surrounding pixels, so that the image loses its detail features.
实际应用中,可以通过以下的二维高斯函数实现上述的高斯模糊处理:In practical applications, the above-mentioned Gaussian blurring can be achieved by the following two-dimensional Gaussian function:
G(x, y) = \frac{1}{2\pi\sigma^{2}} e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}}
其中,(x,y)为某个像素点的坐标,G为像素点(x,y)经过模糊后的像素值,σ代表平滑程度。Among them, (x, y) is the coordinate of a certain pixel, G is the blurred pixel value of the pixel (x, y), and σ represents the degree of smoothness.
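作为示意性草图（并非对本公开的限定实现），以下Python代码按上述二维高斯函数在离散网格上采样并归一化，得到可用于模糊处理的卷积核，函数名仅为示例。As an illustrative sketch (not a limiting implementation of the present disclosure), the following Python code samples the above two-dimensional Gaussian function on a discrete grid and normalizes it into a convolution kernel usable for blurring; the function name is chosen here for illustration only.

```python
import math

def gaussian_kernel(radius, sigma):
    """Sample G(x, y) = exp(-(x^2 + y^2) / (2*sigma^2)) / (2*pi*sigma^2)
    on a (2*radius+1) x (2*radius+1) grid, then normalize the weights so
    they sum to 1, giving a discrete blur kernel."""
    size = 2 * radius + 1
    kernel = [[0.0] * size for _ in range(size)]
    for y in range(-radius, radius + 1):
        for x in range(-radius, radius + 1):
            g = math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
            kernel[y + radius][x + radius] = g / (2.0 * math.pi * sigma * sigma)
    total = sum(sum(row) for row in kernel)
    return [[v / total for v in row] for row in kernel]

k = gaussian_kernel(1, 1.0)
# the center weight is the largest; the weights sum to 1
```

核半径与σ越大，模糊程度越强。The larger the radius and σ, the stronger the blur.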
其中,人脸模糊图可以为由低频信号组成的、用于表达人脸基本颜色的图像。需要说明的是,从图像亮度或灰度变化程度的角度而言,图像中包含有高频信号和低频信号。图像的低频信号代表着图像中亮度或灰度变化缓慢的区域,即图像中颜色变化较少、较为平坦的区域,低频信号通常描述了图像的主要内容。而高频信号代表着图像中亮度或灰度变化剧烈的区域,即图像中颜色变化较大、展示边缘轮廓、细节特征的区域。The face blur map may be an image composed of low-frequency signals and used to express the basic color of the face. It should be noted that, from the perspective of the degree of change in image brightness or grayscale, the image contains high-frequency signals and low-frequency signals. The low-frequency signal of the image represents the area where the brightness or grayscale changes slowly in the image, that is, the area where the color changes less and is relatively flat in the image, and the low-frequency signal usually describes the main content of the image. The high-frequency signal represents the area where the brightness or grayscale changes sharply in the image, that is, the area in the image where the color changes greatly, and the edge contour and detail features are displayed.
在一些实施例中，终端可以分别对原始人脸图和目标人脸图进行高斯模糊，经过高斯模糊之后，由于图像中各个颜色通道的颜色抖动频率较低，因此，模糊后的图像由低频信号所组成，该低频信号可以反映出人脸的基础肤色。经过高斯模糊后的人脸图像，消除了代表人脸的五官细节特征的高频信号，但保留了代表人脸颜色、明亮对比较为明显的低频信号的图像，该图像即为上述的人脸模糊图。In some embodiments, the terminal may perform Gaussian blurring on the original face image and the target face image respectively. After Gaussian blurring, since the color jitter frequency of each color channel in the image is low, the blurred image is composed of low-frequency signals, and these low-frequency signals reflect the basic skin color of the face. The Gaussian-blurred face image eliminates the high-frequency signals representing the facial-feature details, but retains an image of the low-frequency signals representing the face color and its salient light-dark contrast; this image is the above-mentioned face blur map.
图3B示出了一种人脸模糊图的示意图。从图中可见,对图3A中的原始人脸图和目标人脸图分别进行高斯模糊后,得到了图3B中左右两张模糊的人脸,即第一原始人脸模糊图和第一目标人脸模糊图。该模糊的图像中各个像素的颜色、亮度变化频率较低,均为低频信号。通过对人脸图像进行模糊,除去了代表人脸五官细节特征的高频信号,得到由低频信号组成的、代表人脸中颜色、明亮对比较为明显的图像,作为人脸模糊图。FIG. 3B shows a schematic diagram of a face blur map. As can be seen from the figure, after Gaussian blurring is performed on the original face image and the target face image in Fig. 3A respectively, two blurred faces on the left and right in Fig. 3B are obtained, namely the first original face blurred image and the first target. Blurred human face. The color and brightness of each pixel in the blurred image have a low frequency of change and are all low-frequency signals. By blurring the face image, the high-frequency signals representing the detailed features of the facial features of the face are removed, and an image composed of low-frequency signals, representing the colors in the face and with obvious bright contrast, is obtained as a face blur map.
在步骤S230中，对根据所述原始人脸图所生成的人脸纯白图进行模糊处理，得到所述原始人脸图的纯白模糊图，以及，对根据所述目标人脸图所生成的人脸纯白图进行模糊处理，得到所述目标人脸图的纯白模糊图。In step S230, the pure white face image generated from the original face image is blurred to obtain a pure white blur image of the original face image, and the pure white face image generated from the target face image is blurred to obtain a pure white blur image of the target face image.
需要说明的是，直接采用原始人脸图除以第一原始人脸模糊图以得到原始人脸细节图，然后再将原始人脸细节图与第一目标人脸模糊图进行相乘以实现人脸融合，可能会导致融合人脸的边缘颜色失真。经申请人深入研究发现，虽然通过高斯模糊的方法可以节省颜色迁移的计算量，然而，由于高斯模糊具有一定的模糊半径，在模糊的情况下，会在人脸范围之外生成像素值趋于0的像素点，从而在人脸边缘的位置上形成一圈黑边，如图3C所示的人脸模糊图的人脸边缘存在黑边的示意图，第一原始人脸模糊图的人脸边缘上存在有一圈围绕人脸的黑边。因此，基于该带黑边的人脸模糊图进行融合，会导致最终的融合人脸的边缘颜色失真。因此，可以通过引入纯白图像的方法以避免融合人脸的边缘颜色失真。It should be noted that directly dividing the original face image by the first original face blur image to obtain the original face detail image, and then multiplying the original face detail image by the first target face blur image to achieve face fusion, may cause color distortion at the edges of the fused face. Through in-depth research, the applicant found that although the Gaussian blur method saves the computation of color migration, the Gaussian blur has a certain blur radius; during blurring, pixels with values approaching 0 are generated outside the face region, forming a ring of black fringe at the face edge. FIG. 3C is a schematic diagram of such black fringe at the face edge of a face blur image: a ring of black fringe surrounds the face in the first original face blur image. Fusion based on this fringed face blur image therefore causes color distortion at the edges of the final fused face. This edge color distortion can be avoided by introducing a pure white image.
在一些实施例中,终端可以首先根据原始人脸图生成相应的人脸纯白图,以及,根据目标人脸图生成相应的人脸纯白图。In some embodiments, the terminal may first generate a corresponding pure-white face image according to the original face image, and generate a corresponding pure-white face image according to the target face image.
图3D示出了一种人脸纯白图的示意图。从图中可见，右侧的图像为根据左侧的原始人脸图生成的人脸纯白图，该人脸纯白图具有由灰度相同的纯白色所填充的纯色区域，纯色区域的边缘在形状、尺寸等特征上，均与左侧的原始人脸图像中的人脸匹配。FIG. 3D shows a schematic diagram of a pure white face image. As can be seen from the figure, the image on the right is the pure white face image generated from the original face image on the left. The pure white face image has a solid-color region filled with pure white of uniform grayscale, and the edge of the solid-color region matches the face in the original face image on the left in terms of shape, size and other features.
然后,分别对原始人脸图和目标人脸图各自的人脸纯白图进行模糊处理,得到原始人脸图的纯白模糊图和目标人脸图的纯白模糊图。Then, the pure white face images of the original face image and the target face image are respectively blurred to obtain the pure white blurred image of the original face image and the pure white blurred image of the target face image.
图3E示出了一种纯白模糊图像的示意图。从图中可见,对左侧的人脸纯白图进行高斯模糊,得到右侧的纯白模糊图,纯白模糊图中的边缘被模糊化。Figure 3E shows a schematic diagram of a pure white blurred image. It can be seen from the figure that Gaussian blurring is performed on the pure white image of the face on the left, and the pure white blurred image on the right is obtained, and the edges in the pure white blurred image are blurred.
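下述草图用一维盒式模糊代替高斯模糊，演示纯白图经模糊后边缘被模糊化、边缘外出现衰减像素的效果（假设图像外按0填充）；`box_blur_1d`仅为假想的替代实现，并非本公开的方法本身。The sketch below substitutes a 1-D box blur for the Gaussian blur to show how the edges of the pure white image are softened after blurring, with falloff pixels appearing beyond the edge (assuming zero padding outside the image); `box_blur_1d` is a hypothetical stand-in, not the method of the disclosure itself.

```python
def box_blur_1d(row, radius=1):
    """Minimal 1-D box blur standing in for the Gaussian blur of step S230;
    samples outside the image are treated as 0 (zero padding)."""
    n = len(row)
    out = []
    for i in range(n):
        window = [row[j] if 0 <= j < n else 0
                  for j in range(i - radius, i + radius + 1)]
        out.append(sum(window) / (2 * radius + 1))
    return out

# one row of a pure white face mask: 255 inside the face, 0 outside
mask_row = [0, 0, 255, 255, 255, 0, 0]
blurred_row = box_blur_1d(mask_row)
# the interior stays 255 while the mask edges fall off smoothly:
# [0.0, 85.0, 170.0, 255.0, 170.0, 85.0, 0.0]
```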
在步骤S240中,基于所述原始人脸图的纯白模糊图,对所述第一原始人脸模糊图进行图像分解,得到第二原始人脸模糊图,以及,基于所述目标人脸图的纯白模糊图,对所述第一目标人脸模糊图进行图像分解,得到第二目标人脸模糊图。In step S240, based on the pure white blurred image of the original face image, image decomposition is performed on the first original face blurred image to obtain a second original face blurred image, and, based on the target face image The pure white blurred image of the first target face is decomposed to obtain the second target face blurred image.
在一些实施例中，终端可以首先分别针对原始人脸图和目标人脸图各自的纯白模糊图中各个像素点的像素值进行归一化处理，得到原始人脸图和目标人脸图各自的归一化模糊图，然后，将原始人脸图和目标人脸图各自的人脸模糊图中各个像素点的像素值除以相应的归一化模糊图中各个像素点的像素值，得到上述的第二原始人脸模糊图和第二目标人脸模糊图。In some embodiments, the terminal may first normalize the pixel values of each pixel in the respective pure white blur images of the original face image and the target face image to obtain respective normalized blur images, and then divide the pixel value of each pixel in the respective face blur images of the original face image and the target face image by the pixel value of the corresponding pixel in the corresponding normalized blur image, so as to obtain the above second original face blur image and second target face blur image.
通过利用纯白模糊图对第一人脸模糊图进行图像分解以得到第二人脸模糊图，可以将原来的人脸模糊图中人脸边缘外的像素点(黑边)的像素值改变，使得其像素值变更为与人脸边缘上的像素点的像素值接近的像素值。从而，使得人脸模糊图中人脸边缘的黑边消失，经过后续的人脸融合处理后，避免融合人脸的边缘颜色失真。By performing image decomposition on the first face blur image with the pure white blur image to obtain the second face blur image, the pixel values of the pixels outside the face edge (the black fringe) in the original face blur image are changed to values close to those of the pixels on the face edge. The black fringe at the face edge in the face blur image thus disappears, and edge color distortion of the fused face is avoided after the subsequent face fusion processing.
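以下草图演示该消除黑边的除法：将模糊人脸像素除以归一化后的纯白模糊像素（示例数值与下文表2和表4一致），辅助函数名仅为示例。The following sketch demonstrates the fringe-removing division: each blurred face pixel is divided by the normalized pure white blur pixel (the sample values are consistent with Tables 2 and 4 below); the helper name is illustrative only.

```python
def remove_dark_fringe(face_blur_row, mask_blur_row, white=255.0, eps=1e-6):
    """Divide each blurred face pixel by the normalized blurred white-mask
    pixel (mask / 255), restoring edge pixels darkened by the blur."""
    out = []
    for face_px, mask_px in zip(face_blur_row, mask_blur_row):
        weight = mask_px / white           # close to 1 inside the face
        out.append(face_px / weight if weight > eps else 0.0)
    return out

# R-channel values at pixels C, D, E from the blurred face (Table 2)
# and the blurred pure white mask (Table 4):
restored = remove_dark_fringe([80, 64, 46], [127, 102, 73])
# all three pixels come back to roughly the on-face edge value 160
```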
在步骤S250中,基于所述第二原始人脸模糊图,对所述原始人脸图进行图像分解,得到原始人脸细节图。In step S250, based on the second original face blurred image, image decomposition is performed on the original face image to obtain an original face detail image.
其中,人脸细节图可以为由高频信号组成的、用于表达人脸五官细节特征的图像。The face detail map may be an image composed of high-frequency signals and used to express the detailed features of the facial features of the face.
在一些实施例中，终端可以利用第二原始人脸模糊图对原始人脸图进行图像分解，以得到由高频信号组成的、用于表达人脸五官细节特征的图像，作为上述的原始人脸细节图。In some embodiments, the terminal may perform image decomposition on the original face image by using the second original face blur image, so as to obtain an image that is composed of high-frequency signals and expresses the detailed features of the facial features, as the above-mentioned original face detail image.
需要说明的是,通常图像可以由细节图层和模糊图层组成,在图像上分解掉模糊图层,即可得到细节图层。图像分解的实施方式可以有多种,例如,可以基于乘法分解或者加法分解的方式,从原始图像上去除模糊图层以得到细节图层。在一些实施例中,通过加法分解的方式进行图像分解,可以提取人脸图像中各个像素点的灰度形成一个灰度矩阵,提取人脸模糊图层中各个像素点的灰度形成另一个灰度矩阵,将两个灰度矩阵对位相减,所得的灰度矩阵,即可构成人脸细节图层。It should be noted that usually an image can be composed of a detail layer and a blur layer, and the detail layer can be obtained by decomposing the blur layer on the image. There can be various implementations of image decomposition, for example, the blur layer can be removed from the original image to obtain the detail layer based on multiplication decomposition or additive decomposition. In some embodiments, image decomposition is performed by means of additive decomposition, the grayscale of each pixel in the face image can be extracted to form a grayscale matrix, and the grayscale of each pixel in the face blur layer can be extracted to form another grayscale By subtracting the two grayscale matrices in alignment, the resulting grayscale matrix can constitute the face detail layer.
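上述加法分解与加法融合可草拟如下（在小的灰度矩阵上逐元素运算，示例矩阵为虚构数值）。The additive decomposition and additive fusion described above can be sketched as follows, operating element-wise on small grayscale matrices; the sample matrices are made up for illustration.

```python
def additive_detail(image, blurred):
    """Additive decomposition: detail layer = image - blur layer,
    computed element-wise on grayscale matrices."""
    return [[i - b for i, b in zip(ir, br)] for ir, br in zip(image, blurred)]

def additive_fuse(detail, blurred):
    """Additive fusion: image = detail layer + blur layer."""
    return [[d + b for d, b in zip(dr, br)] for dr, br in zip(detail, blurred)]

image   = [[120, 130], [140, 150]]
blurred = [[125, 125], [145, 145]]
detail  = additive_detail(image, blurred)   # [[-5, 5], [-5, 5]]
# fusing the detail back onto the same blur layer recovers the image
```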
图3F示出了一种人脸细节图的示意图。从图中可见,通过除去人脸模糊图而分解出的人脸细节图,保留了人脸中的眉毛、眼睛、鼻子、嘴巴等五官的纹理、轮廓的细节特征,但已除去了人脸原来的肤色。FIG. 3F shows a schematic diagram of a face detail map. It can be seen from the figure that the detailed face image decomposed by removing the blurred image of the face retains the texture and contour details of the facial features such as eyebrows, eyes, nose, mouth, etc., but the original facial features of the face have been removed. complexion.
在步骤S260中,将所述原始人脸细节图融合至所述第二目标人脸模糊图,得到所述目标人脸图的人脸融合图。In step S260, the original face detail image is fused to the second target face fuzzy image to obtain a face fusion image of the target face image.
其中,人脸融合图像可以为将一个人脸的五官细节特征与另一人脸中除五官细节外的人脸特征进行融合后的图像。The face fusion image may be an image obtained by merging the facial features of one face with the facial features of another face except the facial features.
在一些实施例中，终端可以基于乘法融合或者加法融合的方式，将原始人脸细节图与第二目标人脸模糊图融合为上述的人脸融合图。例如，通过加法融合的方式进行图像融合，可以将原始人脸细节图中各个像素点的灰度所形成的灰度矩阵，与第二目标人脸模糊图中各个像素点的灰度所形成的灰度矩阵进行相加，所得的灰度矩阵，即可构成上述的人脸融合图。In some embodiments, the terminal may fuse the original face detail image and the second target face blur image into the above face fusion image by multiplicative fusion or additive fusion. For example, with additive fusion, the grayscale matrix formed by the grayscale values of the pixels in the original face detail image is added to the grayscale matrix formed by the grayscale values of the pixels in the second target face blur image, and the resulting grayscale matrix constitutes the above face fusion image.
图3G示出了一种基于图像高低频信号进行人脸融合的人脸融合图的示意图。从图中可见，通过将原始人脸细节图与第二目标人脸模糊图进行融合，所得到的人脸融合图中，右侧的人脸保留了原有的目标人脸的基础肤色，但融合了左侧原始人脸图的五官细节特征。即使肤色存在差异，由于融合过程中是在人脸的原有肤色的基础上融合另一人脸的五官细节特征，因此，将左侧人脸融合至右侧人脸后，图中区域301并未出现颜色差异，不存在边缘颜色失真。而且，通过利用纯白模糊图消除了人脸边缘在进行模糊处理后产生的黑边，人脸融合图中左右两侧的人脸均没出现颜色失真。FIG. 3G shows a schematic diagram of a face fusion image obtained by performing face fusion based on the high-frequency and low-frequency signals of the images. As can be seen from the figure, in the face fusion image obtained by fusing the original face detail image with the second target face blur image, the face on the right retains the basic skin color of the original target face while incorporating the facial-feature details of the original face image on the left. Even though the skin colors differ, the facial-feature details of one face are fused on top of the other face's original skin color during fusion, so after the left face is fused into the right face, no color difference appears in area 301 in the figure, i.e., there is no edge color distortion. Moreover, since the pure white blur images eliminated the black fringe produced by blurring the face edges, neither the left nor the right face in the face fusion image shows color distortion.
需要说明的是，在进行换脸的情况下，还需要通过上述的人脸融合方法将目标人脸图像的五官细节特征融合至原始人脸图中，得到另一个人脸的人脸融合图。即，一个完整的换脸特效处理，需要通过上述的人脸融合方法进行至少两次的人脸融合。由于对另一人脸的融合过程与上述实施例相似，仅仅是融合对象不同，本领域技术人员根据上述的人脸融合方法即可明确得知完整的人脸换脸特效处理方法，在此不再赘述。It should be noted that, in the case of face swapping, the facial-feature details of the target face image also need to be fused into the original face image through the above face fusion method to obtain the face fusion image of the other face. That is, a complete face-swapping effect requires at least two face fusions through the above face fusion method. Since the fusion process for the other face is similar to the above embodiment, only with the fusion objects exchanged, those skilled in the art can clearly derive the complete face-swapping effect processing method from the above face fusion method, which is not repeated here.
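在上述假设下，步骤S220至S260的乘性变体可整体草拟如下；`blur`与`make_white_mask`为调用方提供的假想辅助函数，并非本公开规定的接口。Under the stated assumptions, the multiplicative variant of steps S220 to S260 can be sketched end to end as follows; `blur` and `make_white_mask` are hypothetical helpers supplied by the caller, not interfaces prescribed by the present disclosure.

```python
def fuse_faces(src, dst, blur, make_white_mask, eps=1e-6):
    """Sketch of steps S220-S260 (multiplicative variant). src/dst are
    flattened float images with values in [0, 255]; blur is any low-pass
    filter and make_white_mask builds the pure-white face mask."""
    def second_blur(face):
        face_blur = blur(face)                    # S220: first blur map
        mask_blur = blur(make_white_mask(face))   # S230: pure white blur map
        # S240: divide by the normalized white blur to remove the fringe
        return [fb / max(mb / 255.0, eps)
                for fb, mb in zip(face_blur, mask_blur)]
    src_blur2 = second_blur(src)                  # second original blur map
    dst_blur2 = second_blur(dst)                  # second target blur map
    detail = [s / max(sb, eps) for s, sb in zip(src, src_blur2)]   # S250
    return [d * db for d, db in zip(detail, dst_blur2)]            # S260

# sanity check: with an identity "blur" and an all-white mask, the source
# detail layer is all ones, so the fused result equals the target face
identity = lambda img: list(img)
all_white = lambda img: [255.0] * len(img)
fused = fuse_faces([10.0, 20.0], [30.0, 40.0], identity, all_white)
```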
上述的人脸融合方法中，通过分别对原始人脸图和目标人脸图进行模糊处理，得到了由低频信号构成的、保留有人脸原有肤色的第一原始人脸模糊图和第一目标人脸模糊图，再对根据原始人脸图和目标人脸图生成的人脸纯白图进行模糊处理得到各自的纯白模糊图，利用该纯白模糊图对第一原始人脸模糊图和第一目标人脸模糊图进行图像分解，以消除第一原始人脸模糊图和第一目标人脸模糊图中人脸边缘的由于模糊处理而产生的黑边，得到第二原始人脸模糊图和第二目标人脸模糊图，然后通过第二原始人脸模糊图对原始人脸图进行图像分解得到原始人脸细节图，最后将原始人脸细节图融合至第二目标人脸模糊图，得到了目标人脸图的人脸融合图。由此，通过简单的方式即实现了对人脸图像中的五官细节特征和另一人脸图像的原有肤色的融合，而无须进行大量的颜色迁移运算，处理效率较高。因此，避免了由于模糊处理而导致的颜色失真，同时提升了人脸融合的速度。In the above face fusion method, the original face image and the target face image are blurred respectively to obtain a first original face blur image and a first target face blur image, each composed of low-frequency signals and retaining the original skin color of the face. The pure white face images generated from the original face image and the target face image are then blurred to obtain respective pure white blur images, and these pure white blur images are used to perform image decomposition on the first original face blur image and the first target face blur image, so as to eliminate the black fringe produced at the face edges by the blurring, giving a second original face blur image and a second target face blur image. The original face image is then decomposed using the second original face blur image to obtain an original face detail image, and finally the original face detail image is fused into the second target face blur image to obtain the face fusion image of the target face image. In this way, the facial-feature details of one face image are fused with the original skin color of another face image in a simple manner, without a large number of color migration operations, so the processing efficiency is high. Color distortion caused by the blurring is thereby avoided, and the speed of face fusion is improved at the same time.
在一示例性实施例中,如图4所示,在步骤S240中,可以通过以下步骤实现:In an exemplary embodiment, as shown in FIG. 4 , in step S240, the following steps may be used:
在步骤S441中，将所述原始人脸图的纯白模糊图中各个像素点的像素值进行归一化处理，得到第一归一化模糊图；将所述第一原始人脸模糊图中各个像素点的像素值除以所述第一归一化模糊图中各个像素点的像素值，得到所述第二原始人脸模糊图。In step S441, the pixel values of each pixel in the pure white blur image of the original face image are normalized to obtain a first normalized blur image; the pixel value of each pixel in the first original face blur image is divided by the pixel value of each pixel in the first normalized blur image to obtain the second original face blur image.
在一些实施例中，在得到原始人脸图的纯白模糊图后，将纯白模糊图中各个像素点的像素值进行归一化处理，得到第一归一化模糊图。例如，将纯白模糊图中各个像素点的像素值除以纯白像素点的像素值255(纯白像素点的像素值为理论上最高值)，从而将纯白模糊图中各个像素点的像素值进行归一化，由此得到了针对原始人脸图的纯白模糊图所生成的第一归一化模糊图。In some embodiments, after the pure white blur image of the original face image is obtained, the pixel values of each pixel in the pure white blur image are normalized to obtain the first normalized blur image. For example, the pixel value of each pixel in the pure white blur image is divided by 255, the pixel value of a pure white pixel (the theoretical maximum pixel value), thereby normalizing the pixel values of each pixel in the pure white blur image and obtaining the first normalized blur image generated from the pure white blur image of the original face image.
然后,将第一原始人脸模糊图中各个像素点的像素值除以第一归一化模糊图中各个像素点的像素值,得到上述的第二原始人脸模糊图。Then, the pixel value of each pixel in the first original face blur map is divided by the pixel value of each pixel in the first normalized blur map to obtain the above-mentioned second original face blur map.
在步骤S442中，将所述目标人脸图的纯白模糊图中各个像素点的像素值进行归一化处理，得到第二归一化模糊图；将所述第一目标人脸模糊图中各个像素点的像素值除以所述第二归一化模糊图中各个像素点的像素值，得到所述第二目标人脸模糊图。In step S442, the pixel values of each pixel in the pure white blur image of the target face image are normalized to obtain a second normalized blur image; the pixel value of each pixel in the first target face blur image is divided by the pixel value of each pixel in the second normalized blur image to obtain the second target face blur image.
在一些实施例中，在得到目标人脸图的纯白模糊图后，采用与上述原始人脸图相同的步骤，可以得到第二目标人脸模糊图，在此不再赘述。In some embodiments, after the pure white blur image of the target face image is obtained, the second target face blur image can be obtained by the same steps as those used for the original face image, which will not be repeated here.
上述的人脸融合方法中，基于对纯白模糊图中各个像素点进行归一化处理得到归一化模糊图，再将人脸模糊图的像素点的像素值除以归一化模糊图中各个像素点的像素值以得到第二原始人脸模糊图和第二目标人脸模糊图，基于归一化后的数值进行后续大规模数值运算可以有效降低运算量，由此，通过简单的运算处理即可消除人脸模糊图中的黑边以避免融合人脸的边缘颜色失真，在保证人脸融合图的融合质量的同时提升了人脸融合的速度。In the above face fusion method, a normalized blur image is obtained by normalizing each pixel in the pure white blur image, and the pixel values of the pixels in the face blur images are then divided by the pixel values of the pixels in the normalized blur image to obtain the second original face blur image and the second target face blur image. Performing subsequent large-scale numerical operations on normalized values effectively reduces the amount of computation; thus, simple arithmetic suffices to eliminate the black fringe in the face blur images and avoid edge color distortion of the fused face, improving the speed of face fusion while ensuring the quality of the face fusion image.
在一示例性实施例中,在步骤S441中的将所述原始人脸图的纯白模糊图中各个像素点的像素值进行归一化处理,得到第一归一化模糊图,可以通过以下步骤实现:In an exemplary embodiment, in step S441, the pixel value of each pixel in the pure white blur image of the original face image is normalized to obtain a first normalized blur image, which can be obtained by the following method: Steps to achieve:
将所述原始人脸图的纯白模糊图中各个像素点的像素值除以纯白像素值,得到所述第一归一化模糊图;所述纯白像素值为纯白像素点的像素值;Divide the pixel value of each pixel in the pure white fuzzy image of the original face image by the pure white pixel value to obtain the first normalized fuzzy image; the pure white pixel value is the pixel value of the pure white pixel ;
在步骤S442中的将所述目标人脸图的纯白模糊图中各个像素点的像素值进行归一化处理,得到第二归一化模糊图,可以通过以下步骤实现:In step S442, the pixel value of each pixel in the pure white blurred image of the target face image is normalized to obtain a second normalized blurred image, which can be achieved through the following steps:
将所述目标人脸图的纯白模糊图中各个像素点的像素值除以所述纯白像素值,得到所述第二归一化模糊图。The second normalized blur image is obtained by dividing the pixel value of each pixel point in the pure white blur image of the target face image by the pure white pixel value.
表1示出了原始人脸图上的像素点A、B、处于人脸边缘的像素点C和人脸边缘之外的像素点D和E,采用RGB三通道表示各个像素点的像素值。Table 1 shows the pixel points A and B on the original face map, the pixel points C at the edge of the face, and the pixels D and E outside the edge of the face. RGB three channels are used to represent the pixel value of each pixel point.
      A     B     C     D     E
R    204   206   160    /     /
G    148   154   108    /     /
B    120   132    86    /     /
表1 Table 1
对原始人脸图进行模糊处理后,得到如下表2的第一原始人脸模糊图的像素点数据:After the original face image is blurred, the pixel point data of the first original face blurred image in Table 2 is obtained:
      A     B     C     D     E
R    102   103    80    64    46
G     74    74    54    43    31
B     60    60    43    34    25
表2 Table 2
从表2中可见，在进行高斯模糊的情况下，处于像素点C的模糊半径内的像素点D和E均被赋予了一定的像素值(通常是趋近于0)，该像素点D和E即为模糊人脸中人脸边缘处的黑边。通过引入纯白模糊图可以消除该黑边。表3示出了根据原始人脸图所生成的纯白人脸图的像素点数据：It can be seen from Table 2 that, under Gaussian blurring, the pixels D and E within the blur radius of pixel C are both assigned certain pixel values (usually approaching 0); these pixels D and E constitute the black fringe at the face edge in the blurred face. This black fringe can be eliminated by introducing a pure white blur image. Table 3 shows the pixel data of the pure white face image generated from the original face image:
      A     B     C     D     E
R    255   255   255    /     /
G    255   255   255    /     /
B    255   255   255    /     /
表3 Table 3
对纯白人脸图中各个像素点进行高斯模糊,得到了表4所示的纯白模糊图的像素点数据:Gaussian blur is performed on each pixel in the pure white face image, and the pixel data of the pure white blurred image shown in Table 4 is obtained:
      A     B     C     D     E
R    127   127   127   102    73
G    127   127   127   102    73
B    127   127   127   102    73
表4 Table 4
下一步，对表4中所示的纯白模糊图中各个像素点的像素值进行归一化处理，即，各个像素点除以纯白像素点的像素值255，得到如下表5的归一化模糊图的像素点数据：Next, the pixel values of each pixel in the pure white blur image shown in Table 4 are normalized, that is, each pixel value is divided by 255, the pixel value of a pure white pixel, giving the pixel data of the normalized blur image shown in Table 5 below:
        A       B       C       D       E
R    0.498   0.498   0.498   0.4     0.286
G    0.498   0.498   0.498   0.4     0.286
B    0.498   0.498   0.498   0.4     0.286
表5 Table 5
最后，将表2所示的第一原始人脸模糊图的像素点的像素值，除以表5所示的归一化模糊图的像素点的像素值，得到如下表6所示的第二原始人脸模糊图的像素点数据：Finally, the pixel values of the pixels in the first original face blur image shown in Table 2 are divided by the pixel values of the pixels in the normalized blur image shown in Table 5, giving the pixel data of the second original face blur image shown in Table 6 below:
      A     B     C     D     E
R    204   206   160   160   161
G    148   154   108   107   108
B    120   132    86    85    87
表6 Table 6
从表6中的第二原始人脸模糊图的像素点数据可见，第二原始人脸模糊图的像素点D和E的像素值已接近于像素点C的像素值，黑边被转变为与人脸边缘的像素值相似的像素点，从而通过引入纯白模糊图消除了由于高斯模糊所产生的黑边。As can be seen from the pixel data of the second original face blur image in Table 6, the pixel values of pixels D and E are now close to that of pixel C; the black fringe has been turned into pixels whose values resemble those at the face edge, so the black fringe produced by the Gaussian blur is eliminated by introducing the pure white blur image.
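以下草图用表2与表4的R通道数值重现表5和表6的计算，结果与表6在取整误差内一致。The sketch below reproduces the computation of Tables 5 and 6 from the R-channel values of Tables 2 and 4; the results agree with Table 6 to within rounding.

```python
# R-channel values for pixels A-E, taken from Table 2 (first original
# face blur image) and Table 4 (blurred pure white image):
first_blur_r = [102, 103, 80, 64, 46]
white_blur_r = [127, 127, 127, 102, 73]

# Table 5: normalize the white blur by the pure white pixel value 255
normalized = [w / 255 for w in white_blur_r]

# Table 6: divide the first blur map by the normalized white blur
second_blur_r = [b / n for b, n in zip(first_blur_r, normalized)]
# pixels C, D and E all land near 160: the black fringe is gone
```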
上述的人脸融合方法中,采用纯白像素点的像素值对纯白模糊图中各个像素点的像素值进行归一化处理以得到第一归一化模糊图和第二归一化模糊图,无须经过复杂的数值变换处理即可完成归一化,提升了人脸融合的速度。In the above-mentioned face fusion method, the pixel value of the pure white pixel is used to normalize the pixel value of each pixel in the pure white blur image to obtain the first normalized blur image and the second normalized blur image. , the normalization can be completed without complex numerical transformation processing, which improves the speed of face fusion.
在一示例性实施例中,在步骤S250中,可以通过以下步骤实现:In an exemplary embodiment, in step S250, the following steps may be used:
将所述原始人脸图中各个像素点的像素值除以所述第二原始人脸模糊图中各个像素点的像素值,得到所述原始人脸细节图;dividing the pixel value of each pixel in the original face image by the pixel value of each pixel in the second original face blur image to obtain the original face detail image;
在步骤S260中,可以通过以下步骤实现:In step S260, it can be achieved by the following steps:
将所述原始人脸细节图中各个像素点的像素值乘以所述第二目标人脸模糊图中各个像素点的像素值,得到所述目标人脸图的人脸融合图。Multiply the pixel value of each pixel in the original face detail map by the pixel value of each pixel in the second target face blur map to obtain a face fusion map of the target face map.
在一些实施例中，在利用乘性分解的方式分解图像的情况下，可以首先提取出原始人脸图像中各个像素点的像素值（例如RGB三通道的像素值），根据各个像素点在图像中所处的坐标位置，建立原始人脸图像的像素值矩阵。然后，可以提取出第二原始人脸模糊图中各个像素点的像素值，根据各个像素点在图像中所处的坐标位置，建立第二原始人脸模糊图的像素值矩阵。然后将两个像素值矩阵相除，即，根据各个像素点在图像中所处的坐标位置将各个像素值分别进行相除运算，得到反映原始人脸图的五官细节特征的原始人脸细节图。In some embodiments, when the image is decomposed multiplicatively, the pixel value of each pixel in the original face image (for example, the RGB three-channel pixel values) can first be extracted, and a pixel value matrix of the original face image can be built according to the coordinate position of each pixel in the image. Then, the pixel value of each pixel in the second original face blurred image can be extracted, and a pixel value matrix of the second original face blurred image can be built in the same way. The two pixel value matrices are then divided, that is, the pixel values at each coordinate position are divided element by element, to obtain the original face detail image, which reflects the detailed facial features of the original face image.
在利用乘性分解的方式融合图像的情况下，将原始人脸细节图的像素值矩阵，与第二目标人脸模糊图的像素值矩阵相乘，根据相乘所得的像素值矩阵即可生成上述的人脸融合图。When images are fused multiplicatively, the pixel value matrix of the original face detail image is multiplied by the pixel value matrix of the second target face blurred image, and the above face fusion image can be generated from the resulting pixel value matrix.
实际应用中,可以通过下列算法实现上述基于乘性分解的人脸融合方法:In practical applications, the above face fusion method based on multiplicative decomposition can be implemented by the following algorithms:
dst = (dst_ori / dst_low) * source_low;
dst_low = dst_blur / dstWhite_blur;
source_low = source_blur / sourceWhite_blur;
其中，dst代表最终输出的人脸融合图；dst_ori代表原始人脸图；dst_blur代表第一原始人脸模糊图；dstWhite_blur代表对原始人脸图的人脸纯白图进行模糊后得到的纯白模糊图；dst_low代表原始人脸图的第二原始人脸模糊图；source_blur代表第一目标人脸模糊图；sourceWhite_blur代表对目标人脸图的人脸纯白图进行模糊后得到的纯白模糊图；source_low代表第二目标人脸模糊图。Here, dst denotes the final output face fusion image; dst_ori denotes the original face image; dst_blur denotes the first original face blurred image; dstWhite_blur denotes the pure white blurred image obtained by blurring the pure white face image of the original face image; dst_low denotes the second original face blurred image; source_blur denotes the first target face blurred image; sourceWhite_blur denotes the pure white blurred image obtained by blurring the pure white face image of the target face image; source_low denotes the second target face blurred image.
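The multiplicative algorithm above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the patent's implementation: the function name is invented, the white blur maps are assumed to be already normalized to [0, 1] (pure white blur divided by 255), and an epsilon guard against division by zero is added as an assumption.

```python
import numpy as np

def fuse_multiplicative(dst_ori, dst_blur, dst_white_blur,
                        source_blur, source_white_blur, eps=1e-6):
    # dst_white_blur / source_white_blur: assumed pre-normalized to [0, 1];
    # eps guards the divisions (an addition, not part of the patent text).
    dst_low = dst_blur / (dst_white_blur + eps)           # second original face blur map
    source_low = source_blur / (source_white_blur + eps)  # second target face blur map
    detail = dst_ori / (dst_low + eps)                    # original face detail map
    return detail * source_low                            # dst = (dst_ori / dst_low) * source_low
```

For example, with a single pixel dst_ori = 200, dst_blur = 100, white blur 0.5 on both sides and source_blur = 60, the detail factor is 1 and the output tracks the target-side blur.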
上述的人脸融合方法中,通过乘性分解的方式进行人脸融合,相比起加性分解的方式,可以尽量保留有原有人脸颜色、五官细节特征,人脸融合的质量较佳。In the above-mentioned face fusion method, face fusion is performed by means of multiplicative decomposition. Compared with the method of additive decomposition, the original face color and facial features can be preserved as much as possible, and the quality of face fusion is better.
在一示例性实施例中,在步骤S250中,可以通过以下步骤实现:In an exemplary embodiment, in step S250, the following steps may be used:
将所述原始人脸图中各个像素点的像素值减去所述第二原始人脸模糊图中各个像素点的像素值,得到所述原始人脸细节图;Subtracting the pixel value of each pixel in the original face image from the pixel value of each pixel in the second original face blur image to obtain the original face detail image;
在步骤S260中,可以通过以下步骤实现:In step S260, it can be achieved by the following steps:
将所述原始人脸细节图中各个像素点的像素值加上所述第二目标人脸模糊图中各个像素点的像素值,得到所述目标人脸图的人脸融合图。The pixel value of each pixel point in the original face detail map is added to the pixel value of each pixel point in the second target face blur map to obtain a face fusion map of the target face map.
在一些实施例中，在利用加性分解的方式分解图像的情况下，可以首先提取出原始人脸图像中各个像素点的像素值（例如RGB三通道的像素值），根据各个像素点在图像中所处的坐标位置，建立原始人脸图像的像素值矩阵。然后，可以提取出第二原始人脸模糊图中各个像素点的像素值，根据各个像素点在图像中所处的坐标位置，建立第二原始人脸模糊图的像素值矩阵。然后将两个像素值矩阵相减，即，根据各个像素点在图像中所处的坐标位置将各个像素值分别进行减法运算，得到反映原始人脸图的五官细节特征的原始人脸细节图。In some embodiments, when the image is decomposed additively, the pixel value of each pixel in the original face image (for example, the RGB three-channel pixel values) can first be extracted, and a pixel value matrix of the original face image can be built according to the coordinate position of each pixel in the image. Then, the pixel value of each pixel in the second original face blurred image can be extracted, and a pixel value matrix of the second original face blurred image can be built in the same way. The two pixel value matrices are then subtracted, that is, the pixel values at each coordinate position are subtracted element by element, to obtain the original face detail image, which reflects the detailed facial features of the original face image.
在利用加性分解的方式融合图像的情况下，将原始人脸细节图的像素值矩阵，与第二目标人脸模糊图的像素值矩阵相加，根据相加所得的像素值矩阵即可生成上述的人脸融合图。When images are fused additively, the pixel value matrix of the original face detail image is added to the pixel value matrix of the second target face blurred image, and the above face fusion image can be generated from the resulting pixel value matrix.
实际应用中,可以通过下列算法实现上述基于加性分解的人脸融合方法:In practical applications, the above-mentioned face fusion method based on additive decomposition can be implemented by the following algorithms:
dst = (dst_ori - dst_blur) + source_blur;
dst_low = dst_blur / dstWhite_blur;
source_low = source_blur / sourceWhite_blur;
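The additive variant can be sketched just as briefly (illustrative only; the function name is an assumption, and the inputs are assumed to be float arrays of identical shape):

```python
import numpy as np

def fuse_additive(dst_ori, dst_blur, source_blur):
    # dst = (dst_ori - dst_blur) + source_blur: subtract the original-side blur
    # to isolate the detail layer, then add the target-side blur back in.
    return (dst_ori - dst_blur) + source_blur
```

The absence of any division is what makes this variant cheaper than the multiplicative one.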
上述的人脸融合方法中，通过加性分解的方式进行人脸融合，相比起乘性分解的方式，所需的运算量较少，可以较快地完成人脸融合，人脸融合的效率较高。In the above face fusion method, face fusion is performed by additive decomposition. Compared with multiplicative decomposition, it requires less computation, so face fusion completes faster and is more efficient.
应该理解的是，虽然图2-7的流程图中的各个步骤按照箭头的指示依次显示，但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明，这些步骤的执行并没有严格的顺序限制，这些步骤可以以其它的顺序执行。而且，图2-7中的至少一部分步骤可以包括多个步骤或者多个阶段，这些步骤或者阶段并不必然是在同一时刻执行完成，而是可以在不同的时刻执行，这些步骤或者阶段的执行顺序也不必然是依次进行，而是可以与其它步骤或者其它步骤中的步骤或者阶段的至少一部分轮流或者交替地执行。It should be understood that although the steps in the flowcharts of Figs. 2-7 are shown sequentially as indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering constraint on their execution, and they may be performed in other orders. Moreover, at least some of the steps in Figs. 2-7 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times; their execution order is also not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
图5是根据一示例性实施例示出的一种人脸融合装置框图。参照图5，该装置包括获取单元502、人脸模糊单元504、纯白模糊单元506、模糊处理单元508、细节分解单元510和人脸融合单元512。Fig. 5 is a block diagram of a face fusion apparatus according to an exemplary embodiment. Referring to Fig. 5, the apparatus includes an acquisition unit 502, a face blurring unit 504, a pure white blurring unit 506, a blurring processing unit 508, a detail decomposition unit 510 and a face fusion unit 512.
获取单元502,被配置为获取原始人脸图和目标人脸图;an obtaining unit 502, configured to obtain the original face map and the target face map;
人脸模糊单元504,被配置为对所述原始人脸图和所述目标人脸图分别进行模糊处理,得到第一原始人脸模糊图和第一目标人脸模糊图;The face blurring unit 504 is configured to blur the original face image and the target face image respectively to obtain the first original face blurred image and the first target face blurred image;
纯白模糊单元506，被配置为对根据所述原始人脸图所生成的人脸纯白图进行模糊处理，得到所述原始人脸图的纯白模糊图，以及，对根据所述目标人脸图所生成的人脸纯白图进行模糊处理，得到所述目标人脸图的纯白模糊图；The pure white blurring unit 506 is configured to blur the pure white face image generated from the original face image to obtain the pure white blurred image of the original face image, and to blur the pure white face image generated from the target face image to obtain the pure white blurred image of the target face image;
模糊处理单元508，被配置为基于所述原始人脸图的纯白模糊图，对所述第一原始人脸模糊图进行图像分解，得到第二原始人脸模糊图，以及，基于所述目标人脸图的纯白模糊图，对所述第一目标人脸模糊图进行图像分解，得到第二目标人脸模糊图；The blurring processing unit 508 is configured to perform image decomposition on the first original face blurred image based on the pure white blurred image of the original face image to obtain the second original face blurred image, and to perform image decomposition on the first target face blurred image based on the pure white blurred image of the target face image to obtain the second target face blurred image;
细节分解单元510,被配置为基于所述第二原始人脸模糊图,对所述原始人脸图进行图像分解,得到原始人脸细节图;The detail decomposition unit 510 is configured to perform image decomposition on the original face map based on the second original face blur map to obtain the original face detail map;
人脸融合单元512,被配置为将所述原始人脸细节图融合至所述第二目标人脸模糊图,得到所述目标人脸图的人脸融合图。The face fusion unit 512 is configured to fuse the original face detail image to the second target face fuzzy image to obtain a face fusion image of the target face image.
在一示例性实施例中,所述模糊处理单元508,被配置为:In an exemplary embodiment, the blurring unit 508 is configured to:
将所述原始人脸图的纯白模糊图中各个像素点的像素值进行归一化处理，得到第一归一化模糊图；将所述第一原始人脸模糊图中各个像素点的像素值除以所述第一归一化模糊图中各个像素点的像素值，得到所述第二原始人脸模糊图；将所述目标人脸图的纯白模糊图中各个像素点的像素值进行归一化处理，得到第二归一化模糊图；将所述第一目标人脸模糊图中各个像素点的像素值除以所述第二归一化模糊图中各个像素点的像素值，得到所述第二目标人脸模糊图。Normalize the pixel values of the pure white blurred image of the original face image to obtain the first normalized blurred image; divide the pixel value of each pixel in the first original face blurred image by that of the corresponding pixel in the first normalized blurred image to obtain the second original face blurred image; normalize the pixel values of the pure white blurred image of the target face image to obtain the second normalized blurred image; divide the pixel value of each pixel in the first target face blurred image by that of the corresponding pixel in the second normalized blurred image to obtain the second target face blurred image.
在一示例性实施例中,所述细节分解单元510,被配置为:In an exemplary embodiment, the detail decomposition unit 510 is configured to:
将所述原始人脸图中各个像素点的像素值除以所述第二原始人脸模糊图中各个像素点的像素值,得到所述原始人脸细节图;dividing the pixel value of each pixel in the original face image by the pixel value of each pixel in the second original face blur image to obtain the original face detail image;
所述人脸融合单元512,被配置为:The face fusion unit 512 is configured as:
将所述原始人脸细节图中各个像素点的像素值乘以所述第二目标人脸模糊图中各个像素点的像素值,得到所述目标人脸图的人脸融合图。Multiply the pixel value of each pixel in the original face detail map by the pixel value of each pixel in the second target face blur map to obtain a face fusion map of the target face map.
在一示例性实施例中,所述细节分解单元510,被配置为:In an exemplary embodiment, the detail decomposition unit 510 is configured to:
将所述原始人脸图中各个像素点的像素值减去所述第二原始人脸模糊图中各个像素点的像素值,得到所述原始人脸细节图;Subtracting the pixel value of each pixel in the original face image from the pixel value of each pixel in the second original face blur image to obtain the original face detail image;
所述人脸融合单元512,被配置为:The face fusion unit 512 is configured as:
将所述原始人脸细节图中各个像素点的像素值加上所述第二目标人脸模糊图中各个像素点的像素值,得到所述目标人脸图的人脸融合图。The pixel value of each pixel point in the original face detail map is added to the pixel value of each pixel point in the second target face blur map to obtain a face fusion map of the target face map.
在一示例性实施例中,所述模糊处理单元508,被配置为:In an exemplary embodiment, the blurring unit 508 is configured to:
将所述原始人脸图的纯白模糊图中各个像素点的像素值除以纯白像素值，得到所述第一归一化模糊图；所述纯白像素值为纯白像素点的像素值；将所述目标人脸图的纯白模糊图中各个像素点的像素值除以所述纯白像素值，得到所述第二归一化模糊图。Divide the pixel value of each pixel in the pure white blurred image of the original face image by the pure white pixel value to obtain the first normalized blurred image, the pure white pixel value being the pixel value of a pure white pixel; divide the pixel value of each pixel in the pure white blurred image of the target face image by the pure white pixel value to obtain the second normalized blurred image.
关于上述实施例中的装置,其中各个模块执行操作的实施方式已经在有关该方法的实施例中进行了详细描述,此处将不做详细阐述说明。Regarding the apparatus in the above-mentioned embodiments, the implementation manner in which each module performs operations has been described in detail in the embodiment of the method, and will not be described in detail here.
图6是根据一示例性实施例示出的一种用于人脸融合的电子设备600的框图。例如,电子设备600可以是移动电话、计算机、数字广播终端、消息收发设备、游戏控制台、平板设备、医疗设备、健身设备、个人数字助理等。FIG. 6 is a block diagram of an electronic device 600 for face fusion according to an exemplary embodiment. For example, electronic device 600 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, fitness device, personal digital assistant, or the like.
参照图6，电子设备600可以包括以下一个或多个组件：处理组件602、存储器604、电源组件606、多媒体组件608、音频组件610、输入/输出(I/O)的接口612、传感器组件614以及通信组件616。Referring to Fig. 6, the electronic device 600 may include one or more of the following components: a processing component 602, a memory 604, a power supply component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, and a communication component 616.
处理组件602通常控制电子设备600的整体操作,诸如与显示、电话呼叫、数据通信、相机操作和记录操作相关联的操作。处理组件602可以包括一个或多个处理器620来执行指令,以完成上述的方法的全部或部分步骤。此外,处理组件602可以包括一个或多个模块,便于处理组件602和其他组件之间的交互。例如,处理组件602可以包括多媒体模块,以方便多媒体组件608和处理组件602之间的交互。The processing component 602 generally controls the overall operation of the electronic device 600, such as operations associated with display, phone calls, data communications, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions to perform all or some of the steps of the methods described above. Additionally, processing component 602 may include one or more modules that facilitate interaction between processing component 602 and other components. For example, processing component 602 may include a multimedia module to facilitate interaction between multimedia component 608 and processing component 602.
存储器604被配置为存储各种类型的数据以支持在电子设备600的操作。这些数据的示例包括用于在电子设备600上操作的任何应用程序或方法的指令、联系人数据、电话簿数据、消息、图片、视频等。存储器604可以由任何类型的易失性或非易失性存储设备或者它们的组合实现,如静态随机存取存储器(SRAM)、电可擦除可编程只读存储器(EEPROM)、可擦除可编程只读存储器(EPROM)、可编程只读存储器(PROM)、只读存储器(ROM)、磁存储器、快闪存储器、磁盘或光盘。 Memory 604 is configured to store various types of data to support operation at electronic device 600 . Examples of such data include instructions for any application or method operating on electronic device 600, contact data, phonebook data, messages, pictures, videos, and the like. Memory 604 may be implemented by any type of volatile or nonvolatile storage device or combination thereof, such as static random access memory (SRAM), electrically erasable programmable read only memory (EEPROM), erasable Programmable Read Only Memory (EPROM), Programmable Read Only Memory (PROM), Read Only Memory (ROM), Magnetic Memory, Flash Memory, Magnetic or Optical Disk.
电源组件606为电子设备600的各种组件提供电力。电源组件606可以包括电源管理系统，一个或多个电源，及其他与为电子设备600生成、管理和分配电力相关联的组件。The power supply component 606 provides power to the various components of the electronic device 600. The power supply component 606 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 600.
多媒体组件608包括在所述电子设备600和用户之间的提供一个输出接口的屏幕。在一些实施例中,屏幕可以包括液晶显示器(LCD)和触摸面板(TP)。在屏幕包括触摸面板的情况下,屏幕可以被实现为触摸屏,以接收来自用户的输入信号。触摸面板包括一个或多个触摸传感器以感测触摸、滑动和触摸面板上的手势。所述触摸传感器可以不仅感测触摸或滑动动作的边界,而且还检测与所述触摸或滑动操作相关的持续时间和压力。在一些实施例中,多媒体组件608包括一个前置摄像头和/或后置摄像头。在电子设备600处于操作模式,如拍摄模式或视频模式的情况下,前置摄像头和/或后置摄像头可以接收外部的多媒体数据。每个前置摄像头和后置摄像头可以是一个固定的光学透镜系统或具有焦距和光学变焦能力。 Multimedia component 608 includes a screen that provides an output interface between the electronic device 600 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). In the case where the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundaries of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action. In some embodiments, the multimedia component 608 includes a front-facing camera and/or a rear-facing camera. When the electronic device 600 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each of the front and rear cameras can be a fixed optical lens system or have focal length and optical zoom capability.
音频组件610被配置为输出和/或输入音频信号。例如,音频组件610包括一个麦克风(MIC),在电子设备600处于操作模式,如呼叫模式、记录模式和语音识别模式的情况下,麦克风被配置为接收外部音频信号。所接收的音频信号可以被进一步存储在存储器604或经由通信组件616发送。在一些实施例中,音频组件610还包括一个扬声器,用于输出音频信号。 Audio component 610 is configured to output and/or input audio signals. For example, audio component 610 includes a microphone (MIC) that is configured to receive external audio signals when electronic device 600 is in operating modes, such as calling mode, recording mode, and voice recognition mode. The received audio signal may be further stored in memory 604 or transmitted via communication component 616 . In some embodiments, audio component 610 also includes a speaker for outputting audio signals.
I/O接口612为处理组件602和外围接口模块之间提供接口,上述外围接口模块可以是键盘,点击轮,按钮等。这些按钮可包括但不限于:主页按钮、音量按钮、启动按钮和锁定按钮。The I/O interface 612 provides an interface between the processing component 602 and a peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to: home button, volume buttons, start button, and lock button.
传感器组件614包括一个或多个传感器，用于为电子设备600提供各个方面的状态评估。例如，传感器组件614可以检测到电子设备600的打开/关闭状态，组件的相对定位，例如所述组件为电子设备600的显示器和小键盘，传感器组件614还可以检测电子设备600或电子设备600一个组件的位置改变，用户与电子设备600接触的存在或不存在，电子设备600方位或加速/减速和电子设备600的温度变化。传感器组件614可以包括接近传感器，被配置用来在没有任何的物理接触时检测附近物体的存在。传感器组件614还可以包括光传感器，如CMOS或CCD图像传感器，用于在成像应用中使用。在一些实施例中，该传感器组件614还可以包括加速度传感器、陀螺仪传感器、磁传感器、压力传感器或温度传感器。The sensor component 614 includes one or more sensors for providing state assessments of various aspects of the electronic device 600. For example, the sensor component 614 can detect the on/off state of the electronic device 600 and the relative positioning of components, such as the display and keypad of the electronic device 600; it can also detect a change in position of the electronic device 600 or one of its components, the presence or absence of user contact with the electronic device 600, the orientation or acceleration/deceleration of the electronic device 600, and temperature changes of the electronic device 600. The sensor component 614 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
通信组件616被配置为便于电子设备600和其他设备之间有线或无线方式的通信。电子设备600可以接入基于通信标准的无线网络,如WiFi,运营商网络(如2G、3G、4G或5G),或它们的组合。在一个示例性实施例中,通信组件616经由广播信道接收来自外部广播管理系统的广播信号或广播相关信息。在一个示例性实施例中,所述通信组件616还包括近场通信(NFC)模块,以促进短程通信。例如,在NFC模块可基于射频识别(RFID)技术,红外数据协会(IrDA)技术,超宽带(UWB)技术,蓝牙(BT)技术和其他技术来实现。 Communication component 616 is configured to facilitate wired or wireless communication between electronic device 600 and other devices. Electronic device 600 may access wireless networks based on communication standards, such as WiFi, carrier networks (eg, 2G, 3G, 4G, or 5G), or a combination thereof. In one exemplary embodiment, the communication component 616 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
在示例性实施例中,电子设备600可以被一个或多个应用专用集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理设备(DSPD)、可编程逻辑器件(PLD)、现场可编程门阵列(FPGA)、控制器、微控制器、微处理器或其他电子元件实现,用于执行上述方法。In an exemplary embodiment, electronic device 600 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable A programmed gate array (FPGA), controller, microcontroller, microprocessor or other electronic component implementation is used to perform the above method.
在示例性实施例中,还提供了一种包括指令的非临时性计算机可读存储介质,例如包括 指令的存储器604,上述指令可由电子设备600的处理器620执行以完成上述方法。例如,所述非临时性计算机可读存储介质可以是ROM、随机存取存储器(RAM)、CD-ROM、磁带、软盘和光数据存储设备等。In an exemplary embodiment, there is also provided a non-transitory computer readable storage medium including instructions, such as memory 604 including instructions, executable by the processor 620 of the electronic device 600 to perform the above method. For example, the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, and the like.
在示例性实施例中,还提供一种计算机程序产品,包括计算机程序,其中,所述计算机程序被处理器执行时完成上述的人脸融合方法。In an exemplary embodiment, a computer program product is also provided, including a computer program, wherein the computer program, when executed by a processor, completes the above-mentioned face fusion method.
本公开所有实施例均可以单独被执行,也可以与其他实施例相结合被执行,均视为本公开要求的保护范围。All the embodiments of the present disclosure can be implemented independently or in combination with other embodiments, which are all regarded as the protection scope required by the present disclosure.

Claims (17)

  1. 一种人脸融合方法,包括:A face fusion method, including:
    获取原始人脸图和目标人脸图;Obtain the original face map and the target face map;
    对所述原始人脸图和所述目标人脸图分别进行模糊处理,得到第一原始人脸模糊图和第一目标人脸模糊图;The original face image and the target face image are respectively subjected to a blurring process to obtain a first original face blurred image and a first target face blurred image;
    对根据所述原始人脸图所生成的人脸纯白图进行模糊处理，得到所述原始人脸图的纯白模糊图，以及，对根据所述目标人脸图所生成的人脸纯白图进行模糊处理，得到所述目标人脸图的纯白模糊图；Performing blurring processing on the pure white face image generated according to the original face image to obtain a pure white blurred image of the original face image, and performing blurring processing on the pure white face image generated according to the target face image to obtain a pure white blurred image of the target face image;
    基于所述原始人脸图的纯白模糊图，对所述第一原始人脸模糊图进行图像分解，得到第二原始人脸模糊图，以及，基于所述目标人脸图的纯白模糊图，对所述第一目标人脸模糊图进行图像分解，得到第二目标人脸模糊图；Based on the pure white blurred image of the original face image, performing image decomposition on the first original face blurred image to obtain a second original face blurred image, and, based on the pure white blurred image of the target face image, performing image decomposition on the first target face blurred image to obtain a second target face blurred image;
    基于所述第二原始人脸模糊图,对所述原始人脸图进行图像分解,得到原始人脸细节图;和Based on the second original face blurred image, image decomposition is performed on the original face image to obtain an original face detail image; and
    将所述原始人脸细节图融合至所述第二目标人脸模糊图,得到所述目标人脸图的人脸融合图。The original face detail image is fused to the second target face blur image to obtain a face fusion image of the target face image.
  2. 根据权利要求1所述的人脸融合方法，其中，所述基于所述原始人脸图的纯白模糊图，对所述第一原始人脸模糊图进行图像分解，得到第二原始人脸模糊图，包括：The face fusion method according to claim 1, wherein performing image decomposition on the first original face blurred image based on the pure white blurred image of the original face image to obtain the second original face blurred image comprises:
    将所述原始人脸图的纯白模糊图中各个像素点的像素值进行归一化处理,得到第一归一化模糊图;Normalizing the pixel value of each pixel in the pure white blurred image of the original face image to obtain a first normalized blurred image;
    将所述第一原始人脸模糊图中各个像素点的像素值除以所述第一归一化模糊图中各个像素点的像素值,得到所述第二原始人脸模糊图。Divide the pixel value of each pixel in the first original face blur map by the pixel value of each pixel in the first normalized blur map to obtain the second original face blur map.
  3. 根据权利要求2所述的人脸融合方法，其中，所述将所述原始人脸图的纯白模糊图中各个像素点的像素值进行归一化处理，得到第一归一化模糊图，包括：The face fusion method according to claim 2, wherein normalizing the pixel value of each pixel in the pure white blurred image of the original face image to obtain the first normalized blurred image comprises:
    将所述原始人脸图的纯白模糊图中各个像素点的像素值除以纯白像素值,得到所述第一归一化模糊图;所述纯白像素值为纯白像素点的像素值。Divide the pixel value of each pixel in the pure white fuzzy image of the original face image by the pure white pixel value to obtain the first normalized fuzzy image; the pure white pixel value is the pixel value of the pure white pixel .
  4. 根据权利要求1所述的人脸融合方法，其中，基于所述目标人脸图的纯白模糊图，对所述第一目标人脸模糊图进行图像分解，得到第二目标人脸模糊图，包括：The face fusion method according to claim 1, wherein performing image decomposition on the first target face blurred image based on the pure white blurred image of the target face image to obtain the second target face blurred image comprises:
    将所述目标人脸图的纯白模糊图中各个像素点的像素值进行归一化处理,得到第二归一化模糊图;Normalizing the pixel value of each pixel in the pure white blur image of the target face image to obtain a second normalized blur image;
    将所述第一目标人脸模糊图中各个像素点的像素值除以所述第二归一化模糊图中各个像素点的像素值,得到所述第二目标人脸模糊图。Divide the pixel value of each pixel point in the first target face blur map by the pixel value of each pixel point in the second normalized blur map to obtain the second target face blur map.
  5. 根据权利要求4所述的人脸融合方法,其中,将所述目标人脸图的纯白模糊图中各个像素点的像素值进行归一化处理,得到第二归一化模糊图,包括:The face fusion method according to claim 4, wherein the pixel value of each pixel in the pure white blur image of the target face image is normalized to obtain a second normalized blur image, comprising:
    将所述目标人脸图的纯白模糊图中各个像素点的像素值除以所述纯白像素值,得到所述第二归一化模糊图。The second normalized blur image is obtained by dividing the pixel value of each pixel in the pure white blur image of the target face image by the pure white pixel value.
  6. 根据权利要求1所述的人脸融合方法,其中,所述基于所述第二原始人脸模糊图, 对所述原始人脸图进行图像分解,得到原始人脸细节图,包括:The human face fusion method according to claim 1, wherein the image decomposition is performed on the original human face image based on the second original human face blurred image to obtain an original human face detail image, comprising:
    将所述原始人脸图中各个像素点的像素值除以所述第二原始人脸模糊图中各个像素点的像素值,得到所述原始人脸细节图;dividing the pixel value of each pixel in the original face image by the pixel value of each pixel in the second original face blur image to obtain the original face detail image;
    所述将所述原始人脸细节图融合至所述第二目标人脸模糊图,得到所述目标人脸图的人脸融合图,包括:The fusion of the original face detail image to the second target face blur image to obtain a face fusion image of the target face image includes:
    将所述原始人脸细节图中各个像素点的像素值乘以所述第二目标人脸模糊图中各个像素点的像素值,得到所述目标人脸图的人脸融合图。Multiply the pixel value of each pixel in the original face detail map by the pixel value of each pixel in the second target face blur map to obtain a face fusion map of the target face map.
  7. 根据权利要求1所述的人脸融合方法,其中,所述基于所述第二原始人脸模糊图,对所述原始人脸图进行图像分解,得到原始人脸细节图,包括:The face fusion method according to claim 1, wherein the image decomposition is performed on the original face image based on the second original face blurred image to obtain an original face detail image, comprising:
    将所述原始人脸图中各个像素点的像素值减去所述第二原始人脸模糊图中各个像素点的像素值,得到所述原始人脸细节图;Subtracting the pixel value of each pixel in the original face image from the pixel value of each pixel in the second original face blur image to obtain the original face detail image;
    所述将所述原始人脸细节图融合至所述第二目标人脸模糊图,得到所述目标人脸图的人脸融合图,包括:The fusion of the original face detail image to the second target face blur image to obtain a face fusion image of the target face image includes:
    将所述原始人脸细节图中各个像素点的像素值加上所述第二目标人脸模糊图中各个像素点的像素值,得到所述目标人脸图的人脸融合图。The pixel value of each pixel point in the original face detail map is added to the pixel value of each pixel point in the second target face blur map to obtain a face fusion map of the target face map.
  8. 一种人脸融合装置,包括:A face fusion device, comprising:
    获取单元,被配置为获取原始人脸图和目标人脸图;an acquisition unit, configured to acquire the original face map and the target face map;
    人脸模糊单元,被配置为对所述原始人脸图和所述目标人脸图分别进行模糊处理,得到第一原始人脸模糊图和第一目标人脸模糊图;a face blurring unit, configured to blur the original face image and the target face image respectively to obtain a first original face blurred image and a first target face blurred image;
    纯白模糊单元，被配置为对根据所述原始人脸图所生成的人脸纯白图进行模糊处理，得到所述原始人脸图的纯白模糊图，以及，对根据所述目标人脸图所生成的人脸纯白图进行模糊处理，得到所述目标人脸图的纯白模糊图；The pure white blurring unit is configured to blur the pure white face image generated from the original face image to obtain the pure white blurred image of the original face image, and to blur the pure white face image generated from the target face image to obtain the pure white blurred image of the target face image;
    模糊处理单元，被配置为基于所述原始人脸图的纯白模糊图，对所述第一原始人脸模糊图进行图像分解，得到第二原始人脸模糊图，以及，基于所述目标人脸图的纯白模糊图，对所述第一目标人脸模糊图进行图像分解，得到第二目标人脸模糊图；The blurring processing unit is configured to perform image decomposition on the first original face blurred image based on the pure white blurred image of the original face image to obtain the second original face blurred image, and to perform image decomposition on the first target face blurred image based on the pure white blurred image of the target face image to obtain the second target face blurred image;
    细节分解单元,被配置为基于所述第二原始人脸模糊图,对所述原始人脸图进行图像分解,得到原始人脸细节图;和a detail decomposition unit, configured to perform image decomposition on the original face map based on the second original face blur map to obtain an original face detail map; and
    人脸融合单元,被配置为将所述原始人脸细节图融合至所述第二目标人脸模糊图,得到所述目标人脸图的人脸融合图。The face fusion unit is configured to fuse the original face detail image to the second target face fuzzy image to obtain a face fusion image of the target face image.
  9. 根据权利要求8所述的人脸融合装置,其中,所述模糊处理单元,被配置为:The face fusion device according to claim 8, wherein the blurring processing unit is configured to:
    将所述原始人脸图的纯白模糊图中各个像素点的像素值进行归一化处理,得到第一归一化模糊图;Normalizing the pixel value of each pixel in the pure white blurred image of the original face image to obtain a first normalized blurred image;
    将所述第一原始人脸模糊图中各个像素点的像素值除以所述第一归一化模糊图中各个像素点的像素值,得到所述第二原始人脸模糊图。Divide the pixel value of each pixel in the first original face blur map by the pixel value of each pixel in the first normalized blur map to obtain the second original face blur map.
  10. 根据权利要求9所述的人脸融合装置,其中,所述模糊处理单元进一步被配置为:The face fusion device according to claim 9, wherein the blurring processing unit is further configured to:
    将所述原始人脸图的纯白模糊图中各个像素点的像素值除以纯白像素值,得到所述 第一归一化模糊图;所述纯白像素值为纯白像素点的像素值。Divide the pixel value of each pixel in the pure white fuzzy image of the original face image by the pure white pixel value to obtain the first normalized fuzzy image; the pure white pixel value is the pixel value of the pure white pixel .
  11. The face fusion apparatus according to claim 8, wherein the blurring processing unit is configured to:
    normalize the pixel value of each pixel in the pure white blurred image of the target face image, to obtain a second normalized blurred image; and
    divide the pixel value of each pixel in the first target face blurred image by the pixel value of the corresponding pixel in the second normalized blurred image, to obtain the second target face blurred image.
  12. The face fusion apparatus according to claim 11, wherein the blurring processing unit is further configured to:
    divide the pixel value of each pixel in the pure white blurred image of the target face image by the pure white pixel value, to obtain the second normalized blurred image.
  13. The face fusion apparatus according to claim 8, wherein the detail decomposition unit is configured to:
    divide the pixel value of each pixel in the original face image by the pixel value of the corresponding pixel in the second original face blurred image, to obtain the original face detail image;
    and wherein the face fusion unit is configured to:
    multiply the pixel value of each pixel in the original face detail image by the pixel value of the corresponding pixel in the second target face blurred image, to obtain the face fusion image of the target face image.
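Claim 13 is a multiplicative base/detail decomposition: the detail layer is the ratio of the original to its blurred version, and fusion re-multiplies that detail onto the target's blurred base. A hedged sketch, assuming float images already processed into the second blurred images of claim 8 (the `EPS` guard is an implementation detail, not part of the claims):

```python
import numpy as np

EPS = 1e-6  # avoids division by zero in fully black regions (not in the claims)


def fuse_multiplicative(original: np.ndarray,
                        original_blur: np.ndarray,
                        target_blur: np.ndarray) -> np.ndarray:
    """Claim 13: detail = original / second original blur;
    fusion = detail * second target blur."""
    detail = original / (original_blur + EPS)  # original face detail image
    return detail * target_blur                # face fusion image of the target face image
```

Intuitively, `detail` is close to 1.0 in smooth regions and deviates where fine texture (pores, wrinkles) modulates the local average, so multiplying transfers that texture onto the target's base layer.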
  14. The face fusion apparatus according to claim 8, wherein the detail decomposition unit is configured to:
    subtract the pixel value of each pixel in the second original face blurred image from the pixel value of the corresponding pixel in the original face image, to obtain the original face detail image;
    and wherein the face fusion unit is configured to:
    add the pixel value of each pixel in the original face detail image to the pixel value of the corresponding pixel in the second target face blurred image, to obtain the face fusion image of the target face image.
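Claim 14 is the additive counterpart: a high-pass detail layer obtained by subtraction, re-added onto the target's base layer. A minimal sketch under the same assumptions as above (float images, second blurred images already computed):

```python
import numpy as np


def fuse_additive(original: np.ndarray,
                  original_blur: np.ndarray,
                  target_blur: np.ndarray) -> np.ndarray:
    """Claim 14: detail = original - second original blur;
    fusion = detail + second target blur."""
    detail = original - original_blur  # original face detail image (signed high-pass)
    return detail + target_blur        # face fusion image of the target face image
```

Unlike the multiplicative variant, the additive detail layer is signed and centered around zero, so no division guard is needed, but the result may need clipping back to the valid pixel range.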
  15. An electronic device, comprising:
    a processor; and
    a memory for storing instructions executable by the processor;
    wherein the processor is configured to execute the instructions to implement the following steps:
    acquiring an original face image and a target face image;
    blurring the original face image and the target face image respectively, to obtain a first original face blurred image and a first target face blurred image;
    blurring a pure white face image generated from the original face image, to obtain a pure white blurred image of the original face image, and blurring a pure white face image generated from the target face image, to obtain a pure white blurred image of the target face image;
    performing image decomposition on the first original face blurred image based on the pure white blurred image of the original face image, to obtain a second original face blurred image, and performing image decomposition on the first target face blurred image based on the pure white blurred image of the target face image, to obtain a second target face blurred image;
    performing image decomposition on the original face image based on the second original face blurred image, to obtain an original face detail image; and
    fusing the original face detail image into the second target face blurred image, to obtain a face fusion image of the target face image.
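The steps of claim 15 can be chained end to end. The sketch below follows the multiplicative variant of claims 13 and 9-12, again with a box blur standing in for the unspecified blurring operation; every function name is illustrative:

```python
import numpy as np

WHITE = 255.0  # pure white pixel value for 8-bit images


def box_blur(img: np.ndarray, radius: int = 1) -> np.ndarray:
    """Box blur with zero padding (hypothetical choice of blur)."""
    padded = np.pad(img, radius, mode="constant")
    out = np.zeros(img.shape, dtype=np.float64)
    size = 2 * radius + 1
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (size * size)


def face_fusion(original: np.ndarray, target: np.ndarray, radius: int = 1) -> np.ndarray:
    """End-to-end sketch of the claimed steps (multiplicative variant)."""
    # Step: blur both face images -> first blurred images
    orig_blur1 = box_blur(original.astype(np.float64), radius)
    tgt_blur1 = box_blur(target.astype(np.float64), radius)
    # Step: blur pure white images of the same sizes
    orig_white = box_blur(np.full(original.shape, WHITE), radius)
    tgt_white = box_blur(np.full(target.shape, WHITE), radius)
    # Step: normalize by the white blur -> second blurred images
    orig_blur2 = orig_blur1 / (orig_white / WHITE)
    tgt_blur2 = tgt_blur1 / (tgt_white / WHITE)
    # Step: decompose the original face into its detail layer
    detail = original / (orig_blur2 + 1e-6)
    # Step: fuse the detail layer onto the target's base layer
    return detail * tgt_blur2
```

In a real pipeline the inputs would be aligned face crops (see the face-detection classification G06V40/161), and the output would be clipped and cast back to the source bit depth.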
  16. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the following steps:
    acquiring an original face image and a target face image;
    blurring the original face image and the target face image respectively, to obtain a first original face blurred image and a first target face blurred image;
    blurring a pure white face image generated from the original face image, to obtain a pure white blurred image of the original face image, and blurring a pure white face image generated from the target face image, to obtain a pure white blurred image of the target face image;
    performing image decomposition on the first original face blurred image based on the pure white blurred image of the original face image, to obtain a second original face blurred image, and performing image decomposition on the first target face blurred image based on the pure white blurred image of the target face image, to obtain a second target face blurred image;
    performing image decomposition on the original face image based on the second original face blurred image, to obtain an original face detail image; and
    fusing the original face detail image into the second target face blurred image, to obtain a face fusion image of the target face image.
  17. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the following steps:
    acquiring an original face image and a target face image;
    blurring the original face image and the target face image respectively, to obtain a first original face blurred image and a first target face blurred image;
    blurring a pure white face image generated from the original face image, to obtain a pure white blurred image of the original face image, and blurring a pure white face image generated from the target face image, to obtain a pure white blurred image of the target face image;
    performing image decomposition on the first original face blurred image based on the pure white blurred image of the original face image, to obtain a second original face blurred image, and performing image decomposition on the first target face blurred image based on the pure white blurred image of the target face image, to obtain a second target face blurred image;
    performing image decomposition on the original face image based on the second original face blurred image, to obtain an original face detail image; and
    fusing the original face detail image into the second target face blurred image, to obtain a face fusion image of the target face image.
PCT/CN2021/117014 2021-03-18 2021-09-07 Facial fusion method and apparatus WO2022193573A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110290169.5A CN113160099B (en) 2021-03-18 2021-03-18 Face fusion method, device, electronic equipment, storage medium and program product
CN202110290169.5 2021-03-18

Publications (1)

Publication Number Publication Date
WO2022193573A1 true WO2022193573A1 (en) 2022-09-22

Family

ID=76887862

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/117014 WO2022193573A1 (en) 2021-03-18 2021-09-07 Facial fusion method and apparatus

Country Status (2)

Country Link
CN (1) CN113160099B (en)
WO (1) WO2022193573A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113160099B (en) * 2021-03-18 2023-12-26 北京达佳互联信息技术有限公司 Face fusion method, device, electronic equipment, storage medium and program product

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014102458A1 (en) * 2012-12-31 2014-07-03 Nokia Corporation Method and apparatus for image fusion
CN105469407A (en) * 2015-11-30 2016-04-06 华南理工大学 Facial image layer decomposition method based on improved guide filter
CN106156730A (en) * 2016-06-30 2016-11-23 腾讯科技(深圳)有限公司 The synthetic method of a kind of facial image and device
CN109784301A (en) * 2019-01-28 2019-05-21 广州酷狗计算机科技有限公司 Image processing method, device, computer equipment and storage medium
CN111127352A (en) * 2019-12-13 2020-05-08 北京达佳互联信息技术有限公司 Image processing method, device, terminal and storage medium
CN113160099A (en) * 2021-03-18 2021-07-23 北京达佳互联信息技术有限公司 Face fusion method, face fusion device, electronic equipment, storage medium and program product

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109993716B (en) * 2017-12-29 2023-04-14 微软技术许可有限责任公司 Image fusion transformation
CN112150393A (en) * 2020-10-12 2020-12-29 深圳数联天下智能科技有限公司 Face image buffing method and device, computer equipment and storage medium


Also Published As

Publication number Publication date
CN113160099A (en) 2021-07-23
CN113160099B (en) 2023-12-26


Legal Events

Code  Description
121   Ep: the epo has been informed by wipo that ep was designated in this application
      Ref document number: 21931148; Country of ref document: EP; Kind code of ref document: A1
NENP  Non-entry into the national phase
      Ref country code: DE
122   Ep: pct application non-entry in european phase
      Ref document number: 21931148; Country of ref document: EP; Kind code of ref document: A1