WO2022089185A1 - Image processing method and image processing device - Google Patents

Image processing method and image processing device

Info

Publication number
WO2022089185A1
WO2022089185A1 · PCT/CN2021/123080 · CN2021123080W
Authority
WO
WIPO (PCT)
Prior art keywords
image
face
area
deformed
background
Prior art date
Application number
PCT/CN2021/123080
Other languages
English (en)
Chinese (zh)
Inventor
赵明菲
闻兴
Original Assignee
北京达佳互联信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京达佳互联信息技术有限公司 filed Critical 北京达佳互联信息技术有限公司
Publication of WO2022089185A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/80 Geometric correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Definitions

  • the present disclosure relates to the technical field of image processing, and more particularly, to an image processing method and an image processing apparatus.
  • the present disclosure provides an image processing method and an image processing apparatus.
  • an image processing method, comprising: identifying a face region in a first image; performing face slimming on the face region in the first image based on a face-slimming algorithm to obtain a slimmed second image; and performing inpainting on the deformed area in the second image resulting from the face slimming to obtain a repaired third image.
  • the deformed area may be the portion of a predetermined area that lies outside the slimmed face area, where the predetermined area is the region of the first image, including the face area, that is involved in performing the face slimming.
  • the performing inpainting on the deformed region in the second image resulting from performing face reduction may include: performing inpainting by filling the deformed region with background pixels.
  • the performing inpainting by filling the deformed area with background pixels may include: filling the deformed area with background pixels using an image inpainting algorithm, based on the deformed area and the first image or the second image.
  • the performing inpainting by filling the deformed area with background pixels may include: filling the deformed area with background pixels based on a background image and the second image, wherein the background image is a pure background image of the same scene as the first image.
  • an image processing apparatus, comprising: a recognition unit configured to identify a face region in a first image; a face-slimming unit configured to perform face slimming on the face region in the first image to obtain a slimmed second image; and a repairing unit configured to perform inpainting on the deformed area in the second image resulting from the face slimming, to obtain a repaired third image.
  • the deformed area may be the portion of a predetermined area that lies outside the slimmed face area, where the predetermined area is the region of the first image, including the face area, that is involved in performing the face slimming.
  • the inpainting unit may be configured to perform inpainting by filling the deformed region with background pixels.
  • the inpainting unit may be configured to fill the deformed area with background pixels using an image inpainting algorithm, based on the deformed area and the first image or the second image.
  • the repairing unit may be configured to fill the deformed area with background pixels based on a background image and the second image, wherein the background image is a pure background image of the same scene as the first image.
  • the repairing unit may be configured to: search the background image for an area corresponding to the deformed area; and replace the pixel values of the pixels in the deformed area with the pixel values of the pixels in the found area of the background image.
  • an electronic device, comprising: at least one processor; and at least one memory storing computer-executable instructions, wherein, when the computer-executable instructions are executed by the at least one processor, the at least one processor is caused to perform the image processing method according to the present disclosure.
  • a computer-readable storage medium storing instructions, wherein, when the instructions are executed by at least one processor, the at least one processor is caused to perform the image processing method according to the present disclosure.
  • a computer program product wherein instructions in the computer program product can be executed by a processor of a computer device to complete the image processing method according to the present disclosure.
  • a more natural and more realistic face-slimming effect can be obtained by repairing the regions that are distorted and deformed by the face slimming.
  • FIG. 2 is a schematic diagram illustrating an implementation scenario of an image processing method and an image processing apparatus according to an exemplary embodiment of the present disclosure.
  • FIG. 3 is a flowchart illustrating an image processing method according to an exemplary embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram showing a deformed area due to face-lifting.
  • FIG. 5 is a schematic diagram illustrating a background frame replacement method according to an exemplary embodiment of the present disclosure.
  • FIG. 6 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment of the present disclosure.
  • FIG. 7 is a block diagram of an electronic device according to an exemplary embodiment of the present disclosure.
  • FIG. 1 is a schematic diagram illustrating a face-lifting algorithm.
  • the process of the face-slimming algorithm may include: (1) as shown in (a) of Figure 1, first obtain basic information on the key points of the face, mainly the 106 key points of the eyebrows, eyes, nose, mouth, and outer contour of the face; (2) as shown in (b) of Figure 1, based on the 106 detected key points, densify the facial key points by inserting additional key points, for example in the forehead area and in the area surrounding the face, so that the key points cover the entire face area; (3) as shown in (c) of Figure 1, based on the densified face key points, construct a Delaunay triangulation of the entire face area.
  • the triangulation divides the face into multiple non-overlapping triangular areas; transforming these areas then achieves the face-slimming effect.
  • the region transformation is performed by translating the vertices of the triangulation, updating the translated vertices into the corresponding texture coordinates, and rendering through OpenGL or D3D, thereby deforming all of the associated triangles.
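The vertex-translation step can be sketched in a few lines. The following is a minimal NumPy illustration under simplifying assumptions (grayscale image, a single triangle, nearest-neighbour sampling), not the implementation described here; all function names and parameters are hypothetical. A real pipeline would move many key points and render every triangle of the mesh through OpenGL or D3D.

```python
import numpy as np

def slim_keypoints(points, center, strength=0.2, mask=None):
    """Translate selected key points toward the face centre.
    points: (N, 2) array of (x, y); mask: boolean selection (e.g. jaw points)."""
    pts = points.astype(float).copy()
    sel = np.ones(len(pts), bool) if mask is None else mask
    pts[sel] += strength * (center - pts[sel])
    return pts

def warp_triangle(img, src_tri, dst_tri, out):
    """Fill the pixels inside dst_tri by sampling src_tri via barycentric
    coordinates (nearest-neighbour; a renderer would interpolate texture)."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    p = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    a, b, c = dst_tri
    m = np.array([b - a, c - a]).T            # 2x2 basis of destination triangle
    uv = np.linalg.solve(m, (p - a).T).T      # barycentric (u, v) per pixel
    inside = (uv[:, 0] >= 0) & (uv[:, 1] >= 0) & (uv.sum(1) <= 1)
    sa, sb, sc = src_tri
    src = sa + uv[inside] @ np.array([sb - sa, sc - sa])
    sx = np.clip(src[:, 0].round().astype(int), 0, w - 1)
    sy = np.clip(src[:, 1].round().astype(int), 0, h - 1)
    out[p[inside][:, 1].astype(int), p[inside][:, 0].astype(int)] = img[sy, sx]
```

Translating a chin vertex toward the face centre and re-warping the triangles that share it shrinks the face contour, which is exactly what produces the deformed non-face pixels discussed below.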
  • although this achieves the face-slimming effect, it also causes deformation or unnaturalness in the non-face areas covered by the triangular mesh of the face-slimming algorithm (as shown in (c) of Figure 1).
  • the non-face area includes the background area around the original face area and the portion of the original face area vacated by the slimming.
  • the present disclosure proposes an image processing method and an image processing apparatus, which can perform restoration on an area deformed by a face-reduction operation after a face-reduction operation is performed on an image, so as to obtain a more natural face-reduction image.
  • the image processing method and the image processing apparatus according to the present disclosure will be described in detail with reference to FIGS. 2 to 7 .
  • FIG. 2 is a schematic diagram illustrating an implementation scenario of an image processing method and an image processing apparatus according to an exemplary embodiment of the present disclosure.
  • in a network live-broadcast system, the host can use the live-broadcast device 201 to shoot a live program and upload it to the server 202 through the client on the live-broadcast device 201, and the server 202 distributes the live program to the user terminals 203 and 204 of the users who enter the live room.
  • the clients on the user terminals 203 and 204 can then present the live program to the users watching the broadcast.
  • the live broadcast device 201 may be any device including a shooting function or a device capable of being connected with the shooting device, for example, a mobile phone, a portable computer, a tablet computer, a video camera, and the like.
  • the client used by the host during the live broadcast can perform face slimming on the faces in the video and/or images captured by the live-broadcast device 201, and upload the resulting slimmed live video and/or images to the server 202 for distribution to the user terminals 203 and 204, so that viewers see a slimmed and beautified image of the host. The image processing method and the image processing apparatus according to the present disclosure can therefore be applied to this live-broadcast scenario.
  • in addition to the live-broadcast scenario, the image processing method and image processing apparatus according to the present disclosure can be applied to any scenario in which face slimming may be performed, such as short-video recording, photographing, taking selfies, and the like.
  • FIG. 3 is a flowchart illustrating an image processing method according to an exemplary embodiment of the present disclosure.
  • in step 301, a face region in a first image may be identified.
  • the first image may be an image captured by photographing, or may be an image of a video obtained by capturing a video.
  • the first image may be obtained in real time from the photographing device, obtained from local storage or a local database as needed, or received from an external data source (e.g., the Internet, a server, a database, etc.) through an input device or transmission medium, which is not limited in the present disclosure.
  • the face area may be an area occupied by a face in the first image.
  • the face area may be an area including only a face part, or an area including a face part and related parts such as hair and accessories, which is not limited in the present disclosure.
  • any possible face recognition method can be used to identify the face region in the image, which is not limited in the present disclosure.
  • in step 302, face slimming may be performed on the face region in the first image based on a face-slimming algorithm to obtain a slimmed second image.
  • the triangulation-based face-slimming algorithm described above can be used to perform face slimming on the face region, but any other possible face-slimming algorithm can also be used, which is not limited in the present disclosure.
  • the second image may be an intermediate process image and may not be the actual output image.
  • FIG. 4 is a schematic diagram showing the area deformed by face slimming. (a) of FIG. 4 exemplarily shows the first image before face slimming is performed, and (b) of FIG. 4 exemplarily shows the second image after face slimming is performed.
  • as shown in (a) of FIG. 4, the first image may include a face region 401 and a background region 402.
  • the predetermined area 403 involved in performing face slimming on the face area using the face-slimming algorithm may include the face area 401 and a part of the background area 402.
  • the predetermined area 403 may be the area covered by the triangular mesh (as shown in (c) of FIG. 1).
  • as shown in (b) of FIG. 4, the second image may include a face region 401' and a background region 402', wherein the face region 401' is reduced by the face-slimming algorithm.
  • the background area 402' increases because the reduced part of the face area is filled with background pixels.
  • within the predetermined area 403', the area other than the slimmed face area 401' will be deformed; this area is referred to as the deformed area.
  • the deformed area may include a background area 404' around the original face area and an area 405' of the original face area vacated by the face slimming.
  • in step 303, inpainting may be performed on the deformed area in the second image resulting from the face slimming to obtain a repaired third image.
  • a deformed area resulting from performing face reduction may be first determined.
  • the deformed area may be determined as the area, within the predetermined area that includes the face area involved in the face slimming in the first image, other than the slimmed face area; for example, in (b) of FIG. 4, the deformed area is the area (404' + 405') of the predetermined area 403' other than the face area 401'.
  • the deformed region can also be determined by comparing the pixel values around the face region of the second image after face-lifting with the pixel values around the face region of the first image before face-lifting.
  • any possible method can be used to determine the deformation area, which is not limited in the present disclosure.
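As a concrete illustration of the pixel-comparison approach, the sketch below flags as deformed every pixel that differs between the first and second images outside the slimmed face area. This is a simplified assumption-laden sketch, not the method of the disclosure: it assumes pixel-aligned frames from a static camera, and the `face_box` rectangle standing in for the slimmed face area is a hypothetical parameter.

```python
import numpy as np

def deformed_mask(first, second, face_box, thresh=8):
    """Flag pixels that changed between the original and the slimmed frame;
    pixels inside the slimmed face box are excluded from the deformed area."""
    diff = np.abs(first.astype(int) - second.astype(int))
    if diff.ndim == 3:                  # RGB: take the largest channel change
        diff = diff.max(axis=2)
    mask = diff > thresh
    x0, y0, x1, y1 = face_box           # slimmed face area in the second frame
    mask[y0:y1, x0:x1] = False          # keep the new face area untouched
    return mask
```

The resulting boolean mask can then be handed to whichever inpainting or replacement step follows.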
  • inpainting may be performed on the deformed region by filling the deformed region with background pixels.
  • the inpainting can be performed on the deformed regions by an image inpainting algorithm or a method of background frame replacement.
  • the present disclosure is not limited to these repair methods, and any possible repair method can be used to perform repair on the deformed region. The following describes in detail how to perform inpainting on deformed regions through image inpainting algorithms or background frame replacement.
  • the deformed region may be filled with background pixels using an image inpainting algorithm, based on the deformed region and the first image or the second image.
  • image inpainting algorithms may include traditional inpainting algorithms (non-deep learning algorithms) and deep learning algorithms.
  • conventional inpainting algorithms may include patch-based methods and diffusion-based methods.
  • when the patch-based method is used, the deformed area can be filled with an image patch similar to the deformed area, found by searching the first image before the face slimming is performed.
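A toy version of such a patch search might look as follows. This is an illustrative sketch and not the disclosure's algorithm: it does brute-force sum-of-squared-differences matching on small grayscale arrays, and the helper names (`best_patch`, `patch_fill`) are hypothetical.

```python
import numpy as np

def best_patch(first, template, mask):
    """Find the location in the pre-slimming frame whose patch best matches
    the known (unmasked) pixels of `template`, by sum of squared differences."""
    ph, pw = template.shape
    h, w = first.shape
    best, best_xy = None, (0, 0)
    valid = ~mask                       # pixels of the template that are known
    for y in range(h - ph + 1):
        for x in range(w - pw + 1):
            cand = first[y:y + ph, x:x + pw]
            ssd = np.sum((cand[valid].astype(float) - template[valid]) ** 2)
            if best is None or ssd < best:
                best, best_xy = ssd, (y, x)
    return best_xy

def patch_fill(second, first, y0, x0, ph, pw, mask):
    """Copy the masked (deformed) pixels of the best-matching patch into the hole."""
    template = second[y0:y0 + ph, x0:x0 + pw].astype(float)
    by, bx = best_patch(first, template, mask)
    out = second.copy()
    region = out[y0:y0 + ph, x0:x0 + pw]
    region[mask] = first[by:by + ph, bx:bx + pw][mask]
    return out
```

Production patch-based inpainting (e.g. PatchMatch-style methods) uses far faster search strategies, but the matching idea is the same.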
  • when the diffusion-based method is used, the pixels at the edge of the deformed area can grow inward according to the properties of the corresponding area of the first image before the face slimming, and the entire deformed area can be filled by diffusion.
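The diffusion idea can be illustrated with a minimal Jacobi-style fill. This is a sketch under simplifying assumptions (grayscale image, hole not touching the border, fixed iteration count), not the disclosure's method:

```python
import numpy as np

def diffusion_fill(img, mask, iters=200):
    """Iteratively replace each masked pixel by the mean of its four
    neighbours, so boundary values diffuse inward and fill the hole."""
    out = img.astype(float).copy()
    for _ in range(iters):
        up    = np.roll(out, -1, axis=0)
        down  = np.roll(out,  1, axis=0)
        left  = np.roll(out, -1, axis=1)
        right = np.roll(out,  1, axis=1)
        out[mask] = ((up + down + left + right) / 4.0)[mask]
    return out
```

Because only masked pixels are updated, the known surroundings act as fixed boundary conditions; this works well for smooth backgrounds but blurs texture, which is why patch-based or learned methods are preferred for textured areas.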
  • Deep learning algorithms may include convolutional neural network (CNN) based methods, generative adversarial network (GAN) based methods, recurrent neural network (RNN) based methods, and the like.
  • a mask may be generated based on the deformed area, and based on the second image after performing face reduction and the generated mask, image inpainting may be performed on the deformed area using a deep learning algorithm.
  • specifically, the slimmed second image and the generated mask may be input into a model based on the deep learning algorithm, and the model outputs the third image in which the deformed area has been repaired.
  • a simpler and faster background frame replacement method can be used to repair the deformed region.
  • the background image without the human face and the first image containing the human face can be obtained separately.
  • multiple frames of video images can be continuously captured, and the shooting device can first capture image frames with only background to obtain background frame images, and then allow the user to shoot in front of the shooting device to obtain video frame images (for example, the first image).
  • FIG. 5 is a schematic diagram illustrating a background frame replacement method according to an exemplary embodiment of the present disclosure.
  • a pure background image 501 having the same scene as the first image can be acquired.
  • the pure background image 501 is searched for the area 502 (shown by hatching) that corresponds to the deformed area (404'+405') shown in (b) of FIG. 4.
  • the pixel values of the pixels in the deformed area (404'+405') are replaced with the pixel values of the pixels in the searched area 502.
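Under the assumption of a static camera, so that the pure background frame is pixel-aligned with the live frame, this replacement step reduces to a masked copy. A minimal NumPy sketch, with all names illustrative:

```python
import numpy as np

def replace_from_background(second, background, deformed):
    """Overwrite the deformed pixels of the slimmed frame with the
    co-located pixels of the pure background frame."""
    out = second.copy()
    out[deformed] = background[deformed]
    return out
```

If the camera moves between the background frame and the live frame, the frames would first need to be registered (aligned), which is why this method is described as the simpler, faster option for fixed-camera scenarios.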
  • FIG. 6 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment of the present disclosure.
  • an image processing apparatus 600 may include a recognition unit 601 , a face reduction unit 602 , and a repair unit 603 .
  • the recognition unit 601 can identify the face region in the first image.
  • the first image may be an image captured by photographing, or may be an image of a video obtained by capturing a video.
  • the first image may be obtained in real-time from the photographing device, or may be obtained from local storage or a local database as needed, or received from an external data source (eg, the Internet, a server, a database, etc.) through an input device or transmission medium, the present There is no restriction on this disclosure.
  • the face area may be an area occupied by a face in the first image.
  • the face area may be an area including only a face part, or an area including a face part and related parts such as hair and accessories, which is not limited in the present disclosure.
  • the recognition unit 601 may use any possible face recognition method to recognize the face area in the image, which is not limited in the present disclosure.
  • the face-reduction unit 602 may perform face-reduction on the face region in the first image based on a face-reduction algorithm to obtain a face-reduced second image.
  • the face-reduction unit 602 may use the above-mentioned triangulation method to perform face-reduction on the face region, but may also use any other possible face-reduction algorithms to perform face-reduction, which is not limited in the present disclosure.
  • the second image may be an intermediate process image and may not be the actual output image.
  • after face slimming is performed, the area surrounding the slimmed face may be deformed; this area may include the background area around the original face area and the portion of the original face area vacated by the slimming. Therefore, the inpainting unit 603 may perform inpainting on the deformed area in the second image resulting from the face slimming, to obtain a repaired third image.
  • the repairing unit 603 may first determine the deformed area resulting from the face slimming. For example, the repairing unit 603 may determine the deformed area as the area, within the predetermined area that includes the face area involved in the face slimming in the first image, other than the slimmed face area; in (b) of FIG. 4, for example, the deformed area is the area (404' + 405') of the predetermined area 403' other than the face area 401'. For another example, the repairing unit 603 may determine the deformed area by comparing the pixel values around the face region of the second image after face slimming with the pixel values around the face region of the first image before face slimming. Of course, any possible method can be used to determine the deformed area, which is not limited in the present disclosure.
  • the repairing unit 603 may perform repairing on the deformed region by filling the deformed region with background pixels.
  • the inpainting unit 603 may perform inpainting on the deformed region through an image inpainting algorithm or a background frame replacement method.
  • the present disclosure is not limited to these repairing methods, and the repairing unit 603 may also use any possible repairing method to repair the deformed area. The following describes in detail how to perform inpainting on deformed regions through image inpainting algorithms or background frame replacement.
  • the repairing unit 603 may fill the deformed region with background pixels using an image inpainting algorithm, based on the deformed region and the first image or the second image.
  • image inpainting algorithms may include traditional inpainting algorithms (non-deep learning algorithms) and deep learning algorithms.
  • conventional inpainting algorithms may include patch-based methods and diffusion-based methods.
  • when the patch-based method is used, the repairing unit 603 may fill the deformed area with an image patch similar to the deformed area, found by searching the first image before the face slimming is performed.
  • when the repairing unit 603 uses the diffusion-based method, the pixels at the edge of the deformed area may grow inward according to the properties of the corresponding area of the first image before the face slimming, and the entire deformed area may be filled by diffusion.
  • the deep learning algorithm may include a convolutional neural network (CNN)-based method, a generative adversarial network (GAN)-based method, a recurrent neural network (RNN)-based method, and the like.
  • the repairing unit 603 may generate a mask based on the deformed area, and perform image inpainting on the deformed area by using a deep learning algorithm based on the second image after performing face reduction and the generated mask.
  • specifically, the repairing unit 603 may input the slimmed second image and the generated mask into a model based on the deep learning algorithm, and the model outputs the third image in which the deformed area has been repaired.
  • the repairing unit 603 may use a simpler and faster background frame replacement method to repair the deformed region.
  • the background image without the human face and the first image containing the human face can be obtained separately.
  • multiple frames of video images can be continuously captured, and the shooting device can first capture image frames with only background to obtain background frame images, and then allow the user to shoot in front of the shooting device to obtain video frame images (for example, the first image).
  • the repairing unit 603 may fill the deformed area with background pixels based on the background image and the second image. For example, the repairing unit 603 may search for the same area as the deformed area in the background image, and replace the pixel value of the pixel in the deformed area with the pixel value of the pixel of the searched area in the background image.
  • FIG. 7 is a block diagram of an electronic device 700 according to an exemplary embodiment of the present disclosure.
  • the electronic device 700 includes at least one memory 701 and at least one processor 702.
  • the at least one memory 701 stores a computer-executable instruction set.
  • when the computer-executable instruction set is executed by the at least one processor 702, the at least one processor 702 is caused to perform an image processing method according to an exemplary embodiment of the present disclosure.
  • the electronic device 700 may be a PC computer, a tablet device, a personal digital assistant, a smart phone, or other device capable of executing the above set of instructions.
  • the electronic device 700 is not necessarily a single electronic device, but can also be a collection of any device or circuit capable of individually or jointly executing the above-mentioned instructions (or instruction sets).
  • Electronic device 700 may also be part of an integrated control system or system manager, or may be configured as a portable electronic device that interfaces locally or remotely (eg, via wireless transmission).
  • processor 702 may include a central processing unit (CPU), graphics processing unit (GPU), programmable logic device, special purpose processor system, microcontroller, or microprocessor.
  • processors may also include analog processors, digital processors, microprocessors, multi-core processors, processor arrays, network processors, and the like.
  • Processor 702 may execute instructions or code stored in memory 701, which may also store data. Instructions and data may also be sent and received over a network via a network interface device, which may employ any known transport protocol.
  • the memory 701 may be integrated with the processor 702, eg, RAM or flash memory arranged within an integrated circuit microprocessor or the like. Furthermore, memory 701 may comprise a separate device, such as an external disk drive, a storage array, or any other storage device that may be used by a database system. The memory 701 and the processor 702 may be operatively coupled, or may communicate with each other, eg, through I/O ports, network connections, etc., to enable the processor 702 to read files stored in the memory.
  • the electronic device 700 may also include a video display (such as a liquid crystal display) and a user interaction interface (such as a keyboard, mouse, touch input device, etc.). All components of electronic device 700 may be connected to each other via a bus and/or network.
  • a computer-readable storage medium storing instructions, wherein the instructions, when executed by at least one processor, cause the at least one processor to perform an image processing method according to the present disclosure.
  • Examples of the computer-readable storage medium herein include: read-only memory (ROM), random-access programmable read-only memory (PROM), electrically erasable programmable read-only memory (EEPROM), random-access memory (RAM), dynamic random-access memory (DRAM), static random-access memory (SRAM), flash memory, non-volatile memory, CD-ROM, CD-R, CD+R, CD-RW, CD+RW, DVD-ROM, DVD-R, DVD+R, DVD-RW, DVD+RW, DVD-RAM, BD-ROM, BD-R, BD-R LTH, BD-RE, Blu-ray or optical disc storage, hard disk drive (HDD), solid-state drive (SSD), and card memory (such as a multimedia card or a Secure Digital (SD) card), and the like.
  • the computer program in the above-mentioned computer readable storage medium can be executed in an environment deployed in a computer device such as a client, a host, a proxy device, a server, etc.
  • the computer program and any associated data, data files and data structures are distributed over networked computer systems so that the computer programs and any associated data, data files and data structures are stored, accessed and executed in a distributed fashion by one or more processors or computers.
  • a computer program product wherein instructions in the computer program product can be executed by a processor of a computer device to complete the image processing method according to the exemplary embodiment of the present disclosure.
  • a more natural and more realistic face-slimming effect can be obtained by repairing the regions that are distorted and deformed by the face slimming.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to an image processing method and an image processing device. The image processing method comprises: recognizing a face region in a first image; performing face slimming on the face region in the first image on the basis of a face-slimming algorithm to obtain a second image after face slimming; and repairing a deformed region generated by the face slimming in the second image to obtain a repaired third image.
PCT/CN2021/123080 2020-10-30 2021-10-11 Image processing method and image processing device WO2022089185A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011192273.2A CN113012031A (zh) 2020-10-30 2020-10-30 图像处理方法和图像处理装置
CN202011192273.2 2020-10-30

Publications (1)

Publication Number Publication Date
WO2022089185A1 (fr) 2022-05-05

Family

ID=76382998

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/123080 WO2022089185A1 (fr) 2020-10-30 2021-10-11 Image processing method and image processing device

Country Status (2)

Country Link
CN (1) CN113012031A (fr)
WO (1) WO2022089185A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113012031A (zh) * 2020-10-30 2021-06-22 北京达佳互联信息技术有限公司 图像处理方法和图像处理装置
CN113435445A (zh) * 2021-07-05 2021-09-24 深圳市鹰硕技术有限公司 图像过优化自动纠正方法以及装置

Citations (4)

Publication number Priority date Publication date Assignee Title
US20050275723A1 (en) * 2004-06-02 2005-12-15 Sezai Sablak Virtual mask for use in autotracking video camera images
CN109410138A (zh) * 2018-10-16 2019-03-01 北京旷视科技有限公司 修饰双下巴的方法、装置和系统
CN110675420A (zh) * 2019-08-22 2020-01-10 华为技术有限公司 一种图像处理方法和电子设备
CN113012031A (zh) * 2020-10-30 2021-06-22 北京达佳互联信息技术有限公司 图像处理方法和图像处理装置

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
WO2019090502A1 (fr) * 2017-11-08 2019-05-16 深圳传音通讯有限公司 Procédé et système de capture d'images basés sur un terminal intelligent
CN110706179B (zh) * 2019-09-30 2023-11-10 维沃移动通信有限公司 一种图像处理方法及电子设备

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
US20050275723A1 (en) * 2004-06-02 2005-12-15 Sezai Sablak Virtual mask for use in autotracking video camera images
CN109410138A (zh) * 2018-10-16 2019-03-01 北京旷视科技有限公司 修饰双下巴的方法、装置和系统
CN110675420A (zh) * 2019-08-22 2020-01-10 华为技术有限公司 一种图像处理方法和电子设备
CN113012031A (zh) * 2020-10-30 2021-06-22 北京达佳互联信息技术有限公司 图像处理方法和图像处理装置

Also Published As

Publication number Publication date
CN113012031A (zh) 2021-06-22


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21884914; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21884914; Country of ref document: EP; Kind code of ref document: A1)
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10.08.2023))