CN113012031A - Image processing method and image processing apparatus - Google Patents



Publication number
CN113012031A
Authority
CN
China
Prior art keywords
image
face
region
thinning
deformed
Prior art date
Legal status
Pending
Application number
CN202011192273.2A
Other languages
Chinese (zh)
Inventor
赵明菲
闻兴
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202011192273.2A priority Critical patent/CN113012031A/en
Publication of CN113012031A publication Critical patent/CN113012031A/en
Priority to PCT/CN2021/123080 priority patent/WO2022089185A1/en
Pending legal-status Critical Current

Classifications

    • G06T3/04
    • G06T5/77
    • G06T5/80
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Abstract

The present disclosure relates to an image processing method and an image processing apparatus. The image processing method includes: identifying a face region in a first image; performing face thinning on the face region in the first image based on a face-thinning algorithm to obtain a thinned second image; and repairing a deformed region in the second image produced by the face thinning to obtain a repaired third image.

Description

Image processing method and image processing apparatus
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and an image processing apparatus.
Background
In application scenarios such as live streaming, short videos, and photography, face thinning is a very common technique: it makes a user's face appear slimmer and achieves a better subjective effect. However, when the face is thinned, objects around the face are distorted and deformed, so that the thinning effect looks unnatural and unattractive.
Disclosure of Invention
The present disclosure provides an image processing method and an image processing apparatus to solve at least the problems in the related art described above; the disclosure, however, is not required to solve any of the problems described above.
According to a first aspect of embodiments of the present disclosure, there is provided an image processing method, including: identifying a face region in a first image; performing face thinning on the face region in the first image based on a face-thinning algorithm to obtain a thinned second image; and repairing a deformed region in the second image produced by the face thinning to obtain a repaired third image.
Optionally, the deformed region may be the portion of a predetermined region that lies outside the face region after face thinning, wherein the predetermined region is a region of the first image that contains the face region on which face thinning is performed.
Optionally, repairing the deformed region in the second image produced by the face thinning may include: performing the repair by filling the deformed region with background pixels.
Optionally, performing the repair by filling the deformed region with background pixels may include: filling the deformed region with background pixels using an image inpainting algorithm, based on the deformed region and the first image or the second image.
Optionally, performing the repair by filling the deformed region with background pixels may include: filling the deformed region with background pixels based on a background image and the second image, wherein the background image is a pure background image of the same scene as the first image.
Optionally, filling the deformed region with background pixels based on the background image and the second image may include: searching the background image for the region corresponding to the deformed region; and replacing the pixel values of the pixels in the deformed region with the pixel values of the pixels in that region of the background image.
According to a second aspect of embodiments of the present disclosure, there is provided an image processing apparatus, including: a recognition unit configured to identify a face region in a first image; a face-thinning unit configured to perform face thinning on the face region in the first image based on a face-thinning algorithm to obtain a thinned second image; and a repair unit configured to repair a deformed region in the second image produced by the face thinning to obtain a repaired third image.
Optionally, the deformed region may be the portion of a predetermined region that lies outside the face region after face thinning, wherein the predetermined region is a region of the first image that contains the face region on which face thinning is performed.
Optionally, the repair unit may be configured to perform the repair by filling the deformed region with background pixels.
Optionally, the repair unit may be configured to fill the deformed region with background pixels using an image inpainting algorithm, based on the deformed region and the first image or the second image.
Optionally, the repair unit may be configured to fill the deformed region with background pixels based on a background image and the second image, wherein the background image is a pure background image of the same scene as the first image.
Optionally, the repair unit may be configured to: search the background image for the region corresponding to the deformed region; and replace the pixel values of the pixels in the deformed region with the pixel values of the pixels in that region of the background image.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: at least one processor; at least one memory storing computer-executable instructions, wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to perform an image processing method according to the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium storing instructions, characterized in that the instructions, when executed by at least one processor, cause the at least one processor to perform an image processing method according to the present disclosure.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product, instructions in which are executable by a processor of a computer device to perform an image processing method according to the present disclosure.
The technical solutions provided by the embodiments of the present disclosure bring at least the following beneficial effects:
according to the image processing method and the image processing apparatus of the present disclosure, a more natural and realistic face-thinning effect can be obtained by repairing the regions distorted and deformed by the face thinning.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a schematic diagram showing a face-thinning algorithm.
Fig. 2 is a schematic diagram illustrating an implementation scenario of an image processing method and an image processing apparatus according to an exemplary embodiment of the present disclosure.
Fig. 3 is a flowchart illustrating an image processing method according to an exemplary embodiment of the present disclosure.
Fig. 4 is a schematic view showing a deformed region due to face thinning.
Fig. 5 is a schematic diagram illustrating a background frame replacement method according to an exemplary embodiment of the present disclosure.
Fig. 6 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment of the present disclosure.
Fig. 7 is a block diagram of an electronic device 700 according to an example embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The embodiments described in the following examples do not represent all embodiments consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Herein, the expression "at least one of the items" covers the following three parallel cases: "any one of the items", "a combination of any plural ones of the items", and "all of the items". For example, "includes at least one of A and B" covers the following three parallel cases: (1) includes A; (2) includes B; (3) includes A and B. For another example, "perform at least one of step one and step two" covers the following three parallel cases: (1) perform step one; (2) perform step two; (3) perform step one and step two.
Existing face-thinning algorithms are mainly based on detecting facial keypoints, densifying those keypoints, and finally achieving the thinning effect by triangulation. Fig. 1 is a schematic diagram showing a face-thinning algorithm. As shown in fig. 1, the algorithm may proceed as follows: (1) as shown in fig. 1(a), basic facial keypoint information is first obtained, mainly 106 keypoints covering the eyebrows, eyes, nose, mouth, and outer face contour; (2) based on the 106 detected keypoints, the keypoints are densified by inserting additional points, for example in the forehead area and in a bounding area around the face, so that the keypoints cover the whole face region; (3) as shown in fig. 1(c), a triangle mesh over the whole face is constructed from the densified keypoints, implementing a Delaunay triangulation of the face region; the triangulation divides the face into non-overlapping triangular regions, and region transformation is then performed to achieve the thinning effect. During region transformation, the mesh vertices are translated, the translated vertices are mapped to the corresponding texture coordinates, and drawing and rendering are performed through OpenGL or D3D, deforming the whole connected triangle mesh. This achieves the face-thinning effect, but it also distorts the non-face regions covered by the mesh (as shown in fig. 1(c)), which looks unnatural. These non-face regions include the background around the original face region and the parts of the original face region removed by the thinning.
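The mesh-and-translate stage described above can be sketched as follows. This is only a minimal illustration, not the patented implementation: the toy keypoints, the contour indices, and the `strength` parameter are invented for the example, and the texture-coordinate warping and OpenGL/D3D rendering steps are omitted.

```python
# Sketch of the triangulation-based face-thinning pipeline: triangulate
# the keypoints, then translate the outer-contour vertices inward.
import numpy as np
from scipy.spatial import Delaunay

def thin_face(keypoints, contour_idx, center, strength=0.2):
    """Move outer-contour keypoints toward the face center; the original
    Delaunay mesh topology is reused to drive a per-triangle warp."""
    tri = Delaunay(keypoints)                 # triangulate the dense keypoints
    moved = keypoints.copy()
    # Translate only the contour vertices inward; interior points stay fixed.
    moved[contour_idx] += strength * (center - keypoints[contour_idx])
    return tri.simplices, moved

# Toy "face": 8 contour points on a unit circle plus a center point.
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
pts = np.vstack([np.c_[np.cos(angles), np.sin(angles)], [[0.0, 0.0]]])
simplices, warped = thin_face(pts, contour_idx=np.arange(8),
                              center=np.array([0.0, 0.0]), strength=0.25)
# Contour points end up 25% closer to the center (radius 1.0 -> 0.75).
print(np.allclose(np.linalg.norm(warped[:8], axis=1), 0.75))   # → True
```

In a real renderer each triangle of the mesh would then be drawn with its vertices at the moved positions but its texture sampled at the original positions, which is what produces both the thinning and the surrounding distortion.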
To solve the above problem, the present disclosure proposes an image processing method and an image processing apparatus that, after face thinning is performed on an image, repair the regions deformed by the face-thinning operation so as to obtain a more natural thinned image. Hereinafter, the image processing method and the image processing apparatus according to the present disclosure are described in detail with reference to figs. 2 to 7.
Fig. 2 is a schematic diagram illustrating an implementation scenario of an image processing method and an image processing apparatus according to an exemplary embodiment of the present disclosure.
Referring to fig. 2, in a webcast system, an anchor may use a live-streaming device 201 to capture a live program and upload it through the client on the device 201 to a server 202 hosting the live room; the server 202 then distributes the program to the clients of the user terminals 203 and 204 that enter the anchor's live room, presenting it to the viewing users. Here, the live-streaming device 201 may be any device that includes a photographing function or that can connect to a photographing device, for example a mobile phone, a portable computer, a tablet computer, or a video camera. To make the anchor's image more attractive to viewers, the client used by the anchor may perform face thinning on the face portions of the video and/or images captured by the device 201 and upload the live program containing the thinned video and/or images to the server 202, which distributes it to each user terminal 203 or 204, so that viewers see the anchor with a slimmer face. The image processing method and the image processing apparatus according to the present disclosure can therefore be applied to this live-streaming scenario.
Furthermore, in addition to live-streaming scenarios, the image processing method and the image processing apparatus according to the present disclosure may be applied to any scenario in which face thinning can be performed, such as short-video recording, photographing, and self-photographing.
Fig. 3 is a flowchart illustrating an image processing method according to an exemplary embodiment of the present disclosure.
Referring to fig. 3, in step 301, a face region in a first image may be identified. Here, the first image may be a captured photograph or a frame of a captured video. The first image may be obtained from the photographing device in real time, read from a local storage or a local database as needed, or received from an external data source (e.g., the Internet, a server, or a database) through an input device or a transmission medium; the present disclosure is not limited in this respect. The face region may be the region occupied by the face in the first image; it may include only the face itself, or the face together with related parts such as hair and accessories, which the present disclosure also does not limit. Any suitable face recognition method may be used to identify the face region in the image, and the present disclosure is not limited in this respect either.
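Since the disclosure leaves the detection method open, the output of step 301 can be represented simply as a region of the image. As a trivial stand-in for a real detector, the sketch below locates the bounding box of a binary face mask; the mask and the `face_bounding_box` helper are invented for illustration, and a production system would obtain the mask from a trained face detector.

```python
# Minimal stand-in for step 301: reduce a binary face mask to a
# bounding box (row0, row1, col0, col1). Not a real face detector.
import numpy as np

def face_bounding_box(mask):
    """Return the inclusive bounding box of the True region in `mask`."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return int(r0), int(r1), int(c0), int(c1)

mask = np.zeros((10, 10), dtype=bool)
mask[2:6, 3:8] = True                     # synthetic "face" blob
print(face_bounding_box(mask))            # → (2, 5, 3, 7)
```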
In step 302, face thinning may be performed on the face region in the first image based on a face-thinning algorithm to obtain a thinned second image. For example, the face thinning may be performed using the triangulation method described above, but any other face-thinning algorithm may be used instead; the present disclosure is not limited in this respect. Note that the second image may be an intermediate image rather than an image that is actually output.
Performing face thinning on a face region with a face-thinning algorithm inevitably distorts the region surrounding the face to varying degrees. This surrounding region may include the background around the original face region and the parts of the original face region removed by the thinning. Fig. 4 is a schematic view showing a deformed region caused by face thinning: fig. 4(a) shows a first image before face thinning, and fig. 4(b) shows the second image after face thinning. As shown in fig. 4(a), before face thinning the first image may include a face region 401 and a background region 402, and the predetermined region 403 on which the face-thinning algorithm will operate may include the face region 401 and part of the background region 402. For example, when a triangulation algorithm is used, the predetermined region 403 may be the area covered by the triangle mesh (as shown in fig. 1(c)). As shown in fig. 4(b), after face thinning the second image may include a face region 401' and a background region 402', wherein the face region 401' has shrunk due to the thinning and the background region 402' has grown by the background pixels filled into the vacated portion of the face region. The part of the predetermined region 403' outside the thinned face region 401' will be deformed and may be referred to as the deformed region. The deformed region may include the background region 404' around the original face region and the region 405' removed from the original face region by the thinning algorithm.
Referring back to fig. 3, in step 303, the deformed region in the second image produced by the face thinning may therefore be repaired to obtain a repaired third image.
According to an exemplary embodiment of the present disclosure, the deformed region produced by the face thinning may first be determined. For example, the deformed region may be determined as the portion of the predetermined region (the region of the first image containing the face region on which thinning was performed) that lies outside the thinned face region; in fig. 4(b), the deformed region is the portion (404' + 405') of the predetermined region 403' outside the face region 401'. As another example, the deformed region may be determined by comparing the pixel values around the face region in the second image, after thinning, with those around the face region in the first image, before thinning. Of course, any suitable method may be used to determine the deformed region, and the present disclosure is not limited in this respect.
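The first determination method above (predetermined region minus thinned face region) reduces to simple mask arithmetic. The region shapes below are illustrative placeholders for regions 403', 401', 404', and 405' of fig. 4(b).

```python
# Deformed region = predetermined (mesh) region minus the thinned face.
import numpy as np

h = w = 12
predetermined = np.zeros((h, w), dtype=bool)   # stand-in for region 403'
predetermined[2:10, 2:10] = True               # 8x8 mesh area
face_after = np.zeros((h, w), dtype=bool)      # stand-in for region 401'
face_after[4:8, 4:8] = True                    # 4x4 thinned face

deformed = predetermined & ~face_after         # regions 404' + 405'
print(int(deformed.sum()))                     # → 48  (8*8 - 4*4)
```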
According to an exemplary embodiment of the present disclosure, the deformed region may be repaired by filling it with background pixels. For example, the repair may be performed with an image inpainting algorithm or by background frame replacement. The present disclosure is not limited to these repair methods, however, and any suitable method may be used. Repairing the deformed region by image inpainting or by background frame replacement is described in detail below.
According to an exemplary embodiment of the present disclosure, the deformed region may be filled with background pixels using an image inpainting algorithm, based on the deformed region and the first image or the second image. Image inpainting algorithms include conventional (non-deep-learning) algorithms and deep-learning algorithms. Conventional inpainting algorithms include image-block-based (patch-based) methods and diffusion-based methods. With a patch-based method, the deformed region may be filled by searching the first image, before thinning, for image blocks similar to the deformed region. With a diffusion-based method, the pixels at the edges of the deformed region can be grown inward according to the properties of the corresponding region of the first image before thinning, diffusing until the entire deformed region is filled. Deep-learning algorithms include methods based on convolutional neural networks (CNNs), generative adversarial networks (GANs), recurrent neural networks (RNNs), and the like. A mask may be generated from the deformed region, and inpainting may then be performed on the deformed region with a deep-learning algorithm based on the thinned second image and the generated mask; specifically, the thinned second image and the mask may be input to a deep-learning model, which outputs the third image with the deformed region repaired.
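The diffusion-based variant above can be sketched as an iterative fill: each round, unknown pixels bordering known pixels take the average of their known 4-neighbours, growing the edge inward. This toy version (constant image, square hole, invented `iters` parameter) only illustrates the idea; a production system would use a library inpainter or a learned model.

```python
# Minimal diffusion-style inpainting: grow known border pixels inward.
import numpy as np

def diffuse_fill(image, mask, iters=50):
    """Fill image pixels where `mask` is True from surrounding pixels."""
    img = image.astype(float).copy()
    known = ~mask
    for _ in range(iters):
        padded = np.pad(np.where(known, img, 0.0), 1)
        kpad = np.pad(known.astype(float), 1)
        # Sum and count of the known 4-neighbours of every pixel.
        nsum = (padded[:-2, 1:-1] + padded[2:, 1:-1]
                + padded[1:-1, :-2] + padded[1:-1, 2:])
        ncnt = (kpad[:-2, 1:-1] + kpad[2:, 1:-1]
                + kpad[1:-1, :-2] + kpad[1:-1, 2:])
        grow = (~known) & (ncnt > 0)      # unknown pixels on the frontier
        if not grow.any():
            break
        img[grow] = nsum[grow] / ncnt[grow]
        known = known | grow
    return img

img = np.full((8, 8), 7.0)
hole = np.zeros((8, 8), dtype=bool)
hole[2:6, 2:6] = True
img[hole] = 0.0                        # corrupted / deformed pixels
out = diffuse_fill(img, hole)
print(np.allclose(out, 7.0))           # → True (hole filled from surroundings)
```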
According to an exemplary embodiment of the present disclosure, when a pure background image of the same scene as the first image can be acquired, a simpler and faster background frame replacement method may be used to repair the deformed region. For example, when the photographing device that captures the first image is stationary and its position is fixed, the background image without the person and the first image with the person may be captured separately. In a video scenario, for instance, multiple frames are captured continuously: the device may first capture background-only frames to obtain a background frame image, after which the user steps in front of the device to capture the video frames (e.g., the first image).
After the background image is acquired, the deformed region may be filled with background pixels based on the background image and the second image. For example, the region of the background image corresponding to the deformed region may be searched for, and the pixel values of the pixels in the deformed region may be replaced with the pixel values of the pixels in that region of the background image. Fig. 5 is a schematic diagram illustrating a background frame replacement method according to an exemplary embodiment of the present disclosure. Referring to fig. 5, a pure background image 501 of the same scene as the first image may be acquired. The pure background image 501 is searched for the region 502 (shown hatched) corresponding to the deformed region (404' + 405') shown in fig. 4(b), and the pixel values of the pixels in the deformed region (404' + 405') are replaced with the pixel values of the pixels in the found region 502.
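With a fixed camera the corresponding region is simply the co-located pixels, so the replacement above reduces to a masked copy. The arrays below are illustrative stand-ins for the thinned frame, the clean background frame, and the deformed-region mask.

```python
# Background-frame replacement: copy co-located pixels from a clean
# background frame into the deformed region of the thinned image.
import numpy as np

def replace_with_background(second_image, background, deformed_mask):
    """Overwrite deformed pixels with the co-located background pixels."""
    repaired = second_image.copy()
    repaired[deformed_mask] = background[deformed_mask]
    return repaired

second = np.full((6, 6), 5)                # thinned frame with artifacts
background = np.arange(36).reshape(6, 6)   # clean background frame
mask = np.zeros((6, 6), dtype=bool)
mask[1:3, 1:3] = True                      # deformed region (404' + 405')
out = replace_with_background(second, background, mask)
print(out[1, 1], out[0, 0])                # → 7 5
```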
Fig. 6 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment of the present disclosure.
Referring to fig. 6, an image processing apparatus 600 according to an exemplary embodiment of the present disclosure may include a recognition unit 601, a face-thinning unit 602, and a repair unit 603.
The recognition unit 601 may identify a face region in a first image. Here, the first image may be a captured photograph or a frame of a captured video. The first image may be obtained from the photographing device in real time, read from a local storage or a local database as needed, or received from an external data source (e.g., the Internet, a server, or a database) through an input device or a transmission medium; the present disclosure is not limited in this respect. The face region may be the region occupied by the face in the first image; it may include only the face itself, or the face together with related parts such as hair and accessories, which the present disclosure also does not limit. The recognition unit 601 may use any suitable face recognition method to identify the face region, and the present disclosure is not limited in this respect either.
The face-thinning unit 602 may perform face thinning on the face region in the first image based on a face-thinning algorithm to obtain a thinned second image. For example, the face-thinning unit 602 may use the triangulation method described above, but any other face-thinning algorithm may be used instead; the present disclosure is not limited in this respect. Note that the second image may be an intermediate image rather than an image that is actually output.
Performing face thinning on a face region with a face-thinning algorithm inevitably distorts the region surrounding the face to varying degrees. This surrounding region may include the background around the original face region and the parts of the original face region removed by the thinning. The repair unit 603 may therefore repair the deformed region in the second image produced by the face thinning to obtain a repaired third image.
According to an exemplary embodiment of the present disclosure, the repair unit 603 may first determine the deformed region produced by the face thinning. For example, it may determine the deformed region as the portion of the predetermined region (the region of the first image containing the face region on which thinning was performed) that lies outside the thinned face region; in fig. 4(b), the deformed region is the portion (404' + 405') of the predetermined region 403' outside the face region 401'. As another example, the repair unit 603 may determine the deformed region by comparing the pixel values around the face region in the second image, after thinning, with those around the face region in the first image, before thinning. Of course, any suitable method may be used to determine the deformed region, and the present disclosure is not limited in this respect.
According to an exemplary embodiment of the present disclosure, the repair unit 603 may repair the deformed region by filling it with background pixels, for example with an image inpainting algorithm or by background frame replacement. The present disclosure is not limited to these repair methods, however, and the repair unit 603 may use any suitable method. Repairing the deformed region by image inpainting or by background frame replacement is described in detail below.
According to an exemplary embodiment of the present disclosure, the repair unit 603 may fill the deformed region with background pixels using an image inpainting algorithm, based on the deformed region and the first image or the second image. Image inpainting algorithms include conventional (non-deep-learning) algorithms and deep-learning algorithms. Conventional inpainting algorithms include image-block-based (patch-based) methods and diffusion-based methods. With a patch-based method, the repair unit 603 may fill the deformed region by searching the first image, before thinning, for image blocks similar to the deformed region. With a diffusion-based method, it may grow the pixels at the edges of the deformed region inward according to the properties of the corresponding region of the first image before thinning, diffusing until the entire deformed region is filled. Deep-learning algorithms include methods based on convolutional neural networks (CNNs), generative adversarial networks (GANs), recurrent neural networks (RNNs), and the like. The repair unit 603 may generate a mask from the deformed region and perform inpainting on the deformed region with a deep-learning algorithm based on the thinned second image and the generated mask; specifically, it may input the thinned second image and the mask to a deep-learning model, which outputs the third image with the deformed region repaired.
According to an exemplary embodiment of the present disclosure, when a pure background image of the same scene as the first image can be acquired, the repair unit 603 may use a simpler and faster background frame replacement method to repair the deformed region. For example, when the photographing device that captures the first image is stationary and its position is fixed, the background image without the person and the first image with the person may be captured separately. In a video scenario, for instance, multiple frames are captured continuously: the device may first capture background-only frames to obtain a background frame image, after which the user steps in front of the device to capture the video frames (e.g., the first image).
After the background image is acquired, the repair unit 603 may fill the deformed region with background pixels based on the background image and the second image. For example, the repair unit 603 may search the background image for the region corresponding to the deformed region and replace the pixel values of the pixels in the deformed region with the pixel values of the pixels in that region of the background image.
Fig. 7 is a block diagram of an electronic device 700 according to an example embodiment of the present disclosure.
Referring to fig. 7, the electronic device 700 includes at least one memory 701 and at least one processor 702. The at least one memory 701 stores a set of computer-executable instructions which, when executed by the at least one processor 702, cause the at least one processor 702 to perform an image processing method according to an exemplary embodiment of the present disclosure.
By way of example, the electronic device 700 may be a PC, a tablet device, a personal digital assistant, a smartphone, or any other device capable of executing the above set of instructions. Here, the electronic device 700 need not be a single electronic device, but can be any collection of devices or circuits capable of executing the above instructions (or instruction sets) individually or jointly. The electronic device 700 may also be part of an integrated control system or system manager, or may be configured as a portable electronic device that interfaces locally or remotely (e.g., via wireless transmission).
In the electronic device 700, the processor 702 may include a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a programmable logic device, a special purpose processor system, a microcontroller, or a microprocessor. By way of example, and not limitation, processors may also include analog processors, digital processors, microprocessors, multi-core processors, processor arrays, network processors, and the like.
The processor 702 may execute instructions or code stored in the memory 701, wherein the memory 701 may also store data. The instructions and data may also be transmitted or received over a network via a network interface device, which may employ any known transmission protocol.
The memory 701 may be integrated with the processor 702, for example, by having RAM or flash memory disposed within an integrated circuit microprocessor or the like. Further, memory 701 may comprise a stand-alone device, such as an external disk drive, storage array, or any other storage device usable by a database system. The memory 701 and the processor 702 may be operatively coupled or may communicate with each other, such as through I/O ports, network connections, etc., so that the processor 702 can read files stored in the memory.
In addition, the electronic device 700 may also include a video display (such as a liquid crystal display) and a user interaction interface (such as a keyboard, mouse, touch input device, etc.). All components of the electronic device 700 may be connected to each other via a bus and/or a network.
According to an exemplary embodiment of the present disclosure, there may also be provided a computer-readable storage medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform an image processing method according to the present disclosure. Examples of the computer-readable storage medium here include: read-only memory (ROM), random-access programmable read-only memory (PROM), electrically erasable programmable read-only memory (EEPROM), random-access memory (RAM), dynamic random-access memory (DRAM), static random-access memory (SRAM), flash memory, non-volatile memory, CD-ROM, CD-R, CD+R, CD-RW, CD+RW, DVD-ROM, DVD-R, DVD+R, DVD-RW, DVD+RW, DVD-RAM, BD-ROM, BD-R, BD-R LTH, BD-RE, Blu-ray or optical disc storage, a hard disk drive (HDD), a solid-state drive (SSD), card-type memory (such as a multimedia card, a Secure Digital (SD) card, or an eXtreme Digital (XD) card), magnetic tape, a floppy disk, a magneto-optical data storage device, an optical data storage device, and any other device configured to store a computer program and any associated data, data files, and data structures in a non-transitory manner and to provide them to a processor or computer so that the processor or computer can execute the computer program. The computer program in the computer-readable storage medium described above can run in an environment deployed in computer equipment such as a client, a host, a proxy device, or a server; furthermore, in one example, the computer program and any associated data, data files, and data structures are distributed across a networked computer system such that they are stored, accessed, and executed in a distributed fashion by one or more processors or computers.
According to an exemplary embodiment of the present disclosure, there may also be provided a computer program product in which instructions are executable by a processor of a computer device to perform an image processing method according to an exemplary embodiment of the present disclosure.
According to the image processing method and the image processing apparatus of the present disclosure, a more natural and realistic face-thinning effect can be obtained by repairing the region distorted and deformed by the face thinning.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An image processing method, comprising:
identifying a face region in a first image;
performing face thinning on the face area in the first image based on a face thinning algorithm to obtain a second image after face thinning;
repairing a deformed region in the second image, which results from the performing of the face thinning, to obtain a repaired third image.
2. The image processing method according to claim 1, wherein the deformed region is a region, of a predetermined region of the first image that includes the face region and on which the face thinning is performed, other than the thinned face region.
3. The image processing method according to claim 1, wherein the repairing of the deformed region in the second image resulting from the performing of the face thinning comprises:
repairing is performed by filling the deformed region with background pixels.
4. The image processing method of claim 3, wherein the performing a repair by filling the deformed region with background pixels comprises:
filling the deformed region with background pixels using an image inpainting algorithm, based on the deformed region and the first image.
5. The image processing method of claim 3, wherein the performing a repair by filling the deformed region with background pixels comprises:
filling the deformed region with background pixels based on the background image and the second image,
wherein the background image is a pure background image having the same scene as the first image.
6. The image processing method of claim 5, wherein the filling the deformed region with background pixels based on a background image and a second image comprises:
searching the background image for a region that is the same as the deformed region;
replacing pixel values of pixels in the deformed region with pixel values of pixels of that region in the background image.
7. An image processing apparatus characterized by comprising:
a recognition unit configured to recognize a face region in a first image;
a face thinning unit configured to perform face thinning on the face region in the first image based on a face thinning algorithm to obtain a thinned second image;
a repairing unit configured to repair the deformed region in the second image resulting from the performing of the face thinning, to generate a repaired third image.
8. The image processing apparatus according to claim 7, wherein the deformed region is a region, of a predetermined region of the first image that includes the face region and on which the face thinning is performed, other than the thinned face region.
9. An electronic device, comprising:
at least one processor;
at least one memory storing computer-executable instructions,
wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to perform the image processing method of any one of claims 1 to 6.
10. A computer-readable storage medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform the image processing method of any one of claims 1 to 6.

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011192273.2A CN113012031A (en) 2020-10-30 2020-10-30 Image processing method and image processing apparatus
PCT/CN2021/123080 WO2022089185A1 (en) 2020-10-30 2021-10-11 Image processing method and image processing device

Publications (1)

Publication Number Publication Date
CN113012031A true CN113012031A (en) 2021-06-22

Family

ID=76382998

Country Status (2)

Country Link
CN (1) CN113012031A (en)
WO (1) WO2022089185A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113435445A (en) * 2021-07-05 2021-09-24 深圳市鹰硕技术有限公司 Image over-optimization automatic correction method and device
WO2022089185A1 (en) * 2020-10-30 2022-05-05 北京达佳互联信息技术有限公司 Image processing method and image processing device

Citations (2)

Publication number Priority date Publication date Assignee Title
WO2019090502A1 (en) * 2017-11-08 2019-05-16 深圳传音通讯有限公司 Intelligent terminal-based image capturing method and image capturing system
CN110706179A (en) * 2019-09-30 2020-01-17 维沃移动通信有限公司 Image processing method and electronic equipment

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US9210312B2 (en) * 2004-06-02 2015-12-08 Bosch Security Systems, Inc. Virtual mask for use in autotracking video camera images
CN109410138B (en) * 2018-10-16 2021-10-01 北京旷视科技有限公司 Method, device and system for modifying double chin
CN110675420B (en) * 2019-08-22 2023-03-24 华为技术有限公司 Image processing method and electronic equipment
CN113012031A (en) * 2020-10-30 2021-06-22 北京达佳互联信息技术有限公司 Image processing method and image processing apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination