CN113469903A - Image processing method and device, electronic equipment and readable storage medium - Google Patents


Info

Publication number
CN113469903A
CN113469903A (application CN202110653986.2A)
Authority
CN
China
Prior art keywords
image
face
sub
target
mask
Prior art date
Legal status
Pending
Application number
CN202110653986.2A
Other languages
Chinese (zh)
Inventor
李巧
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202110653986.2A priority Critical patent/CN113469903A/en
Publication of CN113469903A publication Critical patent/CN113469903A/en
Priority to PCT/CN2022/097859 priority patent/WO2022258013A1/en
Pending legal-status Critical Current

Classifications

    • G06T5/77
    • G06T3/02
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20016: Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; pyramid transform
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; image merging
    • G06T2207/30: Subject of image; context of image processing
    • G06T2207/30196: Human being; person
    • G06T2207/30201: Face

Abstract

The application discloses an image processing method and apparatus, an electronic device, and a readable storage medium, belonging to the field of image processing. The method comprises the following steps: acquiring a target mask image of the face region in a first image; acquiring, from a reference face image set, a reference face image matching the target mask image, based on the binarized face mask image of the target mask image; performing image processing on the reference face image and the target mask image to obtain N reference face sub-images and N target mask sub-images; and fusing the image of a target area of a reference face sub-image with the image of the corresponding area of the matching target mask sub-image to generate a second image corresponding to the first image. The reference face image set comprises a plurality of face images subjected to skin processing, and the face skin value of the target area is greater than the face skin value of the area corresponding to the target area in the target mask image.

Description

Image processing method and device, electronic equipment and readable storage medium
Technical Field
The embodiments of the application relate to the field of image processing, and in particular to an image processing method and apparatus, an electronic device, and a readable storage medium.
Background
With the development of electronic device technology, users shoot with electronic devices more and more frequently, and their requirements on the quality of the captured images grow accordingly.
In the related art, when a camera photographs a person under varying illumination conditions, the image suffers from degradations such as noise, motion blur, and highlights, as well as from later beautification and denoising. The imaged face therefore loses fine skin and detail, while blemishes (such as acne marks), wrinkles, and residual noise leave the skin uneven, which greatly harms the perceived skin quality and attractiveness of the imaged face.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image processing method, an image processing apparatus, an electronic device, and a readable storage medium, which can solve the problem of poor skin quality in face imaging.
In order to solve the technical problem, the present application is implemented as follows:
In a first aspect, an embodiment of the present application provides an image processing method, including: acquiring a target mask image of the face region in a first image; acquiring, from a reference face image set, a reference face image matching the target mask image, based on the binarized face mask image of the target mask image; performing image processing on the reference face image and the target mask image to obtain N reference face sub-images and N target mask sub-images; and fusing the image of a target area of a reference face sub-image with the image of the corresponding area of the matching target mask sub-image to generate a second image corresponding to the first image. The reference face image set comprises a plurality of face images subjected to skin processing, and the face skin value of the target area is greater than the face skin value of the area corresponding to the target area in the target mask image.
In a second aspect, an embodiment of the present application further provides an image processing apparatus, including an acquisition module and an image processing module. The acquisition module is configured to acquire a target mask image of the face region in a first image; to acquire, from a reference face image set, a reference face image matching the target mask image, based on the binarized face mask image of the target mask image; and to perform image processing on the reference face image and the target mask image to obtain N reference face sub-images and N target mask sub-images. The image processing module is configured to fuse the image of a target area of a reference face sub-image acquired by the acquisition module with the image of the corresponding area of the matching target mask sub-image to generate a second image corresponding to the first image. The reference face image set comprises a plurality of face images subjected to skin processing, and the face skin value of the target area is higher than the face skin value of the area corresponding to the target area in the target mask image.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the image processing method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the application, after a first image containing a human face is obtained, a target mask image of the face region in the first image is acquired, and a reference face image matching the target mask image is obtained from a reference face image set based on the binarized face mask image of the target mask image. The reference face image and the target mask image are then processed to obtain N reference face sub-images and N target mask sub-images, and the image of the target area of a reference face sub-image is fused with the image of the corresponding area of the matching target mask sub-image. This removes poor texture and uneven transitions from the face, recovers fine and clear skin texture, and produces a second image with better skin quality, greatly improving the skin quality of the imaged face.
Drawings
Fig. 1 is a schematic diagram of an interface applied by an image processing method according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of an image pyramid provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 5 is a second schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar objects and do not necessarily describe a particular order or sequence. It should be understood that the data so used are interchangeable where appropriate, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein. Moreover, "first", "second", and the like do not limit the number of objects; for example, a first object may be one object or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and following objects.
The image processing method provided by the embodiment of the application can be applied to scenes for beautifying images containing human faces.
For example, consider a scene in which an image containing a human face is beautified. In the related art, when an electronic device is used for imaging, degradations such as noise, motion blur, and highlights, together with later beautification and denoising, cause the imaged face to lose good skin and detail; meanwhile, blemishes, wrinkles, and noise leave the face uneven, severely harming the perceived skin quality and attractiveness of the imaged face.
To solve this problem, the technical solution provided by the embodiments of the application fuses the flawed face skin in the captured image with an image of better skin, using a face-skin migration method based on multi-layer image pyramid fusion. This effectively removes poor textures and uneven transitions of the face, renders the imaged face skin fine and clear, and greatly improves the skin of the imaged face.
The image processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
As shown in fig. 1, an image processing method provided in an embodiment of the present application may include the following steps 201 to 204:
step 201, the image processing device acquires a target mask image of a face region in the first image.
For example, the first image may be an image captured by the electronic device, or an image stored in the electronic device and read by the electronic device.
For example, after acquiring the first image, the image processing apparatus obtains the image of the face region of the first image in the Red-Green-Blue (RGB) color space, and generates a mask image of the face region, namely the target mask image, using a face parsing algorithm.
The target mask image may be an image in which, after the contour of the face contained in the first image is obtained, everything outside the face contour is masked, for example set to a single uniform color, so that the image processing apparatus recognizes only the face region.
It can be understood that acquiring only the image of the face region of the first image eliminates interference from images of other regions and facilitates optimizing the image of the face region.
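The masking in step 201 can be sketched as follows. The patent does not name a concrete face parsing algorithm, so this sketch simply assumes a precomputed binary face-region mask (`face_mask`) and an illustrative uniform fill color, both of which are assumptions rather than details from the source:

```python
import numpy as np

def apply_face_mask(image_rgb: np.ndarray, face_mask: np.ndarray,
                    fill_color=(128, 128, 128)) -> np.ndarray:
    """Keep only the face region of an RGB image; everything outside the
    face contour is set to a single uniform color, as step 201 describes.

    image_rgb: H x W x 3 array; face_mask: H x W array, nonzero inside
    the face region. Both are illustrative assumptions.
    """
    out = np.empty_like(image_rgb)
    out[...] = fill_color                      # mask everything first
    inside = face_mask > 0
    out[inside] = image_rgb[inside]            # restore face pixels
    return out
```

A real pipeline would obtain `face_mask` from a segmentation or face-parsing model; here it is supplied directly.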
Step 202, the image processing device obtains a reference face image matched with the target mask image in the reference face image set based on the binarized face mask image of the target mask image.
Illustratively, image binarization is the process of setting the gray value of every pixel in an image to either 0 or 255, giving the whole image a distinct black-and-white appearance. The binarized face mask image can be understood as a black-and-white image of the face image: for example, all facial-feature regions of the target mask image are set to black and all non-feature regions to white.
Illustratively, the binarized face image contains only the facial features, that is, the eyes, nose, eyebrows, mouth, and so on of the face region. The binarized face mask image is mainly used for matching against the face images in the reference face image set.
Illustratively, the matching in step 202 above may be performed with a template matching algorithm.
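The original text for the matching criterion is garbled in translation, so the score below is an assumption: a simple intersection-over-union comparison of binarized feature masks (black pixels, value 0, mark the facial features) stands in for whatever template-matching measure the patent intends:

```python
import numpy as np

def best_reference_match(query_bw: np.ndarray, reference_bws: list) -> int:
    """Return the index of the reference face whose binarized feature
    mask best overlaps the query's mask. IoU is an illustrative stand-in
    for the patent's (unspecified) template-matching score."""
    def iou(a: np.ndarray, b: np.ndarray) -> float:
        fa, fb = a == 0, b == 0            # feature pixels are black (0)
        inter = np.logical_and(fa, fb).sum()
        union = np.logical_or(fa, fb).sum()
        return inter / union if union else 0.0
    scores = [iou(query_bw, r) for r in reference_bws]
    return int(np.argmax(scores))
```

All masks are assumed to share the same size; a production matcher would normalize scale and alignment first.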
Step 203, the image processing device performs image processing on the reference face image and the target mask image to obtain N reference face sub-images and N target mask sub-images.
For example, the image processing of the reference face image and the target mask image may include formatting the reference face image, then applying degradation processing and scaling to the formatted reference face image to generate the N reference face sub-images. The scaling ratio between every two adjacent reference face sub-images is the same, and each lower-resolution image is obtained by degrading the next higher-resolution image.
It should be noted that the N target mask sub-images are obtained from the target mask image in a similar manner; the target mask image may be processed in the same way as the reference face image above to obtain the N target mask sub-images.
Illustratively, the N reference face sub-images and the N target mask sub-images are in one-to-one correspondence. For example, with N = 5, the five reference face sub-images numbered 0 to 4 correspond, by equal number, to the five target mask sub-images numbered 0 to 4, and the resolution of the images decreases as the number increases.
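A minimal sketch of the degrade-and-scale chain of step 203, assuming a fixed halving ratio and a crude 2x2 box filter as the degradation step; in practice something like OpenCV's `cv2.pyrDown` (Gaussian blur plus downsample) would play this role:

```python
import numpy as np

def build_sub_images(image: np.ndarray, n: int) -> list:
    """Produce n sub-images of progressively lower resolution; each is
    obtained by degrading (box-blurring) and halving the previous one,
    mirroring step 203. Level 0 is the input itself."""
    levels = [image.astype(np.float32)]
    for _ in range(n - 1):
        prev = levels[-1]
        # crop to even size, then 2x2 box average + stride-2 downsample
        h, w = prev.shape[0] // 2 * 2, prev.shape[1] // 2 * 2
        p = prev[:h, :w]
        down = (p[0::2, 0::2] + p[1::2, 0::2]
                + p[0::2, 1::2] + p[1::2, 1::2]) / 4.0
        levels.append(down)
    return levels
```

The list returned is ordered from highest resolution (number 0) to lowest, matching the numbering convention described above.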
And step 204, the image processing device fuses the image of the target area of the reference face sub-image and the image of the corresponding area of the target mask sub-image to generate a second image corresponding to the first image.
The reference face image set comprises a plurality of face images subjected to skin processing, and the face skin value of the target area is higher than the face skin value of the area corresponding to the target area in the target mask image.
Illustratively, the image processing apparatus finds the matching reference face image in the reference face image set based on the binarized face mask image. The image of a region with poor skin quality in the target mask image can then be fused with the image of the corresponding good-skin region in the reference face image, yielding a second image with good skin quality.
In a possible implementation, the image processing apparatus may generate a first image pyramid from the reference face image and a second image pyramid from the target mask image, and then process the target mask image based on the two pyramids so as to migrate the skin of the reference face image, thereby obtaining a face image whose skin is better than that of the first image.
In this way, after the first image containing the face is obtained, the target mask image of the face region in the first image is acquired, and the reference face image matching the target mask image is obtained from the reference face image set based on the binarized face mask image of the target mask image. The reference face image and the target mask image are then processed to obtain N reference face sub-images and N target mask sub-images, and the image of the target area of a reference face sub-image is fused with the image of the corresponding area of the matching target mask sub-image. This removes poor texture and uneven transitions from the face, recovers fine and clear skin texture, and yields a second image with better skin quality, greatly improving the skin quality of the imaged face.
Optionally, in this embodiment of the application, the image processing apparatus may implement, based on the image pyramid, migration of an image with a good skin quality in the reference face image into an area with a poor skin quality in the first image.
Before the step 202, the image processing method provided in the embodiment of the present application may further include the following steps 202a1 to 202a3:
step 202a1, the image processing device acquires N skin-processed face images, where N is a positive integer.
Illustratively, the image processing apparatus needs to create the reference face image set before acquiring the reference face image matching the target mask image. N face images with good skin quality are obtained through professional image-level processing (including skin-color adjustment, blemish and acne removal, skin smoothing, enhancement, and the like), and the set contains these N face images.
Step 202a2, the image processing device extracts the facial-feature information of each of the N face images and constructs a binarized mask from that information.
Each face image corresponds to one binarized mask.
Illustratively, the facial-feature information includes information about the facial features of the face in each image, for example the regions where the features are located and their specific position coordinates.
For example, after acquiring the N face images, the image processing apparatus uses a face parsing model to decompose each face image into pixel-wise segmentation mask images of the parts of the face region, with only the facial features retained in each mask image. The image processing apparatus then constructs a binarized mask image from that mask image. Each face image has a corresponding binarized mask image.
Step 202a3, the image processing device generates the reference face image set based on the N skin-processed face images and the binarized mask corresponding to each face image.
Illustratively, the reference face image set includes the N skin-processed face images and the N corresponding binarized mask images.
Illustratively, a skin-processed face image is mainly used for image fusion with the image of a poor-skin region in the target mask image. The binarized mask image is mainly used to adjust the positions of the facial features of the reference face image so that they are closer to those of the face in the target mask image; after adjustment, the appearance of the person in the reference face image stays as consistent as possible with that of the face in the target mask image, which facilitates the subsequent skin migration.
In this way, the image processing device can construct a reference face image set from the skin-processed reference face images and their corresponding binarized mask images, so that once an image requiring processing is acquired, it can be processed against this set.
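Steps 202a1 to 202a3 amount to pairing each skin-processed face image with the binarized mask derived from its facial-feature label map. A sketch, where the dictionary keys and the label-map convention (nonzero = feature pixel) are illustrative assumptions:

```python
import numpy as np

def build_reference_set(face_images: list, feature_masks: list) -> list:
    """Pair each skin-processed face image with the binarized mask built
    from its facial-feature label map: feature pixels become black (0),
    everything else white (255)."""
    assert len(face_images) == len(feature_masks)
    ref_set = []
    for img, feats in zip(face_images, feature_masks):
        bw = np.where(feats > 0, 0, 255).astype(np.uint8)
        ref_set.append({"image": img, "binary_mask": bw})
    return ref_set
```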
Optionally, in this embodiment of the application, after obtaining the reference face image set, the image processing apparatus may process the acquired first image based on the set, where a specific processing process needs to be completed by using an image pyramid.
Illustratively, the step 203 may further include the following steps 203a1 and 203a2:
step 203a1, the image processing device constructs a first image pyramid based on the reference face image.
Step 203a2, the image processing device constructs a second image pyramid based on the target mask image.
The first image pyramid comprises the N reference face sub-images, and the second image pyramid comprises the N target mask sub-images.
It should be noted that an image pyramid is a multi-scale representation of an image, an effective but conceptually simple structure for interpreting the image at multiple resolutions. The pyramid of an image is a series of progressively lower-resolution images, arranged in a pyramid shape, all derived from the same original image. It is obtained by stepwise downsampling, which stops once a termination condition is reached. Comparing the levels of images to a pyramid: the higher the level, the smaller the image and the lower its resolution.
Illustratively, the image processing apparatus needs to construct an image pyramid of each face image in the above-described reference face image set.
Illustratively, the first image pyramid may be a Laplacian pyramid. The bottom layer (layer 0) of the first image pyramid may be the reference face image or the image obtained by formatting the reference face image. The layer-0 images of the pyramids constructed from the face images in the reference face image set all have the same size, and the scaling ratio between layers is also the same.
For example, as shown in fig. 2, the image pyramid includes five layers (L0 to L4), each layer includes an image, and the images between the layers are scaled according to a preset scaling ratio.
Optionally, in this embodiment of the application, the image processing apparatus may process the target mask image based on the reference face image and the feature points of the target mask image, so as to obtain the second image.
Illustratively, the step 204 may include the following steps 204a1 to 204a4:
in step 204a1, the image processing device extracts the first feature point of the reference face image and the second feature point of the target mask image.
Step 204a2, the image processing device obtains the binarized face mask image of each image of the N reference face sub-images and the vertex coordinates of each triangular block in the M triangular blocks included in the binarized face mask image of each image based on the first feature points.
In step 204a3, the image processing apparatus acquires the vertex coordinates of each of K triangular blocks included in the binarized face mask image of each of the N target mask sub-images based on the second feature points.
In step 204a4, the image processing apparatus fuses the image of the target region of the reference face sub-image and the image of the corresponding region of the target mask sub-image based on the vertex coordinates.
Illustratively, taking the N reference face sub-images as the images in the first image pyramid and the N target mask sub-images as the images in the second image pyramid, the image processing apparatus may perform triangulation processing on the first image pyramid and the second image pyramid once both have been constructed successfully.
for example, the specific processing steps from the step 204a1 to the step 204a4 may include the following steps 204b1 to 204b 3:
step 204b1, the image processing device extracts the first feature point of the reference face image, and triangulates the reference face image based on the first feature point to obtain M triangular blocks.
Each first feature point corresponds to a triangular block, the circumscribed circle of each triangular block contains no other first feature point, and M is a positive integer.
For example, the image processing apparatus may extract a plurality of first feature points from the reference face image and then triangulate on them (this may also be called image triangulation) so that the circumscribed circle of each generated triangular block contains no other first feature point.
It can be understood that if an extracted feature point cannot satisfy the condition that no other feature point lies within the circumscribed circle of every triangular block, that point cannot be used as a first feature point.
It should be noted that image triangulation can be understood as dividing the image into a number of triangular patches; any two triangles in the image either do not intersect or intersect exactly along one common edge (they cannot share two or more edges at once).
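The empty-circumcircle condition stated above is the Delaunay criterion, and it can be checked directly. The helper below is a sketch for 2-D points, not the triangulation algorithm itself; libraries such as `scipy.spatial.Delaunay` or OpenCV's `cv2.Subdiv2D` produce triangulations satisfying it:

```python
def circumcircle(a, b, c):
    """Circumcenter and squared radius of triangle abc (2-D points)."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    r2 = (ax - ux) ** 2 + (ay - uy) ** 2
    return (ux, uy), r2

def is_delaunay(triangles, points) -> bool:
    """True if no point lies strictly inside any triangle's circumcircle,
    i.e. the empty-circumcircle condition described in the text."""
    for tri in triangles:
        (ux, uy), r2 = circumcircle(*[points[i] for i in tri])
        for j, p in enumerate(points):
            if j in tri:
                continue
            if (p[0] - ux) ** 2 + (p[1] - uy) ** 2 < r2 - 1e-9:
                return False
    return True
```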
Step 204b2, the image processing device obtains the binary face mask image of each layer of the first image pyramid.
Step 204b3, the image processing device determines the vertex coordinates of each triangular block in the M triangular blocks included in the binarized face mask image of each layer of image based on the binarized face mask image corresponding to each layer of image of the first image pyramid and the scaling between the first target layer and the second target layer.
The first target layer and the second target layer are two adjacent layers of the first image pyramid.
For example, after constructing the first image pyramid of the reference face image, the image processing apparatus needs to generate a binarized face mask image corresponding to each layer of image based on the image contained in each layer of the first image pyramid.
For example, after acquiring the M triangular blocks of the reference face image, the image processing apparatus may determine the vertex coordinates of each of the M triangular blocks of each layer image of the first image pyramid based on the vertex coordinates of each of the M triangular blocks.
It should be noted that, because each layer image in the first image pyramid is derived from the reference face image, every triangular block of the reference face image has a corresponding triangular block in each layer image. Moreover, since a fixed scaling ratio exists between adjacent layers of the first image pyramid, the vertex coordinates of each triangular block in each layer image can be recalculated from that ratio.
In this way, the image processing apparatus can adjust the image of each layer based on the triangular blocks of the image of each layer so that the appearance of the person included in the image of each layer is closer to the appearance of the person included in the target mask image after acquiring the vertex coordinates of each triangular block of the image of each layer of the first image pyramid.
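The per-layer coordinate recalculation just described reduces to multiplying the layer-0 vertex coordinates by the accumulated scaling ratio; a halving ratio of 0.5 per level is assumed here for illustration:

```python
import numpy as np

def scale_triangle_vertices(vertices_l0, level: int, scale: float = 0.5):
    """Map layer-0 triangle vertex coordinates to pyramid layer `level`,
    given the fixed per-layer scaling ratio (0.5 for a halving pyramid)."""
    factor = scale ** level
    return np.asarray(vertices_l0, dtype=np.float32) * factor
```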
Illustratively, the image processing apparatus may construct the second image pyramid of the target mask image in a manner similar to that used above to construct the first image pyramid of the reference face image.
For example, the steps 204a1 to 204a4 may further include the following steps 204c1 to 204c4:
step 204c1, the image processing device constructs a second image pyramid based on the target mask image.
Illustratively, similar to the first image pyramid, layer 0 of the second image pyramid is the target mask image or the image obtained after formatting the target mask image.
It should be noted that the size of each layer of image of the first image pyramid is the same as that of each layer of image of the second image pyramid. And the size of the image constructing the first image pyramid is the same as the size of the image constructing the second image pyramid.
It is to be understood that in the embodiments of the present application the size of an image may be expressed in terms of resolution or in inches; this is not limited by the embodiments.
And step 204c2, the image processing device extracts a second feature point of the target mask image, and triangulates the target mask image based on the second feature point to obtain K triangular blocks.
Each second feature point corresponds to a triangular block, the circumscribed circle of each triangular block contains no other second feature point, and K is a positive integer.
And step 204c3, the image processing device acquires the binary face mask image of each layer of image of the second image pyramid.
Step 204c4, the image processing device determines the vertex coordinates of each triangular block in the K triangular blocks included in the binarized face mask image of each layer of image of the second image pyramid based on the binarized face mask image of each layer of image of the second image pyramid and the scaling ratio between the third target layer and the fourth target layer.
Wherein the third target layer and the fourth target layer are adjacent layers of the second image pyramid.
It should be noted that, since steps 204c1 to 204c4 are similar to steps 204b1 to 204b3, the explanation of steps 204c1 to 204c4 can refer to that of steps 204b1 to 204b3. Likewise, for the specific processing of the N reference face sub-images and the N target mask sub-images in steps 204a1 to 204a4, refer to the description of the processing of the first and second image pyramids; it is not repeated here.
In this way, after acquiring the image pyramid of the reference face image and the image pyramid of the target mask image, the image processing apparatus may migrate the image of the region with better skin quality in the reference face image to the image of the region with poorer skin quality in the target mask image based on the image pyramids.
Further optionally, in this embodiment of the application, the image processing apparatus may improve the face skin of the face region in the first image based on the N reference face sub-images and the N target mask sub-images.
Illustratively, the step 204a4 may include the following steps 204d1 to 204d 3:
in step 204d1, the image processing apparatus affine-transforms the vertex coordinates of the M triangular blocks based on the vertex coordinates of the K triangular blocks.
Step 204d2, the image processing device performs image fusion on the first target area image of the first reference face sub-image in the N reference face sub-images after affine transformation and the second target area image of the first target mask sub-image in the N target mask sub-images to obtain N processed target mask sub-images.
In step 204d3, the image processing apparatus reconstructs the N processed target mask sub-images to generate the second image.
Wherein, the first reference face subimage is: any one of the N reference face subimages; the first target mask sub-image is a target mask sub-image corresponding to the first reference face sub-image in the N target mask sub-images; the first target area image is an image of a first target area of the first reference face sub-image, and the first target area corresponds to a second target area of the first target mask sub-image.
For example, taking the N reference face sub-images as the images in the first image pyramid and the N target mask sub-images as the images in the second image pyramid, the steps 204d1 to 204d3 may include the following steps 204e1 to 204e 3:
in step 204e1, the image processing apparatus performs affine transformation on the vertex coordinates of the M triangular blocks of each layer image of the first image pyramid based on the vertex coordinates of the K triangular blocks of each layer image of the second image pyramid.
Illustratively, in order to make the appearance of the person in the reference face image closer to the appearance of the person in the first image, the image processing apparatus needs to process each layer of image in the second image pyramid; that is, an affine transformation is performed on each layer of the image.
It should be noted that an affine transformation, also called an affine mapping, is a geometric transformation in which one vector space undergoes a linear transformation followed by a translation into another vector space. Geometrically, an affine transformation between two vector spaces consists of a non-singular linear transformation (a transformation expressed by a linear function) followed by a translation transformation.
For example, before affine transformation is performed on each layer image of the first image pyramid, an affine transformation matrix from each triangular block of the reference face image to the corresponding triangular block of the target mask image needs to be calculated from the vertex coordinates of the triangular blocks contained in the two images. The image processing apparatus may then perform affine transformation on each layer image of the first image pyramid based on the obtained transformation matrices.
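One common way to obtain such a per-triangle matrix is to solve the six affine unknowns from the three vertex correspondences; OpenCV's `getAffineTransform` does the same. A plain-numpy sketch with made-up triangles:

```python
import numpy as np

def affine_from_triangles(src_tri, dst_tri):
    """2x3 affine matrix A such that A @ [x, y, 1]^T maps each source
    triangle vertex to the corresponding destination vertex."""
    src = np.asarray(src_tri, dtype=float)
    dst = np.asarray(dst_tri, dtype=float)
    # Homogeneous source vertices, one row per vertex: [x, y, 1].
    src_h = np.hstack([src, np.ones((3, 1))])
    # Solve src_h @ A^T = dst for the 3x2 matrix A^T.
    A_T, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
    return A_T.T  # shape (2, 3)

src = [[0, 0], [1, 0], [0, 1]]                 # triangle in the reference image
dst = [[2, 3], [4, 3], [2, 6]]                 # corresponding triangle in the mask image
A = affine_from_triangles(src, dst)            # scales x by 2, y by 3, translates (2, 3)
```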
Step 204e2, the image processing device performs image fusion on the first target area image of the fifth target layer of the affine-transformed first image pyramid and the second target area image of the sixth target layer of the second image pyramid to obtain a processed second image pyramid.
Illustratively, each level of the first image pyramid corresponds to the same level of the second image pyramid; for example, level 0 of the first image pyramid corresponds to level 0 of the second image pyramid, and level n corresponds to level n. The image processing apparatus may therefore perform image fusion on the image of the first target region of the first image pyramid and the image of the corresponding region (i.e., the second target region) of the second image pyramid to obtain the processed second image pyramid.
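The per-layer fusion described here can be sketched as a mask-selected copy: inside the target region, the reference-pyramid layer replaces the corresponding pixels of the second-pyramid layer. A minimal sketch with illustrative arrays (a real implementation would typically feather the mask edges):

```python
import numpy as np

def fuse_layers(reference_layer, target_layer, region_mask):
    """Copy the reference layer into the target layer wherever the
    binary region_mask is 1; elsewhere keep the target layer."""
    m = region_mask.astype(float)
    return m * reference_layer + (1.0 - m) * target_layer

ref = np.full((4, 4), 200.0)                 # layer of the first (reference) pyramid
tgt = np.full((4, 4), 50.0)                  # same-level layer of the second pyramid
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1                           # second target region
fused = fuse_layers(ref, tgt, mask)
```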
Step 204e3, the image processing device reconstructs the processed second image pyramid to generate the second image.
Wherein the fifth target layer is: any layer of the first image pyramid; the sixth target layer is a layer corresponding to the fifth target layer in the second image pyramid; the first target area image is an image of a first target area of the fifth target layer, and the first target area corresponds to a second target area of the sixth target layer; the first image pyramid and the second image pyramid have the same layer number and the same scaling ratio of each layer.
It is to be understood that the sixth target layer is the layer of the second image pyramid corresponding to the fifth target layer, that is, the layer index of the sixth target layer in the second image pyramid is the same as the layer index of the fifth target layer in the first image pyramid; the two are the same layer.
Exemplarily, the image processing apparatus reconstructs the Laplacian pyramid obtained after the texture of the reference image has been migrated, that is, the processed second image pyramid, to obtain the final result.
It should be noted that the image processing apparatus needs to construct a Gaussian pyramid before constructing the Laplacian pyramid of the target mask image or the reference face image. The original image is first convolved with a Gaussian kernel (5 × 5) as the bottom image G0 (layer 0 of the Gaussian pyramid), and the convolved image is then down-sampled (even rows and columns are removed) to obtain the next-layer image G1. Taking that image as input, the convolution and down-sampling operations are repeated to obtain higher-layer images; iterating multiple times forms a pyramid-shaped image data structure, namely the Gaussian pyramid.
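Under the construction just described, a Gaussian pyramid can be sketched as follows; separable Gaussian smoothing stands in for the 5 × 5 kernel, and downsampling drops every other row and column (an illustrative approximation, not the patent's exact kernel):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(image, levels):
    """G0 is the original image; each higher level is Gaussian-smoothed
    and down-sampled by removing every other row and column."""
    pyr = [np.asarray(image, dtype=float)]
    for _ in range(levels - 1):
        blurred = gaussian_filter(pyr[-1], sigma=1.0)
        pyr.append(blurred[::2, ::2])
    return pyr

g = gaussian_pyramid(np.random.default_rng(0).random((64, 64)), levels=4)
```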
During the construction of the Gaussian pyramid, part of the high-frequency detail information of the image is lost through the convolution and down-sampling operations. To describe this high-frequency information, the Laplacian Pyramid (LP) is defined. From each layer of the Gaussian pyramid, the predicted image obtained by up-sampling the next (smaller) layer and convolving it with the Gaussian kernel is subtracted, yielding a series of difference images, namely the LP decomposition images.
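The decomposition just described, L_i = G_i - up(G_{i+1}) with the top Gaussian layer kept as the top Laplacian layer, can be sketched as follows (smoothing parameters are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def gaussian_pyramid(image, levels):
    pyr = [np.asarray(image, dtype=float)]
    for _ in range(levels - 1):
        pyr.append(gaussian_filter(pyr[-1], sigma=1.0)[::2, ::2])
    return pyr

def upsample(image, shape):
    """Predict a larger layer: interpolate up to `shape`, then smooth."""
    factors = (shape[0] / image.shape[0], shape[1] / image.shape[1])
    return gaussian_filter(zoom(image, factors, order=1), sigma=1.0)

def laplacian_pyramid(gauss_pyr):
    """L_i = G_i - upsample(G_{i+1}); the top level stores G_top itself."""
    lap = [g - upsample(gauss_pyr[i + 1], g.shape)
           for i, g in enumerate(gauss_pyr[:-1])]
    lap.append(gauss_pyr[-1])
    return lap

gp = gaussian_pyramid(np.random.default_rng(0).random((32, 32)), levels=3)
lp = laplacian_pyramid(gp)
```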
For example, the image processing apparatus may recurse layer by layer from the top of the fused Laplacian pyramid downward, at each step up-sampling (interpolating) the current reconstruction and adding the Laplacian layer of the next level, so as to recover the corresponding Gaussian pyramid and finally obtain the original image G0.
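The formula the original text refers to is not reproduced there; the standard recursion it describes is G_i = L_i + up(G_{i+1}), applied from the top layer down. A sketch, using the same (illustrative, simplified) up-sampler both to build and to invert the pyramid so that reconstruction is exact:

```python
import numpy as np
from scipy.ndimage import zoom

def upsample(image, shape):
    """Interpolate the smaller layer up to `shape` (the prediction step)."""
    factors = (shape[0] / image.shape[0], shape[1] / image.shape[1])
    return zoom(image, factors, order=1)

def build_laplacian(image, levels):
    """Simplified pyramid: plain decimation, then L_i = G_i - up(G_{i+1})."""
    g = [np.asarray(image, dtype=float)]
    for _ in range(levels - 1):
        g.append(g[-1][::2, ::2])
    lap = [gi - upsample(g[i + 1], gi.shape) for i, gi in enumerate(g[:-1])]
    return lap + [g[-1]]

def reconstruct(lap):
    """Recurse from the top layer down: G_i = L_i + up(G_{i+1})."""
    img = lap[-1]
    for layer in reversed(lap[:-1]):
        img = layer + upsample(img, layer.shape)
    return img  # recovered bottom image G0

x = np.random.default_rng(1).random((32, 32))
recovered = reconstruct(build_laplacian(x, levels=3))
```

Because each difference layer stores exactly what the prediction missed, adding the layers back recovers the base image exactly.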
In a possible implementation manner, the image processing apparatus may further perform skin smoothing on each layer image of the second image pyramid before performing the image fusion.
After the step 203a2, the image processing method according to the embodiment of the present application may further include the following step 203 b:
and step 203b, the image processing device performs guided filtering and skin smoothing on each layer image of the second image pyramid according to a preset radius and a floating-point relative precision.
For example, the image processing apparatus may also perform guided-filtering-based skin smoothing on each of the N target mask sub-images according to a preset radius and a floating-point relative precision.
Illustratively, the image processing device can set a reasonable radius and eps (floating-point relative precision) for each layer of the Laplacian pyramid and perform guided filtering and skin smoothing, so as to reduce problems such as acne marks, wrinkles, and uneven transitions on the face.
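The patent names only the radius and eps parameters; these are the parameters of the standard guided filter (He et al.), which can be sketched in plain numpy, here self-guided as is common for skin smoothing (the box-window boundary handling and the parameter values are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius, eps):
    """Edge-preserving smoothing of p guided by I; radius sets the
    window size and eps controls the degree of smoothing."""
    size = 2 * radius + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    corr_Ip = uniform_filter(I * p, size)
    corr_II = uniform_filter(I * I, size)
    var_I = corr_II - mean_I * mean_I          # local variance of the guide
    cov_Ip = corr_Ip - mean_I * mean_p         # local covariance of guide and input
    a = cov_Ip / (var_I + eps)                 # per-window linear coefficients
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)

layer = np.random.default_rng(0).random((64, 64))   # one pyramid layer
smoothed = guided_filter(layer, layer, radius=4, eps=0.04)
```

Larger eps smooths more aggressively; a small eps preserves edges such as facial contours while flattening fine skin texture.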
In this way, the image processing device optimizes the face skin in the first image according to the first image pyramid constructed based on the reference face image and the second image pyramid constructed based on the target mask image, and obtains a second image with better skin quality.
According to the image processing method provided by the embodiment of the application, the flawed face skin in the shot image is image-fused with an image of better skin quality through a face skin migration method based on multi-layer image pyramid fusion, so that poor texture and uneven transitions of the face can be effectively removed, the imaged face skin is fine and clear, and the skin quality of the imaged face is greatly improved.
It should be noted that, in the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. The image processing apparatus provided in the embodiment of the present application is described with an example in which an image processing apparatus executes an image processing method.
In the embodiments of the present application, the above-described method is illustrated with reference to one of the accompanying drawings as an example. In specific implementation, the image processing method shown in each of the above method drawings may also be implemented in combination with any other drawing illustrated in the above embodiments that can be combined, and details are not repeated here.
Fig. 3 is a schematic diagram of a possible structure of an image processing apparatus for implementing the embodiment of the present application, and as shown in fig. 3, the image processing apparatus 300 includes: an acquisition module 301 and an image processing module 302; an obtaining module 301, configured to obtain a target mask image of a face region in a first image; the obtaining module 301 is further configured to obtain a reference face image matched with the target mask image in the reference face image set based on the binarized face mask image of the target mask image; the obtaining module 301 is further configured to perform image processing on the reference face image and the target mask image to obtain N reference face sub-images and N target mask sub-images; an image processing module 302, configured to fuse the image of the target region of the reference face sub-image acquired by the acquisition module 301 with the image of the corresponding region of the target mask sub-image acquired by the acquisition module 301, and generate a second image corresponding to the first image; the reference face image set comprises a plurality of face mask images subjected to skin processing; the human face skin value of the target area is higher than the human face skin value of the area corresponding to the target area in the target mask image.
Optionally, the apparatus 300 further comprises: a generating module 303; the acquiring module 301 is further configured to acquire N face images subjected to skin processing, where N is a positive integer; the acquisition module 301 is further configured to extract information of five sense organs of each face image in the N face images, and construct a binary mask image according to the information of the five sense organs; one face image corresponds to one binary mask image; a generating module 303, configured to generate a reference face image set based on the N skin-processed face images acquired by the acquiring module 301 and the binarized mask image corresponding to each face image acquired by the acquiring module 301.
Optionally, the apparatus 300 further comprises: a build module 304; a construction module 304, configured to construct a first image pyramid based on the reference face image, where the first image pyramid includes N reference face sub-images; the building module 304 is further configured to build a second image pyramid based on the target mask image, the second image pyramid including the N target mask sub-images.
Optionally, the obtaining module 301 is further configured to extract a first feature point of the reference face image and a second feature point of the target mask image; the obtaining module 301 is further configured to obtain a binarized face mask image of each image of the N reference face sub-images and a vertex coordinate of each triangular block of the M triangular blocks included in the binarized face mask image of each image based on the first feature point; the obtaining module 301 is further configured to obtain, based on the second feature point, a vertex coordinate of each triangular block of K triangular blocks included in the binarized face mask image of each image of the N target mask sub-images; the image processing module 302 is specifically configured to fuse an image of a target region of the reference face sub-image with an image of a corresponding region of the target mask sub-image based on the vertex coordinates.
Optionally, the apparatus 300 further comprises: a transformation module 305; a transformation module 305, configured to perform affine transformation on the vertex coordinates of the M triangular blocks acquired by the acquisition module 301 based on the vertex coordinates of the K triangular blocks acquired by the acquisition module 301; the image processing module 302 is specifically configured to perform image fusion on a first target area image of a first reference face sub-image in the N reference face sub-images after affine transformation and a second target area image of the first target mask sub-image in the N target mask sub-images to obtain N processed target mask sub-images; the image processing module 302 is further configured to reconstruct the N processed target mask sub-images to generate a second image; wherein the first reference face sub-image is: any one of the N reference face sub-images; the first target mask sub-image is a target mask sub-image corresponding to the first reference face sub-image in the N target mask sub-images; the first target area image is an image of a first target area of the first reference face sub-image, and the first target area corresponds to a second target area of the first target mask sub-image.
Optionally, the image processing module 302 is further configured to perform guided filtering and skin smoothing on each layer image of the second image pyramid constructed by the construction module 304 according to the preset radius and the floating-point relative precision.
The image processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented by the image processing apparatus in the method embodiments of fig. 1 to fig. 2, and is not described herein again to avoid repetition.
The beneficial effects of the various implementation manners in this embodiment may specifically refer to the beneficial effects of the corresponding implementation manners in the above method embodiments, and are not described herein again to avoid repetition.
The image processing device provided by the embodiment of the application performs, through the above image processing method, image fusion on the flawed face skin in the shot image and an image of better skin quality, can effectively remove poor texture and uneven transitions of the face, makes the imaged face skin fine and clear, and greatly improves the skin quality of the imaged face.
Optionally, as shown in fig. 4, an electronic device M00 is further provided in an embodiment of the present application, and includes a processor M01, a memory M02, and a program or an instruction stored in the memory M02 and executable on the processor M01, where the program or the instruction when executed by the processor M01 implements the processes of the foregoing embodiment of the image processing method, and can achieve the same technical effects, and details are not repeated here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 5 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present application.
The electronic device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further comprise a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 5 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
The input unit 104 is configured to acquire a target mask image of a face region in the first image; the processor 110 is further configured to acquire a reference face image matched with the target mask image in the reference face image set based on the binarized face mask image of the target mask image; the processor 110 is further configured to perform image processing on the reference face image and the target mask image to obtain N reference face sub-images and N target mask sub-images; the processor 110 is configured to fuse an image of a target region of the acquired reference face sub-image with an image of a corresponding region of the acquired target mask sub-image, and generate a second image corresponding to the first image; the reference face image set comprises a plurality of face mask images subjected to skin processing; the human face skin value of the target area is higher than the human face skin value of the area corresponding to the target area in the target mask image.
In this way, after the first image containing the face is obtained, the target mask image of the face region in the first image is acquired, and the reference face image matched with the target mask image in the reference face image set is acquired based on the binarized face mask image of the target mask image. The image of the target area of the reference face image is then fused with the image of the corresponding area of the target mask image, so that poor texture and uneven transitions of the face are removed, fine and clear skin texture is restored, and a second image with better skin quality is obtained, greatly improving the quality of the imaged face skin.
Optionally, the input unit 104 is further configured to acquire N skin-processed face images, where N is a positive integer; the processor 110 is further configured to extract information of five sense organs of each face image in the N face images, and construct a binary mask image according to the information of the five sense organs; one face image corresponds to one binary mask image; a processor 110, configured to generate a reference face image set based on the N skin-processed face images acquired by the input unit 104 and the binarized mask image corresponding to each acquired face image.
In this way, the image processing device can construct a reference face image set based on the reference face image subjected to the skin texture processing and the binarization mask image corresponding to the reference face image, so that the image processing device can process the image based on the set after acquiring an image needing to be processed.
Optionally, the processor 110 is configured to construct a first image pyramid based on the reference face image, where the first image pyramid includes N reference face sub-images; the processor 110 is further configured to construct a second image pyramid based on the target mask image, the second image pyramid comprising N target mask sub-images.
Optionally, the processor 110 is further configured to extract a first feature point of the reference face image and a second feature point of the target mask image; the processor 110 is further configured to obtain a binarized face mask image of each image of the N reference face sub-images based on the first feature points, and a vertex coordinate of each triangular block of M triangular blocks included in the binarized face mask image of each image; the processor 110 is further configured to obtain, based on the second feature point, a vertex coordinate of each triangular block of K triangular blocks included in the binarized face mask image of each image of the N target mask sub-images; the processor 110 is specifically configured to fuse the image of the target region of the reference face sub-image with the image of the corresponding region of the target mask sub-image based on the vertex coordinates.
In this way, the image processing apparatus can adjust the image of each layer based on the triangular blocks of the image of each layer so that the appearance of the person included in the image of each layer is closer to the appearance of the person included in the target mask image after acquiring the vertex coordinates of each triangular block of the image of each layer of the first image pyramid.
Optionally, the processor 110 is configured to perform affine transformation on the acquired vertex coordinates of the M triangular blocks based on the acquired vertex coordinates of the K triangular blocks; the processor 110 is specifically configured to perform image fusion on a first target area image of a first reference face sub-image in the N reference face sub-images after affine transformation and a second target area image of the first target mask sub-image in the N target mask sub-images to obtain N processed target mask sub-images; the processor 110 is further configured to reconstruct the N processed target mask sub-images to generate a second image; wherein the first reference face sub-image is: any one of the N reference face sub-images; the first target mask sub-image is a target mask sub-image corresponding to the first reference face sub-image in the N target mask sub-images; the first target area image is an image of a first target area of the first reference face sub-image, and the first target area corresponds to a second target area of the first target mask sub-image.
In this way, the image processing apparatus may perform affine transformation on the triangulated N reference face sub-images based on the image feature points to obtain images closer to the person in the target mask image, and then perform skin migration to obtain a second image with better skin quality.
Optionally, the processor 110 is further configured to perform guided filtering and skin smoothing on each layer image of the second image pyramid constructed by the processor 110 according to the preset radius and the floating-point relative precision.
In this way, the image processing device optimizes the human skin in the first image according to the first image pyramid constructed based on the reference human face image and the second image pyramid constructed based on the target mask image, and obtains a second image with better skin.
According to the electronic equipment provided by the embodiment of the application, through the above image processing method, the flawed face skin in the shot image is image-fused with an image of better skin quality, so that poor texture and uneven transitions of the face can be effectively removed, the imaged face skin is fine and clear, and the skin quality of the imaged face is greatly improved.
It should be understood that, in the embodiment of the present application, the input Unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the Graphics Processing Unit 1041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 109 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 110 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling an electronic device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (14)

1. An image processing method, characterized in that the method comprises:
acquiring a target mask image of a face region in a first image;
acquiring a reference face image matched with the target mask image in a reference face image set based on the binarization face mask image of the target mask image;
performing image processing on the reference face image and the target mask image to obtain N reference face sub-images and N target mask sub-images;
fusing an image of a target area of the reference face sub-image with an image of a corresponding area of the target mask sub-image to generate a second image corresponding to the first image;
wherein the reference face image set comprises a plurality of face mask images subjected to skin processing; and the human face skin value of the target area is larger than the human face skin value of the area corresponding to the target area in the target mask image.
2. The method according to claim 1, wherein before the obtaining of the reference face image in the reference face image set matching the target mask image based on the binarized face mask image of the target mask image, the method further comprises:
acquiring N face images subjected to skin processing, wherein N is a positive integer;
extracting the information of five sense organs of each face image in the N face images, and constructing a binary mask image according to the information of the five sense organs; one face image corresponds to one binary mask image;
and generating the reference face image set based on the N face images subjected to the skin processing and the binarization mask image corresponding to each face image.
3. The method of claim 1, wherein the image processing the reference face image and the target mask image to obtain N reference face sub-images and N target mask sub-images comprises:
constructing a first image pyramid based on the reference face image, wherein the first image pyramid comprises N reference face sub-images;
and constructing a second image pyramid based on the target mask image, wherein the second image pyramid comprises N target mask sub-images.
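The image pyramids of claim 3 can be sketched with repeated 2x2 average downsampling. Production code would normally use `cv2.pyrDown` (Gaussian blur followed by subsampling), so plain averaging here is a simplifying assumption:

```python
import numpy as np

def build_pyramid(image, n):
    """Build an n-level image pyramid; level 0 is the input and each
    subsequent level halves the resolution by 2x2 block averaging
    (a stand-in for Gaussian blur + subsample)."""
    levels = [image.astype(np.float64)]
    for _ in range(n - 1):
        prev = levels[-1]
        h, w = prev.shape[0] // 2 * 2, prev.shape[1] // 2 * 2
        down = prev[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        levels.append(down)
    return levels
```
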
4. The method of claim 1, wherein fusing the image of the target region of the reference face sub-image with the image of the corresponding region of the target mask sub-image to generate the second image corresponding to the first image comprises:
extracting first feature points of the reference face image and second feature points of the target mask image;
acquiring, based on the first feature points, a binarized face mask image of each of the N reference face sub-images and vertex coordinates of each of M triangular blocks contained in that binarized face mask image;
acquiring, based on the second feature points, vertex coordinates of each of K triangular blocks contained in the binarized face mask image of each of the N target mask sub-images;
and fusing the image of the target area of the reference face sub-image with the image of the corresponding area of the target mask sub-image based on the vertex coordinates.
5. The method of claim 4, wherein fusing the image of the target region of the reference face sub-image with the image of the corresponding region of the target mask sub-image based on the vertex coordinates comprises:
performing affine transformation on the vertex coordinates of the M triangular blocks based on the vertex coordinates of the K triangular blocks;
performing image fusion on a first target area image of a first reference face sub-image among the N affine-transformed reference face sub-images and a second target area image of a first target mask sub-image among the N target mask sub-images, to obtain N processed target mask sub-images;
reconstructing the N processed target mask sub-images to generate the second image;
wherein the first reference face sub-image is any one of the N reference face sub-images; the first target mask sub-image is the target mask sub-image corresponding to the first reference face sub-image among the N target mask sub-images; the first target area image is an image of a first target area of the first reference face sub-image; and the first target area corresponds to a second target area of the first target mask sub-image.
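The affine step of claim 5 maps each of the M reference-triangle blocks onto its counterpart among the K target triangles, after which fusion reduces to copying warped reference pixels inside the target-area mask. The 2x3 matrix that `cv2.getAffineTransform` would return can be recovered by solving a small linear system; the function names below are illustrative:

```python
import numpy as np

def affine_from_triangles(src, dst):
    """Solve the 2x3 affine matrix M mapping one triangle's vertices
    onto another's, i.e. M @ [x, y, 1]^T = [x', y']^T for each pair.
    `src` and `dst` are three (x, y) vertex coordinates each."""
    A = np.hstack([np.asarray(src, float), np.ones((3, 1))])  # 3x3
    B = np.asarray(dst, float)                                # 3x2
    return np.linalg.solve(A, B).T                            # 2x3

def blend_region(ref_sub, mask_sub, region):
    """Replace the masked region of the target mask sub-image with the
    (already warped) reference sub-image's pixels; `region` is the
    binary mask of the target area."""
    out = mask_sub.copy()
    sel = region.astype(bool)
    out[sel] = ref_sub[sel]
    return out
```
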
6. The method of claim 3, wherein after the constructing the second image pyramid based on the target mask image, the method further comprises:
and performing guided filtering and skin-smoothing processing on each layer image of the second image pyramid according to a preset radius and a preset floating-point relative precision.
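The guided filtering of claim 6 is presumably the guided image filter of He et al., in which the claim's "floating-point relative precision" plays the role of the regularization constant eps (an edge-preservation knob); that correspondence is an assumption. A compact, deliberately loop-based sketch:

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1)-square window clipped at image borders
    (slow reference implementation of a box filter)."""
    h, w = img.shape
    out = np.empty((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = img[y0:y1, x0:x1].mean()
    return out

def guided_filter(guide, src, radius, eps):
    """Guided filter: a local linear model q = a*I + b per window,
    averaged over overlapping windows.  `eps` regularizes flat regions."""
    I, p = guide.astype(np.float64), src.astype(np.float64)
    mean_I, mean_p = box_mean(I, radius), box_mean(p, radius)
    cov_Ip = box_mean(I * p, radius) - mean_I * mean_p
    var_I = box_mean(I * I, radius) - mean_I ** 2
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return box_mean(a, radius) * I + box_mean(b, radius)
```
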
7. An image processing apparatus, characterized in that the apparatus comprises: an acquisition module and an image processing module;
the acquisition module is used for acquiring a target mask image of a face region in the first image;
the acquisition module is further used for acquiring, from a reference face image set, a reference face image that matches the target mask image, based on a binarized face mask image of the target mask image;
the acquisition module is further used for carrying out image processing on the reference face image and the target mask image to obtain N reference face sub-images and N target mask sub-images;
the image processing module is used for fusing the image of the target area of the reference face sub-image acquired by the acquisition module with the image of the corresponding area of the target mask sub-image acquired by the acquisition module to generate a second image corresponding to the first image;
wherein the reference face image set comprises a plurality of face images subjected to skin-smoothing processing; and a face-skin value of the target area is greater than a face-skin value of the area corresponding to the target area in the target mask image.
8. The apparatus according to claim 7, further comprising a generation module;
the acquisition module is further used for acquiring N face images subjected to skin-smoothing processing, wherein N is a positive integer;
the acquisition module is further used for extracting facial-feature information of each of the N face images and constructing a binarized mask image according to the facial-feature information, wherein one face image corresponds to one binarized mask image;
the generation module is used for generating the reference face image set based on the N skin-smoothed face images acquired by the acquisition module and the binarized mask image, acquired by the acquisition module, corresponding to each face image.
9. The apparatus according to claim 7, further comprising a construction module;
the construction module is used for constructing a first image pyramid based on the reference face image, wherein the first image pyramid comprises N reference face sub-images;
the construction module is further configured to construct a second image pyramid based on the target mask image, where the second image pyramid includes N target mask sub-images.
10. The apparatus of claim 7,
the acquisition module is further used for extracting first feature points of the reference face image and second feature points of the target mask image;
the acquiring module is further configured to acquire a binarized face mask image of each image of the N reference face sub-images and vertex coordinates of each triangular block of M triangular blocks included in the binarized face mask image of each image based on the first feature points;
the acquisition module is further used for acquiring, based on the second feature points, vertex coordinates of each triangular block of K triangular blocks contained in the binarized face mask image of each image of the N target mask sub-images;
the image processing module is specifically configured to fuse, based on the vertex coordinates, an image of a target region of the reference face sub-image with an image of a corresponding region of the target mask sub-image.
11. The apparatus of claim 10, further comprising: a transformation module;
the transformation module is used for carrying out affine transformation on the vertex coordinates of the M triangular blocks acquired by the acquisition module based on the vertex coordinates of the K triangular blocks acquired by the acquisition module;
the image processing module is specifically configured to perform image fusion on a first target area image of a first reference face sub-image in the N reference face sub-images subjected to affine transformation and a second target area image of the first target mask sub-image in the N target mask sub-images to obtain N processed target mask sub-images;
the image processing module is specifically configured to reconstruct the N processed target mask sub-images and generate the second image;
wherein the first reference face sub-image is any one of the N reference face sub-images; the first target mask sub-image is the target mask sub-image corresponding to the first reference face sub-image among the N target mask sub-images; the first target area image is an image of a first target area of the first reference face sub-image; and the first target area corresponds to a second target area of the first target mask sub-image.
12. The apparatus of claim 9,
and the image processing module is further used for performing guided filtering and skin-smoothing processing on each layer image of the second image pyramid constructed by the construction module, according to a preset radius and a preset floating-point relative precision.
13. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, which when executed by the processor, implement the steps of the image processing method of any one of claims 1 to 6.
14. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the image processing method according to any one of claims 1 to 6.
CN202110653986.2A 2021-06-11 2021-06-11 Image processing method and device, electronic equipment and readable storage medium Pending CN113469903A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110653986.2A CN113469903A (en) 2021-06-11 2021-06-11 Image processing method and device, electronic equipment and readable storage medium
PCT/CN2022/097859 WO2022258013A1 (en) 2021-06-11 2022-06-09 Image processing method and apparatus, electronic device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110653986.2A CN113469903A (en) 2021-06-11 2021-06-11 Image processing method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN113469903A 2021-10-01

Family

ID=77869884

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110653986.2A Pending CN113469903A (en) 2021-06-11 2021-06-11 Image processing method and device, electronic equipment and readable storage medium

Country Status (2)

Country Link
CN (1) CN113469903A (en)
WO (1) WO2022258013A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022258013A1 (en) * 2021-06-11 2022-12-15 维沃移动通信有限公司 Image processing method and apparatus, electronic device and readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017026839A1 (en) * 2015-08-12 2017-02-16 트라이큐빅스 인크. 3d face model obtaining method and device using portable camera
CN107146199A (en) * 2017-05-02 2017-09-08 厦门美图之家科技有限公司 A kind of fusion method of facial image, device and computing device
CN108289141A (en) * 2017-12-27 2018-07-17 努比亚技术有限公司 A kind of the screen locking unlocking method and mobile terminal of mobile terminal
CN109948526A (en) * 2019-03-18 2019-06-28 北京市商汤科技开发有限公司 Image processing method and device, detection device and storage medium
CN109978754A (en) * 2017-12-28 2019-07-05 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN111445564A (en) * 2020-03-26 2020-07-24 腾讯科技(深圳)有限公司 Face texture image generation method and device, computer equipment and storage medium
CN111583154A (en) * 2020-05-12 2020-08-25 Oppo广东移动通信有限公司 Image processing method, skin beautifying model training method and related device
CN111836058A (en) * 2019-04-22 2020-10-27 腾讯科技(深圳)有限公司 Method, device and equipment for real-time video playing and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509846B (en) * 2018-02-09 2022-02-11 腾讯科技(深圳)有限公司 Image processing method, image processing apparatus, computer device, storage medium, and computer program product
CN112215776B (en) * 2020-10-20 2024-05-07 咪咕文化科技有限公司 Portrait peeling method, electronic device and computer-readable storage medium
CN113469903A (en) * 2021-06-11 2021-10-01 维沃移动通信有限公司 Image processing method and device, electronic equipment and readable storage medium



Also Published As

Publication number Publication date
WO2022258013A1 (en) 2022-12-15

Similar Documents

Publication Publication Date Title
CN108765273B (en) Virtual face-lifting method and device for face photographing
CN111598998B (en) Three-dimensional virtual model reconstruction method, three-dimensional virtual model reconstruction device, computer equipment and storage medium
Guo et al. Image retargeting using mesh parametrization
Liu et al. Knowledge-driven deep unrolling for robust image layer separation
WO2020192706A1 (en) Object three-dimensional model reconstruction method and device
Wang et al. Laplacian pyramid adversarial network for face completion
CN109840881B (en) 3D special effect image generation method, device and equipment
CN113205568B (en) Image processing method, device, electronic equipment and storage medium
Xu et al. Structure-texture aware network for low-light image enhancement
An et al. TR-MISR: Multiimage super-resolution based on feature fusion with transformers
CN109993824B (en) Image processing method, intelligent terminal and device with storage function
Yang et al. Joint-feature guided depth map super-resolution with face priors
KR102311796B1 (en) Method and Apparatus for Deblurring of Human Motion using Localized Body Prior
CN116612015A (en) Model training method, image mole pattern removing method and device and electronic equipment
CN113052923B (en) Tone mapping method, tone mapping apparatus, electronic device, and storage medium
WO2022258013A1 (en) Image processing method and apparatus, electronic device and readable storage medium
US20240013358A1 (en) Method and device for processing portrait image, electronic equipment, and storage medium
CN116310105B (en) Object three-dimensional reconstruction method, device, equipment and storage medium based on multiple views
US20210241430A1 (en) Methods, devices, and computer program products for improved 3d mesh texturing
CN110675413A (en) Three-dimensional face model construction method and device, computer equipment and storage medium
CN116342377A (en) Self-adaptive generation method and system for camouflage target image in degraded scene
CN111652792A (en) Image local processing method, image live broadcasting method, image local processing device, image live broadcasting equipment and storage medium
CN110852934A (en) Image processing method and apparatus, image device, and storage medium
CN113379623B (en) Image processing method, device, electronic equipment and storage medium
CN114049473A (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination