CN113763286A - Image processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113763286A
CN113763286A
Authority
CN
China
Prior art keywords
target
beautification
beautifying
areas
preset
Prior art date
Legal status
Pending
Application number
CN202111137672.3A
Other languages
Chinese (zh)
Inventor
孙仁辉
苏柳
Current Assignee
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd
Priority to CN202111137672.3A
Publication of CN113763286A
Priority to PCT/CN2022/120090 (published as WO2023045941A1)
Legal status: Pending

Classifications

    • G06T5/77
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06T2207/10024 Color image
    • G06T2207/20081 Training; Learning
    • G06T2207/20221 Image fusion; Image merging
    • G06T2207/30201 Face

Abstract

The present disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: determining a target position of a target part in a user image in response to a beautification operation on the user image; determining, based on the target position, a plurality of target areas to be beautified in the user image, wherein the target areas belong to a preset area range of the target part; performing, on the plurality of target areas respectively, beautification processing matched to each target area according to beautification parameters in the beautification operation, to obtain a plurality of beautification results; and generating, according to the plurality of beautification results, a target user image in which the preset area range of the target part has been beautified.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer vision, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
With the development of computer vision technology, operations such as rendering eye shadow on the eye region of a face image have become increasingly widespread in the field of image processing. However, related eye shadow rendering operations often render in a single pass, so the rendering result is monotonous.
Disclosure of Invention
The present disclosure proposes an image processing scheme.
According to an aspect of the present disclosure, there is provided an image processing method including:
determining a target position of a target part in a user image in response to a beautification operation on the user image; determining, based on the target position, a plurality of target areas to be beautified in the user image, wherein the target areas belong to a preset area range of the target part; performing, on the plurality of target areas respectively, beautification processing matched to each target area according to beautification parameters in the beautification operation, to obtain a plurality of beautification results; and generating, according to the plurality of beautification results, a target user image in which the preset area range of the target part has been beautified.
In one possible implementation, the determining a plurality of target areas to be beautified in the user image based on the target position includes: copying the user image into a plurality of layers respectively; traversing the plurality of layers, and taking each traversed layer as a target layer; and determining, in the target layer and according to the target position, N target areas to be beautified in the user image, where N is a positive integer smaller than the number of the plurality of target areas.
In a possible implementation manner, the determining, in the target layer and according to the target position, N target areas to be beautified in the user image includes: acquiring preset positional relations between the N target areas and the target part; and performing, in the target layer, region extension according to the preset positional relations with the target position as the center, to obtain the N target areas.
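As an illustrative sketch (not part of the original disclosure), the region-extension step in this implementation can be modeled as placing a preset rectangle relative to the target position. The region names, offsets, and sizes below are hypothetical values chosen for demonstration:

```python
# Sketch of region extension around a target position.
# All offsets/sizes are hypothetical illustration values.
def extend_region(target_pos, offset, size):
    """Return an axis-aligned region (x, y, w, h) placed relative to target_pos."""
    cx, cy = target_pos
    dx, dy = offset          # preset positional relation to the target part
    w, h = size              # preset extent of the region
    return (cx + dx - w // 2, cy + dy - h // 2, w, h)

# Hypothetical preset relations for two of the N target areas.
PRESET_RELATIONS = {
    "base_upper_eye_shadow": {"offset": (0, -30), "size": (120, 50)},
    "base_lower_eye_shadow": {"offset": (0, 20),  "size": (110, 30)},
}

def regions_for_layer(target_pos, names):
    """Determine the N target areas assigned to one layer."""
    return {n: extend_region(target_pos, **PRESET_RELATIONS[n]) for n in names}

eye_center = (200, 150)
regions = regions_for_layer(eye_center, ["base_upper_eye_shadow"])
```

In a real implementation the preset relations would likely be defined per region by the makeup material designer rather than hard-coded.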
In a possible implementation manner, the plurality of layers are arranged according to a preset sequence, and the preset sequence is matched with a beautifying execution sequence in the beautifying operation.
In one possible implementation, the determining a plurality of target areas to be beautified in the user image based on the target position includes: and taking the target position as a center, respectively performing area extension according to a plurality of preset extension ranges in a plurality of preset directions, and determining the plurality of target areas.
In a possible implementation manner, the performing, on the plurality of target areas respectively, beautification processing matched to each target area according to the beautification parameters in the beautification operation to obtain a plurality of beautification results includes: traversing the plurality of target areas, taking each traversed target area as an area to be beautified, and acquiring the original color of the user image in the area to be beautified; determining a beautification color corresponding to the area to be beautified based on the beautification parameters in the beautification operation; and fusing, in the area to be beautified, the original color and the beautification color to obtain a beautification result of the area to be beautified.
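The color fusion described above can be sketched as a per-pixel weighted blend. The `strength` parameter and the use of per-pixel mask transparency as a blend weight are assumptions for illustration, not specifics taken from the disclosure:

```python
def fuse_colors(original, beautify, strength, mask_alpha):
    """Blend an original pixel color with a beautification color.
    strength (0..1) stands in for a beautification parameter; mask_alpha
    (0..1) stands in for the per-pixel transparency of the region's mask
    image. Both are hypothetical parameterizations."""
    w = strength * mask_alpha
    return tuple(round((1 - w) * o + w * b) for o, b in zip(original, beautify))

# A fully opaque mask pixel at half strength moves the color halfway.
blended = fuse_colors((200, 180, 170), (120, 60, 90), 0.5, 1.0)
```

Different target areas could pass different `beautify` colors or `strength` values, matching the per-area beautification parameters mentioned in the text.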
In a possible implementation manner, the plurality of beautification results belong to a plurality of layers respectively, the plurality of layers are arranged in a preset order, and the preset order matches the beautification execution order in the beautification operation; the generating, according to the plurality of beautification results, a target user image in which the preset area range of the target part is beautified includes: superimposing, in the preset order, the layers to which the plurality of beautification results belong to obtain a target beautification result; and fusing the target beautification result with the user image to obtain the target user image.
In a possible implementation manner, the superimposing, in the preset order, the layers to which the beautification results belong to obtain a target beautification result includes: superimposing, in the preset order, the layers to which the beautification results belong to obtain an intermediate beautification result; and fusing the intermediate beautification result with a preset texture material to obtain the target beautification result.
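One way to read the layer superposition and texture fusion is as standard "over" compositing followed by a multiply blend. This is an interpretive sketch of those two claim steps; the `amount` mixing weight is a hypothetical parameter:

```python
# Interpretive sketch: superimpose per-layer beautification results in a
# preset order, then fuse the intermediate result with a texture material.
def over(top_rgba, bottom_rgb):
    """'Over' compositing of one RGBA beautification layer onto an RGB base."""
    a = top_rgba[3]  # layer alpha in 0..1
    return tuple(round(a * t + (1 - a) * b) for t, b in zip(top_rgba[:3], bottom_rgb))

def composite(layers, base_rgb):
    """Superimpose the beautification layers; list order stands in for the
    preset order matching the beautification execution order."""
    out = base_rgb
    for layer in layers:
        out = over(layer, out)
    return out

def apply_texture(color_rgb, texture_rgb, amount=0.3):
    """Fuse an intermediate result with a texture material via a multiply
    blend; `amount` is a hypothetical mixing weight, not from the patent."""
    mult = tuple(c * t // 255 for c, t in zip(color_rgb, texture_rgb))
    return tuple(round((1 - amount) * c + amount * m) for c, m in zip(color_rgb, mult))

intermediate = composite([(120, 60, 90, 0.5), (90, 40, 70, 0.3)], (200, 180, 170))
target = apply_texture(intermediate, (230, 220, 210))
```

A shimmer or matte eye shadow material would plausibly differ only in the texture image passed to the final fusion step.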
In one possible implementation, the target site includes an eye site, and the beautification operation includes an eye shadow rendering operation; the plurality of target regions includes one or more of a base upper eye shadow region, a base lower eye shadow region, an upper eyelid region, an outer corner of the eye region, an inner corner of the eye region, and an outer upper eye shadow region.
According to an aspect of the present disclosure, there is provided an image processing apparatus including:
the determining module is used for responding to beautifying operation aiming at the user image and determining the target position of the target part in the user image; the area determining module is used for determining a plurality of target areas to be beautified in the user image based on the target positions, wherein the target areas belong to a preset area range of the target part; the beautification module is used for respectively carrying out beautification treatment matched with the target areas on the plurality of target areas according to beautification parameters in the beautification operation to obtain a plurality of beautification results; and the generating module is used for generating a target user image which is obtained by beautifying the preset area range of the target part according to the beautifying results.
In one possible implementation, the region determining module is configured to: copy the user image into a plurality of layers respectively; traverse the plurality of layers, taking each traversed layer as a target layer; and determine, in the target layer and according to the target position, N target areas to be beautified in the user image, where N is a positive integer smaller than the number of the plurality of target areas.
In one possible implementation, the region determining module is further configured to: acquire preset positional relations between the N target areas and the target part; and perform, in the target layer, region extension according to the preset positional relations with the target position as the center, to obtain the N target areas.
In a possible implementation manner, the plurality of layers are arranged according to a preset sequence, and the preset sequence is matched with a beautifying execution sequence in the beautifying operation.
In one possible implementation, the region determining module is configured to: and taking the target position as a center, respectively performing area extension according to a plurality of preset extension ranges in a plurality of preset directions, and determining the plurality of target areas.
In one possible implementation, the beautification module is configured to: traverse the plurality of target areas, taking each traversed target area as an area to be beautified, and acquire the original color of the user image in the area to be beautified; determine a beautification color corresponding to the area to be beautified based on the beautification parameters in the beautification operation; and fuse, in the area to be beautified, the original color and the beautification color to obtain a beautification result of the area to be beautified.
In a possible implementation manner, the plurality of beautification results respectively belong to a plurality of layers, the plurality of layers are arranged according to a preset sequence, and the preset sequence is matched with a beautification execution sequence in the beautification operation; the generation module is configured to: according to the preset sequence, overlapping the layers to which the beautifying results belong to obtain a target beautifying result; and fusing the target beautifying result with the user image to obtain the target user image.
In one possible implementation, the generating module is further configured to: superimpose, in the preset order, the layers to which the beautification results belong to obtain an intermediate beautification result; and fuse the intermediate beautification result with a preset texture material to obtain the target beautification result.
In one possible implementation, the target site includes an eye site, and the beautification operation includes an eye shadow rendering operation; the plurality of target regions includes one or more of a base upper eye shadow region, a base lower eye shadow region, an upper eyelid region, an outer corner of the eye region, an inner corner of the eye region, and an outer upper eye shadow region.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to: the above-described image processing method is performed.
According to an aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described image processing method.
In the embodiments of the present disclosure, a target position of a target part in a user image is determined in response to a beautification operation on the user image; a plurality of target areas to be beautified in the user image are determined based on the target position; beautification processing matched to each target area is then performed on the plurality of target areas respectively according to the beautification parameters in the beautification operation, to obtain a plurality of beautification results; and a target user image with the beautified target part is generated according to the plurality of beautification results. Through this process, the region to be beautified can be divided into a plurality of target areas based on the target position of the target part, and the target areas can be processed separately to obtain the target user image, thereby realizing independent beautification of multiple regions. This improves the flexibility of beautification and also enriches the beautification effect. For example, for an eye shadow rendering operation, the method provided by the embodiments of the present disclosure can process the plurality of target areas of the eye shadow rendering separately; on the one hand, this improves the eye shadow rendering precision of each target area, and on the other hand, it makes the overall eye shadow effect richer and more three-dimensional.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
FIG. 2 illustrates a mask image schematic of a target area according to an embodiment of the present disclosure.
FIG. 3 illustrates a mask image schematic of a target area according to an embodiment of the present disclosure.
FIG. 4 illustrates a mask image schematic of a target area according to an embodiment of the present disclosure.
FIG. 5 illustrates a mask image schematic of a target area according to an embodiment of the present disclosure.
FIG. 6 illustrates a mask image schematic of a target area according to an embodiment of the present disclosure.
FIG. 7 illustrates a mask image schematic of a target area according to an embodiment of the present disclosure.
FIG. 8 shows a schematic diagram of a user image according to an embodiment of the present disclosure.
Fig. 9 shows a superposition effect obtained by superposing a plurality of layers to which a plurality of beautification results belong in a preset order.
Fig. 10 shows a superposition effect obtained by superposing a plurality of layers to which a plurality of beautification results belong in a preset order.
Fig. 11 shows a superposition effect obtained by superposing a plurality of layers to which a plurality of beautification results belong in a preset order.
Fig. 12 shows a superposition effect obtained by superposing a plurality of layers to which a plurality of beautification results belong in a preset order.
Fig. 13 shows a superposition effect obtained by superposing a plurality of layers to which a plurality of beautification results belong in a preset order.
Fig. 14 shows a superposition effect obtained by superposing a plurality of layers to which a plurality of beautification results belong in a preset order.
Fig. 15 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
FIG. 16 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
FIG. 17 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein merely describes an association between associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality; for example, including at least one of A, B, and C may mean including any one or more elements selected from the group consisting of A, B, and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure, which may be applied to an image processing apparatus, which may be a terminal device, a server, or other processing device, or the like, or an image processing system, or the like. The terminal device may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In one example, the image processing method can be applied to a cloud server or a local server, the cloud server can be a public cloud server or a private cloud server, and the cloud server can be flexibly selected according to actual conditions.
In some possible implementations, the image processing method may also be implemented by the processor calling computer readable instructions stored in the memory.
As shown in fig. 1, in one possible implementation, the image processing method may include:
step S11, in response to the beautification operation for the user image, determines a target position of the target portion in the user image.
The user image may be any image including a target portion of a user, the user image may include one or more users, or may include a target portion of one or more users, and an implementation form of the user image may be flexibly determined according to an actual situation, which is not limited in the embodiment of the present disclosure.
The target part can be any part of the user image that needs to be beautified. Which parts the target part comprises, and its realization form, can be flexibly determined according to the actual situation of the beautification operation. For example, for a beautification operation of eye shadow rendering, the target part may be an eye part; for a face repairing (contouring) operation, the target part may be the face, the nose bridge, or another part with contouring requirements; for a lip makeup operation, the target part may be a lip part, and so on.
The beautification operation may be any operation for performing beautification processing on the target portion of the user image, such as various operations including an eye shadow rendering operation, a face repairing operation, a lip makeup operation, and the like. The operation content included in the beautification operation can be flexibly determined according to actual situations, and is not limited to the following disclosed embodiments. In one possible implementation, the beautification operation may include an operation of instructing beautification processing on a target portion of the user in the user image; in some possible implementations, the beautification operation may also include various types of beautification parameters that are input, and the like.
The beautifying parameters can be related parameters which are input by a user and are used for beautifying the target part, and the realization form of the beautifying parameters can be flexibly determined, such as various parameters of beautifying color, beautifying strength or transparency and the like.
The target location may be a location of the target portion in the user image. The manner of determining the target position is not limited in the embodiments of the present disclosure, and is not limited to the following embodiments of the present disclosure. In one possible implementation, a recognition process may be performed on the user image to determine the target location. The identification processing manner is not limited in the embodiment of the present disclosure, and for example, the identification processing manner may be key point identification or direct identification of the whole target portion.
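For instance, if key point identification is used, the target position might be taken as the centroid of the detected part key points. This is a minimal sketch under that assumption; the key point coordinates are hypothetical, and the landmark model that would produce them is out of scope:

```python
# Sketch: estimate the target position from part key points (hypothetical
# coordinates; a real system would obtain them from a face landmark model).
def target_position(keypoints):
    """Take the centroid of the detected key points as the target position."""
    xs = [p[0] for p in keypoints]
    ys = [p[1] for p in keypoints]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

eye_keypoints = [(180, 150), (220, 148), (200, 140), (200, 158)]
eye_center = target_position(eye_keypoints)
```

If the whole target portion were identified directly instead, the center of its bounding box could serve the same role.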
Step S12, determining a plurality of target areas to be beautified in the user image based on the target positions.
The type and position of the target area may be determined flexibly according to the actual situation of the beautifying operation, and multiple target areas may have overlapping areas or may be independent of each other, which is not limited in the embodiment of the present disclosure.
The determined target areas may be saved in the form of mask images (masks). On the one hand, the mask image may be used to determine the position of the target area in the user image; on the other hand, the pixel points in the mask image may have different transparencies, so that the target area has soft, smoothly transitioning boundaries. The transparency of each pixel point in the mask image can be flexibly set according to the actual situation, which is not limited in the embodiments of the present disclosure.
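A soft-boundary mask of this kind can be sketched by letting the alpha fall off linearly outside a region rectangle. The rectangle and feather width below are illustrative values; a production implementation would more likely blur a painted mask image:

```python
def feathered_mask(width, height, rect, feather):
    """Build a mask whose alpha is 1.0 inside the rectangle (x0, y0, x1, y1)
    and falls off linearly to 0.0 over `feather` pixels outside it, giving
    the soft, smoothly transitioning boundary described in the text."""
    x0, y0, x1, y1 = rect
    mask = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # Distance outside the rectangle along each axis.
            dx = max(x0 - x, 0, x - x1)
            dy = max(y0 - y, 0, y - y1)
            d = (dx * dx + dy * dy) ** 0.5
            mask[y][x] = max(0.0, 1.0 - d / feather) if d > 0 else 1.0
    return mask

m = feathered_mask(10, 10, (3, 3, 6, 6), feather=3)
```

Each mask value can then act directly as the per-pixel blend weight when fusing the beautification color with the original color.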
In one example, for a beautification operation of eye shadow rendering, the target areas may be one or more areas that need to be rendered, such as the following 6 areas: a base upper eye shadow area, a base lower eye shadow area, an upper eyelid area, an outer eye corner area, an inner eye corner area, and an outer upper eye shadow area. FIGS. 2-7 illustrate mask image schematics of target areas according to an embodiment of the present disclosure. As shown in FIG. 2, the base upper eye shadow area may be the area between the eye and the eyebrow; as shown in FIG. 3, the base lower eye shadow area may be the area where the lower eyelid below the eye is located; as shown in FIG. 4, the upper eyelid area may be the area above the eye where the upper eyelid is located; as shown in FIG. 5, the outer eye corner area may be the area where the corner of the eye near the side of the face contour is located; as shown in FIG. 6, the inner eye corner area may be the area where the corner of the eye near the nose is located; as shown in FIG. 7, the outer upper eye shadow area may be the area above the outer corner of the eye.
In some possible implementations, for the beautifying operation of the facial cosmetic, the target area may be an area that needs to be processed in the cosmetic, such as a forehead area, a nose bridge area, or a mandible area. In some possible implementations, the target area may be an area of the lip makeup that requires makeup, such as the upper lip, lower lip, center of lip, or edge of lip, for cosmetic manipulation of the lip makeup manipulation.
In some possible implementation manners, since the target area needs to be determined according to the target position, the target area is generally within a certain range of the target portion, so that the multiple target areas may belong to a preset area range of the target portion, the size of the preset area range may be determined according to actual conditions of the target portion and the target area, and the embodiment of the present disclosure is not limited.
The manner of determining the plurality of target areas according to the target position may be flexibly selected according to the actual situation: for example, the plurality of target areas may be determined by extending from the target position in multiple directions, or one or more target areas near the target position may be determined in different layers, as described in the following disclosed embodiments, which are not elaborated here.
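The first alternative, extending from the target position along several preset directions by preset ranges, can be sketched as follows; the directions, ranges, and region size are hypothetical illustration values:

```python
def extend_in_directions(center, directions, ranges, size=(40, 40)):
    """Extend from the target position along several preset unit directions,
    each by its own preset range, and return one region per direction.
    Directions, ranges, and the fixed region size are illustrative."""
    cx, cy = center
    regions = []
    for (ux, uy), r in zip(directions, ranges):
        x, y = cx + ux * r, cy + uy * r
        w, h = size
        regions.append((round(x - w / 2), round(y - h / 2), w, h))
    return regions

# e.g. upward, toward the nose, toward the face contour
dirs = [(0, -1), (-1, 0), (1, 0)]
areas = extend_in_directions((200, 150), dirs, [30, 50, 50])
```

This mirrors the later claim wording of "area extension according to a plurality of preset extension ranges in a plurality of preset directions".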
And step S13, performing beautification processing matched with the target area on the plurality of target areas respectively according to the beautification parameters in the beautification operation to obtain a plurality of beautification results.
The beautifying parameters can be realized in the forms described in the above embodiments, and are not described herein again. For a plurality of target areas, the beautification parameters corresponding to different target areas may be the same or different, for example, different target areas may correspond to the same or different beautification colors, or different areas may be beautified with the same or different beautification strengths, and the like, and may be flexibly set according to actual situations, which is not limited to the embodiments of the present disclosure.
The beautification result may be the result obtained after a target area is subjected to beautification processing. The beautification processing manner may differ for different target areas, so beautification processing can be performed on the plurality of target areas respectively to obtain a plurality of beautification results. Some possible implementations of step S13 can be seen in the following disclosed embodiments and are not elaborated here.
And step S14, generating a target user image with the target part beautified according to the plurality of beautifying results.
The target user image may be an image obtained by beautifying a preset area range of a target portion of the user image, and a mode of generating the target user image may be flexibly determined according to an actual situation, for example, a plurality of beautifying results may be fused to obtain the target user image, or a plurality of beautifying results may be fused with the user image to obtain the target user image. In some possible implementations, the plurality of beautification results may also belong to a plurality of layers respectively, in which case, the target user image may be obtained by layer superposition.
Some possible implementations of step S14 can be seen in the following disclosed embodiments and are not elaborated here.
In the embodiments of the present disclosure, a target position of a target part in a user image is determined in response to a beautification operation on the user image; a plurality of target areas to be beautified in the user image are determined based on the target position; beautification processing matched to each target area is then performed on the plurality of target areas respectively according to the beautification parameters in the beautification operation, to obtain a plurality of beautification results; and a target user image with the beautified target part is generated according to the plurality of beautification results. Through this process, the region to be beautified can be divided into a plurality of target areas based on the target position of the target part, and the target areas can be processed separately to obtain the target user image, thereby realizing independent beautification of multiple regions. This improves the flexibility of beautification and also enriches the beautification effect. For example, for an eye shadow rendering operation, the method provided by the embodiments of the present disclosure can process the plurality of target areas of the eye shadow rendering separately; on the one hand, this improves the eye shadow rendering precision of each target area, and on the other hand, it makes the overall eye shadow effect richer and more three-dimensional.
In one possible implementation, step S12 may include:
copying the user image into a plurality of layers respectively;
traversing a plurality of layers, and taking the traversed layers as target layers;
and in the target layer, determining N target areas to be beautified in the user image according to the target positions.
The layer may be any layer having an image processing or editing function, such as an editing layer in image editing software (PS, Photoshop).
The number of the layers can be flexibly determined according to actual conditions, and is not limited in the embodiment of the disclosure, and in some possible implementation manners, the number of the layers can be the same as the number of the target area; in some possible implementations, the number of layers may also be smaller than the number of target regions, in which case two or more target regions may be determined simultaneously in one or some layers.
Copying the user image into the plurality of layers may mean copying the target part of the user image together with its preset area range into the plurality of layers, copying the user image as a whole into the plurality of layers, directly duplicating the original layer in which the user image is located into the plurality of layers, and the like.
After the plurality of layers are obtained by copying, the plurality of layers can be traversed, and each traversed layer is respectively used as a target layer.
In each target layer, N target regions may be determined according to the target position, where N is a positive integer smaller than the total number of target regions.
In a possible implementation manner, N may be 1, that is, one target area is determined in each target layer. For example, for an eye shadow rendering operation, the 6 target areas to be beautified mentioned in the above disclosed embodiments may be determined; in one example, the user image may be copied into 6 layers, and 1 target area to be beautified is determined in each layer, for example, the base upper eye shadow region in the first layer, the base lower eye shadow region in the second layer, and so on.
In a possible implementation manner, N may also be an integer greater than 1 and smaller than the number of target areas. For example, for an eye shadow rendering operation with 6 target areas to be beautified, the user image may be copied into 5 layers in one example; since the beautification colors and beautification processing manners of the base upper eye shadow region and the base lower eye shadow region are generally the same, these two regions may be determined together in the first layer, while one target area is determined in each of the remaining 4 layers, and the like.
The method for determining the target area in the target layer can be flexibly chosen according to the actual situation of the target area, and is described in detail in the following disclosed embodiments; it is not expanded here.
Through the embodiment of the disclosure, the user images can be copied to the plurality of image layers, and the plurality of target areas are determined respectively and independently, so that the subsequent beautification treatment of the plurality of target areas is facilitated, the beautification flexibility is improved, the beautification effect of each target area is also facilitated to be changed, and the beautification abundance is improved.
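The layer-copying step described above can be sketched in NumPy, assuming arrays as the image representation (all function and variable names here are illustrative, not part of the disclosed embodiment):

```python
import numpy as np

def copy_into_layers(user_image, num_layers):
    """Copy the user image into several independent layers, one per target region."""
    # Independent copies, so beautifying one region cannot disturb the others.
    return [user_image.copy() for _ in range(num_layers)]

# Example: an RGB "user image" copied into 6 layers for the 6 eye shadow regions.
image = np.zeros((4, 4, 3), dtype=np.uint8)
layers = copy_into_layers(image, 6)
```

Because each layer is a deep copy rather than a view, a target area determined and edited in one layer leaves the other layers, and the original image, untouched.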
In a possible implementation manner, in the target layer, according to the target position, determining N target areas to be beautified in the user image includes:
acquiring preset position relations between the N target areas and the target part;
and in the target map layer, performing region extension according to a preset position relation by taking the target position as a center to obtain N target regions.
For example, the positional relationship between the base upper eye shadow region and the target part may be a range from x1 to x2 above the target part, and the positional relationship between the base lower eye shadow region and the target part may be a range from x3 to x4 below the target part, where the values of x1, x2, x3 and x4 may all be determined according to the actual situation and are not limited in the embodiment of the present disclosure. The positional relationships between the other target areas and the target part can be deduced by analogy and determined flexibly according to the actual eye shadow rendering operation, and are not repeated here.
In the case that N is greater than 1, the number of the acquired preset position relationships may also be N, and in this case, the N preset position relationships may be combined into one preset position relationship, or the subsequent processing may be performed based on the N preset position relationships, respectively.
After the preset position relationship is determined, in the target layer, the target position is used as the center, and region extension is performed according to the preset position relationship to obtain N target regions, wherein the N target regions determined by extension may be combined into one region or may be used as N individual target regions to be subjected to subsequent processing.
Through the embodiment of the disclosure, the target area can be quickly and conveniently determined according to the preset position relation in the target layer, the determination efficiency of the target area is improved, and then the processing efficiency of the whole beautifying process is improved.
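The region-extension step can be sketched as follows, taking a simple vertical band relative to the target position as the preset positional relationship (the offsets and names are hypothetical illustrations, not values from the disclosure):

```python
import numpy as np

def extend_region(target_pos, offset_range, image_shape):
    """Build a boolean region mask by extending rows relative to the target position.

    target_pos:   (row, col) centre of the target part (e.g. the eye).
    offset_range: (start, stop) vertical offsets from the target row;
                  negative values extend above the part, positive below.
    """
    h, w = image_shape[:2]
    row, _ = target_pos
    top = max(0, row + offset_range[0])
    bottom = min(h, row + offset_range[1])
    mask = np.zeros((h, w), dtype=bool)
    mask[top:bottom, :] = True
    return mask

# Base upper eye shadow band: from 10 px above the eye centre to 2 px above it.
mask = extend_region((50, 60), (-10, -2), (100, 120, 3))
```

A real implementation would also bound the region horizontally and feather the mask edges, but the principle — centre on the target position, extend by the preset relation — is the same.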
In a possible implementation manner, the plurality of layers may be arranged according to a preset sequence, where the preset sequence may be matched with a beautification execution sequence in the beautification operation.
The beautification execution sequence may be the order in which different target areas are beautified in the beautification operation. For example, in the eye shadow rendering operation, the beautification execution sequence may be, from first to last: the base upper eye shadow region, the base lower eye shadow region, the outer upper eye shadow region, the outer eye corner region, the upper eyelid region, and the inner eye corner region.
The preset sequence of the layer arrangement is matched with the beautifying execution sequence, and the matching mode can be set according to the actual situation. In a possible implementation manner, according to the sequence of beautification execution, the layer corresponding to the target area which is beautified first may be arranged at the lowest layer of the plurality of layers, and the layer corresponding to the target area which is beautified last may be arranged at the uppermost layer of the plurality of layers. Therefore, in one example, the target areas corresponding to the layers from bottom to top in the eye shadow rendering operation are: a base upper eye shadow region, a base lower eye shadow region, an outer upper eye shadow region, an outer corner of the eye region, an upper eyelid region, and an inner corner of the eye region.
According to the embodiment of the disclosure, the layers are arranged in a preset sequence matched with the beautification execution sequence in the beautification operation, so that the beautified target image can be obtained by simulating the actual steps and techniques of the beautification operation, effectively improving the beautification effect and its realism.
In one possible implementation, step S12 may include:
and taking the target position as a center, respectively performing region extension according to a plurality of preset extension ranges in a plurality of preset directions, and determining a plurality of target regions.
The plurality of preset directions may be directions of the plurality of target regions respectively relative to the target portion, and the plurality of preset extension ranges may be extension ranges of the plurality of target regions respectively relative to the target portion. The target areas all belong to the preset area range of the target part, so that the preset extension ranges are all within the preset area range.
Through the embodiment of the disclosure, the positions of a plurality of target areas can be determined simultaneously according to the directions and the extension ranges between the target areas and the target part, the convenience degree of determining the target areas is improved, and then the processing efficiency of the beautifying process is improved.
In one possible implementation, step S13 may include:
traversing a plurality of target areas, taking the traversed target areas as areas to be beautified, and acquiring the original colors of the user images in the areas to be beautified;
determining a beautification color corresponding to the area to be beautified based on beautification parameters in the beautification operation;
and in the area to be beautified, fusing the original color and the beautification color to obtain the beautification result of the area to be beautified.
The area to be beautified may be a target area traversed in the process of traversing the target area. In a possible implementation manner, since the plurality of target areas may be determined in the plurality of layers respectively, traversal of the plurality of target areas may also be implemented by traversing the plurality of layers, and then a subsequent process of fusing the original color and the beautification color in the area to be beautified may also be performed in the layer to which the area to be beautified belongs, in which case, the obtained plurality of beautification results may belong to the plurality of layers respectively.
The original color may be color information of the user image itself, and since the position of the area to be beautified in the user image may be determined through step S12, the color of the user image is extracted within the area range of the area to be beautified, so as to obtain the original color of the user image in the area to be beautified.
The beautification color may be a color for rendering the area to be beautified, and may be determined according to a color value or RGB channel values input by the user in the beautification operation. As described in the above-mentioned embodiments, different target areas may correspond to different beautification parameters, so the beautification color corresponding to the area to be beautified may be determined according to its area type; for example, the same beautification color may be used for the base upper eye shadow region and the base lower eye shadow region, while different beautification colors may be used for the outer eye corner region, the inner eye corner highlight region, the upper eyelid region, or the outer upper eye shadow region.
In the area to be beautified, the original color and the beautification color can be fused to obtain the beautification result of the area, where the fusion mode can be flexibly selected according to the actual situation. For example, each pixel of the area to be beautified can be traversed, and the original color and the beautification color of each pixel added or multiplied to realize the fusion. In a possible implementation manner, the beautification color and the mask image corresponding to the area to be beautified may also be mixed in the normal blending mode to realize the fusion, and the like.
Through the embodiment of the disclosure, each target area can be processed respectively according to the beautifying colors corresponding to different target areas to obtain the beautifying result of each target area, so that different target areas can have the beautifying effect of different colors, the richness of the whole beautifying effect is effectively improved, and the flexibility of the beautifying process is also improved.
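The per-region fusion described above — weighting the beautification color against the original color by the mask's per-pixel transparency — can be sketched as a plain alpha blend (an illustrative sketch only; the disclosure leaves the fusion mode open, and all names here are hypothetical):

```python
import numpy as np

def fuse_colors(original, beautify_color, mask_alpha):
    """Alpha-blend a flat beautification color over the original pixels.

    original:       (H, W, 3) float array, the user's own colors in the region.
    beautify_color: (3,) float array, e.g. an RGB value chosen by the user.
    mask_alpha:     (H, W) float array in [0, 1]; soft mask edges give the
                    powdery falloff described for the mask images.
    """
    alpha = mask_alpha[..., None]  # broadcast over the color channels
    return original * (1.0 - alpha) + beautify_color * alpha

original = np.full((2, 2, 3), 100.0)          # uniform gray "skin"
color = np.array([200.0, 0.0, 0.0])           # a red beautification color
alpha = np.array([[0.0, 0.5], [0.5, 1.0]])    # mask transparency per pixel
result = fuse_colors(original, color, alpha)
```

Pixels where the mask is fully transparent keep the original color, fully opaque pixels take the beautification color, and intermediate alphas produce the soft transition at the region boundary.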
In one possible implementation, step S14 may include:
superposing a plurality of layers to which a plurality of beautifying results belong according to a preset sequence to obtain a target beautifying result;
and fusing the target beautifying result with the user image to obtain a target user image.
As described in the above-mentioned embodiment, a plurality of target areas may be determined by a plurality of layers, and accordingly, a plurality of obtained beautification results may also belong to the plurality of layers, respectively.
In this case, multiple layers may be overlaid to obtain a target beautification result.
As described in the foregoing disclosure, the plurality of layers may be arranged according to a preset sequence, and thus the position relationship of the plurality of layers in the stacking process may be consistent with the preset sequence. Under the condition that the superposition position relationship of the layers is determined, the superposition sequence may be consistent with the preset sequence or different from the preset sequence.
The superposition mode may be flexibly determined according to the actual situation; for example, the layers may be directly superposed, or, in some possible implementation modes, superposed in one or more blending modes, for example, the plurality of layers may be blended and superposed in the multiply blending mode.
After the target beautifying result is obtained, the target beautifying result and the user image may be fused to obtain the target user image, and the fusion mode is not limited in the embodiment of the present disclosure, and the above fusion mode of the original color and the beautifying color may be referred to, and details are not described here.
Through the embodiment of the disclosure, the layers to which the beautifying results belong can be overlapped according to the preset sequence to obtain the target beautifying result, so that on one hand, the beautifying effect on the user image can be simply and conveniently determined, on the other hand, the layer overlapping can be realized based on the preset sequence, and the authenticity of the beautifying effect is improved.
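The bottom-to-top superposition of the layers in the preset sequence can be sketched as repeated alpha compositing (a hedged illustration, since the disclosure allows other superposition modes; names are illustrative):

```python
import numpy as np

def stack_layers(user_image, layers, alphas):
    """Composite per-region beautification layers over the user image,
    bottom-to-top, in the preset (beautification execution) order.

    layers: list of (H, W, 3) arrays, ordered from the region beautified
            first (bottom layer) to the one beautified last (top layer).
    alphas: list of (H, W) masks in [0, 1], one per layer.
    """
    result = user_image.astype(float)
    for layer, alpha in zip(layers, alphas):
        a = alpha[..., None]
        result = result * (1.0 - a) + layer.astype(float) * a
    return result

base = np.zeros((1, 1, 3))
region_layers = [np.full((1, 1, 3), 100.0), np.full((1, 1, 3), 200.0)]
masks = [np.ones((1, 1)), np.full((1, 1), 0.5)]
composited = stack_layers(base, region_layers, masks)
```

Because later layers are composited on top, the ordering of `layers` directly encodes the preset sequence: the region beautified last visually covers the ones beautified earlier, mimicking the real makeup steps.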
In a possible implementation manner, the superimposing, according to the preset sequence, the plurality of layers to which the plurality of beautification results belong to obtain the target beautification result may include:
according to the preset sequence, overlapping a plurality of layers to which a plurality of beautifying results belong to obtain a middle beautifying result;
and fusing the intermediate beautifying result with the preset texture material to obtain a target beautifying result.
The process of obtaining the intermediate beautifying result may refer to the above process of obtaining the target beautifying result by layer stacking, and is not described herein again.
The preset texture material can be an additional material for beautifying the target part, and its implementation can be flexibly changed according to the beautification operation: for the eye shadow rendering operation, the preset texture material may include an eye shadow powder material; for a face contouring operation, it may include a matte material or a highlight material; for a lip makeup operation, it may include a pearlescent material, a velvet material, or a matte-fog material.
The method of fusing the intermediate beautification result with the preset texture material may also refer to the method of fusing in the above-described embodiments, and will not be described herein again. In a possible implementation manner, the preset texture material and the intermediate beautifying result may be superimposed in a layer form, and in one example, in order to retain more texture details, the layer of the preset texture material may be superimposed on the layer of the intermediate beautifying result, so as to improve the beautifying effect.
Through the embodiment of the disclosure, on the basis of beautifying in multiple target areas, more texture detail information can be further superimposed, the fineness of beautifying operation is increased, and the beautifying effect is further improved.
Based on the above disclosed embodiments, a target user image in which the preset area range of the target part in the user image is divided into regions for beautification can be obtained. Fig. 8 is a schematic diagram of a user image according to an embodiment of the present disclosure, and figs. 9-14 show the superposition effects obtained by superposing, in the preset sequence, the plurality of layers to which the plurality of beautification results belong (to protect the subject of the image, part of the face in each image is mosaicked), where fig. 9 is a first superposition effect obtained by superposing the beautification result of the base upper eye shadow region; fig. 10 is a second superposition effect obtained by superposing the beautification result of the base lower eye shadow region on the basis of fig. 9; fig. 11 is a third superposition effect obtained by superposing the beautification result of the inner eye corner region on the basis of fig. 10; fig. 12 is a fourth superposition effect obtained by superposing the beautification result of the outer eye corner region on the basis of fig. 11; fig. 13 is a fifth superposition effect obtained by superposing the beautification result of the upper eyelid region on the basis of fig. 12; and fig. 14 is a sixth superposition effect obtained by superposing the beautification result of the outer upper eye shadow region on the basis of fig. 13. Because the images are displayed in grayscale, the change in the eye shadow effect is not obvious, but the comparison still shows that superposing the beautification results of different target areas by the method provided in the embodiment of the present disclosure yields a relatively real and natural eye shadow rendering effect.
Fig. 15 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure. As shown, the image processing apparatus 20 may include:
and the determining module 21 is used for responding to the beautifying operation aiming at the user image and determining the target position of the target part in the user image.
The area determining module 22 is configured to determine a plurality of target areas to be beautified in the user image based on the target positions, where the plurality of target areas belong to a preset area range of the target portion.
And the beautification module 23 is configured to perform beautification processing on the plurality of target areas respectively matched with the target areas according to the beautification parameters in the beautification operation, so as to obtain a plurality of beautification results.
And the generating module 24 is configured to generate a target user image obtained by beautifying the preset area range of the target portion according to the plurality of beautifying results.
In one possible implementation, the region determining module is configured to: copy the user image into a plurality of layers respectively; traverse the plurality of layers, taking each traversed layer as a target layer; and in the target layer, determine N target areas to be beautified in the user image according to the target position, where N is a positive integer smaller than the total number of target areas.
In one possible implementation, the region determining module is further configured to: acquiring preset position relations between the N target areas and the target part; and in the target map layer, performing region extension according to a preset position relation by taking the target position as a center to obtain N target regions.
In one possible implementation manner, the plurality of layers are arranged according to a preset sequence, and the preset sequence is matched with a beautifying execution sequence in the beautifying operation.
In one possible implementation, the region determining module is configured to: and taking the target position as a center, respectively performing region extension according to a plurality of preset extension ranges in a plurality of preset directions, and determining a plurality of target regions.
In one possible implementation, the beautification module is to: traversing a plurality of target areas, taking the traversed target areas as areas to be beautified, and acquiring the original colors of the user images in the areas to be beautified; determining a beautification color corresponding to the area to be beautified based on beautification parameters in the beautification operation; and in the area to be beautified, fusing the original color and the beautification color to obtain the beautification result of the area to be beautified.
In a possible implementation manner, the plurality of beautification results respectively belong to a plurality of layers, the layers are arranged according to a preset sequence, and the preset sequence is matched with the beautification execution sequence in the beautification operation; the generation module is configured to: superpose the plurality of layers to which the plurality of beautification results belong according to the preset sequence to obtain a target beautification result; and fuse the target beautification result with the user image to obtain the target user image.
In one possible implementation, the generation module is further configured to: superposing a plurality of layers to which a plurality of beautifying results belong according to a preset sequence to obtain a middle beautifying result; and fusing the intermediate beautifying result with the preset texture material to obtain a target beautifying result.
In one possible implementation, the target site includes an eye site, and the beautification operation includes an eye shadow rendering operation; the plurality of target regions includes one or more of a base upper eye shadow region, a base lower eye shadow region, an upper eyelid region, an outer corner of the eye region, an inner corner of the eye region, and an outer upper eye shadow region.
The disclosure relates to the field of augmented reality, and aims to detect or identify relevant features, states and attributes of a target object by means of various visual correlation algorithms by acquiring image information of the target object in a real environment, so as to obtain an AR effect combining virtual and reality matched with specific applications. For example, the target object may relate to a face, a limb, a gesture, an action, etc. associated with a human body, or a marker, a marker associated with an object, or a sand table, a display area, a display item, etc. associated with a venue or a place. The vision-related algorithms may involve visual localization, SLAM, three-dimensional reconstruction, image registration, background segmentation, key point extraction and tracking of objects, pose or depth detection of objects, and the like. The specific application can not only relate to interactive scenes such as navigation, explanation, reconstruction, virtual effect superposition display and the like related to real scenes or articles, but also relate to special effect treatment related to people, such as interactive scenes such as makeup beautification, limb beautification, special effect display, virtual model display and the like. The detection or identification processing of the relevant characteristics, states and attributes of the target object can be realized through the convolutional neural network. The convolutional neural network is a network model obtained by performing model training based on a deep learning framework.
Application scenario example
In the field of computer vision, how to obtain an image rendered by an eye shadow with real and rich effect becomes a problem to be solved urgently at present.
The application example of the present disclosure provides an image processing method, which includes the following processes:
Face recognition is performed based on the user image to determine the target position of the eye part in the user image, and region extension is performed outside the eye part according to the target position to obtain mask images of a plurality of target regions. In the application example of the present disclosure, mask images of 6 target regions are obtained, including: the base upper eye shadow region, the base lower eye shadow region, the upper eyelid region, the outer eye corner region, the inner eye corner region, and the outer upper eye shadow region.
In the application example of the disclosure, each mask image can be matched with the position of its region in the user image, and the mask image also contains per-pixel transparency information for the target region, so that the mask has soft transition boundaries and can simulate the diffusion effect of pressed powder on the skin.
In the application example of the present disclosure, the mask images of the 6 target regions may respectively belong to 6 layers, recorded as a, b, c, d, e, and f, where layer a corresponds to the base upper eye shadow region, b to the base lower eye shadow region, c to the outer upper eye shadow region, d to the outer eye corner region, e to the upper eyelid region, and f to the inner eye corner region. The layers are arranged from bottom to top in the order a-f, that is, layer a is at the bottom and layer f at the top. This layer arrangement simulates the steps and techniques of makeup application, matches most classic eye shadow makeup orders, and yields a relatively real eye shadow rendering effect.
In each layer, the beautification color and the mask image in the layer can be mixed in the normal blending mode; the mixed layers can be recorded as aa, bb, cc, dd, ee, and ff respectively, and each mixed layer is then blended with the user image in the multiply blending mode to obtain the beautification result of each target area, recorded as aaa, bbb, ccc, ddd, eee, and fff respectively.
The layers to which the beautification results belong are mixed and superposed from top to bottom according to the order of fff-aaa, so that the intermediate target beautification result can be obtained.
In one example, preset texture materials, such as eye shadow powder materials and the like, can be further superimposed on the basis of the intermediate target beautifying result to enhance the reality, stereoscopic impression and atmosphere of the eye shadow effect, so that multi-level, multi-detail and multi-texture rendering can be realized, and a finer eye shadow effect can be obtained.
The image processing method provided in the application example of the disclosure can make the eye shadow effect appear more natural through multi-level partitioned rendering, and supports more parameter definitions, such as varying the beautification colors of different target areas in the eye shadow or varying the transparency of the mask images corresponding to different target areas. This makes the eye shadow effect more controllable, lets users customize beautification parameters to their preference, and lets an organization conveniently provide richer customization functions to users according to the method in the application example of the disclosure.
It is understood that the above-mentioned method embodiments of the present disclosure can be combined with each other to form a combined embodiment without departing from the logic of the principle, which is limited by the space, and the detailed description of the present disclosure is omitted.
It will be understood by those skilled in the art that, in the method of the present disclosure, the order in which the steps are written does not imply a strict execution order or any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible internal logic.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a volatile computer readable storage medium or a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
In practical applications, the memory may be a volatile memory, such as a RAM; or a non-volatile memory, such as a ROM, a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or a combination of the above types of memories, and it provides instructions and data to the processor.
The processor may be at least one of ASIC, DSP, DSPD, PLD, FPGA, CPU, controller, microcontroller, and microprocessor. It is understood that the electronic devices for implementing the above-described processor functions may be other devices, and the embodiments of the present disclosure are not particularly limited.
The electronic device may be provided as a terminal, server, or other form of device.
Based on the same technical concept of the foregoing embodiments, the embodiments of the present disclosure also provide a computer program, which when executed by a processor implements the above method.
Fig. 16 is a block diagram of an electronic device 800 according to an embodiment of the disclosure. For example, the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
Referring to fig. 16, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 17 is a block diagram of an electronic device 1900 according to an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to Fig. 17, the electronic device 1900 includes a processing component 1922 that further includes one or more processors, and memory resources represented by a memory 1932 for storing instructions (e.g., application programs) executable by the processing component 1922. The application programs stored in the memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute the instructions to perform the above-described methods.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), can execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, in order to implement various aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of the embodiments of the present disclosure is exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (12)

1. An image processing method, comprising:
determining a target position of a target part in a user image in response to a beautification operation for the user image;
determining a plurality of target areas to be beautified in the user image based on the target position, wherein the plurality of target areas belong to a preset area range of the target part;
performing, according to beautification parameters in the beautification operation, beautification processing matched with the target areas on the plurality of target areas respectively to obtain a plurality of beautification results;
and generating, according to the plurality of beautification results, a target user image in which the preset area range of the target part is beautified.
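A minimal sketch of the four claimed steps, assuming a NumPy image, hypothetical region boxes derived from the target position by preset offsets, and simple alpha blending as the beautification processing (none of these choices are specified by the claim):

```python
import numpy as np

def beautify_image(user_image, target_position, region_offsets, beautify_color, alpha):
    # Step 1 (target position) is assumed already determined, e.g. by a
    # face-keypoint detector; here it is passed in as (row, col).
    h, w, _ = user_image.shape
    result = user_image.astype(np.float32)
    # Step 2: derive the target areas from preset offsets around the target position.
    for d_row, d_col, rh, rw in region_offsets:
        y0 = max(0, target_position[0] + d_row)
        x0 = max(0, target_position[1] + d_col)
        y1, x1 = min(h, y0 + rh), min(w, x0 + rw)
        # Steps 3-4: beautify each area (alpha blend) and merge into the output.
        region = result[y0:y1, x0:x1]
        result[y0:y1, x0:x1] = (1.0 - alpha) * region + alpha * np.asarray(beautify_color, np.float32)
    return result.astype(np.uint8)
```

Pixels outside every target area are left untouched, which matches the claim's restriction of beautification to the preset area range of the target part.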
2. The method according to claim 1, wherein the determining a plurality of target areas to be beautified in the user image based on the target position comprises:
copying the user image into a plurality of layers respectively;
traversing the plurality of layers, and taking a traversed layer as a target layer;
and in the target layer, determining N target areas to be beautified in the user image according to the target position, wherein N is a positive integer smaller than the number of the plurality of target areas.
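One way to read claim 2's layer handling, sketched under the assumption that the target areas are distributed round-robin so that each layer carries N areas with N smaller than the total count (the claim fixes only that inequality, not the assignment rule):

```python
import numpy as np

def split_regions_over_layers(user_image, region_boxes, layer_count):
    # Copy the user image into several independent layers.
    layers = [user_image.copy() for _ in range(layer_count)]
    # Traverse the layers; each traversed layer (the "target layer")
    # is assigned a subset of the target areas.
    assignment = {i: [] for i in range(layer_count)}
    for idx, box in enumerate(region_boxes):
        assignment[idx % layer_count].append(box)
    return layers, assignment
```

Keeping each group of areas on its own copy of the image lets each layer be beautified independently before the layers are recombined.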
3. The method according to claim 2, wherein the determining, in the target layer, N target areas to be beautified in the user image according to the target position comprises:
acquiring preset position relations between the N target areas and the target part;
and in the target layer, performing region extension according to the preset position relations by taking the target position as a center to obtain the N target areas.
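The region extension of claim 3 reduces to coordinate arithmetic: each preset position relation gives an offset and a size relative to the target position. The (offset, size) encoding below is an illustrative assumption, not taken from the patent:

```python
def extend_regions(target_position, preset_relations):
    # target_position: (row, col) of the target part.
    # preset_relations: per-region (d_row, d_col, height, width) relative to it.
    cy, cx = target_position
    regions = []
    for d_row, d_col, height, width in preset_relations:
        y0, x0 = cy + d_row, cx + d_col
        regions.append((y0, x0, y0 + height, x0 + width))  # (y0, x0, y1, x1) box
    return regions
```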
4. The method according to claim 2 or 3, wherein the plurality of layers are arranged in layers according to a preset sequence, and the preset sequence is matched with a beautification execution sequence in the beautification operation.
5. The method according to any one of claims 1 to 4, wherein the determining a plurality of target areas to be beautified in the user image based on the target position comprises:
taking the target position as a center, performing region extension in a plurality of preset directions according to a plurality of preset extension ranges respectively, to determine the plurality of target areas.
6. The method according to any one of claims 1 to 5, wherein the performing, according to beautification parameters in the beautification operation, beautification processing matched with the target areas on the plurality of target areas respectively to obtain a plurality of beautification results comprises:
traversing the plurality of target areas, taking a traversed target area as an area to be beautified, and acquiring an original color of the user image in the area to be beautified;
determining a beautification color corresponding to the area to be beautified based on the beautification parameters in the beautification operation;
and in the area to be beautified, fusing the original color and the beautification color to obtain a beautification result of the area to be beautified.
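Claim 6 does not specify the fusion formula. A common assumption is a per-pixel linear (alpha) blend, with the beautification parameter acting as the blend strength:

```python
import numpy as np

def fuse_colors(original, beautify_color, strength):
    # Blend the original color of the area to be beautified with the
    # beautification color; strength in [0, 1] plays the role of the
    # beautification parameter (0 = no change, 1 = full beautify color).
    original = np.asarray(original, dtype=np.float32)
    beautify = np.asarray(beautify_color, dtype=np.float32)
    return ((1.0 - strength) * original + strength * beautify).astype(np.uint8)
```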
7. The method according to any one of claims 1 to 6, wherein the plurality of beautification results belong to a plurality of layers respectively, the plurality of layers are arranged according to a preset sequence, and the preset sequence is matched with a beautification execution sequence in the beautification operation;
and wherein the generating, according to the plurality of beautification results, a target user image in which the preset area range of the target part is beautified comprises:
superimposing, according to the preset sequence, the plurality of layers to which the plurality of beautification results belong to obtain a target beautification result;
and fusing the target beautification result with the user image to obtain the target user image.
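Claim 7's superposition can be sketched as ordered alpha compositing: the layers are stacked in their preset sequence over the user image, which forms the bottom of the stack. Per-layer opacities are an illustrative assumption; the claim fixes only the ordering:

```python
import numpy as np

def composite_layers(user_image, layers, opacities):
    # Superimpose the beautification layers in the preset sequence
    # (list order here); starting from the user image also performs
    # the final fusion of the combined result with the user image.
    result = user_image.astype(np.float32)
    for layer, opacity in zip(layers, opacities):
        result = (1.0 - opacity) * result + opacity * layer.astype(np.float32)
    return result.astype(np.uint8)
```

Because later layers are blended on top of earlier ones, reordering the list changes the output, which is why the claim ties the sequence to the beautification execution order.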
8. The method according to claim 7, wherein the superimposing, according to the preset sequence, the plurality of layers to which the plurality of beautification results belong to obtain a target beautification result comprises:
superimposing, according to the preset sequence, the plurality of layers to which the plurality of beautification results belong to obtain an intermediate beautification result;
and fusing the intermediate beautification result with a preset texture material to obtain the target beautification result.
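For claim 8's texture fusion, the patent leaves the blend mode unspecified; a multiply blend, commonly used to add material texture such as shimmer or grain, is one plausible reading:

```python
import numpy as np

def apply_texture(intermediate_result, texture):
    # Multiply blend: white (255) texture pixels leave the intermediate
    # beautification result unchanged, darker texture pixels darken it.
    a = intermediate_result.astype(np.float32) / 255.0
    b = texture.astype(np.float32) / 255.0
    return np.clip(np.rint(a * b * 255.0), 0, 255).astype(np.uint8)
```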
9. The method according to any one of claims 1 to 8, wherein the target part comprises an eye part, and the beautification operation comprises an eye shadow rendering operation; and
the plurality of target areas comprise one or more of a base upper eye shadow area, a base lower eye shadow area, an upper eyelid area, an outer eye corner area, an inner eye corner area, and an outer upper eye shadow area.
10. An image processing apparatus characterized by comprising:
a determining module, configured to determine a target position of a target part in a user image in response to a beautification operation for the user image;
an area determining module, configured to determine a plurality of target areas to be beautified in the user image based on the target position, wherein the plurality of target areas belong to a preset area range of the target part;
a beautification module, configured to perform, according to beautification parameters in the beautification operation, beautification processing matched with the target areas on the plurality of target areas respectively to obtain a plurality of beautification results;
and a generating module, configured to generate, according to the plurality of beautification results, a target user image in which the preset area range of the target part is beautified.
11. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any of claims 1 to 9.
12. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 9.
CN202111137672.3A 2021-09-27 2021-09-27 Image processing method and device, electronic equipment and storage medium Pending CN113763286A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111137672.3A CN113763286A (en) 2021-09-27 2021-09-27 Image processing method and device, electronic equipment and storage medium
PCT/CN2022/120090 WO2023045941A1 (en) 2021-09-27 2022-09-21 Image processing method and apparatus, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111137672.3A CN113763286A (en) 2021-09-27 2021-09-27 Image processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113763286A true CN113763286A (en) 2021-12-07

Family

ID=78797743

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111137672.3A Pending CN113763286A (en) 2021-09-27 2021-09-27 Image processing method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN113763286A (en)
WO (1) WO2023045941A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023045941A1 (en) * 2021-09-27 2023-03-30 上海商汤智能科技有限公司 Image processing method and apparatus, electronic device and storage medium
CN115880168A (en) * 2022-09-30 2023-03-31 北京字跳网络技术有限公司 Image restoration method, device, equipment, computer readable storage medium and product
WO2023142645A1 (en) * 2022-01-28 2023-08-03 上海商汤智能科技有限公司 Image processing method and apparatus, and electronic device, storage medium and computer program product

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102622613A (en) * 2011-12-16 2012-08-01 彭强 Hair style design method based on eyes location and face recognition
US20130063473A1 (en) * 2011-09-12 2013-03-14 Microsoft Corporation System and method for layering using tile-based renderers
US20130127891A1 (en) * 2011-08-31 2013-05-23 Byungmoon Kim Ordering and Rendering Buffers for Complex Scenes with Cyclic Dependency
US20130342575A1 (en) * 2012-05-23 2013-12-26 1-800 Contacts, Inc. Systems and methods to display rendered images
CN104537612A (en) * 2014-08-05 2015-04-22 华南理工大学 Method for automatically beautifying skin of facial image
CN104574306A (en) * 2014-12-24 2015-04-29 掌赢信息科技(上海)有限公司 Face beautifying method for real-time video and electronic equipment
CN106023104A (en) * 2016-05-16 2016-10-12 厦门美图之家科技有限公司 Human face eye area image enhancement method and system and shooting terminal
CN107358573A (en) * 2017-06-16 2017-11-17 广东欧珀移动通信有限公司 Image U.S. face treating method and apparatus
CN107808137A (en) * 2017-10-31 2018-03-16 广东欧珀移动通信有限公司 Image processing method, device, electronic equipment and computer-readable recording medium
CN109102467A (en) * 2017-06-21 2018-12-28 北京小米移动软件有限公司 The method and device of picture processing
CN109409319A (en) * 2018-11-07 2019-03-01 北京旷视科技有限公司 A kind of pet image beautification method, device and its storage medium
US20190114466A1 (en) * 2017-10-12 2019-04-18 Casio Computer Co., Ltd. Image processing device, image processing method, and recording medium
CN110111279A (en) * 2019-05-05 2019-08-09 腾讯科技(深圳)有限公司 A kind of image processing method, device and terminal device
CN110458921A (en) * 2019-08-05 2019-11-15 腾讯科技(深圳)有限公司 A kind of image processing method, device, terminal and storage medium
CN111640163A (en) * 2020-06-03 2020-09-08 湖南工业大学 Image synthesis method and computer-readable storage medium
CN112348736A (en) * 2020-10-12 2021-02-09 武汉斗鱼鱼乐网络科技有限公司 Method, storage medium, device and system for removing black eye
CN112766234A (en) * 2021-02-23 2021-05-07 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN112883821A (en) * 2021-01-27 2021-06-01 维沃移动通信有限公司 Image processing method and device and electronic equipment
CN113240792A (en) * 2021-04-29 2021-08-10 浙江大学 Image fusion generation type face changing method based on face reconstruction

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107742274A (en) * 2017-10-31 2018-02-27 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and electronic equipment
CN109543646A (en) * 2018-11-30 2019-03-29 深圳市脸萌科技有限公司 Face image processing process, device, electronic equipment and computer storage medium
CN112102159A (en) * 2020-09-18 2020-12-18 广州虎牙科技有限公司 Human body beautifying method, device, electronic equipment and storage medium
CN112541955A (en) * 2020-12-17 2021-03-23 维沃移动通信有限公司 Image processing method, device and equipment
CN112801916A (en) * 2021-02-23 2021-05-14 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN113421204A (en) * 2021-07-09 2021-09-21 北京百度网讯科技有限公司 Image processing method and device, electronic equipment and readable storage medium
CN113763286A (en) * 2021-09-27 2021-12-07 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium


Also Published As

Publication number Publication date
WO2023045941A1 (en) 2023-03-30

Similar Documents

Publication Publication Date Title
CN112766234B (en) Image processing method and device, electronic equipment and storage medium
CN112767285B (en) Image processing method and device, electronic device and storage medium
US11114130B2 (en) Method and device for processing video
CN113763286A (en) Image processing method and device, electronic equipment and storage medium
CN109816764B (en) Image generation method and device, electronic equipment and storage medium
CN113160094A (en) Image processing method and device, electronic equipment and storage medium
CN110991327A (en) Interaction method and device, electronic equipment and storage medium
CN110889382A (en) Virtual image rendering method and device, electronic equipment and storage medium
CN112767288B (en) Image processing method and device, electronic equipment and storage medium
CN110989901B (en) Interactive display method and device for image positioning, electronic equipment and storage medium
CN111091610B (en) Image processing method and device, electronic equipment and storage medium
WO2023045979A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN107424130B (en) Picture beautifying method and device
CN113822798B (en) Method and device for training generation countermeasure network, electronic equipment and storage medium
CN113570581A (en) Image processing method and device, electronic equipment and storage medium
CN112783316A (en) Augmented reality-based control method and apparatus, electronic device, and storage medium
WO2023045950A1 (en) Image processing method and apparatus, electronic device, and storage medium
WO2023045961A1 (en) Virtual object generation method and apparatus, and electronic device and storage medium
WO2023045946A1 (en) Image processing method and apparatus, electronic device, and storage medium
WO2023142645A1 (en) Image processing method and apparatus, and electronic device, storage medium and computer program product
WO2023051356A1 (en) Virtual object display method and apparatus, and electronic device and storage medium
CN114266305A (en) Object identification method and device, electronic equipment and storage medium
CN113570583A (en) Image processing method and device, electronic equipment and storage medium
CN112906467A (en) Group photo image generation method and device, electronic device and storage medium
US20220270313A1 (en) Image processing method, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40056542

Country of ref document: HK