CN113781359A - Image processing method and device, electronic equipment and storage medium - Google Patents

Image processing method and device, electronic equipment and storage medium

Info

Publication number
CN113781359A
Authority
CN
China
Prior art keywords
eye
region
target
makeup
eye object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111137187.6A
Other languages
Chinese (zh)
Inventor
孙仁辉
苏柳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202111137187.6A priority Critical patent/CN113781359A/en
Publication of CN113781359A publication Critical patent/CN113781359A/en
Priority to PCT/CN2022/120109 priority patent/WO2023045950A1/en
Pending legal-status Critical Current

Classifications

    • G06T5/77
    • G06N3/02 Neural networks (G06N3/00 Computing arrangements based on biological models)
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/08 Learning methods
    • G06T5/94
    • G06T2207/10024 Color image (G06T2207/10 Image acquisition modality)
    • G06T2207/20081 Training; Learning (G06T2207/20 Special algorithmic details)
    • G06T2207/30201 Face (G06T2207/30196 Human being; Person)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium. The method comprises: in response to an eye makeup operation on a user image, determining an eye object to be subjected to eye makeup processing in the user image; dividing the eye object into a plurality of target regions based on hue information of the eye object and preset region parameters, wherein the range of each target region is larger than that of an original target region obtained by dividing the eye object based on the hue information alone; performing, on the plurality of target regions and according to eye makeup parameters in the eye makeup operation, eye makeup processing matched with the hue of each target region to obtain a plurality of eye makeup results; and generating, according to the plurality of eye makeup results, a target user image in which the eye object has been subjected to eye makeup processing.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer vision, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
With the development of computer vision technology, eye makeup operations on the eye regions of face images have become widely used in image processing. How to obtain a more attractive and natural eye makeup effect is a problem that urgently needs to be solved.
Disclosure of Invention
The present disclosure proposes an image processing scheme.
According to an aspect of the present disclosure, there is provided an image processing method including:
in response to an eye makeup operation on a user image, determining an eye object to be subjected to eye makeup processing in the user image; dividing the eye object into a plurality of target regions based on hue information of the eye object and preset region parameters, wherein the range of each target region is larger than that of an original target region obtained by dividing the eye object based on the hue information alone; performing, on the plurality of target regions and according to eye makeup parameters in the eye makeup operation, eye makeup processing matched with the hue of each target region to obtain a plurality of eye makeup results; and generating, according to the plurality of eye makeup results, a target user image in which the eye object has been subjected to eye makeup processing.
In one possible implementation manner, the eye object includes a plurality of eye objects, and the plurality of eye objects are respectively located in a plurality of image layers; the determining the eye object to be subjected to eye makeup processing in the user image comprises the following steps: performing key point identification processing on the user image, and determining an initial position of the eye object in the user image; copying the user images into the plurality of image layers respectively; and in each layer, carrying out position expansion by taking the initial position as a center to obtain an expansion position, and determining the eye object in each layer according to the expansion position.
In one possible implementation, the hue information includes shadow information and/or midtone information; the dividing the eye object into a plurality of target regions based on the hue information of the eye object and the preset region parameters comprises one or more of the following operations: extracting a shadow region from the eye object based on the shadow information of the eye object in combination with a first preset region parameter among the preset region parameters; and/or extracting a midtone region from the eye object based on the midtone information of the eye object in combination with a second preset region parameter among the preset region parameters.
In a possible implementation, the extracting a shadow region from the eye object based on the shadow information of the eye object in combination with a first preset region parameter among the preset region parameters includes: performing multiply blending based on the inverted grayscale map of the eye object to obtain a first mixing result; determining a first transparency of the pixel points in the eye object according to the first mixing result and the first preset region parameter; and extracting pixel points in the eye object according to a first preset transparency threshold and the first transparency to obtain the shadow region, wherein the shadow region is larger than an original shadow region obtained by dividing the eye object based on the shadow information.
In a possible implementation, the extracting a midtone region from the eye object based on the midtone information of the eye object in combination with a second preset region parameter among the preset region parameters includes: performing exclusion blending based on the grayscale map of the eye object to obtain a second mixing result; determining a second transparency of the pixel points in the eye object according to the second mixing result and the second preset region parameter; and extracting pixel points in the eye object according to a second preset transparency threshold and the second transparency to obtain the midtone region, wherein the midtone region is larger than an original midtone region obtained by dividing the eye object based on the midtone information.
In one possible implementation, the performing, on the plurality of target regions and according to the eye makeup parameters in the eye makeup operation, eye makeup processing matched with the hues of the target regions to obtain a plurality of eye makeup results includes: rendering the target regions respectively according to color parameters in the eye makeup parameters to obtain a plurality of intermediate eye makeup results; determining processing modes respectively corresponding to the target regions according to the hues of the target regions; and mixing the eye object with the plurality of intermediate eye makeup results according to the processing modes respectively corresponding to the plurality of target regions to obtain a plurality of eye makeup results.
In one possible implementation, the target region includes a shadow region and/or a midtone region; the determining, according to the hues of the target regions, the processing modes respectively corresponding to the target regions includes: determining that the processing mode includes multiply blending in the case that the target region includes a shadow region; and determining that the processing mode includes normal blending in the case that the target region includes a midtone region.
In one possible implementation manner, the generating an image of a target user after performing eye makeup processing on the eye object according to the plurality of eye makeup results includes: superposing the multiple eye makeup results to obtain a target eye makeup result; and fusing the target eye makeup result and the user image according to the fusion parameters in the eye makeup parameters to obtain the target user image.
According to an aspect of the present disclosure, there is provided an image processing apparatus including:
a determining module, configured to determine, in response to an eye makeup operation on a user image, an eye object to be subjected to eye makeup processing in the user image; a dividing module, configured to divide the eye object into a plurality of target regions based on hue information of the eye object and preset region parameters, wherein the range of each target region is larger than that of an original target region obtained by dividing the eye object based on the hue information; an eye makeup module, configured to perform, on the plurality of target regions and according to eye makeup parameters in the eye makeup operation, eye makeup processing matched with the hues of the target regions to obtain a plurality of eye makeup results; and a generating module, configured to generate, according to the plurality of eye makeup results, a target user image in which the eye object has been subjected to eye makeup processing.
In one possible implementation manner, the eye object includes a plurality of eye objects, and the plurality of eye objects are respectively located in a plurality of image layers; the determination module is to: performing key point identification processing on the user image, and determining an initial position of the eye object in the user image; copying the user images into the plurality of image layers respectively; and in each layer, carrying out position expansion by taking the initial position as a center to obtain an expansion position, and determining the eye object in each layer according to the expansion position.
In one possible implementation, the hue information includes shadow information and/or midtone information; the dividing module is configured to: extract a shadow region from the eye object based on the shadow information of the eye object in combination with a first preset region parameter among the preset region parameters; and/or extract a midtone region from the eye object based on the midtone information of the eye object in combination with a second preset region parameter among the preset region parameters.
In one possible implementation, the dividing module is further configured to: perform multiply blending based on the inverted grayscale map of the eye object to obtain a first mixing result; determine a first transparency of the pixel points in the eye object according to the first mixing result and the first preset region parameter; and extract pixel points in the eye object according to a first preset transparency threshold and the first transparency to obtain the shadow region, wherein the shadow region is larger than an original shadow region obtained by dividing the eye object based on the shadow information.
In one possible implementation, the dividing module is further configured to: perform exclusion blending based on the grayscale map of the eye object to obtain a second mixing result; determine a second transparency of the pixel points in the eye object according to the second mixing result and the second preset region parameter; and extract pixel points in the eye object according to a second preset transparency threshold and the second transparency to obtain the midtone region, wherein the midtone region is larger than an original midtone region obtained by dividing the eye object based on the midtone information.
In one possible implementation, the eye makeup module is configured to: render the target regions respectively according to color parameters in the eye makeup parameters to obtain a plurality of intermediate eye makeup results; determine processing modes respectively corresponding to the target regions according to the hues of the target regions; and mix the eye object with the plurality of intermediate eye makeup results according to the processing modes respectively corresponding to the plurality of target regions to obtain a plurality of eye makeup results.
In one possible implementation, the target region includes a shadow region and/or a midtone region; the eye makeup module is further configured to: determine that the processing mode includes multiply blending in the case that the target region includes a shadow region; and determine that the processing mode includes normal blending in the case that the target region includes a midtone region.
In one possible implementation, the generating module is configured to: superposing the multiple eye makeup results to obtain a target eye makeup result; and fusing the target eye makeup result and the user image according to the fusion parameters in the eye makeup parameters to obtain the target user image.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to: the above-described image processing method is performed.
According to an aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described image processing method.
In the embodiments of the present disclosure, an eye object to be subjected to eye makeup processing in a user image is determined in response to an eye makeup operation on the user image; the eye object is divided into a plurality of target regions according to hue information of the eye object and preset region parameters, the range of each target region being larger than that of the original target region obtained by dividing according to hue alone; eye makeup processing matched with the hue of each target region is performed on the plurality of target regions according to the eye makeup parameters in the eye makeup operation to obtain a plurality of eye makeup results; and a target user image in which the eye object has been subjected to eye makeup processing is generated according to the plurality of eye makeup results. Through this process, the preset region parameters increase the tonal distinction between different target regions and yield target regions larger than the original target regions obtained by hue division alone, and performing the corresponding eye makeup processing based on these target regions makes the overall eye makeup effect of the eye object more natural and realistic. For example, the eye object can be divided into a shadow region and a midtone region according to hues such as shadow and midtone, so that the brightness of the eye makeup is reduced as much as possible when processing the shadow region and preserved as much as possible when processing the midtone region; the eye makeup effect thus matches the original hue distribution of the eye object, improving the realism and naturalness of the eye makeup. In addition, the step of dividing a highlight region out of the eye object can be omitted, which reduces the amount of data to be processed and improves the efficiency of image processing.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
Fig. 2 illustrates a flow diagram of an image processing method according to an embodiment of the present disclosure.
Fig. 3 shows a flow diagram of an image processing method according to an embodiment of the present disclosure.
Fig. 4 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
Fig. 5 illustrates a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 6 illustrates a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Fig. 7 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure, which may be applied to an image processing apparatus, which may be a terminal device, a server, or other processing device, or the like, or an image processing system, or the like. The terminal device may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In one example, the image processing method can be applied to a cloud server or a local server, the cloud server can be a public cloud server or a private cloud server, and the cloud server can be flexibly selected according to actual conditions.
In some possible implementations, the image processing method may also be implemented by the processor calling computer readable instructions stored in the memory.
As shown in fig. 1, in one possible implementation, the image processing method may include:
step S11, in response to the eye makeup operation for the user image, determines an eye object to be subjected to eye makeup processing in the user image.
The user image may be any image including the eyes of the user, the user image may include one or more users, or may include the eyes of one or more users, and the implementation form of the user image may be flexibly determined according to actual situations, which is not limited in the embodiment of the present disclosure.
The eye object may be the part of the user image to be subjected to eye makeup processing. It may include the complete eyes, for example both eyes together with the nearby parts that can be made up, such as the parts where eye shadow can be rendered or the eyeball parts where a cosmetic pupil can be rendered; alternatively, it may include only the left eye or the right eye, flexibly selected according to the actual situation of the eye makeup operation.
The eye makeup operation may be any operation for performing eye makeup processing on an eye object of the user image, such as various operations including eye shadow rendering and cosmetic pupil rendering. The operation contents included in the eye makeup operation can be flexibly determined according to actual situations, and are not limited to the following embodiments. In one possible implementation, the eye makeup operation may include an operation of instructing eye makeup processing on an eye object in the user image; in some possible implementations, the eye makeup operation may further include various types of eye makeup parameters and the like.
The eye makeup parameters can be related parameters which are input by a user and used for carrying out eye makeup processing on the eye object, and the implementation form of the eye makeup parameters can be flexibly determined, such as various parameters including color parameters or fusion parameters.
The manner of determining the eye object is not limited in the embodiments of the present disclosure, and is not limited to the following disclosed embodiments. In one possible implementation, an eye recognition process may be performed on the user image to determine the target location. The identification processing manner is not limited in the embodiment of the present disclosure, and for example, the identification may be key point identification or direct identification of the whole eye.
In step S12, the eye object is divided into a plurality of target regions based on the hue information of the eye object and the preset region parameters.
The hue information may reflect the relative brightness of the eye object. The information content contained in the hue information can be flexibly determined according to the actual situation; in a possible implementation, the hue information may include one or more of highlight information, shadow information and midtone information.

The highlight information may reflect the regions of higher brightness in the eye object, the shadow information may reflect the regions of lower brightness, and the midtone information may reflect the regions whose brightness lies between highlight and shadow.
Different hue information can be determined from the eye object in different ways. In some possible implementations, the hue information can be obtained directly from the brightness of the pixel points in the eye object; in other possible implementations, the eye object can be processed in different ways to obtain different hue information. The manner in which the hue information is obtained is detailed in the disclosed embodiments below and is not expanded here for now.
The preset area parameter may be a relevant parameter for determining the target area, and an implementation form of the preset area parameter is not limited in the embodiment of the present disclosure, for example, the preset area parameter may be a relevant parameter such as a set area range or a range threshold.
In one possible implementation, the eye object may be divided into a plurality of original target regions based on the hue information of the eye object, and the target region may be further determined on the basis of the original target region by a preset region parameter, in which case the preset region parameter may be a related parameter for determining the target region on the basis of the original target region. For example, the range of the original target area may be adjusted by a preset value, or a relevant parameter required to be used in the process of further determining the target area on the basis of the original target area may be further determined. Since the target area may be determined on the basis of the original target area, there may be a certain correspondence between the two, and in a possible implementation, the range of the target area may be larger than the original target area.
The type and position of the target region may be flexibly determined according to the actual condition of the hue information in the eye object, and multiple target regions may have overlapping regions or may be independent of each other, which is not limited in the embodiment of the present disclosure.
In some possible implementations, the target region may include one or more of a highlight region, a shadow region, and a midtone region.
According to the hue information, the manner of dividing the eye object into the plurality of target regions can vary flexibly with the hue information; for example, the pixel points in the eye object can be divided into a highlight region, a shadow region, a midtone region and the like according to the acquired highlight information, shadow information and midtone information. Some possible implementations of step S12 are detailed in the disclosed embodiments below and are not expanded here for now.
And step S13, performing eye makeup processing matched with the color tones of the target areas on the target areas respectively according to the eye makeup parameters in the eye makeup operation to obtain a plurality of eye makeup results.
The implementation form of the make-up parameters may refer to the above disclosed embodiments, and will not be described herein again. For a plurality of target areas, the make-up parameters corresponding to different target areas may be the same or different, for example, different target areas may correspond to the same or different color parameters, or different areas may adopt the same or different fusion parameters to perform color parameter fusion, etc., and may be flexibly set according to actual situations, which is not limited to the embodiments of the present disclosure.
The eye makeup result may be the result obtained after a target region is subjected to eye makeup processing. Since the manner of eye makeup processing may differ between target regions, eye makeup processing can be performed on the plurality of target regions respectively to obtain a plurality of eye makeup results. Some possible implementations of step S13 are detailed in the disclosed embodiments below and are not expanded here for now.
Step S14 is to generate a target user image after performing eye makeup processing on the eye object, based on the plurality of eye makeup results.
The target user image may be an image obtained by performing eye makeup processing on an eye makeup object of the user image, and the mode of generating the target user image may be flexibly determined according to actual conditions, for example, multiple eye makeup results may be fused to obtain the target user image, or multiple eye makeup results and the user image may be fused to obtain the target user image. In some possible implementations, the plurality of eye makeup results may also belong to a plurality of layers, respectively, in which case, the target user image may be obtained by layer superposition.
Some possible implementations of step S14 are detailed in the disclosed embodiments below and are not expanded here for now.
In the embodiments of the present disclosure, an eye object to be subjected to eye makeup processing in a user image is determined in response to an eye makeup operation on the user image; the eye object is divided into a plurality of target regions according to hue information of the eye object and preset region parameters, the range of each target region being larger than that of the original target region obtained by dividing according to hue alone; eye makeup processing matched with the hue of each target region is performed on the plurality of target regions according to the eye makeup parameters in the eye makeup operation to obtain a plurality of eye makeup results; and a target user image in which the eye object has been subjected to eye makeup processing is generated according to the plurality of eye makeup results. Through this process, the preset region parameters increase the tonal distinction between different target regions and yield target regions larger than the original target regions obtained by hue division alone, and performing the corresponding eye makeup processing based on these target regions makes the overall eye makeup effect of the eye object more natural and realistic. For example, the eye object can be divided into a shadow region and a midtone region according to hues such as shadow and midtone, so that the brightness of the eye makeup is reduced as much as possible when processing the shadow region and preserved as much as possible when processing the midtone region; the eye makeup effect thus matches the original hue distribution of the eye object, improving the realism and naturalness of the eye makeup. In addition, the step of dividing a highlight region out of the eye object can be omitted, which reduces the amount of data to be processed and improves the efficiency of image processing.
In one possible implementation, the eye object may include a plurality of eye objects respectively located in a plurality of layers, for example, for the eye shadow rendering operation, one or more regions near the eye may be rendered, such as 6 regions, i.e., a base upper eye shadow region, a base lower eye shadow region, an upper eyelid region, an outer eye corner region, an inner eye corner region, and a right upper eye shadow region. In this case, the subsequent operations can be performed with the 6 regions as eye objects, respectively.
Therefore, in one possible implementation, step S11 may include:
performing key point identification processing on the user image, and determining the initial position of the eye object in the user image;
copying the user images into a plurality of image layers respectively;
and in each image layer, carrying out position expansion by taking the initial position as a center to obtain an expansion position, and determining the eye object in each image layer according to the expansion position.
The method for identifying the keypoints is not limited in the embodiment of the present disclosure, and for example, the keypoints may be identified by a related keypoint identification algorithm, or identified by a neural network having a function of identifying the keypoints, or the like.
The initial position may be a position of the eye object in the user image, and in a possible implementation, in a case that the eye object may include the plurality of eye objects, a position of a central region of the eye objects may be used as the initial position, for example, for an eye shadow rendering operation, each eye shadow region serving as the eye object is distributed around the eye, so that a position of the eye in the user image may be used as the initial position.
The layer may be any layer having image processing or editing functions, such as an editing layer in image editing software (PS).
The number of the plurality of layers may be flexibly determined according to the actual situation and is not limited in the embodiments of the present disclosure. In some possible implementations, the number of layers may be the same as the number of eye objects; in other possible implementations, the number of layers may be smaller than the number of eye objects, in which case two or more eye objects may be determined in the same layer, for example, a base upper eye shadow region object and a base lower eye shadow region object may be determined simultaneously in one layer.
The user image is respectively copied to a plurality of layers, a plurality of eye objects in the user image can be respectively copied to the plurality of layers, the user image can be entirely copied to the plurality of layers, or an original layer where the user image is located is directly copied to the plurality of layers, and the like.
In each layer, position expansion can be performed with the initial position as the center to obtain an expansion position, and the eye object in that layer is determined according to the expansion position. The expansion manner is not limited in the embodiments of the present disclosure; for example, in each layer, expansion may be performed by a corresponding range in a corresponding direction according to the positional correspondence between the eye object in that layer and the initial position, where the positional correspondence may be determined according to the objective relationship between the eye object and the initial position: for example, an upper eye shadow region object lies above the initial position of the eye, and an inner eye corner region lies on the inner side of the initial position of the eye.
After the eye object is determined in each layer, the eye makeup result of the eye object may be determined through steps S12 to S14 in each layer, and in some possible implementations, the eye makeup results of the layers may be superimposed to further obtain an image of the target user after eye makeup processing.
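For illustration only, a minimal Python/NumPy sketch of this layer-copy and position-expansion procedure is given below; the keypoint detector, the per-region offset boxes and the returned layer/region representation are assumptions introduced for the example, not part of the disclosure.

```python
import numpy as np

def determine_eye_objects(user_image, detect_eye_keypoints, region_offsets):
    """Copy the user image into one layer per eye region and expand around the
    eye's initial position to locate the eye object in each layer (sketch)."""
    # Initial position: center of the identified eye keypoints (hypothetical detector).
    keypoints = np.asarray(detect_eye_keypoints(user_image))    # (N, 2) array of (y, x)
    cy, cx = keypoints.mean(axis=0).astype(int)

    eye_objects = []
    for dy_up, dy_down, dx_left, dx_right in region_offsets:    # one offset box per eye object
        layer = user_image.copy()                               # copy the user image into its own layer
        # Expand around the initial position to obtain the expansion position.
        y0, y1 = max(cy - dy_up, 0), min(cy + dy_down, layer.shape[0])
        x0, x1 = max(cx - dx_left, 0), min(cx + dx_right, layer.shape[1])
        eye_objects.append((layer, (y0, y1, x0, x1)))           # eye object = layer plus region box
    return eye_objects
```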
According to this embodiment of the present disclosure, by copying the user image into a plurality of layers, a plurality of eye objects can be determined separately and independently, which facilitates subsequent eye makeup processing of each eye object individually, improves the flexibility of eye makeup, makes it convenient to change the eye makeup effect of each eye object, and enriches the eye makeup effect.
Fig. 2 shows a flowchart of an image processing method according to an embodiment of the present disclosure, and as shown in one possible implementation, step S12 may include one or more of the following operations:
Step S121, extracting a shadow region from the eye object based on the shadow information of the eye object in combination with a first preset region parameter among the preset region parameters; and/or,
Step S122, extracting a midtone region from the eye object based on the midtone information of the eye object in combination with a second preset region parameter among the preset region parameters.
The numbers of step S121 and step S122 are only used to distinguish the above different steps, and do not limit the implementation order of the different steps, and the different steps may be executed simultaneously or sequentially, and the order is not limited in the embodiment of the present disclosure. Step S12 may include both of the above two steps, or may selectively perform some of the steps.
According to the embodiments of the present disclosure, the eye object can be flexibly divided into a shadow region and/or a midtone region according to the shadow information and/or the midtone information, which effectively improves the flexibility of beautifying the eye object and allows eye makeup processing to be realized flexibly and quickly; on the other hand, because the highlight region has little influence on the eye object and its eye makeup effect after processing is not obvious, processing of the highlight region can be omitted, improving the efficiency of eye makeup processing.
In one possible implementation, step S121 may include:
performing multiply blending based on the inverted grayscale map of the eye object to obtain a first mixing result;
determining a first transparency of a pixel point in the eye object according to the first mixing result and the first preset region parameter;
and extracting pixel points in the eye object according to the first preset transparency threshold and the first transparency to obtain a shadow region.
The inverted grayscale map of the eye object may be an image obtained by performing inverse grayscale processing on each pixel point of the eye object; the inverse grayscale processing inverts the grayscale range of the eye object's image, linearly or nonlinearly, to obtain an image opposite to its grayscale map. The multiply blending based on the inverted grayscale map of the eye object may be performed by multiply blending the inverted grayscale map of the eye object with itself to obtain the first mixing result; specifically, the inverted grayscale map of the eye object may be copied, and multiply blending may then be performed on the two identical inverted grayscale maps to obtain the first mixing result.
The first blending result obtained in the above manner can reflect the shadow information of the eye object, and therefore, based on the first blending result, the original shadow region in the eye object can be further determined, and the original shadow region can be a region obtained by dividing the eye object according to only the shadow information represented by the obtained first blending result.
Therefore, in a possible implementation, the shadow region may be determined according to the first mixing result, further in combination with the first preset region parameter. The first preset area parameter may be a preset parameter for determining a shadow area, and a parameter value thereof may be flexibly determined according to an actual situation, and is not limited to the embodiments of the present disclosure. In one example, the first preset parameter may be any value greater than 1, such as 1.1 to 1.7.
Specifically, an alpha channel value of each pixel point in the first mixed result may be obtained, and the alpha channel value may be multiplied by the first preset region parameter to obtain a multiplied first gray scale result. The first gray result can be mapped into a value of transparency through a mapping relation between gray and transparency, so that the first transparency of each pixel point in the eye object is determined. The specific mapping manner of the mapping relationship can be flexibly set according to the actual situation, and is not limited in the embodiment of the disclosure.
The first preset transparency threshold may be a preset threshold for screening pixel points belonging to a shadow region in the eye object, and a specific value of the first preset transparency threshold is not limited in the embodiment of the present disclosure.
According to the first preset transparency threshold value, whether the first transparency of each pixel point in the eye object is within the range of the first preset transparency threshold value or not can be judged, if yes, the pixel point is determined to belong to the shadow region, otherwise, the pixel point is considered to belong to the region outside the shadow region, through the screening process, a plurality of pixel points belonging to the shadow region can be screened from the eye object, and then the pixel points belonging to the shadow region in the eye object are extracted to obtain the shadow region.
Since the first transparency may be obtained by multiplying the first mixing result by the first preset region parameter, for some pixel points in the eye object, the transparency determined based on the first mixing result of the pixel points may not fall within the range of the first preset transparency threshold, and after the pixel points are multiplied by the first preset region parameter, the pixel points may be divided into shadow regions. Therefore, the shadow area obtained by the method of the embodiment of the present disclosure may be larger than the original shadow area determined only according to the shadow information. For example, in an example, according to the first blending result, it may be determined that the alpha channel value of a certain pixel is 100, the corresponding transparency is 39%, and the corresponding transparency does not belong to the range of the first preset transparency threshold (for example, greater than 40%), and after the first blending result is multiplied by the first preset region parameter (for example, 1.2), it may be determined that the first gray scale result of the pixel is 120, and the mapped first transparency is 47%, so that after the first blending result is multiplied by the first preset region parameter, the pixel may belong to the range of the first preset transparency threshold and then belong to the shadow region.
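As a minimal sketch of the shadow-region extraction just described (assuming 8-bit values, a linear gray-to-transparency mapping of alpha/255 consistent with the 100 to 39% example above, and illustrative values for the first preset region parameter and the first preset transparency threshold):

```python
import numpy as np

def extract_shadow_region(eye_rgb, first_region_param=1.2, first_thresh=0.40):
    """Multiply-blend the inverted grayscale map with itself, scale the result by
    the first preset region parameter, map it to a transparency and keep the
    pixels above the first preset transparency threshold (illustrative sketch)."""
    gray = eye_rgb.astype(np.float32).mean(axis=-1)        # grayscale map, 0..255
    inverted = 255.0 - gray                                # inverted grayscale map
    first_mix = inverted * inverted / 255.0                # multiply blending with itself
    first_gray = np.clip(first_mix * first_region_param, 0.0, 255.0)
    first_transparency = first_gray / 255.0                # linear gray -> transparency mapping
    return first_transparency > first_thresh               # boolean mask of the shadow region
```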
According to the embodiments of the present disclosure, multiply blending can be performed on the inverted grayscale map of the eye object to obtain the first mixing result reflecting the shadow information, so that the shadow region is extracted from the eye object according to the mixing result and the first preset region parameter. On one hand, this way of obtaining the shadow information is quick, convenient and accurate, which improves the efficiency of eye makeup processing and effectively improves the accuracy and effect of the eye makeup; on the other hand, by introducing the first preset region parameter, the contrast of the shadow region relative to other target regions can be effectively enhanced and the range of the shadow region can be expanded, compensating for the omitted highlight region, so that eye makeup processing according to the divided target regions is more accurate and produces a better effect.
In one possible implementation, step S122 may include:
performing exclusion mixing based on the gray scale image of the eye object to obtain a second mixing result;
determining a second transparency of the pixel points in the eye object according to the second mixing result and the second preset region parameter;
and extracting pixel points in the eye object according to a second preset transparency threshold and the second transparency to obtain a midtone region.
The grayscale map of the eye object may be an image obtained by performing grayscale processing on the eye object. The exclusion blending based on the grayscale map of the eye object may be performed by exclusion-blending the grayscale map of the eye object with itself to obtain the second mixing result. Exclusion blending can be an image blending mode in PS editing software; it changes the brightness and grayscale of the image, and the midtone information of the eye object can be obtained based on the result of the exclusion blending.

The second mixing result obtained in the above manner can therefore reflect the midtone information of the eye object, so that, based on the second mixing result, an original midtone region in the eye object can be further determined; the original midtone region may be a region obtained by dividing the eye object only according to the midtone information represented by the obtained second mixing result.

Therefore, in a possible implementation, the midtone region may be determined according to the second mixing result, further combined with the second preset region parameter. The second preset region parameter may be a preset parameter for determining the midtone region, and its value may be flexibly determined according to the actual situation and is not limited in the embodiments of the present disclosure. The value of the second preset region parameter may be the same as or different from that of the first preset region parameter. In one example, the second preset region parameter may also be any value greater than 1, such as 1.1 to 1.7.
Specifically, an alpha channel value of each pixel point in the second mixed result may be obtained, and the alpha channel value may be multiplied by the second preset region parameter to obtain a multiplied second gray scale result. The second gray scale result can be mapped into a value of the transparency through a mapping relation between the gray scale and the transparency, so that the second transparency of each pixel point in the eye object is determined. The specific mapping manner of the mapping relationship can refer to the above disclosed embodiments.
The second preset transparency threshold may be a preset threshold for screening the pixel points belonging to the midtone region in the eye object, and its specific value is not limited in the embodiments of the present disclosure. In a possible implementation, the second preset transparency threshold and the first preset transparency threshold may have different values.

According to the second preset transparency threshold, it can be judged whether the second transparency of each pixel point in the eye object falls within the range of the second preset transparency threshold; if so, the pixel point is determined to belong to the midtone region, otherwise it is considered to belong to a region outside the midtone region. Through this screening process, a plurality of pixel points belonging to the midtone region can be screened out of the eye object, and these pixel points are then extracted to obtain the midtone region.

Similar to the relationship between the shadow region and the original shadow region, the midtone region obtained with the second preset region parameter may be larger than the original midtone region determined only according to the midtone information; the principle is the same as described in the embodiments above.
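Correspondingly, a sketch of the midtone extraction under the same assumptions; the Exclusion blend of the grayscale map with itself is computed as a + b - 2*a*b/255, and the second preset region parameter and threshold are again placeholder values:

```python
import numpy as np

def extract_midtone_region(eye_rgb, second_region_param=1.2, second_thresh=0.40):
    """Exclusion-blend the grayscale map with itself, scale by the second preset
    region parameter and threshold the resulting transparency (illustrative sketch)."""
    gray = eye_rgb.astype(np.float32).mean(axis=-1)             # grayscale map, 0..255
    second_mix = 2.0 * gray - 2.0 * gray * gray / 255.0         # exclusion blending with itself
    second_gray = np.clip(second_mix * second_region_param, 0.0, 255.0)
    second_transparency = second_gray / 255.0                   # linear gray -> transparency mapping
    return second_transparency > second_thresh                  # boolean mask of the midtone region
```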
According to the embodiments of the present disclosure, exclusion blending is performed on the grayscale map of the eye object to obtain a second mixing result reflecting the midtone information, so that the midtone region is extracted from the eye object according to the mixing result and the second preset region parameter. On one hand, this way of obtaining the midtone information is fast and convenient and can be batch-processed together with the way of obtaining the shadow information, improving the efficiency and effect of eye makeup as a whole; on the other hand, by introducing the second preset region parameter, the contrast of the midtone region relative to other target regions can be effectively enhanced and the range of the midtone region can be expanded, compensating for the omitted highlight region, so that eye makeup processing according to the divided target regions is more accurate and produces a better effect.
Fig. 3 shows a flowchart of an image processing method according to an embodiment of the present disclosure, and as shown in the figure, in one possible implementation, step S13 may include:
step S131, rendering the target areas respectively according to the color parameters in the eye makeup parameters to obtain a plurality of intermediate eye makeup results.
In step S132, a processing method corresponding to each of the plurality of target regions is determined based on the color tones of the plurality of target regions.
Step S133, mixing the eye object with the plurality of intermediate eye makeup results according to the processing modes corresponding to the plurality of target regions, respectively, to obtain a plurality of eye makeup results.
The intermediate make-up result may be a rendering result obtained by rendering the color parameter to the target area. The color parameter may be a parameter for performing color rendering on the eye object, and the color parameter may be a color value or may be in the form of an RGB channel value. The color parameter can be determined according to the color selected by the user or the color value input by the user, and can also be preset for color setting and the like, and can be flexibly selected according to the actual situation.
In the multiple target areas, different target areas can correspond to the same color parameters or different color parameters, and the color parameters can be flexibly selected according to actual conditions. In one possible implementation manner, for each target area, the color parameters corresponding to each target area may be mixed to obtain the intermediate eye makeup result corresponding to each target area.
Since different target regions can be divided based on different hue information, different target regions can correspond to different hues; for example, as mentioned in the embodiments of the disclosure above, the shadow region corresponds to the shadows and the midtone region corresponds to the midtones.
The target areas with different color tones may be processed in different ways, wherein the correspondence between the color tones and the processing ways can be detailed in the following disclosure embodiments, and will not be expanded here.
After the processing mode of each target region is determined, the eye object and each intermediate eye makeup result can be mixed according to the processing mode to obtain a plurality of eye makeup results.
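A hedged sketch of step S13 on a single target region: the color parameter is filled into the region to form an intermediate eye makeup result, and the blending function matched to the region's hue (chosen as described in the next embodiment) is then applied. The function names and the mask/array conventions follow the earlier sketches and are assumptions, not the disclosure's reference implementation.

```python
import numpy as np

def make_up_region(eye_rgb, region_mask, color_rgb, blend_fn):
    """Render the color parameter onto one target region to get an intermediate
    eye makeup result, then mix it with the eye object using the processing
    mode (blend function) matched to the region's hue (illustrative sketch)."""
    base = eye_rgb.astype(np.float32)
    intermediate = base.copy()
    intermediate[region_mask] = np.asarray(color_rgb, dtype=np.float32)  # intermediate eye makeup result
    result = base.copy()
    result[region_mask] = blend_fn(base, intermediate)[region_mask]      # hue-matched mixing
    return result                                                        # one eye makeup result
```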
Through the embodiment of the disclosure, the mixing processing of corresponding processing modes can be respectively carried out on each target area according to the tone corresponding to different target areas so as to obtain the eye makeup result of each target area, so that different target areas can have the eye makeup effect corresponding to the tone, the accuracy and the abundance of the whole eye makeup effect are effectively improved, and the flexibility of the eye makeup processing process is also improved.
In one possible implementation, step S132 may include:
determining that the processing mode includes multiply blending in the case that the target region includes a shadow region;
determining that the processing mode includes normal blending in the case that the target region includes a midtone region.
As can be seen from the above disclosure, when the target region is a shadow region, the intermediate eye makeup result of the shadow region and the eye object may be mixed in a multiply blending manner to obtain the eye makeup result of the shadow region. Multiply blending darkens the mixing result, so that the tonal character of the shadow region is fully preserved and the processing effect is improved.
When the target region is the midtone region, the intermediate eye makeup result of the midtone region and the eye object may be mixed in a normal blending manner to obtain the eye makeup result of the midtone region. Normal blending has little influence on the brightness of the mixing result, thereby fully retaining the tonal character of the midtone region and improving the processing effect.
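The two processing modes named above can be written as the standard blend formulas below (a sketch assuming 8-bit values; the normal-blend opacity is an illustrative parameter). Multiply blending darkens, which suits the shadow region, while normal blending leaves brightness essentially unchanged, which suits the midtone region. Either function can be passed as the blend_fn argument of the make_up_region sketch above.

```python
def multiply_blend(base, top):
    """Multiply blending: result = base * top / 255 (darkens; used for shadow regions)."""
    return base * top / 255.0

def normal_blend(base, top, opacity=0.8):
    """Normal blending: plain alpha compositing at the given opacity
    (brightness largely preserved; used for midtone regions)."""
    return (1.0 - opacity) * base + opacity * top
```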
According to the various disclosed embodiments, different mixing modes are adopted for mixing aiming at the color tones of different target areas, so that the obtained eye makeup result is consistent with the original color tone property of the eye object, and the authenticity of the eye makeup effect is greatly improved.
Fig. 4 shows a flowchart of an image processing method according to an embodiment of the present disclosure, and as shown in the figure, in one possible implementation, step S14 may include:
step S141, superposing the multiple eye makeup results to obtain a target eye makeup result;
and S142, fusing the target eye makeup result and the user image according to the fusion parameters in the eye makeup parameters to obtain the target user image.
In a possible implementation manner, the plurality of target areas may be determined in a plurality of layers, and in this case, the obtained plurality of eye makeup results may also belong to the plurality of layers, respectively.
The superposition manner may be flexibly determined according to the actual situation; for example, the layers may be superposed directly, and in some possible implementations the superposition may also be performed using one or more blending modes, for example, the multiple layers may be blended and superposed in a multiply blending manner.
In one possible implementation, multiple eye makeup results may also be fused or mixed directly to obtain a target eye makeup result.
After the target eye makeup result is obtained, the target eye makeup result and the user image may be fused according to the fusion parameters to obtain the target user image. The fusion manner is also not limited in the embodiments of the present disclosure; for example, the pixel values of pixel points at the same position may be added or multiplied to implement the fusion, or the pixel values of pixel points at the same position may be fused by weighting.
The fusion weight of the target eye makeup result in the weighted fusion may be determined according to a fusion parameter. The fusion parameter may be a preset parameter value, or may be determined according to a parameter value input by the user in the eye makeup operation, and may be flexibly selected according to the actual situation.
In a possible implementation, the fusion parameter may be a transparency: the target eye makeup result is multiplied by the transparency and then added to the user image to obtain the target user image, and the transparency effect of the overall eye makeup can be changed by changing the transparency in the fusion parameter.
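For illustration only, the following is a minimal sketch of this fusion step, assuming float RGB arrays in [0, 1]. It shows both the additive form mentioned above and a weighted variant; the function and variable names are illustrative assumptions.

import numpy as np

def fuse_additive(user_image, target_eye_makeup, transparency):
    # Additive form: multiply the target eye makeup result by the transparency
    # and add it to the user image.
    return np.clip(user_image + transparency * target_eye_makeup, 0.0, 1.0)

def fuse_weighted(user_image, target_eye_makeup, transparency):
    # Weighted form: the transparency acts as the fusion weight of the
    # target eye makeup result.
    return np.clip((1.0 - transparency) * user_image
                   + transparency * target_eye_makeup, 0.0, 1.0)

if __name__ == "__main__":
    user_image = np.random.rand(64, 64, 3)   # stand-in for the user image
    target = np.random.rand(64, 64, 3)       # stand-in for the target eye makeup result
    subtle = fuse_weighted(user_image, target, transparency=0.3)
    strong = fuse_weighted(user_image, target, transparency=0.8)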
Through the embodiments of the present disclosure, the target eye makeup result and the user image can be fused according to the fusion parameter in the eye makeup parameters to obtain the target user image, and the eye makeup effect can be conveniently adjusted by changing the fusion parameter, thereby improving the flexibility and freedom of the whole eye makeup process.
Fig. 5 illustrates a block diagram of an image processing apparatus according to an embodiment of the present disclosure. As shown, the image processing apparatus 20 may include:
the determining module 21 is configured to determine an eye object to be subjected to eye makeup processing in the user image in response to an eye makeup operation for the user image.
The dividing module 22 is configured to divide the eye object into a plurality of target regions based on the hue information of the eye object and preset region parameters, where the range of each target region is larger than that of the corresponding original target region, the original target region being obtained by dividing the eye object based on the hue information.
The eye makeup module 23 is configured to perform, on the plurality of target regions respectively and according to the eye makeup parameters in the eye makeup operation, eye makeup processing matched with the color tones of the target regions, to obtain a plurality of eye makeup results.
And the generating module 24 is configured to generate a target user image after performing eye makeup processing on the eye object according to the plurality of eye makeup results.
In one possible implementation, the eye object includes a plurality of eye objects, and the plurality of eye objects are respectively located in a plurality of image layers; the determining module is configured to: perform key point identification processing on the user image, and determine an initial position of the eye object in the user image; copy the user image into the plurality of image layers respectively; and, in each layer, perform position expansion with the initial position as a center to obtain an expanded position, and determine the eye object in each layer according to the expanded position.
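For illustration only, a minimal sketch of this determination step follows, assuming that eye key points are available from an off-the-shelf face-landmark detector. The bounding-box representation, the expansion factor, and all names are illustrative assumptions rather than part of the disclosure.

import numpy as np

def expand_box(box, factor, image_shape):
    """Expand a bounding box about its center by the given factor, clipped to the image."""
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    half_w, half_h = (x1 - x0) * factor / 2.0, (y1 - y0) * factor / 2.0
    height, width = image_shape[:2]
    return (max(0, int(cx - half_w)), max(0, int(cy - half_h)),
            min(width, int(cx + half_w)), min(height, int(cy + half_h)))

def extract_eye_objects(user_image, eye_landmarks, expand_factor=1.5):
    """Copy the user image into one layer per eye object and crop each expanded eye region.

    eye_landmarks: one (N, 2) array of eye key points (x, y) per eye object, e.g. as
    produced by a face-landmark detector (an assumption of this sketch).
    """
    eye_objects = []
    for points in eye_landmarks:
        x0, y0 = points.min(axis=0)          # initial position derived from the key points
        x1, y1 = points.max(axis=0)
        layer = user_image.copy()            # each eye object lives in its own layer
        bx0, by0, bx1, by1 = expand_box((x0, y0, x1, y1), expand_factor, layer.shape)
        eye_objects.append(layer[by0:by1, bx0:bx1])
    return eye_objects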
In one possible implementation, the hue information includes shadow information and/or midtone information; the dividing module is configured to: extract a shadow region from the eye object based on the shadow information of the eye object in combination with a first preset region parameter among the preset region parameters; and/or extract a midtone region from the eye object based on the midtone information of the eye object in combination with a second preset region parameter among the preset region parameters.
In one possible implementation, the dividing module is further configured to: perform multiply blending based on the inverse grayscale image of the eye object to obtain a first blending result; determine a first transparency of pixel points in the eye object according to the first blending result and the first preset region parameter; and extract pixel points in the eye object according to a first preset transparency threshold and the first transparency to obtain the shadow region, where the shadow region is larger than an original shadow region, the original shadow region being obtained by dividing the eye object based on the shadow information.
In one possible implementation, the dividing module is further configured to: perform exclusion blending based on the grayscale image of the eye object to obtain a second blending result; determine a second transparency of pixel points in the eye object according to the second blending result and the second preset region parameter; and extract pixel points in the eye object according to a second preset transparency threshold and the second transparency to obtain the midtone region, where the midtone region is larger than an original midtone region, the original midtone region being obtained by dividing the eye object based on the midtone information.
In one possible implementation, the eye makeup module is configured to: render the plurality of target regions respectively according to the color parameters in the eye makeup parameters to obtain a plurality of intermediate eye makeup results; determine processing manners respectively corresponding to the plurality of target regions according to the hues of the target regions; and blend the eye object with the plurality of intermediate eye makeup results according to the processing manners respectively corresponding to the plurality of target regions to obtain the plurality of eye makeup results.
In one possible implementation, the target region includes a shadow region and/or a midtone region; the eye makeup module is further configured to: determine that the processing manner includes multiply blending in a case where the target region includes a shadow region; and determine that the processing manner includes normal blending in a case where the target region includes a midtone region.
In one possible implementation, the generating module is configured to: superpose the plurality of eye makeup results to obtain a target eye makeup result; and fuse the target eye makeup result and the user image according to the fusion parameters in the eye makeup parameters to obtain the target user image.
The present disclosure relates to the field of augmented reality. By acquiring image information of a target object in a real environment, relevant features, states and attributes of the target object are detected or identified by means of various vision-related algorithms, so as to obtain an AR effect, matched with a specific application, that combines the virtual and the real. For example, the target object may be a face, a limb, a gesture or an action associated with a human body, or a marker or sign associated with an object, or a sand table, a display area or a display item associated with a venue or a place. The vision-related algorithms may involve visual localization, SLAM, three-dimensional reconstruction, image registration, background segmentation, key point extraction and tracking of objects, pose or depth detection of objects, and the like. The specific application may relate not only to interactive scenarios associated with real scenes or items, such as navigation, explanation, reconstruction and virtual effect superposition display, but also to special effect processing associated with people, such as interactive scenarios of makeup beautification, body beautification, special effect display and virtual model display. The detection or identification of the relevant features, states and attributes of the target object can be implemented through a convolutional neural network, which is a network model obtained by model training based on a deep learning framework.
Application scenario example
In the field of computer vision, how to obtain eye makeup images with realistic and rich effects has become an urgent problem to be solved.
The application example of the present disclosure provides an image processing method, which includes the following processes:
performing face recognition based on a user image, extracting an eye object from the user image, and separating, for the eye object, a shadow region from a midtone region to obtain two target regions, namely a shadow region and a midtone region, wherein,
the process of shadow separation may include:
and carrying out positive film bottom-overlapped mixing on the reverse gray level image of the eye object and the reverse gray level image of the eye object to obtain a first mixing result. Acquiring an alpha channel value in the first mixed result, multiplying the alpha channel value by a first preset region parameter (such as 1.2) to obtain a first gray scale result, mapping the first gray scale result into transparency according to a black-white relation of the first gray scale result, thereby obtaining first transparency of each pixel point in the eye object, and screening the pixel points belonging to the shadow region from the eye object based on the first transparency to obtain the shadow region.
The process of midtone separation may include:
performing exclusion blending of the grayscale image of the eye object with the grayscale image of the eye object to obtain a second blending result; obtaining an alpha channel value in the second blending result and multiplying it by a second preset region parameter (for example, 1.2) to obtain a second grayscale result; mapping the second grayscale result to transparency according to its black-white relationship to obtain the second transparency of each pixel point in the eye object; and screening, based on the second transparency, the pixel points belonging to the midtone region from the eye object to obtain the midtone region.
After the shadow region and the midtone region are obtained respectively, the eye makeup color parameters input by the user can be blended with the shadow region and the midtone region respectively to obtain intermediate eye makeup results x and y.
Multiply blending is performed on the obtained intermediate eye makeup result x and the eye object to obtain an eye makeup result g;
and normal blending is performed on the obtained intermediate eye makeup result y and the eye object to obtain an eye makeup result h.
The two layers of the eye makeup results g and h are superposed, that is, packed into one group, to obtain a target eye makeup result i. The target eye makeup result i can be fused with the user image to obtain a target user image after eye makeup; changing the transparency of i changes the transparency effect of the overall eye makeup color, and the target eye makeup result i may be fused with the user image in the normal blending manner.
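For illustration only, a minimal end-to-end sketch of this application example follows, assuming that the eye-object layer is a full-size copy of the user image and that the shadow and midtone regions are given as per-pixel transparency masks. The flat-color rendering of the intermediate results and all names are illustrative assumptions rather than part of the disclosure.

import numpy as np

def apply_eye_makeup(user_image, eye_object, shadow_alpha, midtone_alpha,
                     shadow_color, midtone_color, fusion_transparency):
    """End-to-end sketch of the application example.

    user_image, eye_object: float RGB arrays in [0, 1] of the same shape
    shadow_alpha, midtone_alpha: per-pixel region transparencies in [0, 1], shape (H, W)
    shadow_color, midtone_color: eye makeup color parameters, length-3 sequences
    fusion_transparency: transparency of the target eye makeup result i
    """
    # Intermediate eye makeup results x and y: color the two target regions.
    x = np.broadcast_to(np.asarray(shadow_color, dtype=float), eye_object.shape)
    y = np.broadcast_to(np.asarray(midtone_color, dtype=float), eye_object.shape)

    g = eye_object * x   # eye makeup result g: multiply blending of x with the eye object
    h = np.array(y)      # eye makeup result h: normal blending of y over the eye object

    # Superpose the two layers, each restricted to its own region, into the
    # target eye makeup result i.
    i = eye_object * (1 - shadow_alpha[..., None]) + g * shadow_alpha[..., None]
    i = i * (1 - midtone_alpha[..., None]) + h * midtone_alpha[..., None]

    # Fuse i with the user image; changing fusion_transparency changes the
    # transparency of the overall eye makeup color.
    return np.clip((1 - fusion_transparency) * user_image
                   + fusion_transparency * i, 0.0, 1.0)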
According to the image processing method provided in this application example of the present disclosure, target regions of different hues can be processed separately, so that the eye makeup effect is displayed more naturally and retains the skin texture details of the eye area, achieving a what-you-see-is-what-you-get processing effect. The method provided in the embodiments of the present disclosure also allows more parameters to be defined, such as changing the color parameters of the eye makeup in different target regions or changing the fusion parameters between the target eye makeup result and the user image, which makes the eye makeup effect more controllable and allows a user to conveniently customize beautification parameters according to personal preference; for example, a designer may need to change the eye shadow color during makeup design. It is also convenient for an organization to provide richer customization functions to users according to the method in this application example; for example, a software developer in the organization may build the method into the underlying technology of makeup-related software and reserve an interface for modifying the makeup color, so that the color can be changed conveniently.
It can be understood that the above method embodiments of the present disclosure can be combined with each other to form combined embodiments without departing from the principle and logic thereof; due to space limitations, details are not repeated in the present disclosure.
It will be understood by those skilled in the art that, in the methods of the present disclosure, the order in which the steps are written does not imply a strict order of execution or impose any limitation on the implementation; the specific order of execution of the steps should be determined by their functions and possible inherent logic.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a volatile computer readable storage medium or a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
In practical applications, the memory may be a volatile memory such as a RAM; or a non-volatile memory such as a ROM, a flash memory, a Hard Disk Drive (HDD) or a Solid-State Drive (SSD); or a combination of the above types of memories, and it provides instructions and data to the processor.
The processor may be at least one of an ASIC, a DSP, a DSPD, a PLD, an FPGA, a CPU, a controller, a microcontroller, or a microprocessor. It can be understood that the electronic device for implementing the above processor function may also be another device, which is not particularly limited in the embodiments of the present disclosure.
The electronic device may be provided as a terminal, server, or other form of device.
Based on the same technical concept of the foregoing embodiments, the embodiments of the present disclosure also provide a computer program, which when executed by a processor implements the above method.
Fig. 6 is a block diagram of an electronic device 800 according to an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like terminal.
Referring to fig. 6, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800, the relative positioning of components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 7 is a block diagram of an electronic device 1900 according to an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 7, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), can execute computer-readable program instructions to implement various aspects of the present disclosure by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (11)

1. An image processing method, comprising:
in response to eye makeup operation aiming at a user image, determining an eye object to be subjected to eye makeup processing in the user image;
dividing the eye object into a plurality of target regions based on hue information and preset region parameters of the eye object, wherein the range of the target regions is larger than that of an original target region, and the original target region is obtained by dividing the eye object based on the hue information;
performing, according to the eye makeup parameters in the eye makeup operation and on the plurality of target regions respectively, eye makeup processing matched with the color tones of the target regions, to obtain a plurality of eye makeup results;
and generating a target user image after eye makeup processing is carried out on the eye object according to the eye makeup results.
2. The method according to claim 1, wherein the eye object comprises a plurality of eye objects, the plurality of eye objects being respectively located in a plurality of image layers;
the determining the eye object to be subjected to eye makeup processing in the user image comprises the following steps:
performing key point identification processing on the user image, and determining an initial position of the eye object in the user image;
copying the user image into the plurality of image layers respectively;
and in each layer, carrying out position expansion by taking the initial position as a center to obtain an expansion position, and determining the eye object in each layer according to the expansion position.
3. The method according to claim 1 or 2, wherein the hue information comprises shadow information and/or midtone information;
the dividing the eye object into a plurality of target regions based on the hue information of the eye object and the preset region parameters comprises one or more of the following operations:
extracting a shadow region from the eye object based on the shadow information of the eye object in combination with a first preset region parameter among the preset region parameters; and/or,
extracting a midtone region from the eye object based on the midtone information of the eye object in combination with a second preset region parameter among the preset region parameters.
4. The method according to claim 3, wherein the extracting a shadow region from the eye object based on the shadow information of the eye object in combination with a first preset region parameter among the preset region parameters comprises:
performing multiply blending based on the inverse grayscale image of the eye object to obtain a first blending result;
determining a first transparency of pixel points in the eye object according to the first blending result and the first preset region parameter;
extracting pixel points in the eye object according to a first preset transparency threshold and the first transparency to obtain the shadow region, wherein the shadow region is larger than an original shadow region, and the original shadow region is obtained by dividing the eye object based on the shadow information.
5. The method according to claim 3 or 4, wherein the extracting a midtone region from the eye object based on the midtone information of the eye object in combination with a second preset region parameter among the preset region parameters comprises:
performing exclusion blending based on the grayscale image of the eye object to obtain a second blending result;
determining a second transparency of pixel points in the eye object according to the second blending result and the second preset region parameter;
extracting pixel points in the eye object according to a second preset transparency threshold and the second transparency to obtain the midtone region, wherein the midtone region is larger than an original midtone region, and the original midtone region is obtained by dividing the eye object based on the midtone information.
6. The method according to any one of claims 1 to 5, wherein the performing, according to the eye makeup parameters in the eye makeup operation and on the plurality of target regions respectively, eye makeup processing matched with the color tones of the target regions to obtain a plurality of eye makeup results comprises:
rendering the target areas respectively according to color parameters in the eye makeup parameters to obtain a plurality of intermediate eye makeup results;
determining processing modes corresponding to the target areas respectively according to the hues of the target areas;
and mixing the eye object and the plurality of intermediate eye makeup results according to the processing modes corresponding to the plurality of target areas respectively to obtain a plurality of eye makeup results.
7. The method of claim 6, wherein the target region comprises a shadow region and/or a midtone region;
the determining, according to the hues of the target regions, the processing manners respectively corresponding to the target regions comprises:
determining that the processing manner comprises multiply blending in a case where the target region comprises a shadow region;
and determining that the processing manner comprises normal blending in a case where the target region comprises a midtone region.
8. The method according to any one of claims 1 to 7, wherein the generating of the target user image after performing eye makeup processing on the eye object according to the plurality of eye makeup results comprises:
superposing the multiple eye makeup results to obtain a target eye makeup result;
and fusing the target eye makeup result and the user image according to the fusion parameters in the eye makeup parameters to obtain the target user image.
9. An image processing apparatus characterized by comprising:
the eye makeup processing module is used for responding to eye makeup operation aiming at the user image and determining an eye object to be subjected to eye makeup processing in the user image;
a dividing module, configured to divide the eye object into a plurality of target regions based on hue information and preset region parameters of the eye object, where a range of the target region is larger than an original target region, and the original target region is obtained by dividing the eye object based on the hue information;
an eye makeup module, configured to perform, on the plurality of target regions respectively and according to eye makeup parameters in the eye makeup operation, eye makeup processing matched with the color tones of the target regions, to obtain a plurality of eye makeup results;
and a generating module, configured to generate a target user image after eye makeup processing is performed on the eye object according to the plurality of eye makeup results.
10. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any one of claims 1 to 8.
11. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 8.
CN202111137187.6A 2021-09-27 2021-09-27 Image processing method and device, electronic equipment and storage medium Pending CN113781359A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111137187.6A CN113781359A (en) 2021-09-27 2021-09-27 Image processing method and device, electronic equipment and storage medium
PCT/CN2022/120109 WO2023045950A1 (en) 2021-09-27 2022-09-21 Image processing method and apparatus, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111137187.6A CN113781359A (en) 2021-09-27 2021-09-27 Image processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113781359A true CN113781359A (en) 2021-12-10

Family

ID=78853735

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111137187.6A Pending CN113781359A (en) 2021-09-27 2021-09-27 Image processing method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN113781359A (en)
WO (1) WO2023045950A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109584153A (en) * 2018-12-06 2019-04-05 北京旷视科技有限公司 Modify the methods, devices and systems of eye
CN112330527A (en) * 2020-05-29 2021-02-05 北京沃东天骏信息技术有限公司 Image processing method, image processing apparatus, electronic device, and medium
CN112581395A (en) * 2020-12-15 2021-03-30 维沃移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN112767285A (en) * 2021-02-23 2021-05-07 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN112766234A (en) * 2021-02-23 2021-05-07 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN112801916A (en) * 2021-02-23 2021-05-14 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
US20210158579A1 (en) * 2019-11-22 2021-05-27 Beijing Sensetime Technology Development Co., Ltd. Image processing method and apparatus, electronic device, and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5999658A (en) * 1996-06-28 1999-12-07 Dainippon Screen Mfg. Co., Ltd. Image tone interpolation method and apparatus therefor
JP2005190435A (en) * 2003-12-26 2005-07-14 Konica Minolta Photo Imaging Inc Image processing method, image processing apparatus and image recording apparatus
US8867833B2 (en) * 2013-03-14 2014-10-21 Ili Technology Corporation Image processing method
JP2015156072A (en) * 2014-02-20 2015-08-27 国立大学法人お茶の水女子大学 Eye make-up design creation method and program
CN108053365B (en) * 2017-12-29 2019-11-05 百度在线网络技术(北京)有限公司 Method and apparatus for generating information
CN110572652B (en) * 2019-09-04 2021-07-16 锐捷网络股份有限公司 Static image processing method and device
CN111583102B (en) * 2020-05-14 2023-05-16 抖音视界有限公司 Face image processing method and device, electronic equipment and computer storage medium
CN113781359A (en) * 2021-09-27 2021-12-10 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109584153A (en) * 2018-12-06 2019-04-05 北京旷视科技有限公司 Modify the methods, devices and systems of eye
US20210158579A1 (en) * 2019-11-22 2021-05-27 Beijing Sensetime Technology Development Co., Ltd. Image processing method and apparatus, electronic device, and storage medium
CN112330527A (en) * 2020-05-29 2021-02-05 北京沃东天骏信息技术有限公司 Image processing method, image processing apparatus, electronic device, and medium
CN112581395A (en) * 2020-12-15 2021-03-30 维沃移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN112767285A (en) * 2021-02-23 2021-05-07 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN112766234A (en) * 2021-02-23 2021-05-07 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN112801916A (en) * 2021-02-23 2021-05-14 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN113160094A (en) * 2021-02-23 2021-07-23 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
潇霞 [Xiao Xia]: "打造完美妆效" [Creating a Perfect Makeup Effect], 人像摄影 [Portrait Photography], no. 06, 1 June 2009 (2009-06-01), pages 174-178 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023045950A1 (en) * 2021-09-27 2023-03-30 上海商汤智能科技有限公司 Image processing method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
WO2023045950A1 (en) 2023-03-30

Similar Documents

Publication Publication Date Title
CN112767285B (en) Image processing method and device, electronic device and storage medium
CN112766234B (en) Image processing method and device, electronic equipment and storage medium
CN110189249B (en) Image processing method and device, electronic equipment and storage medium
CN109816764B (en) Image generation method and device, electronic equipment and storage medium
CN113160094A (en) Image processing method and device, electronic equipment and storage medium
CN110675310A (en) Video processing method and device, electronic equipment and storage medium
WO2023045941A1 (en) Image processing method and apparatus, electronic device and storage medium
CN111553864A (en) Image restoration method and device, electronic equipment and storage medium
CN111091610B (en) Image processing method and device, electronic equipment and storage medium
CN111243011A (en) Key point detection method and device, electronic equipment and storage medium
CN104517271B (en) Image processing method and device
WO2023045979A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN112219224A (en) Image processing method and device, electronic equipment and storage medium
CN112767288A (en) Image processing method and device, electronic equipment and storage medium
CN111445415B (en) Image restoration method and device, electronic equipment and storage medium
CN107424130B (en) Picture beautifying method and device
CN113822798B (en) Method and device for training generation countermeasure network, electronic equipment and storage medium
WO2023045950A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN113570581A (en) Image processing method and device, electronic equipment and storage medium
WO2023045961A1 (en) Virtual object generation method and apparatus, and electronic device and storage medium
CN111935418B (en) Video processing method and device, electronic equipment and storage medium
WO2023045946A1 (en) Image processing method and apparatus, electronic device, and storage medium
WO2023142645A1 (en) Image processing method and apparatus, and electronic device, storage medium and computer program product
CN112613447A (en) Key point detection method and device, electronic equipment and storage medium
CN113762212A (en) Image processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40061771

Country of ref document: HK