WO2023045950A1 - Image processing method and apparatus, electronic device, and storage medium


Info

Publication number
WO2023045950A1
Authority
WO
WIPO (PCT)
Prior art keywords
eye
area
eye makeup
target
makeup
Application number
PCT/CN2022/120109
Other languages
French (fr)
Chinese (zh)
Inventor
孙仁辉
苏柳
Original Assignee
上海商汤智能科技有限公司
Application filed by 上海商汤智能科技有限公司
Publication of WO2023045950A1

Classifications

    • G: PHYSICS; G06: COMPUTING; CALCULATING OR COUNTING
    • G06T 5/77: Image enhancement or restoration; Retouching; Inpainting; Scratch removal
    • G06N 3/04: Computing arrangements based on biological models; Neural networks; Architecture, e.g. interconnection topology
    • G06N 3/08: Computing arrangements based on biological models; Neural networks; Learning methods
    • G06T 5/94: Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G06T 2207/10024: Indexing scheme for image analysis or enhancement; Image acquisition modality; Color image
    • G06T 2207/20081: Special algorithmic details; Training; Learning
    • G06T 2207/30201: Subject of image; Human being; Face

Definitions

  • the present disclosure relates to the field of computer vision, and in particular to an image processing method and device, electronic equipment and a storage medium.
  • the present disclosure proposes an image processing scheme.
  • an image processing method including:
  • In response to the eye makeup operation on the user image, determining an eye object to be subjected to eye makeup processing in the user image; dividing the eye object into multiple target areas based on the hue information of the eye object and preset area parameters, wherein the range of each target area is larger than that of the corresponding original target area, the original target area being obtained by dividing the eye object based on the hue information; according to the eye makeup parameters in the eye makeup operation, performing eye makeup processing matching the color tone of each of the multiple target areas to obtain multiple eye makeup results; and generating a target user image according to the multiple eye makeup results.
  • The determining of the eye object to be subjected to eye makeup processing in the user image includes: performing key point recognition processing on the user image to determine the initial position of the eye object in the user image; copying the user image to multiple layers; and, for each of the multiple layers, expanding the position in that layer centered on the initial position to obtain an expanded position, and determining the eye object to be subjected to eye makeup processing in that layer according to the expanded position.
  • The hue information includes at least one of shadow information or midtone information. Dividing the eye object into multiple target areas based on the hue information of the eye object and the preset area parameters includes at least one of the following operations: extracting a shadow area from the eye object based on the shadow information of the eye object combined with a first preset area parameter among the preset area parameters; or extracting a midtone area from the eye object based on the midtone information of the eye object combined with a second preset area parameter among the preset area parameters.
  • The extracting of the shadow area from the eye object based on the shadow information combined with the first preset area parameter includes: performing multiply blending based on the inverted grayscale image of the eye object to obtain a first blending result; determining the first transparency of the pixels in the eye object according to the first blending result and the first preset area parameter; and extracting pixels from the eye object according to a first preset transparency threshold and the first transparency to obtain the shadow area, wherein the shadow area is larger than the original shadow area, the original shadow area being obtained by dividing the eye object based on the shadow information.
  • The extracting of the midtone area from the eye object based on the midtone information combined with the second preset area parameter includes: performing exclusion blending based on the grayscale image of the eye object to obtain a second blending result; determining the second transparency of the pixels in the eye object according to the second blending result and the second preset area parameter; and extracting pixels from the eye object according to a second preset transparency threshold and the second transparency to obtain the midtone area, wherein the midtone area is larger than the original midtone area, the original midtone area being obtained by dividing the eye object based on the midtone information.
  • The performing of eye makeup processing matching the color tone of each of the multiple target areas to obtain multiple eye makeup results includes: rendering the multiple target areas respectively according to the color parameters in the eye makeup parameters to obtain multiple intermediate eye makeup results; determining the processing method corresponding to each of the multiple target areas according to the hues of the multiple target areas; and blending the eye object with the multiple intermediate eye makeup results respectively according to the processing methods corresponding to the multiple target areas to obtain the multiple eye makeup results.
  • The target area includes at least one of a shadow area or a midtone area. Determining the processing methods corresponding to the multiple target areas according to their hues includes: in response to determining that the target area includes a shadow area, determining that the processing method includes multiply blending; and in response to determining that the target area includes a midtone area, determining that the processing method includes normal blending.
  • The generating of the target user image according to the multiple eye makeup results includes: superimposing the multiple eye makeup results to obtain a target eye makeup result; and fusing the target eye makeup result with the user image according to the fusion parameters in the eye makeup parameters to obtain the target user image.
  • an image processing device including:
  • a determination module configured to determine an eye object to be subjected to eye makeup processing in the user image in response to an eye makeup operation on the user image; a division module configured to divide the eye object into multiple target areas based on the hue information of the eye object and preset area parameters, wherein the range of the target area is larger than that of the original target area, the original target area being obtained by dividing the eye object based on the hue information; an eye makeup module configured to perform, according to the eye makeup parameters in the eye makeup operation, eye makeup processing matching the color tone of each of the multiple target areas to obtain multiple eye makeup results; and a generation module configured to generate a target user image according to the multiple eye makeup results.
  • The determination module is configured to: perform key point recognition processing on the user image to determine the initial position of the eye object in the user image; copy the user image to multiple layers; and, for each of the multiple layers, expand the position in that layer centered on the initial position to obtain the expanded position, and determine the eye object to be subjected to eye makeup processing in that layer according to the expanded position.
  • The hue information includes at least one of shadow information or midtone information. The division module is configured for at least one of the following: extracting a shadow area from the eye object based on the shadow information of the eye object combined with the first preset area parameter among the preset area parameters; or extracting a midtone area from the eye object based on the midtone information of the eye object combined with the second preset area parameter among the preset area parameters.
  • The division module is further configured to: perform multiply blending based on the inverted grayscale image of the eye object to obtain a first blending result; determine the first transparency of the pixels in the eye object according to the first blending result and the first preset area parameter; and extract pixels from the eye object according to the first preset transparency threshold and the first transparency to obtain the shadow area, wherein the shadow area is larger than the original shadow area obtained by dividing the eye object based on the shadow information.
  • The division module is further configured to: perform exclusion blending based on the grayscale image of the eye object to obtain a second blending result; determine the second transparency of the pixels in the eye object according to the second blending result and the second preset area parameter; and extract pixels from the eye object according to the second preset transparency threshold and the second transparency to obtain the midtone area, wherein the midtone area is larger than the original midtone area obtained by dividing the eye object based on the midtone information.
  • The eye makeup module is configured to: render the multiple target areas respectively according to the color parameters in the eye makeup parameters to obtain multiple intermediate eye makeup results; determine the processing methods corresponding to the multiple target areas according to their color tones; and blend the eye object with the multiple intermediate eye makeup results respectively according to the processing methods corresponding to the multiple target areas to obtain multiple eye makeup results.
  • The target area includes at least one of a shadow area or a midtone area. The eye makeup module is further configured to: in response to determining that the target area includes a shadow area, determine that the processing method includes multiply blending; and in response to determining that the target area includes a midtone area, determine that the processing method includes normal blending.
  • The generation module is configured to: superimpose the multiple eye makeup results to obtain a target eye makeup result; and fuse the target eye makeup result with the user image according to the fusion parameters in the eye makeup parameters to obtain the target user image.
  • an electronic device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the above image processing method.
  • a computer-readable storage medium on which computer program instructions are stored, and the above-mentioned image processing method is implemented when the computer program instructions are executed by a processor.
  • In response to the eye makeup operation on the user image, the eye object to be subjected to eye makeup processing in the user image is determined; the eye object is then divided into multiple target areas according to the hue information of the eye object and the preset area parameters, wherein the range of each target area is larger than that of the original target area divided by hue alone; according to the eye makeup parameters in the eye makeup operation, eye makeup processing matching the color tone of each target area is performed on each of the multiple target areas to obtain multiple eye makeup results; and a target user image with eye makeup applied to the eye object is generated according to the multiple eye makeup results.
  • In this way, the preset area parameters can make the difference in hue between different target areas greater, and a larger target area can be obtained than the original target area divided based on hue alone. Performing the corresponding eye makeup processing on each target area can make the overall eye makeup effect of the eye object more natural and real.
  • For example, the eye object can be divided into a shadow area and a midtone area according to tones such as shadows and midtones, so that the brightness of the eye makeup is reduced as much as possible when applying eye makeup to the shadow area and preserved as much as possible when applying eye makeup to the midtone area. The eye makeup effect thus matches the original tone distribution of the eye object, improving the authenticity and naturalness of the eye makeup; moreover, the process of dividing the eye object into highlight areas can be omitted, reducing the amount of data processed and improving the efficiency of image processing.
  • Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
  • Fig. 2 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
  • Fig. 3 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
  • Fig. 4 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
  • FIG. 5 shows a block diagram of an image processing device according to an embodiment of the present disclosure.
  • Fig. 6 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
  • Fig. 7 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
  • Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
  • the method can be applied to an image processing device or an image processing system, and the image processing device can be a terminal device, a server, or other processing devices.
  • The terminal device can be user equipment (User Equipment, UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (Personal Digital Assistant, PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, and the like.
  • the image processing method can be applied to a cloud server or a local server, and the cloud server can be a public cloud server or a private cloud server, which can be flexibly selected according to actual conditions.
  • the image processing method may also be implemented in a manner in which the processor invokes computer-readable instructions stored in the memory.
  • the image processing method may include:
  • Step S11: in response to the eye makeup operation on the user image, determine the eye object to be subjected to eye makeup processing in the user image.
  • the user image can be any image including the user's eyes, and the user image can include one or more users, or one or more users' eyes, and its implementation form can be flexibly determined according to the actual situation. No limitation is imposed in the disclosed embodiments.
  • The eye object can be the part of the face that needs eye makeup processing in the user image. The eye object can include the complete eye, for example the eyes together with the nearby parts that can be used for eye makeup such as eye shadow rendering, or the eyeball parts that can be rendered with colored contact lenses; it can also include only the left eye or the right eye, which can be flexibly selected according to the actual situation of the eye makeup operation.
  • the eye makeup operation can be any operation of eye makeup processing on the eye object of the user image, such as eye shadow rendering or color contact lens rendering.
  • the operation content included in the eye makeup operation can be flexibly determined according to the actual situation, and is not limited to the following disclosed embodiments.
  • The eye makeup operation may include performing eye makeup processing on the eye objects in the user image; in some possible implementations, the eye makeup operation may also include the input eye makeup parameters, and so on.
  • the eye makeup parameters can be user-input parameters related to eye makeup processing on eye objects.
  • the implementation form of the eye makeup parameters can be flexibly determined, such as color parameters, fusion parameters and other parameters.
  • the manner of determining the eye object is not limited in the disclosed embodiments, and is not limited to the following disclosed embodiments.
  • In a possible implementation, eye recognition processing may be performed on the user image to determine the location of the eye object.
  • the manner of recognition processing is not limited in the embodiment of the present disclosure, for example, it may be key point recognition, direct recognition of the whole eye, and the like.
  • Step S12: divide the eye object into multiple target areas based on the hue information of the eye object and preset area parameters.
  • the hue information may reflect the relative lightness and darkness of the eye object.
  • the information contained in the hue information may be flexibly determined according to actual conditions.
  • the hue information may include one or more of highlight information, shadow information, and midtone information.
  • the highlight information can reflect the region with higher brightness in the eye object
  • the shadow information can reflect the region with lower brightness in the eye object
  • The midtone information can reflect the regions of the eye object whose brightness lies between highlight and shadow.
  • Different hue information can be determined from the eye object in different ways.
  • the above hue information can be obtained directly according to the brightness of the pixels in the eye object.
  • Eye objects can also be processed in different ways to obtain different hue information; for details of how the hue information is acquired, see the following disclosed embodiments, which will not be expanded here.
  • the preset area parameter may be a related parameter used to determine the target area, and the implementation form of the preset area parameter is not limited in the embodiment of the present disclosure, for example, it may be related parameters such as a set area range or a range threshold.
  • Based on the hue information, the eye object can be divided into multiple original target areas, and the target areas can then be further determined on the basis of these original target areas. The preset area parameters may be parameters used to determine the target area on the basis of the original target area, for example, a preset value for adjusting the range of the original target area, or other parameters needed in the process of further determining the target area from the original target area; see the following disclosed embodiments for details, which will not be expanded here. Since the target area is determined on the basis of the original target area, there is a certain correspondence between the two; in a possible implementation, the range of the target area may be larger than that of the original target area.
  • The area types and positions contained in the target areas can be flexibly determined according to the actual hue information of the eye object; multiple target areas may overlap or be independent of each other, which is not limited in the embodiments of the present disclosure.
  • the target area may include one or more of a highlight area, a shadow area, and a midtone area.
  • The way of dividing the eye object into multiple target areas can vary flexibly with the hue information; for example, according to the obtained highlight information, shadow information, and midtone information, the pixels in the eye object can be divided into a highlight area, a shadow area, and a midtone area, respectively.
  • For some possible implementations of step S12, refer to the following disclosed embodiments for details, which will not be expanded here.
  • Step S13: according to the eye makeup parameters in the eye makeup operation, perform eye makeup processing matching the color tone of each of the multiple target areas to obtain multiple eye makeup results.
  • the eye makeup parameters corresponding to different target areas can be the same or different.
  • different target areas can correspond to the same or different color parameters, or different areas can use the same or different fusion parameters.
  • Color parameters, fusion parameters, and the like can be flexibly set according to actual conditions and are not limited in the embodiments of the present disclosure.
  • the eye makeup result may be the result obtained after the target area has been treated with eye makeup.
  • For target areas with different hues, the way of eye makeup treatment may also differ, so corresponding eye makeup processing can be performed on the multiple target areas to obtain multiple eye makeup results.
  • For some possible implementations of step S13, reference may be made to the following disclosed embodiments in detail, which will not be expanded here.
  • Step S14: generate the target user image according to the multiple eye makeup results.
  • the target user image can be an image obtained by performing eye makeup processing on the eye object of the user image, and the method of generating the target user image can be flexibly determined according to the actual situation.
  • In a possible implementation, the multiple eye makeup results can be fused with the user image, or combined with the user image, to obtain the target user image.
  • multiple eye makeup results may also respectively belong to multiple layers.
  • the target user image may be obtained by superimposing the layers.
  • For some possible implementation manners of step S14, reference may be made to the following disclosed embodiments in detail, which will not be expanded here.
  • In response to the eye makeup operation on the user image, the eye object to be subjected to eye makeup processing in the user image is determined, so that according to the hue information of the eye object and the preset area parameters, the eye object is divided into multiple target areas, wherein the range of each target area is larger than that of the original target area divided by hue alone; according to the eye makeup parameters in the eye makeup operation, eye makeup processing matching the color tone of each target area is performed on each of the multiple target areas to obtain multiple eye makeup results, and a target user image with eye makeup applied to the eye object is generated according to the multiple eye makeup results.
  • In this way, the preset area parameters can make the difference in hue between different target areas greater, and a larger target area can be obtained than the original target area divided only according to hue. Performing the corresponding eye makeup processing on each target area can make the overall eye makeup effect of the eye object more natural and real.
  • In the embodiments of the present disclosure, the eye object can be divided into a shadow area and a midtone area according to tones such as shadows and midtones, so that the brightness of the eye makeup is reduced as much as possible during eye makeup processing of the shadow area and preserved as much as possible during eye makeup processing of the midtone area. The eye makeup effect of the eye object thus matches its original tone distribution, improving the authenticity and naturalness of the eye makeup; the process of dividing the eye object into highlight areas can also be omitted, reducing the amount of processed data and improving the efficiency of image processing.
  • the eye object may include multiple eye objects respectively located in multiple layers.
  • In the eye makeup operation of eye shadow rendering, one or more areas may be rendered, such as the base eye shadow area, the base lower eye shadow area, the upper eyelid area, the outer eye corner area, the inner eye corner area, and the upper-right eye shadow area.
  • these six regions can be used as eye objects for subsequent operations.
  • step S11 may include:
  • performing key point recognition processing on the user image to determine the initial position of the eye object in the user image; copying the user image to multiple layers; and, in each layer, expanding the position centered on the initial position to obtain the expanded position, and determining the eye object in each layer according to the expanded position.
  • the key point identification method is not limited in the embodiment of the present disclosure, for example, it may be identified through a relevant key point identification algorithm, or through a neural network with a key point identification function.
  • the initial position may be the position of the eye object in the user image.
  • the positions of the central areas of these eye objects may be used as the initial position.
  • multiple eye shadow areas as eye objects are distributed around the eyes, so the position of the eyes in the user image can be used as the initial position.
  • a layer may be any layer with image processing or editing functions, such as an editing layer in image editing software (Photoshop, PS).
  • the number of multiple layers can be flexibly determined according to the actual situation, and is not limited in this embodiment of the present disclosure.
  • In some possible implementations, the number of layers can be the same as the number of eye objects; in some possible implementations, the number of layers can also be smaller than the number of eye objects, in which case two or more eye objects can be determined in one or some of the layers at the same time; for example, in one layer, both the base eye shadow area object and the base lower eye shadow area object can be determined.
  • Copying the user image to multiple layers may mean copying the multiple eye objects in the user image to multiple layers, copying the entire user image to multiple layers, or directly copying the original layer on which the user image is located to multiple layers, and so on.
  • the position can be expanded with the initial position as the center to obtain the expanded position, and the eye objects in each layer can be determined according to the expanded position.
  • the way of extension is not limited in this embodiment of the present disclosure.
  • The position can be expanded in the corresponding direction and over the corresponding range, where the correspondence can be determined according to the objective positional relationship between the eye object and the initial position, for example, above, below, or inside the initial position.
  • The eye makeup results of each layer can be superimposed to further obtain the target user image after eye makeup processing.
  • By copying the user image to multiple layers, multiple eye objects can be determined separately and independently, which facilitates subsequent eye makeup processing on multiple eye objects, improves the flexibility of eye makeup, and also makes it convenient to change the eye makeup effect of each eye object, enhancing the richness of the eye makeup.
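  • As an illustration of the key point recognition and expansion described above, the following is a minimal Python sketch that expands a bounding box around eye key points, centered on their mean position. It assumes the key points are already available as an (N, 2) array of (x, y) coordinates; the function name, the expansion factor of 1.5, and the box representation are illustrative assumptions, not details from the disclosure.

    import numpy as np

    def expand_eye_region(keypoints: np.ndarray, scale: float = 1.5,
                          image_hw: tuple = (1080, 1920)) -> tuple:
        """Expand a region around eye key points, centered on the initial position."""
        center = keypoints.mean(axis=0)  # initial position: center of the eye object
        half = (keypoints.max(axis=0) - keypoints.min(axis=0)) / 2.0 * scale  # expanded half-extent
        x0 = max(int(center[0] - half[0]), 0)
        y0 = max(int(center[1] - half[1]), 0)
        x1 = min(int(center[0] + half[0]), image_hw[1] - 1)  # clip to image width
        y1 = min(int(center[1] + half[1]), image_hw[0] - 1)  # clip to image height
        return x0, y0, x1, y1

  In this sketch, one expanded box would be computed per layer, with one layer per eye object, before the per-area processing described below.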
  • FIG. 2 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
  • step S12 may include one or more of the following operations:
  • Step S121: extract the shadow area from the eye object based on the shadow information of the eye object combined with the first preset area parameter among the preset area parameters.
  • Step S122: extract the midtone area from the eye object based on the midtone information of the eye object combined with the second preset area parameter among the preset area parameters.
  • The labels S121 and S122 are only used to distinguish the above steps and do not limit their execution order; the steps can be executed simultaneously or sequentially, and the order is not limited in the embodiments of the present disclosure. Step S12 may include both of the above steps, or selectively execute only some of them.
  • eye objects can be flexibly divided into shadow regions and/or midtone regions according to shadow information and/or midtone information.
  • On the one hand, this can effectively improve the flexibility of eye object beautification and makes it convenient to realize eye makeup processing flexibly and quickly; on the other hand, the embodiments of the present disclosure can omit the processing of the highlight area and improve the efficiency of eye makeup processing.
  • step S121 may include:
  • performing multiply blending based on the inverted grayscale image of the eye object to obtain a first blending result; determining the first transparency of pixels in the eye object according to the first blending result and the first preset area parameter; and extracting pixels from the eye object according to the first preset transparency threshold and the first transparency to obtain the shadow area.
  • The inverted grayscale image of the eye object can be an image obtained by inverting each pixel of the grayscale image of the eye object; the inversion can be linear or non-linear and yields an image opposite to the grayscale image of the eye object.
  • Multiply blending based on the inverted grayscale image of the eye object may mean multiply blending the inverted grayscale image with itself to obtain the first blending result; specifically, after copying the inverted grayscale image of the eye object, multiply blending is performed on the two identical inverted grayscale images to obtain the first blending result.
  • The first blending result obtained in this way can reflect the shadow information of the eye object; therefore, based on the first blending result, the original shadow area in the eye object can be further determined, the original shadow area being the area obtained by dividing the eye object only according to the shadow information represented by the first blending result.
  • In a possible implementation, the shadow area may be determined according to the first blending result further combined with the first preset area parameter.
  • the first preset area parameter may be a preset parameter for determining the shaded area, and its parameter value may be flexibly determined according to actual conditions, and is not limited to the embodiments of the present disclosure.
  • In an example, the first preset area parameter may be any value greater than 1 (for example, 1.1 to 1.7).
  • In an example, the alpha channel value of each pixel in the first blending result may be obtained and multiplied by the first preset area parameter to obtain the multiplied first grayscale result.
  • the first grayscale result can be mapped to a transparency value through the mapping relationship between grayscale and transparency, so as to determine the first transparency of each pixel in the eye object.
  • the specific mapping manner of the mapping relationship can be flexibly set according to the actual situation, and is not limited in this embodiment of the present disclosure.
  • the first preset transparency threshold may be a preset threshold for filtering the pixels belonging to the shadow area in the eye object, and the specific value of the first preset transparency threshold is not limited in this embodiment of the present disclosure.
  • Based on the first preset transparency threshold, it can be judged whether the first transparency of each pixel in the eye object is within the range of the first preset transparency threshold; if so, the pixel is determined to belong to the shadow area, otherwise the pixel is considered to belong to a region other than the shadow area. Through this screening process, multiple pixels belonging to the shadow area can be obtained from the eye object, and these pixels are extracted to obtain the shadow area.
  • For some pixels, the transparency determined from the first blending result alone may not fall within the range of the first preset transparency threshold, yet after multiplication by the first preset area parameter the pixel can be classified into the shadow area. Therefore, the shadow area obtained by the method of the embodiments of the present disclosure may be larger than the original shadow area determined only according to the shadow information.
  • For example, suppose the alpha channel value of a certain pixel is 100; the corresponding transparency is 39%, which is not within the range of the first preset transparency threshold (for example, greater than 40%). After the first blending result is multiplied by the first preset area parameter (such as 1.2), the first grayscale result of this pixel is 120 and the mapped first transparency is 47%; thus, after multiplication by the first preset area parameter, the pixel falls within the range of the first preset transparency threshold and therefore belongs to the shadow area.
  • In the embodiments of the present disclosure, the first blending result reflecting the shadow information can be obtained by multiply blending the inverted grayscale image of the eye object, so that the shadow area can be extracted from the eye object according to the blending result and the first preset area parameter.
  • On the one hand, this way of obtaining shadow information is fast, convenient, and accurate; while improving the efficiency of eye makeup processing, it can effectively improve the accuracy of eye makeup and thus the eye makeup effect. On the other hand, introducing the first preset area parameter can effectively enhance the contrast of the shadow area relative to other target areas and expand the range of the shadow area to compensate for the omitted highlight-area division, so that eye makeup processing based on the divided target areas is more accurate and produces better results.
  • step S122 may include:
  • performing exclusion blending based on the grayscale image of the eye object to obtain a second blending result; determining the second transparency of pixels in the eye object according to the second blending result and the second preset area parameter; and extracting pixels from the eye object according to the second preset transparency threshold and the second transparency to obtain the midtone area.
  • The grayscale image of the eye object may be an image obtained by performing grayscale processing on the eye object. Exclusion blending based on the grayscale image of the eye object may mean blending the grayscale image with itself in exclusion mode to obtain the second blending result; its implementation form can refer to the above disclosed embodiments and will not be repeated here.
  • Exclusion blending can be an image blending mode in PS editing software that changes the brightness and grayscale of an image; based on the result of exclusion blending, the midtone information of the eye object can be obtained.
  • The second blending result obtained in this way can reflect the midtone information of the eye object; therefore, based on the second blending result, the original midtone area in the eye object can be further determined, the original midtone area being the area obtained by dividing the eye object only according to the midtone information represented by the second blending result.
  • In a possible implementation, the midtone area may be determined according to the second blending result further combined with the second preset area parameter.
  • the second preset area parameter may be a preset parameter for determining the midtone area, and its parameter value may be flexibly determined according to actual conditions, and is not limited to the embodiments of the present disclosure.
  • the value of the second preset area parameter may be the same as or different from the first preset area parameter.
  • In an example, the second preset area parameter may also be any value greater than 1 (for example, 1.1 to 1.7).
  • In an example, the alpha channel value of each pixel in the second blending result can be obtained and multiplied by the second preset area parameter to obtain the multiplied second grayscale result.
  • the second grayscale result can be mapped to a transparency value through the mapping relationship between grayscale and transparency, so as to determine the second transparency of each pixel in the eye object.
  • the second preset transparency threshold may be a preset threshold for filtering the pixel points belonging to the midtone region in the eye object, and the specific value of the second preset transparency threshold is not limited in the embodiments of the present disclosure. In a possible implementation manner, the second preset transparency threshold may be different from the first preset transparency threshold.
  • Based on the second preset transparency threshold, it can be judged whether the second transparency of each pixel in the eye object is within the range of the second preset transparency threshold; if so, the pixel is determined to belong to the midtone area, otherwise the pixel is considered to belong to a region other than the midtone area. Through this screening process, multiple pixels belonging to the midtone area can be obtained from the eye object, and these pixels are extracted to obtain the midtone area.
  • the midtone area obtained through the second preset area parameter may be larger than the original midtone area determined only based on the midtone information.
  • In the embodiments of the present disclosure, exclusion blending can be performed on the grayscale image of the eye object to obtain the second blending result reflecting the midtone information, so that the midtone area can be extracted from the eye object according to the blending result and the second preset area parameter. On the one hand, this way of obtaining midtone information is fast and convenient, and can be batch-processed together with the method of obtaining shadow information, improving the efficiency and effect of eye makeup as a whole; on the other hand, introducing the second preset area parameter can effectively enhance the contrast of the midtone area relative to other target areas and expand its range to compensate for the omitted highlight-area division, so that eye makeup processing based on the divided target areas is more accurate and produces better results.
  • FIG. 3 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
  • step S13 may include:
  • Step S131: render the multiple target areas respectively according to the color parameters in the eye makeup parameters to obtain multiple intermediate eye makeup results.
  • Step S132: determine the processing methods corresponding to the multiple target areas according to the hues of the multiple target areas.
  • Step S133: blend the eye object with the multiple intermediate eye makeup results according to the processing methods corresponding to the multiple target areas to obtain multiple eye makeup results.
  • the intermediate eye makeup result may be a rendering result obtained after rendering the color parameters to the target area.
  • the color parameter may be a parameter for color rendering of the eye object, and the color parameter may be a color value or an RGB channel value.
  • the color parameter can be determined according to the color selected by the user or the color value input by the user, or the color can be set in advance, and can be flexibly selected according to the actual situation.
  • each target area may correspond to the same color parameters, or may correspond to different color parameters, which can be flexibly selected according to the actual situation.
  • each target area may be mixed with corresponding color parameters to obtain an intermediate eye makeup result corresponding to each target area.
  • different target areas can be divided based on different hue information
  • different target areas can correspond to different hues, such as the shaded areas corresponding to shadows and the midtone area corresponding to midtones mentioned in the above disclosed embodiments.
  • Target areas with different tones may have different eye makeup treatment methods.
  • the corresponding relationship between the color tones and the treatment methods can be found in the following disclosed embodiments in detail, and will not be expanded here.
  • the eye object can be mixed with each intermediate eye makeup result according to the processing method, so as to obtain multiple eye makeup results.
  • In the embodiments of the present disclosure, a blending process using the corresponding processing method can be performed on each target area to obtain the eye makeup result of each target area, so that different target areas have eye makeup effects corresponding to their color tones; this effectively improves the accuracy and richness of the overall eye makeup effect and also improves the flexibility of the eye makeup process.
  • step S132 may include:
  • in response to determining that the target area includes a shadow area, determining that the processing method includes multiply blending; in response to determining that the target area includes a midtone area, determining that the processing method includes normal blending.
  • In the embodiments of the present disclosure, the intermediate eye makeup result of the shadow area and the eye object can be blended by multiply blending to obtain the eye makeup result of the shadow area.
  • Multiply blending can darken the blending result, thereby fully retaining the tonal properties of the shadow area and improving the processing effect.
  • The intermediate eye makeup result of the midtone area and the eye object can be blended in normal mode to obtain the eye makeup result of the midtone area.
  • normal blending has little effect on the brightness of the blending result, so as to fully preserve the tone properties of the midtone area and improve the processing effect.
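  • Multiply blending and normal blending are standard compositing operations; a minimal sketch of both, assuming 8-bit images and illustrative helper names, is:

    import numpy as np

    def multiply_blend(base: np.ndarray, top: np.ndarray) -> np.ndarray:
        """Multiply blending darkens, preserving the tonal character of the shadow area."""
        out = base.astype(np.float32) * top.astype(np.float32) / 255.0
        return np.clip(out, 0, 255).astype(np.uint8)

    def normal_blend(base: np.ndarray, top: np.ndarray, alpha) -> np.ndarray:
        """Normal blending barely changes brightness, suiting the midtone area.

        alpha: scalar or per-pixel opacity in [0, 1].
        """
        alpha = np.asarray(alpha, dtype=np.float32)
        if alpha.ndim == 2:                 # broadcast an HxW mask over the color channels
            alpha = alpha[..., None]
        out = base.astype(np.float32) * (1.0 - alpha) + top.astype(np.float32) * alpha
        return np.clip(out, 0, 255).astype(np.uint8)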
  • FIG. 4 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
  • step S14 may include:
  • Step S141: superimpose the multiple eye makeup results to obtain the target eye makeup result.
  • Step S142: fuse the target eye makeup result with the user image according to the fusion parameters in the eye makeup parameters to obtain the target user image.
  • As mentioned in the above disclosed embodiments, the multiple target areas may be determined in multiple layers respectively; in this case, the multiple eye makeup results obtained may also belong to multiple layers respectively, and therefore, in an example, the eye makeup results in the multiple layers may be superimposed to obtain the target eye makeup result.
  • The way of superimposing can be flexibly decided according to the actual situation; for example, it can be direct superposition between layers, and in some possible implementations, superposition can also be realized through one or more blending methods, for example, multiple layers can be blended and superimposed in multiply mode.
  • multiple eye makeup results may also be directly fused or mixed to obtain a target eye makeup result.
  • the target eye makeup result and the user image can be fused according to the fusion parameters to obtain the target user image.
  • the fusion method is also not limited in this embodiment of the disclosure.
  • For example, the pixel values at the same position can be added or multiplied to achieve fusion, or the pixel values at the same position can be weighted and fused.
  • the fusion weight of the target eye makeup result in the weighted fusion can be determined according to the fusion parameters, and the fusion parameters can be preset parameter values, or can be determined according to the parameter values input by the user in the eye makeup operation, etc., according to the actual situation Choose flexibly.
  • the fusion parameter can be transparency.
  • In an example, the target eye makeup result is multiplied by the transparency and then added to the user image to obtain the target user image.
  • the target eye makeup result can be fused with the user image according to the fusion parameters in the eye makeup parameters to obtain the target user image.
  • By changing the fusion parameters, the eye makeup effect can be easily adjusted, thereby improving the flexibility and autonomy of the overall eye makeup process.
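  • A sketch of the weighted-fusion variant described above, with the fusion parameter acting as the fusion weight of the target eye makeup result; the 0.6 default is an assumption, standing in for a preset or user-supplied value.

    import numpy as np

    def fuse_target_makeup(user_img: np.ndarray, target_makeup: np.ndarray,
                           fusion_param: float = 0.6) -> np.ndarray:
        """Fuse the target eye makeup result with the user image (both HxWx3 uint8)."""
        out = (user_img.astype(np.float32) * (1.0 - fusion_param)
               + target_makeup.astype(np.float32) * fusion_param)
        return np.clip(out, 0, 255).astype(np.uint8)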
  • FIG. 5 shows a block diagram of an image processing device according to an embodiment of the present disclosure.
  • the image processing device 20 may include:
  • the determination module 21 is configured to determine eye objects to be subjected to eye makeup processing in the user image in response to the eye makeup operation on the user image.
  • The division module 22 is configured to divide the eye object into multiple target areas based on the hue information of the eye object and preset area parameters, wherein the range of the target area is larger than that of the original target area, the original target area being obtained by dividing the eye object based on the hue information.
  • the eye makeup module 23 is configured to, according to the eye makeup parameters in the eye makeup operation, perform eye makeup processing on each of the multiple target areas matching the color tone of the target area to obtain multiple eye makeup results.
  • the generating module 24 is configured to generate target user images according to multiple eye makeup results.
  • The determination module is configured to: perform key point recognition processing on the user image to determine the initial position of the eye object in the user image; copy the user image to multiple layers; and, for each of the multiple layers, expand the position in that layer centered on the initial position to obtain the expanded position, and determine the eye object to be subjected to eye makeup processing in that layer according to the expanded position.
  • The hue information includes at least one of shadow information or midtone information. The division module is configured for at least one of the following: extracting a shadow area from the eye object based on the shadow information of the eye object combined with the first preset area parameter among the preset area parameters; or extracting a midtone area from the eye object based on the midtone information of the eye object combined with the second preset area parameter among the preset area parameters.
  • The division module is further configured to: perform multiply blending based on the inverted grayscale image of the eye object to obtain a first blending result; determine the first transparency of the pixels in the eye object according to the first blending result and the first preset area parameter; and extract pixels from the eye object according to the first preset transparency threshold and the first transparency to obtain the shadow area, wherein the shadow area is larger than the original shadow area obtained by dividing the eye object based on the shadow information.
  • The division module is further configured to: perform exclusion blending based on the grayscale image of the eye object to obtain a second blending result; determine the second transparency of the pixels in the eye object according to the second blending result and the second preset area parameter; and extract pixels from the eye object according to the second preset transparency threshold and the second transparency to obtain the midtone area, wherein the midtone area is larger than the original midtone area obtained by dividing the eye object based on the midtone information.
  • The eye makeup module is configured to: render the multiple target areas respectively according to the color parameters in the eye makeup parameters to obtain multiple intermediate eye makeup results; determine the processing methods corresponding to the multiple target areas according to their color tones; and blend the eye object with the multiple intermediate eye makeup results respectively according to the processing methods corresponding to the multiple target areas to obtain multiple eye makeup results.
  • The target area includes at least one of a shadow area or a midtone area. The eye makeup module is further configured to: in response to determining that the target area includes a shadow area, determine that the processing method includes multiply blending; and in response to determining that the target area includes a midtone area, determine that the processing method includes normal blending.
  • The generation module is configured to: superimpose the multiple eye makeup results to obtain a target eye makeup result; and fuse the target eye makeup result with the user image according to the fusion parameters in the eye makeup parameters to obtain the target user image.
  • This disclosure relates to the field of augmented reality.
  • By acquiring image information of a target object in the real environment and then using various vision-related algorithms to detect or identify the relevant features, states, and attributes of the target object, an AR effect combining the virtual and the real that matches the specific application can be obtained.
  • the target object may involve faces, limbs, gestures, actions, etc. related to the human body, or markers and markers related to objects, or sand tables, display areas or display items related to venues or places.
  • Vision-related algorithms can involve visual positioning, SLAM, 3D reconstruction, image registration, background segmentation, object key point extraction and tracking, object pose or depth detection, etc.
  • Specific applications can involve not only interactive scenes such as guided tours, navigation, explanation, reconstruction, virtual-effect overlay, and display related to real scenes or objects, but also special-effect processing related to people, such as interactive scenarios involving makeup beautification, body beautification, special-effect display, and virtual model display.
  • the relevant features, states and attributes of the target object can be detected or identified through the convolutional neural network.
  • the above-mentioned convolutional neural network is a network model obtained by performing model training based on a deep learning framework.
  • the disclosed application example proposes an image processing method, including the following process:
  • Face recognition is performed based on the user image, the eye object is extracted from the user image, and shadow separation and midtone separation are performed on the eye object to obtain two target areas: the shadow area and the midtone area.
  • the process of shadow separation can include:
  • Multiply blend the inverted grayscale image of the eye object with the inverted grayscale image of the eye object (for example, after copying the inverted grayscale image of the eye object, multiply blending is performed based on the two identical inverted grayscale images) to get the first blending result.
  • Obtain the alpha channel value in the first blending result and multiply it by the first preset area parameter (such as 1.2) to obtain the first grayscale result.
  • The first grayscale result is mapped to transparency to obtain the first transparency of each pixel in the eye object, and based on the first transparency, the pixels belonging to the shadow area are screened from the eye object to obtain the shadow area.
  • the process of separating midtones can include:
  • Exclusion blending is performed on the grayscale image of the eye object with its own copy, and a second blending result is obtained.
  • Obtain the alpha channel value in the second blending result and multiply it by the second preset area parameter (such as 1.2) to obtain the second grayscale result.
  • The second grayscale result is mapped to transparency to obtain the second transparency of each pixel in the eye object, and based on the second transparency, the pixels belonging to the midtone area are screened from the eye object to obtain the midtone area.
  • The color parameters of the eye makeup input by the user can be blended with the shadow area and the midtone area respectively to obtain the intermediate eye makeup results x and y.
  • The intermediate eye makeup result x is multiply blended with the eye object to obtain the eye makeup result of the shadow area, and the intermediate eye makeup result y is normally blended with the eye object to obtain the eye makeup result h.
  • The eye makeup results of the areas are superimposed to obtain the target eye makeup result i, and the target user image is an image obtained by performing the above-mentioned eye makeup processing on the eye object in the user image.
  • Changing the transparency of i can change the transparency effect of the entire eye makeup color, where the target eye makeup result i can be fused with the user image through normal blending to obtain the target user image.
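  • Putting the application example together, and reusing the helper sketches from the earlier sections (extract_shadow_area, extract_midtone_area, multiply_blend, normal_blend, fuse_target_makeup), one end-to-end pass over a single eye-object crop could look like the following. The flat color fill standing in for rendering the color parameters, the 0.5 opacity of the midtone pass, and the per-mask superposition are assumptions; where the text names intermediate results (h, i), the code follows those names.

    import numpy as np

    def eye_makeup_example(eye_crop: np.ndarray, color_bgr: tuple,
                           fusion_param: float = 0.6) -> np.ndarray:
        """One pass of the application example on an eye-object crop (HxWx3 uint8)."""
        shadow = extract_shadow_area(eye_crop)            # shadow separation
        midtone = extract_midtone_area(eye_crop)          # midtone separation
        color = np.full_like(eye_crop, color_bgr)         # intermediate eye makeup results x and y
        shadow_result = multiply_blend(eye_crop, color)   # shadow area: multiply blending
        h = normal_blend(eye_crop, color, 0.5)            # midtone area: normal blending -> result h
        i = np.where(shadow[..., None], shadow_result, eye_crop)  # superimpose per-area results
        i = np.where(midtone[..., None], h, i)            # target eye makeup result i
        return fuse_target_makeup(eye_crop, i, fusion_param)  # fuse i with the user image region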
  • The image processing method proposed in the application example of the present disclosure can make the eye makeup effect more natural by processing target areas of different tones, preserving details such as the skin texture of the eyes, and can realize a what-you-see-is-what-you-get processing effect. Moreover, the method allows more parameters to be defined, such as changing the color parameters of eye makeup in different target areas or changing the fusion parameters between the target eye makeup result and the user image, so that the eye makeup effect is more controllable and users can customize beautification parameters according to their preferences. For example, a designer may need to change the color of the eye shadow while designing a makeup look.
  • In this case, the method proposed in the application example of the present disclosure can be used directly, or its method steps can be recorded as a set of actions according to the operation process and run in PS or other software, so that the eye makeup effect can be changed conveniently and quickly. It is also convenient for institutions to provide users with richer customization functions based on the methods in the application examples of the present disclosure; for example, software developers can incorporate these methods into the underlying technology when developing makeup-related software and retain an interface for modifying the eye makeup color, so as to facilitate color changes.
  • The writing order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
  • Embodiments of the present disclosure also provide a computer-readable storage medium, on which computer program instructions are stored, and the above-mentioned method is implemented when the computer program instructions are executed by a processor.
  • the computer readable storage medium may be a volatile computer readable storage medium or a nonvolatile computer readable storage medium.
  • An embodiment of the present disclosure also proposes an electronic device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the above method.
  • The above-mentioned memory can be volatile memory, such as RAM; or non-volatile memory, such as ROM, flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or a combination of the above types of memory, and it provides instructions and data to the processor.
  • The aforementioned processor may be at least one of an ASIC, DSP, DSPD, PLD, FPGA, CPU, controller, microcontroller, or microprocessor. It can be understood that, for different devices, other electronic components may also be used to implement the above processor functions, which is not specifically limited in the embodiments of the present disclosure.
  • Electronic devices may be provided as terminals, servers, or other forms of devices.
  • the embodiments of the present disclosure further provide a computer program, which implements the above method when the computer program is executed by a processor.
  • FIG. 6 is a block diagram of an electronic device 800 according to an embodiment of the present disclosure.
  • the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
  • electronic device 800 may include one or more of the following components: processing component 802, memory 804, power supply component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814 , and the communication component 816.
  • the processing component 802 generally controls the overall operations of the electronic device 800, such as those associated with display, telephone calls, data communications, camera operations, and recording operations.
  • the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the above method. Additionally, processing component 802 may include one or more modules that facilitate interaction between processing component 802 and other components. For example, processing component 802 may include a multimedia module to facilitate interaction between multimedia component 808 and processing component 802 .
  • the memory 804 is configured to store various types of data to support operations at the electronic device 800 . Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and the like.
  • The memory 804 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • the power supply component 806 provides power to various components of the electronic device 800 .
  • Power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for electronic device 800 .
  • the multimedia component 808 includes a screen providing an output interface between the electronic device 800 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user.
  • the touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense a boundary of a touch or swipe action, but also detect duration and pressure associated with the touch or swipe action.
  • the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capability.
  • the audio component 810 is configured to output and/or input audio signals.
  • the audio component 810 includes a microphone (MIC), which is configured to receive external audio signals when the electronic device 800 is in operation modes, such as call mode, recording mode and voice recognition mode. Received audio signals may be further stored in memory 804 or sent via communication component 816 .
  • the audio component 810 also includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module, which may be a keyboard, a click wheel, a button, and the like. These buttons may include, but are not limited to: a home button, volume buttons, start button, and lock button.
  • Sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of electronic device 800 .
  • The sensor component 814 can detect the open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor component 814 can also detect a change in position of the electronic device 800 or one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and temperature changes of the electronic device 800.
  • Sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact.
  • Sensor assembly 814 may also include an optical sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
  • the communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices.
  • the electronic device 800 can access a wireless network based on communication standards, such as WiFi, 2G or 3G, or a combination thereof.
  • The communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel.
  • the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wide Band (UWB) technology, Bluetooth (BT) technology and other technologies.
  • The electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the methods described above.
  • a non-volatile computer-readable storage medium such as the memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to implement the above method.
  • FIG. 7 is a block diagram of an electronic device 1900 according to an embodiment of the present disclosure.
  • electronic device 1900 may be provided as a server.
  • electronic device 1900 includes processing component 1922 , which further includes one or more processors, and a memory resource represented by memory 1932 for storing instructions executable by processing component 1922 , such as application programs.
  • the application programs stored in memory 1932 may include one or more modules each corresponding to a set of instructions.
  • the processing component 1922 is configured to execute instructions to perform the above method.
  • Electronic device 1900 may also include a power supply component 1926 configured to perform power management of electronic device 1900, a wired or wireless network interface 1950 configured to connect electronic device 1900 to a network, and an input-output (I/O) interface 1958 .
  • The electronic device 1900 can operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
  • a non-transitory computer-readable storage medium such as the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to implement the above method.
  • the present disclosure can be a system, method and/or computer program product.
  • a computer program product may include a computer readable storage medium having computer readable program instructions thereon for causing a processor to implement various aspects of the present disclosure.
  • a computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device.
  • a computer readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • A non-exhaustive list of computer-readable storage media includes: a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), compact disc read-only memory (CD-ROM), digital versatile disc (DVD), memory stick, floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions stored thereon, and any suitable combination of the above.
  • computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., pulses of light through fiber optic cables), or transmitted electrical signals.
  • Computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or downloaded to an external computer or external storage device over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or a network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device .
  • Computer program instructions for performing the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • Computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • The remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • Electronic circuits, such as programmable logic circuits, field-programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), can be personalized by utilizing state information of the computer-readable program instructions, and these electronic circuits can execute the computer-readable program instructions to implement various aspects of the present disclosure.
  • These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, produce an apparatus for implementing the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer-readable program instructions can also be stored in a computer-readable storage medium and cause computers, programmable data processing devices, and/or other devices to work in a specific way, so that the computer-readable medium storing the instructions constitutes an article of manufacture that includes instructions implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • Each block in a flowchart or block diagram may represent a module, a program segment, or a portion of an instruction that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • Each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium. The method comprises: in response to an eye makeup operation for a user image, determining an eye object to be subjected to eye makeup processing in the user image; dividing the eye object into multiple target regions on the basis of tone information and a preset region parameter of the eye object, wherein the target regions are larger in range than an original target region, and the original target region is obtained by dividing the eye object on the basis of the tone information; according to eye makeup parameters of the eye makeup operation, respectively performing, on the multiple target regions, eye makeup processing matching the tones of the target regions to obtain multiple eye makeup results; and generating a target user image according to the multiple eye makeup results.

Description

Image processing method and apparatus, electronic device, and storage medium
Cross-Reference to Related Applications
The present disclosure claims priority to Chinese patent application No. 202111137187.6, filed on September 27, 2021, which is incorporated herein by reference.
Technical Field
The present disclosure relates to the field of computer vision, and in particular to an image processing method and apparatus, an electronic device, and a storage medium.
Background
With the development of computer vision technology, applying various eye makeup operations to the eye regions of face images has become increasingly common in the field of image processing. How to obtain a beautiful and natural eye makeup effect has become an urgent problem to be solved.
Summary
The present disclosure proposes an image processing scheme.
According to an aspect of the present disclosure, an image processing method is provided, including:
in response to an eye makeup operation on a user image, determining an eye object in the user image to be subjected to eye makeup processing; dividing the eye object into multiple target areas based on hue information of the eye object and preset area parameters, where the range of each target area is larger than that of the corresponding original target area, the original target area being obtained by dividing the eye object based on the hue information; performing, on each of the multiple target areas according to eye makeup parameters in the eye makeup operation, eye makeup processing matching the hue of that target area to obtain multiple eye makeup results; and generating a target user image according to the multiple eye makeup results.
In a possible implementation, determining the eye object in the user image to be subjected to eye makeup processing includes: performing key point recognition on the user image to determine an initial position of the eye object in the user image; copying the user image into multiple layers; and, for each of the multiple layers, expanding the position around the initial position in that layer to obtain an expanded position, and determining, according to the expanded position, the object in that layer to be subjected to eye makeup processing.
In a possible implementation, the hue information includes at least one of shadow information or midtone information, and dividing the eye object into multiple target areas based on the hue information of the eye object and the preset area parameters includes at least one of the following operations: extracting a shadow area from the eye object based on the shadow information of the eye object in combination with a first preset area parameter among the preset area parameters; or extracting a midtone area from the eye object based on the midtone information of the eye object in combination with a second preset area parameter among the preset area parameters.
In a possible implementation, extracting the shadow area from the eye object based on the shadow information of the eye object in combination with the first preset area parameter includes: performing multiply blending based on an inverted grayscale image of the eye object to obtain a first blending result; determining a first transparency of each pixel in the eye object according to the first blending result and the first preset area parameter; and extracting pixels from the eye object according to a first preset transparency threshold and the first transparency to obtain the shadow area, where the shadow area is larger than an original shadow area obtained by dividing the eye object based on the shadow information.
In a possible implementation, extracting the midtone area from the eye object based on the midtone information of the eye object in combination with the second preset area parameter includes: performing exclusion blending based on a grayscale image of the eye object to obtain a second blending result; determining a second transparency of each pixel in the eye object according to the second blending result and the second preset area parameter; and extracting pixels from the eye object according to a second preset transparency threshold and the second transparency to obtain the midtone area, where the midtone area is larger than an original midtone area obtained by dividing the eye object based on the midtone information.
In a possible implementation, performing, on each of the multiple target areas according to the eye makeup parameters in the eye makeup operation, eye makeup processing matching the hue of that target area to obtain multiple eye makeup results includes: rendering the multiple target areas according to color parameters among the eye makeup parameters to obtain multiple intermediate eye makeup results; determining, according to the hues of the multiple target areas, the processing modes respectively corresponding to the multiple target areas; and blending the eye object with the multiple intermediate eye makeup results according to those processing modes to obtain the multiple eye makeup results.
In a possible implementation, the target areas include at least one of a shadow area or a midtone area, and determining the processing modes respectively corresponding to the multiple target areas according to their hues includes: in response to determining that a target area includes a shadow area, determining that the processing mode includes multiply blending; and in response to determining that a target area includes a midtone area, determining that the processing mode includes normal blending.
In a possible implementation, generating the target user image according to the multiple eye makeup results includes: superimposing the multiple eye makeup results to obtain a target eye makeup result; and fusing the target eye makeup result with the user image according to a fusion parameter among the eye makeup parameters to obtain the target user image.
According to an aspect of the present disclosure, an image processing apparatus is provided, including:
a determination module configured to determine, in response to an eye makeup operation on a user image, an eye object in the user image to be subjected to eye makeup processing; a division module configured to divide the eye object into multiple target areas based on hue information of the eye object and preset area parameters, where the range of each target area is larger than that of the corresponding original target area, the original target area being obtained by dividing the eye object based on the hue information; an eye makeup module configured to perform, on each of the multiple target areas according to eye makeup parameters in the eye makeup operation, eye makeup processing matching the hue of that target area to obtain multiple eye makeup results; and a generation module configured to generate a target user image according to the multiple eye makeup results.
In a possible implementation, the determination module is configured to: perform key point recognition on the user image to determine an initial position of the eye object in the user image; copy the user image into multiple layers; and, for each of the multiple layers, expand the position around the initial position in that layer to obtain an expanded position, and determine, according to the expanded position, the object in that layer to be subjected to eye makeup processing.
In a possible implementation, the hue information includes at least one of shadow information or midtone information, and the division module is configured to perform at least one of the following: extracting a shadow area from the eye object based on the shadow information of the eye object in combination with a first preset area parameter among the preset area parameters; or extracting a midtone area from the eye object based on the midtone information of the eye object in combination with a second preset area parameter among the preset area parameters.
In a possible implementation, the division module is further configured to: perform multiply blending based on an inverted grayscale image of the eye object to obtain a first blending result; determine a first transparency of each pixel in the eye object according to the first blending result and the first preset area parameter; and extract pixels from the eye object according to a first preset transparency threshold and the first transparency to obtain the shadow area, where the shadow area is larger than an original shadow area obtained by dividing the eye object based on the shadow information.
In a possible implementation, the division module is further configured to: perform exclusion blending based on a grayscale image of the eye object to obtain a second blending result; determine a second transparency of each pixel in the eye object according to the second blending result and the second preset area parameter; and extract pixels from the eye object according to a second preset transparency threshold and the second transparency to obtain the midtone area, where the midtone area is larger than an original midtone area obtained by dividing the eye object based on the midtone information.
In a possible implementation, the eye makeup module is configured to: render the multiple target areas according to color parameters among the eye makeup parameters to obtain multiple intermediate eye makeup results; determine, according to the hues of the multiple target areas, the processing modes respectively corresponding to the multiple target areas; and blend the eye object with the multiple intermediate eye makeup results according to those processing modes to obtain the multiple eye makeup results.
In a possible implementation, the target areas include at least one of a shadow area or a midtone area, and the eye makeup module is further configured to: in response to determining that a target area includes a shadow area, determine that the processing mode includes multiply blending; and in response to determining that a target area includes a midtone area, determine that the processing mode includes normal blending.
In a possible implementation, the generation module is configured to: superimpose the multiple eye makeup results to obtain a target eye makeup result; and fuse the target eye makeup result with the user image according to a fusion parameter among the eye makeup parameters to obtain the target user image.
According to an aspect of the present disclosure, an electronic device is provided, including: a processor; and a memory for storing instructions executable by the processor, where the processor is configured to execute the above image processing method.
According to an aspect of the present disclosure, a computer-readable storage medium is provided, on which computer program instructions are stored, where the computer program instructions, when executed by a processor, implement the above image processing method.
In the embodiments of the present disclosure, in response to an eye makeup operation on a user image, an eye object in the user image to be subjected to eye makeup processing is determined; the eye object is then divided into multiple target areas according to its hue information and preset area parameters, where the range of each target area is larger than that of the original target area obtained by dividing based on hue alone; eye makeup processing matching the hue of each target area is performed according to the eye makeup parameters in the eye makeup operation to obtain multiple eye makeup results; and a target user image with eye makeup applied to the eye object is generated according to the multiple eye makeup results. Through this process, the preset area parameters make the hue distinction between different target areas more pronounced, and target areas larger than the original target areas obtained by hue division alone can be acquired. Performing the corresponding eye makeup processing based on these target areas makes the overall eye makeup effect of the eye object more natural and realistic. For example, the eye object can be divided into a shadow area and a midtone area according to tones such as shadows and midtones, so that the brightness of the eye makeup is reduced as much as possible when processing the shadow area and preserved as much as possible when processing the midtone area. The eye makeup effect thus matches the original tone distribution of the eye object, improving the realism and naturalness of the eye makeup; moreover, the process of dividing the eye object into a highlight area can be omitted, reducing the amount of data processed and improving the efficiency of image processing.
It should be understood that the above general description and the following detailed description are exemplary and explanatory only, and do not limit the present disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief Description of the Drawings
The accompanying drawings are incorporated into and constitute a part of this specification. They illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure.
Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
Fig. 2 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
Fig. 3 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
Fig. 4 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
Fig. 5 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
Fig. 6 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
Fig. 7 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features, and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings indicate elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" here means "serving as an example, embodiment, or illustration". Any embodiment described herein as "exemplary" is not necessarily to be construed as superior to or better than other embodiments.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. In addition, the term "at least one" herein means any one of multiple items or any combination of at least two of them; for example, including at least one of A, B, and C may mean including any one or more elements selected from the set consisting of A, B, and C.
In addition, numerous specific details are given in the following detailed description in order to better illustrate the present disclosure. Those skilled in the art should understand that the present disclosure can also be practiced without certain specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail, so as to highlight the gist of the present disclosure.
Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure. The method can be applied to an image processing apparatus or an image processing system, and the image processing apparatus can be a terminal device, a server, or other processing device. The terminal device can be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In one example, the image processing method can be applied to a cloud server or a local server; the cloud server can be a public cloud server or a private cloud server, selected flexibly according to actual conditions.
In some possible implementations, the image processing method can also be implemented by a processor invoking computer-readable instructions stored in a memory.
As shown in Fig. 1, in a possible implementation, the image processing method may include:
Step S11: in response to an eye makeup operation on a user image, determine an eye object in the user image to be subjected to eye makeup processing.
The user image can be any image containing a user's eyes; it may contain one or more users, or the eyes of one or more users. Its form can be determined flexibly according to the actual situation and is not limited in the embodiments of the present disclosure.
The eye object can be the part of the face in the user image that needs eye makeup processing. The eye object can contain the complete eye region, for example both eyes and the nearby parts where eye makeup can be applied, such as areas where eye shadow can be rendered or the eyeball area where colored contact lenses can be rendered; it can also contain only the left eye or the right eye, selected flexibly according to the actual eye makeup operation.
The eye makeup operation can be any operation that performs eye makeup processing on the eye object of the user image, such as eye shadow rendering or colored contact lens rendering. The content of the eye makeup operation can be determined flexibly according to the actual situation and is not limited to the following disclosed embodiments. In a possible implementation, the eye makeup operation may include an operation of performing eye makeup processing on the eye object in the user image; in some possible implementations, the eye makeup operation may also include various input eye makeup parameters.
The eye makeup parameters can be user-input parameters related to eye makeup processing of the eye object. Their form can be determined flexibly, for example color parameters, fusion parameters, and other parameters.
The manner of determining the eye object is not limited in the embodiments of the present disclosure and is not limited to the following disclosed embodiments. In a possible implementation, eye recognition processing can be performed on the user image to determine the target position. The manner of recognition processing is not limited in the embodiments of the present disclosure; for example, it may be key point recognition or direct recognition of the whole eye.
Step S12: divide the eye object into multiple target areas based on the hue information of the eye object and preset area parameters.
The hue information can reflect the relative lightness and darkness of the eye object. The content of the hue information can be determined flexibly according to the actual situation. In a possible implementation, the hue information can include one or more of highlight information, shadow information, and midtone information.
The highlight information can reflect the regions of the eye object with higher brightness, the shadow information can reflect the regions with lower brightness, and the midtone information can reflect the regions whose brightness lies between highlight and shadow.
Different hue information can be determined from the eye object in different ways. In some possible implementations, the hue information can be obtained directly from the brightness of the pixels in the eye object; in some possible implementations, the eye object can also be processed in different ways to obtain different hue information. The manner of obtaining the hue information is detailed in the following disclosed embodiments and is not expanded here.
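For the case where hue information is taken directly from pixel brightness, one simple illustrative realization (not mandated by the disclosure) is to threshold a luminance map into shadow, midtone, and highlight bands; the threshold values below are assumptions.

```python
import numpy as np

def tone_masks(rgb: np.ndarray, low: float = 0.33, high: float = 0.66):
    """Split an eye-object image into shadow / midtone / highlight masks.

    rgb: float image in [0, 1] with shape (H, W, 3). The thresholds are
    illustrative; the disclosure leaves the exact derivation open.
    """
    # Rec. 601 luma as a simple per-pixel brightness estimate.
    luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    shadow = luma < low
    highlight = luma > high
    midtone = ~shadow & ~highlight
    return shadow, midtone, highlight
```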
The preset area parameters can be parameters used to determine the target areas. Their form is not limited in the embodiments of the present disclosure; for example, they may be a set area range, a range threshold, or other related parameters.
In a possible implementation, the eye object can be divided into multiple original target areas based on its hue information, and on the basis of the original target areas, the target areas can be further determined through the preset area parameters. In this case, the preset area parameters can be parameters used to determine the target areas on the basis of the original target areas, for example preset values for adjusting the range of an original target area, or parameters needed in the process of further determining a target area from the original target area, as detailed in the following disclosed embodiments. Since a target area can be determined on the basis of an original target area, there may be a certain correspondence between the two; in a possible implementation, the range of the target area can be larger than that of the original target area.
The area types and positions contained in the target areas can be determined flexibly according to the actual hue information of the eye object. Multiple target areas may overlap or may be independent of one another, which is not limited in the embodiments of the present disclosure.
In some possible implementations, the target areas can include one or more of a highlight area, a shadow area, and a midtone area.
The manner of dividing the eye object into multiple target areas according to the hue information can vary flexibly with the hue information; for example, the pixels of the eye object can be divided into a highlight area, a shadow area, and a midtone area according to the obtained highlight, shadow, and midtone information. Some possible implementations of step S12 are detailed in the following disclosed embodiments and are not expanded here.
Step S13: according to the eye makeup parameters in the eye makeup operation, perform eye makeup processing on each of the multiple target areas that matches the hue of that target area, obtaining multiple eye makeup results.
For the form of the eye makeup parameters, reference can be made to the above disclosed embodiments, which are not repeated here. For multiple target areas, the eye makeup parameters corresponding to different target areas can be the same or different; for example, different target areas can correspond to the same or different color parameters, or different areas can use the same or different fusion parameters for fusing color parameters. These can be set flexibly according to the actual situation and are not limited to the embodiments of the present disclosure.
An eye makeup result can be the result obtained after a target area undergoes eye makeup processing. The manner of eye makeup processing may differ between target areas, so corresponding eye makeup processing can be performed on each of the multiple target areas to obtain multiple eye makeup results. Some possible implementations of step S13 are detailed in the following disclosed embodiments and are not expanded here.
Step S14: generate a target user image according to the multiple eye makeup results.
The target user image can be an image obtained by performing eye makeup processing on the eye object of the user image. The manner of generating the target user image can be determined flexibly according to the actual situation; for example, the multiple eye makeup results can be fused to obtain the target user image, or the multiple eye makeup results can be fused with the user image to obtain the target user image. In some possible implementations, the multiple eye makeup results may also belong to multiple layers, in which case the target user image can be obtained by superimposing the layers.
Some possible implementations of step S14 are detailed in the following disclosed embodiments and are not expanded here.
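Purely as an illustration, and not the disclosed implementation, step S14 might be sketched as follows, assuming one eye makeup result and one boolean mask per target area and a scalar fusion parameter; the function name and conventions are hypothetical.

```python
import numpy as np

def generate_target_user_image(user_image, eye_makeup_results, masks, fusion):
    """Sketch of step S14 under assumed conventions.

    user_image: float image in [0, 1]; eye_makeup_results: list of float
    images (one per target area); masks: matching (H, W) boolean masks;
    fusion: scalar fusion parameter in [0, 1].
    """
    target = user_image.copy()
    # Superimpose the per-area eye makeup results onto the image.
    for result, mask in zip(eye_makeup_results, masks):
        target[mask] = result[mask]
    # Fuse the stacked eye makeup with the original user image according to
    # the fusion parameter from the eye makeup operation.
    return fusion * target + (1.0 - fusion) * user_image
```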
In the embodiments of the present disclosure, in response to an eye makeup operation on a user image, an eye object in the user image to be subjected to eye makeup processing is determined, and the eye object is divided into multiple target areas according to its hue information and preset area parameters, where the range of each target area is larger than that of the original target area obtained by hue division alone. According to the eye makeup parameters in the eye makeup operation, eye makeup processing matching the hue of each target area is performed to obtain multiple eye makeup results, and a target user image with eye makeup applied to the eye object is generated from them. Through this process, the preset area parameters make the hue distinction between different target areas more pronounced, and target areas larger than the original target areas can be obtained; performing the corresponding eye makeup processing based on these target areas makes the overall eye makeup effect more natural and realistic. For example, the eye object can be divided into a shadow area and a midtone area according to tones such as shadows and midtones, so that the brightness of the eye makeup is reduced as much as possible when processing the shadow area and preserved as much as possible when processing the midtone area, matching the eye makeup effect to the original tone distribution of the eye object and improving its realism and naturalness; in addition, the process of dividing the eye object into a highlight area can be omitted, reducing the amount of data processed and improving the efficiency of image processing.
In a possible implementation, the eye object can include multiple eye objects located in multiple layers. For example, for the eye makeup operation of eye shadow rendering, one or more regions near the eyes can be rendered, such as six regions: the basic upper eye shadow region, the basic lower eye shadow region, the upper eyelid region, the outer eye corner region, the inner eye corner region, and the upper-right eye shadow region. In this case, these six regions can each be treated as an eye object for subsequent operations.
Therefore, in a possible implementation, step S11 can include:
performing key point recognition on the user image to determine the initial position of the eye object in the user image;
copying the user image into multiple layers;
in each layer, expanding the position around the initial position to obtain an expanded position, and determining the eye object in that layer according to the expanded position.
The manner of key point recognition is not limited in the embodiments of the present disclosure; for example, recognition can be performed through a relevant key point recognition algorithm or through a neural network with a key point recognition function.
The initial position can be the position of the eye object in the user image. In a possible implementation, when there are multiple eye objects, the position of the central region of these eye objects can be used as the initial position. For example, for eye shadow rendering, the multiple eye shadow regions serving as eye objects are all distributed around the eyes, so the position of the eyes in the user image can be used as the initial position.
A layer can be any layer with image processing or editing functions, such as an editing layer in image editing software (e.g., Photoshop).
The number of layers can be determined flexibly according to the actual situation and is not limited in the embodiments of the present disclosure. In some possible implementations, the number of layers can equal the number of eye objects; in some possible implementations, the number of layers can also be smaller than the number of eye objects, in which case two or more eye objects may be determined in one or some of the layers; for example, in a certain layer, the basic upper eye shadow region object and the basic lower eye shadow region object can be determined at the same time.
Copying the user image into multiple layers can mean copying the multiple eye objects of the user image into the multiple layers, copying the whole user image into the multiple layers, or directly copying the original layer containing the user image into multiple layers.
In each layer, the position can be expanded around the initial position to obtain an expanded position, and the eye object in that layer can be determined according to the expanded position. The manner of expansion is not limited in the embodiments of the present disclosure; for example, in each layer, the expansion can be carried out in the corresponding direction and over the corresponding range according to the positional correspondence between the eye object in that layer and the initial position, where the positional correspondence can be determined from the objective relationship between the eye object and the initial position. For instance, the basic upper eye shadow region object lies above the initial position of the eyes, and the inner eye corner region lies on the inner side of the eyes.
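The following sketch illustrates one assumed form of such position expansion, where a per-layer scale and center offset stand in for the positional correspondence between each eye object and the eyes; the parameter names and values are hypothetical.

```python
def expand_position(initial_box, scale: float, dx: float = 0.0, dy: float = 0.0):
    """Expand an initial eye position (x0, y0, x1, y1) about its center.

    scale enlarges the box; (dx, dy) shifts the center toward the region a
    given layer targets, e.g. upward for an upper-eyeshadow layer or inward
    for an inner-corner layer. Both are per-layer assumptions.
    """
    x0, y0, x1, y1 = initial_box
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    half_w = (x1 - x0) * scale / 2.0
    half_h = (y1 - y0) * scale / 2.0
    cx, cy = cx + dx, cy + dy
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)
```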
After the eye object is determined in each layer, the eye makeup result of that eye object may be determined within each layer through steps S12 to S14. In some possible implementations, the eye makeup results of the individual layers may be superimposed to further obtain the target user image after eye makeup processing.
Through the embodiments of the present disclosure, multiple eye objects can be determined separately and independently by copying the user image into multiple layers, which facilitates subsequent eye makeup processing on the individual eye objects, improves the flexibility of the eye makeup, and also makes it convenient to change the eye makeup effect of each eye object, enhancing the richness of the eye makeup.
FIG. 2 shows a flowchart of an image processing method according to an embodiment of the present disclosure. As shown in the figure, in a possible implementation, step S12 may include one or more of the following operations:
Step S121: based on the shadow information of the eye object, combined with a first preset area parameter among the preset area parameters, extracting a shadow area from the eye object; or,
Step S122: based on the midtone information of the eye object, combined with a second preset area parameter among the preset area parameters, extracting a midtone area from the eye object.
The numbering of steps S121 and S122 is only used to distinguish the above steps and does not limit their order of execution; the steps may be performed simultaneously or sequentially, and the order is not limited in the embodiments of the present disclosure. Step S12 may include both of the above steps, or may selectively execute only some of them.
Through the embodiments of the present disclosure, the eye object can be flexibly divided into a shadow area and/or a midtone area according to the shadow information and/or the midtone information. On the one hand, this can effectively improve the flexibility of eye object beautification, making it easy to implement eye makeup processing flexibly and quickly; on the other hand, since the highlight area of the eye object has little influence and the eye makeup effect after processing the highlight area is not obvious, the embodiments of the present disclosure can omit the processing of the highlight area and improve the efficiency of the eye makeup processing.
In a possible implementation, step S121 may include:
performing multiply blending based on an inverted grayscale image of the eye object to obtain a first blending result;
determining a first transparency of pixels in the eye object according to the first blending result and the first preset area parameter;
extracting pixels in the eye object according to a first preset transparency threshold and the first transparency to obtain the shadow area.
The inverted grayscale image of the eye object may be an image obtained by performing inverse grayscale processing on each pixel of the eye object; the inverse grayscale processing may invert the grayscale range of the eye object image linearly or non-linearly to obtain an image opposite to the grayscale image of the eye object. Performing multiply blending based on the inverted grayscale image of the eye object may mean blending the inverted grayscale image of the eye object with itself in multiply mode to obtain the first blending result; specifically, the inverted grayscale image of the eye object may be copied, and multiply blending may be performed on the two identical inverted grayscale images to obtain the first blending result.
The first blending result obtained in the above manner can reflect the shadow information of the eye object. Therefore, based on the first blending result, the original shadow area in the eye object can be further determined, where the original shadow area may be the area obtained by dividing the eye object only according to the shadow information represented by the first blending result.
Therefore, in a possible implementation, the shadow area may be determined according to the first blending result in further combination with the first preset area parameter. The first preset area parameter may be a preset parameter used to determine the shadow area; its value may be flexibly determined according to the actual situation and is not limited to the embodiments of the present disclosure. In one example, the first preset area parameter may be any value greater than 1 (for example, 1.1 to 1.7).
Specifically, the alpha channel value of each pixel in the first blending result may be obtained and multiplied by the first preset area parameter to obtain a multiplied first grayscale result. The first grayscale result may be mapped to a transparency value through a mapping relationship between grayscale and transparency, thereby determining the first transparency of each pixel in the eye object. The specific form of the mapping relationship may be flexibly set according to the actual situation and is not limited in the embodiments of the present disclosure.
The first preset transparency threshold may be a preset threshold used to screen the pixels of the eye object belonging to the shadow area; its specific value is not limited in the embodiments of the present disclosure.
According to the first preset transparency threshold, it may be determined whether the first transparency of each pixel in the eye object falls within the range defined by the first preset transparency threshold; if so, the pixel is determined to belong to the shadow area, otherwise the pixel is considered to belong to an area outside the shadow area. Through this screening process, the pixels belonging to the shadow area can be selected from the eye object and then extracted to obtain the shadow area.
Since the first transparency may be obtained by multiplying the first blending result by the first preset area parameter, for some pixels in the eye object the transparency determined from the first blending result alone may fall outside the range of the first preset transparency threshold, whereas after multiplication by the first preset area parameter the pixel can be classified into the shadow area. Therefore, the shadow area obtained by the method of the embodiments of the present disclosure can be larger than the original shadow area determined only from the shadow information. For example, in one instance, according to the first blending result the alpha channel value of a pixel is determined to be 100, corresponding to a transparency of 39%, which does not fall within the first preset transparency threshold (for example, greater than 40%); after the first blending result is multiplied by the first preset area parameter (for example, 1.2), the first grayscale result of the pixel is 120, and the mapped first transparency is 47%. Hence, after multiplication by the first preset area parameter, the pixel falls within the range of the first preset transparency threshold and thus belongs to the shadow area.
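For illustration, the following NumPy sketch implements the shadow extraction described above, assuming the linear value/255 grayscale-to-transparency mapping implied by the numerical example (100 maps to about 39%, 120 to about 47%); the default parameter value 1.2 and the "greater than 40%" threshold are taken from that example and are not the only possible choices.

```python
import numpy as np

def extract_shadow_mask(eye_gray, area_param=1.2, threshold=0.40):
    """eye_gray: uint8 grayscale image of the eye object.
    Returns a boolean mask of the pixels classified as shadow."""
    inv = 255.0 - eye_gray                        # inverted grayscale image
    blended = inv * inv / 255.0                   # multiply blend with itself
    boosted = np.clip(blended * area_param, 0, 255)   # first area parameter
    transparency = boosted / 255.0                # grayscale -> transparency
    return transparency > threshold               # first transparency threshold

# Reproduces the example above: a blended value of 100 maps to ~39% and is
# rejected, while 100 * 1.2 = 120 maps to ~47% and is kept.
```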
Through the embodiments of the present disclosure, the first blending result reflecting the shadow information can be obtained by multiply blending the inverted grayscale image of the eye object, and the shadow area can be extracted from the eye object according to this blending result and the first preset area parameter. On the one hand, this way of obtaining the shadow information is fast, convenient, and highly accurate, improving the efficiency of the eye makeup processing while effectively improving the precision of the eye makeup and hence the eye makeup effect; on the other hand, by introducing the first preset area parameter, the contrast of the shadow area relative to the other target areas can be effectively enhanced, and the range of the shadow area can be expanded to compensate for the omitted highlight area, so that the eye makeup processing performed on the divided target areas can be more accurate and produce better results.
In a possible implementation, step S122 may include:
performing exclusion blending based on a grayscale image of the eye object to obtain a second blending result;
determining a second transparency of pixels in the eye object according to the second blending result and the second preset area parameter;
extracting pixels in the eye object according to a second preset transparency threshold and the second transparency to obtain the midtone area.
The grayscale image of the eye object may be an image obtained by performing grayscale processing on the eye object. Performing exclusion blending based on the grayscale image of the eye object may mean blending the grayscale image of the eye object with itself in exclusion mode to obtain the second blending result; its implementation may refer to the disclosed embodiments above and is not repeated here. Exclusion blending may be an image blending mode in PS editing software that changes the brightness and grayscale of an image; based on the result of the exclusion blending, the midtone information of the eye object can be obtained.
Therefore, the second blending result obtained in the above manner can reflect the midtone information of the eye object. Based on the second blending result, the original midtone area in the eye object can be further determined, where the original midtone area may be the area obtained by dividing the eye object only according to the midtone information represented by the second blending result.
Therefore, in a possible implementation, the midtone area may be determined according to the second blending result in further combination with the second preset area parameter. The second preset area parameter may be a preset parameter used to determine the midtone area; its value may be flexibly determined according to the actual situation and is not limited to the embodiments of the present disclosure. The value of the second preset area parameter may be the same as or different from that of the first preset area parameter. In one example, the second preset area parameter may also be any value greater than 1 (for example, 1.1 to 1.7).
Specifically, the alpha channel value of each pixel in the second blending result may be obtained and multiplied by the second preset area parameter to obtain a multiplied second grayscale result. The second grayscale result may be mapped to a transparency value through the mapping relationship between grayscale and transparency, thereby determining the second transparency of each pixel in the eye object. For the specific form of the mapping relationship, reference may be made to the disclosed embodiments above.
The second preset transparency threshold may be a preset threshold used to screen the pixels of the eye object belonging to the midtone area; its specific value is not limited in the embodiments of the present disclosure. In a possible implementation, the value of the second preset transparency threshold may differ from that of the first preset transparency threshold.
According to the second preset threshold, it may be determined whether the second transparency of each pixel in the eye object falls within the range defined by the second preset threshold; if so, the pixel is determined to belong to the midtone area, otherwise the pixel is considered to belong to an area outside the midtone area. Through this screening process, the pixels belonging to the midtone area can be selected from the eye object and then extracted to obtain the midtone area.
Similar to the relationship between the shadow area and the original shadow area, the midtone area obtained with the second preset area parameter can be larger than the original midtone area determined only from the midtone information. For the principle, reference may be made to the disclosed embodiments above.
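For illustration, a matching NumPy sketch of the midtone extraction is given below. It assumes the standard exclusion-blend formula a + b − 2ab on normalized values, which for an image blended with itself reduces to 2a(1 − a): the result peaks for mid-gray pixels and vanishes at the extremes. The default parameter and threshold values are illustrative only.

```python
import numpy as np

def extract_midtone_mask(eye_gray, area_param=1.2, threshold=0.40):
    """eye_gray: uint8 grayscale image of the eye object.
    Returns a boolean mask of the pixels classified as midtone."""
    a = eye_gray / 255.0
    excluded = a + a - 2.0 * a * a                # exclusion blend with itself
    boosted = np.clip(excluded * area_param, 0.0, 1.0)  # second area parameter
    return boosted > threshold                    # second transparency threshold
```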
Through the embodiments of the present disclosure, the second blending result reflecting the midtone information can be obtained by exclusion blending the grayscale image of the eye object, and the midtone area can be extracted from the eye object according to this blending result and the second preset area parameter. On the one hand, this way of obtaining the midtone information is fast and convenient and can easily be batch-processed together with the way of obtaining the shadow information, improving the efficiency and the effect of the eye makeup as a whole; on the other hand, by introducing the second preset area parameter, the contrast of the midtone area relative to the other target areas can be effectively enhanced, and the range of the midtone area can be expanded to compensate for the omitted highlight area, so that the eye makeup processing performed on the divided target areas can be more accurate and produce better results.
FIG. 3 shows a flowchart of an image processing method according to an embodiment of the present disclosure. As shown in the figure, in a possible implementation, step S13 may include:
Step S131: rendering the multiple target areas respectively according to a color parameter among the eye makeup parameters to obtain multiple intermediate eye makeup results;
Step S132: determining, according to the tones of the multiple target areas, the processing modes respectively corresponding to the multiple target areas;
Step S133: blending the eye object with the multiple intermediate eye makeup results respectively, in accordance with the processing modes corresponding to the multiple target areas, to obtain multiple eye makeup results.
An intermediate eye makeup result may be the rendering result obtained after rendering the color parameter into a target area. The color parameter may be a parameter for color rendering of the eye object and may take the form of a color value, RGB channel values, or the like. The color parameter may be determined from a color selected by the user or a color value entered by the user, or the color may be set in advance; it can be selected flexibly according to the actual situation.
Among the multiple target areas, different target areas may correspond to the same color parameter or to different color parameters, selected flexibly according to the actual situation. In a possible implementation, each target area may be blended with its corresponding color parameter to obtain the intermediate eye makeup result corresponding to that target area.
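As an illustration of step S131 only, the following sketch renders one color parameter into one target area. It assumes the boolean region masks from the extraction sketches above and simply paints the color into the region; a practical implementation might instead blend the color with the underlying texture.

```python
import numpy as np

def render_intermediate(eye_rgb, region_mask, color):
    """Render the chosen color parameter into one target area, giving the
    intermediate eye makeup result for that area (a simple fill here)."""
    out = eye_rgb.astype(np.float32).copy()
    out[region_mask] = np.asarray(color, dtype=np.float32)  # e.g. (180, 90, 120)
    return out
```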
Since different target areas may be divided based on different hue information, different target areas may correspond to different tones, such as the shadow area corresponding to shadows and the midtone area corresponding to midtones as mentioned in the disclosed embodiments above.
Target areas of different tones may also undergo eye makeup processing in different ways; the correspondence between tones and processing modes is detailed in the disclosed embodiments below and is not expanded here.
After the processing mode of each target area is determined, the eye object may be blended with each intermediate eye makeup result according to that processing mode to obtain the multiple eye makeup results.
Through the embodiments of the present disclosure, each target area can be blended using the processing mode corresponding to its tone to obtain the eye makeup result of that area, so that different target areas can have eye makeup effects matching their tones, which effectively improves the accuracy and richness of the overall eye makeup effect and also improves the flexibility of the eye makeup process.
In a possible implementation, step S132 may include:
in a case where the target area includes a shadow area, determining that the processing mode includes multiply blending;
in a case where the target area includes a midtone area, determining that the processing mode includes normal blending.
As can be seen from the disclosed embodiments above, when the target area is a shadow area, the intermediate eye makeup result of the shadow area and the eye object may be blended in multiply mode to obtain the eye makeup result of the shadow area. Multiply blending darkens the blending result, thereby fully preserving the tonal character of the shadow area and improving the processing effect.
When the target area is a midtone area, the intermediate eye makeup result of the midtone area and the eye object may be blended in normal mode to obtain the eye makeup result of the midtone area. Normal blending has little effect on the brightness of the blending result, thereby fully preserving the tonal character of the midtone area and improving the processing effect.
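For illustration, the tone-matched mixing of steps S132 and S133 might look like the sketch below, which assumes float RGB arrays on a 0–255 scale and the masks and intermediate results from the sketches above; multiply blending computes base × overlay / 255 and so darkens, while normal blending at full opacity simply takes the intermediate result.

```python
import numpy as np

def mix_region(eye_rgb, intermediate, region_mask, tone):
    """Blend the eye object with one intermediate eye makeup result, using
    the processing mode matched to the tone of the target area."""
    base = eye_rgb.astype(np.float32)
    inter = intermediate.astype(np.float32)
    if tone == "shadow":
        mixed = base * inter / 255.0              # multiply blending (darkens)
    elif tone == "midtone":
        mixed = inter                             # normal blending, full opacity
    else:
        raise ValueError(f"unknown tone: {tone}")
    out = base.copy()
    out[region_mask] = mixed[region_mask]         # apply only inside the area
    return out
```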
As can be seen from the disclosed embodiments above, blending the target areas of different tones in different ways allows the obtained eye makeup results to remain consistent with the original tonal character of the eye object, which greatly improves the realism of the eye makeup effect.
FIG. 4 shows a flowchart of an image processing method according to an embodiment of the present disclosure. As shown in the figure, in a possible implementation, step S14 may include:
Step S141: superimposing the multiple eye makeup results to obtain a target eye makeup result;
Step S142: fusing the target eye makeup result with the user image according to a fusion parameter among the eye makeup parameters to obtain the target user image.
In a possible implementation, the multiple target areas may also be determined respectively in multiple layers; in this case, the multiple eye makeup results obtained may also belong to multiple layers respectively. Therefore, in one example, the eye makeup results in the multiple layers may be superimposed to obtain the target eye makeup result.
The superposition method may be flexibly determined according to the actual situation; for example, it may be a direct superposition of the layers. In some possible implementations, the superposition may also be realized through one or more blending modes; for example, the multiple layers may be blended and superimposed in multiply mode.
In a possible implementation, the multiple eye makeup results may also be directly fused or blended to obtain the target eye makeup result.
After the target eye makeup result is obtained, the target eye makeup result may be fused with the user image according to the fusion parameter to obtain the target user image. The fusion method is likewise not limited in the embodiments of the present disclosure; for example, the fusion may be realized by adding or multiplying the pixel values of pixels at the same position, or the pixel values of pixels at the same position may be fused in a weighted manner.
In the weighted fusion, the fusion weight of the target eye makeup result may be determined according to the fusion parameter; the fusion parameter may be a preset parameter value or may be determined according to a parameter value entered by the user in the eye makeup operation, selected flexibly according to the actual situation.
In a possible implementation, the fusion parameter may be a transparency: the target eye makeup result is multiplied by the transparency and then added to the user image to obtain the target user image. By changing the transparency in the fusion parameter, the transparency effect of the entire eye makeup can be changed.
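For illustration, the following sketch shows one possible weighted (alpha) fusion of the superimposed eye makeup results with the user image. Treating the superposition as a per-region overwrite and using a single transparency value are simplifying assumptions, and the default value of 0.8 is arbitrary.

```python
import numpy as np

def fuse(user_rgb, makeup_results, fusion_transparency=0.8):
    """makeup_results: list of (eye makeup result, region mask) pairs, one
    per layer. Superimposes them and alpha-fuses them with the user image."""
    base = user_rgb.astype(np.float32)
    out = base.copy()
    for result, mask in makeup_results:           # direct layer superposition
        out[mask] = (base[mask] * (1.0 - fusion_transparency)
                     + result[mask].astype(np.float32) * fusion_transparency)
    return np.clip(out, 0, 255).astype(np.uint8)
```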
Through the embodiments of the present disclosure, the target eye makeup result can be fused with the user image according to the fusion parameter among the eye makeup parameters to obtain the target user image; by changing the fusion parameter, the eye makeup effect can be conveniently adjusted, which improves the flexibility and autonomy of the entire eye makeup process.
FIG. 5 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure. As shown in the figure, the image processing apparatus 20 may include:
a determination module 21, configured to determine, in response to an eye makeup operation on a user image, an eye object in the user image to be subjected to eye makeup processing;
a division module 22, configured to divide the eye object into multiple target areas based on hue information of the eye object and preset area parameters, where the range of a target area is larger than that of the original target area, the original target area being obtained by dividing the eye object based on the hue information;
an eye makeup module 23, configured to perform, according to eye makeup parameters in the eye makeup operation, eye makeup processing matching the color tone of each of the multiple target areas to obtain multiple eye makeup results; and
a generation module 24, configured to generate a target user image according to the multiple eye makeup results.
In a possible implementation, the determination module is configured to: perform key point recognition processing on the user image to determine an initial position of the eye object in the user image; copy the user image into multiple layers respectively; and, for each of the multiple layers, perform position expansion in that layer centered on the initial position to obtain an expanded position, and determine, according to the expanded position, the object in that layer to be subjected to eye makeup processing.
In a possible implementation, the hue information includes at least one of the following: shadow information or midtone information; and the division module is configured for at least one of the following: extracting a shadow area from the eye object based on the shadow information of the eye object in combination with a first preset area parameter among the preset area parameters; or extracting a midtone area from the eye object based on the midtone information of the eye object in combination with a second preset area parameter among the preset area parameters.
In a possible implementation, the division module is further configured to: perform multiply blending based on the inverted grayscale image of the eye object to obtain a first blending result; determine a first transparency of pixels in the eye object according to the first blending result and the first preset area parameter; and extract pixels in the eye object according to a first preset transparency threshold and the first transparency to obtain the shadow area, where the shadow area is larger than the original shadow area, the original shadow area being obtained by dividing the eye object based on the shadow information.
In a possible implementation, the division module is further configured to: perform exclusion blending based on the grayscale image of the eye object to obtain a second blending result; determine a second transparency of pixels in the eye object according to the second blending result and the second preset area parameter; and extract pixels in the eye object according to a second preset transparency threshold and the second transparency to obtain the midtone area, where the midtone area is larger than the original midtone area, the original midtone area being obtained by dividing the eye object based on the midtone information.
In a possible implementation, the eye makeup module is configured to: render the multiple target areas respectively according to a color parameter among the eye makeup parameters to obtain multiple intermediate eye makeup results; determine, according to the tones of the multiple target areas, the processing modes respectively corresponding to the multiple target areas; and blend the eye object with the multiple intermediate eye makeup results respectively, in accordance with the processing modes corresponding to the multiple target areas, to obtain multiple eye makeup results.
In a possible implementation, the target area includes at least one of the following: one or more of a shadow area and/or a midtone area; and the eye makeup module is further configured to: in response to determining that the target area includes a shadow area, determine that the processing mode includes multiply blending; and in response to determining that the target area includes a midtone area, determine that the processing mode includes normal blending.
In a possible implementation, the generation module is configured to: superimpose the multiple eye makeup results to obtain a target eye makeup result; and fuse the target eye makeup result with the user image according to a fusion parameter among the eye makeup parameters to obtain the target user image.
The present disclosure relates to the field of augmented reality. By acquiring image information of a target object in a real environment and then detecting or recognizing relevant features, states, and attributes of the target object with the aid of various vision-related algorithms, an AR effect combining the virtual and the real and matching a specific application can be obtained. Exemplarily, the target object may involve a face, limbs, gestures, or actions related to the human body; markers or landmarks related to objects; or sand tables, display areas, or display items related to venues or places. Vision-related algorithms may involve visual positioning, SLAM, three-dimensional reconstruction, image registration, background segmentation, key point extraction and tracking of objects, and pose or depth detection of objects. Specific applications may involve not only interactive scenarios such as guided tours, navigation, explanation, reconstruction, and virtual-effect overlay display related to real scenes or objects, but also special-effect processing related to people, such as makeup beautification, body beautification, special-effect display, and virtual model display. The relevant features, states, and attributes of the target object can be detected or recognized through a convolutional neural network, which is a network model obtained by model training based on a deep learning framework.
Application Scenario Example
In the field of computer vision, how to obtain eye makeup images with realistic and rich effects has become an urgent problem to be solved.
An application example of the present disclosure proposes an image processing method, including the following process:
Face recognition is performed based on a user image, an eye object is extracted from the user image, and the shadow area and the midtone area are separated for the eye object to obtain two target areas, namely the shadow area and the midtone area.
The shadow separation process may include:
multiply blending the inverted grayscale image of the eye object with the inverted grayscale image of the eye object (for example, copying the inverted grayscale image of the eye object and then performing multiply blending on the two identical inverted grayscale images) to obtain a first blending result; obtaining the alpha channel value in the first blending result and multiplying it by a first preset area parameter (for example, 1.2) to obtain a first grayscale result; mapping the first grayscale result to a transparency according to the black-and-white relationship of the first grayscale result, thereby obtaining the first transparency of each pixel in the eye object; and, based on the first transparency, screening the pixels belonging to the shadow area from the eye object to obtain the shadow area.
The midtone separation process may include:
exclusion blending the grayscale image of the eye object with the grayscale image of the eye object (for example, copying the grayscale image of the eye object and then performing exclusion blending on the two identical grayscale images) to obtain a second blending result; obtaining the alpha channel value in the second blending result and multiplying it by a second preset area parameter (for example, 1.2) to obtain a second grayscale result; mapping the second grayscale result to a transparency according to the black-and-white relationship of the second grayscale result, thereby obtaining the second transparency of each pixel in the eye object; and, based on the second transparency, screening the pixels belonging to the midtone area from the eye object to obtain the midtone area.
After the shadow area and the midtone area are obtained, the color parameters of the eye makeup entered by the user may be blended with the shadow area and the midtone area respectively to obtain intermediate eye makeup results x and y.
The obtained intermediate eye makeup result x is multiply blended with the eye object to obtain an eye makeup result g;
the obtained intermediate eye makeup result y is normally blended with the eye object to obtain an eye makeup result h.
The two layers on which the eye makeup results g and h are located are superimposed, i.e., packed into one group, to obtain a target eye makeup result i. The eye makeup result i may be fused with the user image to obtain the target user image, which is the image obtained after performing the above eye makeup processing on the eye object in the user image. Changing the transparency of i changes the transparency effect of the entire eye makeup color, where the target eye makeup result i may be fused with the user image in normal blending mode. A compact sketch of this end-to-end flow is given below.
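For illustration only, the following sketch strings together the helper functions from the earlier sketches (whose names and parameter values are assumptions, not part of the present disclosure), using the x, y, g, h, i naming of this example; for simplicity it computes the masks over the whole image rather than over a detected eye object.

```python
import numpy as np

def apply_eye_makeup(user_rgb, shadow_color, midtone_color, transparency=0.8):
    """End-to-end composition of the sketches above (assumed to be in scope)."""
    gray = user_rgb.astype(np.float32).mean(axis=2).astype(np.uint8)
    shadow_mask = extract_shadow_mask(gray)             # shadow separation
    midtone_mask = extract_midtone_mask(gray)           # midtone separation
    x = render_intermediate(user_rgb, shadow_mask, shadow_color)
    y = render_intermediate(user_rgb, midtone_mask, midtone_color)
    g = mix_region(user_rgb, x, shadow_mask, "shadow")      # multiply blending
    h = mix_region(user_rgb, y, midtone_mask, "midtone")    # normal blending
    # Superimpose g and h into the target eye makeup result i and fuse it with
    # the user image; changing `transparency` changes the overall effect.
    i = [(g, shadow_mask), (h, midtone_mask)]
    return fuse(user_rgb, i, fusion_transparency=transparency)
```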
The image processing method proposed in this application example of the present disclosure can, by processing target areas of different tones, make the eye makeup effect appear more natural, preserve the skin texture details of the eye area, and achieve a what-you-see-is-what-you-get processing effect. Moreover, the method proposed in the embodiments of the present disclosure allows more parameters to be defined, such as changing the color parameters of the eye makeup for different target areas and changing the fusion parameter between the target eye makeup result and the user image, making the eye makeup effect more controllable and allowing users to customize the beautification parameters according to their preferences. For example, a designer who needs to change the color of the eye shadow while designing a makeup look may adopt the method proposed in this application example, or may record the method steps of this application example as a set of actions following the operation process and run them in software such as PS, so that the eye makeup effect can be changed conveniently and quickly. It is also convenient for organizations to provide users with richer customization functions according to the method in this application example; for example, software developers in an organization may, when developing makeup-related software, incorporate the method of this application example into the underlying technology and retain an interface for modifying the eye makeup color, so as to facilitate color changes and the like.
It can be understood that the above method embodiments mentioned in the present disclosure can be combined with one another to form combined embodiments without departing from the principles and logic; due to space limitations, details are not repeated in the present disclosure.
Those skilled in the art can understand that, in the above methods of the specific implementations, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
An embodiment of the present disclosure further provides a computer-readable storage medium on which computer program instructions are stored, where the computer program instructions, when executed by a processor, implement the above method. The computer-readable storage medium may be a volatile or a non-volatile computer-readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; and a memory for storing processor-executable instructions, where the processor is configured to perform the above method.
In practical applications, the above memory may be a volatile memory, such as a RAM; a non-volatile memory, such as a ROM, a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or a combination of the above types of memory, and it provides instructions and data to the processor.
The above processor may be at least one of an ASIC, a DSP, a DSPD, a PLD, an FPGA, a CPU, a controller, a microcontroller, and a microprocessor. It can be understood that, for different devices, the electronic component used to implement the above processor function may also be something else, which is not specifically limited in the embodiments of the present disclosure.
The electronic device may be provided as a terminal, a server, or a device in another form.
Based on the same technical concept as the foregoing embodiments, an embodiment of the present disclosure further provides a computer program that, when executed by a processor, implements the above method.
FIG. 6 is a block diagram of an electronic device 800 according to an embodiment of the present disclosure. For example, the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
Referring to FIG. 6, the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls the overall operations of the electronic device 800, such as operations associated with display, telephone calls, data communication, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the above method. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components; for example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and the like. The memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk, or an optical disk.
The power supply component 806 provides power to the various components of the electronic device 800. The power supply component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or swipe action but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC), which is configured to receive external audio signals when the electronic device 800 is in an operating mode such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may be further stored in the memory 804 or sent via the communication component 816. In some embodiments, the audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.
The sensor component 814 includes one or more sensors for providing state assessments of various aspects of the electronic device 800. For example, the sensor component 814 may detect the open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; the sensor component 814 may also detect a change in position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and temperature changes of the electronic device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, such as the memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to carry out the above method.
FIG. 7 is a block diagram of an electronic device 1900 according to an embodiment of the present disclosure. For example, the electronic device 1900 may be provided as a server. Referring to FIG. 7, the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by a memory 1932 for storing instructions executable by the processing component 1922, such as application programs. The application programs stored in the memory 1932 may include one or more modules each corresponding to a set of instructions. In addition, the processing component 1922 is configured to execute instructions to perform the above method.
The electronic device 1900 may also include a power supply component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 can operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, or FreeBSD™.
In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, such as the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to carry out the above method.
The present disclosure may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to implement various aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (for example, light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
The computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to respective computing/processing devices, or to an external computer or external storage device via a network such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium within the respective computing/processing device.
用于执行本公开操作的计算机程序指令可以是汇编指令、指令集架构(ISA)指令、机器指令、机器相关指令、微代码、固件指令、状态设置数据、或者以一种或多种编程语言的任意组合编写的源代码或目标代码,所述编程语言包括面向对象的编程语言—诸如Smalltalk、C++等,以及常规的过程式编程语言—诸如“C”语言或类似的编程语言。计算机可读程序指令可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络—包括局域网(LAN)或广域网(WAN)—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。在一些实施例中,通过利用计算机可读程序指令的状态人员信息来个性化定制电子电路,例如可编程逻辑电路、现场可编程门阵列(FPGA)或可编程逻辑阵列(PLA),该电子电路可以执行计算机可读程序指令,从而实现本公开的各个方面。Computer program instructions for performing the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state setting data, or Source or object code written in any combination, including object-oriented programming languages—such as Smalltalk, C++, etc., and conventional procedural programming languages—such as the “C” language or similar programming languages. Computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server implement. In cases involving a remote computer, the remote computer can be connected to the user computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (such as via the Internet using an Internet service provider). connect). In some embodiments, electronic circuits, such as programmable logic circuits, field programmable gate arrays (FPGAs) or programmable logic arrays (PLAs), are personalized by utilizing status personnel information of computer readable program instructions, the electronic circuits Computer readable program instructions may be executed to implement various aspects of the present disclosure.
Aspects of the present disclosure are described herein with reference to flowcharts and/or block diagrams of methods, apparatuses (systems) and computer program products according to embodiments of the present disclosure. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions, when executed via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium, where the instructions cause a computer, a programmable data processing apparatus and/or other devices to function in a particular manner, such that the computer-readable medium having the instructions stored thereon comprises an article of manufacture including instructions which implement aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, another programmable data processing apparatus, or another device, so as to cause a series of operational steps to be performed on the computer, other programmable data processing apparatus, or other device to produce a computer-implemented process, such that the instructions which execute on the computer, other programmable data processing apparatus, or other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality and operation of possible implementations of systems, methods and computer program products according to multiple embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of instructions, which comprises one or more executable instructions for implementing the specified logical functions. In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two successive blocks may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or acts, or by a combination of dedicated hardware and computer instructions.
The embodiments of the present disclosure have been described above. The foregoing description is exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (11)

  1. An image processing method, comprising:
    in response to an eye makeup operation on a user image, determining an eye object in the user image to be subjected to eye makeup processing;
    dividing the eye object into a plurality of target areas based on hue information of the eye object and preset area parameters, wherein a range of each target area is larger than that of a corresponding original target area, and the original target area is obtained by dividing the eye object based on the hue information;
    performing, according to eye makeup parameters in the eye makeup operation, eye makeup processing matching a hue of the target area on each of the plurality of target areas, to obtain a plurality of eye makeup results; and
    generating a target user image according to the plurality of eye makeup results.
  2. The method according to claim 1, wherein the determining the eye object in the user image to be subjected to eye makeup processing comprises:
    performing key point recognition processing on the user image to determine an initial position of the eye object in the user image;
    copying the user image into a plurality of layers respectively; and
    for each of the plurality of layers, performing, in the layer, position extension centered on the initial position to obtain an extended position, and determining, according to the extended position, an object in the layer to be subjected to eye makeup processing.
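Illustrative sketch (not part of the claims): one way the position extension in claim 2 could be realized, assuming the eye key points are pixel coordinates and the extension is a symmetric scaling of their bounding box about its center; the function name, the `scale` argument, and the example factors are hypothetical.

```python
import numpy as np

def expand_eye_region(landmarks: np.ndarray, scale: float, shape: tuple) -> tuple:
    """Grow the tight eye bounding box symmetrically about its center by `scale`."""
    x0, y0 = landmarks.min(axis=0)              # tight box around the detected key points
    x1, y1 = landmarks.max(axis=0)
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0   # center of the initial position
    hw, hh = (x1 - x0) / 2.0 * scale, (y1 - y0) / 2.0 * scale
    h, w = shape
    return (max(int(cx - hw), 0), max(int(cy - hh), 0),
            min(int(cx + hw), w), min(int(cy + hh), h))

# One copy of the user image per layer, each with its own extended position;
# the scale factors below are placeholders, not values taken from the claims:
# layers = [user_image.copy() for _ in range(3)]
# boxes = [expand_eye_region(eye_pts, s, user_image.shape[:2]) for s in (1.2, 1.5, 2.0)]
```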
  3. The method according to claim 1 or 2, wherein the hue information comprises at least one of shadow information or midtone information; and
    the dividing the eye object into the plurality of target areas based on the hue information of the eye object and the preset area parameters comprises at least one of the following operations:
    extracting a shadow area from the eye object based on the shadow information of the eye object in combination with a first preset area parameter among the preset area parameters; or
    extracting a midtone area from the eye object based on the midtone information of the eye object in combination with a second preset area parameter among the preset area parameters.
  4. The method according to claim 3, wherein the extracting the shadow area from the eye object based on the shadow information of the eye object in combination with the first preset area parameter among the preset area parameters comprises:
    performing multiply blending based on an inverted grayscale image of the eye object to obtain a first blending result;
    determining a first transparency of pixels in the eye object according to the first blending result and the first preset area parameter; and
    extracting pixels in the eye object according to a first preset transparency threshold and the first transparency to obtain the shadow area, wherein the shadow area is larger than an original shadow area, and the original shadow area is obtained by dividing the eye object based on the shadow information.
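Illustrative sketch (not part of the claims): a minimal per-pixel reading of claim 4, assuming the multiply blending combines the inverted grayscale image with itself and that the first preset area parameter scales the blending result into the first transparency; both assumptions, and all names, are hypothetical.

```python
import numpy as np

def extract_shadow_area(eye: np.ndarray, area_param: float, thresh: float) -> np.ndarray:
    """Boolean mask of the (expanded) shadow area; `eye` is float RGB in [0, 1]."""
    gray = eye.mean(axis=-1)            # simple grayscale proxy for luminance
    inv = 1.0 - gray                    # inverted grayscale: shadows become bright
    blended = inv * inv                 # multiply blend of the inverted image with itself
    alpha = np.clip(blended * area_param, 0.0, 1.0)   # first transparency per pixel
    return alpha >= thresh              # extracted pixels form the shadow area

# An area_param above 1.0 lets more pixels clear the threshold, which is one
# way the extracted shadow area can come out larger than the original one.
```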
  5. The method according to claim 3 or 4, wherein the extracting the midtone area from the eye object based on the midtone information of the eye object in combination with the second preset area parameter among the preset area parameters comprises:
    performing exclusion blending based on a grayscale image of the eye object to obtain a second blending result;
    determining a second transparency of pixels in the eye object according to the second blending result and the second preset area parameter; and
    extracting pixels in the eye object according to a second preset transparency threshold and the second transparency to obtain the midtone area, wherein the midtone area is larger than an original midtone area, and the original midtone area is obtained by dividing the eye object based on the midtone information.
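Illustrative sketch (not part of the claims): the midtone counterpart under the standard exclusion-blend formula f(a, b) = a + b - 2ab, again assuming the grayscale image is blended with itself and the second preset area parameter acts as a scale; these are assumptions, not the claimed implementation.

```python
import numpy as np

def extract_midtone_area(eye: np.ndarray, area_param: float, thresh: float) -> np.ndarray:
    """Boolean mask of the (expanded) midtone area; `eye` is float RGB in [0, 1]."""
    gray = eye.mean(axis=-1)
    # Exclusion blend f(a, b) = a + b - 2ab; with a = b = gray this becomes
    # 2g(1 - g), which peaks at mid-gray and vanishes at black and white.
    blended = gray + gray - 2.0 * gray * gray
    alpha = np.clip(blended * area_param, 0.0, 1.0)   # second transparency per pixel
    return alpha >= thresh                            # extracted pixels form the midtone area
```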
  6. The method according to any one of claims 1 to 5, wherein the performing, according to the eye makeup parameters in the eye makeup operation, the eye makeup processing matching the hue of the target area on each of the plurality of target areas to obtain the plurality of eye makeup results comprises:
    rendering the plurality of target areas respectively according to a color parameter in the eye makeup parameters, to obtain a plurality of intermediate eye makeup results;
    determining, according to hues of the plurality of target areas, processing modes respectively corresponding to the plurality of target areas; and
    blending the eye object with the plurality of intermediate eye makeup results respectively in the processing modes respectively corresponding to the plurality of target areas, to obtain the plurality of eye makeup results.
  7. The method according to claim 6, wherein each target area comprises at least one of the following:
    a shadow area or a midtone area; and
    the determining, according to the hues of the plurality of target areas, the processing modes respectively corresponding to the plurality of target areas comprises:
    in response to determining that a target area comprises a shadow area, determining that the corresponding processing mode comprises multiply blending; or
    in response to determining that a target area comprises a midtone area, determining that the corresponding processing mode comprises normal blending.
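Illustrative sketch (not part of the claims): the tone-matched blending of claims 6 and 7 under textbook compositing definitions, where multiply blending is per-channel a*b and normal blending lets the rendered color cover the base; the masking scheme and all names are assumptions.

```python
import numpy as np

def blend_region(eye: np.ndarray, color: tuple, mask: np.ndarray, mode: str) -> np.ndarray:
    """One eye makeup result: blend a flat makeup color into `eye` over `mask`."""
    # Intermediate eye makeup result: the region rendered with the color parameter.
    rendered = np.broadcast_to(np.asarray(color, dtype=eye.dtype), eye.shape)
    if mode == "multiply":              # shadow areas: multiply blending darkens
        mixed = eye * rendered
    elif mode == "normal":              # midtone areas: the color covers the base
        mixed = rendered
    else:
        raise ValueError(f"unknown mode: {mode}")
    m = mask.astype(eye.dtype)[..., None]
    return eye * (1.0 - m) + mixed * m  # composite only inside the target area
```

Multiply blending can only darken the base, which is why it suits shadow areas, while normal blending preserves the rendered color in the midtones.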
  8. The method according to any one of claims 1 to 7, wherein the generating the target user image according to the plurality of eye makeup results comprises:
    superimposing the plurality of eye makeup results to obtain a target eye makeup result; and
    fusing the target eye makeup result with the user image according to a fusion parameter in the eye makeup parameters, to obtain the target user image.
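Illustrative sketch (not part of the claims): a minimal composition for claim 8, assuming each eye makeup result carries its own mask and the fusion parameter acts as a global alpha; the (image, mask) pairing is an assumption.

```python
import numpy as np

def compose_target_image(user: np.ndarray, results: list, fusion: float) -> np.ndarray:
    """Superimpose per-area eye makeup results, then fuse with the user image."""
    stacked = user.copy()
    for img, mask in results:                     # results: (image, mask) pairs
        m = mask.astype(user.dtype)[..., None]
        stacked = stacked * (1.0 - m) + img * m   # superimpose each eye makeup result
    # Fusion parameter in [0, 1]: 0 returns the original image unchanged.
    return user * (1.0 - fusion) + stacked * fusion
```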
  9. An image processing apparatus, comprising:
    a determining module, configured to determine, in response to an eye makeup operation on a user image, an eye object in the user image to be subjected to eye makeup processing;
    a dividing module, configured to divide the eye object into a plurality of target areas based on hue information of the eye object and preset area parameters, wherein a range of each target area is larger than that of a corresponding original target area, and the original target area is obtained by dividing the eye object based on the hue information;
    an eye makeup module, configured to perform, according to eye makeup parameters in the eye makeup operation, eye makeup processing matching a hue of the target area on each of the plurality of target areas, to obtain a plurality of eye makeup results; and
    a generating module, configured to generate a target user image according to the plurality of eye makeup results.
  10. An electronic device, comprising:
    a processor; and
    a memory configured to store processor-executable instructions;
    wherein the processor is configured to invoke the instructions stored in the memory to execute the method according to any one of claims 1 to 8.
  11. A computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 8.
PCT/CN2022/120109 2021-09-27 2022-09-21 Image processing method and apparatus, electronic device, and storage medium WO2023045950A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111137187.6 2021-09-27
CN202111137187.6A CN113781359A (en) 2021-09-27 2021-09-27 Image processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
WO2023045950A1 (en)

Family

ID=78853735

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/120109 WO2023045950A1 (en) 2021-09-27 2022-09-21 Image processing method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN113781359A (en)
WO (1) WO2023045950A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113781359A (en) * 2021-09-27 2021-12-10 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium


Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109584153A (en) * 2018-12-06 2019-04-05 北京旷视科技有限公司 Modify the methods, devices and systems of eye
US11403788B2 (en) * 2019-11-22 2022-08-02 Beijing Sensetime Technology Development Co., Ltd. Image processing method and apparatus, electronic device, and storage medium
CN112330527A (en) * 2020-05-29 2021-02-05 北京沃东天骏信息技术有限公司 Image processing method, image processing apparatus, electronic device, and medium
CN112581395A (en) * 2020-12-15 2021-03-30 维沃移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN112801916A (en) * 2021-02-23 2021-05-14 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN112767285B (en) * 2021-02-23 2023-03-10 北京市商汤科技开发有限公司 Image processing method and device, electronic device and storage medium
CN112766234B (en) * 2021-02-23 2023-05-12 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5999658A (en) * 1996-06-28 1999-12-07 Dainippon Screen Mfg. Co., Ltd. Image tone interpolation method and apparatus therefor
US20050141002A1 (en) * 2003-12-26 2005-06-30 Konica Minolta Photo Imaging, Inc. Image-processing method, image-processing apparatus and image-recording apparatus
US20140270514A1 (en) * 2013-03-14 2014-09-18 Ili Technology Corporation Image processing method
JP2015156072A (en) * 2014-02-20 2015-08-27 国立大学法人お茶の水女子大学 Eye make-up design creation method and program
CN108053365A (en) * 2017-12-29 2018-05-18 百度在线网络技术(北京)有限公司 For generating the method and apparatus of information
CN110572652A (en) * 2019-09-04 2019-12-13 锐捷网络股份有限公司 Static image processing method and device
CN111583102A (en) * 2020-05-14 2020-08-25 北京字节跳动网络技术有限公司 Face image processing method and device, electronic equipment and computer storage medium
CN113781359A (en) * 2021-09-27 2021-12-10 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113781359A (en) 2021-12-10


Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application
Ref document number: 22871996
Country of ref document: EP
Kind code of ref document: A1