WO2023045941A1 - Image processing method and apparatus, electronic device, and storage medium - Google Patents

Image processing method and apparatus, electronic device, and storage medium

Info

Publication number: WO2023045941A1
Authority: WIPO (PCT)
Prior art keywords: beautification, target, area, beautified, preset
Application number: PCT/CN2022/120090
Other languages: English (en), Chinese (zh)
Inventors: 孙仁辉, 苏柳
Original assignee: 上海商汤智能科技有限公司
Application filed by 上海商汤智能科技有限公司
Publication of WO2023045941A1


Classifications

    • G06T5/77: Retouching; Inpainting; Scratch removal (G Physics; G06 Computing, calculating or counting; G06T Image data processing or generation, in general; G06T5/00 Image enhancement or restoration)
    • G06N3/045: Combinations of networks (G06N Computing arrangements based on specific computational models; G06N3/00 Computing arrangements based on biological models; G06N3/02 Neural networks; G06N3/04 Architecture, e.g. interconnection topology)
    • G06N3/08: Learning methods (G06N3/02 Neural networks)
    • G06T2207/10024: Color image (G06T2207/00 Indexing scheme for image analysis or image enhancement; G06T2207/10 Image acquisition modality)
    • G06T2207/20081: Training; Learning (G06T2207/20 Special algorithmic details)
    • G06T2207/20221: Image fusion; Image merging (G06T2207/20 Special algorithmic details; G06T2207/20212 Image combination)
    • G06T2207/30201: Face (G06T2207/30 Subject of image; G06T2207/30196 Human being; Person)

Definitions

  • The present disclosure relates to the field of computer vision, and in particular to an image processing method and device, an electronic device, and a storage medium.
  • The present disclosure proposes an image processing scheme.
  • An image processing method is provided, including:
  • in response to a beautification operation on a user image, determining a target position of a target part in the user image; based on the target position, determining multiple target areas to be beautified in the user image, where the multiple target areas belong to a preset area range of the target part; according to beautification parameters in the beautification operation, performing, on each of the multiple target areas, beautification processing matching the target area to obtain multiple beautification results; and generating a target user image according to the multiple beautification results.
  • determining the multiple target areas to be beautified in the user image based on the target position includes: copying the user image to multiple layers respectively; traversing the multiple layers, taking the traversed layer as a target layer; and, in the target layer, determining N target areas to be beautified in the user image according to the target position, where N is a positive integer smaller than the number of the multiple target areas.
  • determining the N target areas to be beautified in the user image according to the target position includes: obtaining a preset positional relationship between the N target areas and the target part; and, in the target layer, performing area extension centered on the target position according to the preset positional relationship to obtain the N target areas.
  • the multiple layers are arranged in a preset order, and the preset order matches the beautification execution order in the beautification operation.
  • determining the multiple target areas to be beautified in the user image based on the target position includes: centering on the target position, performing area extension in multiple preset directions according to multiple preset extension ranges respectively, to determine the multiple target areas.
  • performing, on each of the multiple target areas, the beautification processing matching the target area to obtain multiple beautification results includes: traversing the multiple target areas, taking the traversed target area as an area to be beautified, and acquiring the original color of the user image in the area to be beautified; determining, based on the beautification parameters in the beautification operation, a beautification color corresponding to the area to be beautified; and, in the area to be beautified, fusing the original color with the beautification color to obtain a beautification result for the area to be beautified.
  • the multiple beautification results respectively belong to multiple layers, the multiple layers are arranged in a preset order, and the preset order matches the beautification execution order in the beautification operation; generating the target user image according to the multiple beautification results includes: superimposing, according to the preset order, the multiple layers to which the multiple beautification results belong to obtain a target beautification result; and fusing the target beautification result with the user image to obtain the target user image.
  • superimposing, according to the preset order, the multiple layers to which the multiple beautification results belong to obtain the target beautification result includes: superimposing, according to the preset order, the multiple layers to which the multiple beautification results belong to obtain an intermediate beautification result; and fusing the intermediate beautification result with a preset texture material to obtain the target beautification result.
  • the target part includes an eye part; the beautification operation includes an eye shadow rendering operation; and the multiple target areas include at least one of a base eye shadow area, a base lower eye shadow area, an upper eyelid area, an outer eye corner area, an inner eye corner area, or an outer upper eye shadow area.
  • An image processing device is provided, including:
  • a determination module, configured to determine a target position of a target part in a user image in response to a beautification operation on the user image; an area determination module, configured to determine, based on the target position, multiple target areas to be beautified in the user image, where the multiple target areas belong to a preset area range of the target part; a beautification module, configured to perform, according to beautification parameters in the beautification operation, beautification processing matching the target area on each of the multiple target areas to obtain multiple beautification results; and a generation module, configured to generate a target user image according to the multiple beautification results.
  • the area determination module is configured to: copy the user image to multiple layers respectively; traverse the multiple layers, taking the traversed layer as the target layer; and, in the target layer, determine N target areas to be beautified in the user image according to the target position, where N is a positive integer smaller than the number of the multiple target areas.
  • the area determination module is further configured to: obtain a preset positional relationship between the N target areas and the target part; and, in the target layer, centered on the target position, perform area extension according to the preset positional relationship to obtain the N target areas.
  • the multiple layers are arranged in a preset order, and the preset order matches the beautification execution order in the beautification operation.
  • the area determination module is configured to: take the target position as the center and perform area extension in multiple preset directions respectively according to multiple preset extension ranges, to determine the multiple target areas.
  • the beautification module is configured to: traverse the multiple target areas, taking the traversed target area as the area to be beautified, and acquire the original color of the user image in the area to be beautified; determine, based on the beautification parameters in the beautification operation, the beautification color corresponding to the area to be beautified; and, in the area to be beautified, fuse the original color with the beautification color to obtain the beautification result of the area to be beautified.
  • the multiple beautification results respectively belong to multiple layers, the multiple layers are arranged in a preset order, and the preset order matches the beautification execution order in the beautification operation; the generation module is configured to: superimpose, according to the preset order, the multiple layers to which the multiple beautification results belong to obtain a target beautification result; and fuse the target beautification result with the user image to obtain the target user image.
  • the generation module is further configured to: superimpose, according to the preset order, the multiple layers to which the multiple beautification results belong to obtain an intermediate beautification result; and fuse the intermediate beautification result with a preset texture material to obtain the target beautification result.
  • the target part includes an eye part; the beautification operation includes an eye shadow rendering operation; and the multiple target areas include at least one of a base eye shadow area, a base lower eye shadow area, an upper eyelid area, an outer eye corner area, an inner eye corner area, or an outer upper eye shadow area.
  • An electronic device is provided, including: a processor; and a memory for storing processor-executable instructions; where the processor is configured to execute the above image processing method.
  • A computer-readable storage medium is provided, on which computer program instructions are stored; when the computer program instructions are executed by a processor, the above image processing method is implemented.
  • In the embodiments of the present disclosure, multiple target areas to be beautified in the user image are determined based on the target position; then, according to the beautification parameters in the beautification operation, beautification processing matching each target area is performed on the multiple target areas to obtain multiple beautification results; and a target user image with the beautified target part is generated according to the multiple beautification results.
  • In this way, the area to be beautified can be divided into multiple target areas based on the target position of the target part, and these target areas can be processed separately to obtain the target user image, realizing independent beautification of multiple areas; this improves the flexibility of beautification while also enriching the beautification effect.
  • For the eye shadow rendering operation, with the method proposed in the embodiments of the present disclosure, the multiple target areas of eye shadow rendering can be processed separately: on the one hand, the eye shadow rendering accuracy of each target area can be improved individually; on the other hand, the overall eye shadow effect becomes richer and more layered.
  • Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
  • FIG. 2 shows a schematic diagram of a mask image of a target area according to an embodiment of the present disclosure.
  • FIG. 3 shows a schematic diagram of a mask image of a target area according to an embodiment of the present disclosure.
  • FIG. 4 shows a schematic diagram of a mask image of a target area according to an embodiment of the present disclosure.
  • FIG. 5 shows a schematic diagram of a mask image of a target area according to an embodiment of the present disclosure.
  • FIG. 6 shows a schematic diagram of a mask image of a target area according to an embodiment of the present disclosure.
  • FIG. 7 shows a schematic diagram of a mask image of a target area according to an embodiment of the present disclosure.
  • Fig. 8 shows a schematic diagram of a user image according to an embodiment of the present disclosure.
  • FIG. 9 shows an overlay effect obtained by overlaying multiple layers to which multiple beautification results belong according to a preset order.
  • FIG. 10 shows an overlay effect obtained by overlaying multiple layers to which multiple beautification results belong according to a preset order.
  • FIG. 11 shows an overlay effect obtained by overlaying multiple layers to which multiple beautification results belong according to a preset order.
  • FIG. 12 shows an overlay effect obtained by overlaying multiple layers to which multiple beautification results belong according to a preset order.
  • FIG. 13 shows an overlay effect obtained by overlaying multiple layers to which multiple beautification results belong according to a preset order.
  • FIG. 14 shows an overlay effect obtained by overlaying multiple layers to which multiple beautification results belong according to a preset order.
  • FIG. 15 shows a block diagram of an image processing device according to an embodiment of the present disclosure.
  • FIG. 16 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
  • FIG. 17 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
  • Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
  • the method can be applied to an image processing device or an image processing system, and the image processing device can be a terminal device, a server, or other processing devices.
  • the terminal device can be a user equipment (User Equipment, UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (Personal Digital Assistant, PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc.
  • the image processing method can be applied to a cloud server or a local server, and the cloud server can be a public cloud server or a private cloud server, which can be flexibly selected according to actual conditions.
  • the image processing method may also be implemented in a manner in which the processor invokes computer-readable instructions stored in the memory.
  • the image processing method may include:
  • Step S11: in response to the beautification operation on the user image, determine the target position of the target part in the user image.
  • the user image can be any image containing a target part of the user; it can contain one or more users, and can also contain one or more target parts of a user, and its implementation form can be flexibly determined according to the actual situation.
  • the target part can be any part of the user image that needs to be beautified; which parts are included in the target part can be flexibly determined according to the actual situation of the beautification operation.
  • for an eye shadow rendering operation, the target part can be the eye part;
  • for a contouring operation, the target part can be the face, the bridge of the nose, and other parts that need to be contoured;
  • for a lip makeup operation, the target part can be the lip part, etc.
  • the beautification operation can be any operation that performs beautification processing on the target part in the user image, such as an eye shadow rendering operation, a face contouring operation, or a lip makeup operation.
  • the operation content included in the beautification operation can be flexibly determined according to the actual situation, and is not limited to the following disclosed embodiments.
  • the beautification operation may include an operation of performing beautification processing on the user's target part in the user image; in some possible implementation manners, the beautification operation may also include various input beautification parameters and the like.
  • the beautification parameters may be parameters related to beautification of the target site input by the user, and the implementation form of the beautification parameters may be flexibly determined, such as various parameters such as beautification color, beautification intensity, or transparency.
  • the target location may be the location of the target part in the user's image.
  • the manner of determining the target position is not limited in the embodiments of the present disclosure; the following disclosed embodiments are merely examples.
  • recognition processing may be performed on the user image to determine the target location.
  • the recognition processing manner is not limited in the embodiments of the present disclosure, for example, key point recognition or direct recognition of the entire target part may be used.
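As an illustration only, the following minimal Python/NumPy sketch shows how a target position might be derived from keypoints; the landmark detector itself and the `eye_landmarks` input are assumptions, not part of the disclosure.

```python
import numpy as np

def target_position_from_keypoints(eye_landmarks: np.ndarray) -> np.ndarray:
    """Estimate the target position of an eye part as the centroid of its keypoints.

    `eye_landmarks` is assumed to be an (N, 2) array of (x, y) keypoints
    produced by any face-landmark detector; the detector is out of scope here.
    """
    return eye_landmarks.mean(axis=0)
```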
  • Step S12: based on the target position, determine multiple target areas to be beautified in the user image.
  • the area type and area location of each target area can be flexibly determined according to the actual situation of the beautification operation; multiple target areas may overlap or be independent of each other, which is not limited in the embodiments of the present disclosure.
  • the determined multiple target areas can be saved in the form of a mask image (mask).
  • the mask image can be used to determine the position of the target area in the user image.
  • each pixel in the mask image can also have a different transparency, so that the target area has a soft, transitional border; the transparency of each pixel in the mask image can be flexibly set according to the actual situation, which is not limited in the embodiments of the present disclosure.
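The disclosure does not fix how the per-pixel transparency is generated; the sketch below is a non-authoritative example that builds a soft-edged elliptical mask whose alpha fades linearly over a feather band (the elliptical shape and the feather width are illustrative assumptions).

```python
import numpy as np

def soft_ellipse_mask(h, w, center, radii, feather=0.3):
    """Return an (h, w) float mask in [0, 1] with a soft, transitional border.

    Pixels inside the ellipse get alpha 1.0; alpha fades linearly to 0
    across a band of relative width `feather` outside the boundary.
    """
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = center
    ry, rx = radii
    # Normalized elliptical distance: 1.0 exactly on the ellipse boundary.
    d = np.sqrt(((ys - cy) / ry) ** 2 + ((xs - cx) / rx) ** 2)
    # Linear falloff over the feather band, clipped into [0, 1].
    return np.clip((1.0 + feather - d) / feather, 0.0, 1.0)
```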
  • for the eye shadow rendering operation, the target area may be one or more areas that need to be rendered, such as the six areas comprising the base eye shadow area, the base lower eye shadow area, the upper eyelid area, the outer eye corner area, the inner eye corner area, and the outer upper eye shadow area.
  • FIGS. 2 to 7 show schematic diagrams of mask images of the target areas according to an embodiment of the present disclosure.
  • the base eye shadow area may be the area between the eye and the eyebrow.
  • the base lower eye shadow area may be the area under the eye where the lower eyelid is located.
  • the upper eyelid area may be the area above the eyes where the upper eyelid is located.
  • the outer eye corner area may be the area where the corner of the eye near the face contour is located.
  • the inner eye corner area may be the area where the corner of the eye closer to the nose side is located.
  • the outer upper eye shadow area may be the area above the outer corner of the eye.
  • for a contouring operation, the target area may be an area that needs to be contoured, such as the forehead area, the nose bridge area, or the mandible area.
  • for a lip makeup operation, the target area may be an area where lip makeup needs to be applied, such as the upper lip, the lower lip, the center of the lip, or the edge of the lip.
  • since the target areas are determined according to the target position, they generally lie within a certain range of the target part; thus the multiple target areas may belong to the preset area range of the target part. The size of the preset area range can be determined according to the actual situations of the target part and the target areas, which is not limited in the embodiments of the present disclosure.
  • the method of determining multiple target areas according to the target position can be flexibly selected according to the actual situation.
  • for example, multiple target areas can be determined by extending from the target position in multiple directions, or a target area near the target position can be determined separately in each of several layers; refer to the following disclosed embodiments for details, which will not be expanded here.
  • Step S13: according to the beautification parameters in the beautification operation, perform, on each of the multiple target areas, beautification processing matching the target area to obtain multiple beautification results.
  • the beautification parameters corresponding to different target areas can be the same or different; for example, different target areas can correspond to the same or different beautification colors, or different areas can be beautified with the same or different beautification intensities, etc. This can be flexibly set according to actual conditions and is not limited in the embodiments of the present disclosure.
  • the beautification result may be a result obtained after the target area is beautified.
  • for different target areas, the manner of beautification processing may also differ; therefore, corresponding beautification processing may be performed on the multiple target areas to obtain multiple beautification results.
  • for some possible implementations of step S13, reference may be made to the following disclosed embodiments, which will not be expanded here.
  • Step S14: generate a target user image according to the multiple beautification results.
  • the target user image may be an image obtained after the preset area range of the target part in the user image is beautified. The method of generating the target user image can be flexibly determined according to the actual situation; for example, the multiple beautification results may be fused to obtain the target user image, or the multiple beautification results may be fused with the user image to obtain the target user image.
  • multiple beautification results may also respectively belong to multiple layers.
  • the target user image may be obtained by superimposing the layers.
  • for some possible implementations of step S14, reference may be made to the following disclosed embodiments, which will not be expanded here.
  • In the embodiments of the present disclosure, multiple target areas to be beautified in the user image are determined based on the target position; then, according to the beautification parameters in the beautification operation, beautification processing matching each target area is performed on the multiple target areas to obtain multiple beautification results; and a target user image with the beautified target part is generated according to the multiple beautification results.
  • In this way, the area to be beautified can be divided into multiple target areas based on the target position of the target part, and these target areas can be processed separately to obtain the target user image, realizing independent beautification of multiple areas; this improves the flexibility of beautification while also enriching the beautification effect.
  • For the eye shadow rendering operation, with the method proposed in the embodiments of the present disclosure, the multiple target areas of eye shadow rendering can be processed separately: on the one hand, the eye shadow rendering accuracy of each target area can be improved individually; on the other hand, the overall eye shadow effect becomes richer and more layered.
  • In a possible implementation, step S12 may include:
  • copying the user image to multiple layers respectively; traversing the multiple layers, taking the traversed layer as the target layer; and, in the target layer, determining N target areas to be beautified in the user image according to the target position.
  • the layer may be any layer having an image processing or editing function, such as an editing layer in an image editing software (Photoshop, PS).
  • the number of multiple layers can be flexibly determined according to the actual situation, and is not limited in the embodiments of the present disclosure.
  • the number of layers can be the same as the number of target areas; in some possible implementations, the number of layers can also be smaller than the number of target areas, in which case two or more target areas may be determined in one or some of the layers.
  • copying the user image to multiple layers respectively may mean copying the target part and the preset area range of the target part in the user image to multiple layers, copying the user image as a whole to multiple layers, or directly copying the original layer where the user image is located to multiple layers, etc.
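As a trivial sketch of the copying step (NumPy assumed; whole-image copies are used here, though, as noted above, a crop around the target part would also work):

```python
import numpy as np

def copy_to_layers(user_img: np.ndarray, num_layers: int) -> list:
    """Copy the user image once per layer so that each target area can be
    determined and beautified independently in its own layer."""
    return [user_img.copy() for _ in range(num_layers)]
```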
  • multiple layers can be traversed, and each layer traversed can be used as a target layer.
  • N target areas may be determined according to the target position, where N is a positive integer, and N is less than the number of multiple target areas.
  • N can be 1; that is, one target area is determined in each target layer. For example, for the eye shadow rendering operation, the six target areas to be beautified mentioned in the above disclosed embodiments can be determined; in an example, the user image can be copied to six layers, and one target area to be beautified is determined in each layer, e.g., the base eye shadow area in the first layer, the base lower eye shadow area in the second layer, and so on.
  • N can also be an integer greater than 1 and less than the number of target areas.
  • for example, for the eye shadow rendering operation, six target areas to be beautified can be determined.
  • in an example, the user image can be copied to five layers. Since the beautification color and beautification method of the base eye shadow area and the base lower eye shadow area are generally the same, these two areas can both be determined in the first layer, and one target area can be determined in each of the remaining four layers, and so on.
  • the manner of determining the target area in the target layer can be flexibly determined according to the actual situation of the target area. For details, refer to the following disclosed embodiments, which will not be expanded here.
  • in this way, multiple target areas can be determined separately and independently by copying the user image to multiple layers, which facilitates subsequent beautification processing of the multiple target areas, improves the flexibility of beautification, and makes it easy to change the beautification effect of an individual area to increase the richness of the beautification.
  • In a possible implementation, in the target layer, determining the N target areas to be beautified in the user image according to the target position includes:
  • obtaining the preset positional relationship between the N target areas and the target part; and, centered on the target position, performing area extension according to the preset positional relationship to obtain the N target areas.
  • the preset positional relationship may be an objectively existing positional relationship between the target area and the target part.
  • for example, the preset positional relationship between the base eye shadow area and the target part can be: within a range of x1 to x2 above the target part; the preset positional relationship between the base lower eye shadow area and the target part can be: within a range of x3 to x4 below the target part. The values of x1, x2, x3, and x4 can be determined according to the actual situation, and are not limited in the embodiments of the present disclosure.
  • the preset positional relationship between other target areas and target parts can be deduced by analogy, and is flexibly determined according to the actual operation of eye shadow rendering, and will not be repeated here.
  • after the preset positional relationship is determined, in the target layer, area extension can be performed centered on the target position according to the preset positional relationship to obtain the N target areas; the N target areas determined by area extension can be merged into one, or kept as N separate target areas for subsequent processing.
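A minimal sketch of area extension under a preset positional relationship; `dy_top` and `dy_bottom` stand in for the offsets x1..x4 above, and all values are illustrative assumptions.

```python
def extend_region(target_box, dy_top, dy_bottom, image_shape):
    """Extend a rectangle from the target part's bounding box, centered on it.

    `target_box` is (x0, y0, x1, y1) for the detected eye part; positive
    `dy_top` / `dy_bottom` grow the box upward / downward, e.g. toward the
    base eye shadow area or the base lower eye shadow area.
    """
    x0, y0, x1, y1 = target_box
    h, w = image_shape[:2]
    return (x0, max(0, y0 - dy_top), x1, min(h, y1 + dy_bottom))
```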
  • multiple layers may be arranged in a preset order, where the preset order may match the beautification execution order in the beautification operation.
  • the beautification execution sequence may be the beautification processing sequence of different target areas in the beautification operation.
  • in an example of the eye shadow rendering operation, the beautification execution order from first to last may be: the base eye shadow area, the base lower eye shadow area, the outer upper eye shadow area, the outer eye corner area, the upper eyelid area, and the inner eye corner area.
  • the preset order of layer arrangement matches the beautification execution order, and the matching method can be set according to the actual situation.
  • in an example, the layer corresponding to the target area to be beautified first is set as the lowest of the multiple layers, and the layer corresponding to the target area to be beautified last is set as the uppermost. Therefore, in an example, the target areas corresponding to the layers from bottom to top in the eye shadow rendering operation are: the base eye shadow area, the base lower eye shadow area, the outer upper eye shadow area, the outer eye corner area, the upper eyelid area, and the inner eye corner area.
  • in this way, the beautified target user image can be obtained by simulating the actual steps and techniques of the beautification operation, effectively improving the effect and authenticity of the beautification.
  • In a possible implementation, step S12 may include: centering on the target position, performing area extension in multiple preset directions according to multiple preset extension ranges respectively, to determine the multiple target areas.
  • the plurality of preset directions may be the directions of the plurality of target regions relative to the target part, respectively, and the plurality of preset extension ranges may be the extension ranges of the plurality of target regions respectively relative to the target part.
  • the multiple preset extension ranges are all within the preset area range.
  • in this way, the positions of multiple target areas can be determined at the same time according to the preset directions and preset extension ranges between the multiple target areas and the target part, improving the convenience of determining the target areas and, in turn, the processing efficiency of the beautification process.
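The sketch below illustrates determining several target areas at once from one target position; the region names, offsets, and sizes are invented for illustration only.

```python
def regions_from_directions(center, extents, image_shape):
    """Derive several rectangular target areas from a single target position.

    `extents` maps a region name to (dx, dy, half_w, half_h): the offset of
    that region's center from the target position plus its half-size.
    """
    cx, cy = center
    h, w = image_shape[:2]
    regions = {}
    for name, (dx, dy, half_w, half_h) in extents.items():
        x0 = max(0, int(cx + dx - half_w))
        y0 = max(0, int(cy + dy - half_h))
        x1 = min(w, int(cx + dx + half_w))
        y1 = min(h, int(cy + dy + half_h))
        regions[name] = (x0, y0, x1, y1)
    return regions

# Purely illustrative extents relative to an eye center (pixel units):
EXTENTS = {
    "base_eye_shadow":   (0, -40, 60, 25),  # above the eye
    "base_lower_shadow": (0,  25, 55, 15),  # below the eye
    "outer_corner":      (45,  0, 20, 15),  # toward the face contour
}
```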
  • In a possible implementation, step S13 may include:
  • traversing the multiple target areas, taking the traversed target area as the area to be beautified, and acquiring the original color of the user image in the area to be beautified;
  • determining, based on the beautification parameters in the beautification operation, the beautification color corresponding to the area to be beautified; and, in the area to be beautified, fusing the original color with the beautification color to obtain the beautification result of the area to be beautified.
  • the original color can be the color information of the user image itself. Since the position of the area to be beautified in the user image is determined in step S12, the original color in the area to be beautified can be obtained by extracting the color of the user image within the range of that area.
  • the beautification color may be a color used for rendering the area to be beautified, and the beautification color may be determined according to a color value or an RGB channel value input by the user during the beautification operation.
  • different target areas can correspond to different beautification parameters, so the beautification color corresponding to the area to be beautified can be determined according to its area type; for example, the same beautification color can be used for the base eye shadow area and the base lower eye shadow area, while different beautification colors can be used for areas such as the outer eye corner, the inner eye corner highlight, the upper eyelid, or the outer upper eye shadow.
  • the original color and the beautification color may be fused to obtain a beautification result of the area to be beautified, wherein the fusion method may be flexibly selected according to actual conditions. For example, each pixel in the area to be beautified can be traversed, and the original color and beautification color corresponding to each pixel can be added or multiplied to achieve fusion.
  • in some possible implementations, the beautification color can also be blended with the mask image corresponding to the area to be beautified in the normal blend mode to achieve fusion.
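Both fusion variants mentioned above can be written compactly; this sketch (NumPy, float images in [0, 1]) shows a mask-weighted normal blend and a per-pixel multiplicative fusion. The `strength` parameter stands in for a user-supplied beautification-intensity parameter and is an assumption.

```python
import numpy as np

def fuse_normal(original, beauty_color, mask, strength=1.0):
    """Mask-weighted normal blend of a flat beautification color into an area.

    `original` is (H, W, 3) float in [0, 1], `beauty_color` an RGB triple,
    and `mask` the (H, W) soft mask; `strength` scales the blend weight.
    """
    alpha = (mask * strength)[..., None]
    color = np.asarray(beauty_color, dtype=np.float32)
    return original * (1.0 - alpha) + color * alpha

def fuse_multiply(original, beauty_color, mask):
    """Per-pixel multiplicative fusion, restricted to the masked area."""
    alpha = mask[..., None]
    color = np.asarray(beauty_color, dtype=np.float32)
    return original * (1.0 - alpha) + (original * color) * alpha
```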
  • through the above process, each target area can be processed separately according to its corresponding beautification color to obtain its beautification result, so that different target areas can have beautification effects of different colors, effectively improving the richness of the overall beautification effect as well as the flexibility of the beautification process.
  • In a possible implementation, step S14 may include:
  • superimposing, according to the preset order, the multiple layers to which the multiple beautification results belong to obtain the target beautification result;
  • fusing the target beautification result with the user image to obtain the target user image.
  • multiple target areas may be determined through multiple layers, and correspondingly, multiple obtained beautification results may also respectively belong to multiple layers.
  • the multiple layers can be arranged in a preset order, so the positional relationship of the multiple layers during the superposition process can be consistent with the preset order.
  • the overlay order may be consistent with or different from the preset order.
  • the manner of superposition can be flexibly decided according to the actual situation; for example, layers can be superimposed directly, and in some possible implementations, superposition can also be realized through one or more blend modes, e.g., multiple layers can be blended and superimposed in the multiply mode.
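A sketch of superimposing layers bottom-to-top with the multiply blend mode (float images in [0, 1]; the choice of multiply follows the example just given):

```python
def stack_layers_multiply(layers):
    """Superimpose layers in their preset bottom-to-top order via multiply.

    Multiply never brightens, which suits pigment-like makeup layers;
    `layers` is a list of (H, W, 3) float arrays in [0, 1], bottom first.
    """
    result = layers[0]
    for layer in layers[1:]:
        result = result * layer  # multiply blend with the next layer up
    return result
```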
  • the target beautification result can be fused with the user image to obtain the target user image.
  • the fusion method is likewise not limited in the embodiments of the present disclosure; reference may be made to the above fusion of the original color with the beautification color, which will not be repeated here.
  • layers belonging to multiple beautification results can be superimposed according to a preset order to obtain the target beautification result.
  • in this way, the beautification effect on the user image can be determined simply and conveniently, and superimposing the layers in the preset order improves the authenticity of the beautification effect.
  • In a possible implementation, superimposing the multiple layers to which the multiple beautification results belong according to the preset order to obtain the target beautification result may include:
  • superimposing, according to the preset order, the multiple layers to which the multiple beautification results belong to obtain an intermediate beautification result; and fusing the intermediate beautification result with the preset texture material to obtain the target beautification result.
  • the preset texture material can be an additional material for beautifying the target part, and its implementation method can be flexibly changed according to different beautification operations.
  • for the eye shadow rendering operation, the preset texture material can include an eye shadow powder material;
  • the preset texture material can also include matte or high-gloss materials, and for lip makeup operations, the preset texture material can include pearlescent, velvet, or matte materials.
  • the preset texture material and the intermediate beautification result can be superimposed in the form of layers.
  • for example, the layer of the preset texture material can be superimposed above the layer of the intermediate beautification result to enhance the beautification effect.
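The blend mode for the texture material is not pinned down by the text; the sketch below uses a plain alpha composite as one plausible choice, with `texture_alpha` as the material's own opacity map (an assumption).

```python
def add_texture(intermediate, texture_rgb, texture_alpha):
    """Composite a preset texture material above the intermediate result.

    All inputs are float arrays in [0, 1]: `intermediate` and `texture_rgb`
    are (H, W, 3), and `texture_alpha` is the material's (H, W) opacity.
    """
    a = texture_alpha[..., None]
    return intermediate * (1.0 - a) + texture_rgb * a
```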
  • a target user image in which a preset area of a target part in the user image is beautified can be obtained.
  • Fig. 8 shows a schematic diagram of a user image according to an embodiment of the present disclosure.
  • FIGS. 9 to 14 show the superimposition effects obtained by superimposing the multiple layers to which the multiple beautification results belong in a preset order (to protect the subjects in the images, some parts of the faces in each figure are mosaicked).
  • FIG. 9 shows the first superimposition effect, obtained by superimposing the beautification result of the base eye shadow area;
  • FIG. 10 shows the second superimposition effect, obtained by superimposing the beautification result of the base lower eye shadow area on the basis of FIG. 9;
  • FIG. 11 shows the third superimposition effect, obtained by superimposing the beautification result of the inner eye corner area on the basis of FIG. 10;
  • FIG. 12 shows the fourth superimposition effect, obtained by superimposing the beautification result of the outer eye corner area on the basis of FIG. 11;
  • FIG. 13 shows the fifth superimposition effect, obtained by superimposing the beautification result of the upper eyelid area on the basis of FIG. 12;
  • FIG. 14 shows the sixth superimposition effect, obtained by superimposing the beautification result of the outer upper eye shadow area on the basis of FIG. 13.
  • although the change of the eye shadow effect is not obvious in the figures, it can still be seen from the comparison that, through the method proposed in the embodiments of the present disclosure, the beautification results of different target areas can be superimposed to obtain a more realistic and natural eye shadow rendering effect.
  • FIG. 15 shows a block diagram of an image processing device according to an embodiment of the present disclosure.
  • the image processing device 20 may include:
  • the determination module 21 is configured to determine the target position of the target part in the user image in response to the beautification operation on the user image.
  • the area determination module 22 is configured to determine multiple target areas to be beautified in the user image based on the target position, wherein the multiple target areas belong to the preset area range of the target part.
  • the beautification module 23 is configured to, according to the beautification parameters in the beautification operation, perform beautification processing matching the target area on each of the multiple target areas to obtain multiple beautification results.
  • the generating module 24 is configured to generate target user images according to multiple beautification results.
  • the area determination module is configured to: copy the user image to multiple layers respectively; traverse the multiple layers, taking the traversed layer as the target layer; and, in the target layer, determine N target areas to be beautified in the user image according to the target position, where N is a positive integer smaller than the number of the multiple target areas.
  • the area determination module is further configured to: obtain the preset positional relationship between the N target areas and the target part; and, in the target layer, centered on the target position, perform area extension according to the preset positional relationship to obtain the N target areas.
  • multiple layers are arranged in a preset order, and the preset order matches the beautification execution order in the beautification operation.
  • the area determination module is configured to: take the target position as the center, and perform area extension in multiple preset directions respectively according to multiple preset extension ranges, to determine multiple target areas.
  • the beautification module is configured to: traverse the multiple target areas, taking the traversed target area as the area to be beautified, and acquire the original color of the user image in the area to be beautified; determine, based on the beautification parameters in the beautification operation, the beautification color corresponding to the area to be beautified; and, in the area to be beautified, fuse the original color with the beautification color to obtain the beautification result of the area to be beautified.
  • the multiple beautification results respectively belong to multiple layers, the multiple layers are arranged in a preset order, and the preset order matches the beautification execution order in the beautification operation; the generation module is configured to: superimpose, according to the preset order, the multiple layers to which the multiple beautification results belong to obtain the target beautification result; and fuse the target beautification result with the user image to obtain the target user image.
  • the generation module is further configured to: superimpose, according to the preset order, the multiple layers to which the multiple beautification results belong to obtain an intermediate beautification result; and fuse the intermediate beautification result with a preset texture material to obtain the target beautification result.
  • the target part includes an eye part; the beautification operation includes an eye shadow rendering operation; and the multiple target areas include at least one of a base eye shadow area, a base lower eye shadow area, an upper eyelid area, an outer eye corner area, an inner eye corner area, or an outer upper eye shadow area.
  • This disclosure relates to the field of augmented reality.
  • by acquiring image information of a target object in the real environment and then using various vision-related algorithms to detect or identify relevant features, states, and attributes of the target object, an AR effect combining the virtual and the real that matches a specific application can be obtained.
  • the target object may involve faces, limbs, gestures, actions, etc. related to the human body, or markers and markers related to objects, or sand tables, display areas or display items related to venues or places.
  • Vision-related algorithms can involve visual positioning, SLAM, 3D reconstruction, image registration, background segmentation, object key point extraction and tracking, object pose or depth detection, etc.
  • specific applications can involve not only interactive scenes such as guided tours, navigation, explanation, reconstruction, and virtual-effect overlay and display related to real scenes or objects, but also special-effects processing related to people, such as makeup beautification, body beautification, special-effect display, and interactive scenarios such as virtual model display.
  • the relevant features, states and attributes of the target object can be detected or identified through the convolutional neural network.
  • the above-mentioned convolutional neural network is a network model obtained by performing model training based on a deep learning framework.
  • the disclosed application example proposes an image processing method, including the following process:
  • Face recognition is performed based on the user image, and the target position of the eye part in the user image is determined. According to the target position, the area is extended outward from the eye part to obtain mask images of multiple target areas.
  • mask images of six target areas are respectively obtained, including: the base eye shadow area, the base lower eye shadow area, the upper eyelid area, the outer corner area, the inner eye corner area, and the outer upper eye shadow area.
  • the mask image can match the position of each area in the user image, and the mask image also contains the transparency information of each pixel in the target area, so the mask image has a soft transition boundary, which can simulate the smudge effect of powder on the skin.
  • the mask images of the six target areas can belong to six layers respectively, marked a, b, c, d, e, and f, where layer a corresponds to the base eye shadow area, layer b corresponds to the base lower eye shadow area, layer c corresponds to the outer upper eye shadow area, layer d corresponds to the outer eye corner area, layer e corresponds to the upper eyelid area, and layer f corresponds to the inner eye corner area.
  • the layers are arranged in the order a to f from bottom to top; that is, layer a is the bottom layer and layer f is the top layer.
  • in each layer, the beautification color can be blended with the mask image in that layer in the normal blend mode.
  • the blended layers can be marked aa, bb, cc, dd, ee, and ff respectively; these blended layers are then blended with the user image in the multiply mode to obtain the beautification result of each target area, marked aaa, bbb, ccc, ddd, eee, and fff respectively.
  • the layers to which the beautification results belong are blended and superimposed from top to bottom in the order fff to aaa, to obtain the intermediate beautification result.
  • the order in which the layers to which the beautification results belong are superimposed cannot be changed.
  • the intermediate beautification result can be fused with preset texture materials, such as eye shadow loose-powder materials, to obtain the target beautification result.
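Pulling the application example together, the following non-authoritative sketch applies the six mask layers a..f in their fixed bottom-to-top order, using the normal mix of color with mask followed by a multiply into the user image as described above; the texture-material step is omitted (see the `add_texture` sketch earlier), and all helper conventions are assumptions.

```python
import numpy as np

def render_eye_shadow(user_img, masks, colors):
    """End-to-end sketch of the application example's layer pipeline.

    `masks` and `colors` are ordered like layers a..f (base eye shadow,
    base lower shadow, outer upper shadow, outer corner, upper eyelid,
    inner corner). Each color is combined with its mask (layers aa..ff),
    multiplied into the running image (aaa..fff), and applied in the
    fixed bottom-to-top order, which must not change.
    """
    out = user_img.astype(np.float32) / 255.0
    for mask, color in zip(masks, colors):            # a -> f, bottom to top
        a = mask[..., None]                           # per-pixel alpha
        tint = np.asarray(color, np.float32) / 255.0  # flat RGB color
        out = out * (1.0 - a) + (out * tint) * a      # masked multiply blend
    return np.clip(out * 255.0, 0.0, 255.0).astype(np.uint8)
```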
  • the image processing method proposed in the application example of the present disclosure can make the eye shadow effect more natural through multi-level partitioned rendering, and enables more parameters to be defined, such as different beautification colors for different target areas of the eye shadow and different transparencies for the mask images corresponding to different target areas. This makes the eye shadow effect more controllable, makes it convenient for users to customize beautification parameters according to their preferences, and also makes it convenient for institutions using the application example of the present disclosure to provide users with richer customization functions.
  • the writing order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
  • Embodiments of the present disclosure also provide a computer-readable storage medium, on which computer program instructions are stored, and the above-mentioned method is implemented when the computer program instructions are executed by a processor.
  • the computer readable storage medium may be a volatile computer readable storage medium or a nonvolatile computer readable storage medium.
  • An embodiment of the present disclosure also proposes an electronic device, including: a processor; and a memory for storing processor-executable instructions; where the processor is configured to execute the above method.
  • the above memory can be a volatile memory, such as a RAM; or a non-volatile memory, such as a ROM, a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or a combination of the above types of memory, and it provides instructions and data to the processor.
  • the aforementioned processor may be at least one of an ASIC, a DSP, a DSPD, a PLD, an FPGA, a CPU, a controller, a microcontroller, or a microprocessor. It can be understood that, for different devices, the electronic component used to implement the above processor function may also be something else, which is not specifically limited in the embodiments of the present disclosure.
  • Electronic devices may be provided as terminals, servers, or other forms of devices.
  • the embodiments of the present disclosure further provide a computer program, which implements the above method when the computer program is executed by a processor.
  • FIG. 16 is a block diagram of an electronic device 800 according to an embodiment of the present disclosure.
  • the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
  • electronic device 800 may include one or more of the following components: processing component 802, memory 804, power supply component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814 , and the communication component 816.
  • the processing component 802 generally controls the overall operations of the electronic device 800, such as those associated with display, telephone calls, data communications, camera operations, and recording operations.
  • the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the above method. Additionally, processing component 802 may include one or more modules that facilitate interaction between processing component 802 and other components. For example, processing component 802 may include a multimedia module to facilitate interaction between multimedia component 808 and processing component 802 .
  • the memory 804 is configured to store various types of data to support operations at the electronic device 800 . Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and the like.
  • the memory 804 can be implemented by any type of volatile or non-volatile storage device or their combination, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable Programmable Read Only Memory (EPROM), Programmable Read Only Memory (PROM), Read Only Memory (ROM), Magnetic Memory, Flash Memory, Magnetic or Optical Disk.
  • the power supply component 806 provides power to various components of the electronic device 800 .
  • Power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for electronic device 800 .
  • the multimedia component 808 includes a screen providing an output interface between the electronic device 800 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user.
  • the touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense a boundary of a touch or swipe action, but also detect duration and pressure associated with the touch or swipe action.
  • the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capability.
  • the audio component 810 is configured to output and/or input audio signals.
  • the audio component 810 includes a microphone (MIC), which is configured to receive external audio signals when the electronic device 800 is in operation modes, such as call mode, recording mode and voice recognition mode. Received audio signals may be further stored in memory 804 or sent via communication component 816 .
  • the audio component 810 also includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module, which may be a keyboard, a click wheel, a button, and the like. These buttons may include, but are not limited to: a home button, volume buttons, start button, and lock button.
  • Sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of electronic device 800 .
  • the sensor component 814 can detect the on/off state of the electronic device 800 and the relative positioning of components (for example, the display and keypad of the electronic device 800); the sensor component 814 can also detect a change in position of the electronic device 800 or one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and temperature changes of the electronic device 800.
  • Sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact.
  • Sensor assembly 814 may also include an optical sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
  • the communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices.
  • the electronic device 800 can access a wireless network based on communication standards, such as WiFi, 2G or 3G, or a combination thereof.
  • the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel.
  • the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wide Band (UWB) technology, Bluetooth (BT) technology and other technologies.
  • the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the methods described above.
  • a non-volatile computer-readable storage medium such as the memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to implement the above method.
  • FIG. 17 is a block diagram of an electronic device 1900 according to an embodiment of the present disclosure.
  • electronic device 1900 may be provided as a server.
  • electronic device 1900 includes processing component 1922 , which further includes one or more processors, and a memory resource represented by memory 1932 for storing instructions executable by processing component 1922 , such as application programs.
  • the application programs stored in memory 1932 may include one or more modules each corresponding to a set of instructions.
  • the processing component 1922 is configured to execute instructions to perform the above method.
  • Electronic device 1900 may also include a power supply component 1926 configured to perform power management of electronic device 1900, a wired or wireless network interface 1950 configured to connect electronic device 1900 to a network, and an input-output (I/O) interface 1958 .
  • the electronic device 1900 can operate based on an operating system stored in the memory 1932, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM or the like.
  • a non-transitory computer-readable storage medium such as the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to implement the above method.
  • the present disclosure can be a system, method and/or computer program product.
  • a computer program product may include a computer readable storage medium having computer readable program instructions thereon for causing a processor to implement various aspects of the present disclosure.
  • a computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device.
  • a computer readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • Computer-readable storage media include: portable computer diskettes, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), compact disc read-only memory (CD-ROM), digital versatile discs (DVD), memory sticks, floppy disks, mechanically encoded devices, such as punch cards or raised structures in grooves having instructions stored thereon, and any suitable combination of the above.
  • computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., pulses of light through fiber optic cables), or electrical signals transmitted through wires.
  • Computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or downloaded to an external computer or external storage device over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or a network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device.
  • Computer program instructions for performing the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages, such as Smalltalk or C++, and conventional procedural programming languages, such as the "C" language or similar programming languages.
  • Computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • electronic circuits, such as programmable logic circuits, field programmable gate arrays (FPGAs) or programmable logic arrays (PLAs), can be personalized by utilizing state information of the computer-readable program instructions, and these electronic circuits can execute the computer-readable program instructions to implement various aspects of the present disclosure.
  • These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, produce an apparatus for implementing the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer-readable program instructions may also be stored in a computer-readable storage medium, where the instructions cause computers, programmable data processing devices and/or other devices to work in a specific way, so that the computer-readable medium storing the instructions comprises an article of manufacture that includes instructions implementing various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • each block in a flowchart or block diagram may represent a module, a program segment, or a portion of an instruction, which contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified function or action, or may be implemented by a combination of dedicated hardware and computer instructions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium. The method comprises: in response to a beautification operation on a user image, determining a target position of a target part in the user image; determining, based on the target position, multiple target areas to be beautified in the user image, wherein the multiple target areas belong to a preset area range of the target part; performing, according to a beautification parameter in the beautification operation, beautification processing matching each of the multiple target areas to obtain multiple beautification results; and generating a target user image according to the multiple beautification results.
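The publication provides no reference implementation, so the flow summarized above is illustrated here only as a hypothetical sketch. In the Python fragment below, the helper names (detect_target_position, derive_target_areas, beautify_area), the disc-and-ring area shapes, and the mean-colour blending are all placeholder assumptions standing in for the disclosed detection and beautification steps; they show how per-area parameters and result fusion could fit together, not the actual algorithm of the application.

    # Hypothetical sketch only: every helper is a stand-in for a step named
    # in the abstract; the blending filter is a placeholder, not the
    # disclosed beautification algorithm.
    import numpy as np

    def detect_target_position(image):
        # Stand-in for locating the target part in the user image; a real
        # system would use keypoint detection. The image centre is returned
        # as a placeholder position.
        h, w = image.shape[:2]
        return h // 2, w // 2

    def derive_target_areas(position, shape, radius=20):
        # Derive multiple target areas within a preset range of the target
        # part: here an inner disc and a surrounding ring (assumed shapes).
        y, x = position
        yy, xx = np.ogrid[:shape[0], :shape[1]]
        dist2 = (yy - y) ** 2 + (xx - x) ** 2
        inner = dist2 <= radius ** 2
        ring = (dist2 > radius ** 2) & (dist2 <= (2 * radius) ** 2)
        return [inner, ring]

    def beautify_area(image, mask, strength):
        # Placeholder per-area processing: blend the masked pixels toward
        # their mean colour using the area's matching beautification parameter.
        out = image.astype(np.float32)
        out[mask] = (1.0 - strength) * out[mask] + strength * out[mask].mean(axis=0)
        return out.astype(np.uint8)

    def beautify(image, strengths=(0.6, 0.3)):
        # End-to-end flow mirroring the abstract: locate the target part,
        # derive multiple areas, beautify each with its matching parameter,
        # and fuse the per-area results into the target user image.
        position = detect_target_position(image)
        result = image.copy()
        areas = derive_target_areas(position, image.shape)
        for mask, strength in zip(areas, strengths):
            result = beautify_area(result, mask, strength)
        return result

    demo = np.full((128, 128, 3), 180, dtype=np.uint8)  # synthetic stand-in image
    print(beautify(demo).shape)  # (128, 128, 3)

Applying each beautification to the running result, rather than to the original image, is one simple way to fuse the per-area outputs; the disclosure's own fusion step may differ.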
PCT/CN2022/120090 2021-09-27 2022-09-21 Image processing apparatus and method, electronic device and storage medium WO2023045941A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111137672.3A 2021-09-27 2021-09-27 Image processing method and apparatus, electronic device and storage medium
CN202111137672.3 2021-09-27

Publications (1)

Publication Number Publication Date
WO2023045941A1 true WO2023045941A1 (fr) 2023-03-30

Family

ID=78797743

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/120090 WO2023045941A1 (fr) Image processing apparatus and method, electronic device and storage medium

Country Status (2)

Country Link
CN (1) CN113763286A (fr)
WO (1) WO2023045941A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113763286A (zh) 2021-09-27 2021-12-07 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device and storage medium
CN114463212A (zh) 2022-01-28 2022-05-10 北京大甜绵白糖科技有限公司 Image processing method and apparatus, electronic device and storage medium
CN115880168A (zh) 2022-09-30 2023-03-31 北京字跳网络技术有限公司 Image restoration method, apparatus, device, computer-readable storage medium and product

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107742274A (zh) * 2017-10-31 2018-02-27 广东欧珀移动通信有限公司 Image processing method and apparatus, computer-readable storage medium and electronic device
CN109543646A (zh) * 2018-11-30 2019-03-29 深圳市脸萌科技有限公司 Face image processing method and apparatus, electronic device and computer storage medium
CN112102159A (zh) * 2020-09-18 2020-12-18 广州虎牙科技有限公司 Human body beautification method and apparatus, electronic device and storage medium
CN112541955A (zh) * 2020-12-17 2021-03-23 维沃移动通信有限公司 Image processing method, apparatus and device
CN112801916A (zh) * 2021-02-23 2021-05-14 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device and storage medium
CN113421204A (zh) * 2021-07-09 2021-09-21 北京百度网讯科技有限公司 Image processing method and apparatus, electronic device and readable storage medium
CN113763286A (zh) * 2021-09-27 2021-12-07 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device and storage medium

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9508181B2 (en) * 2011-08-31 2016-11-29 Adobe Systems Incorporated Ordering and rendering buffers for complex scenes with cyclic dependency
US9342322B2 (en) * 2011-09-12 2016-05-17 Microsoft Technology Licensing, Llc System and method for layering using tile-based renderers
CN102622613B (zh) * 2011-12-16 2013-11-06 彭强 A hairstyle design method based on binocular positioning and face shape recognition
US9483853B2 (en) * 2012-05-23 2016-11-01 Glasses.Com Inc. Systems and methods to display rendered images
CN104537612A (zh) * 2014-08-05 2015-04-22 华南理工大学 An automatic skin beautification method for face images
CN104574306A (zh) * 2014-12-24 2015-04-29 掌赢信息科技(上海)有限公司 A face beautification method in instant video and electronic device
CN106023104B (zh) * 2016-05-16 2019-01-08 厦门美图之家科技有限公司 Image enhancement method and system for the eye area of a face, and photographing terminal
CN107358573A (zh) * 2017-06-16 2017-11-17 广东欧珀移动通信有限公司 Image beautification processing method and apparatus
CN109102467A (zh) * 2017-06-21 2018-12-28 北京小米移动软件有限公司 Picture processing method and apparatus
JP7003558B2 (ja) * 2017-10-12 2022-01-20 カシオ計算機株式会社 Image processing device, image processing method, and program
CN107808137A (zh) * 2017-10-31 2018-03-16 广东欧珀移动通信有限公司 Image processing method and apparatus, electronic device and computer-readable storage medium
CN109409319A (zh) * 2018-11-07 2019-03-01 北京旷视科技有限公司 A pet image beautification method, apparatus and storage medium
CN110111279B (zh) * 2019-05-05 2021-04-30 腾讯科技(深圳)有限公司 An image processing method, apparatus and terminal device
CN110458921B (zh) * 2019-08-05 2021-08-03 腾讯科技(深圳)有限公司 An image processing method, apparatus, terminal and storage medium
CN111640163B (zh) * 2020-06-03 2022-04-22 湖南工业大学 Image synthesis method and computer-readable storage medium
CN112348736B (zh) * 2020-10-12 2023-03-28 武汉斗鱼鱼乐网络科技有限公司 A method, storage medium, device and system for removing dark circles
CN112883821B (zh) * 2021-01-27 2024-02-20 维沃移动通信有限公司 Image processing method and apparatus, and electronic device
CN112766234B (zh) * 2021-02-23 2023-05-12 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device and storage medium
CN113240792B (zh) * 2021-04-29 2022-08-16 浙江大学 An image fusion generative face-swapping method based on face reconstruction

Also Published As

Publication number Publication date
CN113763286A (zh) 2021-12-07

Similar Documents

Publication Publication Date Title
WO2022179025A1 (fr) Image processing method and apparatus, electronic device and storage medium
WO2022179026A1 (fr) Image processing method and apparatus, electronic device and storage medium
WO2023045941A1 (fr) Image processing apparatus and method, electronic device and storage medium
WO2018153267A1 (fr) Group video session method and network device
CN113160094A (zh) Image processing method and apparatus, electronic device and storage medium
WO2017152673A1 (fr) Expression animation generation method and apparatus for a human face model
US20180182144A1 (en) Information processing apparatus, information processing method, and program
US20180182145A1 (en) Information processing apparatus, information processing method, and program
CN110991327A (zh) Interaction method and apparatus, electronic device and storage medium
US20210256672A1 (en) Method, electronic device and storage medium for processing image
WO2016197469A1 (fr) Method and apparatus for generating an unlocking interface, and electronic device
WO2023045979A1 (fr) Image processing method and apparatus, electronic device and storage medium
WO2023045207A1 (fr) Task processing method and apparatus, electronic device, storage medium and computer program
TWI752473B (zh) Image processing method and apparatus, electronic device and computer-readable storage medium
WO2023051356A1 (fr) Virtual object display method and apparatus, electronic device and storage medium
CN113822798B (zh) Generative adversarial network training method and apparatus, electronic device and storage medium
WO2023045950A1 (fr) Image processing method and apparatus, electronic device and storage medium
WO2023045946A1 (fr) Image processing method and apparatus, electronic device and storage medium
CN113570581A (zh) Image processing method and apparatus, electronic device and storage medium
WO2023142645A1 (fr) Image processing method and apparatus, electronic device, storage medium and computer program product
WO2023045961A1 (fr) Virtual object generation apparatus and method, electronic device and storage medium
JP2019512141A (ja) Face model editing method and apparatus
US11960653B2 (en) Controlling augmented reality effects through multi-modal human interaction
WO2022042160A1 (fr) Image processing method and apparatus
US20220270313A1 (en) Image processing method, electronic device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22871987

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE