WO2022077970A1 - Special effect addition method and apparatus (特效添加方法及装置) - Google Patents

Special effect addition method and apparatus

Info

Publication number
WO2022077970A1
Authority
WO
WIPO (PCT)
Prior art keywords
hair
region
area
image
color
Prior art date
Application number
PCT/CN2021/105513
Other languages
English (en)
French (fr)
Inventor
Wu Shanshan (武珊珊)
Zhao Songtao (赵松涛)
Original Assignee
Beijing Dajia Internet Information Technology Co., Ltd. (北京达佳互联信息技术有限公司)
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co., Ltd.
Publication of WO2022077970A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/40: Filling a planar surface by adding surface attributes, e.g. colour or texture
    • G06T5/00: Image enhancement or restoration
    • G06T5/20: Image enhancement or restoration using local operators
    • G06T5/30: Erosion or dilatation, e.g. thinning
    • G06T5/70: Denoising; Smoothing
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/11: Region-based segmentation
    • G06T7/90: Determination of colour characteristics
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30196: Human being; Person
    • G06T2207/30201: Face

Definitions

  • the present disclosure relates to the technical field of special effect addition, and in particular, to a special effect addition method, apparatus, electronic device, and storage medium.
  • the present disclosure provides a special effect adding method, device, electronic device and storage medium.
  • a method for adding special effects which is applied to an electronic device, and the method includes:
  • the area to be dyed is rendered in a preset color.
  • the determining the first hair region and the face region in the person image includes:
  • the semantic segmentation model is used to perform semantic segmentation processing on the input image
  • a first hair region and a face region in the person image are determined.
  • the determining a second hair region based on the first hair region and the face region includes:
  • determining, according to the first hair region and the face region, a first bangs region in the person image;
  • performing attenuation processing on the first bangs region to obtain a second bangs region;
  • the second hair region is determined based on the second bangs region and the first hair region.
  • the determining, according to the first hair region and the face region, the first bangs region in the person image includes:
  • An overlapping area between the expanded first hair area and the expanded face area is determined as the first bangs area in the person image.
  • the performing expansion processing on the first hair region and the face region respectively to obtain the expanded first hair region and the expanded face region includes:
  • determining a minimum rectangle covering the first hair region; querying an expansion coefficient corresponding to the length of the short side of the minimum rectangle; and using the expansion coefficient to perform expansion processing on the first hair region and the face region respectively, obtaining the expanded first hair region and the expanded face region.
  • the performing attenuation processing on the first bangs region to obtain a second bangs region includes:
  • acquiring the pixel mean value of the first bangs region; querying an attenuation coefficient corresponding to the pixel mean value; and reducing each pixel value in the first bangs region according to the attenuation coefficient to obtain the second bangs region.
  • the performing guided filtering based on the second hair region and the person image to determine the region to be dyed includes:
  • determining a target channel image corresponding to the color channel with the largest variance in the person image; adjusting the image contrast of the target channel image to obtain an adjusted image; and performing guided filtering processing on the second hair region using the adjusted image to obtain the region to be dyed.
  • the rendering the region to be dyed with a preset color includes:
  • obtaining a target rendering hair color corresponding to the special effect shooting mode, and performing color rendering on the region to be dyed based on the target rendering hair color; and/or
  • in response to a hair color selection instruction received at a hair color selection entry, obtaining the target rendering hair color corresponding to the instruction, and performing color rendering on the region to be dyed based on it.
  • an apparatus for adding special effects including:
  • a response unit configured to, in response to a hair color special effect selection instruction, enter a special effect shooting mode and acquire a person image;
  • a segmentation unit configured to determine a first hair region and a face region in the person image;
  • a determining unit configured to determine a second hair region based on the first hair region and the face region;
  • a guided filtering unit configured to perform guided filtering based on the second hair region and the person image to determine a region to be dyed;
  • a rendering unit configured to render the region to be dyed with a preset color.
  • the segmentation unit is specifically configured to input the person image into a semantic segmentation model, where the semantic segmentation model is used to perform semantic segmentation processing on the input image, and to determine the first hair region and the face region in the person image based on the semantic segmentation result output by the model.
  • the determining unit is specifically configured to determine a first bangs region in the person image according to the first hair region and the face region; perform attenuation processing on the first bangs region to obtain a second bangs region; and determine the second hair region based on the second bangs region and the first hair region.
  • the determining unit is specifically configured to perform expansion processing on the first hair region and the face region respectively to obtain an expanded first hair region and an expanded face region, and determine the overlapping area between the expanded first hair region and the expanded face region as the first bangs region in the person image.
  • the determining unit is specifically configured to determine a minimum rectangle covering the first hair region; query an expansion coefficient corresponding to the length of the short side of the minimum rectangle; and, using the expansion coefficient, perform expansion processing on the first hair region and the face region respectively to obtain the expanded first hair region and the expanded face region.
  • the determining unit is specifically configured to acquire the pixel mean value of the first bangs region; query an attenuation coefficient corresponding to the pixel mean value; and reduce each pixel value in the first bangs region according to the attenuation coefficient to obtain the second bangs region.
  • the guided filtering unit is specifically configured to determine a target channel image corresponding to the color channel with the largest variance in the person image; adjust the image contrast of the target channel image to obtain an adjusted image, where the image contrast of the adjusted image is greater than that of the target channel image; and perform guided filtering processing on the second hair region using the adjusted image to obtain the region to be dyed.
  • the rendering unit is specifically configured to obtain a target rendering hair color corresponding to the special effect shooting mode and perform color rendering on the region to be dyed based on it; and/or, in response to a hair color selection instruction received at the hair color selection entry, obtain the target rendering hair color corresponding to the instruction and perform color rendering on the region to be dyed based on it.
  • an electronic device including a memory and a processor, where the memory stores a computer program, and the processor, when executing the computer program, implements the special effect adding method described in the first aspect or any embodiment of the first aspect.
  • a storage medium having a computer program stored thereon, where the computer program, when executed by a processor, implements the special effect adding method according to the first aspect or any embodiment of the first aspect.
  • a computer program product including a computer program stored in a readable storage medium, where at least one processor of a device reads and executes the computer program from the readable storage medium, so that the device executes the special effect adding method described in any embodiment of the first aspect.
  • in the embodiments of the present disclosure, after entering the special effect shooting mode in response to the hair color special effect selection instruction, a person image is acquired; the first hair region and the face region in the person image are determined; a second hair region is determined based on the first hair region and the face region; guided filtering is performed based on the second hair region and the person image to determine the region to be dyed; and finally the region to be dyed is rendered with a preset color. In this way, missing boundary lines in the region to be dyed can be avoided, the region to be dyed retains good edge characteristics, and non-hair regions of the image to be processed are protected from the rendering process, so that the hair region in the person image is dyed more realistically and accurately.
  • Fig. 1 is a flow chart of a method for adding special effects according to an exemplary embodiment.
  • Fig. 2 is a flow chart of another method for adding special effects according to an exemplary embodiment.
  • Fig. 3 is a block diagram of an apparatus for adding special effects according to an exemplary embodiment.
  • Fig. 4 is an internal structure diagram of an electronic device according to an exemplary embodiment.
  • Fig. 1 is a flowchart of a method for adding special effects according to an exemplary embodiment, including the following steps.
  • the special effect adding method may be performed by an electronic device.
  • the electronic devices can be, but are not limited to, various personal computers, notebook computers, smart phones, tablet computers and portable wearable devices.
  • step S110 in response to a hair color special effect selection instruction, enter the special effect shooting mode and acquire a person image.
  • the hair color special effect selection instruction refers to an instruction by which the user directs the electronic device to enter the special effect shooting mode.
  • the special effect shooting mode refers to a mode in which a hair dyeing special effect is added to the captured image.
  • the person image refers to an image that includes the photographed person.
  • the electronic device, with image shooting software installed, may first display a shooting page, where the shooting page includes a hair color special effect selection entry.
  • the user can trigger the hair color special effect selection entry, thereby inputting the hair color special effect selection instruction to the electronic device.
  • in response to receiving the hair color special effect selection instruction, the electronic device enters the special effect shooting mode.
  • the electronic device then acquires an image including the person to be photographed, that is, a person image.
  • step S120 a first hair region and a face region in the person image are determined.
  • the electronic device in response to acquiring a person image, can perform semantic segmentation on the person image through a pre-trained semantic segmentation model, and determine the first hair region and face region in the person image.
  • the pre-trained semantic segmentation model is used to perform semantic segmentation processing on the input image.
  • the process specifically includes: the electronic device inputs the person image into the pre-trained semantic segmentation model, and determines the first hair region and the face region in the person image based on the semantic segmentation result output by the model.
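The mask-extraction part of this step can be sketched as follows. This is a minimal illustration that assumes the model outputs a per-pixel class map with hypothetical labels 0 = background, 1 = hair, 2 = face; the patent does not specify the model's output format.

```python
import numpy as np

# Hypothetical class labels; the patent does not specify the model's output.
BACKGROUND, HAIR, FACE = 0, 1, 2

def masks_from_segmentation(class_map):
    """Derive the first hair region and the face region (binary masks)
    from a per-pixel class map produced by a semantic segmentation model."""
    class_map = np.asarray(class_map)
    hair_mask = (class_map == HAIR).astype(np.uint8)
    face_mask = (class_map == FACE).astype(np.uint8)
    return hair_mask, face_mask
```

In practice the class map would typically be the argmax over the model's per-class logits.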
  • step S130 a second hair region is determined based on the first hair region and the human face region.
  • in response to determining the first hair region and the face region in the person image, the electronic device may determine the second hair region based on them. Specifically, the electronic device determines the first bangs region in the person image according to the first hair region and the face region, and then performs attenuation processing on the first bangs region to obtain the second bangs region.
  • the second bangs region may also be referred to as the attenuation-processed first bangs region.
  • the electronic device determines the second hair region based on the second bangs region and the first hair region. For example, the electronic device may adjust the bangs region in the first hair region based on the second bangs region to obtain the adjusted hair region as the second hair region.
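As a concrete sketch of this adjustment, the second hair region can be formed by replacing the bangs portion of the hair mask with its attenuated values. The masks are assumed to be soft (floats in [0, 1]), and the minimum-based combination rule is an assumption: the patent only says the bangs region of the first hair region is adjusted based on the second bangs region.

```python
import numpy as np

def second_hair_region(hair_mask, bangs_mask, attenuated_bangs):
    """Replace the bangs portion of the first hair region with the
    attenuated (second) bangs values. All arrays are floats in [0, 1]
    with the same shape; bangs_mask is 1 inside the first bangs region."""
    out = hair_mask.astype(float).copy()
    inside = bangs_mask > 0
    # Taking the minimum keeps the attenuated (lower) value inside the
    # bangs, so the dye fades out over the fringe instead of ending hard.
    out[inside] = np.minimum(out[inside], attenuated_bangs[inside])
    return out
```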
  • step S140 guided filtering is performed based on the second hair region and the person image to determine the region to be dyed.
  • the electronic device can use the person image to perform guided filtering processing on the second hair region, so that the edges of the filtered region become well defined; the electronic device then determines the guided-filtered second hair region as the region to be dyed.
  • step S150 the to-be-dyed area is rendered in a preset color.
  • in response to determining the region to be dyed in the person image, the electronic device may perform color adjustment processing on the region to be dyed in the manner corresponding to the hair color special effect selection instruction, obtaining a processed person image. Specifically, the electronic device may determine the target hair color according to the hair color special effect selection instruction, and then, based on the target hair color, adjust the color level of each pixel in the region to be dyed so that the hair color there matches the target hair color, thereby obtaining the processed person image.
  • according to the special effect adding method above, the electronic device enters the special effect shooting mode and obtains the person image in response to the hair color special effect selection instruction; determines the first hair region and the face region in the person image; determines the second hair region based on them; performs guided filtering based on the second hair region and the person image to determine the region to be dyed; and finally renders the region to be dyed with a preset color. In this way, missing boundary lines in the region to be dyed can be avoided, the region to be dyed retains good edge characteristics, and non-hair regions of the image to be processed are protected from the rendering process, so that the hair region in the person image is dyed more realistically and accurately.
  • determining the first bangs region in the character image according to the first hair region and the face region includes: performing expansion processing on the first hair region and the face region respectively to obtain the expanded first hair region and the expanded face area; determine the overlapping area between the expanded first hair area and the expanded face area as the first bangs area in the person image.
  • in determining the first bangs region according to the first hair region and the face region, the electronic device may perform expansion processing on the first hair region and the face region respectively to obtain the expanded first hair region and the expanded face region. Specifically, the electronic device may acquire the current hair thickness of the photographed person in the image to be processed, and then adaptively perform expansion processing on the first hair region and the face region based on that thickness.
  • the electronic device may determine the target expansion coefficient corresponding to the current hair thickness in the pre-established positive correlation between the hair thickness and the expansion coefficient. Then, based on the target expansion coefficient, the electronic device performs expansion processing on the first hair region and the human face region to obtain an expanded first hair region and an expanded human face region.
  • the electronic device determines an overlapping area between the expanded first hair area and the expanded face area, and uses the overlapping area as the first bangs area in the person image.
  • in the technical solution of this embodiment, the expanded first hair region and the expanded face region are obtained by performing expansion processing on the first hair region and the face region respectively, and the overlapping area between the expanded first hair region and the expanded face region is taken as the first bangs region in the person image, which improves the robustness of determining the first bangs region.
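The expansion-and-overlap step corresponds to morphological dilation followed by an intersection. A NumPy-only sketch (the square structuring element and the radius parameter are assumptions, not values from the patent):

```python
import numpy as np

def dilate(mask, radius):
    """Binary dilation with a (2*radius+1)^2 square structuring element,
    implemented with padded shifts so only NumPy is needed."""
    h, w = mask.shape
    padded = np.pad(mask, radius)
    out = np.zeros_like(mask)
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out = np.maximum(out, padded[dy:dy + h, dx:dx + w])
    return out

def first_bangs_region(hair_mask, face_mask, radius):
    """First bangs region = overlap of the expanded hair and face masks."""
    return dilate(hair_mask, radius) * dilate(face_mask, radius)
```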
  • performing expansion processing on the first hair region and the face region respectively to obtain the expanded first hair region and the expanded face region includes: determining a minimum rectangle covering the first hair region; querying the expansion coefficient corresponding to the length of the short side of the minimum rectangle; and using the expansion coefficient to expand the first hair region and the face region respectively, obtaining the expanded first hair region and the expanded face region.
  • specifically, the electronic device can determine, in the mask map of the hair region, the smallest rectangle that can cover the first hair region. The electronic device obtains the length of the short side of this minimum rectangle and, in the pre-established positive correlation between short-side length and expansion coefficient, determines the corresponding expansion coefficient. The electronic device then uses this expansion coefficient to perform expansion processing on the first hair region and the face region respectively, obtaining the expanded first hair region and the expanded face region.
  • the technical solution of this embodiment determines the minimum rectangle covering the first hair region, queries the expansion coefficient corresponding to the length of the rectangle's short side, and uses that coefficient to expand the first hair region and the face region, obtaining the expanded first hair region and the expanded face region. In this way, the first bangs region determined from the expanded regions adapts well to the thickness of the photographed person's hair, which improves the realism of the dyed-hair rendering result.
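The rectangle-and-lookup step can be sketched as below. The breakpoints in the lookup table are invented for illustration; the patent only requires a positive correlation between the short side of the minimum rectangle and the expansion coefficient.

```python
import numpy as np

# Hypothetical positive correlation between the short side of the minimum
# rectangle and the expansion coefficient; the breakpoints are invented.
EXPANSION_TABLE = [(64, 1), (128, 2), (256, 3), (float("inf"), 4)]

def expansion_coefficient(hair_mask):
    """Query the expansion coefficient from the short side of the minimum
    axis-aligned rectangle covering the first hair region."""
    ys, xs = np.nonzero(hair_mask)
    if ys.size == 0:
        return 0  # no hair region detected
    short_side = min(ys.max() - ys.min() + 1, xs.max() - xs.min() + 1)
    for upper_bound, coeff in EXPANSION_TABLE:
        if short_side <= upper_bound:
            return coeff
```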
  • performing attenuation processing on the first bangs region to obtain the second bangs region includes: obtaining the pixel mean value of the first bangs region; querying the attenuation coefficient corresponding to the pixel mean value; and reducing each pixel value in the first bangs region according to the attenuation coefficient to obtain the second bangs region.
  • the pixel values of the second bangs region, that is, of the attenuation-processed first bangs region, are smaller than the pixel values of the first bangs region in the mask image.
  • specifically, the electronic device can obtain the pixel value of the first bangs region and, according to its pixel mean value, query the corresponding attenuation coefficient in the positive correlation between pixel mean and attenuation coefficient; each pixel value in the first bangs region is then attenuated according to that coefficient to obtain the second bangs region, whose pixel values are smaller than those of the first bangs region in the mask image.
  • the pixel value of the first bangs region may be the pixel mean value of the first bangs region, the pixel variance value of the first bangs region, or the pixel median value of the first bangs region, which is not limited in this disclosure.
  • for example, the attenuation coefficient may be 0.5; when the pixel mean value of the first bangs region is 150, the attenuation coefficient may be 0.6; and so on.
  • the positive correlation between the pixel mean and the attenuation coefficient can be determined from experimental results.
  • the technical solution of this embodiment obtains the pixel mean value of the first bangs region, queries the attenuation coefficient corresponding to that mean, and attenuates each pixel value in the first bangs region according to the coefficient to obtain the second bangs region. Each pixel value can thus be adaptively attenuated according to the pixel distribution of the first bangs region, so that after the hair color special effect is added the dyeing of the bangs region does not look too stiff, improving the realism of the rendered hair color in the bangs region.
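A sketch of the attenuation step. The mapping from pixel mean to attenuation coefficient is an assumed linear positive correlation, calibrated only so that a mean of 150 yields the 0.6 mentioned above; the patent says the actual mapping is determined experimentally.

```python
import numpy as np

def attenuation_coefficient(pixel_mean):
    """Hypothetical positively correlated mapping from the pixel mean of
    the first bangs region to an attenuation coefficient in [0.4, 0.8];
    chosen so that a mean of 150 gives the 0.6 mentioned in the text."""
    return float(np.clip(0.3 + pixel_mean / 500.0, 0.4, 0.8))

def attenuate_bangs(bangs_pixels):
    """Reduce every pixel value in the first bangs region by the queried
    coefficient to obtain the second bangs region."""
    coeff = attenuation_coefficient(bangs_pixels.mean())
    return bangs_pixels * coeff, coeff
```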
  • performing guided filtering based on the second hair region and the person image to determine the region to be dyed includes: determining a target channel image corresponding to a color channel with the largest variance in the person image; adjusting the image contrast of the target channel image, An adjusted image is obtained; the second hair region is subjected to guided filtering processing using the adjusted image to obtain an area to be dyed.
  • the image contrast of the adjusted image is greater than that of the target channel image.
  • the electronic device performs guided filtering based on the second hair region and the person image to determine the region to be dyed. Specifically, the electronic device determines, among the three RGB channels of the image to be processed, the color channel with the largest variance, and takes the corresponding channel of the person image as the target channel image.
  • the electronic device adjusts the image contrast of the target channel image to obtain an adjusted image whose contrast is greater than that of the target channel image. Specifically, the electronic device culls the pixels of the target channel image that have specified pixel values, and then stretches the remaining pixel values to the full pixel value range, so that the minimum stretched value equals the minimum of the range and the maximum stretched value equals the maximum of the range.
  • the pixel value range may be 0-255, or it may be the pixel value range of the selected channel image. Before culling the pixels with the specified values, a histogram of the target channel image may be drawn.
  • the electronic device uses the adjusted image as the guide image and the second hair region as the guided image, performing guided filtering processing to obtain the region to be dyed.
  • the technical solution of this embodiment determines the target channel image corresponding to the color channel with the largest variance in the person image and adjusts its contrast to obtain a high-contrast adjusted image. This minimizes the amount of data the electronic device must process when using the adjusted image to guide-filter the second hair region, improving the real-time performance of the hair dyeing and rendering process.
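The contrast adjustment and guided filtering can be sketched as below. The guided filter follows the standard box-filter formulation (He et al.); the percentile-based culling, radius, and eps values are assumptions, not values from the patent.

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1)^2 window via integral images (edge-padded)."""
    p = np.pad(img, r, mode="edge")
    c = np.pad(np.cumsum(np.cumsum(p, 0), 1), ((1, 0), (1, 0)))
    k = 2 * r + 1
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def stretch_contrast(channel, lo_pct=1.0, hi_pct=99.0):
    """Cull extreme pixel values and stretch the rest to [0, 1],
    raising the image contrast of the target channel image."""
    lo, hi = np.percentile(channel, [lo_pct, hi_pct])
    return np.clip((channel - lo) / max(hi - lo, 1e-6), 0.0, 1.0)

def guided_filter(guide, src, r=4, eps=1e-3):
    """Standard grayscale guided filter: filter `src` (the second hair
    region) using `guide` (the contrast-adjusted channel image)."""
    mean_I, mean_p = box_mean(guide, r), box_mean(src, r)
    var_I = box_mean(guide * guide, r) - mean_I * mean_I
    cov_Ip = box_mean(guide * src, r) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return box_mean(a, r) * guide + box_mean(b, r)

def region_to_dye(person_rgb, second_hair_mask, r=4, eps=1e-3):
    """Pick the RGB channel with the largest variance, raise its
    contrast, and use it to guide-filter the hair mask."""
    channel = person_rgb[..., int(np.argmax(person_rgb.reshape(-1, 3).var(0)))]
    guide = stretch_contrast(channel)
    return guided_filter(guide, second_hair_mask.astype(float), r, eps)
```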
  • rendering the region to be dyed with a preset color includes: obtaining a target rendering hair color corresponding to the special effect shooting mode and performing color rendering on the region to be dyed based on it; and/or, in response to a hair color selection instruction received at the hair color selection entry, obtaining the target rendering hair color corresponding to the instruction and performing color rendering on the region to be dyed based on it.
  • the process specifically includes: the electronic device can acquire the target rendering hair color corresponding to the special effect shooting mode.
  • the special effect shooting mode interface currently displayed by the electronic device further includes a hair color selection entry.
  • the hair color selection entry is used for users to switch between different hair color rendering effects.
  • the electronic device may respond to the hair color selection instruction implemented by the user at the hair color selection entry, and then acquire the target rendering hair color corresponding to the hair color selection instruction.
  • the electronic device then performs color rendering on the region to be dyed based on the target rendering hair color. Specifically, the electronic device may perform toning processing on the region to be dyed based on the target rendering hair color to obtain the toned region, in which the hair color matches the target rendering hair color.
  • in rendering the region to be dyed with the preset color, the electronic device obtains the target rendering hair color corresponding to the special effect shooting mode, and/or, in response to the hair color selection instruction received at the hair color selection entry, obtains the target rendering hair color corresponding to that instruction, and performs color rendering on the region to be dyed based on it. The rendered person image can thus meet the user's needs for hair color special effects without re-processing the person image for special effect addition, improving the efficiency with which the electronic device adds special effects to images.
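As an illustrative sketch of the toning step, the target rendering hair color can be blended into the person image weighted by the soft region-to-dye mask. This luminance-preserving alpha blend is a stand-in assumption, since the patent only says color levels are adjusted until the hair matches the target; the blend formula and `strength` parameter are invented for illustration.

```python
import numpy as np

def render_hair_color(image, dye_mask, target_rgb, strength=0.7):
    """Blend the target rendering hair color into the image, weighted by
    the soft region-to-dye mask. `image` is float RGB in [0, 1]; the
    per-pixel luminance is kept so individual strands stay visible."""
    target = np.asarray(target_rgb, float).reshape(1, 1, 3)
    luma = image.mean(axis=2, keepdims=True)   # cheap brightness proxy
    tinted = target * luma                     # dye colored by brightness
    alpha = (dye_mask * strength)[..., None]
    return image * (1.0 - alpha) + tinted * alpha
```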
  • FIG. 2 is a flow chart of another special effect adding method according to an exemplary embodiment.
  • the special effect adding method can be executed by an electronic device.
  • the special effect adding method includes the following steps.
  • step S202 in response to the hair color special effect selection instruction, enter the special effect shooting mode and acquire a person image.
  • step S204 the person image is input into a semantic segmentation model; the semantic segmentation model is used to perform semantic segmentation processing on the input image.
  • step S206 a first hair region and a face region in the person image are determined based on the semantic segmentation result output by the semantic segmentation model.
  • step S208 expansion processing is performed on the first hair region and the face region respectively to obtain an expanded first hair region and an expanded face region.
  • step S210 an overlapping area between the expanded first hair area and the expanded face area is determined as the first bangs area in the person image.
  • step S212 attenuation processing is performed on the first bangs region to obtain a second bangs region.
  • step S214 the second hair region is determined based on the second bangs region and the first hair region.
  • step S216 the target channel image corresponding to the color channel with the largest variance in the person image is determined.
  • step S218 the image contrast of the target channel image is adjusted to obtain an adjusted image; wherein, the image contrast of the adjusted image is greater than the image contrast of the target channel image.
  • step S220 a guided filtering process is performed on the second hair region by using the adjusted image to obtain a to-be-dyed region.
  • step S222 the to-be-dyed area is rendered in a preset color.
  • although the steps in the flowcharts of FIG. 1 and FIG. 2 are shown in sequence following the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, the execution order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in FIG. 1 and FIG. 2 may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be executed at different times; their execution order is also not necessarily sequential, and they may be performed in turns or alternately with other steps or with at least part of the sub-steps or stages of other steps.
  • Fig. 3 is a block diagram of an apparatus for adding special effects according to an exemplary embodiment.
  • the device includes:
  • the response unit 310 is configured to, in response to a hair color special effect selection instruction, enter a special effect shooting mode and acquire a person image;
  • a segmentation unit 320 configured to determine the first hair region and the face region in the person image;
  • a determining unit 330 configured to determine a second hair region based on the first hair region and the face region;
  • a guided filtering unit 340 configured to perform guided filtering based on the second hair region and the person image to determine the region to be dyed;
  • the rendering unit 350 is configured to render the region to be dyed with a preset color.
  • In some embodiments, the segmentation unit 320 is specifically configured to input the person image into a semantic segmentation model, the semantic segmentation model being used to perform semantic segmentation on an input image, and to determine the first hair region and the face region in the person image based on a semantic segmentation result output by the semantic segmentation model.
  • In some embodiments, the determining unit 330 is specifically configured to determine a first bangs region in the person image according to the first hair region and the face region; perform attenuation processing on the first bangs region to obtain a second bangs region; and determine the second hair region based on the second bangs region and the first hair region.
  • In some embodiments, the determining unit 330 is specifically configured to perform dilation processing on the first hair region and the face region respectively, to obtain a dilated first hair region and a dilated face region, and to determine an overlapping region between the dilated first hair region and the dilated face region as the first bangs region in the person image.
  • In some embodiments, the determining unit 330 is specifically configured to determine a minimum rectangle covering the first hair region; query a dilation coefficient corresponding to the length of the short side of the minimum rectangle; and perform dilation processing on the first hair region and the face region respectively using the dilation coefficient, to obtain the dilated first hair region and the dilated face region.
  • In some embodiments, the determining unit 330 is specifically configured to acquire a pixel mean value of the first bangs region; query an attenuation coefficient corresponding to the pixel mean value; and reduce each pixel value in the first bangs region according to the attenuation coefficient, to obtain the second bangs region.
  • In some embodiments, the guided filtering unit 340 is specifically configured to determine a target channel image corresponding to the color channel with the largest variance in the person image; adjust the image contrast of the target channel image to obtain an adjusted image, wherein the image contrast of the adjusted image is greater than that of the target channel image; and perform guided filtering on the second hair region using the adjusted image, to obtain the region to be dyed.
  • In some embodiments, the rendering unit 350 is specifically configured to acquire a target rendering hair color corresponding to the special effect shooting mode and perform color rendering on the region to be dyed based on the target rendering hair color; and/or respond to a hair color selection instruction applied to a hair color selection entry, acquire a target rendering hair color corresponding to the hair color selection instruction, and perform color rendering on the region to be dyed based on the target rendering hair color.
  • FIG. 4 is a block diagram of a device 400 for performing a special effect adding method according to an exemplary embodiment.
  • device 400 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, fitness device, personal digital assistant, or the like.
  • device 400 may include one or more of the following components: processing component 402, memory 404, power component 406, multimedia component 408, audio component 410, input/output (I/O) interface 412, sensor component 414, and Communication component 416 .
  • Processing component 402 generally controls the overall operation of device 400, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 402 may include one or more processors 420 to execute instructions to perform all or some of the steps of the methods described above.
  • processing component 402 may include one or more modules that facilitate interaction between processing component 402 and other components.
  • processing component 402 may include a multimedia module to facilitate interaction between multimedia component 408 and processing component 402.
  • Memory 404 is configured to store various types of data to support operation at device 400. Examples of such data include instructions for any application or method operating on device 400, contact data, phonebook data, messages, pictures, videos, and the like. Memory 404 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • Power component 406 provides power to various components of device 400 .
  • Power supply components 406 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to device 400 .
  • Multimedia component 408 includes screens that provide an output interface between the device 400 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user.
  • the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundaries of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action.
  • multimedia component 408 includes a front-facing camera and/or a rear-facing camera. When the device 400 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data.
  • Each of the front and rear cameras may be a fixed optical lens system or have focusing and optical zoom capability.
  • Audio component 410 is configured to output and/or input audio signals.
  • audio component 410 includes a microphone (MIC) that is configured to receive external audio signals when device 400 is in operating modes, such as call mode, recording mode, and voice recognition mode.
  • the received audio signal may be further stored in memory 404 or transmitted via communication component 416 .
  • audio component 410 also includes a speaker for outputting audio signals.
  • the I/O interface 412 provides an interface between the processing component 402 and a peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to: home button, volume buttons, start button, and lock button.
  • Sensor assembly 414 includes one or more sensors for providing status assessments of various aspects of device 400 .
  • The sensor component 414 can detect the open/closed state of the device 400 and the relative positioning of components (for example, the display and keypad of the device 400); the sensor component 414 can also detect a change in position of the device 400 or of a component of the device 400, the presence or absence of user contact with the device 400, the orientation or acceleration/deceleration of the device 400, and temperature changes of the device 400.
  • Sensor assembly 414 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact.
  • Sensor assembly 414 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor assembly 414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 416 is configured to facilitate wired or wireless communication between device 400 and other devices.
  • Device 400 may access wireless networks based on communication standards, such as WiFi, carrier networks (eg, 2G, 3G, 4G, or 5G), or a combination thereof.
  • the communication component 416 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 416 also includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • Device 400 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
  • non-transitory computer readable storage medium including instructions, such as memory 404 including instructions, executable by processor 420 of device 400 to perform the above method.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A special effect adding method and apparatus. The method comprises: in response to a hair-color special effect selection instruction, acquiring a person image after entering a special effect shooting mode (S110); determining a first hair region and a face region in the person image (S120); determining a second hair region based on the first hair region and the face region (S130); performing guided filtering based on the second hair region and the person image to determine a region to be dyed (S140); and rendering the region to be dyed in a preset color (S150).

Description

Special effect adding method and apparatus
This application claims priority to Chinese Patent Application No. 202011110352.4, filed with the China National Intellectual Property Administration on October 16, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the technical field of special effect addition, and in particular to a special effect adding method and apparatus, an electronic device, and a storage medium.
Background
As the photographing capabilities of smartphones keep improving, more and more people use smartphones to take photos and videos to record memorable moments in their lives.
When shooting videos or photos with a smartphone, users often use various shooting applications installed on the smartphone to add special effects to the captured images, for example adding a hair-dyeing effect to a person image.
Summary
The present disclosure provides a special effect adding method and apparatus, an electronic device, and a storage medium.
According to a first aspect of the embodiments of the present disclosure, a special effect adding method applied to an electronic device is provided, the method including:
in response to a hair-color special effect selection instruction, entering a special effect shooting mode and acquiring a person image;
determining a first hair region and a face region in the person image;
determining a second hair region based on the first hair region and the face region;
performing guided filtering based on the second hair region and the person image to determine a region to be dyed;
rendering the region to be dyed in a preset color.
In some embodiments, determining the first hair region and the face region in the person image includes:
inputting the person image into a semantic segmentation model, the semantic segmentation model being used to perform semantic segmentation on an input image;
determining the first hair region and the face region in the person image based on a semantic segmentation result output by the semantic segmentation model.
In some embodiments, determining the second hair region based on the first hair region and the face region includes:
determining a first bangs region in the person image according to the first hair region and the face region;
performing attenuation processing on the first bangs region to obtain a second bangs region;
determining the second hair region based on the second bangs region and the first hair region.
In some embodiments, determining the first bangs region in the person image according to the first hair region and the face region includes:
performing dilation processing on the first hair region and the face region respectively, to obtain a dilated first hair region and a dilated face region;
determining an overlapping region between the dilated first hair region and the dilated face region as the first bangs region in the person image.
In some embodiments, performing dilation processing on the first hair region and the face region respectively to obtain the dilated first hair region and the dilated face region includes:
determining a minimum rectangle covering the first hair region;
querying a dilation coefficient corresponding to the length of the short side of the minimum rectangle;
performing dilation processing on the first hair region and the face region respectively using the dilation coefficient, to obtain the dilated first hair region and the dilated face region.
In some embodiments, performing attenuation processing on the first bangs region to obtain the second bangs region includes:
acquiring a pixel mean value of the first bangs region;
querying an attenuation coefficient corresponding to the pixel mean value;
reducing each pixel value in the first bangs region according to the attenuation coefficient, to obtain the second bangs region.
In some embodiments, performing guided filtering based on the second hair region and the person image to determine the region to be dyed includes:
determining a target channel image corresponding to the color channel with the largest variance in the person image;
adjusting the image contrast of the target channel image to obtain an adjusted image, wherein the image contrast of the adjusted image is greater than the image contrast of the target channel image;
performing guided filtering on the second hair region using the adjusted image, to obtain the region to be dyed.
In some embodiments, rendering the region to be dyed in the preset color includes:
acquiring a target rendering hair color corresponding to the special effect shooting mode;
performing color rendering on the region to be dyed based on the target rendering hair color;
and/or,
responding to a hair color selection instruction applied to a hair color selection entry;
acquiring a target rendering hair color corresponding to the hair color selection instruction;
performing color rendering on the region to be dyed based on the target rendering hair color.
According to a second aspect of the embodiments of the present disclosure, a special effect adding apparatus is provided, including:
a response unit configured to enter a special effect shooting mode and acquire a person image in response to a hair-color special effect selection instruction;
a segmentation unit configured to determine a first hair region and a face region in the person image;
a determining unit configured to determine a second hair region based on the first hair region and the face region;
a guided filtering unit configured to perform guided filtering based on the second hair region and the person image to determine a region to be dyed;
a rendering unit configured to render the region to be dyed in a preset color.
In some embodiments, the segmentation unit is specifically configured to input the person image into a semantic segmentation model, the semantic segmentation model being used to perform semantic segmentation on an input image; and determine the first hair region and the face region in the person image based on a semantic segmentation result output by the semantic segmentation model.
In some embodiments, the determining unit is specifically configured to determine a first bangs region in the person image according to the first hair region and the face region; perform attenuation processing on the first bangs region to obtain a second bangs region; and determine the second hair region based on the second bangs region and the first hair region.
In some embodiments, the determining unit is specifically configured to perform dilation processing on the first hair region and the face region respectively, to obtain a dilated first hair region and a dilated face region; and determine an overlapping region between the dilated first hair region and the dilated face region as the first bangs region in the person image.
In some embodiments, the determining unit is specifically configured to determine a minimum rectangle covering the first hair region; query a dilation coefficient corresponding to the length of the short side of the minimum rectangle; and perform dilation processing on the first hair region and the face region respectively using the dilation coefficient, to obtain the dilated first hair region and the dilated face region.
In some embodiments, the determining unit is specifically configured to acquire a pixel mean value of the first bangs region; query an attenuation coefficient corresponding to the pixel mean value; and reduce each pixel value in the first bangs region according to the attenuation coefficient, to obtain the second bangs region.
In some embodiments, the guided filtering unit is specifically configured to determine a target channel image corresponding to the color channel with the largest variance in the person image; adjust the image contrast of the target channel image to obtain an adjusted image, wherein the image contrast of the adjusted image is greater than the image contrast of the target channel image; and perform guided filtering on the second hair region using the adjusted image, to obtain the region to be dyed.
In some embodiments, the rendering unit is specifically configured to acquire a target rendering hair color corresponding to the special effect shooting mode, and perform color rendering on the region to be dyed based on the target rendering hair color; and/or respond to a hair color selection instruction applied to a hair color selection entry, acquire a target rendering hair color corresponding to the hair color selection instruction, and perform color rendering on the region to be dyed based on the target rendering hair color.
According to a third aspect of the embodiments of the present disclosure, an electronic device is provided, including a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the special effect adding method according to the first aspect or any embodiment of the first aspect.
According to a fourth aspect of the embodiments of the present disclosure, a storage medium is provided, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the special effect adding method according to the first aspect or any embodiment of the first aspect.
According to a fifth aspect of the embodiments of the present disclosure, a computer program product is provided, the program product including a computer program stored in a readable storage medium, wherein at least one processor of a device reads and executes the computer program from the readable storage medium, causing the device to perform the special effect adding method according to any embodiment of the first aspect.
In the embodiments of the present disclosure, a person image is acquired after entering the special effect shooting mode in response to a hair-color special effect selection instruction; a first hair region and a face region in the person image are determined; a second hair region is determined based on the first hair region and the face region; guided filtering is performed based on the second hair region and the person image to determine a region to be dyed; and finally, the region to be dyed is rendered in a preset color. In this way, possible missing boundary lines of the region to be dyed can be avoided, the region to be dyed is guaranteed to have good edge characteristics, non-hair regions of the image to be processed are prevented from being affected by the rendering, and the hair region in the person image is dyed more realistically and accurately.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief Description of the Drawings
Fig. 1 is a flowchart of a special effect adding method according to an exemplary embodiment.
Fig. 2 is a flowchart of another special effect adding method according to an exemplary embodiment.
Fig. 3 is a block diagram of a special effect adding apparatus according to an exemplary embodiment.
Fig. 4 is an internal structure diagram of an electronic device according to an exemplary embodiment.
Detailed Description
To enable those of ordinary skill in the art to better understand the technical solutions of the present disclosure, the technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings.
It should be noted that the terms "first", "second", and the like in the specification, claims, and drawings of the present disclosure are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence. It should be understood that data so used may be interchanged under appropriate circumstances, so that the embodiments of the present disclosure described herein can be implemented in orders other than those illustrated or described herein. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
Fig. 1 is a flowchart of a special effect adding method according to an exemplary embodiment, which includes the following steps. The special effect adding method may be performed by an electronic device. In practical applications, the electronic device may be, but is not limited to, various personal computers, laptops, smartphones, tablets, and portable wearable devices.
In step S110, in response to a hair-color special effect selection instruction, a special effect shooting mode is entered and a person image is acquired.
The hair-color special effect selection instruction may refer to an instruction by which the user selects the electronic device to enter the special effect shooting mode.
The special effect shooting mode may refer to a mode in which a hair-dyeing effect is added to a captured image.
The person image may refer to an image including a photographed person.
In a specific implementation, an electronic device with image shooting software installed may first display a shooting page, where the shooting page includes a hair-color special effect selection entry. In practical applications, the user may trigger the hair-color special effect selection entry, thereby inputting the hair-color special effect selection instruction into the electronic device. In response to receiving the hair-color special effect selection instruction, the electronic device enters the special effect shooting mode. When the electronic device has successfully entered the special effect shooting mode, the electronic device acquires an image including the photographed person, namely the person image.
In step S120, a first hair region and a face region in the person image are determined.
In a specific implementation, in response to acquiring the person image, the electronic device may perform semantic segmentation on the person image through a pre-trained semantic segmentation model to determine the first hair region and the face region in the person image.
The pre-trained semantic segmentation model is used to perform semantic segmentation on an input image.
Specifically, the process in which the electronic device determines the first hair region and the face region in the person image includes: the electronic device may input the person image into the pre-trained semantic segmentation model; the electronic device obtains the semantic segmentation result output by the pre-trained semantic segmentation model, and determines the first hair region and the face region in the person image based on that semantic segmentation result.
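The mask-extraction step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the segmentation model itself is treated as a black box, and the class ids `HAIR` and `FACE` are hypothetical, since the document does not fix a label scheme.

```python
import numpy as np

HAIR, FACE = 1, 2  # hypothetical class ids for the segmentation output

def masks_from_labels(labels):
    """Split a per-pixel label map (H, W) into binary hair and face masks."""
    hair_mask = (labels == HAIR).astype(np.uint8)
    face_mask = (labels == FACE).astype(np.uint8)
    return hair_mask, face_mask

# Stand-in for a real segmentation model's output on a tiny 4x4 image.
labels = np.array([
    [1, 1, 0, 0],
    [1, 2, 2, 0],
    [0, 2, 2, 0],
    [0, 0, 0, 0],
])
hair_mask, face_mask = masks_from_labels(labels)
```

In a real pipeline, `labels` would come from the pre-trained semantic segmentation model applied to the person image; everything downstream (dilation, attenuation, guided filtering) operates on these binary masks.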
In step S130, a second hair region is determined based on the first hair region and the face region.
In a specific implementation, in response to the electronic device determining the first hair region and the face region in the person image, the electronic device may determine the second hair region based on the first hair region and the face region. Specifically, the process includes: the electronic device may determine a first bangs region in the person image according to the first hair region and the face region, and perform attenuation processing on the first bangs region to obtain a second bangs region.
The second bangs region may also be referred to as the attenuated first bangs region.
The electronic device determines the second hair region based on the second bangs region and the first hair region. For example, the electronic device may adjust the bangs region in the first hair region based on the second bangs region to obtain an adjusted hair region as the second hair region.
In step S140, guided filtering is performed based on the second hair region and the person image to determine a region to be dyed.
In a specific implementation, the electronic device may perform guided filtering on the second hair region using the person image, so that the edge features of the resulting filtered second hair region are distinct; the electronic device determines the filtered second hair region as the region to be dyed.
In step S150, the region to be dyed is rendered in a preset color.
In a specific implementation, in response to the electronic device determining the region to be dyed in the person image, the electronic device may perform color adjustment on the region to be dyed in a manner corresponding to the hair-color special effect selection instruction, to obtain a processed person image. Specifically, the electronic device may determine a target hair color according to the hair-color special effect selection instruction, and adjust the color level of each pixel in the region to be dyed based on the target hair color, so that the hair color of the region to be dyed matches the target hair color, thereby obtaining the processed person image.
In the above special effect adding method, a special effect shooting mode is entered and a person image is acquired in response to a hair-color special effect selection instruction; a first hair region and a face region in the person image are determined; a second hair region is determined based on the first hair region and the face region; guided filtering is performed based on the second hair region and the person image to determine a region to be dyed; and finally, the region to be dyed is rendered in a preset color. In this way, possible missing boundary lines of the region to be dyed can be avoided, the region to be dyed is guaranteed to have good edge characteristics, non-hair regions of the image to be processed are prevented from being affected by the rendering, and the hair region in the person image is dyed more realistically and accurately.
In some embodiments, determining the first bangs region in the person image according to the first hair region and the face region includes: performing dilation processing on the first hair region and the face region respectively, to obtain a dilated first hair region and a dilated face region; and determining an overlapping region between the dilated first hair region and the dilated face region as the first bangs region in the person image.
In a specific implementation, the process in which the electronic device determines the first bangs region according to the first hair region and the face region includes: the electronic device may perform dilation processing on the first hair region and the face region respectively, to obtain the dilated first hair region and the dilated face region. Specifically, the electronic device may acquire the current hair thickness of the photographed person in the image to be processed, and then adaptively dilate the first hair region and the face region based on the current hair thickness.
For example, the electronic device may determine a target dilation coefficient corresponding to the current hair thickness from a pre-established positive correlation between hair thickness and dilation coefficient, and then dilate the first hair region and the face region based on the target dilation coefficient, to obtain the dilated first hair region and the dilated face region.
The electronic device determines the overlapping region between the dilated first hair region and the dilated face region, and takes this overlapping region as the first bangs region in the person image.
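The dilate-and-intersect step can be sketched in pure numpy as follows. This is a simplified illustration: a square structuring element and the `radius` parameter are assumptions (the document leaves the dilation kernel unspecified), and a production implementation would more likely use a library morphology routine.

```python
import numpy as np

def dilate(mask, radius):
    """Binary dilation with a (2*radius+1)-square structuring element."""
    h, w = mask.shape
    padded = np.pad(mask, radius)
    out = np.zeros_like(mask)
    # OR together every shift of the mask within the square window.
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out |= padded[dy:dy + h, dx:dx + w]
    return out

def bangs_region(hair, face, radius=1):
    """Overlap of the dilated hair and face masks (steps S208-S210)."""
    return dilate(hair, radius) & dilate(face, radius)

hair = np.zeros((4, 4), dtype=np.uint8)
hair[0, :] = 1   # hair band at the top of the frame
face = np.zeros((4, 4), dtype=np.uint8)
face[2:, :] = 1  # face below, one empty row between them
bangs = bangs_region(hair, face, radius=1)  # the row where they meet
```

After dilation by one pixel, the two masks overlap exactly on the gap row, which is what the patent treats as the bangs (fringe) region.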
In the technical solution of the embodiments of the present application, the first hair region and the face region are dilated respectively to obtain a dilated first hair region and a dilated face region, and the overlapping region between them is taken as the first bangs region in the person image; in this way, the robustness of the process of determining the first bangs region can be improved.
In some embodiments, performing dilation processing on the first hair region and the face region respectively to obtain the dilated first hair region and the dilated face region includes: determining a minimum rectangle covering the first hair region; querying a dilation coefficient corresponding to the length of the short side of the minimum rectangle; and performing dilation processing on the first hair region and the face region respectively using the dilation coefficient, to obtain the dilated first hair region and the dilated face region.
In a specific implementation, this process includes: the electronic device may determine, in the mask map of the hair region, the minimum rectangle that can cover the first hair region. The electronic device acquires the length of the short side of the minimum rectangle and determines, in a pre-established positive correlation between short-side length and dilation coefficient, the dilation coefficient corresponding to the short-side length of the minimum rectangle. Using this dilation coefficient, the electronic device performs dilation processing on the first hair region and the face region respectively, to obtain the dilated first hair region and the dilated face region.
In the technical solution of the embodiments of the present application, a minimum rectangle covering the first hair region is determined, the dilation coefficient corresponding to the length of its short side is queried, and the first hair region and the face region are dilated respectively using that coefficient; in this way, the first bangs region subsequently determined from the dilated first hair region and the dilated face region adapts well to the hair thickness of the photographed person, improving the realism of the hair-dyeing rendering result.
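The short-side lookup can be sketched as follows. Two simplifications are assumed: the "minimum rectangle" is taken as the axis-aligned bounding box (the patent could equally mean a rotated minimum-area rectangle), and the threshold table mapping short-side length to dilation radius is invented for illustration, since the document only states that the coefficient grows with the short side.

```python
import numpy as np

# Hypothetical lookup: short-side length (px) -> dilation radius.
DILATION_TABLE = [(0, 1), (100, 2), (200, 3), (400, 5)]

def min_rect_short_side(mask):
    """Short side of the axis-aligned bounding rectangle of a binary mask."""
    ys, xs = np.nonzero(mask)
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    return int(min(height, width))

def dilation_radius(short_side):
    """Pick the radius for the largest threshold not exceeding short_side."""
    radius = DILATION_TABLE[0][1]
    for threshold, r in DILATION_TABLE:
        if short_side >= threshold:
            radius = r
    return radius

mask = np.zeros((300, 300), dtype=np.uint8)
mask[10:160, 20:140] = 1          # hair blob with a 150 x 120 bounding box
side = min_rect_short_side(mask)  # short side is 120
radius = dilation_radius(side)    # 2 under the assumed table
```

Because the radius scales with the hair mask's extent, a close-up portrait (large hair region) gets a proportionally wider dilation band than a distant subject.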
In some embodiments, performing attenuation processing on the first bangs region to obtain the second bangs region includes: acquiring the pixel mean value of the bangs region; querying the attenuation coefficient corresponding to the pixel mean value; and reducing each pixel value in the first bangs region according to the attenuation coefficient, to obtain the second bangs region.
The pixel values of the attenuated first bangs region in the second bangs region are smaller than the pixel values of the first bangs region in the mask map.
In a specific implementation, the process in which the electronic device attenuates the first bangs region to obtain the second bangs region includes: the electronic device may acquire the pixel values of the first bangs region and, according to the pixel mean of those values, query the attenuation coefficient corresponding to the pixel mean in the positive correlation between pixel mean and attenuation coefficient; each pixel value in the first bangs region is attenuated according to the attenuation coefficient, to obtain the second bangs region, where the pixel values of the attenuated first bangs region in the second bangs region are smaller than the pixel values of the first bangs region in the mask map.
The pixel value of the first bangs region may be the pixel mean of the first bangs region, the pixel variance of the first bangs region, or the pixel median of the first bangs region, which is not specifically limited in the present disclosure. For example, when the pixel mean of the first bangs region is 120, the attenuation coefficient may be 0.5; when the pixel mean of the first bangs region is 150, the attenuation coefficient may be 0.6; and so on. The positive correlation between pixel mean and attenuation coefficient may be determined from experimental results.
In the technical solution of the embodiments of the present application, the pixel mean of the first bangs region is acquired, the attenuation coefficient corresponding to that mean is queried, and each pixel value in the first bangs region is attenuated according to the coefficient to obtain the second bangs region; in this way, the pixel values of the first bangs region can be attenuated adaptively according to its pixel distribution, so that the dyeing effect of the bangs region after the hair-color effect is added is not overly harsh, improving the realism of the rendered hair color in the bangs region.
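The attenuation step can be sketched as below. The table reuses the document's own examples (mean 120 → 0.5, mean 150 → 0.6); the entry for low means is an assumption added to make the lookup total, since the exact correlation is said to be fixed experimentally.

```python
import numpy as np

# Pixel mean -> attenuation coefficient; the 120/150 rows come from the
# document's examples, the 0 row is an assumed fallback.
ATTENUATION_TABLE = [(0, 0.4), (120, 0.5), (150, 0.6)]

def attenuate_bangs(bangs):
    """Scale every pixel of the first bangs region by a coefficient
    looked up from the region's pixel mean (step S212)."""
    mean = float(bangs.mean())
    coeff = ATTENUATION_TABLE[0][1]
    for threshold, c in ATTENUATION_TABLE:
        if mean >= threshold:
            coeff = c
    return (bangs.astype(np.float32) * coeff).astype(np.uint8)

bangs = np.full((2, 2), 120, dtype=np.uint8)
second = attenuate_bangs(bangs)  # mean 120 -> coefficient 0.5 -> values 60
```

Scaling the whole region by a mean-dependent coefficient, rather than clamping to a fixed value, is what keeps the later dye blend soft at the hairline.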
In some embodiments, performing guided filtering based on the second hair region and the person image to determine the region to be dyed includes: determining a target channel image corresponding to the color channel with the largest variance in the person image; adjusting the image contrast of the target channel image to obtain an adjusted image; and performing guided filtering on the second hair region using the adjusted image, to obtain the region to be dyed.
The image contrast of the adjusted image is greater than the image contrast of the target channel image.
In a specific implementation, the process in which the electronic device performs guided filtering based on the second hair region and the person image to determine the region to be dyed includes: the electronic device may determine, among the three RGB channels of the image to be processed, the color channel with the largest variance; the electronic device then determines the target channel image corresponding to that color channel in the person image.
The electronic device adjusts the image contrast of the target channel image to obtain an adjusted image whose contrast is greater than that of the target channel image. Specifically, the electronic device removes pixels with specified pixel values from the target channel image, and then stretches the remaining image to the pixel value range, so that after stretching, the minimum pixel value of the image equals the minimum of the pixel value range and the maximum equals the maximum of the range. For example, the pixel value range may be 0-255, or the pixel value range of the selected channel image. Therefore, before removing the pixels with specified values, a histogram of the target image may be drawn. The electronic device uses the adjusted image as the guide image and the second hair region as the filtered image, performs guided filtering, and obtains the region to be dyed.
In the technical solution of the embodiments of the present application, by determining the target channel image corresponding to the color channel with the largest variance in the person image and adjusting its contrast to obtain a high-contrast adjusted image, the amount of computation during the guided filtering of the second hair region using the adjusted image can be reduced as much as possible, improving the real-time performance of hair-dyeing rendering.
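The channel selection, contrast stretch, and guided filtering can be sketched together as follows. Assumptions: the contrast adjustment is reduced to a plain min-max stretch (the document's variant also removes specified pixel values first), and the filter is the classic single-channel guided filter of He et al.; the patent does not commit to a specific formulation, and a mobile implementation would typically call an optimized library routine instead of this O(r²) box filter.

```python
import numpy as np

def box(img, r):
    """Mean filter over a (2r+1)^2 window, edge-padded."""
    out = np.zeros_like(img, dtype=np.float64)
    h, w = img.shape
    padded = np.pad(img.astype(np.float64), r, mode='edge')
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (2 * r + 1) ** 2

def guided_filter(guide, src, r=2, eps=1e-3):
    """Classic single-channel guided filter; guide and src in [0, 1]."""
    mean_I, mean_p = box(guide, r), box(src, r)
    var_I = box(guide * guide, r) - mean_I * mean_I
    cov_Ip = box(guide * src, r) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return box(a, r) * guide + box(b, r)

def prepare_guide(rgb):
    """Pick the channel with the largest variance, stretch it to [0, 1]."""
    channel = rgb[..., np.argmax([rgb[..., c].var() for c in range(3)])]
    lo, hi = channel.min(), channel.max()
    return (channel - lo) / max(hi - lo, 1e-6)

rgb = np.zeros((6, 6, 3), dtype=np.float64)
rgb[..., 0] = np.tile(np.linspace(0, 255, 6), (6, 1))  # red varies most
guide = prepare_guide(rgb)
hair = np.zeros((6, 6))
hair[:, 3:] = 1.0                     # hard-edged second hair region
dye_region = guided_filter(guide, hair)  # soft mask following image edges
```

The guide's edges steer the filter, so the binary hair mask becomes a soft matte whose boundary snaps to the image gradient, which is the "good edge characteristics" property the method relies on.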
In some embodiments, rendering the region to be dyed in the preset color includes: acquiring a target rendering hair color corresponding to the special effect shooting mode; performing color rendering on the region to be dyed based on the target rendering hair color; and/or, responding to a hair color selection instruction applied to a hair color selection entry; acquiring a target rendering hair color corresponding to the hair color selection instruction; and performing color rendering on the region to be dyed based on the target rendering hair color.
In a specific implementation, the process in which the electronic device renders the region to be dyed in the preset color includes: the electronic device may acquire the target rendering hair color corresponding to the special effect shooting mode.
Of course, when the electronic device enters the special effect shooting mode, the special effect shooting mode interface currently displayed by the electronic device further includes a hair color selection entry, which allows the user to switch between different hair-color rendering effects. The electronic device may respond to a hair color selection instruction applied by the user to the hair color selection entry, and then acquire the target rendering hair color corresponding to the hair color selection instruction.
The electronic device then performs color rendering on the region to be dyed based on the target rendering hair color. Specifically, the electronic device may perform color grading on the region to be dyed based on the target rendering hair color to obtain a color-graded region, where the hair color in the color-graded region matches the target rendering hair color.
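One plausible realization of the color-rendering step is a per-pixel alpha blend, using the soft dye mask from the guided filtering step as the blend weight. This is a sketch, not the patent's specified formula: the document only says pixel color levels are adjusted to match the target hair color, and the `strength` parameter is an assumption.

```python
import numpy as np

def render_hair_color(image, dye_mask, target_rgb, strength=0.7):
    """Blend the target hair color into the image, weighted by the soft
    dye mask (values in [0, 1]) produced by the guided filtering step."""
    img = image.astype(np.float32)
    target = np.array(target_rgb, dtype=np.float32)
    alpha = (dye_mask * strength)[..., None]  # per-pixel blend weight
    return (img * (1 - alpha) + target * alpha).astype(np.uint8)

image = np.full((2, 2, 3), 100, dtype=np.uint8)
mask = np.array([[1.0, 0.0], [0.0, 0.0]])  # only the top-left pixel is hair
out = render_hair_color(image, mask, target_rgb=(200, 40, 40), strength=1.0)
```

Because the mask is soft at the hairline after guided filtering, the blend fades out gradually there instead of producing a hard dyed edge, while pixels with mask value 0 are left untouched.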
In the technical solution of the embodiments of the present application, in the process of rendering the region to be dyed in the preset color, the electronic device acquires the target rendering hair color corresponding to the special effect shooting mode, and/or responds to the hair color selection instruction applied to the hair color selection entry to acquire the target rendering hair color corresponding to that instruction, and performs color rendering on the region to be dyed based on the target rendering hair color; in this way, the rendered person image can meet the user's hair-color effect requirements, so the person image does not need to be re-processed for effect addition, improving the efficiency with which the electronic device adds effects to images.
Fig. 2 is a flowchart of another special effect adding method according to an exemplary embodiment; the special effect adding method may be performed by an electronic device and, as shown in Fig. 2, includes the following steps. In step S202, in response to a hair-color special effect selection instruction, a special effect shooting mode is entered and a person image is acquired. In step S204, the person image is input into a semantic segmentation model, the semantic segmentation model being used to perform semantic segmentation on an input image. In step S206, a first hair region and a face region in the person image are determined based on a semantic segmentation result output by the semantic segmentation model. In step S208, dilation processing is performed on the first hair region and the face region respectively, to obtain a dilated first hair region and a dilated face region. In step S210, an overlapping region between the dilated first hair region and the dilated face region is determined as a first bangs region in the person image. In step S212, attenuation processing is performed on the first bangs region to obtain a second bangs region. In step S214, a second hair region is determined based on the second bangs region and the first hair region. In step S216, a target channel image corresponding to the color channel with the largest variance in the person image is determined. In step S218, the image contrast of the target channel image is adjusted to obtain an adjusted image, wherein the image contrast of the adjusted image is greater than the image contrast of the target channel image. In step S220, guided filtering is performed on the second hair region using the adjusted image, to obtain a region to be dyed. In step S222, the region to be dyed is rendered in a preset color. It should be noted that, for the specific limitations of the above steps, reference may be made to the specific limitations of the special effect adding method above, which are not repeated here.
Although the steps in the flowcharts of Fig. 1 and Fig. 2 are shown in sequence as indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, the execution order of these steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in Fig. 1 and Fig. 2 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments; their execution order is not necessarily sequential either, and they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Fig. 3 is a block diagram of a special effect adding apparatus according to an exemplary embodiment. Referring to Fig. 3, the apparatus includes:
a response unit 310 configured to enter a special effect shooting mode and acquire a person image in response to a hair-color special effect selection instruction;
a segmentation unit 320 configured to determine a first hair region and a face region in the person image;
a determining unit 330 configured to determine a second hair region based on the first hair region and the face region;
a guided filtering unit 340 configured to perform guided filtering based on the second hair region and the person image to determine a region to be dyed;
a rendering unit 350 configured to render the region to be dyed in a preset color.
In some embodiments, the segmentation unit 320 is specifically configured to input the person image into a semantic segmentation model, the semantic segmentation model being used to perform semantic segmentation on an input image; and determine the first hair region and the face region in the person image based on a semantic segmentation result output by the semantic segmentation model.
In some embodiments, the determining unit 330 is specifically configured to determine a first bangs region in the person image according to the first hair region and the face region; perform attenuation processing on the first bangs region to obtain a second bangs region; and determine the second hair region based on the second bangs region and the first hair region.
In some embodiments, the determining unit 330 is specifically configured to perform dilation processing on the first hair region and the face region respectively, to obtain a dilated first hair region and a dilated face region; and determine an overlapping region between the dilated first hair region and the dilated face region as the first bangs region in the person image.
In some embodiments, the determining unit 330 is specifically configured to determine a minimum rectangle covering the first hair region; query a dilation coefficient corresponding to the length of the short side of the minimum rectangle; and perform dilation processing on the first hair region and the face region respectively using the dilation coefficient, to obtain the dilated first hair region and the dilated face region.
In some embodiments, the determining unit 330 is specifically configured to acquire a pixel mean value of the first bangs region; query an attenuation coefficient corresponding to the pixel mean value; and reduce each pixel value in the first bangs region according to the attenuation coefficient, to obtain the second bangs region.
In some embodiments, the guided filtering unit 340 is specifically configured to determine a target channel image corresponding to the color channel with the largest variance in the person image; adjust the image contrast of the target channel image to obtain an adjusted image, wherein the image contrast of the adjusted image is greater than the image contrast of the target channel image; and perform guided filtering on the second hair region using the adjusted image, to obtain the region to be dyed.
In some embodiments, the rendering unit 350 is specifically configured to acquire a target rendering hair color corresponding to the special effect shooting mode, and perform color rendering on the region to be dyed based on the target rendering hair color; and/or respond to a hair color selection instruction applied to a hair color selection entry, acquire a target rendering hair color corresponding to the hair color selection instruction, and perform color rendering on the region to be dyed based on the target rendering hair color.
Regarding the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method, and is not elaborated here.
Fig. 4 is a block diagram of a device 400 for performing a special effect adding method according to an exemplary embodiment. For example, the device 400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
Referring to Fig. 4, the device 400 may include one or more of the following components: a processing component 402, a memory 404, a power component 406, a multimedia component 408, an audio component 410, an input/output (I/O) interface 412, a sensor component 414, and a communication component 416.
The processing component 402 generally controls the overall operation of the device 400, such as operations associated with display, phone calls, data communication, camera operation, and recording operation. The processing component 402 may include one or more processors 420 to execute instructions to complete all or some of the steps of the above method. In addition, the processing component 402 may include one or more modules to facilitate interaction between the processing component 402 and other components. For example, the processing component 402 may include a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.
The memory 404 is configured to store various types of data to support operation at the device 400. Examples of such data include instructions for any application or method operating on the device 400, contact data, phonebook data, messages, pictures, videos, and so on. The memory 404 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The power component 406 provides power to the various components of the device 400. The power component 406 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 400.
The multimedia component 408 includes a screen providing an output interface between the device 400 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action but also the duration and pressure associated with the touch or swipe action. In some embodiments, the multimedia component 408 includes a front camera and/or a rear camera. When the device 400 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each of the front camera and the rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 410 is configured to output and/or input audio signals. For example, the audio component 410 includes a microphone (MIC) configured to receive external audio signals when the device 400 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may be further stored in the memory 404 or transmitted via the communication component 416. In some embodiments, the audio component 410 also includes a speaker for outputting audio signals.
The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules, which may be keyboards, click wheels, buttons, and the like. These buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.
The sensor component 414 includes one or more sensors for providing status assessments of various aspects of the device 400. For example, the sensor component 414 can detect the open/closed state of the device 400 and the relative positioning of components (for example, the display and keypad of the device 400); the sensor component 414 can also detect a change in position of the device 400 or of a component of the device 400, the presence or absence of user contact with the device 400, the orientation or acceleration/deceleration of the device 400, and temperature changes of the device 400. The sensor component 414 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 414 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 416 is configured to facilitate wired or wireless communication between the device 400 and other devices. The device 400 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 416 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 416 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 400 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 404 including instructions executable by the processor 420 of the device 400 to complete the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
All the embodiments of the present disclosure may be performed independently or in combination with other embodiments, all of which are regarded as within the protection scope claimed by the present disclosure.
Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow the general principles of the present disclosure and include common knowledge or customary technical means in the technical field not disclosed by the present disclosure. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the accompanying drawings, and various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (25)

  1. A special effect adding method, applied to an electronic device, wherein the method comprises:
    in response to a hair-color special effect selection instruction, entering a special effect shooting mode and acquiring a person image;
    determining a first hair region and a face region in the person image;
    determining a second hair region based on the first hair region and the face region;
    performing guided filtering based on the second hair region and the person image to determine a region to be dyed;
    rendering the region to be dyed in a preset color.
  2. The special effect adding method according to claim 1, wherein determining the first hair region and the face region in the person image comprises:
    inputting the person image into a semantic segmentation model, the semantic segmentation model being used to perform semantic segmentation on an input image;
    determining the first hair region and the face region in the person image based on a semantic segmentation result output by the semantic segmentation model.
  3. The special effect adding method according to claim 1, wherein determining the second hair region based on the first hair region and the face region comprises:
    determining a first bangs region in the person image according to the first hair region and the face region;
    performing attenuation processing on the first bangs region to obtain a second bangs region;
    determining the second hair region based on the second bangs region and the first hair region.
  4. The special effect adding method according to claim 3, wherein determining the first bangs region in the person image according to the first hair region and the face region comprises:
    performing dilation processing on the first hair region and the face region respectively, to obtain a dilated first hair region and a dilated face region;
    determining an overlapping region between the dilated first hair region and the dilated face region as the first bangs region in the person image.
  5. The special effect adding method according to claim 4, wherein performing dilation processing on the first hair region and the face region respectively to obtain the dilated first hair region and the dilated face region comprises:
    determining a minimum rectangle covering the first hair region;
    querying a dilation coefficient corresponding to the length of the short side of the minimum rectangle;
    performing dilation processing on the first hair region and the face region respectively using the dilation coefficient, to obtain the dilated first hair region and the dilated face region.
  6. The special effect adding method according to claim 3, wherein performing attenuation processing on the first bangs region to obtain the second bangs region comprises:
    acquiring a pixel mean value of the first bangs region;
    querying an attenuation coefficient corresponding to the pixel mean value;
    reducing each pixel value in the first bangs region according to the attenuation coefficient, to obtain the second bangs region.
  7. The special effect adding method according to claim 1, wherein performing guided filtering based on the second hair region and the person image to determine the region to be dyed comprises:
    determining a target channel image corresponding to the color channel with the largest variance in the person image;
    adjusting the image contrast of the target channel image to obtain an adjusted image, wherein the image contrast of the adjusted image is greater than the image contrast of the target channel image;
    performing guided filtering on the second hair region using the adjusted image, to obtain the region to be dyed.
  8. The special effect adding method according to claim 1, wherein rendering the region to be dyed in the preset color comprises:
    acquiring a target rendering hair color corresponding to the special effect shooting mode;
    performing color rendering on the region to be dyed based on the target rendering hair color;
    and/or,
    responding to a hair color selection instruction applied to a hair color selection entry;
    acquiring a target rendering hair color corresponding to the hair color selection instruction;
    performing color rendering on the region to be dyed based on the target rendering hair color.
  9. A special effect adding apparatus, applied to an electronic device, comprising:
    a response unit configured to enter a special effect shooting mode and acquire a person image in response to a hair-color special effect selection instruction;
    a segmentation unit configured to determine a first hair region and a face region in the person image;
    a determining unit configured to determine a second hair region based on the first hair region and the face region;
    a guided filtering unit configured to perform guided filtering based on the second hair region and the person image to determine a region to be dyed;
    a rendering unit configured to render the region to be dyed in a preset color.
  10. The special effect adding apparatus according to claim 9, wherein the segmentation unit is specifically configured to input the person image into a semantic segmentation model, the semantic segmentation model being used to perform semantic segmentation on an input image; and determine the first hair region and the face region in the person image based on a semantic segmentation result output by the semantic segmentation model.
  11. The special effect adding apparatus according to claim 9, wherein the determining unit is specifically configured to determine a first bangs region in the person image according to the first hair region and the face region; perform attenuation processing on the first bangs region to obtain a second bangs region; and determine the second hair region based on the second bangs region and the first hair region.
  12. The special effect adding apparatus according to claim 11, wherein the determining unit is specifically configured to perform dilation processing on the first hair region and the face region respectively, to obtain a dilated first hair region and a dilated face region; and determine an overlapping region between the dilated first hair region and the dilated face region as the first bangs region in the person image.
  13. The special effect adding apparatus according to claim 12, wherein the determining unit is specifically configured to determine a minimum rectangle covering the first hair region; query a dilation coefficient corresponding to the length of the short side of the minimum rectangle; and perform dilation processing on the first hair region and the face region respectively using the dilation coefficient, to obtain the dilated first hair region and the dilated face region.
  14. The special effect adding apparatus according to claim 11, wherein the determining unit is specifically configured to acquire a pixel mean value of the first bangs region; query an attenuation coefficient corresponding to the pixel mean value; and reduce each pixel value in the first bangs region according to the attenuation coefficient, to obtain the second bangs region.
  15. The special effect adding apparatus according to claim 9, wherein the guided filtering unit is specifically configured to determine a target channel image corresponding to the color channel with the largest variance in the person image; adjust the image contrast of the target channel image to obtain an adjusted image, wherein the image contrast of the adjusted image is greater than the image contrast of the target channel image; and perform guided filtering on the second hair region using the adjusted image, to obtain the region to be dyed.
  16. The special effect adding apparatus according to claim 9, wherein the rendering unit is specifically configured to acquire a target rendering hair color corresponding to the special effect shooting mode, and perform color rendering on the region to be dyed based on the target rendering hair color; and/or respond to a hair color selection instruction applied to a hair color selection entry, acquire a target rendering hair color corresponding to the hair color selection instruction, and perform color rendering on the region to be dyed based on the target rendering hair color.
  17. An electronic device, comprising:
    a processor;
    a memory for storing instructions executable by the processor;
    wherein the processor is configured to execute the instructions to implement the following steps:
    in response to a hair-color special effect selection instruction, entering a special effect shooting mode and acquiring a person image;
    determining a first hair region and a face region in the person image;
    determining a second hair region based on the first hair region and the face region;
    performing guided filtering based on the second hair region and the person image to determine a region to be dyed;
    rendering the region to be dyed in a preset color.
  18. The electronic device according to claim 17, wherein the processor is configured to execute the instructions to implement the following steps:
    inputting the person image into a semantic segmentation model, the semantic segmentation model being used to perform semantic segmentation on an input image;
    determining the first hair region and the face region in the person image based on a semantic segmentation result output by the semantic segmentation model.
  19. The electronic device according to claim 17, wherein the processor is configured to execute the instructions to implement the following steps:
    determining a first bangs region in the person image according to the first hair region and the face region;
    performing attenuation processing on the first bangs region to obtain a second bangs region;
    determining the second hair region based on the second bangs region and the first hair region.
  20. The electronic device according to claim 19, wherein the processor is configured to execute the instructions to implement the following steps:
    performing dilation processing on the first hair region and the face region respectively, to obtain a dilated first hair region and a dilated face region;
    determining an overlapping region between the dilated first hair region and the dilated face region as the first bangs region in the person image.
  21. The electronic device according to claim 20, wherein the processor is configured to execute the instructions to implement the following steps:
    determining a minimum rectangle covering the first hair region;
    querying a dilation coefficient corresponding to the length of the short side of the minimum rectangle;
    performing dilation processing on the first hair region and the face region respectively using the dilation coefficient, to obtain the dilated first hair region and the dilated face region.
  22. The electronic device according to claim 19, wherein the processor is configured to execute the instructions to implement the following steps:
    acquiring a pixel mean value of the first bangs region;
    querying an attenuation coefficient corresponding to the pixel mean value;
    reducing each pixel value in the first bangs region according to the attenuation coefficient, to obtain the second bangs region.
  23. The electronic device according to claim 17, wherein the processor is configured to execute the instructions to implement the following steps:
    determining a target channel image corresponding to the color channel with the largest variance in the person image;
    adjusting the image contrast of the target channel image to obtain an adjusted image, wherein the image contrast of the adjusted image is greater than the image contrast of the target channel image;
    performing guided filtering on the second hair region using the adjusted image, to obtain the region to be dyed.
  24. The electronic device according to claim 17, wherein the processor is configured to execute the instructions to implement the following steps:
    acquiring a target rendering hair color corresponding to the special effect shooting mode;
    performing color rendering on the region to be dyed based on the target rendering hair color;
    and/or,
    responding to a hair color selection instruction applied to a hair color selection entry;
    acquiring a target rendering hair color corresponding to the hair color selection instruction;
    performing color rendering on the region to be dyed based on the target rendering hair color.
  25. A non-transitory machine-readable storage medium storing instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the following steps:
    in response to a hair-color special effect selection instruction, entering a special effect shooting mode and acquiring a person image;
    determining a first hair region and a face region in the person image;
    determining a second hair region based on the first hair region and the face region;
    performing guided filtering based on the second hair region and the person image to determine a region to be dyed;
    rendering the region to be dyed in a preset color.
PCT/CN2021/105513 2020-10-16 2021-07-09 Special effect adding method and apparatus WO2022077970A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011110352.4 2020-10-16
CN202011110352.4A CN112258605A (zh) 2020-10-16 Special effect adding method and apparatus, electronic device and storage medium

Publications (1)

Publication Number Publication Date
WO2022077970A1 true WO2022077970A1 (zh) 2022-04-21

Family

ID=74244564

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/105513 WO2022077970A1 (zh) 2020-10-16 2021-07-09 特效添加方法及装置

Country Status (2)

Country Link
CN (1) CN112258605A (zh)
WO (1) WO2022077970A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112258605A (zh) * 2020-10-16 2021-01-22 Beijing Dajia Internet Information Technology Co., Ltd. Special effect adding method and apparatus, electronic device and storage medium
CN112883821B (zh) * 2021-01-27 2024-02-20 Vivo Mobile Communication Co., Ltd. Image processing method and apparatus, and electronic device
CN113129319B (zh) * 2021-04-29 2023-06-23 Beijing SenseTime Technology Development Co., Ltd. Image processing method and apparatus, computer device and storage medium
CN114240742A (zh) * 2021-12-17 2022-03-25 Beijing Zitiao Network Technology Co., Ltd. Image processing method and apparatus, electronic device and storage medium
CN114758027A (zh) * 2022-04-12 2022-07-15 Beijing Zitiao Network Technology Co., Ltd. Image processing method and apparatus, electronic device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107808136A (zh) * 2017-10-31 2018-03-16 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, readable storage medium and computer device
CN110807780A (zh) * 2019-10-23 2020-02-18 Beijing Dajia Internet Information Technology Co., Ltd. Image processing method and apparatus
CN111127591A (zh) * 2019-12-24 2020-05-08 Tencent Technology (Shenzhen) Co., Ltd. Image hair dyeing processing method and apparatus, terminal and storage medium
US20200294243A1 (en) * 2019-06-03 2020-09-17 Beijing Dajia Internet Information Technology Co., Ltd. Method, electronic device and storage medium for segmenting image
CN112258605A (zh) * 2020-10-16 2021-01-22 Beijing Dajia Internet Information Technology Co., Ltd. Special effect adding method and apparatus, electronic device and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110730303B (zh) * 2019-10-25 2022-07-12 Tencent Technology (Shenzhen) Co., Ltd. Image hair dyeing processing method and apparatus, terminal and storage medium


Also Published As

Publication number Publication date
CN112258605A (zh) 2021-01-22

Similar Documents

Publication Publication Date Title
WO2022077970A1 (zh) Special effect adding method and apparatus
WO2016011747A1 (zh) Skin color adjustment method and apparatus
US20180286097A1 (en) Method and camera device for processing image
CN106408603B (zh) Shooting method and apparatus
CN105095881B (zh) Face recognition method and apparatus, and terminal
WO2016138752A1 (zh) Shooting parameter adjustment method and apparatus
CN107341777B (zh) Picture processing method and apparatus
CN108154466B (zh) Image processing method and apparatus
WO2022110837A1 (zh) Image processing method and apparatus
CN107967459B (zh) Convolution processing method and apparatus, and storage medium
US11308692B2 (en) Method and device for processing image, and storage medium
CN107015648B (zh) Picture processing method and apparatus
CN112188091B (zh) Face information recognition method and apparatus, electronic device and storage medium
CN110580688A (zh) Image processing method and apparatus, electronic device and storage medium
WO2022095860A1 (zh) Method and apparatus for adding nail special effects
US20220327749A1 (en) Method and electronic device for processing images
CN112004020B (zh) Image processing method and apparatus, electronic device and storage medium
US9665925B2 (en) Method and terminal device for retargeting images
CN107730443B (zh) Image processing method and apparatus, and user equipment
CN107437269B (zh) Picture processing method and apparatus
WO2021189927A1 (zh) Image processing method and apparatus, electronic device and storage medium
CN110913120B (zh) Image shooting method and apparatus, electronic device and storage medium
WO2022193573A1 (zh) Face fusion method and apparatus
CN115914721A (zh) Live streaming image processing method and apparatus, electronic device and storage medium
US11252341B2 (en) Method and device for shooting image, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21879018

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25.09.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21879018

Country of ref document: EP

Kind code of ref document: A1