WO2022110837A1 - Image processing method and apparatus - Google Patents

Image processing method and apparatus

Info

Publication number
WO2022110837A1
Authority
WO
WIPO (PCT)
Prior art keywords
hair
image
texture
processed
highlight
Prior art date
Application number
PCT/CN2021/106913
Other languages
English (en)
French (fr)
Inventor
郑屹
马重阳
侯沛宏
Original Assignee
北京达佳互联信息技术有限公司 (Beijing Dajia Internet Information Technology Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 北京达佳互联信息技术有限公司 (Beijing Dajia Internet Information Technology Co., Ltd.)
Publication of WO2022110837A1 publication Critical patent/WO2022110837A1/zh

Classifications

    • G06T5/77
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture

Definitions

  • the present disclosure relates to the technical field of image processing, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a storage medium.
  • the present disclosure provides an image processing method, apparatus, electronic device and storage medium.
  • an image processing method including:
  • Fusion processing is performed on the hair texture image and the hair region in the to-be-processed image to generate a processed image, where the processed image carries the hair texture information in the hair texture image and the color texture information in the hair region.
  • the determining the hair direction information in the hair region includes:
  • the hair direction is the hair growth direction, at the pixel point, of the hair in the to-be-processed image;
  • the local smoothing result is used as the hair direction information in the hair region.
  • performing local smoothing processing on the hair direction corresponding to each of the pixel points to obtain a local smoothing result including:
  • acquiring surrounding pixel points of the target pixel point, where the pixel distance between each surrounding pixel point and the target pixel point is less than a preset threshold, and the surrounding pixel points and the target pixel point are pixels on the same hair in the hair region;
  • performing fusion processing on the hair texture image and the hair region in the to-be-processed image to generate a processed image includes:
  • the image processing method further includes:
  • the hair highlight information is added to the processed image to obtain a fill-light hair image.
  • adding the hair highlight information to the processed image to obtain a fill-light hair image includes:
  • generating a hair texture image based on the hair direction information and a preset hair texture material includes:
  • the texture direction corresponding to each of the pixel points is consistent with the smoothed hair direction corresponding to each of the pixel points;
  • the adjusted hair texture material image is used as the hair texture image.
  • an image processing apparatus including:
  • an acquisition unit configured to perform acquisition of an image to be processed, and identify a hair region from the image to be processed
  • a determination unit configured to perform determination of hair direction information in the hair region
  • a generating unit configured to generate a hair texture image based on the hair direction information and a preset hair texture material
  • the fusion unit is configured to perform fusion processing on the hair texture image and the hair region in the to-be-processed image to generate a processed image, where the processed image carries the hair texture information in the hair texture image and the color texture information in the hair region.
  • the determining unit is further configured to: obtain the hair direction corresponding to each pixel in the hair region, where the hair direction is the hair growth direction, at the pixel point, of the hair in the image to be processed; perform local smoothing processing on the hair direction corresponding to each pixel point to obtain a local smoothing result, where the local smoothing result includes the smoothed hair direction corresponding to each pixel point; and use the local smoothing result as the hair direction information in the hair region.
  • the determining unit is further configured to: determine any pixel point in the hair region as a target pixel point; acquire surrounding pixel points of the target pixel point, where the pixel distance between each surrounding pixel point and the target pixel point is less than a preset threshold and the surrounding pixel points and the target pixel point are pixels on the same hair in the hair region; and fuse the hair direction corresponding to the surrounding pixel points with the hair direction corresponding to the target pixel point to obtain the fused hair direction corresponding to the target pixel point, which is used as the local smoothing result.
  • the fusion unit is further configured to perform noise filtering processing on the hair region to obtain a noise-filtered image, and to fuse the hair texture image with the noise-filtered image to obtain the processed image.
  • the fusion unit is further configured to extract, in the image to be processed, the hair highlight information in the hair region, and to add the hair highlight information to the processed image to obtain the fill-light hair image.
  • the fusion unit is further configured to: fuse the hair highlight information into the hair texture image to generate a hair highlight image, where the hair highlight effect in the hair highlight image is determined based on preset highlight effect parameters; and fuse the hair highlight image with the processed image to obtain the fill-light hair image, where the hair highlight distribution in the fill-light hair image is consistent with the hair highlight distribution in the hair highlight image.
  • the generating unit is further configured to: acquire a hair texture material image; adjust the direction of the texture in the hair texture material image along the smoothed hair direction corresponding to each pixel point to obtain an adjusted hair texture material image, where the texture direction corresponding to each pixel in the adjusted hair texture material image is consistent with the smoothed hair direction corresponding to each pixel; and use the adjusted hair texture material image as the hair texture image.
  • an electronic device including a memory and a processor, where the memory stores a computer program, and the processor implements the image processing method according to the first aspect or any embodiment of the first aspect when executing the computer program.
  • a storage medium having a computer program stored thereon, where the computer program implements the image processing method according to the first aspect or any embodiment of the first aspect when executed by a processor.
  • a computer program product comprising a computer program, where the computer program is stored in a readable storage medium, and at least one processor of a device reads and executes the computer program from the readable storage medium, so that the device executes the image processing method described in the first aspect or any embodiment of the first aspect.
  • the embodiment of the present disclosure acquires the image to be processed and identifies the hair region from it; determines the hair direction information in the hair region; generates a hair texture image based on the hair direction information and a preset hair texture material; and fuses the hair texture image with the hair region in the image to be processed to generate a processed image, so that the processed image carries the hair texture information in the hair texture image and the color texture information in the hair region.
  • hair texture information adapted to the hair flow direction in the image is added to the to-be-processed image, so that the obtained processed image retains good hair detail, avoiding the loss of hair detail that occurs when conventional techniques are used to retouch hair images.
  • FIG. 1 is an application environment diagram of an image processing method according to an exemplary embodiment.
  • Fig. 2 is a flowchart of an image processing method according to an exemplary embodiment.
  • Fig. 3 is a schematic diagram showing a hair highlight effect according to an exemplary embodiment.
  • Fig. 4 is a schematic flowchart of a method for generating a hair texture image according to an exemplary embodiment.
  • Fig. 5 is a flowchart of an image processing method according to an exemplary embodiment.
  • Fig. 6 is a processing flow chart of an image processing method according to an exemplary embodiment.
  • Fig. 7 is a block diagram of an image processing apparatus according to an exemplary embodiment.
  • Fig. 8 is an internal structure diagram of an electronic device according to an exemplary embodiment.
  • the image processing method provided by the present disclosure can be applied to the application environment shown in FIG. 1 .
  • the electronic device 110 acquires the image to be processed and identifies the hair region from it; the electronic device 110 determines the hair direction information in the hair region; the electronic device 110 generates a hair texture image based on the hair direction information and a preset hair texture material; the electronic device 110 performs fusion processing on the hair texture image and the hair region in the to-be-processed image to generate a processed image, which carries the hair texture information in the hair texture image and the color texture information in the hair region.
  • the electronic device 110 can be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers and portable wearable devices.
  • Fig. 2 is a flow chart of an image processing method according to an exemplary embodiment. The method can be executed by the electronic device of Fig. 1. As shown in Fig. 2, the method includes the following steps.
  • step S210 an image to be processed is acquired, and a hair region is identified from the to-be-processed image.
  • the image to be processed may refer to an image that needs to be processed.
  • the image may be an image pre-stored in the electronic device, or may be an image captured by the electronic device in real time.
  • the hair area is the area where the hair of the person or animal in the image is located.
  • when the image contains a person, the hair region is the region where the person's hair is located;
  • when the image contains an animal, the hair region is the region where the animal's fur is located.
  • the electronic device may first recognize the to-be-processed image and determine the hair region in it to obtain a hair region mask image. For example, the electronic device may acquire a predetermined number of sample images in advance, annotate them, and obtain a target recognition model by training on the annotated sample images. The image to be processed is then recognized by the target recognition model, which determines the hair region in the image to be processed.
  • the method for identifying the hair region from the image to be processed may also be implemented in other manners, which are not limited in the embodiments of the present disclosure.
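The patent leaves the recognition method open. As one illustrative sketch (the segmentation network itself is assumed to exist, and all names here are hypothetical), a trained model's soft per-pixel output can be binarized into a hair-region mask:

```python
import numpy as np

def hair_mask_from_probs(prob_map, threshold=0.5):
    """Binarize a soft segmentation output into a hair-region mask.

    prob_map is an HxW array of per-pixel hair probabilities in [0, 1],
    assumed to come from a trained hair-segmentation model.
    """
    return prob_map >= threshold

# Simulated model output: hair in the top half of a 4x4 image.
probs = np.zeros((4, 4))
probs[:2, :] = 0.9
mask = hair_mask_from_probs(probs)  # boolean hair-region mask
```

The threshold of 0.5 is an assumption; in practice it would be tuned or replaced by an argmax over classes.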
  • step S220 hair direction information in the hair region is determined.
  • the electronic device can analyze the hair region in the image to be processed by means of a neural network, image analysis, etc., determine the hair direction corresponding to each pixel point in the hair region, and use the hair direction corresponding to each pixel point as the hair direction information of the hair region.
  • step S230 a hair texture image is generated based on the hair direction information and a preset hair texture material.
  • the preset hair texture material may refer to a material corresponding to the hair texture used to be added to the image to be processed.
  • the hair texture image carries hair texture information adapted to the hair direction information in the hair region.
  • the electronic device may generate a hair texture image based on the hair direction information and a preset hair texture material. In some embodiments, the electronic device may transform the texture lines in the hair texture material based on the hair direction corresponding to each pixel in the hair region to obtain a hair texture image.
  • the hair texture image includes a hair texture; and the hair direction corresponding to each pixel in the hair texture is consistent with the hair direction corresponding to each pixel in the hair region.
  • step S240 fusion processing is performed on the hair texture image and the hair region in the to-be-processed image to generate a processed image, where the processed image carries the hair texture information in the hair texture image and the color texture information in the hair region.
  • the electronic device performs fusion processing on the hair texture image and the hair region in the to-be-processed image to generate a processed image, so that the processed image carries the hair texture information in the hair texture image and the color texture information in the hair region, that is, the hair in the processed image has more hairline detail.
  • by acquiring the image to be processed, the hair region is identified from it; the hair direction information in the hair region is determined; a hair texture image is generated based on the hair direction information and a preset hair texture material; and the hair texture image and the hair region in the to-be-processed image are fused to generate a processed image, so that the processed image carries the hair texture information in the hair texture image and the color texture information in the hair region.
  • In this way, hair texture information adapted to the hair flow direction in the hair region is added to the image to be processed, so that the resulting processed image retains good hair detail, avoiding the loss of hair detail that occurs when conventional techniques are used to process hair images.
  • determining the hair direction information in the hair area includes: acquiring the hair direction corresponding to each pixel in the hair area, where the hair direction is the hair growth direction of the hair in the image to be processed on the pixel; The local smoothing process is performed on the hair direction corresponding to each pixel to obtain a local smoothing result.
  • the local smoothing result includes the smoothed hair direction corresponding to each pixel; the local smoothing result is used as the hair direction information in the hair area.
  • the hair direction is the hair growth direction of the hair in the image to be processed on the pixel point.
  • in the process of determining the hair direction information in the hair region, the electronic device obtains the hair direction corresponding to each pixel in the hair region. In some embodiments, the electronic device may calculate the image gradient in the hair region and determine the hair direction corresponding to each pixel based on the gradient calculation result. Alternatively, the electronic device may convolve the neighborhood of each pixel in the hair region with filters along different directions and obtain the hair direction at each pixel by finding the maximum filter response.
  • the electronic device can obtain a predetermined number of sample image data in advance, mark the hair direction corresponding to each pixel in the sample image data, and obtain a target recognition model by training the marked sample image data.
  • the image is recognized by the target recognition model, and the hair direction corresponding to each pixel in the hair region in the image to be processed is determined.
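The filter-bank variant mentioned above — convolving each pixel's neighborhood with filters along different directions and taking the direction of the maximum response — can be sketched as follows. The simple normalized line kernels used here are an assumption; the patent does not specify the filters:

```python
import numpy as np

def oriented_kernels(size=5, n_orientations=8):
    """Normalized line kernels at n_orientations angles in [0, pi)."""
    angles = np.pi * np.arange(n_orientations) / n_orientations
    kernels, c = [], size // 2
    for a in angles:
        k = np.zeros((size, size))
        for t in np.linspace(-c, c, 4 * size):
            y = int(round(float(c + t * np.sin(a))))
            x = int(round(float(c + t * np.cos(a))))
            if 0 <= y < size and 0 <= x < size:
                k[y, x] = 1.0
        kernels.append(k / k.sum())
    return kernels, angles

def hair_direction(gray, size=5, n_orientations=8):
    """Per-pixel hair direction: angle of the maximum filter response."""
    H, W = gray.shape
    pad = size // 2
    padded = np.pad(gray, pad, mode='edge')
    kernels, angles = oriented_kernels(size, n_orientations)
    responses = np.empty((len(kernels), H, W))
    for i, k in enumerate(kernels):
        for y in range(H):
            for x in range(W):
                responses[i, y, x] = np.sum(padded[y:y + size, x:x + size] * k)
    return angles[np.argmax(responses, axis=0)]
```

A bright horizontal strand should produce a direction of 0 radians on its pixels; real implementations would use Gabor-style filters and vectorized convolution instead of these explicit loops.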
  • the electronic device performs local smoothing processing on the hair direction corresponding to each pixel to obtain a local smoothing result; and uses the local smoothing result as the hair direction information in the hair area.
  • the local smoothing result includes the smoothed hair direction corresponding to each pixel.
  • the hair direction corresponding to each pixel in the hair region is obtained, and local smoothing processing is performed on it to obtain the smoothed hair direction corresponding to each pixel. This improves the hairline smoothing effect in the processed image, so that the resulting processed image has good hairline detail.
  • performing local smoothing processing on the hair direction corresponding to each pixel to obtain a local smoothing result includes: determining any pixel in the hair region as the target pixel; acquiring surrounding pixel points of the target pixel, where the pixel distance between each surrounding pixel point and the target pixel point is less than a preset threshold and the surrounding pixel points and the target pixel point are pixels on the same hair in the hair region; and fusing the hair direction corresponding to the surrounding pixel points with the hair direction corresponding to the target pixel point to obtain the fused hair direction corresponding to the target pixel point, which is used as the local smoothing result.
  • the pixel distance between the surrounding pixels and the target pixel is less than a preset threshold.
  • the process in which the electronic device performs local smoothing processing on the hair direction corresponding to each pixel point to obtain the local smoothing result includes: determining any pixel point in the hair region as the target pixel point; acquiring the surrounding pixel points of the target pixel point; and fusing the hair direction corresponding to the surrounding pixel points with the hair direction corresponding to the target pixel point to obtain the fused hair direction corresponding to the target pixel point.
  • the electronic device may perform a weighted average on the hair direction corresponding to the surrounding pixels and the hair direction corresponding to the target pixel to obtain the weighted average hair direction as the fusion hair direction corresponding to the target pixel.
  • the electronic device takes the fused hair direction corresponding to each target pixel as the smoothed hair direction corresponding to each target pixel, and then obtains a local smoothing result.
  • any pixel in the hair region is determined as the target pixel; the surrounding pixels of the target pixel are acquired, where the pixel distance between each surrounding pixel and the target pixel is less than a preset threshold; fusion processing is performed on the hair direction corresponding to the surrounding pixels and the hair direction corresponding to the target pixel, and the fused hair direction corresponding to the target pixel is obtained as the smoothed hair direction. In this way, fusing according to the hair directions of the target pixel's surrounding pixels achieves accurate local smoothing of the target pixel's hair direction, so that the hair texture details in the processed image obtained based on the local smoothing result have a smoother, more supple hair effect.
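Hair direction is an orientation defined modulo π, so a naive weighted average of angles misbehaves near the wraparound. One common way to realize the weighted-average fusion described above (the doubled-angle trick is an assumption; the patent only specifies a weighted average) is:

```python
import numpy as np

def fuse_direction(target_angle, neighbor_angles, target_weight=1.0):
    """Weighted-average fusion of hair orientations (defined mod pi).

    Angles are doubled before vector averaging and halved afterwards, so
    that e.g. 0.05 and pi - 0.05 — nearly the same orientation — fuse to
    ~0 rather than ~pi/2.
    """
    angles = np.concatenate(([target_angle], np.asarray(neighbor_angles)))
    weights = np.concatenate(([target_weight], np.ones(len(neighbor_angles))))
    # Average unit vectors of the doubled angles, then halve the result.
    vx = np.sum(weights * np.cos(2.0 * angles))
    vy = np.sum(weights * np.sin(2.0 * angles))
    return (0.5 * np.arctan2(vy, vx)) % np.pi

# Two nearly identical orientations on either side of the wraparound.
fused = fuse_direction(0.05, [np.pi - 0.05])
```

Here `fused` lands near 0 (equivalently near π), which is the intuitively correct smoothed orientation.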
  • performing fusion processing on the hair texture image and the hair region in the image to be processed to generate a processed image includes: performing noise filtering processing on the hair region to obtain a noise-filtered image; and fusing the hair texture image and the noise-filtered image to obtain the processed image.
  • when the electronic device performs fusion processing on the hair texture image and the hair region in the to-be-processed image to generate the processed image, the electronic device may first perform noise filtering processing on the hair region to obtain a noise-filtered image. In some embodiments, the electronic device may perform an image smoothing operation on the hair region based on the hair direction corresponding to each pixel in the hair region, so as to filter out noise in the hair region and obtain the noise-filtered image.
  • the electronic device performs fusion processing on the hair texture image and the image after noise filtering to obtain a processed image.
  • the electronic device may perform fusion processing on the hair texture image and the noise-filtered image, so as to combine the hair texture information in the hair texture image into the noise-filtered image, so as to obtain the simultaneous carrying A processed image with hair texture information in the hair texture image and color texture information in the hair region.
  • noise filtering processing is performed on the hair region to obtain a noise-filtered image, and the hair texture image and the noise-filtered image are fused to obtain the processed image. In this way, the processed image carries the hair texture information in the hair texture image and the color texture information in the hair region without carrying redundant noise information, which improves the overall texture quality of the processed image.
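A minimal sketch of this step, under the assumption that noise filtering means averaging each pixel with samples taken along its smoothed hair direction and that the final fusion is simple alpha blending (neither operator is fixed by the patent):

```python
import numpy as np

def smooth_along_direction(gray, angles, length=5):
    """Noise filtering: average each pixel with samples taken along its
    smoothed hair direction, suppressing cross-strand noise."""
    H, W = gray.shape
    out = np.zeros((H, W))
    offsets = np.arange(length) - length // 2
    for y in range(H):
        for x in range(W):
            dy, dx = np.sin(angles[y, x]), np.cos(angles[y, x])
            vals = []
            for t in offsets:
                yy = min(max(int(round(float(y + t * dy))), 0), H - 1)
                xx = min(max(int(round(float(x + t * dx))), 0), W - 1)
                vals.append(gray[yy, xx])
            out[y, x] = np.mean(vals)
    return out

def fuse_images(texture, denoised, alpha=0.5):
    """Blend hair-texture detail into the noise-filtered image."""
    return alpha * texture + (1.0 - alpha) * denoised

# A horizontal strand survives smoothing along its own (0 rad) direction.
gray = np.zeros((9, 9))
gray[4, :] = 1.0
smoothed = smooth_along_direction(gray, np.zeros((9, 9)))
```

Smoothing along the strand's own direction leaves the strand intact while averaging away speckle that varies across strands; `alpha` is a hypothetical blending weight.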
  • the image processing method further includes: in the image to be processed, extracting hair highlight information in the hair region; adding the hair highlight information to the processed image to obtain a fill-light hair image.
  • adding the hair highlight information to the processed image to obtain a fill-light hair image includes: fusing the hair highlight information into the hair texture image to generate a hair highlight image; and performing fusion processing on the hair highlight image and the processed image to obtain the fill-light hair image.
  • the hair highlight distribution area in the fill-light hair image is consistent with the hair highlight distribution area in the hair highlight image.
  • the hair highlight distribution area in the hair highlight image is determined according to preset highlight effect parameters.
  • the hair highlight distribution area in the processed image is consistent with the hair highlight distribution area in the hair highlight image.
  • the supplementary light hair image may refer to a processed image carrying hair highlight information of the to-be-processed image.
  • the electronic device can extract the hair highlight information in the hair area in the image to be processed; add the hair highlight information to the processed image to obtain a fill-light hair image; in this way, the fill-light hair image simultaneously It carries the hair texture information in the hair texture image, the color texture information in the hair area, and the hair highlight information.
  • the process in which the electronic device adds the hair highlight information to the processed image to obtain the fill-light hair image includes: the electronic device can fuse the hair highlight information into the hair texture image to generate a hair highlight image.
  • based on the hair highlight information extracted from the hair region, the electronic device obtains a highlight region and combines the highlight region with the hair texture image to obtain an initial hair highlight distribution region; then, according to the preset highlight effect parameters, the electronic device adjusts the distribution area and range of the highlight in the initial hair highlight distribution region using a physically based rendering method, thereby generating the hair highlight image.
  • with different highlight effect parameters, the rendered hair highlight image exhibits different hair highlight effects.
  • the hair highlight effect in the hair highlight image may include, but is not limited to, hair highlight effects such as band highlight, filament highlight, point highlight, and highlight synthesis.
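The extraction and re-addition of highlight information is not pinned down in the source; one plausible sketch is to keep the above-threshold part of the hair region's luminance as the highlight and screen-blend it back (both the threshold and the screen blend are assumptions):

```python
import numpy as np

def extract_highlight(luma, hair_mask, threshold=0.8):
    """Keep the above-threshold part of the hair region's luminance
    as the hair highlight information."""
    return np.clip(luma - threshold, 0.0, None) * hair_mask

def add_highlight(processed, highlight):
    """Screen-blend the extracted highlight onto the processed image
    (values assumed normalized to [0, 1])."""
    return 1.0 - (1.0 - processed) * (1.0 - highlight)

luma = np.array([[0.9, 0.5]])   # one bright, one dark hair pixel
mask = np.array([[1.0, 1.0]])   # both inside the hair region
highlight = extract_highlight(luma, mask)
fill_light = add_highlight(np.array([[0.5, 0.5]]), highlight)
```

Screen blending only ever brightens, which matches the fill-light intent: pixels without extracted highlight are left untouched.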
  • FIG. 3 exemplarily provides a schematic diagram of a hair highlight effect.
  • 310 is a band highlight
  • 320 is a silk highlight
  • 330 is a point highlight
  • 340 is a highlight composite.
  • the electronic device fuses the hair highlight image and the processed image to obtain a fill-light hair image, so that the hair highlight information in the hair area is fused into the processed image to obtain a fill-light hair image.
  • the technical solution in this embodiment extracts the hair highlight information of the hair region from the image to be processed, fuses the hair highlight information into the hair texture image to generate a hair highlight image, and fuses the hair highlight image with the processed image. The obtained fill-light hair image thus carries the hair highlight information of the image to be processed while retaining good hair detail, which improves the fidelity of the hair region in the fill-light hair image and enables it to meet users' image processing needs.
  • generating a hair texture image based on the hair direction information and a preset hair texture material includes: acquiring a hair texture material image; adjusting the direction of the texture in the hair texture material image along the smoothed hair direction corresponding to each pixel point to obtain an adjusted hair texture material image; and using the adjusted hair texture material image as the hair texture image.
  • the texture direction corresponding to each pixel in the adjusted hair texture material image is consistent with the smoothed hair direction corresponding to each pixel.
  • the process of generating the hair texture image by the electronic device based on the hair direction information and the preset hair texture material includes: the electronic device may acquire a hair texture material image including the hair texture material.
  • the electronic device can adjust the direction of the texture in the hair texture material image along the smoothed hair direction corresponding to each pixel, that is, perform linear transformation processing on each texture in the hair texture material image. For example, the electronic device can apply linear transformations such as translation, rotation, and scaling to each texture in the hair texture material image along the smoothed hair direction corresponding to each pixel point, and collage the adjusted textures into a hair texture material with rich hair details.
  • the electronic device uses the adjusted hair texture material image as a hair texture image, that is, a hair detail image.
  • the electronic device may use hair texture material images with different hair texture characteristics according to the actual needs of the user, so that the obtained hair texture image exhibits different hair texture characteristics.
  • FIG. 4 exemplarily provides a schematic flowchart of a method for generating a hair texture image.
  • the electronic device can obtain the hair texture material image 420 for the image to be processed 410, and performs linear transformation processing on each texture in the hair texture material image along the hair flow direction corresponding to each hair in the hair region to obtain the hair texture image 430.
  • the technical solution in this embodiment acquires a hair texture material image and adjusts the direction of the texture in it along the smoothed hair direction corresponding to each pixel point to obtain an adjusted hair texture material image. The adjusted hair texture material image gives the generated hair texture image rich and realistic hair details and improves the fidelity of the hair region in the fill-light hair image.
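As an illustrative sketch of aligning the material's texture with the smoothed direction field (nearest-neighbor sampling and a horizontally-stranded material are assumptions; the patent only requires that the adjusted texture direction match the smoothed hair direction per pixel):

```python
import numpy as np

def align_texture(material, angles):
    """Warp a hair-texture material image so that its texture direction
    at each pixel follows the smoothed hair direction there.

    Nearest-neighbor sampling; the material's strands are assumed to run
    horizontally, so a local angle of 0 leaves the pixel unchanged."""
    H, W = material.shape
    cy, cx = (H - 1) / 2.0, (W - 1) / 2.0
    out = np.zeros_like(material)
    for y in range(H):
        for x in range(W):
            a = angles[y, x]
            dy, dx = y - cy, x - cx
            # Rotate the sampling coordinate by -a about the image center.
            sx = cx + np.cos(a) * dx + np.sin(a) * dy
            sy = cy - np.sin(a) * dx + np.cos(a) * dy
            yy = min(max(int(round(float(sy))), 0), H - 1)
            xx = min(max(int(round(float(sx))), 0), W - 1)
            out[y, x] = material[yy, xx]
    return out

material = np.arange(16, dtype=float).reshape(4, 4)
# With an all-zero direction field the material is returned unchanged.
aligned = align_texture(material, np.zeros((4, 4)))
```

A production version would use bilinear interpolation and per-patch rather than global-center rotation, but the inverse-mapping structure is the same.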
  • Fig. 5 is a flow chart showing another image processing method according to an exemplary embodiment.
  • the method can be executed by the electronic device in Fig. 1 and includes the following steps.
  • step S502 an image to be processed is acquired, and a hair region is identified from the to-be-processed image.
  • step S504 the hair direction corresponding to each pixel point in the hair region is obtained, and the hair direction is the hair growth direction of the hair in the to-be-processed image on the pixel point.
  • step S506 a local smoothing process is performed on the hair direction corresponding to each of the pixel points to obtain a local smoothing result, which is used as the hair direction information in the hair region; the local smoothing result includes the smoothed hair direction corresponding to each pixel point.
  • step S508 a hair texture material image is acquired.
  • step S510 the direction of the texture in the hair texture material image is adjusted along the smoothed hair direction corresponding to each pixel point, to obtain an adjusted hair texture material image.
  • the texture direction corresponding to each pixel point in the adjusted hair texture material image is consistent with the smoothed hair direction corresponding to each pixel point.
  • step S512 the adjusted hair texture material image is used as the hair texture image.
  • step S514 noise filtering processing is performed on the hair region to obtain an image after noise filtering.
  • step S516 fusion processing is performed on the hair texture image and the noise-filtered image to obtain a processed image, where the processed image carries the hair texture information in the hair texture image and the color texture information in the hair region.
  • although the steps in the flowcharts of FIG. 2 and FIG. 5 are shown in sequence according to the arrows, these steps are not necessarily executed in that sequence. Unless explicitly stated herein, the execution order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in FIG. 2 and FIG. 5 may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be executed at different times; their execution order is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least a part of the sub-steps or stages of other steps.
  • FIG. 6 provides a processing flow chart of an image processing method. The electronic device obtains the image to be processed 610 and identifies the hair region from it, obtaining the hair region mask 620; the electronic device determines the hair flow direction corresponding to each hair in the hair region, obtaining the hair direction diagram 630; the electronic device generates a hair texture image 640 based on the hair flow direction and the hair texture material to be added; based on the hair direction corresponding to each pixel in the hair region, the electronic device performs image smoothing on the hair region to obtain a smoothed image 650; the hair texture image and the smoothed image are fused to obtain a fused image 660; the hair highlight information in the hair region is extracted from the image to be processed, obtaining a highlight distribution map 670 carrying the hair highlight information; finally, the electronic device adds the hair highlight information to the fused image to obtain a processed image 680.
  • FIG. 7 is a block diagram of an image processing apparatus 700 according to an exemplary embodiment. Referring to FIG. 7, the apparatus 700 includes:
  • an acquisition unit 710 configured to perform acquisition of an image to be processed, and identify a hair region from the image to be processed
  • a determining unit 720 configured to perform determining the hair direction information in the hair region
  • a generating unit 730 configured to generate a hair texture image based on the hair direction information and a preset hair texture material
  • the fusion unit 740 is configured to perform fusion processing on the hair texture image and the hair region in the to-be-processed image, and generate a processed image that carries the hair texture information in the hair texture image and color texture information in the hair region.
  • the determining unit 720 is further configured to acquire the hair direction corresponding to each pixel in the hair region, where the hair direction is the growth direction, at that pixel, of the hair in the image to be processed; perform local smoothing on the hair direction corresponding to each pixel to obtain a local smoothing result, the local smoothing result including the smoothed hair direction corresponding to each pixel; and use the local smoothing result as the hair direction information in the hair region.
  • the determining unit 720 is further configured to determine any pixel in the hair region as a target pixel; acquire surrounding pixels of the target pixel, the pixel distance between each surrounding pixel and the target pixel being less than a preset threshold, and the surrounding pixels and the target pixel being pixels on the same hair strand in the hair region; and fuse the hair directions corresponding to the surrounding pixels with the hair direction corresponding to the target pixel to obtain a fused hair direction for the target pixel, which is used as the local smoothing result.
  • the fusion unit 740 is further configured to perform noise filtering on the hair region to obtain a noise-filtered image, and to fuse the hair texture image with the noise-filtered image to obtain the processed image.
  • the fusion unit 740 is further configured to extract, from the image to be processed, the hair highlight information in the hair region, and to add the hair highlight information to the processed image to obtain a fill-light hair image.
  • the fusion unit 740 is further configured to fuse the hair highlight information into the hair texture image to generate a hair highlight image, the hair highlight effect in the hair highlight image being determined according to preset highlight effect parameters, and to fuse the hair highlight image with the processed image to obtain the fill-light hair image, wherein the hair highlight distribution area in the fill-light hair image is consistent with the hair highlight distribution area in the hair highlight image.
  • the generating unit 730 is further configured to acquire a hair texture material image and, along the smoothed hair direction corresponding to each pixel, adjust the direction of the texture in the hair texture material image to obtain an adjusted hair texture material image, in which the texture direction corresponding to each pixel is consistent with the smoothed hair direction corresponding to that pixel; the adjusted hair texture material image is used as the hair texture image.
  • FIG. 8 is a block diagram of an electronic device 800 for executing an image processing method according to an exemplary embodiment.
  • electronic device 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, fitness device, personal digital assistant, and the like.
  • an electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814 and communication component 816.
  • the processing component 802 generally controls the overall operations of the electronic device 800, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 802 can include one or more processors 820 to execute instructions to perform all or some of the steps of the methods described above.
  • processing component 802 may include one or more modules that facilitate interaction between processing component 802 and other components.
  • processing component 802 may include a multimedia module to facilitate interaction between multimedia component 808 and processing component 802.
  • Memory 804 is configured to store various types of data to support operation at electronic device 800. Examples of such data include instructions for any application or method operating on electronic device 800, contact data, phonebook data, messages, pictures, videos, and the like. Memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • Power component 806 provides power to the various components of electronic device 800 and may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for electronic device 800.
  • Multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP).
  • the screen may be implemented as a touch screen to receive input signals from a user.
  • the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundaries of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action.
  • the multimedia component 808 includes a front-facing camera and/or a rear-facing camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data.
  • Each of the front and rear cameras may be a fixed optical lens system or have focusing and optical zoom capability.
  • Audio component 810 is configured to output and/or input audio signals.
  • audio component 810 includes a microphone (MIC) that is configured to receive external audio signals when electronic device 800 is in operating modes, such as call mode, recording mode, and voice recognition mode. The received audio signal may be further stored in memory 804 or transmitted via communication component 816 .
  • audio component 810 also includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to: home button, volume buttons, start button, and lock button.
  • Sensor assembly 814 includes one or more sensors for providing status assessment of various aspects of electronic device 800 .
  • the sensor assembly 814 can detect the on/off state of the electronic device 800 and the relative positioning of components, for example the display and keypad of the electronic device 800; it can also detect a change in position of the electronic device 800 or of one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and changes in its temperature.
  • Sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact.
  • Sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 816 is configured to facilitate wired or wireless communication between electronic device 800 and other devices.
  • Electronic device 800 may access wireless networks based on communication standards, such as WiFi, carrier networks (eg, 2G, 3G, 4G, or 5G), or a combination thereof.
  • the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, to perform the above method.
  • non-transitory computer-readable storage medium including instructions, such as a memory 804 including instructions, executable by the processor 820 of the electronic device 800 to accomplish the above method.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

An image processing method and apparatus, the method including: acquiring an image to be processed, and identifying a hair region from the image to be processed (S210); determining hair direction information in the hair region (S220); generating a hair texture image based on the hair direction information and a preset hair texture material (S230); and fusing the hair texture image with the hair region in the image to be processed to generate a processed image, the processed image carrying the hair texture information in the hair texture image and the color texture information in the hair region (S240).

Description

Image processing method and apparatus
This application claims priority to Chinese Patent Application No. 202011364522.1, filed with the China National Intellectual Property Administration on November 27, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the technical field of image processing, and in particular to an image processing method, an image processing apparatus, an electronic device, and a storage medium.
Background
As the photographic capabilities of smartphones keep improving, more and more people use them to shoot photos and videos to record memorable moments of their lives.
When shooting videos or photos with a smartphone, users often apply image beautification software installed on the phone to the captured images, for example recoloring or softening the hair in a portrait.
Summary
The present disclosure provides an image processing method, apparatus, electronic device, and storage medium.
According to a first aspect of embodiments of the present disclosure, an image processing method is provided, including:
acquiring an image to be processed, and identifying a hair region from the image to be processed;
determining hair direction information in the hair region;
generating a hair texture image based on the hair direction information and a preset hair texture material;
fusing the hair texture image with the hair region in the image to be processed to generate a processed image, the processed image carrying the hair texture information in the hair texture image and the color texture information in the hair region.
In some embodiments, determining the hair direction information in the hair region includes:
acquiring a hair direction corresponding to each pixel in the hair region, the hair direction being the growth direction, at that pixel, of the hair in the image to be processed;
performing local smoothing on the hair direction corresponding to each pixel to obtain a local smoothing result, the local smoothing result including a smoothed hair direction corresponding to each pixel;
using the local smoothing result as the hair direction information in the hair region.
In some embodiments, performing local smoothing on the hair direction corresponding to each pixel to obtain a local smoothing result includes:
determining any pixel in the hair region as a target pixel;
acquiring surrounding pixels of the target pixel, the pixel distance between each surrounding pixel and the target pixel being less than a preset threshold, and the surrounding pixels and the target pixel being pixels on the same hair strand in the hair region;
fusing the hair directions corresponding to the surrounding pixels with the hair direction corresponding to the target pixel to obtain a fused hair direction corresponding to the target pixel, and using the fused hair direction corresponding to the target pixel as the local smoothing result.
In some embodiments, fusing the hair texture image with the hair region in the image to be processed to generate a processed image includes:
performing noise filtering on the hair region to obtain a noise-filtered image;
fusing the hair texture image with the noise-filtered image to obtain the processed image.
In some embodiments, the image processing method further includes:
extracting, from the image to be processed, hair highlight information in the hair region;
adding the hair highlight information to the processed image to obtain a fill-light hair image.
In some embodiments, adding the hair highlight information to the processed image to obtain a fill-light hair image includes:
fusing the hair highlight information into the hair texture image to generate a hair highlight image, the hair highlight effect in the hair highlight image being determined according to preset highlight effect parameters;
fusing the hair highlight image with the processed image to obtain the fill-light hair image, wherein the hair highlight distribution area in the fill-light hair image is consistent with the hair highlight distribution area in the hair highlight image.
In some embodiments, generating a hair texture image based on the hair direction information and a preset hair texture material includes:
acquiring a hair texture material image containing the hair texture material;
adjusting the direction of the texture in the hair texture material image along the smoothed hair direction corresponding to each pixel, to obtain an adjusted hair texture material image in which the texture direction corresponding to each pixel is consistent with the smoothed hair direction corresponding to that pixel;
using the adjusted hair texture material image as the hair texture image.
According to a second aspect of embodiments of the present disclosure, an image processing apparatus is provided, including:
an acquisition unit configured to acquire an image to be processed and identify a hair region from the image to be processed;
a determining unit configured to determine hair direction information in the hair region;
a generating unit configured to generate a hair texture image based on the hair direction information and a preset hair texture material;
a fusion unit configured to fuse the hair texture image with the hair region in the image to be processed to generate a processed image, the processed image carrying the hair texture information in the hair texture image and the color texture information in the hair region.
In some embodiments, the determining unit is further configured to acquire a hair direction corresponding to each pixel in the hair region, the hair direction being the growth direction, at that pixel, of the hair in the image to be processed; perform local smoothing on the hair direction corresponding to each pixel to obtain a local smoothing result, the local smoothing result including a smoothed hair direction corresponding to each pixel; and use the local smoothing result as the hair direction information in the hair region.
In some embodiments, the determining unit is further configured to determine any pixel in the hair region as a target pixel; acquire surrounding pixels of the target pixel, the pixel distance between each surrounding pixel and the target pixel being less than a preset threshold, and the surrounding pixels and the target pixel being pixels on the same hair strand in the hair region; and fuse the hair directions corresponding to the surrounding pixels with the hair direction corresponding to the target pixel to obtain a fused hair direction for the target pixel, which is used as the local smoothing result.
In some embodiments, the fusion unit is further configured to perform noise filtering on the hair region to obtain a noise-filtered image, and to fuse the hair texture image with the noise-filtered image to obtain the processed image.
In some embodiments, the fusion unit is further configured to extract, from the image to be processed, hair highlight information in the hair region, and to add the hair highlight information to the processed image to obtain a fill-light hair image.
In some embodiments, the fusion unit is further configured to fuse the hair highlight information into the hair texture image to generate a hair highlight image, the hair highlight effect in the hair highlight image being determined according to preset highlight effect parameters, and to fuse the hair highlight image with the processed image to obtain the fill-light hair image, wherein the hair highlight distribution area in the fill-light hair image is consistent with the hair highlight distribution area in the hair highlight image.
In some embodiments, the generating unit is further configured to acquire a hair texture material image; adjust the direction of the texture in the hair texture material image along the smoothed hair direction corresponding to each pixel, to obtain an adjusted hair texture material image in which the texture direction corresponding to each pixel is consistent with the smoothed hair direction corresponding to that pixel; and use the adjusted hair texture material image as the hair texture image.
According to a third aspect of embodiments of the present disclosure, an electronic device is provided, including a memory and a processor, the memory storing a computer program, and the processor, when executing the computer program, implementing the image processing method according to the first aspect or any embodiment of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, a storage medium is provided, on which a computer program is stored, the computer program, when executed by a processor, implementing the image processing method according to the first aspect or any embodiment of the first aspect.
According to a fifth aspect of embodiments of the present disclosure, a computer program product is provided, the program product including a computer program stored in a readable storage medium, at least one processor of a device reading and executing the computer program from the readable storage medium, so that the device performs the image processing method according to the first aspect or any embodiment of the first aspect.
In embodiments of the present disclosure, an image to be processed is acquired and a hair region is identified from it; hair direction information in the hair region is determined; a hair texture image is generated based on the hair direction information and a preset hair texture material; and the hair texture image is fused with the hair region in the image to be processed to generate a processed image that carries the hair texture information in the hair texture image and the color texture information in the hair region. In this way, hair texture information adapted to the hair flow in the hair region can be added to the image to be processed, so that the resulting processed image has fine strand detail, avoiding the loss of hair detail that occurs when conventional techniques beautify hair images.
Brief Description of the Drawings
FIG. 1 is a diagram of an application environment of an image processing method according to an exemplary embodiment.
FIG. 2 is a flowchart of an image processing method according to an exemplary embodiment.
FIG. 3 is a schematic diagram of hair highlight effects according to an exemplary embodiment.
FIG. 4 is a schematic flowchart of a method for generating a hair texture image according to an exemplary embodiment.
FIG. 5 is a flowchart of an image processing method according to an exemplary embodiment.
FIG. 6 is a processing flowchart of an image processing method according to an exemplary embodiment.
FIG. 7 is a block diagram of an image processing apparatus according to an exemplary embodiment.
FIG. 8 is a diagram of the internal structure of an electronic device according to an exemplary embodiment.
Detailed Description
To enable those of ordinary skill in the art to better understand the technical solutions of the present disclosure, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings.
It should be noted that the terms "first", "second", and the like in the specification, claims, and drawings of the present disclosure are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present disclosure described herein can be implemented in orders other than those illustrated or described herein. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure.
The image processing method provided by the present disclosure can be applied in the application environment shown in FIG. 1, in which the electronic device 110 acquires an image to be processed and identifies a hair region from it; determines hair direction information in the hair region; generates a hair texture image based on the hair direction information and a preset hair texture material; and fuses the hair texture image with the hair region in the image to be processed to generate a processed image carrying the hair texture information in the hair texture image and the color texture information in the hair region. In practice, the electronic device 110 may be, but is not limited to, a personal computer, laptop, smartphone, tablet, or portable wearable device.
FIG. 2 is a flowchart of an image processing method according to an exemplary embodiment. The method may be executed by the electronic device of FIG. 1 and, as shown in FIG. 2, includes the following steps.
In step S210, an image to be processed is acquired, and a hair region is identified from the image to be processed.
Here, the image to be processed refers to an image on which image processing is to be performed.
The image may be one stored in the electronic device in advance, or one captured by the electronic device in real time.
The hair region is the region occupied by the hair of a person or animal in the image; for example, in an image containing a person it is the region of the person's hair, and in an image containing an animal it is the region of the animal's fur.
In some embodiments, the electronic device may first run recognition on the image to be processed to determine the hair region, that is, to obtain a hair region mask image. For example, the electronic device may obtain a predetermined number of sample images in advance, annotate them, and train a target recognition model on the annotated samples; the trained model is then used to recognize the image and determine the hair region in it.
It should be noted that the hair region may also be identified from the image to be processed in other ways, which the embodiments of the present disclosure do not limit.
In step S220, hair direction information in the hair region is determined.
In some embodiments, based on the hair region identified from the image to be processed, the electronic device may analyze the hair region, for example with a neural network or by image analysis, determine the hair direction corresponding to each pixel in the hair region, and use the per-pixel hair directions as the hair direction information in the hair region.
In step S230, a hair texture image is generated based on the hair direction information and a preset hair texture material.
Here, the preset hair texture material refers to the material for the hair texture to be added to the image to be processed.
The hair texture image carries hair texture information adapted to the hair direction information in the hair region.
In some embodiments, the electronic device may generate the hair texture image based on the hair direction information and the preset hair texture material. In some embodiments, the electronic device may transform the texture lines in the hair texture material based on the hair direction corresponding to each pixel in the hair region, to obtain the hair texture image. The hair texture image contains strand texture, and the hair direction corresponding to each pixel of that strand texture is consistent with the hair direction corresponding to each pixel in the hair region.
In step S240, the hair texture image and the hair region in the image to be processed are fused to generate a processed image that carries the hair texture information in the hair texture image and the color texture information in the hair region.
In some embodiments, the electronic device fuses the hair texture image with the hair region in the image to be processed to generate the processed image, so that the processed image carries both the hair texture information of the hair texture image and the color texture information of the hair region, that is, the hair in the processed image shows richer strand detail.
In the image processing method above, an image to be processed is acquired and a hair region is identified from it; hair direction information in the hair region is determined; a hair texture image is generated based on the hair direction information and a preset hair texture material; and the hair texture image is fused with the hair region in the image to be processed to generate a processed image carrying the hair texture information in the hair texture image and the color texture information in the hair region. In this way, hair texture information adapted to the hair flow in the hair region is added to the image to be processed, giving the processed image fine strand detail and avoiding the loss of hair detail seen when conventional techniques beautify hair images.
In an exemplary embodiment, determining the hair direction information in the hair region includes: acquiring a hair direction corresponding to each pixel in the hair region, the hair direction being the growth direction, at that pixel, of the hair in the image to be processed; performing local smoothing on the per-pixel hair directions to obtain a local smoothing result that includes a smoothed hair direction per pixel; and using the local smoothing result as the hair direction information in the hair region.
Here, the hair direction is the growth direction, at the pixel, of the hair in the image to be processed.
In some embodiments, determining the hair direction information proceeds as follows: the electronic device acquires the hair direction corresponding to each pixel in the hair region. In some embodiments, the electronic device may compute the gradient of the hair region and determine each pixel's hair direction from the gradient result. Alternatively, the electronic device may convolve the neighborhood pattern of each pixel in the hair region with filters along different directions, and obtain the hair direction at each pixel by finding the maximum filter response.
In addition, the electronic device may obtain a predetermined number of sample images in advance, annotate the hair direction of each pixel in the samples, and train a target recognition model on the annotated data; the model then recognizes the image and determines the hair direction of each pixel in the hair region of the image to be processed.
The electronic device performs local smoothing on the hair direction of each pixel to obtain a local smoothing result, which is used as the hair direction information in the hair region. The local smoothing result includes the smoothed hair direction of each pixel.
With the technical solution of this embodiment, the hair direction of each pixel in the hair region is acquired and locally smoothed to obtain a smoothed per-pixel hair direction, which makes the strands in the processed image look smooth and gives the processed image fine strand detail.
In an exemplary embodiment, performing local smoothing on the per-pixel hair directions to obtain a local smoothing result includes: determining any pixel in the hair region as a target pixel; acquiring surrounding pixels of the target pixel, the pixel distance between each surrounding pixel and the target pixel being less than a preset threshold, and the surrounding pixels and the target pixel lying on the same hair strand in the hair region; and fusing the hair directions of the surrounding pixels with the hair direction of the target pixel to obtain a fused hair direction for the target pixel, which is used as the local smoothing result.
Here, the pixel distance between each surrounding pixel and the target pixel is less than the preset threshold.
In some embodiments, the local smoothing proceeds as follows: any pixel in the hair region is determined as the target pixel; its surrounding pixels are acquired; and the hair directions of the surrounding pixels are fused with that of the target pixel to obtain the fused hair direction of the target pixel. In some embodiments, the electronic device may take a weighted average of the hair directions of the surrounding pixels and the target pixel, and use the weighted-average hair direction as the fused hair direction of the target pixel.
Finally, the electronic device uses the fused hair direction of each target pixel as that pixel's smoothed hair direction, thereby obtaining the local smoothing result.
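The weighted averaging of neighboring directions can be illustrated as follows. One subtlety (our own assumption, not stated in the patent): averaging angles naively lets nearly-opposite orientations cancel, so this sketch averages in doubled-angle vector form; the function name `smooth_orientation` and the box-filter neighborhood are also illustrative choices.

```python
import numpy as np

def smooth_orientation(theta, radius=2):
    """Locally smooth a per-pixel orientation field (radians).

    Orientations are averaged as unit vectors of the doubled angle so
    that theta and theta + pi reinforce instead of cancelling.
    """
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    k = 2 * radius + 1
    kernel = np.ones((k, k)) / (k * k)

    def box(a):
        # Box filter via padded sliding windows (edge-replicated).
        p = np.pad(a, radius, mode="edge")
        win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
        return (win * kernel).sum(axis=(-1, -2))

    return 0.5 * np.arctan2(box(s), box(c))

# A noisy field around 0.3 rad should come back close to 0.3 rad.
noisy = np.full((16, 16), 0.3) + np.random.default_rng(0).normal(0, 0.05, (16, 16))
smoothed = smooth_orientation(noisy)
```

A production version would restrict the neighborhood to pixels on the same strand, as the embodiment describes; the box window here ignores that constraint for brevity.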
With the technical solution of this embodiment, any pixel in the hair region is taken as a target pixel; its surrounding pixels, whose pixel distance to the target pixel is less than a preset threshold, are acquired; and the hair directions of the surrounding pixels are fused with that of the target pixel to obtain a fused hair direction used as the smoothed hair direction. Fusing the directions of the surrounding pixels in this way smooths the target pixel's hair direction accurately and locally, so the strand texture in the processed image produced from this smoothing result looks smoother.
In an exemplary embodiment, fusing the hair texture image with the hair region in the image to be processed to generate the processed image includes: performing noise filtering on the hair region to obtain a noise-filtered image; and fusing the hair texture image with the noise-filtered image to obtain the processed image.
In some embodiments, in the course of fusing the hair texture image with the hair region to generate the processed image, the electronic device may perform noise filtering on the hair region to obtain a noise-filtered image. In some embodiments, the electronic device may, based on the hair direction of each pixel in the hair region, smooth the hair region, for example with an over-smoothing operation, so as to filter out speckle noise in the hair region and obtain the noise-filtered image.
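One plausible way to realize direction-aware noise filtering is to average each pixel with samples taken along its own hair direction, so speckle is suppressed while strand structure survives. This is a sketch under assumptions: `directional_smooth`, the sample count, and the nearest-neighbour sampling are all our own choices, not the patent's implementation.

```python
import numpy as np

def directional_smooth(img, theta, length=5):
    """Average each pixel with `length` samples taken along its hair
    direction `theta` (radians, scalar or per-pixel array)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    acc = np.zeros_like(img, dtype=float)
    offsets = np.arange(length) - length // 2
    for t in offsets:
        # Step t pixels along the direction, clipping at the border.
        sx = np.clip(np.round(xs + t * np.cos(theta)).astype(int), 0, w - 1)
        sy = np.clip(np.round(ys + t * np.sin(theta)).astype(int), 0, h - 1)
        acc += img[sy, sx]
    return acc / length

# A constant region is left unchanged by the filter.
smoothed_region = directional_smooth(np.ones((8, 8)), 0.0)
```

Bilinear sampling and masking to the hair region would be natural refinements of this sketch.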
The electronic device then fuses the hair texture image with the noise-filtered image to obtain the processed image. In some embodiments, the electronic device fuses the two images so that the hair texture information of the hair texture image is combined into the noise-filtered image, yielding a processed image that carries both the hair texture information of the hair texture image and the color texture information of the hair region.
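A minimal sketch of this fusion step, assuming a simple per-pixel alpha blend restricted to the hair mask (the patent does not specify the blend operator; `fuse_texture` and the `alpha` weight are our assumptions):

```python
import numpy as np

def fuse_texture(texture, denoised, mask, alpha=0.6):
    """Blend the strand-texture image into the noise-filtered image.

    `mask` is the hair-region mask in [0, 1]; only pixels inside the
    hair region receive texture, so the rest of the image keeps the
    values of the noise-filtered input.
    """
    w = alpha * np.clip(mask, 0.0, 1.0)
    return (1.0 - w) * denoised + w * texture

# Toy example: texture only changes pixels where the mask is on.
denoised = np.full((4, 4), 0.5)
texture = np.ones((4, 4))
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0
fused = fuse_texture(texture, denoised, mask)
```

Blending inside the mask preserves the color texture of the hair region while layering the strand detail on top of it.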
With the technical solution of this embodiment, during the fusion of the hair texture image and the hair region of the image to be processed, noise filtering is first applied to the hair region to obtain a noise-filtered image, which is then fused with the hair texture image to obtain the processed image. The processed image therefore carries the hair texture information of the hair texture image and the color texture information of the hair region without redundant speckle noise, improving the overall quality of the processed image.
In an exemplary embodiment, the image processing method further includes: extracting, from the image to be processed, the hair highlight information in the hair region; and adding the hair highlight information to the processed image to obtain a fill-light hair image.
Here, adding the hair highlight information to the processed image to obtain the fill-light hair image includes: fusing the hair highlight information into the hair texture image to generate a hair highlight image; and fusing the hair highlight image with the processed image to obtain the fill-light hair image.
The hair highlight distribution area in the fill-light hair image is consistent with the hair highlight distribution area in the hair highlight image.
The hair highlight distribution area in the hair highlight image is determined according to preset highlight effect parameters.
The hair highlight distribution area in the processed image is consistent with the hair highlight distribution area in the hair highlight image.
The fill-light hair image refers to a processed image that carries the hair highlight information of the image to be processed.
In some embodiments, the electronic device may extract, from the image to be processed, the hair highlight information in the hair region, and add the hair highlight information to the processed image to obtain the fill-light hair image; the fill-light hair image then carries the hair texture information of the hair texture image, the color texture information of the hair region, and the hair highlight information at the same time.
The process by which the electronic device adds the hair highlight information to the processed image is as follows: the electronic device combines the hair highlight information into the hair texture image to generate the hair highlight image. In some embodiments, after extracting the hair highlight information of the hair region, that is, the highlight area, the electronic device may combine the highlight area with the hair texture image to obtain an initial hair highlight distribution area; it may then use physically based rendering with preset highlight effect parameters to adjust the area and extent of the highlights in the initial distribution, thereby generating the hair highlight image.
It should be noted that different highlight effect parameters render hair highlight images with different highlight effects. In practice, the highlight effects may include, but are not limited to, band highlights, strand highlights, spot highlights, and highlight composites.
For ease of understanding, FIG. 3 provides an example of hair highlight effects, in which 310 is a band highlight, 320 a strand highlight, 330 a spot highlight, and 340 a highlight composite.
The electronic device fuses the hair highlight image with the processed image to obtain the fill-light hair image, thereby merging the hair highlight information of the hair region into the processed image.
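A possible sketch of highlight extraction and re-addition, assuming a percentile threshold for "highlight" pixels and a screen blend for the additive step. Both choices are our assumptions, and the patent's physically based rendering of highlight shapes is not reproduced here.

```python
import numpy as np

def extract_highlights(gray, mask, pct=90):
    """Soft highlight map: hair-region pixels brighter than the
    `pct`-th percentile of hair luminance (values in [0, 1])."""
    thr = np.percentile(gray[mask > 0], pct)
    soft = np.clip((gray - thr) / max(1.0 - thr, 1e-6), 0.0, 1.0)
    return soft * mask

def add_highlights(image, highlight, strength=0.8):
    """Screen-blend the highlight map over the processed image;
    a screen blend can only brighten, never darken, a pixel."""
    return 1.0 - (1.0 - image) * (1.0 - strength * highlight)

rng = np.random.default_rng(0)
gray = rng.uniform(0.2, 0.9, (32, 32))  # stand-in hair luminance
mask = np.ones((32, 32))                # stand-in hair mask
high = extract_highlights(gray, mask)
lit = add_highlights(gray, high)
```

Shaping `high` (into bands, strands, or spots) before blending would correspond to applying the preset highlight effect parameters described above.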
With the technical solution of this embodiment, the hair highlight information of the hair region is extracted from the image to be processed and fused into the hair texture image to generate a hair highlight image; the hair highlight image is then fused with the processed image, so the resulting fill-light hair image carries the hair highlight information of the image to be processed while retaining fine strand detail. This raises the realism of the hair region in the fill-light hair image, so that the fill-light hair image can meet users' image processing needs.
In an exemplary embodiment, generating the hair texture image based on the hair direction information and the preset hair texture material includes: acquiring a hair texture material image; adjusting the direction of the texture in the hair texture material image along the smoothed hair direction corresponding to each pixel, to obtain an adjusted hair texture material image; and using the adjusted hair texture material image as the hair texture image.
In the adjusted hair texture material image, the texture direction corresponding to each pixel is consistent with the smoothed hair direction corresponding to that pixel.
In some embodiments, generating the hair texture image proceeds as follows: the electronic device acquires a hair texture material image containing the hair texture material, and adjusts the direction of the texture in it along each pixel's smoothed hair direction, that is, applies linear transforms to the individual texture lines in the material image. For example, the electronic device may translate, rotate, and scale the texture lines along each pixel's smoothed hair direction, collaging them into an adjusted hair texture material image with rich strand detail, so that the texture direction at each pixel is consistent with the smoothed hair direction at that pixel. Finally, the electronic device uses the adjusted hair texture material image as the hair texture image, that is, the strand detail map. In practice, hair texture material images with different hair texture characteristics may be used according to the user's needs, so that hair texture images with different characteristics are obtained.
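The rotation component of those linear transforms can be approximated by rotating a material patch to the local smoothed direction. A nearest-neighbour sketch (the function name and the inverse-mapping scheme are our assumptions, not the patent's implementation):

```python
import numpy as np

def rotate_patch(patch, theta):
    """Nearest-neighbour rotation of a square texture patch by `theta`
    radians (sign convention follows image row/column axes), used to
    align strand texture with the local smoothed hair direction."""
    h, w = patch.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse-map each output coordinate back into the source patch.
    x0 = np.cos(-theta) * (xs - cx) - np.sin(-theta) * (ys - cy) + cx
    y0 = np.sin(-theta) * (xs - cx) + np.cos(-theta) * (ys - cy) + cy
    xi = np.clip(np.round(x0).astype(int), 0, w - 1)
    yi = np.clip(np.round(y0).astype(int), 0, h - 1)
    return patch[yi, xi]

# Vertical stripes become horizontal after a quarter turn.
stripes = np.tile(np.arange(5.0), (5, 1))
turned = rotate_patch(stripes, np.pi / 2)
```

Bilinear interpolation and the translation/scale components would be added on top of this sketch when collaging patches over the hair region.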
For ease of understanding, FIG. 4 provides a schematic flowchart of a method for generating a hair texture image: the electronic device obtains a hair texture material image 420 for the image to be processed 410, and applies linear transforms to the texture lines in the material image along the hair flow of each strand in the hair region, obtaining a hair texture image 430.
With the technical solution of this embodiment, a hair texture material image is acquired and the direction of its texture is adjusted along the smoothed hair direction of each pixel to obtain an adjusted hair texture material image, so the generated hair texture image has rich and realistic strand detail, raising the realism of the hair region in the fill-light hair image.
FIG. 5 is a flowchart of another image processing method according to an exemplary embodiment. As shown in FIG. 5, the method, used in the image processing method of FIG. 1, includes the following steps. In step S502, an image to be processed is acquired, and a hair region is identified from it. In step S504, the hair direction corresponding to each pixel in the hair region is acquired, the hair direction being the growth direction, at that pixel, of the hair in the image to be processed. In step S506, local smoothing is performed on each pixel's hair direction to obtain a local smoothing result, used as the hair direction information in the hair region, the result including a smoothed hair direction per pixel. In step S508, a hair texture material image is acquired. In step S510, the direction of the texture in the hair texture material image is adjusted along each pixel's smoothed hair direction, to obtain an adjusted hair texture material image in which the texture direction at each pixel is consistent with the smoothed hair direction at that pixel. In step S512, the adjusted hair texture material image is used as the hair texture image. In step S514, noise filtering is performed on the hair region to obtain a noise-filtered image. In step S516, the hair texture image and the noise-filtered image are fused to obtain a processed image carrying the hair texture information of the hair texture image and the color texture information of the hair region. For the specific limitations of the above steps, refer to the specific limitations of the image processing method above, which are not repeated here.
It should be understood that, although the steps in the flowcharts of FIG. 2 and FIG. 5 are displayed in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, their execution order is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in FIG. 2 and FIG. 5 may include multiple sub-steps or stages, which need not be completed at the same moment but may be executed at different times; nor need their execution order be sequential, as they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
For ease of understanding, FIG. 6 provides a processing flowchart of an image processing method: the electronic device acquires the image to be processed 610 and identifies the hair region from it, obtaining a hair region mask 620; the electronic device determines the hair flow of each strand in the hair region, obtaining a hair direction map 630; the electronic device generates a hair texture image 640 based on the hair flow and the hair texture material to be added; based on the hair direction of each pixel in the hair region, the hair region is smoothed to obtain a smoothed image 650; the hair texture image and the smoothed image are fused to obtain a fused image 660; the hair highlight information of the hair region is extracted from the image to be processed, yielding a highlight distribution map 670 carrying the hair highlight information; and the electronic device adds the hair highlight information to the fused image to obtain the processed image 680.
FIG. 7 is a block diagram of an image processing apparatus 700 according to an exemplary embodiment. Referring to FIG. 7, the apparatus 700 includes:
an acquisition unit 710 configured to acquire an image to be processed and identify a hair region from the image to be processed;
a determining unit 720 configured to determine hair direction information in the hair region;
a generating unit 730 configured to generate a hair texture image based on the hair direction information and a preset hair texture material;
a fusion unit 740 configured to fuse the hair texture image with the hair region in the image to be processed to generate a processed image, the processed image carrying the hair texture information in the hair texture image and the color texture information in the hair region.
In an exemplary embodiment, the determining unit 720 is further configured to acquire a hair direction corresponding to each pixel in the hair region, the hair direction being the growth direction, at that pixel, of the hair in the image to be processed; perform local smoothing on the hair direction corresponding to each pixel to obtain a local smoothing result, the local smoothing result including a smoothed hair direction corresponding to each pixel; and use the local smoothing result as the hair direction information in the hair region.
In an exemplary embodiment, the determining unit 720 is further configured to determine any pixel in the hair region as a target pixel; acquire surrounding pixels of the target pixel, the pixel distance between each surrounding pixel and the target pixel being less than a preset threshold, and the surrounding pixels and the target pixel being pixels on the same hair strand in the hair region; and fuse the hair directions corresponding to the surrounding pixels with the hair direction corresponding to the target pixel to obtain a fused hair direction for the target pixel, which is used as the local smoothing result.
In an exemplary embodiment, the fusion unit 740 is further configured to perform noise filtering on the hair region to obtain a noise-filtered image, and to fuse the hair texture image with the noise-filtered image to obtain the processed image.
In an exemplary embodiment, the fusion unit 740 is further configured to extract, from the image to be processed, hair highlight information in the hair region, and to add the hair highlight information to the processed image to obtain a fill-light hair image.
In an exemplary embodiment, the fusion unit 740 is further configured to fuse the hair highlight information into the hair texture image to generate a hair highlight image, the hair highlight effect in the hair highlight image being determined according to preset highlight effect parameters, and to fuse the hair highlight image with the processed image to obtain the fill-light hair image, wherein the hair highlight distribution area in the fill-light hair image is consistent with the hair highlight distribution area in the hair highlight image.
In an exemplary embodiment, the generating unit 730 is further configured to acquire a hair texture material image; adjust the direction of the texture in the hair texture material image along the smoothed hair direction corresponding to each pixel, to obtain an adjusted hair texture material image in which the texture direction corresponding to each pixel is consistent with the smoothed hair direction corresponding to that pixel; and use the adjusted hair texture material image as the hair texture image.
With regard to the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method and will not be elaborated here.
FIG. 8 is a block diagram of an electronic device 800 for executing an image processing method according to an exemplary embodiment. For example, the electronic device 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, fitness device, personal digital assistant, or the like.
Referring to FIG. 8, the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls the overall operation of the electronic device 800, such as operations associated with display, phone calls, data communication, camera operation, and recording. The processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the method described above. In addition, the processing component 802 may include one or more modules that facilitate interaction between the processing component 802 and other components; for example, it may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and the like. The memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The power component 806 provides power to the various components of the electronic device 800 and may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen providing an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP); if it includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the panel; the touch sensors may sense not only the boundary of a touch or swipe action but also the duration and pressure associated with it. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera, which may receive external multimedia data when the electronic device 800 is in an operating mode such as shooting mode or video mode. Each of the front and rear cameras may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operating mode such as call mode, recording mode, or speech recognition mode. The received audio signal may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, the audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, and the like. These buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.
The sensor component 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800. For example, the sensor component 814 can detect the on/off state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; it can also detect a change in position of the electronic device 800 or of one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and changes in its temperature. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, and a light sensor, such as a CMOS or CCD image sensor, for imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, a carrier network (e.g., 2G, 3G, 4G, or 5G), or a combination thereof. In one exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication; for example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, to perform the above method.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 804 including instructions, the instructions being executable by the processor 820 of the electronic device 800 to complete the above method. For example, the non-transitory computer-readable storage medium may be a ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, or the like.
All embodiments of the present disclosure may be executed alone or in combination with other embodiments, and all are regarded as within the scope of protection claimed by the present disclosure.
Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the disclosure indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the accompanying drawings, and various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (22)

  1. An image processing method, characterized in that the method comprises:
    acquiring an image to be processed, and identifying a hair region from the image to be processed;
    determining hair direction information in the hair region;
    generating a hair texture image based on the hair direction information and a preset hair texture material;
    fusing the hair texture image with the hair region in the image to be processed to generate a processed image, the processed image carrying the hair texture information in the hair texture image and the color texture information in the hair region.
  2. The image processing method according to claim 1, characterized in that determining the hair direction information in the hair region comprises:
    acquiring a hair direction corresponding to each pixel in the hair region, the hair direction being the growth direction, at that pixel, of the hair in the image to be processed;
    performing local smoothing on the hair direction corresponding to each pixel to obtain a local smoothing result, the local smoothing result comprising a smoothed hair direction corresponding to each pixel;
    using the local smoothing result as the hair direction information in the hair region.
  3. The image processing method according to claim 2, characterized in that performing local smoothing on the hair direction corresponding to each pixel to obtain a local smoothing result comprises:
    determining any pixel in the hair region as a target pixel;
    acquiring, as surrounding pixels of the target pixel, pixels in the hair region that lie on the same hair strand as the target pixel, the pixel distance between each surrounding pixel and the target pixel being less than a preset threshold;
    fusing the hair directions corresponding to the surrounding pixels with the hair direction corresponding to the target pixel to obtain a fused hair direction corresponding to the target pixel, and using the fused hair direction corresponding to the target pixel as the local smoothing result.
  4. The image processing method according to claim 1, characterized in that fusing the hair texture image with the hair region in the image to be processed to generate a processed image comprises:
    performing noise filtering on the hair region to obtain a noise-filtered image;
    fusing the hair texture image with the noise-filtered image to obtain the processed image.
  5. The image processing method according to claim 4, characterized in that the method further comprises:
    extracting, from the image to be processed, hair highlight information in the hair region;
    adding the hair highlight information to the processed image to obtain a fill-light hair image.
  6. The image processing method according to claim 5, characterized in that adding the hair highlight information to the processed image to obtain a fill-light hair image comprises:
    fusing the hair highlight information into the hair texture image to generate a hair highlight image, the hair highlight effect in the hair highlight image being determined according to preset highlight effect parameters;
    fusing the hair highlight image with the processed image to obtain the fill-light hair image, wherein the hair highlight distribution area in the fill-light hair image is consistent with the hair highlight distribution area in the hair highlight image.
  7. The image processing method according to any one of claims 2 to 6, characterized in that generating a hair texture image based on the hair direction information and a preset hair texture material comprises:
    acquiring a hair texture material image containing the hair texture material;
    adjusting, along the smoothed hair direction corresponding to each pixel, the direction of the texture in the hair texture material image to obtain an adjusted hair texture material image, the texture direction corresponding to each pixel in the adjusted hair texture material image being consistent with the smoothed hair direction corresponding to that pixel;
    using the adjusted hair texture material image as the hair texture image.
  8. An image processing apparatus, characterized by comprising:
    an acquisition unit configured to acquire an image to be processed and identify a hair region from the image to be processed;
    a determining unit configured to determine hair direction information in the hair region;
    a generating unit configured to generate a hair texture image based on the hair direction information and a preset hair texture material;
    a fusion unit configured to fuse the hair texture image with the hair region in the image to be processed to generate a processed image, the processed image carrying the hair texture information in the hair texture image and the color texture information in the hair region.
  9. The image processing apparatus according to claim 8, characterized in that the determining unit is further configured to acquire a hair direction corresponding to each pixel in the hair region, the hair direction being the growth direction, at that pixel, of the hair in the image to be processed; perform local smoothing on the hair direction corresponding to each pixel to obtain a local smoothing result, the local smoothing result comprising a smoothed hair direction corresponding to each pixel; and use the local smoothing result as the hair direction information in the hair region.
  10. The image processing apparatus according to claim 9, characterized in that the determining unit is further configured to determine any pixel in the hair region as a target pixel; acquire surrounding pixels of the target pixel, the pixel distance between each surrounding pixel and the target pixel being less than a preset threshold, and the surrounding pixels and the target pixel being pixels on the same hair strand in the hair region; and fuse the hair directions corresponding to the surrounding pixels with the hair direction corresponding to the target pixel to obtain a fused hair direction corresponding to the target pixel, the fused hair direction corresponding to the target pixel being used as the local smoothing result.
  11. The image processing apparatus according to claim 8, characterized in that the fusion unit is further configured to perform noise filtering on the hair region to obtain a noise-filtered image, and to fuse the hair texture image with the noise-filtered image to obtain the processed image.
  12. The image processing apparatus according to claim 11, characterized in that the fusion unit is further configured to extract, from the image to be processed, hair highlight information in the hair region, and to add the hair highlight information to the processed image to obtain a fill-light hair image.
  13. The image processing apparatus according to claim 12, characterized in that the fusion unit is further configured to fuse the hair highlight information into the hair texture image to generate a hair highlight image, the hair highlight effect in the hair highlight image being determined according to preset highlight effect parameters; and to fuse the hair highlight image with the processed image to obtain the fill-light hair image, wherein the hair highlight distribution area in the fill-light hair image is consistent with the hair highlight distribution area in the hair highlight image.
  14. The image processing apparatus according to any one of claims 9 to 12, characterized in that the generating unit is further configured to acquire a hair texture material image containing the hair texture material; adjust, along the smoothed hair direction corresponding to each pixel, the direction of the texture in the hair texture material image to obtain an adjusted hair texture material image, the texture direction corresponding to each pixel in the adjusted hair texture material image being consistent with the smoothed hair direction corresponding to that pixel; and use the adjusted hair texture material image as the hair texture image.
  15. An electronic device, characterized by comprising:
    a processor;
    a memory for storing instructions executable by the processor;
    wherein the processor is configured to execute the instructions to implement the following steps:
    acquiring an image to be processed, and identifying a hair region from the image to be processed;
    determining hair direction information in the hair region;
    generating a hair texture image based on the hair direction information and a preset hair texture material;
    fusing the hair texture image with the hair region in the image to be processed to generate a processed image, the processed image carrying the hair texture information in the hair texture image and the color texture information in the hair region.
  16. The electronic device according to claim 15, characterized in that the processor is configured to execute the instructions to implement the following steps:
    acquiring a hair direction corresponding to each pixel in the hair region, the hair direction being the growth direction, at that pixel, of the hair in the image to be processed;
    performing local smoothing on the hair direction corresponding to each pixel to obtain a local smoothing result, the local smoothing result comprising a smoothed hair direction corresponding to each pixel;
    using the local smoothing result as the hair direction information in the hair region.
  17. The electronic device according to claim 16, characterized in that the processor is configured to execute the instructions to implement the following steps:
    determining any pixel in the hair region as a target pixel;
    acquiring surrounding pixels of the target pixel, the pixel distance between each surrounding pixel and the target pixel being less than a preset threshold, and the surrounding pixels and the target pixel being pixels on the same hair strand in the hair region;
    fusing the hair directions corresponding to the surrounding pixels with the hair direction corresponding to the target pixel to obtain a fused hair direction corresponding to the target pixel, and using the fused hair direction corresponding to the target pixel as the local smoothing result.
  18. The electronic device according to claim 15, characterized in that the processor is configured to execute the instructions to implement the following steps:
    performing noise filtering on the hair region to obtain a noise-filtered image;
    fusing the hair texture image with the noise-filtered image to obtain the processed image.
  19. The electronic device according to claim 18, characterized in that the processor is configured to execute the instructions to implement the following steps:
    extracting, from the image to be processed, hair highlight information in the hair region;
    adding the hair highlight information to the processed image to obtain a fill-light hair image.
  20. The electronic device according to claim 19, characterized in that the processor is configured to execute the instructions to implement the following steps:
    fusing the hair highlight information into the hair texture image to generate a hair highlight image, the hair highlight effect in the hair highlight image being determined according to preset highlight effect parameters;
    fusing the hair highlight image with the processed image to obtain the fill-light hair image, wherein the hair highlight distribution area in the fill-light hair image is consistent with the hair highlight distribution area in the hair highlight image.
  21. The electronic device according to any one of claims 16 to 20, wherein the processor is configured to execute the instructions to implement the following steps:
    acquiring a hair texture material image containing the hair texture material;
    adjusting, along the smoothed hair direction corresponding to each pixel, the direction of the texture in the hair texture material image to obtain an adjusted hair texture material image, the texture direction corresponding to each pixel in the adjusted hair texture material image being consistent with the smoothed hair direction corresponding to that pixel;
    using the adjusted hair texture material image as the hair texture image.
  22. A storage medium, characterized in that, when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the following steps:
    acquiring an image to be processed, and identifying a hair region from the image to be processed;
    determining hair direction information in the hair region;
    generating a hair texture image based on the hair direction information and a preset hair texture material;
    fusing the hair texture image with the hair region in the image to be processed to generate a processed image, the processed image carrying the hair texture information in the hair texture image and the color texture information in the hair region.
PCT/CN2021/106913 2020-11-27 2021-07-16 Image processing method and apparatus WO2022110837A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011364522.1A CN112330570B (zh) 2020-11-27 2020-11-27 Image processing method and apparatus, electronic device, and storage medium
CN202011364522.1 2020-11-27

Publications (1)

Publication Number Publication Date
WO2022110837A1 true WO2022110837A1 (zh) 2022-06-02

Family

ID=74309622

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/106913 WO2022110837A1 (zh) 2020-11-27 2021-07-16 图像处理方法及装置

Country Status (2)

Country Link
CN (1) CN112330570B (zh)
WO (1) WO2022110837A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116843689A (zh) * 2023-09-01 2023-10-03 山东众成菌业股份有限公司 Method for detecting surface damage of mushroom caps
CN117237244A (zh) * 2023-11-16 2023-12-15 平利县女娲茗鼎农业科技有限公司 Intelligent animal body temperature monitoring system for animal husbandry and veterinary use based on data augmentation

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
CN112330570B (zh) 2020-11-27 2024-03-12 北京达佳互联信息技术有限公司 Image processing method and apparatus, electronic device, and storage medium
CN113064539B (zh) * 2021-03-04 2022-07-29 北京达佳互联信息技术有限公司 Special effect control method and apparatus, electronic device, and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
US8249365B1 (en) * 2009-09-04 2012-08-21 Adobe Systems Incorporated Methods and apparatus for directional texture generation using sample-based texture synthesis
CN107103619A (zh) * 2017-04-19 2017-08-29 腾讯科技(上海)有限公司 Method, apparatus, and system for processing hair texture direction
CN110060321A (zh) * 2018-10-15 2019-07-26 叠境数字科技(上海)有限公司 Fast real-time hair rendering method based on real materials
CN111524171A (zh) * 2020-04-26 2020-08-11 网易(杭州)网络有限公司 Image processing method and apparatus, and electronic device
CN112330570A (zh) * 2020-11-27 2021-02-05 北京达佳互联信息技术有限公司 Image processing method and apparatus, electronic device, and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260581B (zh) * 2020-01-17 2023-09-26 北京达佳互联信息技术有限公司 Image processing method and apparatus, and storage medium


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116843689A (zh) * 2023-09-01 2023-10-03 山东众成菌业股份有限公司 Method for detecting surface damage of mushroom caps
CN116843689B (zh) * 2023-09-01 2023-11-21 山东众成菌业股份有限公司 Method for detecting surface damage of mushroom caps
CN117237244A (zh) * 2023-11-16 2023-12-15 平利县女娲茗鼎农业科技有限公司 Intelligent animal body temperature monitoring system for livestock and veterinary use based on data augmentation
CN117237244B (zh) * 2023-11-16 2024-02-02 平利县女娲茗鼎农业科技有限公司 Intelligent animal body temperature monitoring system for livestock and veterinary use based on data augmentation

Also Published As

Publication number Publication date
CN112330570B (zh) 2024-03-12
CN112330570A (zh) 2021-02-05

Similar Documents

Publication Publication Date Title
US10880495B2 (en) Video recording method and apparatus, electronic device and readable storage medium
WO2022110837A1 (zh) Image processing method and apparatus
US10565763B2 (en) Method and camera device for processing image
KR101649596B1 (ko) Skin color adjustment method, apparatus, program and recording medium
WO2021031609A1 (zh) Liveness detection method and apparatus, electronic device, and storage medium
WO2016029641A1 (zh) Photo acquisition method and apparatus
WO2017031901A1 (zh) Face recognition method and apparatus, and terminal
WO2016127671A1 (zh) Image filter generation method and apparatus
WO2022077970A1 (zh) Special effect adding method and apparatus
JP6391708B2 (ja) Method and apparatus for acquiring iris image, and iris identification device
US11308692B2 (en) Method and device for processing image, and storage medium
CN107730448B (zh) Image-processing-based beautification method and apparatus
US11403789B2 (en) Method and electronic device for processing images
CN107015648B (zh) Picture processing method and apparatus
CN110580688B (zh) Image processing method and apparatus, electronic device, and storage medium
KR20220043004A (ko) Occluded image detection method, apparatus and medium
KR20190111034A (ko) Feature image acquisition method and device, and user authentication method
WO2022095860A1 (zh) Method and apparatus for adding nail special effects
CN108961156B (zh) Face image processing method and apparatus
CN112004020B (zh) Image processing method and apparatus, electronic device, and storage medium
CN110502993B (zh) Image processing method and apparatus, electronic device, and storage medium
CN110110742B (zh) Multi-feature fusion method and apparatus, electronic device, and storage medium
CN106469446B (zh) Depth image segmentation method and segmentation apparatus
WO2022193573A1 (zh) Face fusion method and apparatus
US11252341B2 (en) Method and device for shooting image, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21896344

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19/09/2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21896344

Country of ref document: EP

Kind code of ref document: A1