CN107392099B - Method and device for extracting hair detail information and terminal equipment - Google Patents


Publication number
CN107392099B
CN107392099B
Authority
CN
China
Prior art keywords
hair
area
target
image
detail information
Prior art date
Legal status
Active
Application number
CN201710459550.3A
Other languages
Chinese (zh)
Other versions
CN107392099A (en)
Inventor
曾元清
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201710459550.3A
Publication of CN107392099A
Application granted
Publication of CN107392099B


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/165 - Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G06T2207/30201 - Face

Abstract

The invention provides a method, a device and terminal equipment for extracting hair detail information, wherein the method comprises the following steps: determining a first area corresponding to the hair in the original image; determining a target area where the hair is located from the first area according to the characteristic information of the hair; extracting a guide image containing a hair target area from the original image; and performing image processing on the guide image based on the original image to acquire a target image comprising hair detail information. By the method, the hair region and the hair detail information of the portrait can be accurately extracted, and the problem that the hair information in the portrait can only be roughly extracted in the prior art is solved.

Description

Method and device for extracting hair detail information and terminal equipment
Technical Field
The present invention relates to the field of image processing, and in particular, to a method, an apparatus, and a terminal device for extracting hair detail information.
Background
The love of beauty is universal. With the continuous development of intelligent terminal devices, a wide variety of beautification tools have emerged. Hair is a part of the human body and plays an important role in a person's overall appearance. Because hairstyles vary widely and the backgrounds in images are often complex, how to accurately extract hair information from an image has become a research hotspot.
Existing hair extraction methods can only roughly extract the hair information in a portrait and cannot extract fine detail such as individual hair strands.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, a first object of the present invention is to provide a method for extracting hair detail information, so as to accurately extract hair region and hair detail information of a portrait, and solve the problem that only hair information in the portrait can be roughly extracted in the prior art.
A second object of the present invention is to provide an apparatus for extracting hair detail information.
A third object of the present invention is to provide a terminal device.
A fourth object of the invention is to propose a computer program product.
A fifth object of the invention is to propose a non-transitory computer-readable storage medium.
In order to achieve the above object, a first embodiment of the present invention provides a method for extracting hair detail information, including:
determining a first area corresponding to the hair in the original image;
determining a target area where the hair is located from the first area according to the characteristic information of the hair;
extracting a guide image containing a hair target area from the original image;
and performing image processing on the guide image based on the original image to acquire a target image comprising hair detail information.
According to the method for extracting the hair detail information, the first area corresponding to the hair is determined in the original image, the target area where the hair is located is determined from the first area according to the characteristic information of the hair, the guide image containing the hair target area is extracted from the original image, the guide image is subjected to image processing based on the original image, and the target image containing the hair detail information is obtained. Therefore, the hair area and the hair detail information of the portrait can be accurately extracted, and technical support is provided for the smoothing operation and the beautifying processing in the image processing.
In order to achieve the above object, a second embodiment of the present invention provides an apparatus for extracting hair detail information, including:
the first determining module is used for determining a first area corresponding to the hair in the original image;
the second determining module is used for determining a target area where the hair is located from the first area according to the characteristic information of the hair;
an extraction module for extracting a guide image containing a hair target region from the original image;
and the acquisition module is used for carrying out image processing on the guide image based on the original image and acquiring a target image comprising hair detail information.
According to the device for extracting the hair detail information, the first area corresponding to the hair is determined in the original image, the target area where the hair is located is determined from the first area according to the characteristic information of the hair, the guide image containing the hair target area is extracted from the original image, the guide image is subjected to image processing based on the original image, and the target image including the hair detail information is obtained. Therefore, the hair area and the hair detail information of the portrait can be accurately extracted.
To achieve the above object, a third aspect of the present invention provides a terminal device, including:
the hair-specific information extraction device comprises a shell, and a processor, a memory and a display interface which are positioned in the shell, wherein the processor executes a program corresponding to executable program codes by reading the executable program codes stored in the memory so as to execute the method for extracting hair specific information.
According to the terminal device provided by the embodiment of the invention, the first area corresponding to the hair is determined in the original image, the target area where the hair is located is determined from the first area according to the characteristic information of the hair, the guide image containing the target area of the hair is extracted from the original image, and the guide image is subjected to image processing based on the original image, so that the target image comprising the hair detail information is obtained. Therefore, the hair area and the hair detail information of the portrait can be accurately extracted.
To achieve the above object, a fourth embodiment of the present invention provides a computer program product, wherein when the instructions of the computer program product are executed by a processor, the method for extracting hair detail information as described in the first embodiment is performed.
To achieve the above object, a fifth embodiment of the present invention provides a non-transitory computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the method for extracting hair detail information as described in the first embodiment is implemented.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flow chart of a method for extracting hair detail information according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a method for extracting hair detail information according to another embodiment of the present invention;
fig. 3 is a schematic flow chart of extracting hair detail information according to another embodiment of the present invention;
fig. 4 is a schematic flow chart illustrating a method for extracting hair detail information according to still another embodiment of the present invention;
FIG. 5 is a diagram illustrating the effect of extracting hair information based on a guiding filter;
FIG. 6 is a diagram illustrating the effect of extracting hair detail information based on a guiding filter;
fig. 7 is a schematic structural diagram of an apparatus for extracting hair detail information according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
The following describes a method, an apparatus and a terminal device for extracting hair detail information according to an embodiment of the present invention with reference to the drawings.
Fig. 1 is a schematic flow chart of a method for extracting hair detail information according to an embodiment of the present invention.
As shown in fig. 1, the method for extracting hair detail information may include the steps of:
and S11, determining a first area corresponding to the hair in the original image.
When the image contains a portrait, the position of the face can be identified by adopting a face identification technology, and the position information of each main facial organ in the face, such as the positions of eyes and eyebrows, can be identified.
Since hair grows on the top of the head and, for some people such as those with bangs, part of the hair also lies in front of the forehead, in this embodiment the area above the eyebrows or eyes in the original image can be determined as the first area corresponding to the hair.
And S12, determining a target area where the hair is located from the first area according to the characteristic information of the hair.
Generally, compared with other areas of the face, the area where the hair is located contains more detail information and has lower brightness. In this embodiment, richer detail and darker brightness can therefore be used as the characteristic information of the hair to determine the target area where the hair is located from the first area.
S13, a guide image including the hair target region is extracted from the original image.
In this embodiment, after the target area where the hair is located is determined from the first area, the guide image including the hair target area may be extracted from the original image according to the determined target area.
S14, the guide image is subjected to image processing based on the original image, and a target image including hair detail information is acquired.
In this embodiment, after the guide image is extracted, a related image processing technology, such as a matting function of a guide filtering algorithm, may be adopted to perform image processing on the guide image based on the original image, so as to obtain a target image including hair detail information.
It should be noted that, a specific process of performing image processing on the guide image based on the original image to obtain the target image including the hair detail information will be given in the following content, and in order to avoid redundancy, detailed description is not provided here.
In order to reduce the data amount of the image and highlight the target area of interest for further processing of the image, optionally, in a possible implementation manner of the embodiment of the present invention, after the target image including the hair detail information is acquired, the target image may be subjected to binarization processing, and the binarized target image is used as a final output result for further processing.
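The binarization step mentioned above can be sketched in a few lines of numpy. This is an illustrative sketch, not the patent's implementation; the 0.5 threshold is an assumption, since the patent does not fix a value:

```python
import numpy as np

def binarize(target, thresh=0.5):
    """Binarize a normalized hair-detail map: 1 = hair, 0 = background."""
    return (target >= thresh).astype(np.uint8)

# Toy probability-like target image
probs = np.array([[0.9, 0.2, 0.7],
                  [0.1, 0.6, 0.4]])
mask = binarize(probs)
```

The resulting 0/1 mask is much cheaper to store and process than the original grayscale map, which is the motivation given above.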
In the method for extracting hair detail information in this embodiment, a first region corresponding to hair is determined in an original image, a target region where the hair is located is determined from the first region according to characteristic information of the hair, a guide image including the target region of the hair is extracted from the original image, and the guide image is subjected to image processing based on the original image, so as to obtain a target image including the hair detail information. Therefore, the hair area and the hair detail information of the portrait can be accurately extracted, and technical support is provided for the smoothing operation and the beautifying processing in the image processing.
In order to more clearly illustrate the implementation process of determining the first region corresponding to the hair in the original image, the present invention further provides another method for extracting hair detail information, and fig. 2 is a flowchart of a method for extracting hair detail information according to another embodiment of the present invention.
As shown in fig. 2, on the basis of the embodiment shown in fig. 1, step S11 may include the following steps:
and S21, recognizing the face in the original image to obtain the eye area in the face.
Hair does not normally obstruct the view: even for a person with bangs, the hair in front of the forehead does not extend below the eyes. In this embodiment, the position of the eye area can therefore be used as a boundary line to narrow the range of the area from which the hair is extracted.
Therefore, in this embodiment, a face recognition technology may be first adopted to recognize the face in the original image, so as to obtain the eye region in the face.
S22, using the first boundary of the eye region as a starting point, and using the region covered by the first boundary to the head boundary as a first region.
The first boundary is a boundary located above the line connecting the two eyes.
In this embodiment, after the eye area in the face is identified by the face identification technology, the two eyes may be connected, and the boundary located above the connection line between the two eyes is used as the first boundary, and then the area covered from the first boundary to the head boundary is used as the first area, with the first boundary being used as the starting point.
In summary, the first region is the portion of the recognized face content above the eye region, that is, the upper region bounded below by the line connecting the two eyes.
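The region selection described above can be sketched as a simple crop. In this minimal numpy illustration, the eye row indices stand in for the output of a real face-landmark detector, which the patent does not specify:

```python
import numpy as np

def first_region(image, eye_rows):
    """Crop the first region: everything from the top of the head down to
    the first boundary, i.e. the rows above the line connecting the two
    eyes (taken here as the higher of the two detected eye rows)."""
    boundary = min(eye_rows)
    return image[:boundary, :]

img = np.arange(36).reshape(6, 6)             # toy stand-in for the original image
region = first_region(img, eye_rows=(4, 3))   # eyes detected on rows 4 and 3
```

Cropping at the higher of the two eye rows keeps the whole hair area even when the head is slightly tilted.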
According to the method for extracting the hair detail information, the eye area in the face is obtained by identifying the face in the original image, the first boundary of the eye area is used as the starting point, the area covered from the first boundary to the head boundary is used as the first area, the range of the area where the hair is extracted can be reduced, and a foundation is laid for accurately extracting the area where the hair is located.
In the embodiment shown in fig. 2, since the first region is divided using the boundary above the line connecting the two eyes as a starting point, the first region may contain not only hair but also eyebrows. Because eyebrows have characteristic information similar to that of hair, the target area determined from the first area according to the characteristic information of the hair may include the area where the eyebrows are located, so the determined target area is not accurate enough.
In view of the above problem, an embodiment of the present invention provides another method for extracting hair detail information, and fig. 3 is a schematic flow chart of extracting hair detail information according to another embodiment of the present invention.
As shown in fig. 3, based on the above embodiment, step S12 may include the following steps:
s31, a connected region within the first region is acquired.
The face region may be divided into a plurality of connected regions. For example, according to the facial organs, the two eyes may form two connected regions, the two eyebrows another two connected regions, the hair one large connected region, and so on. As for the specific division criterion, the division may be performed according to luminance information, with contiguous pixels of the same luminance forming one connected region; or according to color information, with areas of similar color forming one connected region. This embodiment does not limit the division criterion.
Therefore, in this embodiment, after the first region is determined, the first region may be further divided according to different luminance information of each region in the first region to obtain different connected regions in the first region.
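The connected-region step above amounts to connected-component labelling of the candidate pixels. A minimal pure-numpy sketch using 4-connected breadth-first search (an assumption; the patent does not name a labelling algorithm, and a library routine such as SciPy's labelling would normally be used instead):

```python
import numpy as np
from collections import deque

def connected_regions(mask):
    """4-connected component labelling of a binary mask (1 = candidate
    pixel, e.g. pixels grouped by similar luminance). Returns a list of
    regions, each a list of (row, col) pixel coordinates."""
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    regions = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                comp, queue = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while queue:                      # breadth-first flood fill
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                regions.append(comp)
    return regions

m = np.array([[1, 1, 0],
              [0, 0, 0],
              [0, 1, 1]])
regs = connected_regions(m)
```

Here the two separated pixel groups come back as two distinct regions, matching the division into per-organ connected regions described above.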
And S32, acquiring the brightness value and/or the frequency value of each connected region.
In general, a hair region has the following characteristic information relative to a skin region: (1) the hair region contains more detail information, so in the frequency domain it lies mainly in the high-frequency part; (2) the hair region is generally darker than the skin region. Therefore, the luminance values and/or frequency values of the connected regions can be used as a basis for distinguishing hair regions from skin regions.
In this embodiment, before extracting the hair region from the connected region, the luminance value and/or the frequency value of each connected region in the first region need to be obtained.
And S33, determining a target connected region including the hair from all the connected regions according to the brightness value and/or the frequency value.
Since the frequency value of the hair region is high and the luminance is low, the luminance value and/or the frequency value of each connected region are obtained, and then the target connected region including the hair can be determined from all the connected regions according to the obtained luminance value and/or frequency value, that is, the hair region is obtained.
And S34, acquiring the area of each target connected region.
Since the target connected regions including hair may contain not only the hair area but also the eyebrow area, and the hair area is much larger than the eyebrow area, the size of each area can be used as a basis for distinguishing the eyebrow region from the hair region. Thus, in the present embodiment, after the target connected regions including hair are determined, the area of each target connected region can be further acquired.
And S35, taking the target connected region with the largest area as the target region.
Because the candidate regions include only the hair and the eyebrows, and the area of the hair region is much larger than that of the eyebrow region, in this embodiment, after the area of each target connected region including hair is obtained, the eyebrow region and the hair region can be distinguished by area; the target connected region with the largest area is the hair region, that is, the target region where the hair is located.
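The selection logic of steps S33 to S35 can be sketched as follows. This is a minimal illustration using only the darkness criterion; the luminance threshold of 80 is an assumption for the toy data, not a value from the patent, and a real implementation could also score high-frequency content:

```python
import numpy as np

def pick_hair_region(gray, regions, max_lum=80):
    """From pre-computed connected regions (lists of (row, col) pixels),
    keep those whose mean luminance is below `max_lum` (dark, hair-like),
    then return the one covering the largest area."""
    dark = [r for r in regions
            if np.mean([gray[y, x] for y, x in r]) < max_lum]
    return max(dark, key=len) if dark else []

gray = np.full((4, 4), 200, dtype=np.uint8)       # bright "skin" background
eyebrow = [(3, 0), (3, 1)]                        # small dark region
hair = [(0, 0), (0, 1), (0, 2), (0, 3), (1, 0)]   # large dark region
for y, x in eyebrow + hair:
    gray[y, x] = 30                               # darken both regions
target = pick_hair_region(gray, [eyebrow, hair])
```

Both regions pass the darkness test, but only the larger one is returned, which is exactly how the area criterion eliminates the eyebrows.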
According to the method for extracting the hair detail information, the connected regions of the first region are obtained, the brightness value and/or the frequency value of each connected region are obtained, the target connected regions including the hair are determined from all the connected regions according to the brightness values and/or the frequency values, the area of each target connected region is further obtained, the target connected region with the largest area is used as the target region, the influence of eyebrow on the obtained result can be eliminated, and the accuracy of obtaining the hair region is improved.
In order to more clearly illustrate the implementation process of obtaining the target image including the hair detail information, another method for extracting the hair detail information is provided in the embodiment of the present invention, and fig. 4 is a flowchart illustrating the method for extracting the hair detail information according to another embodiment of the present invention.
As shown in fig. 4, on the basis of the embodiment shown in fig. 1, step S14 may include the following steps:
and S41, inputting the original image into a preset guiding filter, and acquiring gradient information of the hair in the original image.
A guided filter involves an input image (denoted as P), a guide image (denoted as I), and an output image (denoted as Q), where the input image P and the guide image I may be the same image or different images.
The principle of guided filtering is as follows: assuming that an image can be regarded as a two-dimensional function, the filtered output image Q is a linear transform of the guide image I within each two-dimensional window, as shown in formula (1):
Q_i = a_k * I_i + b_k, for all i in w_k    (1)
where w_k represents a square window of side length 2r+1 centered at pixel k, and r represents the window radius; I is the value of the guide image, Q is the value of the output image, k is the index of the window, i is the pixel index, and a_k and b_k are the coefficients of the linear function when the center of the filtering window is located at k, obtained by minimizing the difference between the output Q and the input image P within the window.
As can be seen from formula (1), in a local area, the output image Q is a linear function of the guide image I.
It should be noted that the input image P is generally the image to be processed, and the guide image I may be another image or the image to be processed itself.
Taking the gradient of both sides of formula (1) gives the result shown in formula (2):
∇Q_i = a_k * ∇I_i    (2)
As can be seen from formula (2), when the referenced guide image I has certain gradient information, the output image Q after guided filtering has similar gradient information, so the output image Q shares edge information with the guide image I, and guided filtering can preserve edges while smoothing.
Fig. 5 is a schematic diagram illustrating the effect of extracting hair information based on a guide filter. In fig. 5, the left image is an input image, the middle image is a guide image, and the right image is an output image. As can be seen from fig. 5, the image output by the guided filter has significant hair detail information.
In this embodiment, after the original image is input to the preset guidance filter, gradient information of hair in the original image can be acquired.
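The local linear model described by formulas (1) and (2) can be sketched compactly in numpy. This is an illustrative implementation of the standard guided filter, not the patent's code; the radius `r` and regularization `eps` are assumed defaults, and the window means are computed with a deliberately simple shifted-sum box filter:

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1) x (2r+1) window, computed by summing shifted,
    edge-padded copies of the image (simple, not the fastest approach)."""
    pad = np.pad(img.astype(float), r, mode='edge')
    h, w = img.shape
    k = 2 * r + 1
    out = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + h, dx:dx + w]
    return out / (k * k)

def guided_filter(I, P, r=2, eps=1e-3):
    """Guided filtering: within each window w_k, Q_i = a_k * I_i + b_k,
    with (a_k, b_k) fitted so that Q stays close to the input P."""
    I, P = I.astype(float), P.astype(float)
    mean_I, mean_P = box_mean(I, r), box_mean(P, r)
    var_I = box_mean(I * I, r) - mean_I * mean_I
    cov_IP = box_mean(I * P, r) - mean_I * mean_P
    a = cov_IP / (var_I + eps)      # eps regularizes flat regions
    b = mean_P - a * mean_I
    # average the per-window coefficients that cover each pixel
    return box_mean(a, r) * I + box_mean(b, r)

flat = np.full((6, 6), 5.0)         # toy flat image as both guide and input
out = guided_filter(flat, flat, r=1)
```

On a flat region var_I is 0, so a is 0 and b is the window mean of P, and the output passes through unchanged; wherever the guide has structure, formula (2) transfers its gradients, which is why fine hair edges survive the filtering.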
And S42, determining the probability that each pixel of the guide image belongs to the hair according to the gradient information.
As can be seen from the foregoing description of the guide filtering, the guide image and the output image have similar edge information, and therefore, in this embodiment, the probability that each pixel in the guide image belongs to hair can be determined based on the correlation technique according to the acquired gradient information.
S43, forming a target image including hair detail information according to the probability of each pixel.
In this embodiment, after the probability that each pixel in the guide image belongs to the hair is determined, the target image including the hair detail information may be formed according to the probability of each pixel.
Specifically, forming the target image including the hair detail information according to the probability of each pixel may include: judging whether the pixel is a pixel occupied by hair or not according to the probability of each pixel and a preset probability threshold; the target image is formed using the pixels occupied by the hair.
In general, the denser and thicker the hair in a region, the more reliable the corresponding pixel values and the higher the probability that those pixels belong to hair. To prevent pixels in areas with sparse hair from being wrongly judged as pixels not occupied by hair, a probability threshold can be preset: a pixel whose probability of belonging to hair reaches the threshold is judged to be a pixel occupied by hair, and the pixels occupied by hair are then used to form the target image.
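The thresholding step above can be sketched as a masked copy. This is a toy numpy illustration; the probability threshold 0.3 is an assumption, since the patent leaves the value unspecified:

```python
import numpy as np

def form_target(guide, prob, p_thresh=0.3):
    """Keep only the pixels whose hair probability reaches the preset
    threshold; all other pixels are set to 0, forming the target image."""
    hair = prob >= p_thresh
    return np.where(hair, guide, 0)

guide = np.array([[10, 20],
                  [30, 40]])         # toy guide-image values
prob = np.array([[0.9, 0.1],
                 [0.2, 0.8]])        # per-pixel hair probabilities
target = form_target(guide, prob)
```

Lowering `p_thresh` keeps more sparse-hair pixels at the cost of admitting more background, which is the trade-off the paragraph above describes.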
Fig. 6 is a schematic diagram illustrating the effect of extracting hair detail information based on the guiding filter. In fig. 6, the left image is an input image, i.e., an original image, the middle image is a guide image, and the right image is an output image, i.e., a target image. As can be seen from fig. 6, the target image output by the guided filter has significant hair detail information.
According to the method for extracting the hair detail information, the original image is input into the preset guide filter, the gradient information of the hair in the original image is obtained, the probability that each pixel in the guide image belongs to the hair is determined according to the gradient information, the target image comprising the hair detail information is formed according to the probability of each pixel, and the detail information of the hair can be accurately extracted.
In order to implement the above embodiments, the present invention further provides a device for extracting hair detail information.
Fig. 7 is a schematic structural diagram of an apparatus for extracting hair detail information according to an embodiment of the present invention.
As shown in fig. 7, the apparatus for extracting hair detail information includes: a first determination module 710, a second determination module 720, an extraction module 730, and an acquisition module 740.
the first determining module 710 is configured to determine a first region corresponding to hair in the original image.
Specifically, the first determining module 710 is configured to identify a face in an original image, obtain an eye region in the face, and use a first boundary of the eye region as a starting point, and use a region covered by the first boundary to a head boundary as a first region, where the first boundary is a boundary located above a line connecting two eyes.
And a second determining module 720, configured to determine, according to the characteristic information of the hair, a target area where the hair is located from the first area.
Optionally, in a possible implementation manner of the embodiment of the present invention, the second determining module 720 is further configured to obtain connected regions in the first region, obtain a brightness value and/or a frequency value of each connected region, determine target connected regions including hairs from all the connected regions according to the brightness value and/or the frequency value, obtain an area of each target connected region, and use the target connected region with the largest area as the target region.
An extracting module 730, configured to extract a guide image including the hair target region from the original image.
An obtaining module 740, configured to perform image processing on the guide image based on the original image, and obtain a target image including hair detail information.
Optionally, in a possible implementation manner of the embodiment of the present invention, the obtaining module 740 is further configured to input the original image into a preset guiding filter, obtain gradient information of hair in the original image, determine a probability that each pixel of the guiding image belongs to the hair according to the gradient information, and form the target image including the hair detail information according to the probability of each pixel.
It should be noted that the foregoing explanation of the embodiment of the method for extracting hair detail information is also applicable to the apparatus for extracting hair detail information of the present embodiment, and the implementation principle thereof is similar, and is not repeated herein.
According to the apparatus for extracting hair detail information in this embodiment, a first region corresponding to hair is determined in an original image, a target region where the hair is located is determined from the first region according to characteristic information of the hair, a guide image including the hair target region is extracted from the original image, and the guide image is subjected to image processing based on the original image to obtain a target image including the hair detail information. Therefore, the hair area and the hair detail information of the portrait can be accurately extracted.
In order to implement the above embodiments, the present invention further provides a terminal device.
Fig. 8 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
As shown in fig. 8, the terminal device 80 includes: a housing 801, and a processor 802, a memory 803 and a display interface 804 located in the housing 801. The processor 802 reads the executable program code stored in the memory 803 and runs a program corresponding to the executable program code, so as to implement the method of extracting hair detail information described in the foregoing embodiments.
The terminal device of this embodiment determines a first region corresponding to hair in an original image, determines a target region where the hair is located from the first region according to characteristic information of the hair, extracts a guide image including the target region of the hair from the original image, and performs image processing on the guide image based on the original image to obtain a target image including hair detail information. Therefore, the hair area and the hair detail information of the portrait can be accurately extracted.
In order to implement the above embodiments, the present invention further provides a computer program product, wherein when the instructions in the computer program product are executed by a processor, the method for extracting hair detail information as described in the foregoing embodiments is performed.
In order to implement the above embodiments, the present invention also proposes a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, is capable of implementing the method of extracting hair detail information as described in the foregoing embodiments.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.

Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (8)

1. A method of extracting hair detail information, comprising:
determining a first area corresponding to the hair in the original image;
determining a target area where the hair is located from the first area according to the characteristic information of the hair;
extracting a guide image containing a hair target area from the original image;
inputting the original image into a preset guide filter to obtain gradient information of hair in the original image;
determining the probability that each pixel of the guide image belongs to hair according to the gradient information;
forming a target image including the hair detail information according to the probability of each pixel.
2. The method for extracting hair detail information according to claim 1, wherein the determining the first region corresponding to the hair in the original image comprises:
identifying a face in the original image to obtain an eye area in the face;
taking a first boundary of the eye area as a starting point, and taking an area covered from the first boundary to a head boundary as the first area; the first boundary is a boundary located above a line connecting two eyes.
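One possible reading of this claim, sketched under assumptions: a face detector is taken to supply the upper boundary row of the eye area and a head bounding box (`eye_top_row` and `head_box` are hypothetical detector outputs, not named in the patent), and the first area becomes the rows covered from the head's top boundary down to that first boundary.

```python
import numpy as np


def first_region_mask(image_shape, eye_top_row, head_box):
    """Build a boolean mask for the first area of claim 2: the rows from the
    head's top boundary down to the eyes' upper boundary (the boundary above
    the line connecting the two eyes), limited to the head's horizontal extent.

    `head_box` = (top, left, bottom, right) is an assumed detector format.
    """
    h, w = image_shape
    top, left, bottom, right = head_box
    mask = np.zeros((h, w), dtype=bool)
    mask[top:eye_top_row, left:right] = True  # area covered from first boundary to head boundary
    return mask
```

For a 10x10 image with the head box spanning rows 1-9 and columns 2-8 and the eyes' upper boundary at row 6, the mask covers the 5x6 band above the eyes, which is where hair typically appears.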
3. The method for extracting hair detail information according to claim 2, wherein the determining the target area where the hair is located from the first area according to the characteristic information of the hair comprises:
acquiring connected regions in the first area;
acquiring a brightness value and/or a frequency value of each connected region;
determining target connected regions including hair from all the connected regions according to the brightness value and/or the frequency value;
acquiring the area of each target connected region;
and taking the target connected region with the largest area as the target region.
4. The method of extracting hair detail information according to claim 1, wherein said forming the target image including hair detail information according to the probability of each pixel comprises:
judging whether the pixel is a pixel occupied by hair or not according to the probability of each pixel and a preset probability threshold;
the target image is formed using the pixels occupied by the hair.
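A sketch of this thresholding step, assuming NumPy and a default threshold of 0.5 (the patent leaves the value of the preset probability threshold unspecified, so the default is an assumption):

```python
import numpy as np


def form_target_image(original, prob, threshold=0.5):
    """Form the target image of claim 4: judge each pixel against the preset
    probability threshold, keep the pixels occupied by hair, zero the rest."""
    hair = prob >= threshold             # is this pixel occupied by hair?
    return np.where(hair, original, 0)   # target image built from hair pixels only
```

Applied to a small image with per-pixel probabilities, only the pixels whose probability clears the threshold survive into the target image.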
5. An apparatus for extracting hair detail information, comprising:
the first determining module is used for determining a first area corresponding to the hair in the original image;
the second determining module is used for determining a target area where the hair is located from the first area according to the characteristic information of the hair;
an extraction module for extracting a guide image containing a hair target region from the original image;
the acquisition module is used for inputting the original image into a preset guide filter and acquiring gradient information of hair in the original image; determining the probability that each pixel of the guide image belongs to hair according to the gradient information; forming the target image including hair detail information according to the probability of each pixel.
6. The apparatus for extracting hair detail information according to claim 5, wherein the first determining module is specifically configured to identify a face in the original image, obtain an eye region in the face, and take a first boundary of the eye region as a starting point, and take an area covered by the first boundary to a head boundary as the first area; the first boundary is a boundary located above a line connecting two eyes.
7. A terminal device, comprising: a housing, and a processor, a memory and a display interface located in the housing, wherein the processor runs a program corresponding to executable program code by reading the executable program code stored in the memory, for implementing the method of extracting hair detail information according to any one of claims 1 to 4.
8. A non-transitory computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the method of extracting hair detail information according to any one of claims 1 to 4.
CN201710459550.3A 2017-06-16 2017-06-16 Method and device for extracting hair detail information and terminal equipment Active CN107392099B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710459550.3A CN107392099B (en) 2017-06-16 2017-06-16 Method and device for extracting hair detail information and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710459550.3A CN107392099B (en) 2017-06-16 2017-06-16 Method and device for extracting hair detail information and terminal equipment

Publications (2)

Publication Number Publication Date
CN107392099A CN107392099A (en) 2017-11-24
CN107392099B true CN107392099B (en) 2020-01-10

Family

ID=60332207

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710459550.3A Active CN107392099B (en) 2017-06-16 2017-06-16 Method and device for extracting hair detail information and terminal equipment

Country Status (1)

Country Link
CN (1) CN107392099B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260581B (en) * 2020-01-17 2023-09-26 北京达佳互联信息技术有限公司 Image processing method, device and storage medium
CN112489169B (en) * 2020-12-17 2024-02-13 脸萌有限公司 Portrait image processing method and device

Citations (4)

Publication number Priority date Publication date Assignee Title
CN101778188A (en) * 2009-01-14 2010-07-14 华晶科技股份有限公司 Method for beautifying faces in digital image
CN102419868A (en) * 2010-09-28 2012-04-18 三星电子株式会社 Device and method for modeling 3D (three-dimensional) hair based on 3D hair template
CN104517265A (en) * 2014-11-06 2015-04-15 福建天晴数码有限公司 Intelligent buffing method and intelligent buffing device
CN105550999A (en) * 2015-12-09 2016-05-04 西安邮电大学 Video image enhancement processing method based on background reuse

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US20110194762A1 (en) * 2010-02-04 2011-08-11 Samsung Electronics Co., Ltd. Method for detecting hair region

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN101778188A (en) * 2009-01-14 2010-07-14 华晶科技股份有限公司 Method for beautifying faces in digital image
CN102419868A (en) * 2010-09-28 2012-04-18 三星电子株式会社 Device and method for modeling 3D (three-dimensional) hair based on 3D hair template
CN104517265A (en) * 2014-11-06 2015-04-15 福建天晴数码有限公司 Intelligent buffing method and intelligent buffing device
CN105550999A (en) * 2015-12-09 2016-05-04 西安邮电大学 Video image enhancement processing method based on background reuse

Non-Patent Citations (1)

Title
"基于mean shift 的头发自动检测";傅文林,等;《微型电脑应用》;20101231;论文第1节 *

Also Published As

Publication number Publication date
CN107392099A (en) 2017-11-24

Similar Documents

Publication Publication Date Title
Navarro et al. Accurate segmentation and registration of skin lesion images to evaluate lesion change
US9351683B2 (en) Wrinkle detection method, wrinkle detection device and recording medium storing wrinkle detection program, as well as wrinkle evaluation method, wrinkle evaluation device and recording medium storing wrinkle evaluation program
CN107301389B (en) Method, device and terminal for identifying user gender based on face features
CN109359634B (en) Face living body detection method based on binocular camera
CN111524080A (en) Face skin feature identification method, terminal and computer equipment
Kamboj A color-based approach for melanoma skin cancer detection
CN110807780B (en) Image processing method and device
CN106485190A (en) Fingerprint register method and device
CN112419295A (en) Medical image processing method, apparatus, computer device and storage medium
Xue et al. Automatic 4D facial expression recognition using DCT features
US20180218496A1 (en) Automatic Detection of Cutaneous Lesions
CN107392099B (en) Method and device for extracting hair detail information and terminal equipment
Gritzman et al. Comparison of colour transforms used in lip segmentation algorithms
KR101436988B1 (en) Method and Apparatus of Skin Pigmentation Detection Using Projection Transformed Block Coefficient
CN107341774A (en) Facial image U.S. face processing method and processing device
KR101654287B1 (en) A Navel Area Detection Method Based on Body Structure
Zheng Static and dynamic analysis of near infra-red dorsal hand vein images for biometric applications
CN111814738A (en) Human face recognition method, human face recognition device, computer equipment and medium based on artificial intelligence
KR20070088982A (en) Deformation-resilient iris recognition methods
KR101496852B1 (en) Finger vein authentication system
Zulfikar et al. Android application: skin abnormality analysis based on edge detection technique
US11436832B2 (en) Living skin tissue tracking in video stream
CN114202723A (en) Intelligent editing application method, device, equipment and medium through picture recognition
Ko et al. Image-processing based facial imperfection region detection and segmentation
JP2010277196A (en) Information processing apparatus and method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong Province

Applicant after: OPPO Guangdong Mobile Communications Co., Ltd.

Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong Province

Applicant before: Guangdong OPPO Mobile Communications Co., Ltd.

GR01 Patent grant