CA2651539A1 - Method and apparatus for hair colour simulation


Info

Publication number: CA2651539A1
Authority: CA (Canada)
Prior art keywords: hair, pixels, colour, pixel, image
Legal status: Granted
Application number: CA002651539A
Other languages: French (fr)
Other versions: CA2651539C (en)
Inventors: Parham Aarabi, Tian Yu Tommy Liu
Current assignee: Modiface Inc
Original assignee: Individual
Application filed by Individual
Publication of CA2651539A1; application granted; publication of CA2651539C
Current legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

Method and apparatus are provided that change the colour of a person's hair in a digital colour image composed of pixels. The position of the person's face in the digital image is identified. One or more hair sampling regions are identified based on the expected position of the hair relative to the position of the face. One or more characteristics of the hair are extracted from the hair sampling region or regions. Pixels in the image are compared with the extracted hair characteristics thereby to detect hair pixels throughout the image. Pixels in the image are also compared with one or more criteria indicating that a pixel is a non-hair pixel, thereby allowing detection of non-hair pixels identified in error as hair pixels. A new hair colour is selectively applied to the detected hair pixels, omitting hair pixels identified in error.

Description

METHOD AND APPARATUS FOR HAIR COLOUR SIMULATION
FIELD OF THE INVENTION

The invention relates generally to processing of digital images, and, more specifically, to methods and apparatus for transforming the hair colour of a person shown in a digital colour image formed of pixels.
BACKGROUND OF THE INVENTION

Simulating hair colours is a useful application for hair dye manufacturers, hair salons, and the average consumer who is interested in knowing what he or she would look like with a different hair colour.

A variety of prior art computer-based techniques exist for hair style simulation that require the user to move a hairstyle on top of his or her photo, resizing and adjusting manually as needed. The invention disclosed in this specification is different in that recolouring of the actual hair in an uploaded photo is fully automatic; that is, it does not require the user to manually manipulate a hairstyle about the image of his or her face.

Other approaches to simulation of hair dying have been proposed but are subject to various shortcomings. PCT application no.
WO/2005/089589 to Sasaki et al for an invention entitled "Color Simulation System for Hair Coloring" proposes an approach in which colour simulation can only be performed on an image with the region to colour selected manually beforehand. Published U.S. patent application no. 9/692,929 to Chen et al for an invention entitled "Method for Blond Hair Pixel Removal in Image Skin Colour Detection" and published U.S. patent application no.
11/002264 to Paschalakis for an invention entitled "Method and Apparatus for Separating Hair and Skin in Images" both require an extensive training set in order to develop a colour model of skin and hair. Prior art methods that develop and store models based on data from training sets tend to perform well only for images that are similar in colour to the training set database and have a much lower success rate for images from a broader range of sources.

Also, the approach in such prior art addresses the problem of reducing false acceptance of hair as skin, with a focus primarily on skin detection rather than hair detection.

BRIEF SUMMARY OF THE INVENTION

In one aspect, the invention provides a method of identifying hair pixels in a colour digital image of a person. The method involves identifying the position of the face in the digital image and automatically selecting one or more hair sampling regions based on the expected position of the hair relative to the identified position of the face. (The position of the person's face in the digital image can be identified using the teachings of published U.S. patent application no. 12/090,677 filed April 18, 2008 and claiming a priority date of May 5, 2006 for an invention entitled "Method, System and Computer Program Product for Automatic and Semi-Automatic Modification of Digital Images of Faces.") One or more characteristics of the hair, typically mean colour, deviation from the mean, and optionally texture, are automatically extracted from the selected hair sampling region or regions.
Pixels in the image are then automatically compared with the extracted hair characteristics to detect hair pixels.

In another aspect, the invention provides a method of changing the colour of a person's hair in a digital colour image formed of pixels. In this aspect, the invention involves selecting a new hair colour and optionally the intensity of that hair colour. The invention also involves identification of the position of the person's face in the digital image, and automatic selection of one or more hair sampling regions based on the expected position of the hair relative to the face. One or more characteristics of the displayed hair are once again extracted from the hair sampling region or regions. Pixels are then automatically compared with the extracted hair characteristics to detect hair pixels throughout the image. Such detection of the hair pixels is prone to potential errors involving identification of non-hair pixels as hair pixels.
To address that problem, pixels in the image are also compared with one or more criteria indicating that a pixel is a non-hair pixel, thereby effectively allowing detection of non-hair pixels identified in error as hair pixels. The new hair colour is then applied selectively to the detected hair pixels, the application of the new hair colour comprising detecting non-hair pixels identified in error as hair pixels and avoiding application of the new hair colour to such non-hair pixels. In preferred form, various likelihood masks indicating the probability that pixels are or are not hair pixels are generated and combined to form a master mask that is then used to colour hair pixels.
Various aspects of the invention will be apparent from a description below of preferred embodiments and will be more specifically defined in the appended claims. For purposes of this specification, including the appended claims, the term "likelihood mask" should be understood as an array or other data structure that can be interrogated on a pixel-by-pixel basis to indicate the probability that a pixel in a digital colour image is or is not a hair pixel. The term "hole" as used with respect to a digital image of a person's hair or a corresponding likelihood mask refers typically to a small region in which shadows or highlights adversely affect identification of hair pixels. A
method of "filling" such holes to characterize the contained pixels as hair pixels will be described below.

DESCRIPTION OF THE DRAWINGS

The invention will be better understood with reference to drawings, in which:

fig. 1 represents a digital colour picture of a person before processing to change the person's hair colour;

fig. 2 schematically illustrates a face box that identifies the position of the person's face in the image and three hair sampling regions that are positioned in predetermined locations relative to the face box;

fig. 3 schematically illustrates a hair likelihood mask generated for the entire image;

fig. 4 schematically illustrates the hair likelihood mask after filling holes in the person's hair that were not identified as hair;

fig. 5 schematically illustrates a skin likelihood mask that incidentally indicates that certain pixels are erroneously identified as hair pixels;

fig. 6 effectively illustrates face and body filter-type likelihood masks that further identify pixels that are not hair pixels based on distance relative to the center or bottom of the face box;

fig. 7 effectively illustrates a master mask formed by combining the various masks described above;

fig. 8 illustrates portions of a computer menu system that allows a user to upload an image, to adjust a face box if deemed appropriate, and to select a hair color and intensity;

fig. 9 is a flow chart illustrating an overall method of applying a new hair colour to a digital colour image of a person's face and hair;

fig. 10 is a flow chart showing how to assign values to a likelihood mask that uses mean colour and variance as selection criteria;

fig. 11 is a flow chart showing how to assign values to a likelihood mask based on texture and using a colour histogram as a selection criterion; and,

fig. 12 is a flow chart showing how to eliminate holes in a hair colour or hair texture likelihood mask.

DESCRIPTION OF PREFERRED EMBODIMENTS

Reference is made to fig. 9, which is a flow chart illustrating a software-based method of applying a new hair colour to a digital colour image of a person's face and hair such as the image of fig. 1. Preliminary steps include loading the image (step 10) into a computer that implements the method, and automatically positioning a face box 12 (white rectangular outline in fig. 2) that frames the person's face (step 14) in the uploaded image.
These steps can be initiated through a menu system such as that shown in fig.
8.

The menu system preferably allows the user to select not only a new hair colour (at step 16) but also the intensity of the new hair colour. A
rectangular area 90 is provided where an uploaded image of a person (such as the image of fig. 1) is displayed, and the menu system preferably allows the user to switch between before and after images using before and after buttons 92, 94. The available hair colours are found in four columns of buttons:

blonde hair colours in column 96, brunette hair colours in column 98, red hair colours in column 100, and exotic (not natural) colours in column 102.
Although not shown, each button is preferably coloured to identify the particular hair colour triggered, and the buttons in each column are preferably sorted according to shades of the four basic hair colours represented. A slide control 104 allows the user to specify the desired intensity of colour. In this menu system, clicking on a particular colour button in the colour columns or on a random colour button 106 triggers the hair colouring process. One option that may be associated with the menu system is to allow adjustment of the face box 12 to better frame the person's face by dragging with a mouse, but the prior art method referred to in the summary of the invention will usually be sufficient to position the face box 12 without such intervention by a user.
The exact menu system used is largely a matter of personal preference, and various implementations of menu systems with similar functionality will be apparent to those skilled in the art. Labels such as "Blonde", "Brunette", "Red Head", and "Exotic" may be applied to the tops of the button columns 96-102;

"before" and "after" to the before and after buttons 92, 94; "Intensity" to the slide control 104 and so forth. Such labels have not be shown in fig. 8 but may normally appear in the language of a particular market. Upload and download functions may also be incorporated into the menu window itself but can alternatively be provided in a separate menu bar.

Once the menu system is actuated to recolour the person's hair in the image, the software automatically locates three hair sampling regions relative to the position of the person's face as shown in fig. 2. These hair sampling regions are represented by rectangular boxes 20, 22, 24 positioned about the perimeter of the face box 12 but normally invisible to the user.
How the sampling boxes are positioned relative to the face box 12 should be noted.
An upper sampling box 20 extends horizontally the width of the face box 12, and extends from 1/4 of the height of the face box 12 above the top of the face box 12 down to 1/8 of the height of the face box 12 above the top of the face box 12. Two lateral sampling boxes 22, 24 are positioned to either side of the face box 12 and each extends from the top of the face box 12 to half-way down the face box 12. Each of the two lateral sampling boxes 22, 24 has one side coincident with one side of the face box 12, and each extends laterally from the associated side edge of the face box 12 by about 1/6 of the width of the face box 12. The dimensions of the sampling boxes 20, 22, 24 and their positions relative to the face box 12 are predetermined and dependent on the dimensions of the face box 12. Greater weight (for example, three-fold) is preferably given to the upper sampling box 20 to accommodate the possibility that the person's hair is cut short at the sides of the head.
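
For illustration, the sampling-box geometry just described can be expressed in a few lines of code. The following Python sketch assumes boxes are given as (left, top, width, height) tuples; the function and variable names are illustrative only and do not appear in the patent.

    # Sketch of the hair sampling-box geometry (boxes 20, 22, 24).
    def hair_sampling_boxes(face_box):
        fx, fy, fw, fh = face_box
        # Upper box 20: the full width of the face box, from 1/4 of the
        # face height above its top down to 1/8 of the face height above.
        upper = (fx, fy - fh // 4, fw, fh // 4 - fh // 8)
        # Lateral boxes 22, 24: inner edge coincident with a side of the
        # face box, extending outward by 1/6 of the face width, from the
        # top of the face box to half-way down.
        left = (fx - fw // 6, fy, fw // 6, fh // 2)
        right = (fx + fw, fy, fw // 6, fh // 2)
        return upper, left, right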

Once the positions of the hair sampling boxes 20, 22, 24 are set, the mean and variance of the hair colour within the sampling boxes 20, 22, 24 are automatically extracted at step 26. Each pixel in the image is then compared with the hair colour characteristics to produce a hair likelihood mask, the production of which is indicated in fig. 9 as a subroutine 28 detailed in fig. 10. The colour of each pixel is essentially an RGB value with three components, namely, red, green, and blue content. Let mean[0], mean[1], mean[2] represent the mean values of the colour components (red, green, and blue respectively) in the hair sampling boxes 20, 22, 24, and let var[0], var[1], var[2] be the variances of the colours in the sampling regions (red, green, and blue respectively). One then calculates the difference between the colour of a pixel and the reference values, as follows:
rd = red - mean[0]

gd = green - mean[1]
bd = blue - mean[2]

The distribution of pixels in 3D color space is assumed to be Gaussian, and the hair likelihood value of each pixel is given by the formula:

likelihood = exp(-rd*rd/(2*var[0])-gd*gd/(2*var[1])-bd*bd/(2*var[2]))*255.
The function above maps the difference between the pixel colour value and the mean value into a probability value that lies in the range of 0 to 255, which simplifies visual inspection of the results during debugging. As apparent in fig. 10, the reference mean and variance are received as parameters at step 30; the colour value of the next pixel is retrieved at step 32;
the retrieved colour value is compared with the reference values using the formula above at step 34; and the calculated probability value is assigned to the current pixel at step 36. If all pixels have not been processed, as tested at step 37, steps 32, 34 and 36 are repeated for the next pixel until all pixels have been processed. The resultant hair likelihood mask is diagrammatically illustrated in fig. 3, where brighter portions represent higher probability values suggesting the presence of hair pixels and darker portions represent lower probability values suggesting the presence of non-hair pixels.
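
A compact restatement of steps 30-37, vectorized over the whole image, might look as follows; NumPy and an H x W x 3 RGB array are assumed, and the function name is illustrative.

    import numpy as np

    def hair_likelihood_mask(image, mean, var):
        # Per-pixel differences rd, gd, bd from the sampled mean colour.
        d = image.astype(float) - np.asarray(mean, dtype=float)
        # Gaussian likelihood scaled into the 0-255 range, as in the
        # formula above.
        exponent = -(d * d / (2.0 * np.asarray(var, dtype=float))).sum(axis=2)
        return np.exp(exponent) * 255.0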

Holes in the hair likelihood mask (fig. 3) are then filled at step 38 using the procedure shown in fig. 12. At step 40 of fig. 12, the hair likelihood mask is inspected to find the next hole. If the next hole is located at step 42, the procedure checks at step 44 whether the size of the hole is less than a predetermined threshold value, which may typically be a diameter of less than 15 pixels. If the threshold size test is met, each pixel contained within the hole is assigned at step 46 a probability value corresponding to the average probability value at the perimeter of the hole.
This effectively characterizes the pixels contained in the hole as hair pixels.
This process is repeated until all holes of less than 15 pixel diameter have been processed.
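
The hole-filling procedure of fig. 12 might be sketched as follows. The cutoff of 128 used to decide that a pixel's likelihood is "low" is an assumption of this sketch (only the 15-pixel diameter threshold is specified above), and SciPy's connected-component labelling stands in for whatever hole-finding step an implementation actually uses.

    import numpy as np
    from scipy import ndimage

    def fill_small_holes(mask, max_diameter=15, low_cutoff=128):
        filled = mask.copy()
        # Treat connected regions of low likelihood as candidate holes.
        labels, count = ndimage.label(mask < low_cutoff)
        for i in range(1, count + 1):
            hole = labels == i
            ys, xs = np.nonzero(hole)
            if max(ys.ptp(), xs.ptp()) + 1 >= max_diameter:
                continue  # too large to be treated as a hole
            # The hole's perimeter: a one-pixel dilation minus the hole.
            ring = ndimage.binary_dilation(hole) & ~hole
            if ring.any():
                # Assign the average perimeter likelihood to the hole,
                # effectively characterizing its pixels as hair pixels.
                filled[hole] = mask[ring].mean()
        return filled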

The method of fig. 9 then automatically extracts texture characteristics of the hair at step 48 from the hair sampling boxes 20, 22, 24 to provide a reference histogram. In RGB space, a reference histogram is generated whose dimensions are 10 x 10 x 10, each dimension corresponding to one colour component, red, green or blue. For example, the red component of the pixel colour value is broken into 10 bins, the red colour range of bin 1 being 0-25, the red colour range of bin 2 being 26-51, and so forth, until bin 10, which spans the balance of the red colour range of 234-255. Green and blue colour components are similarly classified. Thus, for an RGB histogram cube, there will be 10 values for each side, making for a total of 1000 elements in the histogram. The value in each element is a ratio of the frequency of that particular colour to the total number of pixels in the sampling boxes 20, 22, 24. The values are generated by taking each pixel within the sampling patch, calculating the bin number for each of its RGB components, and adding 1 to the corresponding element in the histogram. After the entire patch is sampled, the values within the histogram are divided by the total number of pixels.
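
Because the bins are 26 values wide (0-25, 26-51, ..., 234-255), the bin index reduces to integer division by 26. A minimal sketch of the histogram construction, with illustrative names:

    import numpy as np

    def reference_histogram(pixels):
        # `pixels` is an N x 3 array of 0-255 RGB values drawn from the
        # sampling boxes; value // 26 yields bin indices 0 through 9.
        hist = np.zeros((10, 10, 10))
        for r, g, b in pixels // 26:
            hist[r, g, b] += 1
        # Normalize by the total number of sampled pixels.
        return hist / len(pixels)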

At step 50 of fig. 9, a hair texture likelihood mask is generated using the procedure illustrated in fig. 11. The procedure receives the reference histogram of step 48 as a parameter at step 52 of fig. 11. At step 54, a histogram is generated for the next 5 x 5 image sample patch.

Corresponding entries in the reference and sample histograms are then compared at step 56. In the comparison, the lower of the two values for a particular element is taken to be the amount of similarity within that element.
Thus, if the element [0,0,0] has a value of 0.04 in the reference histogram, and a value of 0.1 in the current histogram, then 0.04 is taken as the similarity amount. For elements where both histograms have a value of 0, the similarity amount is 0 since that particular colour does not appear in either the reference histogram or the current patch of the image. The ratios are then summed; the sum might theoretically take values from 0-25 (for a 5 x 5 patch) but usually lies somewhere between 0 and 5. The sum is then normalized to range between 0 and 255 and stored as the likelihood value of the central pixel in the current image patch at step 58. The assigned values may be subject to optional actions such as clearing all values less than 10, or boosting all values by multiplying the square root of each value by the square root of 255; that is, the likelihood is set to the square root of the product of the likelihood value and 255. This process is repeated at step 60 of fig. 11 until all 5 x 5 patches in the image have been processed. Holes in the texture-based likelihood mask are then detected and filled at step 62 of fig. 9, substantially in the manner discussed above.
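
The element-wise comparison of step 56 is a histogram intersection. The exact normalization into 0-255 is left open above, so the sketch below simply compares two normalized histograms, in which case the summed similarity lies between 0 and 1 and can be scaled directly; the function name is illustrative.

    import numpy as np

    def texture_likelihood(sample_hist, reference_hist):
        # The lower of the two values for each element is the amount of
        # similarity within that element; elements that are 0 in both
        # histograms contribute 0.
        similarity = np.minimum(sample_hist, reference_hist).sum()
        # Map the summed similarity into the 0-255 range.
        return similarity * 255.0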

At steps 64-72 of fig. 9, a skin colour likelihood mask (see fig.
5) and a skin texture likelihood mask (not shown) are generated. At step 64 of fig. 9, a skin sampling box (not illustrated) is set. The sampling box is positioned below the top of the face box 12 by 1/8 of the height of the face box 12. It extends downward to 1/4 of the height of the face box 12 below the top of the face box 12. The prior art technique for locating the face box 12 suggests that the region so defined will represent the skin of the forehead.

The mean and variance of the skin colour are then extracted from the skin sampling box at step 66, and are used at step 68 to generate the skin color likelihood mask of fig. 5. The mask incidentally indicates the probability that a pixel identified as a hair pixel is actually a skin pixel. A reference skin texture histogram (not shown) is then derived at step 70 from the pixels in the skin sampling box, and a skin texture likelihood mask is generated at step 72, using the procedure generally indicated in fig. 11. Details of how the skin colour and the skin texture probability masks are generated will be apparent from the detailed description above of how the hair colour and hair texture probability masks are generated. In essence, the processes are identical but start with different reference variables and histograms. The skin likelihood mask may optionally be blurred using a Gaussian blurring filter to reduce discontinuities.

Two additional measures are taken to reduce the likelihood that a non-hair pixel is erroneously identified as a hair pixel. First, an oval gradient mask 74 (diagrammatically illustrated in fig. 6) with value 0 at center (non-hair pixels) and 255 at its edges (hair pixels) is scaled at step 76 of fig. 9 to the same dimensions as the face box 12. Second, a rectangular gradient mask 78 (diagrammatically illustrated in fig. 6) with value 0 at the bottom (non-hair pixels) and 255 at the top (hair pixels) is scaled at step 80 to have the same width as the image and to extend downward from below the person's face. In the diagrammatic representation of fig. 6, darker areas indicate less likelihood of finding a hair pixel.
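
Sketches of the two gradient masks follow, using the 0 (non-hair) to 255 (hair) convention described above. Only the endpoint values and scaling are specified in the description, so the linear profiles and function names below are assumptions.

    import numpy as np

    def oval_gradient_mask(height, width, face_box):
        fx, fy, fw, fh = face_box
        ys, xs = np.mgrid[0:height, 0:width].astype(float)
        # Elliptical distance from the face centre; 1.0 on the oval edge.
        d = np.sqrt(((xs - fx - fw / 2) / (fw / 2)) ** 2 +
                    ((ys - fy - fh / 2) / (fh / 2)) ** 2)
        return np.clip(d, 0.0, 1.0) * 255.0  # 0 at centre, 255 outside

    def body_gradient_mask(height, width, face_bottom):
        # 255 at the bottom of the face, fading linearly to 0 at the
        # bottom of the image; rows above the face bottom remain 255.
        ys = np.arange(height, dtype=float)
        ramp = np.clip((height - 1 - ys) / max(height - 1 - face_bottom, 1),
                       0.0, 1.0) * 255.0
        return np.tile(ramp[:, None], (1, width))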

At step 82, the various likelihood masks are combined to create a master mask (shown diagrammatically in fig. 7). In the diagrammatic representation of the master mask, lighter regions correspond to the presence of hair pixels and darker regions correspond to the presence of non-hair pixels. The master mask is created by multiplying probability values on a pixel-by-pixel basis between masks and then scaling to produce a result ranging from 0 to 255. The new hair colour is then selectively applied to all pixels within the image that have a non-zero probability entry in the master mask, with varying intensity corresponding to the value of the probability entry. Simply assigning the desired hair colour to entire hair regions would tend not to produce realistic results. The method of fig. 9 takes into account the texture and lighting of the hair. At step 84, hair colour is applied to each pixel in an amount that corresponds substantially to the difference between the new hair colour and the mean value of the original hair colour. Preferably, the final hair colour applied to a pixel is given by:

(old + mixratio*(mastermask*((old - mean)*textureratio + (new - mean)))/255)

where "old" is the original pixel colour, "mean" is the mean colour of the original hair, and "mastermask" is the probability value of the master mask at a particular pixel position. The intensity of the colour change can be controlled by the parameter "mixratio", a higher value producing more change in hair colour and a lower value producing less change, and is ultimately set by an intensity setting in the menu system. In digital images, light-coloured hair will generally have a greater texture variation than dark-coloured hair. To produce a realistic result when changing dark-coloured hair to light-coloured hair, the term (old - mean)*textureratio is put into the equation to increase the amount of texture when the mean original colour is darker. The parameter "textureratio"
is calculated using the following formula:

exp(-((mean_r + mean_g + mean_b)/ratio)^power)

The values of the parameters "ratio" and "power" can be modified in different applications of the method to control the amount of texture adjustment produced. In preferred form, the value of ratio is typically 150 and power is typically 3. After all pixels have been re-coloured accordingly, the resulting image is outputted to the user on a viewing display or saved onto a storage device.
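
Tying steps 82 and 84 together, a sketch of the mask combination and the per-pixel re-colouring follows. The clamping of results to the 0-255 range and the inversion of non-hair masks (such as the skin masks) to 255 minus their value before multiplication are assumptions of this sketch; the description above does not spell out those details.

    import math
    import numpy as np

    def master_mask(hair_masks, non_hair_masks):
        # Multiply probability values pixel-by-pixel between masks and
        # scale the product back into the 0-255 range (step 82).
        combined = np.ones_like(hair_masks[0], dtype=float)
        for m in hair_masks:
            combined *= m / 255.0
        for m in non_hair_masks:
            combined *= (255.0 - m) / 255.0  # invert non-hair masks
        return combined * 255.0

    def recolour_pixel(old, mean, new, mastermask, mixratio,
                       ratio=150.0, power=3.0):
        # textureratio is close to 1 for dark original hair (more texture
        # added) and close to 0 for light original hair.
        textureratio = math.exp(
            -((mean[0] + mean[1] + mean[2]) / ratio) ** power)
        out = []
        for o, m, n in zip(old, mean, new):
            c = o + mixratio * (mastermask *
                                ((o - m) * textureratio + (n - m))) / 255.0
            out.append(min(max(round(c), 0), 255))  # clamp to 0-255
        return tuple(out)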

The invention can be implemented with a computer system and appropriate software supplied for example on a compact disk. It can also be provided as an online service where users upload images through their home computers, mobile systems, or other consoles with network capability. The invention also has application to mobile camera phones and digital cameras with a software or hardware implementation. Because of its automatic nature, the invention can be implemented at a kiosk equipped to capture an image of a customer and suggest hair colors. As an advertising technique, a dye manufacturer may provide a computer interface that identifies distinct dyes and demonstrates how a customer would look after selecting a particular dye.
It will be appreciated that particular embodiments of the invention have been described and that modifications may be made to those embodiments without necessarily departing from the scope of the appended claims.

Claims (20)

1. In a digital colour image formed of pixels and displaying a person including the person's face and hair, a method of changing the colour of the hair in the digital image, comprising:

selecting a new hair colour;

identifying the position of the face in the digital image;
automatically selecting one or more hair sampling regions based on the expected position of the hair relative to the identified position of the face;

automatically extracting one or more characteristics of the hair from the selected one or more hair sampling regions;

automatically comparing pixels in the image with the extracted one or more hair characteristics thereby to detect hair pixels throughout the image, the detecting of the hair pixels including potential errors involving identification of non-hair pixels as hair pixels;

automatically comparing pixels in the image with one or more criteria indicating that a pixel is a non-hair pixel thereby allowing detection of non-hair pixels identified in error as detected hair pixels; and,

automatically applying the new hair colour selectively to the detected hair pixels, the application of the new hair colour comprising detecting non-hair pixels identified in error as hair pixels and avoiding application of the new hair colour to such non-hair pixels.
2. The method of claim 1 in which the one or more extracted hair characteristics include the mean value and variance of the colour of the hair.
3. The method of claim 2 in which:

the one or more extracted hair characteristics include texture; and,

the texture is extracted by generating a colour histogram of the pixels in the selected one or more hair sampling regions.
4. The method of claim 2 in which the selective application of the new hair colour to each of the detected hair pixels comprises adding an amount to the existing colour of the pixel which amount corresponds to the difference between the new hair colour and the mean value of the original hair colour as extracted from the one or more hair sampling regions.
5. The method of claim 1 in which the automatic comparison of pixels in the image with the one or more extracted hair characteristics comprises generating one or more likelihood masks indicating the probability that each pixel in the image is a hair pixel.
6. The method of claim 5 comprising:

automatically examining the one or more likelihood masks to identify spots in the image of less than a predetermined size containing pixels with relatively low probability of being hair pixels and surrounded by pixels with relatively high probability of being hair pixels; and,

adjusting the one or more likelihood masks to indicate that the pixels contained in each of the identified spots are likely hair pixels.
7. The method of claim 1 in which:

the method comprises automatically selecting one or more skin sampling regions based on the expected position of the one or more skin sampling regions relative to the position of the face;

the method comprises automatically extracting one or more characteristics of the skin from the selected one or more skin sampling regions;

the method comprises automatically comparing the detected hair pixels with the one or more extracted skin characteristics thereby to identify detected hair pixels that substantially match the extracted skin characteristics; and,

the selective application of the new hair colour to detected hair pixels comprises avoiding application of the new hair colour to the detected hair pixels whose characteristics substantially match the one or more extracted skin characteristics.
8. The method of claim 7 in which the extracted one or more characteristics of the skin include the mean value and variance of the colour of the skin.
9. The method of claim 8 in which:

the extracted characteristics of the skin include texture; and,

the skin texture is detected by generating a colour histogram of the pixels in the selected one or more skin sampling regions.
10. The method of claim 7 in which:

the automatic comparison of pixels of the image with the one or more extracted hair characteristics comprises generating one or more likelihood masks indicating the probability that each pixel in the image is a hair pixel;

the automatic comparison of the detected hair pixels with the one or more extracted skin characteristics comprises generating one or more likelihood masks indicating the probability that any pixel in the image is a skin pixel; and,

the application of the new colour comprises comparing all of the likelihood masks thereby to identify errors in the identification of any pixel as a detected hair pixel and to avoid application of the new colour to detected hair pixels identified in error.
11. The method of claim 7 in which the automatic application of the new hair colour to the detected hair pixels comprises avoiding application of the new hair colour to detected hair pixels located more than a predetermined distance below the position of the face.
12. The method of claim 7 in which the automatic application of the new hair colour to the detected hair pixels comprises avoiding application of the new hair colour to detected hair pixels that are located within a preselected generally oval region substantially centered relative to the face and contained within the borders of the face.
13. In a digital colour image formed of pixels and displaying a person including the person's face and hair, a method of changing the colour of the hair in the digital image, comprising:

selecting a new hair colour;

identifying the position of the face in the digital image;
automatically selecting one or more hair sampling regions based on the expected position of the hair relative to the identified position of the face;

automatically extracting one or more characteristics of the hair from the selected one or more hair sampling regions;

automatically comparing pixels in the image with the one or more extracted hair characteristics and generating one or more likelihood masks indicating the probability that each of the pixels in the image is a hair pixel;

automatically selecting one or more skin sampling regions based on the expected position of the one or more skin sampling regions relative to the position of the face;

automatically extracting one or more characteristics of the skin from the selected one or more skin sampling regions;

automatically comparing pixels in the image with the one or more extracted skin characteristics and generating one or more likelihood masks indicating the probability that each of the pixels in the image is a skin pixel; and,

automatically applying the new hair colour selectively to the pixels in the image responsive to at least the one or more likelihood masks indicating the probability that each pixel is a hair pixel and the one or more likelihood masks indicating the probability that each pixel is a skin pixel.
14. The method of claim 13 comprising generating a master likelihood mask before application of the new hair colour by combining at least the one or more likelihood masks indicating the probability that each pixel is a hair pixel and the one or more likelihood masks indicating the probability that each pixel is a skin pixel thereby to indicate in the master likelihood mask the overall probability that any pixel in the image is a hair pixel.
15. The method of claim 13 comprising generating one or more likelihood masks in response to the distance of each of the pixels in the image from either a central point in the face or below the bottom of the face.
16. The method of claim 15 comprising generating a master likelihood mask before application of the new hair colour by combining at least the one or more likelihood masks indicating the probability that each pixel is a hair pixel, the one or more likelihood masks indicating the probability that each pixel is a skin pixel, and the one or more likelihood masks generated in response to distance, thereby to indicate the overall probability of whether a pixel in the image is a hair pixel.
17. In a digital colour image formed of pixels and displaying a person including the person's face and hair, a method of changing the colour of the hair in the digital image, comprising:

selecting a new hair colour;

identifying the position of the face in the digital image;
automatically selecting one or more hair sampling regions based on the expected position of the hair relative to the identified position of the face;

automatically extracting one or more characteristics of the hair from the selected one or more hair sampling regions;

automatically comparing pixels in the image with the one or more extracted hair characteristics and generating one or more hair likelihood masks indicating the probability that each of the pixels in the image is a hair pixel;

automatically comparing pixels in the image with a plurality of other criteria and generating a plurality of non-hair likelihood masks each indicating the probability that each of the pixels in the image is not a hair pixel;

combining the hair likelihood masks with the non-hair likelihood masks to generate a master likelihood mask indicating the overall probability that a pixel in the image is a hair pixel; and,

automatically applying the new hair colour selectively to the pixels in the image in response to the master likelihood mask.
18. The method of claim 17 comprising:

automatically examining the master likelihood mask to identify spots in the image of less than a predetermined size containing pixels with relatively low probability of being hair pixels surrounded by pixels with a relatively high probability of being hair pixels; and,

adjusting the master likelihood mask to indicate that the pixels contained in each of the identified spots are likely hair pixels.
19. In a digital colour image formed of pixels and displaying a person including the person's face and hair, a method of identifying hair pixels, comprising:

identifying the position of the face in the digital image;
automatically selecting one or more hair sampling regions based on the expected position of the hair relative to the identified position of the face;

automatically extracting one or more characteristics of the hair from the selected one or more hair sampling regions; and,

automatically comparing pixels in the image with the extracted one or more hair characteristics thereby to detect hair pixels.
20. A computer product containing instructions in a form readable by a computer and adapted to implement the method of any one of claims 1-19 on the computer.
CA2651539A 2008-07-18 2009-01-29 Method and apparatus for hair colour simulation Active CA2651539C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13515408P 2008-07-18 2008-07-18
US60/135,154 2008-07-18

Publications (2)

Publication Number Publication Date
CA2651539A1 (en) 2010-01-18
CA2651539C (en) 2016-07-05

Family

ID=41571007

Family Applications (1)

Application Number Title Priority Date Filing Date
CA2651539A Active CA2651539C (en) 2008-07-18 2009-01-29 Method and apparatus for hair colour simulation

Country Status (1)

Country Link
CA (1) CA2651539C (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10866716B2 (en) 2019-04-04 2020-12-15 Wheesearch, Inc. System and method for providing highly personalized information regarding products and services

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016050576A1 (en) * 2014-10-02 2016-04-07 Henkel Ag & Co. Kgaa Method and data-processing device for computer-assisted hair colouring guidance
US10217244B2 (en) 2014-10-02 2019-02-26 Henkel Ag & Co. Kgaa Method and data processing device for computer-assisted hair coloring guidance
US9928601B2 (en) 2014-12-01 2018-03-27 Modiface Inc. Automatic segmentation of hair in images
US10939742B2 (en) 2017-07-13 2021-03-09 Shiseido Company, Limited Systems and methods for virtual facial makeup removal and simulation, fast facial detection and landmark tracking, reduction in input video lag and shaking, and a method for recommending makeup
US11000107B2 (en) 2017-07-13 2021-05-11 Shiseido Company, Limited Systems and methods for virtual facial makeup removal and simulation, fast facial detection and landmark tracking, reduction in input video lag and shaking, and method for recommending makeup
US11039675B2 (en) 2017-07-13 2021-06-22 Shiseido Company, Limited Systems and methods for virtual facial makeup removal and simulation, fast facial detection and landmark tracking, reduction in input video lag and shaking, and method for recommending makeup
US11344102B2 (en) 2017-07-13 2022-05-31 Shiseido Company, Limited Systems and methods for virtual facial makeup removal and simulation, fast facial detection and landmark tracking, reduction in input video lag and shaking, and a method for recommending makeup


Legal Events

EEER: Examination request