WO2001077976A2 - Image segmenting to enable electronic shopping for wearable goods and cosmetic services - Google Patents


Info

Publication number
WO2001077976A2
WO2001077976A2 (PCT/US2001/009729)
Authority
WO
WIPO (PCT)
Prior art keywords
image
customer
segment
regions
services
Prior art date
Application number
PCT/US2001/009729
Other languages
French (fr)
Other versions
WO2001077976A3 (en)
Inventor
Pierre N. Fay
Joshua Flachsbart
David Franklin
Original Assignee
Eyeweb, Inc.
Priority date
Filing date
Publication date
Priority to US53656000A priority Critical
Priority to US09/536,560 priority
Application filed by Eyeweb, Inc. filed Critical Eyeweb, Inc.
Publication of WO2001077976A2 publication Critical patent/WO2001077976A2/en
Publication of WO2001077976A3 publication Critical patent/WO2001077976A3/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06KRECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/00221Acquiring or recognising human faces, facial parts, facial sketches, facial expressions
    • G06K9/00268Feature extraction; Face representation
    • G06K9/00281Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06QDATA PROCESSING SYSTEMS OR METHODS, SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce, e.g. shopping or e-commerce
    • G06Q30/06Buying, selling or leasing transactions
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/149Segmentation; Edge detection involving deformable models, e.g. active contour models
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/168Segmentation; Edge detection involving transform domain methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing
    • G06T2207/20064Wavelet transform [DWT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20072Graph-based image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Abstract

A system and corresponding method for segmenting into regions a digital image derived from a photograph of part of a body, such as a photograph of the face of a person. The segmented regions can then be used as the basis for showing, for example, the face of the person wearing different cosmetic materials, since particular kinds of cosmetic materials are applied to particular regions of the face. The image segmenting is performed using a segment locator, responsive to a digital image of some part of a body and also to generic locating guidance. The segment locator provides approximate segment location information, which is used by a segment delineator to provide a precisely segmented image, in the form of at least one binary mask identifying a particular region along with the original image.

Description

IMAGE SEGMENTING TO ENABLE ELECTRONIC SHOPPING FOR WEARABLE GOODS AND COSMETIC SERVICES

FIELD OF THE INVENTION

The present invention pertains to the field of shopping for goods that are worn by a customer (including, for example, clothes, jewelry and cosmetics) or cosmetic services (including, for example, hair styles and hair treatments), using a display device to allow a shopper to electronically (virtually) try on the goods or (virtually) try out the cosmetic services. More specifically, the present invention is concerned with image segmenting technology used to identify segments of an image of a customer where goods would be worn or applied (in the case of cosmetics), or where cosmetic services would be directed, thus making it possible for a computer to use the segmented image to show the customer with different goods worn or applied, or to show the customer as the customer would appear as a result of cosmetic services.

BACKGROUND OF THE INVENTION

In purchasing goods to be worn or applied or in purchasing cosmetic services, a customer would like to see how the customer would look to others when wearing the goods, or for example after a makeover (including the use of cosmetic materials, hair treatments and hair styling, and jewelry). For example, in purchasing a cosmetic material, it is very often useful to try on the cosmetic material, such as lipstick, eyeshadow, or blush, before purchasing it. In trying on first one cosmetic material and then another, it is necessary for the customer to remove the first cosmetic material. In addition, to actually try on a cosmetic material, a customer must visit a store selling the cosmetic material.

Today, because of the increased computing capabilities of readily available computers, including so-called personal computers, it is reasonable to attempt to have a computer provide a customer with an image of the customer's own face or an image of more of the customer, modified to illustrate how the customer would appear wearing a cosmetic material or other wearable goods, including even a hairstyle. To do so, however, requires a precise identification in the image of those regions of the customer's face or other parts of the customer where the goods would be worn (or, in the case of cosmetics, where the cosmetic materials would be applied).

If this could be done, then a customer could visit a retail store and use a computer at the retail store to do a virtual "trying on" of the wearable goods, or a virtual "trying out" of cosmetic services, and could even do the virtual "trying on" or "trying out" from home, or from some other, convenient location.

SUMMARY OF THE INVENTION

Accordingly, the present invention provides an image segmenter and a corresponding method for segmenting an image, for segmenting into predetermined regions a digital image derived from a photograph of a person or animal, including: a segment locator, responsive to the digital image and to generic locating guidance, for providing segment location information; and a segment delineator, responsive to the digital image and to the segment location information, for providing a segmented image including a binary mask.

In a further aspect of the invention, the segment locator is a template deforming module, and the segment delineator is a steerable filter module. In such an embodiment, the generic locating guidance is a deformable template (a kind of generic representation of the part of the body whose image is being segmented) and associated constraints (indicating how the part of the body whose image is being segmented may change from individual to individual), and the segment location information is a deformed template.

In other further aspects of the invention, either the segment locator or the segment delineator is based on a neural network, or either the segment locator or the segment delineator is based on principal component analysis, or either the segment locator or the segment delineator is based on wavelet graph analysis.

In yet other further aspects of the invention, the predetermined regions include a lipstick region, an eyeshadow region, and a blush region. The predetermined regions can in general be any regions of a body, including regions where eyeglasses are worn, or where a shirt or blouse is worn, or where pants are worn, or the individual's hair (for helping show a hair style or hair treatment).

In yet another aspect of the invention, the image segmenter is part of a shopping system allowing a customer to electronically try on wearable goods (including cosmetic materials) or electronically try out cosmetic services. The shopping system according to the invention includes: a means for providing the digital image derived from a photograph of a person; and a remote electronic store, responsive to the digital image, the remote electronic store including the segmenting system, the remote electronic store for providing the customer via a display device with made-up images showing how the customer would appear wearing the wearable goods or as a result of the cosmetic services.

In another aspect of the invention, a method is provided for enabling a customer to electronically try on wearable goods (including cosmetic materials) or electronically try out cosmetic services, including the steps of: taking a photograph of the customer and providing a digital encoding of the photograph; and performing a two-part process to precisely identify pre-determined regions where the wearable goods would be worn or applied or where the cosmetic services would be directed, a first part for approximately locating each of the regions, and a second part to delineate the approximately located regions.

BRIEF DESCRIPTION OF THE DRAWINGS

The above and other objects, features and advantages of the invention will become apparent from a consideration of the subsequent detailed description presented in connection with accompanying drawings, in which:

Fig. 1 is a flowchart/block diagram showing the elements of a system for electronically trying on cosmetics, as an example of wearable goods, allowing for the purchase of the cosmetics tried on, according to the present invention;

Fig. 2 is an illustration of a human face, indicating regions identified by the present invention where different types of cosmetics (eyeshadow, blush, and lipstick) would be applied;

Fig. 3 is a flow chart showing the steps according to the present invention for electronically trying on and ordering cosmetic materials;

Fig. 4 is a flowchart/block diagram showing the principal components of a face segmenter (one kind of image segmenter), and in particular a face segmenter for identifying where on a human face to apply different types of cosmetic materials;

Fig. 5 is a flow chart showing one embodiment of a method for providing a segmented image, according to the present invention; and

Fig. 6 is a flowchart/block diagram showing the principal components of an image segmenting system according to the present invention.

BEST MODE FOR CARRYING OUT THE INVENTION

The image segmenting invention of the present invention will first be described with respect to identifying segments (regions) in a photograph of a customer showing where cosmetic materials would be applied. Such image segmenting is called here face segmenting in the case of a photograph of only a customer's face. Then the preferred embodiment of the image segmenting invention itself will be described, again in particular with respect to face segmenting allowing a customer to electronically try on cosmetics. Finally, other embodiments of the image segmenting of the present invention will be described, and also other applications, besides electronically trying on cosmetics.

Face (Image) Segmenting to Allow Electronically Trying On Cosmetics

Referring now to Fig. 1, a system enabling a customer 12' to electronically try on and possibly order cosmetic materials, such as eyeshadow, lipstick, or blush, includes one or more kiosks 10, which are located in convenient locations, such as in retail stores, shopping malls or in specialty stores, or even as standalone facilities in parking lots and on street corners. In addition, the system includes a remote electronic store 20, which interfaces with the kiosk 10 via respective interfaces 24 and 13. The system optionally includes a personal computer 14 located in a cosmetics retail location 18, or even in the kiosk 10. Alternatively, a customer 12' can use any personal computer 14, such as a home personal computer, to interface with the system of the present invention. In the preferred embodiment, a customer 12' uses a personal computer from a convenient location, such as the customer's home, to interface with the system of the present invention.

Still referring to Fig. 1, according to the present invention, a customer 12 visits the kiosk 10 where a photograph of the customer is taken, possibly by an assistant, but preferably in a purely automated process (for example by a machine like those that automatically take photographs of tourists at pavilions or other locations visited mostly by tourists). The photograph is provided to the remote electronic store 20 as a digitized image, along with the customer's password.

It is of course not necessary for a customer to visit a kiosk to have the customer's photograph taken. If the customer has an appropriate camera (high-resolution, color) and equipment needed to provide a photograph as a digital image (equipment that may be integral with a camera, such as a digital camera), the customer could load a digital image of a photograph of the customer onto a home computer and provide the digital image to the remote electronic store 20 over the Internet.

The remote electronic store includes an image maker 21 that examines the image, as will be described below, to determine what regions of the image correspond to regions of the customer's face where different types of cosmetics would be applied and to produce a set of (binary) masks (one for each region) that can overlay the image and show the customer how the customer would appear wearing different cosmetic materials. The combination of the set of binary masks along with the original image is here called a segmented image. The image maker 21 saves the segmented image and password in a customer database 25. Later, the customer 12' uses a personal computer 14 to access the remote electronic store 20 so as to be able to electronically try on different cosmetic materials, such as eyeshadow, lipstick, and blush. The customer provides the customer's password and indicates through choice indicators what cosmetic materials the customer would like to "try on." (The customer would also use choice indicators to place an order.) The image maker then retrieves from the customer database the segmented image based on the customer password and retrieves from a cosmetic database 22 information about the cosmetics selected for electronically (virtually) "trying on" by the customer, and provides to the customer an image, called here a made-up image, of the customer wearing the selected cosmetic materials. If the customer 12' then chooses to purchase the cosmetic material, the customer can provide a credit card number or other billing information to the remote electronic store 20, and the selected cosmetic materials and credit card number are then stored in a cosmetics retail database 16 as an order to be processed. An interface 17 with a retail location then provides the order to an appropriate cosmetics retail location 18, a location where the customer might conveniently pick up and possibly pay for the order.
Alternatively, instead of having the customer visit the cosmetics retail location, the cosmetics retail location 18 may ship the ordered cosmetic materials to the customer.
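The made-up image described above amounts to blending a cosmetic color into the original photograph wherever a region's binary mask is set. A minimal sketch of that compositing step, assuming NumPy arrays for the image and masks (the function name and opacity parameter are illustrative, not from the patent):

```python
import numpy as np

def apply_cosmetic(image, mask, color, opacity=0.5):
    """Blend a cosmetic color into the masked region of an RGB image.

    image:   H x W x 3 uint8 array (the original photograph)
    mask:    H x W boolean array (binary mask for one region, e.g. lips)
    color:   (r, g, b) tuple for the cosmetic material
    opacity: blend strength; 1.0 fully replaces the region's color
    """
    out = image.astype(np.float32).copy()
    # Only pixels inside the mask are changed; the rest of the face is untouched.
    out[mask] = (1.0 - opacity) * out[mask] + opacity * np.asarray(color, np.float32)
    return out.astype(np.uint8)

# Example: tint a small synthetic "lip" region with a lipstick color.
img = np.full((4, 4, 3), 200, np.uint8)
lips = np.zeros((4, 4), bool)
lips[2:4, 1:3] = True
made_up = apply_cosmetic(img, lips, (180, 30, 60), opacity=0.6)
```

A full system would apply one such blend per mask (lipstick, eyeshadow, blush) to build the final made-up image.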

Still referring to Fig. 1, the kiosk 10 includes a turnkey photo system 11, which includes a camera 15 capable of taking a high resolution color photograph of a customer's face. A customer 12 then uses the interface 13 with the remote electronic store 20 to obtain a password. In addition, the customer can provide for the first time an indication of what types of cosmetics the customer is interested in. This preference information is called here cosmetics selection data. It can indicate particular preferred cosmetics (by brand name), or can indicate information useful to the remote electronic store for suggesting particular cosmetics. For example, the customer can be asked to indicate whether the customer plays sports, or is often outdoors, which would indicate that some kinds of cosmetics are more suitable than others.

The kiosk 10 provides to the electronic store, and in particular to the image maker 21 through the interface with the kiosk, the customer's password, image data and cosmetics selection data. Later, when the customer 12' uses the personal computer 14 to electronically try on cosmetic materials, the remote electronic store 20 suggests particular cosmetic materials to the customer. To suggest cosmetic materials, the customer interface 23 pulls out of the customer database 25 the cosmetics selection data provided by the customer at the kiosk, analyzes the cosmetics selection data and refers to the cosmetics database 22 for cosmetics product information, indicated as cosmetics data in Fig. 1, based on the cosmetics selection data. In the preferred embodiment, cosmetics data includes information not only about the product itself such as its ingredients, but also data useful to the image maker in determining how to display cosmetic material when it is applied to a human face. The cosmetics data is retrieved from the cosmetics database 22 by the image maker 21 in preparing a made-up image. It is of course also sometimes advantageous to have a customer input further cosmetics selection data through the personal computer 14 while in the course of electronically trying on cosmetic materials.

In the preferred embodiment, the customer interface 23 also suggests cosmetics materials based on the original (color) image of customer, stored with the segmented image in the customer database 25, whether or not the customer ever provides cosmetics selection data.

Fig. 2 illustrates different regions of a face the image maker 21 (Fig. 1) must identify, regions where different kinds of cosmetic materials would be applied. For example, the image (digitized version of the original photograph taken at the kiosk) will include a region 31 where lipstick would be applied. The image maker might also identify a region (not shown) where lipliner would be applied. In addition, the image would include regions 32a-b where blush would be applied. Finally, an image would include regions where eye cosmetics would be applied, including for example eyeliner (in regions 33a-b and 35a-b) and eyeshadow (in regions 34a-b). In addition, the image maker would identify regions (not shown) where mascara would be applied (on the eyelashes).

Referring now to Fig. 3, a flow chart is shown indicating the overall process beginning with the customer visiting a kiosk and terminating in a customer ordering cosmetics.

Face (Image) Segmenting Using a Deformable Template and a Steerable Filter Module

The image segmenting of the present invention, referred to as face segmenting in case of identifying only segments (regions) in an image of a face, will now be described in the particular case of identifying segments of a human face where different cosmetic materials would be applied.

Referring now to Fig. 4, a face segmenter 40, a subsystem of the image maker 21 used to identify where in an image (digitized photograph of the customer's face) different cosmetic materials would be applied, is shown in the preferred embodiment as including a template deforming module 41 and a steerable filter module 42. The template deforming module 41 uses as an input the image of the customer acquired at the kiosk 10 and also uses a deformable template for a human face, along with constraints on how the human face can change from one person to another. For example, a human face will have a typical range of aspect ratios of height to width, so that in varying for example the height, the width can be assumed to be known to within some typical range of values. The template deforming module 41 produces a deformed template which provides an approximate location of the target regions, i.e. the regions where the different cosmetics materials would be applied.

In the preferred embodiment, in producing the deformed template, the template deforming module 41 uses as input not only the original image, but also key points marked on the image by a technician or even by the customer. The key points provide the precise locations of some particular features of the face in the image, such as the centers of the pupils of the customer. The template deforming module deforms the starting template by fixing the corresponding locations in the deformable template to the precisely located key points in the image, and as the template is deformed in one dimension to fit the key points, it also deforms in the orthogonal direction according to the constraint information, absent some overriding information.
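The deformation step above can be sketched in a toy form: fit the template's scale and position to two marked key points (the pupil centers), and let the orthogonal (vertical) scale follow from an aspect-ratio constraint. The template coordinates, the 0.6 inter-pupil spacing, and the aspect value are all hypothetical placeholders, not values from the patent:

```python
# A toy template: feature points in normalized template coordinates,
# with the origin between the pupils.
TEMPLATE = {
    "left_pupil":  (-0.3, 0.0),
    "right_pupil": ( 0.3, 0.0),
    "mouth_center": (0.0, 0.55),
}
ASPECT = 1.3  # assumed constraint: face height stays near 1.3x face width

def deform_template(left_pupil, right_pupil):
    """Fit the template to two marked key points (pupil centers in pixels).

    The horizontal scale comes from the measured inter-pupil distance; the
    vertical scale follows from the aspect-ratio constraint, absent any
    overriding information, as the description suggests.
    """
    (lx, ly), (rx, ry) = left_pupil, right_pupil
    cx, cy = (lx + rx) / 2.0, (ly + ry) / 2.0
    width = (rx - lx) / 0.6            # template pupils are 0.6 units apart
    height = ASPECT * width            # orthogonal direction from constraint
    return {name: (cx + u * width, cy + v * height)
            for name, (u, v) in TEMPLATE.items()}

# Pupils marked at pixel coordinates (100, 120) and (160, 120).
fit = deform_template((100, 120), (160, 120))
```

The deformed template `fit` then gives approximate pixel locations for every template feature, which is exactly the kind of approximate target-region location the steerable filter stage refines.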

The steerable filter module 42 then uses the deformed template and the original image to more precisely locate the target regions. It does so by essentially identifying for each particular region edges in the image that delineate the region, using the deformed template merely as a guide to locating the different regions. The final product of the steerable filter module is a segmented image, i.e. the original image and in addition a set of binary masks (one for each region) that are to be overlaid on the original image and that include information about where on the image different cosmetic materials would be applied.
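A steerable filter of the kind this module could use is built from a derivative-of-Gaussian basis: the response at any orientation theta is cos(theta) times the x-derivative response plus sin(theta) times the y-derivative response, so edges bounding a region can be sought along the orientation the deformed template predicts. A minimal sketch, using plain finite differences in place of Gaussian-smoothed derivatives (the function name is illustrative):

```python
import numpy as np

def oriented_edge_response(image, theta):
    """Steer a first-derivative filter pair to orientation theta.

    The x/y derivative pair forms a steerable basis: the response at any
    angle is cos(theta)*Gx + sin(theta)*Gy. Here the derivatives are plain
    central differences for brevity; a real implementation would smooth
    with a Gaussian first.
    """
    gy, gx = np.gradient(image.astype(np.float64))  # axis 0 is y (rows)
    return np.cos(theta) * gx + np.sin(theta) * gy

# A vertical step edge responds maximally when steered to theta = 0
# (gradient pointing horizontally) and not at all at theta = pi/2.
step = np.zeros((5, 6))
step[:, 3:] = 1.0
horiz = np.abs(oriented_edge_response(step, 0.0)).max()
vert = np.abs(oriented_edge_response(step, np.pi / 2)).max()
```

Scanning theta at each candidate boundary pixel picks out the locally dominant edge orientation, which is what lets the module trace a region outline starting from the deformed template's rough placement.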

In the preferred embodiment, the original image is marked (either by a technician or by the customer) to locate the key points either at the kiosk 10 (see Fig. 1) or at the remote electronic store. Instead of manually locating key points, however, it is also possible to use other technology, such as neural networks, to automatically examine the image provided by the high resolution camera 15 (Fig. 1) so as to precisely locate the key points.

Referring now to Fig. 5, a flow chart is provided indicating the process performed by the image maker 21 (Fig. 1). In the preferred embodiment the image that is acquired at the kiosk and used as an input to the image maker is a 24-bit color, high-resolution image, along with a set of coordinates locating on the image the key points, which in the preferred embodiment include the center of the iris of each eye, the points making up a vertical line drawn down the center of the face, and the points making up a horizontal line drawn across the corners of the mouth.

Other Image Segmenting and Other Applications

Besides using a deformable template coupled with a steerable filter module, it is sometimes advantageous to use various other methods for providing a segmented image, including principal component analysis (based on modeling images as reducible matrices in a high-dimensional space), wavelet graph analysis (similar to deformable template matching, except that Gabor wavelet transform localization is included in every node in the deformable template while the deformable template is being adjusted to fit the image), and neural network based recognition systems (for example, systems using neural networks trained with a series of customer images using back propagation to determine the weights for connections between radial basis functions in a network of such functions). According to the present invention, whatever particular methods are used, image segmenting is a task bifurcated into two subtasks: finding the general location of a segment (or equivalently approximately determining a region), and then delineating the segment (i.e. more precisely determining the region).
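To make the principal component analysis alternative concrete: one common pattern is to learn a low-dimensional subspace from training patches of a target region (e.g. lips), then score candidate image patches by their distance to that subspace, with low reconstruction error indicating the region. A small sketch under those assumptions, using an SVD for the components (the training data here is synthetic):

```python
import numpy as np

def fit_pca(patches, n_components=2):
    """Learn principal components from flattened training patches (one per row)."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    # SVD of the centered data: rows of vt are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def reconstruction_error(patch, mean, components):
    """Distance from a patch to the learned subspace; a low error means the
    patch resembles the training region (e.g. looks lip-like)."""
    centered = patch - mean
    coeffs = components @ centered
    return float(np.linalg.norm(centered - components.T @ coeffs))

rng = np.random.default_rng(0)
base = np.linspace(0.0, 1.0, 16)                      # idealized region appearance
train = base + 0.01 * rng.standard_normal((20, 16))   # noisy training patches
mean, comps = fit_pca(train, n_components=2)

region_score = reconstruction_error(base, mean, comps)
other_score = reconstruction_error(rng.standard_normal(16), mean, comps)
```

Sliding this score over the image (or over the neighborhood the locator proposes) would serve either subtask, which is why the text lists PCA as usable for both the locator and the delineator.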

Referring now to Fig. 6, an image segmenter 60 is shown as including a segment locator module 61 for locating a segment generally and a segment delineator module 62 for delineating the approximately defined and located segment. The segment locator 61 uses as input the image derived from the photograph of a person (or even a non-human animal), as well as generic guidance on locating one or more segments in the image, i.e. guidance that is useful for an image of the same part of any body, not guidance associated with a particular body. Thus, for example, in the case of the image segmenter of Fig. 4 (called a face segmenter when applied to segmenting the image of a face, as opposed to other parts of a body), the generic locating guidance includes a (generic) deformable template and (generic) constraints.

The segment locator 61 may optionally use key points to assist it in its subtask, as in the embodiment of a face segmenter shown in Fig. 4. As mentioned above, however, the need for key points can be eliminated by adjusting the segment locator to accomplish its subtask without reliance on key points. Also, if key points are used, they can be provided automatically by a pre-processor stage (not shown), such as a neural network based system for identifying in an unmarked image key points of use to the segment locator 61.

The output of the segment locator 61 is segment location information, such as is provided by the deformed template in the face segmenting embodiment of Fig. 4. The segment delineator 62 uses the segment location information along with the original image, unmarked with key points, to more precisely define the segment, ultimately providing a segmented image, i.e. a set of binary masks along with the original unmarked image.
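The two-subtask architecture just described fixes only the interfaces between the stages, not their implementations. A minimal sketch of that composition, with stub stages standing in for any of the techniques named (deformable template, steerable filter, neural network, PCA, wavelet graph); all names here are illustrative:

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class SegmentedImage:
    """The segmenter's output: the original image plus one binary mask
    per predetermined region (lipstick, eyeshadow, blush, ...)."""
    image: Any
    masks: Dict[str, Any]

def make_image_segmenter(locate: Callable, delineate: Callable) -> Callable:
    """Compose the two subtasks into one segmenter.

    `locate`    maps (image, generic_guidance) -> approximate location info
                (e.g. a deformed template).
    `delineate` maps (image, location_info) -> {region name: binary mask}.
    Either stage may be swapped for any of the named techniques without
    changing the other.
    """
    def segment(image, guidance):
        location_info = locate(image, guidance)
        return SegmentedImage(image, delineate(image, location_info))
    return segment

# Stub stages, just to show the data flow between locator and delineator.
segmenter = make_image_segmenter(
    locate=lambda img, guidance: {"lips": "approximate box"},
    delineate=lambda img, loc: {name: "binary mask" for name in loc},
)
result = segmenter("photo", "template+constraints")
```

The key design point the patent emphasizes survives in the sketch: the delineator sees only the locator's output and the unmarked original image, so the two stages are independently replaceable.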

Besides using a deformable template for the segment locator module 61 and a steerable filter module for the segment delineator module 62, as in the face segmenting embodiment of Fig. 4, each module can be based on a different neural network, one trained to recognize the general location of predetermined target regions (such as regions associated with the eyes or lips) , and one trained to precisely discriminate between a target region and its surroundings.

As has been mentioned above, the present invention is intended to comprehend applications involving the segmenting into regions of the image of any part of a body, and not necessarily the body of a human. In addition, an image segmented according to the present invention can be used not only to display how cosmetics would appear on a wearer, but also how other items would appear, including jewelry, clothes (including a hat), eyeglasses, a hairstyle and treated hair, or even a tattoo or other form of wearable art. The present invention allows electronically trying on any wearable goods and trying out cosmetic services, allowing shopping for such goods and services even from home over the Internet.

It is to be understood that the above-described arrangements are only illustrative of the application of the principles of the present invention. Numerous modifications and alternative arrangements may be devised by those skilled in the art without departing from the spirit and scope of the present invention, and the appended claims are intended to cover such modifications and arrangements.

Claims

What is claimed is:
1. An image segmenter for segmenting into predetermined regions a digital image derived from a photograph of a person or animal, comprising: a) a segment locator, responsive to the digital image and to generic locating guidance, for providing segment location information; and b) a segment delineator, responsive to the digital image and to the segment location information, for providing a segmented image including a binary mask.
2. An image segmenter as in claim 1, wherein the segment locator is a template deforming module, wherein the segment delineator is a steerable filter module, wherein the generic locating guidance is a deformable template and associated constraints, and further wherein the segment location information is a deformed template.
3. An image segmenter as in claim 1, wherein either the segment locator or the segment delineator is based on a neural network.
4. An image segmenter as in claim 1, wherein either the segment locator or the segment delineator is based on principal component analysis .
5. An image segmenter as in claim 1, wherein either the segment locator or the segment delineator is based on wavelet graph analysis.
6. An image segmenter as in claim 1, wherein the predetermined regions include a lipstick region, an eyeshadow region, and a blush region.
7. An image segmenter as in claim 1, wherein the image segmenter is part of a shopping system allowing a customer to electronically try on wearable goods or electronically try out cosmetic services, the shopping system comprising: a) a means for providing the digital image derived from a photograph of a person; and b) a remote electronic store, responsive to the digital image, the remote electronic store including the segmenting system, the remote electronic store for providing the customer via a display device with made-up images showing how the customer would appear wearing the wearable goods or as a result of the cosmetic services.
8. An image segmenter as in claim 7, wherein the remote electronic store suggests particular wearable goods or cosmetic services based on the image or based on selection data provided by the customer.
9. A method for enabling a customer to electronically try on wearable goods or electronically try out cosmetic services, comprising the steps of: a) taking a photograph of the customer and providing a digital encoding of the photograph; and b) performing a two-part process to precisely identify pre-determined regions where the wearable goods would be worn or applied or where the cosmetic services would be directed, a first part for approximately locating each of the regions, and a second part to delineate the approximately located regions.
10. The method of claim 9, wherein the first part uses a template deforming module and the second part uses a steerable filter module.
11. The method of claim 9, wherein at least one of the two parts uses a neural network.
12. The method of claim 9, wherein at least one of the two parts uses principal component analysis.
13. The method of claim 9, wherein at least one of the two parts uses wavelet graph analysis.
14. A method as in claim 9, further comprising the steps of: a) obtaining from the customer at least some wearable goods or cosmetics services selection data; and b) recommending to the customer particular wearable goods or cosmetics services based on the wearable goods or cosmetics services selection data or based on the customer image.
15. A method as in claim 14, wherein the wearable goods or cosmetics services selection data includes lifestyle information, such as whether the customer often participates in sporting activities.
PCT/US2001/009729 2000-03-28 2001-03-27 Image segmenting to enable electronic shopping for wearable goods and cosmetic services WO2001077976A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US53656000A true 2000-03-28 2000-03-28
US09/536,560 2000-03-28

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU4948101A AU4948101A (en) 2000-03-28 2001-03-27 Image segmenting to enable electronic shopping for wearable goods and cosmetic services

Publications (2)

Publication Number Publication Date
WO2001077976A2 true WO2001077976A2 (en) 2001-10-18
WO2001077976A3 WO2001077976A3 (en) 2003-03-13

Family

ID=24139014

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/009729 WO2001077976A2 (en) 2000-03-28 2001-03-27 Image segmenting to enable electronic shopping for wearable goods and cosmetic services

Country Status (2)

Country Link
AU (1) AU4948101A (en)
WO (1) WO2001077976A2 (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1372109A2 (en) * 2002-05-31 2003-12-17 Eastman Kodak Company Method and system for enhancing portrait images
US6761697B2 (en) 2001-10-01 2004-07-13 L'oreal Sa Methods and systems for predicting and/or tracking changes in external body conditions
WO2006003625A1 (en) * 2004-07-02 2006-01-12 Koninklijke Philips Electronics N.V. Video processing
US7187788B2 (en) 2003-02-28 2007-03-06 Eastman Kodak Company Method and system for enhancing portrait images that are processed in a batch mode
US7324668B2 (en) 2001-10-01 2008-01-29 L'oreal S.A. Feature extraction in beauty analysis
US7437344B2 (en) 2001-10-01 2008-10-14 L'oreal S.A. Use of artificial intelligence in providing beauty advice
US7634103B2 (en) 2001-10-01 2009-12-15 L'oreal S.A. Analysis using a three-dimensional facial image
US8007062B2 (en) 2005-08-12 2011-08-30 Tcms Transparent Beauty Llc System and method for applying a reflectance modifying agent to improve the visual attractiveness of human skin
US8184901B2 (en) 2007-02-12 2012-05-22 Tcms Transparent Beauty Llc System and method for applying a reflectance modifying agent to change a person's appearance based on a digital image
US8942775B2 (en) 2006-08-14 2015-01-27 Tcms Transparent Beauty Llc Handheld apparatus and method for the automated application of cosmetics and other substances
US10002498B2 (en) 2013-06-17 2018-06-19 Jason Sylvester Method and apparatus for improved sales program and user interface
US10092082B2 (en) 2007-05-29 2018-10-09 Tcms Transparent Beauty Llc Apparatus and method for the precision application of cosmetics
US10467779B2 (en) 2018-12-04 2019-11-05 Tcms Transparent Beauty Llc System and method for applying a reflectance modifying agent to change a person's appearance based on a digital image

Citations (1)

Publication number Priority date Publication date Assignee Title
FR2728982A1 (en) * 1994-12-29 1996-07-05 Jean Marc Robin Automatic recognition of characteristic facial features and simulation of aesthetic image of a target (face) real

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
FR2728982A1 (en) * 1994-12-29 1996-07-05 Jean Marc Robin Automatic recognition of characteristic facial features and simulation of aesthetic image of a target (face) real

Non-Patent Citations (5)

Title
BARRETT W A: "A survey of face recognition algorithms and testing results" SIGNALS, SYSTEMS & COMPUTERS, 1997. CONFERENCE RECORD OF THE THIRTY-FIRST ASILOMAR CONFERENCE ON PACIFIC GROVE, CA, USA 2-5 NOV. 1997, LOS ALAMITOS, CA, USA,IEEE COMPUT. SOC, US, 2 November 1997 (1997-11-02), pages 301-305, XP010280840 ISBN: 0-8186-8316-3 *
CHOW G ET AL: "TOWARDS A SYSTEM FOR AUTOMATIC FACIAL FEATURE DETECTION" PATTERN RECOGNITION, PERGAMON PRESS INC. ELMSFORD, N.Y, US, vol. 26, no. 12, 1 December 1993 (1993-12-01), pages 1739-1755, XP000420368 ISSN: 0031-3203 *
HOWARD A ET AL: "A multi-stage neural network for automatic target detection" NEURAL NETWORKS PROCEEDINGS, 1998. IEEE WORLD CONGRESS ON COMPUTATIONAL INTELLIGENCE. THE 1998 IEEE INTERNATIONAL JOINT CONFERENCE ON ANCHORAGE, AK, USA 4-9 MAY 1998, NEW YORK, NY, USA,IEEE, US, 4 May 1998 (1998-05-04), pages 231-236, XP010286538 ISBN: 0-7803-4859-1 *
LEE R S T ET AL: "An integrated elastic contour fitting and attribute graph matching model for automatic face coding and recognition" KNOWLEDGE-BASED INTELLIGENT INFORMATION ENGINEERING SYSTEMS, 1999. THIRD INTERNATIONAL CONFERENCE ADELAIDE, SA, AUSTRALIA 31 AUG.-1 SEPT. 1999, PISCATAWAY, NJ, USA,IEEE, US, 31 August 1999 (1999-08-31), pages 292-295, XP010370958 ISBN: 0-7803-5578-4 *
YIN CHAN ET AL: "Video shot classification using human faces" PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP) LAUSANNE, SEPT. 16 - 19, 1996, NEW YORK, IEEE, US, vol. 1, 16 September 1996 (1996-09-16), pages 843-846, XP010202526 ISBN: 0-7803-3259-8 *

Cited By (24)

Publication number Priority date Publication date Assignee Title
US6761697B2 (en) 2001-10-01 2004-07-13 L'oreal Sa Methods and systems for predicting and/or tracking changes in external body conditions
US7437344B2 (en) 2001-10-01 2008-10-14 L'oreal S.A. Use of artificial intelligence in providing beauty advice
US7634103B2 (en) 2001-10-01 2009-12-15 L'oreal S.A. Analysis using a three-dimensional facial image
US7324668B2 (en) 2001-10-01 2008-01-29 L'oreal S.A. Feature extraction in beauty analysis
EP1372109A2 (en) * 2002-05-31 2003-12-17 Eastman Kodak Company Method and system for enhancing portrait images
EP1372109A3 (en) * 2002-05-31 2005-02-09 Eastman Kodak Company Method and system for enhancing portrait images
US7082211B2 (en) 2002-05-31 2006-07-25 Eastman Kodak Company Method and system for enhancing portrait images
US7602949B2 (en) 2003-02-28 2009-10-13 Eastman Kodak Company Method and system for enhancing portrait images that are processed in a batch mode
US7187788B2 (en) 2003-02-28 2007-03-06 Eastman Kodak Company Method and system for enhancing portrait images that are processed in a batch mode
US7212657B2 (en) 2003-02-28 2007-05-01 Eastman Kodak Company Method and system for enhancing portrait image that are processed in a batch mode
US7636485B2 (en) 2003-02-28 2009-12-22 Eastman Kodak Company Method and system for enhancing portrait images that are processed in a batch mode
WO2006003625A1 (en) * 2004-07-02 2006-01-12 Koninklijke Philips Electronics N.V. Video processing
US8915562B2 (en) 2005-08-12 2014-12-23 Tcms Transparent Beauty Llc System and method for applying a reflectance modifying agent to improve the visual attractiveness of human skin
US10016046B2 (en) 2005-08-12 2018-07-10 Tcms Transparent Beauty, Llc System and method for applying a reflectance modifying agent to improve the visual attractiveness of human skin
US9247802B2 (en) 2005-08-12 2016-02-02 Tcms Transparent Beauty Llc System and method for medical monitoring and treatment through cosmetic monitoring and treatment
US8007062B2 (en) 2005-08-12 2011-08-30 Tcms Transparent Beauty Llc System and method for applying a reflectance modifying agent to improve the visual attractiveness of human skin
US8942775B2 (en) 2006-08-14 2015-01-27 Tcms Transparent Beauty Llc Handheld apparatus and method for the automated application of cosmetics and other substances
US10043292B2 (en) 2006-08-14 2018-08-07 Tcms Transparent Beauty Llc System and method for applying a reflectance modifying agent to change a person's appearance based on a digital image
US8582830B2 (en) 2007-02-12 2013-11-12 Tcms Transparent Beauty Llc System and method for applying a reflectance modifying agent to change a persons appearance based on a digital image
US8184901B2 (en) 2007-02-12 2012-05-22 Tcms Transparent Beauty Llc System and method for applying a reflectance modifying agent to change a person's appearance based on a digital image
US10163230B2 (en) 2007-02-12 2018-12-25 Tcms Transparent Beauty Llc System and method for applying a reflectance modifying agent to change a person's appearance based on a digital image
US10092082B2 (en) 2007-05-29 2018-10-09 Tcms Transparent Beauty Llc Apparatus and method for the precision application of cosmetics
US10002498B2 (en) 2013-06-17 2018-06-19 Jason Sylvester Method and apparatus for improved sales program and user interface
US10467779B2 (en) 2018-12-04 2019-11-05 Tcms Transparent Beauty Llc System and method for applying a reflectance modifying agent to change a person's appearance based on a digital image

Also Published As

Publication number Publication date
WO2001077976A3 (en) 2003-03-13
AU4948101A (en) 2001-10-23

Similar Documents

Publication Publication Date Title
Dantcheva et al. What else does your biometric data reveal? A survey on soft biometrics
US7437344B2 (en) Use of artificial intelligence in providing beauty advice
US6492986B1 (en) Method for human face shape and motion estimation based on integrating optical flow and deformable models
US7364293B2 (en) Method and system for ordering customized cosmetic contact lenses
US6959119B2 (en) Method of evaluating cosmetic products on a consumer with future predictive transformation
US6188777B1 (en) Method and apparatus for personnel detection and tracking
US6624843B2 (en) Customer image capture and use thereof in a retailing system
NL1007397C2 (en) A method and apparatus for reproducing with a modified appearance of at least a portion of the human body.
US7016824B2 (en) Interactive try-on platform for eyeglasses
US8976160B2 (en) User interface and authentication for a virtual mirror
JP3529954B2 (en) Face immediately taxonomy and facial features map
EP0552770A2 (en) Apparatus for extracting facial image characteristic points
AU769041B2 (en) Virtual makeover
EP1424655A2 (en) A method of creating 3-D facial models starting form face images
AU2009253838B2 (en) An item recommendation system
CN102947850B (en) Content output means, content output method
US20030065578A1 (en) Methods and systems involving simulated application of beauty products
JP6443472B2 (en) Search support system, search support method, and search support program
AU2014308590B2 (en) Method and system to create custom products
US20160080662A1 (en) Methods for extracting objects from digital images and for performing color change on the object
US6583792B1 (en) System and method for accurately displaying superimposed images
US7062454B1 (en) Previewing system and method
US20080199042A1 (en) Targeted marketing system and method
US7479956B2 (en) Method of virtual garment fitting, selection, and processing
US7324668B2 (en) Feature extraction in beauty analysis

Legal Events

Date Code Title Description
AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 69(1) EPC

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase in:

Ref country code: JP