WO2022144233A1 - Method for determining a skin colour of a face and corresponding system - Google Patents
- Publication number
- WO2022144233A1 — PCT/EP2021/086979
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24133—Distances to prototypes
- G06F18/24137—Distances to cluster centroïds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- A—HUMAN NECESSITIES
- A45—HAND OR TRAVELLING ARTICLES
- A45D—HAIRDRESSING OR SHAVING EQUIPMENT; EQUIPMENT FOR COSMETICS OR COSMETIC TREATMENTS, e.g. FOR MANICURING OR PEDICURING
- A45D44/00—Other cosmetic or toiletry articles, e.g. for hairdressers' rooms
- A45D2044/007—Devices for determining the condition of hair or skin or for selecting the appropriate cosmetic or hair treatment
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/10—Recognition assisted with metadata
Definitions
- TITLE: Method for determining a skin colour of a face and corresponding system
- Modes of implementation and of embodiment of the present invention relate to a method for determining a skin colour of a face, by way of non-limiting example, and to a corresponding system, in particular in the context of the recommendation of cosmetic products.
- Patent application PCT/US2020/041350 which was filed on 9th July 2020 and claimed the priority of provisional patent application US 16/516080, which was filed on 18th July 2019, describes techniques that allow these technical limitations to be overcome in order to accurately estimate skin colour in an image, whatever the lighting conditions.
- patent application PCT/US2020/041350 describes a system that uses at least one machine-learning model to generate an automatic estimation of skin colour.
- a user has a mobile computational device equipped with a photographic sensor and captures one or more images of his face with the photographic sensor. The mobile computational device transmits the one or more images to a device for determining skin colour that uses one or more machine-learning models to determine skin colour on the basis of the one or more images.
- the user may capture a plurality of images by recording a video, instead of one isolated image, and thus modify the lighting conditions by moving the mobile computational device during the capture of the video.
- the machine-learning model may be used to generate a plurality of determinations of skin colour, which may then be averaged or otherwise combined to improve the accuracy of the determination.
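As a hedged illustration of how a plurality of determinations might be "averaged or otherwise combined" (the function name and colour values below are hypothetical, not from the patent), a median is one simple combination that resists frames captured under aberrant lighting:

```python
import numpy as np

# Hypothetical sketch: combine several per-image skin-colour estimates
# (here sRGB triplets in [0, 255]) into one more accurate value.
def combine_estimates(estimates, method="median"):
    """estimates: list of (R, G, B) skin-colour determinations."""
    arr = np.asarray(estimates, dtype=float)
    if method == "mean":
        return arr.mean(axis=0)
    # the median is less sensitive to a frame with aberrant lighting
    return np.median(arr, axis=0)

frames = [(182, 140, 120), (178, 136, 118), (240, 200, 180)]  # last frame over-exposed
print(combine_estimates(frames))  # the median damps the outlier frame
```

The median here is a design choice for robustness; the patent itself only requires that the determinations be "averaged or otherwise combined".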
- the user computational device may be a smartphone, a tablet computer, a personal computer, a smartwatch or any other computational device able to have an image library.
- By “image library” what is meant is a directory, conventionally provided in user computational devices, that has access to all or almost all of the images and photographs stored in the memory of the user computational device or in a remote memory accessible from the user computational device, a memory of “cloud” type for example.
- Said at least one image is preferably a photograph taken with a photographic sensor of the user computational device, or optionally a photograph taken with another device and imported into the user computational device.
- the machine-learning model, although previously trained to process images the lighting conditions of which are modified by movement of the mobile computational device during the capture of a video, performs even better when images whose lighting conditions are even more diverse and varied are processed, i.e. images such as may exist in the image library of the user computational device.
- although the invention is particularly advantageously and profitably applicable to the determination of the skin colour of a face, it may nevertheless be applied to the determination of the skin colour of another region of interest, for example another part of the body, in particular with a view to applying, to said region of interest, at least one cosmetic product based on said evaluation of the skin colour of this region of interest.
- said at least one image includes metadata timestamping the creation of the respective image (i.e. for example the date and time of the capture of the photograph); and the evaluation of the colour of the skin is carried out on the basis of the numerical value of each image weighted according to a current date and to respective timestamping metadata.
- the weighting may correspond to the attribution of an index of confidence to the image, according to the date on which the photograph was taken, for example in order to give, in the evaluation of skin colour, less weight to photographs that are too old or more weight to very recent photographs.
- the numerical value of each image is weighted according to the current date and to respective timestamping metadata so as to take into account a variation in the hue of the skin colour in the course of seasons of the year.
- the variation in the hue of the skin colour, or in other words tanning, may have a large amplitude in certain people, and is naturally (i.e. ignoring artificial tanning methods) closely correlated with the seasons of the year.
- this mode of implementation may for example be provided so as to give less weight to photographs taken in the season opposite to the season of the current date, and to give more weight to photographs taken in the same season as the season of the current date.
- the weighting taking account of the variation in the hue of the skin colour in the course of the seasons of the year is carried out so that said evaluation of the skin colour provides a prediction of the skin colour as it will be after the current date.
- this mode of implementation may be provided so as to give less weight to photographs taken in the seasons preceding the season of the current date, and to give more weight to photographs taken in the seasons following the season of the current date.
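One way to sketch such a season-dependent weight is the following (a hypothetical scheme, not the patent's actual formula; it approximates the season by the month of capture, assuming a northern-hemisphere user):

```python
from datetime import date

# Hypothetical weighting sketch: photographs taken in the same season as
# the current date receive a high weight, photographs taken in the
# opposite season receive a low weight.
def season_weight(photo_date: date, current_date: date) -> float:
    # circular distance in months: 0 (same month) .. 6 (opposite season)
    diff = abs(photo_date.month - current_date.month)
    months_apart = min(diff, 12 - diff)
    return 1.0 - months_apart / 6.0  # 1.0 same season .. 0.0 opposite season

today = date(2021, 7, 15)
print(season_weight(date(2021, 7, 1), today))   # same season: weight 1.0
print(season_weight(date(2021, 1, 10), today))  # opposite season: weight 0.0
```

A predictive variant (as in the mode of implementation above) could instead compare the photo month to a target month after the current date.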
- said at least one image includes metadata timestamping the creation of the respective image and/or metadata geolocating the creation of the respective image (i.e. for example GPS coordinates of the place of capture of the photograph); and the processing comprises pre-processing comprising a correction of the colour temperature of each image according to the respective metadata.
- timestamping and geolocating metadata make it possible to conjecture lighting conditions, with a respective colour temperature for which compensation is to be made.
- the timestamping metadata may indicate whether a photograph was taken during the day or at night
- the geolocating metadata may further indicate whether the photograph was taken in an inside space or in an outside space.
- the correction of the colour temperature of each image is carried out further according to weather-archive data corresponding to the respective metadata, allowing the colour temperature of the lighting conditions of the creation of the respective image to be estimated.
- the weather-archive data allow, with certainty, for a given date and a given place, especially daylight and nighttime hours, periods of sunrise and sunset, and insolation conditions to be known. This allows the accuracy of the conjecture as regards lighting conditions to be increased and the correction of colour temperature to be refined.
- the importing further comprises obtaining a reference image representing a reference face of the user
- the processing comprises pre-processing comprising a selection, via facial recognition, of the reference face in said at least one image imported from the image library of the user computational device.
- obtaining the reference image comprises a photographic capture of the isolated region of interest of the user, or an identification by the user of an image representing the isolated region of interest among the images of said image library.
- the machine-learning model is a pre-trained convolutional neural network.
- the method further comprises a recommendation of at least one cosmetic product based on said evaluation of the skin colour of the region of interest.
- a system for determining a skin colour of a region of interest of a user, comprising communication means that are suitable for communicating with a user computational device and that are configured to import, from an image library of the user computational device, at least one image containing a representation of the region of interest of the user.
- the system comprises processing means configured to perform processing of said at least one image with a machine-learning model suitable for providing, for each imported image, a numerical value representative of the skin colour of the region of interest present in said at least one imported image, and furthermore to perform evaluation of the skin colour of the region of interest on the basis of said one or more numerical values of each imported image.
- a system is also provided that comprises communication means that are suitable for communicating with a user computational device and that are configured to implement the importing step of the method such as defined above, and processing means configured to implement the steps of performing processing and evaluation of the method such as defined above.
- the system further comprises the user computational device configured to communicate to the communication means said at least one image of the image library.
- the communication means are suitable for communicating with the user computational device via a telecommunication network, such as the Internet.
- a computer program is also provided comprising instructions that, when the program is executed by a computer, lead the latter to implement the method such as defined above.
- a computer-readable medium comprising instructions that, when they are executed by a computer, lead the latter to implement the method such as defined above.
- Figures 1 to 3 illustrate modes of embodiment and of implementation of the invention.
- Figure 1 shows one example of a system SYS comprising processing means PU configured to perform processing TS of said at least one image IMi, IMj, IMk with a machine-learning model AI_TS, and to evaluate EVAL the skin colour of a region of interest of a user, present in the one or more images IMi, IMj, IMk.
- the description is also given with respect to the nonlimiting case in which the region of interest of the user is the face of the user.
- the machine-learning model is configured and trained to provide, for each image IMk, a numerical value ClrEstm_k representative of the skin colour of a face present in each of the images IMk.
- the means for performing evaluation EVAL are configured to evaluate the skin colour of the face on the basis of the numerical values ClrEstm_k of each image.
- the evaluation may comprise a computation combining the values ClrEstm_k, such as a computation of a mean or median, whether weighted or not.
- Communication means COM are suitable for communicating with a user computational device APP and are configured to receive IN_DAT said images IMi, IMj, IMk from an image library LIB of the user computational device APP.
- the user computational device APP may belong to the system SYS in the sense that it may be specifically configured, for example via software, via execution of an application or of a website, to collaborate with the processing means PU with a view to transmitting the images IMi, IMj, IMk during the import IN_DAT of the images by the processing means PU.
- the user computational device APP and the communication means COM may communicate using any suitable communication technology, such as wireless communication technologies, for example Wi-Fi, Wi-MAX, Bluetooth, 2G, 3G, 4G, 5G and LTE, or wired communication technologies, for example Ethernet, FireWire and USB.
- the user computational device and the device for determining skin colour may communicate at least partially via the Internet.
- the user computational device APP may for example be a smartphone, a tablet computer, a personal computer, a smartwatch or any other computational device able to have an image library LIB.
- the image library LIB is conventionally a directory provided in the interfaces of user computational devices APP, and which has access to all or almost all of the images and photographs IM1-IM4, IMk stored in an internal nonvolatile memory INT_NTVM of the user computational device APP.
- the images and photographs IM1- IM4, IMk of the library LIB may also or alternatively be stored in a remote memory of a server CLD accessible from the user computational device APP (for example in “the cloud” to use the well-known expression).
- Said at least one image IM1-IM4, IMk is preferably a photograph, of a face, taken with a photographic sensor CAM of the user computational device APP, or a photograph, of a face, taken with another device and imported into the image library LIB of the user computational device APP.
- the images IM1-IM4, IMk of the image library LIB have a digital image-data format, such as the JPEG format (JPEG standing for Joint Photographic Experts Group), the PNG format (PNG standing for Portable Network Graphics), the GIF format (GIF standing for Graphics Interchange Format), the TIFF format (TIFF standing for Tagged Image File Format) or any other image format.
- each image IM1-IM4, IMk of the library of images IMk further includes metadata MTD1-MTD4, MTDk providing diverse information regarding the images.
- certain at least of the images IM1-IM4, IMk of the library LIB include timestamping metadata, providing information on the date and time of creation of the respective image, and/or geolocating metadata providing information on the place of creation of the respective image.
- the timestamping and/or geolocating metadata may allow the processing by the machine-learning model AI_TS and the evaluation EVAL of skin colour to be refined and improved.
- the skin colour thus obtained by the machine-learning model AI_TS is true to life and may subsequently be used to recommend one or more cosmetic products that are precisely correlated with the skin colour of each user, in particular foundation, powder and variants of foundation and of powder, and cosmetic products that would complement or would be suitable for the skin colour of the user.
- Figure 2 illustrates one example of a method for determining a skin colour of a face, especially implemented by the processing means PU of the system described with reference to Figure 1.
- the method comprises importing IN_DAT at least one image IMk from the image library LIB of the user computational device, in a way such as described with reference to Figure 1.
- the processing TS, of said at least one image IMk, implemented in this example comprises pre-processing P_TS and processing with a machine-learning model AI_TS that is configured to provide a numerical value ClrEstm_k representative of the skin colour of a face present in said at least one image IMk.
- the pre-processing P_TS comprises converting the images IMk into a format suitable for the machine-learning model AI_TS.
- the pre-processing P_TS is suitable for isolating one or more portions of the image containing a face, via a conventional face-detection mechanism FceDet that is known per se, and for centring and scaling Cntr+Scl the portion of the image containing a face, so as to provide normalized data to the machine-learning model.
- the centring Cntr may comprise cropping the image so as to preserve only the portion containing the face
- the scaling Scl comprises enlarging or shrinking the cropped image, and under-sampling or over-sampling pixels of the cropped image, so as to provide a cropped image having a set size and a set resolution.
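The centring and scaling step Cntr+Scl can be sketched as follows (a minimal, dependency-free illustration assuming the face detector returned a bounding box; nearest-neighbour resampling stands in for whatever under/over-sampling the real pre-processing uses, and the 224-pixel output size is an assumption):

```python
import numpy as np

# Minimal sketch of centring (crop to the detected face) and scaling
# (resample to a set size and resolution) on an H x W x 3 image array.
def crop_and_scale(image: np.ndarray, box, out_size: int = 224) -> np.ndarray:
    x, y, w, h = box
    face = image[y:y + h, x:x + w]                # centring: keep only the face
    rows = (np.arange(out_size) * h // out_size)  # under/over-sampling indices
    cols = (np.arange(out_size) * w // out_size)
    return face[rows][:, cols]                    # fixed size and resolution

img = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
normalized = crop_and_scale(img, box=(200, 100, 150, 150))
print(normalized.shape)  # (224, 224, 3)
```

In practice a library resize (e.g. with anti-aliasing) would replace the index trick; the point is only that every face reaches the model at the same size and resolution.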
- the pre-processing further comprises selecting cropped images containing the same face, via a conventional facial recognition mechanism FceReco that is known per se.
- the facial recognition FceReco identifies and detects a target face, i.e. a reference face provided in a reference image IMref.
- the reference face is isolated in the reference image IMref, i.e. the reference face is the only face present in the reference image IMref.
- the reference image IMref may for example be imported from the image library LIB of the user computational device, and be identified to be the reference image IMref by a user.
- the reference image IMref may be taken by the user by means of the photographic sensor CAM of the user computational device, and is advantageously a self-portrait photograph (conventionally a “selfie”) PhtSlf, so as to contain the isolated face of the user by way of reference face.
- the images IMk of the image library advantageously include metadata MTDk.
- the pre-processing P_TS may advantageously comprise a correction of the colour temperature TempCorr of the imported images, which correction is established according to the respective metadata MTDk.
- Colour temperature is a quantity well known to those skilled in the art, and usually measured in kelvin, that characterizes a light source by comparison to the theoretical principle of thermal radiation of a black body.
- the correction of the colour temperature of each image is carried out further according to external data EXT_MTD, such as weather-archive data and geographic data, allowing, in correspondence with the respective metadata MTDk, the colour temperature of the lighting conditions of the creation of the respective image to be estimated.
- the timestamping and geolocating metadata MTDk make it possible to conjecture lighting conditions, with a respective colour temperature for which compensation is to be made.
- the timestamping metadata ts may indicate whether a photograph was taken during the day or at night
- the geolocating metadata geoloc may further indicate whether the photograph was taken in an inside space or in an outside space.
- the weather-archive data allow, with certainty, for a given date and a given place, especially daylight and nighttime hours, periods of sunrise and sunset, and insolation conditions to be known. This allows the accuracy of the conjecture as regards lighting conditions to be increased and the correction of colour temperature to be refined.
- the correction of colour temperature TempCorr by the pre-processing means P_TS may, for example, employ a list of conditions that may be met by the metadata MTDk, external data EXT_MTD optionally being indexed with said metadata MTDk.
- a lookup table may possibly allow a specific colour-temperature correction to be selected according to which conditions are met or not.
- if the metadata MTDk reflect conditions such as “night” and “inside”, the colour-temperature correction will then possibly be made so as to correct an illumination of the type “inside lighting” or “incandescent bulb”. If the metadata MTDk reflect the conditions “dusk”, “outside” and “sunlight”, then the colour-temperature correction will possibly be made so as to correct an illumination of the setting-sun type.
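The lookup-table selection of a correction can be sketched as below. The condition keys and the per-channel RGB gains are invented for illustration (the patent does not give numerical values); real gains would come from calibration against the conjectured colour temperature:

```python
# Hypothetical lookup table mapping conjectured capture conditions
# (derived from timestamping/geolocating metadata MTDk, optionally
# refined with external data EXT_MTD) to per-channel RGB gains.
CORRECTION_LUT = {
    ("night", "inside"): (0.92, 0.98, 1.12),  # compensate incandescent lighting
    ("dusk", "outside"): (0.88, 0.97, 1.10),  # compensate warm setting-sun light
    ("day", "outside"):  (1.00, 1.00, 1.00),  # neutral daylight: no correction
}

def correct_temperature(pixel, conditions):
    """Apply the colour-temperature gains selected for the met conditions."""
    gains = CORRECTION_LUT.get(conditions, (1.0, 1.0, 1.0))  # default: identity
    return tuple(min(255, round(c * g)) for c, g in zip(pixel, gains))

print(correct_temperature((200, 160, 140), ("dusk", "outside")))
```

Conditions not present in the table fall back to an identity correction, which matches the idea that a correction is only selected "according to which conditions are met or not".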
- the mechanism of correction of the colour temperature TempCorr is comparable to a debayering or demosaicing technique, such a technique conventionally being used to individually rebalance the RGB channels of the RAW image captured by the photographic sensor before the image is stored.
- the mechanism of correction of the colour temperature TempCorr advantageously uses a debayering or demosaicing matrix the points of which are tailored according to the conditions, which are evaluated via the metadata MTDk, in the imported images IMk. These adjustments are then consolidated in a static image file that is verified assuming that the colour of one element in the image will always be known (such as for example the white of an eye or to a lesser extent of teeth).
- the images IMk thus centred Cntr on a face FceDet, scaled Scl, optionally selected FceReco and optionally corrected TempCorr, are provided to the machine-learning model AI_TS.
- the machine-learning model AI_TS computes numerical values ClrEstm_k representative of the skin colour of the face present in each image IMk.
- the machine-learning model AI_TS may for example be implemented via a convolutional neural network that is pre-trained, for example in the way described in patent application PCT/US2020/041350, or such as summarized below with reference to Figure 3.
- the machine-learning model AI_TS may also be a feed-forward neural network, or a recurrent neural network. Any suitable training technique may be used, especially gradient-descent techniques such as stochastic gradient descent, batch gradient descent and mini-batch gradient descent.
- the machine-learning model AI_TS may be capable of selecting conforming images ImSel by detecting aberrant image-capture conditions in the image, such as especially a nonconforming image-capture angle.
- This selection of conforming images may be carried out in two stages. Firstly, a machine-learning model allows explicit tests of conditions to be carried out (for example: the face is looking at the photographic sensor, a single face is in the image, a face is indeed present in the image). After this first filtering stage, a second selection is made using a quality score wght1, which is for example learnt in a weakly supervised way during the training of the model AI_TS.
- the quality score wght1 is purely statistical and is especially optimized to improve the accuracy of the model AI_TS during training.
- the machine-learning model AI_TS may assign each result a first weight wght1, especially allowing aberrant results to be discarded.
- an evaluation EVAL of the skin colour of the face is carried out on the basis of said numerical values ClrEstm_k and of the respective first weights wght1.
- the evaluation EVAL may comprise a computation combining the values ClrEstm_k, such as a computation of a mean or median, weighted with the first weights wght1.
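A weighted mean over the per-image values, with near-zero first weights effectively discarding aberrant results, can be sketched as follows (the numbers are hypothetical):

```python
import numpy as np

# Sketch of the evaluation EVAL: a per-channel weighted mean of the
# numerical values ClrEstm_k, weighted with the first weights wght1.
def evaluate(values, wght1):
    values = np.asarray(values, dtype=float)
    w = np.asarray(wght1, dtype=float)
    return (values * w[:, None]).sum(axis=0) / w.sum()

clr_estm = [(180, 140, 120), (184, 144, 124), (90, 60, 50)]  # last result aberrant
wght1 = [1.0, 1.0, 0.0]  # a zero weight discards the aberrant result
print(evaluate(clr_estm, wght1))  # → [182. 142. 122.]
```

The second weights wght2 described below could simply be multiplied into `wght1` before this computation.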
- the evaluation may advantageously be weighted by second weights wght2, which are obtained on the basis of the metadata MTDk of the respective images, in particular the timestamping metadata ts, in order to take into account a variation in the hue of the skin colour in the course of the seasons of the year, i.e. tanning of the skin.
- the weight of images the date of which is that of a season opposite the season of the current date may be decreased using a second weight wght2 of low value, whereas the weight of images the date of which is that of the same season as the season of the current date may be increased using a second weight wght2 of high value.
- the second weights wght2 may be determined so that said evaluation EVAL of the skin colour provides a prediction of the skin colour as it will be after the current date.
- this may be achieved by giving less weight to images the date of which is that of seasons preceding the season of the current date, and by giving more weight to images the date of which is that of seasons following the season of the current date.
- Figure 3 illustrates a non-limiting example of implementation of training TRN of the machine-learning model AI_TS with a view to achieving provision of a numerical value ClrEstm_k representative of the skin colour of a face present in at least one image IMk, such as described above with reference to Figures 1 and 2.
- the training may be implemented by processing means PU comprising additional training means TRN, especially configured to control the means for receiving data IN_DAT, the pre-processing means P_TS, the machine-learning model AI_TS and the evaluating means EVAL.
- a set of training images IN_DAT associated with factual skin-colour information GRND_TRTH, which information will be referred to as the “ground truth”, is collected for the implementation of the training TRN of the machine-learning model AI_TS.
- the ground truth GRND_TRTH as to the skin colour is used as empirical evidence or information to label the images provided by a training subject, i.e. a volunteer, who will be referred to as the panellist.
- the ground truth GRND_TRTH may be collected by the panellist, using a technique for determining skin colour that is standard in the industry, such as comparison with a colour chart or evaluation by a spectrophotometer dedicated to the measurement of skin colours.
- one portion of the image may contain a representation of a known reference colour chart, and a correction of the colour of the image that returns the representation of the reference colour chart to its original colours may allow the ground truth GRND_TRTH as to the skin colour present in the image after said colour correction to be determined.
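A hedged sketch of that chart-based correction: per-channel gains are chosen so that the captured chart patch returns to its known reference colour, and the same gains are applied to the skin pixels (all numbers below are invented for illustration):

```python
import numpy as np

# Per-channel gains that return a captured reference-chart patch to its
# known original colour; applying them to the whole image yields the
# colour-corrected image from which the ground truth is read.
def chart_correction(captured_patch, reference_patch):
    captured = np.asarray(captured_patch, dtype=float)
    reference = np.asarray(reference_patch, dtype=float)
    return reference / captured  # per-channel gain

gain = chart_correction(captured_patch=(220, 200, 170),   # chart white under warm light
                        reference_patch=(255, 255, 255))  # known chart colour
skin = np.array([176, 138, 110], dtype=float)
corrected = np.minimum(skin * gain, 255)  # clamp to the 8-bit range
print(corrected.round().astype(int))
```

A real correction would use several chart patches and a fitted transform rather than a single white patch, but the principle is the same.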
- the colour correction that provides the ground truth GRND_TRTH may be implemented in the pre-processing P_TS on command by the training means TRN.
- a spectrophotometer may be used at least once on the skin of the panellist, the colour measured by the spectrophotometer possibly being used directly, give or take any translation of the colour code, as the ground truth GRND_TRTH.
- the set of training images IN_DAT is provided by the panellist via a user computational device of the same type as the device APP described above with reference to Figure 1.
- the training images IN_DAT are imported via use of the photographic sensor CAM of the user computational device of the panellist to capture one or more training images comprising the face of the panellist.
- the training images may be drawn from individual photographs PhtSlf, for example taken in selfie mode, or from at least one video recording Vid360, for example taken in selfie mode, and in which the lighting conditions may be modified by making the viewpoint move, for example via a rotation of the device around the face of the panellist.
- Multiple training images may then be generated by extracting individual images from the video Vid360. It may be advantageous to capture a video, from which a plurality of training images may be drawn, at least because it will considerably increase the efficiency with which a large amount of training data under various lighting conditions is generated.
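Drawing multiple training images from a Vid360-style recording can be sketched as sampling frames at a regular stride, so the training set spans the full rotation of the device and thus the full range of lighting conditions (the frame counts are illustrative):

```python
# Sketch: choose which frame indices of a video to extract as individual
# training images, spread evenly across the recording.
def sample_frame_indices(n_frames: int, n_samples: int):
    if n_samples >= n_frames:
        return list(range(n_frames))
    stride = n_frames / n_samples
    return [int(i * stride) for i in range(n_samples)]

# e.g. a 10-second video at 30 fps, keeping 10 training frames
print(sample_frame_indices(300, 10))  # → [0, 30, 60, ..., 270]
```

The actual decoding of those frames would be done with a video library; only the even-spread sampling logic is shown here.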
- the pre-processing P_TS only comprises face detection FceDet and centring and scaling Cntr+Scl such as described above with reference to Figure 2.
- as regards face detection FceDet, the assumption may be made that the panellist will provide images only of his face, in which case facial recognition FceReco is not necessary.
- the metadata provide no particular context allowing the correction of colour temperature TempCorr to be parameterized.
- the training images IN DAT may in addition or alternatively be drawn from an image library (IMkGLIB) of the user computational device of the panellist.
- provision may be made to carry out the steps of facial recognition FceReco and of correction of colour temperature TempCorr such as described with reference to Figure 2.
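The centring and scaling step Cntr+Scl can be sketched as computing a square crop window centred on the bounding box returned by face detection FceDet, enlarged by a fixed margin and clamped to the image bounds. The function name, coordinate convention and margin value are assumptions made for illustration only:

```python
def centred_crop(box, img_w, img_h, margin=1.3):
    """Square crop centred on a face box (x0, y0, x1, y1), clamped to the image."""
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2          # face centre
    half = max(x1 - x0, y1 - y0) * margin / 2      # half-side with margin
    left = max(0, int(cx - half))
    top = max(0, int(cy - half))
    right = min(img_w, int(cx + half))
    bottom = min(img_h, int(cy + half))
    return left, top, right, bottom

# Face detected at (100, 120)-(220, 260) in a 640x480 image.
crop = centred_crop((100, 120, 220, 260), 640, 480)
```

The cropped region would then be rescaled to the fixed input size expected by the machine-learning model.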
- the machine-learning model AI TS executes parameterizable computations on the training images IN DAT, and provides numerical values ClrEstm representative of the skin colour of a face present in the training images.
- the evaluation EVAL of the skin colour may be carried out such as described with reference to Figure 2, via a statistical operation AVRG that combines the various numerical values ClrEstm.
- the skin colour thus evaluated EVAL, or indeed the numerical values ClrEstm directly, is then compared to the ground truth GRND TRTH in order to reparameterize the computations executed by the machine-learning model AI TS so as to get as close as possible to the ground truth.
- the ground truth GRND TRTH as to the skin colour is used as labelling datum to indicate a desired result of the processing of the training images IN DAT by the machine-learning model AI TS.
- the machine-learning model AI TS is advantageously trained with the data of a plurality of panellists.
- the training images IN DAT and the ground truth GRND TRTH associated with each panellist may be stored in a memory of the processing means PU and training means TRN.
- the computation of the machine-learning model AI TS is parameterized using the data of all of the panellists in order to be as universal as possible.
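The evaluation EVAL via the statistical operation AVRG might, for example, be a per-channel mean taken after discarding estimates that deviate too far from the median, so that aberrant values from poorly lit images do not skew the result. The rejection threshold and the exact combination rule are assumptions for illustration; the application only specifies a statistical operation combining the values ClrEstm:

```python
from statistics import mean, median

def combine_estimates(estimates, max_dev=0.15):
    """Combine per-image RGB estimates ClrEstm into one skin colour (AVRG)."""
    channels = []
    for c in range(3):
        values = [e[c] for e in estimates]
        m = median(values)
        # Keep only values close to the median; fall back to all if none remain.
        kept = [v for v in values if abs(v - m) <= max_dev] or values
        channels.append(mean(kept))
    return tuple(channels)

# Five per-image estimates (normalised RGB); the last one is aberrant.
estimates = [(0.80, 0.62, 0.50), (0.82, 0.60, 0.52), (0.81, 0.61, 0.51),
             (0.79, 0.63, 0.49), (0.30, 0.20, 0.10)]
skin = combine_estimates(estimates)
```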
- the processing means PU use the machine-learning model AI TS such as described above to determine the skin colour of the face present in at least one image imported from an image library of a user computational device APP.
- the skin colour may then be used to recommend one or more cosmetic products that complement or are suitable for the skin colour thus determined.
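One simple way the recommendation step could work, sketched here under the assumption of a nearest-shade match by Euclidean distance in the colour space used; the shade catalogue and matching rule are invented for illustration, the application not specifying any particular recommendation mechanism:

```python
import math

# Hypothetical catalogue of cosmetic shades (normalised RGB values).
SHADES = {
    "shade_01": (0.92, 0.78, 0.68),
    "shade_02": (0.81, 0.62, 0.50),
    "shade_03": (0.55, 0.38, 0.28),
}

def recommend(skin_colour):
    """Return the catalogue shade closest to the determined skin colour."""
    return min(SHADES, key=lambda name: math.dist(SHADES[name], skin_colour))

best = recommend((0.80, 0.61, 0.51))
```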
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/259,915 US20240086989A1 (en) | 2020-12-31 | 2021-12-21 | Method for determining a skin colour of a face and corresponding system |
JP2023540164A JP2024502818A (en) | 2020-12-31 | 2021-12-21 | Method and corresponding system for determining facial skin color |
KR1020237019387A KR20230101895A (en) | 2020-12-31 | 2021-12-21 | Method and corresponding system for determining facial skin color |
EP21843700.2A EP4271223A1 (en) | 2020-12-31 | 2021-12-21 | Method for determining a skin colour of a face and corresponding system |
CN202180080152.5A CN116648728A (en) | 2020-12-31 | 2021-12-21 | Method for determining facial skin color and corresponding system |
Applications Claiming Priority (12)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/139,457 | 2020-12-31 | ||
US17/139,338 US20210235845A1 (en) | 2020-01-31 | 2020-12-31 | Smart swappable cartridge system for cosmetic dispensing device |
US17/139,457 US20210236863A1 (en) | 2020-01-31 | 2020-12-31 | Ecosystem for dispensing personalized skincare product |
US17/139,338 | 2020-12-31 | ||
US17/139,454 US11900434B2 (en) | 2020-01-31 | 2020-12-31 | Ecosystem for dispensing personalized foundation |
US17/139,391 US11478063B2 (en) | 2020-01-31 | 2020-12-31 | Cleaning system for cosmetic dispensing device |
US17/139,340 US11935107B2 (en) | 2020-01-31 | 2020-12-31 | Ecosystem for dispensing personalized lipstick |
US17/139,340 | 2020-12-31 | ||
US17/139,454 | 2020-12-31 | ||
US17/139,391 | 2020-12-31 | ||
FRFR2101039 | 2021-02-03 | ||
FR2101039A FR3118517B1 (en) | 2020-12-31 | 2021-02-03 | Method for determining the color of the skin of a face and corresponding system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022144233A1 true WO2022144233A1 (en) | 2022-07-07 |
Family
ID=79601771
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2021/086979 WO2022144233A1 (en) | 2020-12-31 | 2021-12-21 | Method for determining a skin colour of a face and corresponding system |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2022144233A1 (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008108760A1 (en) * | 2007-03-08 | 2008-09-12 | Hewlett-Packard Development Company, L.P. | Method and system for recommending a product based upon skin color estimated from an image |
US10347163B1 (en) * | 2008-11-13 | 2019-07-09 | F.lux Software LLC | Adaptive color in illuminative devices |
WO2020118977A1 (en) * | 2018-12-15 | 2020-06-18 | 深圳市华星光电半导体显示技术有限公司 | Image color temperature correction method and device |
- 2021-12-21: PCT/EP2021/086979 filed as WO2022144233A1 (status: active, Application Filing)
Non-Patent Citations (1)
- KIPS, ROBIN, et al.: "Beyond color correction: Skin color estimation in the wild through deep learning", Electronic Imaging, 1 January 2020, pages 1-8, XP055850777, DOI: 10.2352/ISSN.2470-1173.2020.5.MAAP-082, retrieved from the Internet <URL:https://www.ingentaconnect.com/content/ist/ei/2020/00002020/00000005/art00005> [retrieved on 2021-10-13]
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108984657B (en) | Image recommendation method and device, terminal and readable storage medium | |
EP3654625B1 (en) | Method and system for providing recommendation information related to photography | |
CN108777815B (en) | Video processing method and device, electronic equipment and computer readable storage medium | |
CN107993191B (en) | Image processing method and device | |
CN108960290A (en) | Image processing method, device, computer readable storage medium and electronic equipment | |
US20170324965A1 (en) | Method and system for optimized delta encoding | |
CN108961302B (en) | Image processing method, image processing device, mobile terminal and computer readable storage medium | |
CN108765033B (en) | Advertisement information pushing method and device, storage medium and electronic equipment | |
CN105809146A (en) | Image scene recognition method and device | |
CN108897786A (en) | Recommended method, device, storage medium and the mobile terminal of application program | |
CN110418204B (en) | Video recommendation method, device, equipment and storage medium based on micro expression | |
CN107424117B (en) | Image beautifying method and device, computer readable storage medium and computer equipment | |
CN108548539B (en) | Navigation method and device based on image recognition, terminal and readable storage medium | |
CN108875820A (en) | Information processing method and device, electronic equipment, computer readable storage medium | |
CN108764371A (en) | Image processing method, device, computer readable storage medium and electronic equipment | |
CN112101195A (en) | Crowd density estimation method and device, computer equipment and storage medium | |
CN107578003B (en) | Remote sensing image transfer learning method based on geographic marking image | |
US11200650B1 (en) | Dynamic image re-timing | |
US20240086989A1 (en) | Method for determining a skin colour of a face and corresponding system | |
CN113221695B (en) | Method for training skin color recognition model, method for recognizing skin color and related device | |
WO2022144233A1 (en) | Method for determining a skin colour of a face and corresponding system | |
CN108898163B (en) | Information processing method and device, electronic equipment and computer readable storage medium | |
CN116648728A (en) | Method for determining facial skin color and corresponding system | |
CN114580573A (en) | Image-based cloud amount, cloud shape and weather phenomenon inversion device and method | |
CN114218429A (en) | Video color ring setting method, system, device and storage medium |
Legal Events
Code | Title | Details |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21843700; Country: EP; Kind code: A1 |
WWE | Wipo information: entry into national phase | Ref document number: 202180080152.5; Country: CN |
ENP | Entry into the national phase | Ref document number: 20237019387; Country: KR; Kind code: A |
WWE | Wipo information: entry into national phase | Ref document numbers: 18259915 (US); 2023540164 (JP) |
NENP | Non-entry into the national phase | Ref country code: DE |
ENP | Entry into the national phase | Ref document number: 2021843700; Country: EP; Effective date: 2023-07-31 |