EP3718050A1 - Machine-implemented facial health and beauty assistant - Google Patents

Machine-implemented facial health and beauty assistant

Info

Publication number
EP3718050A1
Authority
EP
European Patent Office
Prior art keywords
user
facial skin
image
machine learning
learning models
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP19705420.8A
Other languages
German (de)
French (fr)
Inventor
Celia LUDWINSKI
Florent VALCESCHINI
Yuanjie Li
Zhiyuan Song
Christine EL-FAKHRI
Hemant Joshi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ludwinski Celia
LOreal SA
Original Assignee
LOreal SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LOreal SA filed Critical LOreal SA
Publication of EP3718050A1 publication Critical patent/EP3718050A1/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/40 Software arrangements specially adapted for pattern recognition, e.g. user interfaces or toolboxes therefor
    • G06F18/41 Interactive pattern learning with a human teacher
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • A HUMAN NECESSITIES
    • A45 HAND OR TRAVELLING ARTICLES
    • A45D HAIRDRESSING OR SHAVING EQUIPMENT; EQUIPMENT FOR COSMETICS OR COSMETIC TREATMENTS, e.g. FOR MANICURING OR PEDICURING
    • A45D44/00 Other cosmetic or toiletry articles, e.g. for hairdressers' rooms
    • A45D2044/007 Devices for determining the condition of hair or skin or for selecting the appropriate cosmetic or hair treatment

Definitions

  • a regimen is a systematic plan or course of action intended to improve the health and/or beauty of a human user.
  • a regimen might include cleaning the skin with a specific cleanser and applying specific creams, adhering to specific dietary constraints, changing sleep habits, etc.
  • Conventional health and beauty portals also lack the mechanisms by which features of a user’s skin can be tracked over time, such as to observe the efficacy of the recommended regimen. They lack sufficient information, explanation and advice, are not targeted separately for men and women, and suffer from false positives in detecting certain conditions (hair misidentified as wrinkles, for example).
  • the ability to recommend a health and/or beauty regimen through data analysis, as well as to track individual users’ progress through that regimen by data analysis, has yet to be realized on a machine.
  • One or more images are accepted by one or more processing circuits from a user depicting the user’s facial skin.
  • machine learning models stored in one or more memory circuits are applied to the one or more images to classify facial skin characteristics, identify significant objects, determine beauty trends, and the like.
  • a regimen recommendation is provided to the user based on the classified facial skin characteristics.
  • a system comprising: one or more memory circuits configured to store machine learning models; and one or more processing circuits configured to: accept at least one image from a user depicting the user’s facial skin; apply the machine learning models to the image to classify facial skin characteristics; and generate a regimen recommendation to the user based on the classified facial skin characteristics.
  • the one or more processing circuits is further configured to: accept another image from a user depicting the user’s facial skin; apply the machine learning models to the other image to classify facial skin characteristics; and update the regimen recommendation to the user based on the classified facial skin characteristics of the other image.
  • the one or more processors are further configured to process the image to progress the user’s facial skin to a simulated future condition.
  • the simulated future condition is that of the user’s facial skin when the regimen is adhered to by the user.
  • the one or more processing circuits are physically separated into a client platform and a service platform communicatively coupled through a communication network.
  • the one or more processor circuits are further configured to: accept input from a plurality of users that classify the facial skin characteristics from training images provided thereto; and train the models using the accepted input.
  • a method comprising: accepting at least one image from a user depicting the user’s facial skin; applying machine learning models to the image to classify facial skin characteristics; and generating a regimen recommendation to the user based on the classified facial skin characteristics.
  • an apparatus comprising: a processing circuit to accept at least one image depicting facial skin of a user; a communication circuit to convey the accepted image to machine learning models and to receive a regimen recommendation from the machine learning models; and a user interface circuit to present the regimen recommendation to the user.
  • the processing circuit is further configured to: alert the user that another image depicting the user’s facial skin is required according to a predefined schedule; accept the other image from a user depicting the user’s facial skin; the communication circuit being further configured to convey the other image to the machine learning models and to receive an updated regimen recommendation from the machine learning models; and the user interface circuit being further configured to present the updated regimen recommendation to the user.
  • the user interface circuit is further configured to present images of human faces to the user; the processing circuit being further configured to accept input from the user that classifies facial skin characteristics from the images provided thereto through the user interface circuit; and the communication interface circuit being further configured to convey the user input to the machine learning models as training data.
  • the user interface circuit is further configured to present a user control by which the facial skin characteristics are rated on a predetermined scale.
  • the apparatus includes a camera communicatively coupled to the processing circuit to provide the image from the user thereto.
  • the camera, the processing circuit, the user interface circuit and the communication circuit are components of a smartphone.
  • a method comprising: accepting at least one image depicting facial skin of a user; conveying the accepted image to machine learning models; receiving a regimen recommendation from the machine learning models; and presenting the regimen recommendation to the user.
  • FIG. 1 is a schematic block diagram of an example system configuration by which the present general inventive concept can be embodied.
  • FIG. 2 is a flow diagram of a simple user interaction with an embodiment of the present general inventive concept.
  • FIG. 3 is a schematic block diagram of example data flow of an embodiment of the present general inventive concept.
  • FIG. 4 is a block diagram of crowdsourced training of machine learning models according to an embodiment of the present general inventive concept.
  • FIG. 5 is a diagram of an example client platform device on which the present general inventive concept can be embodied.
  • FIG. 6 is a flow diagram of example crowdsourced training of machine learning models according to an embodiment of the present general inventive concept.
  • FIG. 7 is a diagram illustrating a test operation in accordance with the crowdsourced machine learning model training.
  • exemplary is used herein to mean, “serving as an example, instance or illustration.” Any embodiment of construction, process, design, technique, etc., designated herein as exemplary is not necessarily to be construed as preferred or advantageous over other such embodiments. Particular quality or fitness of the examples indicated herein as exemplary is neither intended nor should be inferred.
  • FIG. 1 is a schematic block diagram of an exemplary facial health and beauty assistant (FHBA) system 100 comprising an FHBA client platform 110 and an FHBA service platform 120 communicatively coupled through a network 130.
  • FHBA client platform 110 is a smartphone, tablet computer or other mobile computing device, although the present invention is not so limited.
  • exemplary FHBA client platform 110 comprises a processor 112, memory 114, a camera 115, a user interface 116 and a communication interface 118 over which an FHBA client interface 150 may be implemented.
  • FHBA client interface 150 provides the primary portal through which a user accesses FHBA system 100.
  • FHBA service platform 120 comprises one or more server computers, each comprising a processor 122, a memory 124, a user interface 126 and a communication interface. These resources of FHBA service platform 120 may be utilized to implement an FHBA service interface 152, machine learning logic 154 and a storage memory 156.
  • Storage memory 156 represents a sufficient amount of volatile and persistent memory to embody the invention. Storage memory 156 may contain vast amounts of encoded human knowledge as well as space for the private profile of a single user. Storage memory 156 may further store processor instructions that, when executed by one or more processors 122, perform some task or procedure for embodiments of the invention. Storage memory 156 may further store user models (coefficients, weights, processor instructions, etc.) that are operable with machine learning logic 154 to prescribe a particular regimen for a user and track the user’s progress under the regimen.
  • Exemplary FHBA service interface 152 provides the infrastructure by which network access to FHBA services is both facilitated and controlled.
  • FHBA client interface 150 and FHBA service interface 152 communicate via a suitable communication link 145 using the signaling and data transport protocols for which communication interface 118 and communication interface 128 are constructed or otherwise configured.
  • FHBA service interface 152 may implement suitable Internet hosting services as well as authentication and other security mechanisms that allow access only to authorized users and protect the users’ private data. Additionally, FHBA service interface 152 may realize an application programming interface (API) that affords FHBA client interface 150 communication with, for example, machine learning logic 154.
  • API application programming interface
  • Machine learning logic 154 provides the infrastructure for embodiments of the invention to learn from and make predictions about data without being explicitly programmed to do so.
  • machine learning logic 154 implements one or more convolutional neural networks (CNNs), the models for which may be trained using open source datasets or crowdsourced data sets, as explained below.
  • CNNs convolutional neural networks
  • Other machine learning techniques may be used in conjunction with the present invention including, but not limited to, decision tree learning, association rule learning, artificial neural networks, deep learning, inductive logic programming, support vector machines, clustering, Bayesian networks, reinforcement learning, representation learning, similarity and metric learning, sparse dictionary learning, genetic algorithms, rule-based machine learning and learning classifiers. Additional techniques described in U.S. Patent No. 8,442,321, U.S. Patent No. 9,015,083, U.S. Patent No. 9,536,293, U.S. Patent No. 9,324,022, and U.S. PG Publication No. 2014/0376819 A1 may also be used.
  • Embodiments of the invention determine various regimens for a user based on images of the user taken by camera 115 on FHBA client platform 110.
  • the images of the user’s face are preferably obtained under conditions of uniform lighting that is consistent over time.
  • embodiments of the invention provide for a mirror device 140 that includes mirror surface 144 circumscribed by a ring illuminator 142. This configuration is intended to define a temporally constant standard of illumination. When the invention is so embodied, temporally varying characteristics in images of a user’s face are more readily recognized and labeled.
  • FIG. 2 is a flow diagram by which an example interaction with an embodiment of the invention can be explained.
  • the interaction of FIG. 2 is simple by design and is in no way intended to be limiting.
  • the description of FIG. 2 is intended to illustrate functionality of the configuration illustrated in FIG. 1. Further features of the invention, beyond those described with reference to FIG. 2, will be discussed below.
  • a user may generate an image of his face, such as by camera 115 of FHBA client platform 110. This may be achieved with or without the illumination standard discussed above.
  • the user’s image is sent to FHBA service platform 120. This may be achieved by suitable communication protocols shared between FHBA client platform 110 and FHBA service platform 120 to realize communication link 145.
  • Machine learning logic 154 may perform analyses that determine, among other things, apparent age, i.e., the subjective age of the user estimated from a visual appearance of the user’s face; evenness of facial skin tone (is there blotching, age/sun spots, acne scarring and other blemishes); the presence of stress as seen in under eye puffiness, dark circles, overall tone drooping in eyelids/corners of the mouth, fine lines and eye redness; hydration level, often referred to as plump or slick, which presents as a lack of ashiness, skin flaking, dullness and fine lines; shine - a nonlinear parameter where the ideal is a moderate amount of shine; condition of pores - a reduced appearance of pores is desirable as it provides a healthy, youthful and smooth skin texture; the presence of acne as characterized by red/inflamed pimples and scarring; the presence of wrinkles, i.e., a fold, ridge or crease in the skin; the presence of sagging, i.e., a droopy appearance of soft tissue caused by elasticity reduction; and the presence of crow’s feet, a branching wrinkle specifically located at the outer corner of a person’s eye.
  • process 200 may transition to operation 230, whereby the analysis results and the prescribed regimen (products and routines) and/or updates to the regimen are sent to the user via FHBA client interface 150.
  • process 200 may transition to operation 230, whereby FHBA service interface 152 sends a recommended regimen or updates to a regimen to FHBA client interface 150.
  • the user may follow the regimen as indicated in operation 235 and, in operation 240, it is determined whether a new interval has commenced. If so, process 200 reiterates from operation 210.
  • FHBA client interface 150 may access calendars and timers (as well as GPS) onboard FHBA client platform 110, as well as network-accessible calendars on network 130.
  • FHBA client interface 150 may remind the user to take a picture of his face, i.e., remind him of the new interval.
  • FHBA system 100 can determine from the images taken at each interval whether the recommended regimen is working and, if not, FHBA system 100 may revise the regimen, e.g., change a product, recommend further lifestyle changes, make a doctor’s appointment, etc.
  • FIG. 3 is a diagram of data flow between an exemplary FHBA client interface 150 and services of FHBA service platform 120. It should be noted that, in FIG. 3, FHBA service interface 152 has been omitted to avoid unnecessary congestion in the figure. However, those having skill in the relevant arts will recognize the operation of an FHBA service interface 152 to control and facilitate the data flow depicted in FIG. 3.
  • machine learning logic 154 may comprise a skin analyzer 330, facial appearance progression generator 335 and a regimen recommendation generator 340 and may be communicatively coupled to a user account database 310 and a product database 320. Machine learning logic 154 may train and utilize machine learning models 370 to recommend regimens and to track the progress of the user under the regimen.
  • training may involve selecting a set of features, e.g., apparent age, evenness, stress, hydration, shine, pores, acne, wrinkles, sagging, crow’s feet, etc., and assigning labels to image data that reflects the presence or prominence of those features.
  • the assigning of labels may be performed by a subject matter expert or, as explained below, through crowdsourced data.
  • machine learning logic 154 may configure models 370 to predict the degree to which the features are present in a test image, which may change over time.
  • the present invention is not limited to a particular model representation, which may include binary models, multiclass classification models, regression models, etc.
  • Exemplary user account database 310 contains the data of all users of FHBA system 100 in a secure manner. This includes user profile data, current and past user photos 357 for each user, current and past skin analyses 358 for each user, current and past product recommendations 362 and current and past routine recommendations 364 for each user.
  • Exemplary product database 320 contains the data of different products that can be used in a regimen.
  • Product database 320 may contain records reflecting the product names, active and inactive ingredients, label information, recommended uses, and so on.
  • via product input 354, the user (and other users of FHBA system 100) may provide feedback on different products and may enter products not already in product database 320.
  • the present invention is not limited to particular products that can be entered in product database 320.
  • Skin analyzer 330 is constructed or is otherwise configured to classify various skin conditions or artifacts from imagery of a user’s face using machine learning techniques over models 370.
  • photographic images 352 of a user’s face are provided to skin analyzer 330 for analysis.
  • Skin analyzer 330 may implement image preprocessing mechanisms that include cropping, rotating, registering and filtering input images prior to analysis. After any such preprocessing, skin analyzer 330 may apply models 370 to the input image 357 to locate, identify and classify characteristics of the user’s facial skin.
  • Facial appearance progression generator 335 may operate on the user’s facial images to portray how the user’s face would appear sometime in the future. Such progression may be in age, for which age progression techniques may be deployed, or may be in appearance resulting from adherence to a regimen.
  • a progressed image 356 may be provided to the user through FHBA client interface 150.
  • Regimen recommendation generator 340 may operate on analysis results 358 obtained from skin analyzer 330 towards prescribing a regimen to the user. Models 370 may be trained to predict what products and routines (treatment, cosmetic and lifestyle recommendations, etc.) would be effective in meeting the user’s goal with regard to facial skin characteristics identified in the skin analysis.
  • Regimen recommendation generator 340 may format the analysis results 358 of skin analyzer 330 as a query into, for example, product database 320 based on knowledge encoded on models 370.
  • product database 320 may return product data and metadata 366, and product recommendations 362 and routine recommendations 364 may be provided to FHBA client interface 150.
  • FIG. 4 is a diagram of such an embodiment of the invention.
  • users 410 are presented a set of training images 420 over which they are asked to characterize facial skin characteristics and/or facial features.
  • a suitable scale is constructed (e.g., integers 1-10) with which users can rate the severity or prominence of the feature. For example, each of users 410 is (over time) presented a large number of facial images and walked through a set of questions regarding features and/or skin characteristics of the person in the image.
  • each user 410 is asked to rate the prominence of each of the features (e.g., apparent age, evenness, stress, hydration, shine, pores, acne, wrinkles, sagging, crow’s feet, etc.).
  • the answers to the questions may serve as labels used for training machine learning logic 154.
  • FHBA client platform 110 in the form of a smartphone having a touchscreen 510 as user interface 116.
  • Exemplary FHBA client interface 150 is implemented on the computational resources of FHBA client platform 110 as discussed with reference to FIG. 1.
  • FHBA client interface 150 may present a photograph of a person’s face in image area 520 and may present via text 142 the question “On a scale from 1 to 10, where ‘1’ means ‘invisible’ and ‘10’ means ‘prominently present,’ how would you rate the presence of this person’s crow’s feet?”
  • a suitable user interface control 144 (slider control illustrated in FIG. 5) may be implemented on FHBA client interface 150 that allows the user to input a rating.
  • FIG. 6 is a flow diagram of a crowdsourced training process 600 with which the present invention may be embodied.
  • a training image may be provided to FHBA client interface 150.
  • a set of training images 420 may have been preselected as including illustrative examples of the skin characteristics of interest.
  • the user is presented with a first question, and the process waits for an answer (rating) in operation 630.
  • Such a question might be, for example, “On a scale of 1 to 10, where ‘1’ is ‘invisible’ and ‘10’ is ‘highly prominent,’ how would you rate this model’s acne?”
  • the user’s answer may be formatted into a label suitable for machine training of machine learning logic 154 in operation 640.
  • In operation 650, it is determined whether all questions relating to the currently displayed image have been answered. If not, process 600 may transition back to operation 620, whereby the next question is presented. If all questions have been answered, as determined at operation 650, it is determined in operation 660 whether all training images have been presented. If not, process 600 may transition back to operation 610, whereby the next training image is presented. If all training images have been presented, as determined at operation 660, the labeled images may be used to train models 370 in operation 670.
  • Iterations of process 600, e.g., presenting next questions in operations 620 and 650 and/or presenting next images in operations 610 and 660, need not be performed in any one sitting.
  • the user may be prompted to answer a single question at a time (e.g., every time the user logs on) and it is only over time that all questions and images are presented to any one user.
  • users may be selected to answer all questions for all images in a single sitting. Over a large number of users and/or facial images, many labels may be generated for training models 370, where the statistical trends underlying such training reflect public views as opposed to those of a human expert.
  • FIG. 7 illustrates an example test operation in accordance with the crowdsourced training discussed above.
  • a test image 710, i.e., a user’s own image
  • machine learning logic 154 may analyze the image per the models trained on the crowdsourced data 720.
  • machine learning logic 154 estimates that 80% of people surveyed would rate the user’s crow’s feet a 7 out of 10 in terms of prominence, as indicated at 722.
  • machine learning logic 154 may recommend a regimen (e.g., a cream specially formulated for crow’s feet and recommended application instructions) based on the severity score of 7.
  • aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
  • the computer readable medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, e.g., an object oriented programming language such as Java, Smalltalk, C++ or the like, or a conventional procedural programming language, such as the “C” programming language or similar programming languages.
  • the program code may execute entirely on the user’s computer, partly on the user’s computer, as a stand-alone software package, partly on the user’s computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user’s computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • LAN local area network
  • WAN wide area network
  • Internet Service Provider for example, AT&T, MCI, Sprint, EarthLink, MSN, GTE, etc.
  • the computer systems of the present invention embodiments may alternatively be implemented by any type of hardware and/or other processing circuitry.
  • the various functions of the computer systems may be distributed in any manner among any quantity of software modules or units, processing or computer systems and/or circuitry, where the computer or processing systems may be disposed locally or remotely of each other and communicate via any suitable communications medium (e.g., LAN, WAN, Intranet, Internet, hardwire, modem connection, wireless, etc.).
  • any suitable communications medium e.g., LAN, WAN, Intranet, Internet, hardwire, modem connection, wireless, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

An image is accepted by one or more processing circuits from a user depicting the user's facial skin. Machine learning models stored in one or more memory circuits are applied to the image to classify facial skin characteristics. A regimen recommendation is provided to the user based on the classified facial skin characteristics.

Description

MACHINE-IMPLEMENTED FACIAL HEALTH AND BEAUTY ASSISTANT
CROSS-REFERENCE TO RELATED APPLICATIONS
The present application is related to and claims the benefit of U.S. Provisional Application No. 62/614,001 and U.S. Provisional Application No. 62/614,080, both filed on January 5, 2018, the entire contents of each of which are hereby incorporated herein by reference.
BACKGROUND
The health and beauty industry leverages advances in technology to improve the consumer experience with its products and services. Certain websites, for example, now avail themselves of facial recognition techniques that locate features (eyes, cheeks, nose, lips, chin, etc.) of the human face provided through a mobile device. Such computer vision techniques fail to embrace the full capabilities of machine learning, particularly where customization to particular consumers is concerned. That is, for one thing, conventional health and beauty portals lack the mechanisms by which a regimen is recommended by machine, as opposed to recommended by a human clinician. In an embodiment, a regimen is a systematic plan or course of action intended to improve the health and/or beauty of a human user. In the facial health and beauty domain, a regimen might include cleaning the skin with a specific cleanser and applying specific creams, adhering to specific dietary constraints, changing sleep habits, etc.
Conventional health and beauty portals also lack the mechanisms by which features of a user’s skin can be tracked over time, such as to observe the efficacy of the recommended regimen. They lack sufficient information, explanation and advice, are not targeted separately for men and women, and suffer from false positives in detecting certain conditions (hair misidentified as wrinkles, for example). The ability to recommend a health and/or beauty regimen through data analysis, as well as to track individual users’ progress through that regimen by data analysis, has yet to be realized on a machine.
SUMMARY
One or more images are accepted by one or more processing circuits from a user depicting the user’s facial skin. In an embodiment, machine learning models stored in one or more memory circuits are applied to the one or more images to classify facial skin characteristics, identify significant objects, determine beauty trends, and the like. In an embodiment, a regimen recommendation is provided to the user based on the classified facial skin characteristics.
In an embodiment, a system is provided comprising: one or more memory circuits configured to store machine learning models; and one or more processing circuits configured to: accept at least one image from a user depicting the user’s facial skin; apply the machine learning models to the image to classify facial skin characteristics; and generate a regimen recommendation to the user based on the classified facial skin characteristics.
In an embodiment, the one or more processing circuits is further configured to: accept another image from a user depicting the user’s facial skin; apply the machine learning models to the other image to classify facial skin characteristics; and update the regimen recommendation to the user based on the classified facial skin characteristics of the other image.
In an embodiment, the one or more processors are further configured to process the image to progress the user’s facial skin to a simulated future condition.
In an embodiment, the simulated future condition is that of the user’s facial skin when the regimen is adhered to by the user.
In an embodiment, the one or more processing circuits are physically separated into a client platform and a service platform communicatively coupled through a communication network.
In an embodiment, the one or more processor circuits are further configured to: accept input from a plurality of users that classify the facial skin characteristics from training images provided thereto; and train the models using the accepted input.
In an embodiment, a method is provided comprising: accepting at least one image from a user depicting the user’s facial skin; applying machine learning models to the image to classify facial skin characteristics; and generating a regimen recommendation to the user based on the classified facial skin characteristics.
In an embodiment, an apparatus is provided comprising: a processing circuit to accept at least one image depicting facial skin of a user; a communication circuit to convey the accepted image to machine learning models and to receive a regimen recommendation from the machine learning models; and a user interface circuit to present the regimen recommendation to the user. In an embodiment, the processing circuit is further configured to: alert the user that another image depicting the user’s facial skin is required according to a predefined schedule; accept the other image from a user depicting the user’s facial skin; the communication circuit being further configured to convey the other image to the machine learning models and to receive an updated regimen recommendation from the machine learning models; and the user interface circuit being further configured to present the updated regimen recommendation to the user.
In an embodiment, the user interface circuit is further configured to present images of human faces to the user; the processing circuit being further configured to accept input from the user that classifies facial skin characteristics from the images provided thereto through the user interface circuit; and the communication interface circuit being further configured to convey the user input to the machine learning models as training data.
In an embodiment, the user interface circuit is further configured to present a user control by which the facial skin characteristics are rated on a predetermined scale.
In an embodiment, the apparatus includes a camera communicatively coupled to the processing circuit to provide the image from the user thereto.
In an embodiment, the camera, the processing circuit, the user interface circuit and the communication circuit are components of a smartphone.
In an embodiment, a method is provided comprising: accepting at least one image depicting facial skin of a user; conveying the accepted image to machine learning models; receiving a regimen recommendation from the machine learning models; and presenting the regimen recommendation to the user.
DESCRIPTION OF DRAWINGS
FIG. 1 is a schematic block diagram of an example system configuration by which the present general inventive concept can be embodied.
FIG. 2 is a flow diagram of a simple user interaction with an embodiment of the present general inventive concept.
FIG. 3 is a schematic block diagram of example data flow of an embodiment of the present general inventive concept. FIG. 4 is a block diagram of crowdsourced training of machine learning models according to an embodiment of the present general inventive concept.
FIG. 5 is a diagram of an example client platform device on which the present general inventive concept can be embodied.
FIG. 6 is a flow diagram of example crowdsourced training of machine learning models according to an embodiment of the present general inventive concept.
FIG. 7 is a diagram illustrating a test operation in accordance with the crowdsourced machine learning model training.
The present inventive concept is best described through certain embodiments thereof, which are described herein with reference to the accompanying drawings, wherein like reference numerals refer to like features throughout. It is to be understood that the term invention, when used herein, is intended to connote the inventive concept underlying the embodiments described below and not merely the embodiments themselves. It is to be understood further that the general inventive concept is not limited to the illustrative embodiments described below and the following descriptions should be read in such light.
Additionally, the word exemplary is used herein to mean, “serving as an example, instance or illustration.” Any embodiment of construction, process, design, technique, etc., designated herein as exemplary is not necessarily to be construed as preferred or advantageous over other such embodiments. Particular quality or fitness of the examples indicated herein as exemplary is neither intended nor should be inferred.
DESCRIPTION
FIG. 1 is a schematic block diagram of an exemplary facial health and beauty assistant (FHBA) system 100 comprising an FHBA client platform 110 and an FHBA service platform 120 communicatively coupled through a network 130. In one embodiment, FHBA client platform 110 is a smartphone, tablet computer or other mobile computing device, although the present invention is not so limited. As illustrated in FIG. 1, exemplary FHBA client platform 110 comprises a processor 112, memory 114, a camera 115, a user interface 116 and a communication interface 118 over which an FHBA client interface 150 may be implemented. FHBA client interface 150 provides the primary portal through which a user accesses FHBA system 100. In one embodiment of the present invention, FHBA service platform 120 comprises one or more server computers, each comprising a processor 122, a memory 124, a user interface 126 and a communication interface. These resources of FHBA service platform 120 may be utilized to implement an FHBA service interface 152, machine learning logic 154 and a storage memory 156. Storage memory 156 represents a sufficient amount of volatile and persistent memory to embody the invention. Storage memory 156 may contain vast amounts of encoded human knowledge as well as space for the private profile of a single user. Storage memory 156 may further store processor instructions that, when executed by one or more processors 122, perform some task or procedure for embodiments of the invention. Storage memory 156 may further store user models (coefficients, weights, processor instructions, etc.) that are operable with machine learning logic 154 to prescribe a particular regimen for a user and track the user’s progress under the regimen.
Exemplary FHBA service interface 152 provides the infrastructure by which network access to FHBA services is both facilitated and controlled. FHBA client interface 150 and FHBA service interface 152 communicate via a suitable communication link 145 using the signaling and data transport protocols for which communication interface 118 and communication interface 128 are constructed or otherwise configured. FHBA service interface 152 may implement suitable Internet hosting services as well as authentication and other security mechanisms that allow access only to authorized users and protect the users’ private data. Additionally, FHBA service interface 152 may realize an application programming interface (API) that affords FHBA client interface 150 communication with, for example, machine learning logic 154. Those having skill in the art will recognize other front-end services that can be used in conjunction with the present invention.
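By way of a non-limiting illustration, the sketch below shows one way such a service endpoint could be realized: an HTTP route that accepts an uploaded facial image and returns analysis results and a regimen recommendation. The Python/Flask framework, the route path and the stub function names are assumptions made purely for illustration; the patent does not specify any of them.

```python
# Minimal sketch of an FHBA-style service endpoint (hypothetical; the patent does not
# specify a web framework). Flask is assumed here purely for illustration.
from flask import Flask, request, jsonify

app = Flask(__name__)

def run_skin_analysis(image_bytes: bytes) -> dict:
    # Stub standing in for machine learning logic 154.
    return {"acne": 3, "crows_feet": 7}

def recommend_regimen(scores: dict) -> dict:
    # Stub standing in for regimen recommendation generator 340.
    return {"products": ["eye cream"], "routine": ["apply nightly"]}

@app.route("/api/v1/analyze", methods=["POST"])
def analyze():
    """Accept a facial image upload; return skin analysis and a regimen recommendation."""
    if "image" not in request.files:
        return jsonify({"error": "missing image"}), 400
    image_bytes = request.files["image"].read()
    scores = run_skin_analysis(image_bytes)
    regimen = recommend_regimen(scores)
    return jsonify({"analysis": scores, "regimen": regimen})

if __name__ == "__main__":
    app.run()
```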
Machine learning logic 154 provides the infrastructure for embodiments of the invention to learn from and make predictions about data without being explicitly programmed to do so. In certain embodiments, machine learning logic 154 implements one or more convolutional neural networks (CNNs), the models for which may be trained using open source datasets or crowdsourced data sets, as explained below. Other machine learning techniques may be used in conjunction with the present invention including, but not limited to, decision tree learning, association rule learning, artificial neural networks, deep learning, inductive logic programming, support vector machines, clustering, Bayesian networks, reinforcement learning, representation learning, similarity and metric learning, sparse dictionary learning, genetic algorithms, rule-based machine learning and learning classifiers. Additional techniques described in U.S. Patent No. 8,442,321, U.S. Patent No. 9,015,083, U.S. Patent No. 9,536,293, U.S. Patent No. 9,324,022, and U.S. PG Publication No. 2014/0376819 A1, all of which are incorporated herein by reference, may be used with the present invention. In the descriptions that follow, it will be assumed that machine learning logic implements a convolutional neural network, although the present invention is not so limited. Those having skill in artificial intelligence will recognize numerous techniques that can be used in conjunction with the present invention without departing from the spirit and intended scope thereof.
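As one hedged illustration of the CNN approach described above, the following sketch builds a small convolutional network with one regression output per skin feature, each predicting a 1-10 prominence score. TensorFlow/Keras, the feature list encoding and the layer sizes are illustrative assumptions, not requirements of the patent.

```python
# Minimal sketch of a CNN for facial skin feature scoring, in the spirit of machine
# learning logic 154. TensorFlow/Keras is assumed; the patent does not name a framework.
import tensorflow as tf

FEATURES = ["apparent_age", "evenness", "stress", "hydration", "shine",
            "pores", "acne", "wrinkles", "sagging", "crows_feet"]

def build_model(input_shape=(224, 224, 3)) -> tf.keras.Model:
    """One regression output per skin feature, each predicting a 1-10 prominence score."""
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(128, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(len(FEATURES)),  # linear outputs for score regression
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

model = build_model()
model.summary()
```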
Embodiments of the invention determine various regimens for a user based on images of the user taken by camera 115 on FHBA client platform 110. In certain embodiments, the images of the user’s face are preferably obtained under conditions of uniform lighting that is consistent over time. To that end and referring to FIG. 1, embodiments of the invention provide for a mirror device 140 that includes mirror surface 144 circumscribed by a ring illuminator 142. This configuration is intended to define a temporally constant standard of illumination. When the invention is so embodied, temporally varying characteristics in images of a user’s face are more readily recognized and labeled.
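The following sketch suggests one simple way a client could verify that a new capture is illuminated consistently with a reference capture taken under the ring illuminator, by comparing mean grayscale brightness. OpenCV/NumPy and the 10% tolerance are illustrative assumptions, not part of the patent.

```python
# Sketch of a simple lighting-consistency check between a new capture and a reference
# capture taken under the ring illuminator. The 10% tolerance is an arbitrary
# illustrative threshold.
import cv2
import numpy as np

def lighting_consistent(reference_path: str, new_path: str, tolerance: float = 0.10) -> bool:
    ref = cv2.cvtColor(cv2.imread(reference_path), cv2.COLOR_BGR2GRAY)
    new = cv2.cvtColor(cv2.imread(new_path), cv2.COLOR_BGR2GRAY)
    ref_mean, new_mean = float(np.mean(ref)), float(np.mean(new))
    # Flag the capture if mean brightness drifts by more than the tolerance.
    return abs(new_mean - ref_mean) <= tolerance * ref_mean
```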
FIG. 2 is a flow diagram by which an example interaction with an embodiment of the invention can be explained. The interaction of FIG. 2 is simple by design and is in no way intended to be limiting. The description of FIG. 2 is intended to illustrate functionality of the configuration illustrated in FIG. 1. Further features of the invention, beyond those described with reference to FIG. 2, will be discussed below.
In operation 210, a user may generate an image of his face, such as by camera 115 of FHBA client platform 110. This may be achieved with or without the illumination standard discussed above. In operation 215, the user’s image is sent to FHBA service platform 120. This may be achieved by suitable communication protocols shared between FHBA client platform 110 and FHBA service platform 120 to realize communication link 145.
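A minimal sketch of the client-side upload in operation 215 follows; the service URL, the field names and the use of the requests library are placeholders assumed for illustration only.

```python
# Sketch of the client-side upload in operation 215 (hypothetical endpoint URL; the
# transport protocol is not specified by the patent). The requests library is assumed.
import requests

def send_image_for_analysis(image_path: str, user_id: str) -> dict:
    with open(image_path, "rb") as f:
        response = requests.post(
            "https://fhba.example.com/api/v1/analyze",   # placeholder service URL
            files={"image": f},
            data={"user_id": user_id},
            timeout=30,
        )
    response.raise_for_status()
    return response.json()   # analysis results and regimen recommendation
```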
In operation 220, image analysis and machine learning are conducted to analyze the user’s skin from the images. Machine learning logic 154 may perform analyses that determine, among other things, apparent age, i.e., the subjective age of the user estimated from a visual appearance of the user’s face; evenness of facial skin tone (is there blotching, age/sun spots, acne scarring and other blemishes); the presence of stress as seen in under eye puffiness, dark circles, overall tone drooping in eyelids/corners of the mouth, fine lines and eye redness; hydration level, often referred to as plump or slick, which presents as a lack of ashiness, skin flaking, dullness and fine lines; shine - a nonlinear parameter where the ideal is a moderate amount of shine; condition of pores - a reduced appearance of pores is desirable as it provides a healthy, youthful and smooth skin texture; the presence of acne as characterized by red/inflamed pimples and scarring; the presence of wrinkles, i.e., a fold, ridge or crease in the skin; the presence of sagging, i.e., a droopy appearance of soft tissue caused by elasticity reduction; and the presence of crow’s feet, a branching wrinkle specifically located at the outer corner of a person’s eye. Other conditions of the skin may be determined by machine learning logic 154. Further details of the analyses are provided below. Once the analyses have been completed, as determined in operation 225, process 200 may transition to operation 230, whereby the analysis results and the prescribed regimen (products and routines) and/or updates to the regimen are sent to the user via FHBA client interface 150.
In operation 225, it is determined whether the analysis is complete and, responsive to a positive determination thereof, process 200 may transition to operation 230, whereby FHBA service interface 152 sends a recommended regimen or updates to a regimen to FHBA client interface 150. The user may follow the regimen as indicated in operation 235 and, in operation 240, it is determined whether a new interval has commenced. If so, process 200 reiterates from operation 210. FHBA client interface 150 may access calendars and timers (as well as GPS) onboard FHBA client platform 110, as well as network-accessible calendars on network 130. Accordingly, once a week, say, FHBA client interface 150 may remind the user to take a picture of his face, i.e., remind him of the new interval. Over time, FHBA system 100 can determine from the images taken at each interval whether the recommended regimen is working and, if not, FHBA system 100 may revise the regimen, e.g., change a product, recommend further lifestyle changes, make a doctor’s appointment, etc.
FIG. 3 is a diagram of data flow between an exemplary FHBA client interface 150 and services of FHBA service platform 120. It should be noted that, in FIG. 3, FHBA service interface 152 has been omitted to avoid unnecessary congestion in the figure. However, those having skill in the relevant arts will recognize the operation of an FHBA service interface 152 to control and facilitate the data flow depicted in FIG. 3. As illustrated in FIG. 3, machine learning logic 154 may comprise a skin analyzer 330, facial appearance progression generator 335 and a regimen recommendation generator 340 and may be communicatively coupled to a user account database 310 and a product database 320. Machine learning logic 154 may train and utilize machine learning models 370 to recommend regimens and to track the progress of the user under the regimen. As those skilled in machine learning will attest, training may involve selecting a set of features, e.g., apparent age, evenness, stress, hydration, shine, pores, acne, wrinkles, sagging, crow’s feet, etc., and assigning labels to image data that reflects the presence or prominence of those features. The assigning of labels may be performed by a subject matter expert or, as explained below, through crowdsourced data. Taking the assigned labels as ground truth, machine learning logic 154 may configure models 370 to predict the degree to which the features are present in a test image, which may change over time. The present invention is not limited to a particular model representation, which may include binary models, multiclass classification models, regression models, etc.
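The sketch below wires the three components described above into a simple pipeline: analyze the photo, generate a progressed image and derive a recommendation. All class, function and field names are hypothetical stand-ins; the stub bodies merely mark where skin analyzer 330, progression generator 335 and regimen recommendation generator 340 would do real work.

```python
# Sketch of the FIG. 3 data flow as a simple pipeline. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AnalysisResult:
    scores: dict                              # e.g., {"wrinkles": 6, "hydration": 4}
    progressed_image: bytes = b""
    products: list = field(default_factory=list)
    routines: list = field(default_factory=list)

def analyze_skin(image: bytes) -> dict:
    return {"wrinkles": 6, "hydration": 4}    # stub for skin analyzer 330

def progress_appearance(image: bytes, scores: dict) -> bytes:
    return image                              # stub for progression generator 335

def recommend(scores: dict) -> tuple[list, list]:
    return (["hydrating serum"], ["apply twice daily"])   # stub for generator 340

def run_pipeline(image: bytes) -> AnalysisResult:
    scores = analyze_skin(image)
    progressed = progress_appearance(image, scores)
    products, routines = recommend(scores)
    return AnalysisResult(scores, progressed, products, routines)

print(run_pipeline(b"raw-image-bytes"))
```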
Exemplary user account database 310 contains the data of all users of FHBA system 100 in a secure manner. This includes user profile data, current and past user photos 357 for each user, current and past skin analyses 358 for each user, current and past product recommendations 362 and current and past routine recommendations 364 for each user.
Exemplary product database 320 contains the data of different products that can be used in a regimen. Product database 320 may contain records reflecting the product names, active and inactive ingredients, label information, recommended uses, and so on. In certain embodiments, as illustrated by product input 354, the user (and other users of FHBA system 100) may provide feedback on different products and may enter products not already in product database 320. The present invention is not limited to particular products that can be entered in product database 320.
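One possible relational layout for user account database 310 and product database 320 is sketched below with SQLite. The table and column names (including the targets column used in a later sketch) are hypothetical; the patent only describes the kinds of records stored.

```python
# Sketch of possible schemas for user account database 310 and product database 320.
# Table and column names are hypothetical illustrations.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
    user_id      INTEGER PRIMARY KEY,
    profile_json TEXT
);
CREATE TABLE photos (
    photo_id   INTEGER PRIMARY KEY,
    user_id    INTEGER REFERENCES users(user_id),
    taken_at   TEXT,
    image_blob BLOB
);
CREATE TABLE skin_analyses (
    analysis_id INTEGER PRIMARY KEY,
    photo_id    INTEGER REFERENCES photos(photo_id),
    scores_json TEXT  -- per-feature 1-10 scores
);
CREATE TABLE products (
    product_id           INTEGER PRIMARY KEY,
    name                 TEXT,
    active_ingredients   TEXT,
    inactive_ingredients TEXT,
    label_info           TEXT,
    recommended_use      TEXT,
    targets              TEXT  -- concern the product addresses, e.g. 'wrinkles' (hypothetical)
);
CREATE TABLE recommendations (
    rec_id     INTEGER PRIMARY KEY,
    user_id    INTEGER REFERENCES users(user_id),
    product_id INTEGER REFERENCES products(product_id),
    routine    TEXT,
    issued_at  TEXT
);
""")
conn.commit()
```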
Skin analyzer 330 is constructed or is otherwise configured to classify various skin conditions or artifacts from imagery of a user’s face using machine learning techniques over models 370. In certain embodiments, photographic images 352 of a user’s face are provided to skin analyzer 330 for analysis. Skin analyzer 330 may implement image preprocessing mechanisms that include cropping, rotating, registering and filtering input images prior to analysis. After any such preprocessing, skin analyzer 330 may apply models 370 to the input image 357 to locate, identify and classify characteristics of the user’s facial skin. Facial appearance progression generator 335 may operate on the user’s facial images to portray how the user’s face would appear sometime in the future. Such progression may be in age, for which age progression techniques may be deployed, or may be in appearance resulting from adherence to a regimen. A progressed image 356 may be provided to the user through FHBA client interface 150.
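The following sketch shows a simplified subset of the preprocessing named above (cropping via face detection, size normalization and light filtering); rotation and registration are omitted for brevity. OpenCV is assumed, and the Haar cascade and 224x224 target size are illustrative choices, not requirements of the patent.

```python
# Sketch of preprocessing along the lines described for skin analyzer 330: detect and
# crop the largest face, normalize size, and lightly filter.
import cv2

_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess_face(image_path: str, size: int = 224):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = _face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                                       # no face found; ask for a retake
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])    # keep the largest detected face
    face = img[y:y + h, x:x + w]
    face = cv2.resize(face, (size, size))
    face = cv2.bilateralFilter(face, 5, 50, 50)           # mild, edge-preserving denoise
    return face
```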
Regimen recommendation generator 340 may operate on analysis results 358 obtained from skin analyzer 330 towards prescribing a regimen to the user. Models 370 may be trained to predict what products and routines (treatment, cosmetic and lifestyle recommendations, etc.) would be effective in meeting the user’s goal with regard to facial skin characteristics identified in the skin analysis. Regimen recommendation generator 340 may format the analysis results 358 of skin analyzer 330 as a query into, for example, product database 320 based on knowledge encoded in models 370. In response, product database 320 may return product data and metadata 366, and product recommendations 362 and routine recommendations 364 may be provided to FHBA client interface 150.
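Continuing the hypothetical schema sketched earlier, the following shows one way regimen recommendation generator 340 could translate per-feature scores into a query against product database 320. The targets column and the score threshold of 5 are illustrative assumptions.

```python
# Sketch of turning analysis scores into a product query against the hypothetical
# products table sketched above. Threshold and column names are illustrative.
import sqlite3

def recommend_products(conn: sqlite3.Connection, scores: dict, threshold: int = 5) -> list:
    concerns = [name for name, score in scores.items() if score >= threshold]
    if not concerns:
        return []
    placeholders = ",".join("?" for _ in concerns)
    rows = conn.execute(
        f"SELECT name, recommended_use FROM products WHERE targets IN ({placeholders})",
        concerns,
    ).fetchall()
    return rows
```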
As indicated above, training of models 370 may be achieved by labeling of image data by an expert. However, in lieu of an expert, certain embodiments of the invention utilize crowdsourced data as training data. FIG. 4 is a diagram of such an embodiment of the invention. During training, users 410 are presented a set of training images 420 over which they are asked to characterize facial skin characteristics and/or facial features. In one embodiment, a suitable scale is constructed (e.g., integers 1-10) with which users can rate the severity or prominence of the feature. For example, each of users 410 is (over time) presented a large number of facial images and walked through a set of questions regarding features and/or skin characteristics of the person in the image. Using the scale (1-10), each user 410 is asked to rate the prominence of each of the features (e.g., apparent age, evenness, stress, hydration, shine, pores, acne, wrinkles, sagging, crow’s feet, etc.). The answers to the questions may serve as labels used for training machine learning logic 154.
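A short sketch of how crowdsourced ratings could be reduced to training labels follows; taking the mean rating per image and feature is one simple aggregation choice assumed here, not a method prescribed by the patent.

```python
# Sketch of turning crowdsourced ratings into training labels. Each rating is
# (image_id, feature, score on the 1-10 scale); averaging per image and feature is
# one simple, assumed aggregation.
from collections import defaultdict

def aggregate_labels(ratings):
    sums = defaultdict(float)
    counts = defaultdict(int)
    for image_id, feature, score in ratings:
        sums[(image_id, feature)] += score
        counts[(image_id, feature)] += 1
    return {key: sums[key] / counts[key] for key in sums}

# Example: three raters assess crow's feet on the same training image.
labels = aggregate_labels([
    ("img_001", "crows_feet", 7),
    ("img_001", "crows_feet", 8),
    ("img_001", "crows_feet", 6),
])
print(labels)   # {('img_001', 'crows_feet'): 7.0}
```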
Referring to FIG. 5, there is illustrated an exemplary FHBA client platform 110 in the form of a smartphone having a touchscreen 510 as user interface 116. Exemplary FHBA client interface 150 is implemented on the computational resources of FHBA client platform 110 as discussed with reference to FIG. 1. FHBA client interface 150 may present a photograph of a person’s face in image area 520 and may present via text 142 the question “On a scale from 1 to 10, where ‘1’ means ‘invisible’ and ‘10’ means ‘prominently present,’ how would you rate the presence of this person’s crow’s feet?” A suitable user interface control 144 (slider control illustrated in FIG. 5) may be implemented on FHBA client interface 150 that allows the user to input a rating.
FIG. 6 is a flow diagram of a crowdsourced training process 600 with which the present invention may be embodied. In operation 610, a training image may be provided to FHBA client interface 150. A set of training images 420 may have been preselected as including illustrative examples of the skin characteristics of interest. In operation 620, the user is presented with a first question, and process 600 waits for an answer (rating) in operation 630. Such a question might be, for example, “On a scale of 1 to 10, where ‘1’ is ‘invisible’ and ‘10’ is ‘highly prominent,’ how would you rate this model’s acne?” When the user has answered the question, as determined in operation 630, the user’s answer may be formatted into a label suitable for machine training of machine learning logic 154 in operation 640. In operation 650, it is determined whether all questions relating to the currently displayed image have been answered. If not, process 600 may transition back to operation 620, whereby the next question is presented. If all questions have been answered, as determined at operation 650, it is determined in operation 660 whether all training images have been presented. If not, process 600 may transition back to operation 610, whereby the next training image is presented. If all training images have been presented, as determined at operation 660, the labeled images may be used to train models 370 in operation 670.
It is to be understood that all iterations in process 600, e.g., presenting next questions in operations 620 and 650 and/or presenting next images in operations 610 and 660, need not be performed in any one sitting. For example, the user may be prompted to answer a single question at a time (e.g., every time the user logs on) and it is only over time that all questions and images are presented to any one user. Alternatively, users may be selected to answer all questions for all images in a single sitting. Over a large number of users and/or facial images, many labels may be generated for training models 370, where the statistical trends underlying such training reflect public views as opposed to those of a human expert.
FIG. 7 illustrates an example test operation in accordance with the crowdsourced training discussed above. A test image 710, i.e., a user’s own image, may be presented to machine learning logic 154, which analyzes the image per the models trained on the crowdsourced data 720. As illustrated in the figure, machine learning logic 154 estimates that 80% of people surveyed would rate the user’s crow’s feet a 7 out of 10 in terms of prominence, as indicated at 722. Accordingly, machine learning logic 154 may recommend a regimen (e.g., a cream specially formulated for crow’s feet and recommended application instructions) based on the severity score of 7.
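As a hedged illustration of this test step, the sketch below reduces a predicted distribution of crowd ratings to a severity score and triggers a recommendation when the score meets a threshold. The example distribution, the threshold of 5 and the recommendation text are illustrative values only.

```python
# Sketch of the FIG. 7 test step: reduce a predicted distribution of crowd ratings to a
# severity score and trigger a recommendation. Values are illustrative.
def severity_from_distribution(rating_probs: dict) -> int:
    """Take the most probable crowd rating (1-10) as the severity score."""
    return max(rating_probs, key=rating_probs.get)

def maybe_recommend(feature: str, rating_probs: dict, threshold: int = 5):
    severity = severity_from_distribution(rating_probs)
    if severity >= threshold:
        return {"feature": feature, "severity": severity,
                "recommendation": f"cream formulated for {feature}"}
    return None

# E.g., the model estimates that 80% of surveyed raters would score crow's feet a 7.
crow_feet_probs = {7: 0.80, 6: 0.12, 8: 0.08}
print(maybe_recommend("crow's feet", crow_feet_probs))
```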
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a solid state disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, a phase change memory storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, e.g., an object oriented programming language such as Java, Smalltalk, C++ or the like, or a conventional procedural programming language, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user’s computer, partly on the user’s computer, as a stand-alone software package, partly on the user’s computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user’s computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). It is to be understood that the software for the computer systems of the present invention embodiments may be developed by one of ordinary skill in the computer arts based on the functional descriptions contained in the specification and flow charts illustrated in the drawings. Further, any references herein of software performing various functions generally refer to computer systems or processors performing those functions under software control.
The computer systems of the present invention embodiments may alternatively be implemented by any type of hardware and/or other processing circuitry. The various functions of the computer systems may be distributed in any manner among any quantity of software modules or units, processing or computer systems and/or circuitry, where the computer or processing systems may be disposed locally or remotely of each other and communicate via any suitable communications medium (e.g., LAN, WAN, Intranet, Internet, hardwire, modem connection, wireless, etc.).
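By way of a non-limiting illustration of such a distribution, the following sketch shows a client platform (e.g., a smartphone application) conveying an image to a remote service platform that hosts the machine learning models and returning a regimen recommendation. The service URL, endpoint path, and JSON response fields are hypothetical and are used only to illustrate the communication described above.

```python
import requests  # third-party HTTP client, used here purely for illustration

SERVICE_URL = "https://example.invalid/api/v1/analyze"  # hypothetical service endpoint

def request_recommendation(image_path: str) -> dict:
    """Convey a facial image to the remote service platform and return its reply."""
    with open(image_path, "rb") as f:
        response = requests.post(SERVICE_URL, files={"image": f}, timeout=30)
    response.raise_for_status()
    # Hypothetical response shape: {"characteristics": {...}, "regimen": "..."}
    return response.json()

# Client-side usage (e.g., from a smartphone application acting as the client platform):
# result = request_recommendation("selfie.jpg")
# print(result["regimen"])
```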
The foregoing examples are illustrative of certain functionality of embodiments of the invention and are not intended to be limiting. Indeed, other functionality and other possible use cases will be apparent to the skilled artisan upon review of this disclosure.

Claims

1. A system comprising:
one or more memory circuits configured to store machine learning models; and
one or more processing circuits configured to:
accept at least one image from a user depicting the user’s facial skin;
apply the machine learning models to the image to classify facial skin characteristics; and
generate a regimen recommendation to the user based on the classified facial skin characteristics.
2. The system of claim 1, wherein the one or more processing circuits are further configured to:
accept another image from a user depicting the user’s facial skin;
apply the machine learning models to the other image to classify facial skin characteristics; and
update the regimen recommendation to the user based on the classified facial skin characteristics of the other image.
3. The system of claim 1, wherein the one or more processing circuits are further configured to process the image to progress the user’s facial skin to a simulated future condition.
4. The system of claim 3, wherein the simulated future condition is that of the user’s facial skin when the regimen is adhered to by the user.
5. The system of claim 1, wherein the one or more processing circuits are physically separated into a client platform and a service platform communicatively coupled through a communication network.
6. The system of claim 1, wherein the one or more processing circuits are further configured to:
accept input from a plurality of users that classify the facial skin characteristics from training images provided thereto; and
train the models using the accepted input.
7. A method comprising:
accepting at least one image from a user depicting the user’s facial skin;
applying machine learning models to the image to classify facial skin characteristics; and
generating a regimen recommendation to the user based on the classified facial skin characteristics.
8. The method of claim 7 further comprising:
accepting another image from a user depicting the user’s facial skin;
applying the machine learning models to the other image to classify facial skin characteristics; and
updating the regimen recommendation to the user based on the classified facial skin characteristics of the other image.
9. The method of claim 7 further comprising processing the image to progress the user’s facial skin to a simulated future condition.
10. The method of claim 9, wherein the simulated future condition is that of the user’s facial skin when the regimen is adhered to by the user.
11. The method of claim 7 further comprising:
accepting input from a plurality of users that classify the facial skin characteristics from training images provided thereto; and
training the models using the accepted input.
12. An apparatus comprising:
a processing circuit to accept at least one image depicting facial skin of a user;
a communication circuit to convey the accepted image to machine learning models and to receive a regimen recommendation from the machine learning models; and
a user interface circuit to present the regimen recommendation to the user.
13. The apparatus of claim 12, wherein the processing circuit is further configured to:
alert the user that another image depicting the user’s facial skin is required according to a predefined schedule;
accept the other image from a user depicting the user’s facial skin;
the communication circuit being further configured to convey the other image to the machine learning models and to receive an updated regimen recommendation from the machine learning models; and
the user interface circuit being further configured to present the updated regimen recommendation to the user.
14. The apparatus of claim 12, wherein the user interface circuit is further configured to present images of human faces to the user;
the processing circuit being further configured to accept input from the user that classifies facial skin characteristics from the images provided thereto through the user interface circuit; and
the communication circuit being further configured to convey the user input to the machine learning models as training data.
15. The apparatus of claim 14, wherein the user interface circuit is further configured to present a user control by which the facial skin characteristics are rated on a predetermined scale.
16. The apparatus of claim 12, further comprising a camera communicatively coupled to the processing circuit to provide the image from the user thereto.
17. The apparatus of claim 12, wherein the camera, the processing circuit, the user interface circuit and the communication circuit are components of a smartphone.
18. A method comprising:
accepting at least one image depicting facial skin of a user;
conveying the accepted image to machine learning models;
receiving a regimen recommendation from the machine learning models; and
presenting the regimen recommendation to the user.
19. The method of claim 18 further comprising:
alerting the user that another image depicting the user’s facial skin is required according to a predefined schedule;
accepting the other image from a user depicting the user’s facial skin;
conveying the other image to the machine learning models;
receiving an updated regimen recommendation from the machine learning models; and
presenting the updated regimen recommendation to the user.
20. The method of claim 18 further comprising:
presenting images of human faces to the user;
accepting input from the user that classifies facial skin characteristics from the images provided thereto; and
conveying the user input to the machine learning models as training data.
21. The method of claim 20 further comprising:
presenting a user control by which the facial skin characteristics are rated on a predetermined scale.
EP19705420.8A 2018-01-05 2019-01-07 Machine-implemented facial health and beauty assistant Pending EP3718050A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201862614001P 2018-01-05 2018-01-05
US201862614080P 2018-01-05 2018-01-05
PCT/US2019/012492 WO2019136354A1 (en) 2018-01-05 2019-01-07 Machine-implemented facial health and beauty assistant

Publications (1)

Publication Number Publication Date
EP3718050A1 true EP3718050A1 (en) 2020-10-07

Family

ID=65433729

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19705420.8A Pending EP3718050A1 (en) 2018-01-05 2019-01-07 Machine-implemented facial health and beauty assistant

Country Status (5)

Country Link
EP (1) EP3718050A1 (en)
JP (1) JP7407115B2 (en)
KR (1) KR102619221B1 (en)
CN (1) CN111868742A (en)
WO (1) WO2019136354A1 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11544845B2 (en) 2020-07-02 2023-01-03 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a user's body before removing hair for determining a user-specific trapped hair value
US11734823B2 (en) 2020-07-02 2023-08-22 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a user's body for determining a user-specific skin irritation value of the user's skin after removing hair
US20220000417A1 (en) * 2020-07-02 2022-01-06 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a skin area of a user for determining skin laxity
US11801610B2 (en) 2020-07-02 2023-10-31 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a user's body for determining a hair growth direction value of the user's hair
US11419540B2 (en) 2020-07-02 2022-08-23 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a shaving stroke for determining pressure being applied to a user's skin
US11741606B2 (en) 2020-07-02 2023-08-29 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a user's body after removing hair for determining a user-specific hair removal efficiency value
US11455747B2 (en) 2020-07-02 2022-09-27 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a user's body for determining a user-specific skin redness value of the user's skin after removing hair
US11890764B2 (en) 2020-07-02 2024-02-06 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a user's body for determining a hair density value of a user's hair
WO2022069659A2 (en) * 2020-09-30 2022-04-07 Studies&Me A/S A method and a system for determining severity of a skin condition
KR102344700B1 (en) * 2021-02-17 2021-12-31 주식회사 에프앤디파트너스 Clinical imaging device
CN117355875A (en) 2021-05-20 2024-01-05 伊卡美学导航股份有限公司 Computer-based body part analysis method and system
WO2024075109A1 (en) * 2022-10-05 2024-04-11 Facetrom Limited Attractiveness determination system and method

Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030065589A1 (en) * 2001-10-01 2003-04-03 Daniella Giacchetti Body image templates with pre-applied beauty products
US20030064350A1 (en) * 2001-10-01 2003-04-03 Gilles Rubinstenn Beauty advisory system and method
US7437344B2 (en) * 2001-10-01 2008-10-14 L'oreal S.A. Use of artificial intelligence in providing beauty advice
JP4761924B2 (en) * 2004-10-22 2011-08-31 株式会社 資生堂 Skin condition diagnosis system and beauty counseling system
US20070058858A1 (en) * 2005-09-09 2007-03-15 Michael Harville Method and system for recommending a product based upon skin color estimated from an image
JP5116965B2 (en) 2005-11-08 2013-01-09 株式会社 資生堂 Cosmetic medical diagnosis method, cosmetic medical diagnosis system, cosmetic medical diagnosis program, and recording medium on which the program is recorded
KR100734849B1 (en) * 2005-11-26 2007-07-03 한국전자통신연구원 Method for recognizing face and apparatus thereof
US8391639B2 (en) 2007-07-23 2013-03-05 The Procter & Gamble Company Method and apparatus for realistic simulation of wrinkle aging and de-aging
US20170330029A1 (en) * 2010-06-07 2017-11-16 Affectiva, Inc. Computer based convolutional processing for image analysis
TWI471117B (en) * 2011-04-29 2015-02-01 Nat Applied Res Laboratoires Human facial skin roughness and wrinkle inspection based on smart phone
US8442321B1 (en) 2011-09-14 2013-05-14 Google Inc. Object recognition in images
US9916538B2 (en) * 2012-09-15 2018-03-13 Z Advanced Computing, Inc. Method and system for feature detection
US9015083B1 (en) 2012-03-23 2015-04-21 Google Inc. Distribution of parameter calculation for iterative optimization methods
JP2014010750A (en) * 2012-07-02 2014-01-20 Nikon Corp Timing determination device, timing determination system, timing determination method, and program for skin external preparation
US9256963B2 (en) * 2013-04-09 2016-02-09 Elc Management Llc Skin diagnostic and image processing systems, apparatus and articles
US9754177B2 (en) 2013-06-21 2017-09-05 Microsoft Technology Licensing, Llc Identifying objects within an image
WO2015134665A1 (en) 2014-03-04 2015-09-11 SignalSense, Inc. Classifying data with deep learning neural records incrementally refined through expert input
US9760935B2 (en) * 2014-05-20 2017-09-12 Modiface Inc. Method, system and computer program product for generating recommendations for products and treatments
US9536293B2 (en) 2014-07-30 2017-01-03 Adobe Systems Incorporated Image assessment using deep convolutional neural networks
JP5950486B1 (en) 2015-04-01 2016-07-13 みずほ情報総研株式会社 Aging prediction system, aging prediction method, and aging prediction program
WO2016203461A1 (en) * 2015-06-15 2016-12-22 Haim Amir Systems and methods for adaptive skin treatment
WO2017165363A1 (en) * 2016-03-21 2017-09-28 The Procter & Gamble Company Systems and methods for providing customized product recommendations
US10002415B2 (en) * 2016-04-12 2018-06-19 Adobe Systems Incorporated Utilizing deep learning for rating aesthetics of digital images
TWI585711B (en) * 2016-05-24 2017-06-01 泰金寶電通股份有限公司 Method for obtaining care information, method for sharing care information, and electronic apparatus therefor
CN107123027B (en) * 2017-04-28 2021-06-01 广东工业大学 Deep learning-based cosmetic recommendation method and system
CN107437073A (en) * 2017-07-19 2017-12-05 竹间智能科技(上海)有限公司 Face skin quality analysis method and system based on deep learning with generation confrontation networking
CN107480719B (en) * 2017-08-17 2020-08-07 广东工业大学 Skin care product recommendation method and system based on skin characteristic evaluation

Also Published As

Publication number Publication date
JP7407115B2 (en) 2023-12-28
KR102619221B1 (en) 2023-12-28
CN111868742A (en) 2020-10-30
JP2021510217A (en) 2021-04-15
WO2019136354A1 (en) 2019-07-11
KR20200105480A (en) 2020-09-07

Similar Documents

Publication Publication Date Title
US11817004B2 (en) Machine-implemented facial health and beauty assistant
US10943156B2 (en) Machine-implemented facial health and beauty assistant
JP7407115B2 (en) Machine performing facial health and beauty assistant
US11832958B2 (en) Automatic image-based skin diagnostics using deep learning
US11574739B2 (en) Systems and methods for formulating personalized skincare products
US11055762B2 (en) Systems and methods for providing customized product recommendations
US11488701B2 (en) Cognitive health state learning and customized advice generation
JP2021523785A (en) Systems and methods for hair coverage analysis
EP3699811A1 (en) Machine-implemented beauty assistant for predicting face aging
KR102445747B1 (en) Method and apparatus for providing total curation service based on skin analysis
CN112686232A (en) Teaching evaluation method and device based on micro expression recognition, electronic equipment and medium
US11854188B2 (en) Machine-implemented acne grading
Janowski et al. EMOTIF–A system for modeling 3D environment evaluation based on 7D emotional vectors
KR102352915B1 (en) Method for operating facial skin condition analysis server using artificial intelligence and big data, and program thereof
US20240112492A1 (en) Curl diagnosis system, apparatus, and method
US20240108280A1 (en) Systems, device, and methods for curly hair assessment and personalization
US20240112491A1 (en) Crowdsourcing systems, device, and methods for curly hair characterization
CN114502061B (en) Image-based automatic skin diagnosis using deep learning
CN117813661A (en) Skin analysis system and method implementations
KR102151251B1 (en) Method for estimating a turnaround time in hospital
US20200051447A1 (en) Cognitive tool for teaching generlization of objects to a person
Cole Edge Computing for Facial Recognition & Emotion Detection

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20200702

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20220629

RAP3 Party data changed (applicant data changed or rights of an application transferred)

Owner name: LUDWINSKI, CELIA

Owner name: L'OREAL