WO2019136354A1 - Machine-implemented facial health and beauty assistant - Google Patents

Machine-implemented facial health and beauty assistant

Info

Publication number
WO2019136354A1
WO2019136354A1 (PCT/US2019/012492, US2019012492W)
Authority
WO
WIPO (PCT)
Prior art keywords
user
facial skin
image
machine learning
learning models
Prior art date
Application number
PCT/US2019/012492
Other languages
English (en)
Inventor
Celia LUDWINSKI
Florent VALCESCHINI
Yuanjie Li
Zhiyuan Song
Christine EL-FAKHRI
Hemant Joshi
Original Assignee
L'oreal
Celia LUDWINSKI
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by L'oreal, Celia LUDWINSKI
Priority to EP19705420.8A (EP3718050A1)
Priority to JP2020536683A (JP7407115B2)
Priority to CN201980017228.2A (CN111868742A)
Priority to KR1020207019230A (KR102619221B1)
Publication of WO2019136354A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/40Software arrangements specially adapted for pattern recognition, e.g. user interfaces or toolboxes therefor
    • G06F18/41Interactive pattern learning with a human teacher
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • AHUMAN NECESSITIES
    • A45HAND OR TRAVELLING ARTICLES
    • A45DHAIRDRESSING OR SHAVING EQUIPMENT; EQUIPMENT FOR COSMETICS OR COSMETIC TREATMENTS, e.g. FOR MANICURING OR PEDICURING
    • A45D44/00Other cosmetic or toiletry articles, e.g. for hairdressers' rooms
    • A45D2044/007Devices for determining the condition of hair or skin or for selecting the appropriate cosmetic or hair treatment

Definitions

  • a regimen is a systematic plan or course of action intended to improve the health and/or beauty of a human user.
  • a regimen might include cleaning the skin with a specific cleanser and applying specific creams, adhering to specific dietary constraints, changing sleep habits, etc.
  • Conventional health and beauty portals also lack mechanisms by which features of a user’s skin can be tracked over time, such as to observe the efficacy of the recommended regimen. They lack sufficient explanatory information and advice, are not targeted separately for men and women, and suffer from false positives in detecting certain conditions (hair misidentified as wrinkles, for example).
  • the ability to recommend a health and/or beauty regimen through data analysis, as well as to track individual users’ progress through that regimen by data analysis, has yet to be realized on a machine.
  • One or more images are accepted by one or more processing circuits from a user depicting the user’s facial skin.
  • machine learning models stored in one or more memory circuits are applied to the one or more images to classify facial skin characteristics, identify significant objects, determine beauty trends, and the like.
  • a regimen recommendation is provided to the user based on the classified facial skin characteristics.
  • a system comprising: one or more memory circuits configured to store machine learning models; and one or more processing circuits configured to: accept at least one image from a user depicting the user’s facial skin; apply the machine learning models to the image to classify facial skin characteristics; and generate a regimen recommendation to the user based on the classified facial skin characteristics.
  • the one or more processing circuits is further configured to: accept another image from a user depicting the user’s facial skin; apply the machine learning models to the other image to classify facial skin characteristics; and update the regimen recommendation to the user based on the classified facial skin characteristics of the other image.
  • the one or more processors are further configured to process the image to progress the user’s facial skin to a simulated future condition.
  • the simulated future condition is that of the user’s facial skin when the regimen is adhered to by the user.
  • the one or more processing circuits are physically separated into a client platform and a service platform communicatively coupled through a communication network.
  • the one or more processor circuits are further configured to: accept input from a plurality of users that classify the facial skin characteristics from training images provided thereto; and train the models using the accepted input.
  • a method comprising: accepting at least one image from a user depicting the user’s facial skin; applying machine learning models to the image to classify facial skin characteristics; and generating a regimen recommendation to the user based on the classified facial skin characteristics.
  • an apparatus comprising: a processing circuit to accept at least one image depicting facial skin of a user; a communication circuit to convey the accepted image to machine learning models and to receive a regimen recommendation from the machine learning models; and a user interface circuit to present the regimen recommendation to the user.
  • the processing circuit is further configured to: alert the user that another image depicting the user’s facial skin is required according to a predefined schedule; accept the other image from a user depicting the user’s facial skin; the communication circuit being further configured to convey the other image to the machine learning models and to receive an updated regimen recommendation from the machine learning models; and the user interface circuit being further configured to present the updated regimen recommendation to the user.
  • the user interface circuit is further configured to present images of human faces to the user; the processing circuit being further configured to accept input from the user that classifies facial skin characteristics from the images provided thereto through the user interface circuit; and the communication interface circuit being further configured to convey the user input to the machine learning models as training data.
  • the user interface circuit is further configured to present a user control by which the facial skin characteristics are rated on a predetermined scale.
  • the apparatus includes a camera communicatively coupled to the processing circuit to provide the image from the user thereto.
  • the camera, the processing circuit, the user interface circuit and the communication circuit are components of a smartphone.
  • a method comprising: accepting at least one image depicting facial skin of a user; conveying the accepted image to machine learning models; receiving a regimen recommendation from the machine learning models; and presenting the regimen recommendation to the user.
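  • By way of a non-limiting illustration of the claimed flow (accept an image, apply stored models, generate a regimen recommendation), the following Python sketch shows one possible organization; the names SkinAnalysis, classify_facial_skin and recommend_regimen, and the thresholds used, are hypothetical and are not taken from the disclosure.

```python
# Illustrative sketch only: names, scales and thresholds are hypothetical
# assumptions and are not the patented implementation.
from dataclasses import dataclass
from typing import Callable, Dict, List

# A "model" here is any callable mapping an image (raw bytes) to a 0-10 score
# for one facial skin characteristic.
Model = Callable[[bytes], float]


@dataclass
class SkinAnalysis:
    scores: Dict[str, float]  # e.g. {"wrinkles": 7.0, "hydration": 4.5}


def classify_facial_skin(image_bytes: bytes, models: Dict[str, Model]) -> SkinAnalysis:
    """Apply each stored model to the accepted image of the user's facial skin."""
    return SkinAnalysis(scores={name: m(image_bytes) for name, m in models.items()})


def recommend_regimen(analysis: SkinAnalysis) -> List[str]:
    """Map classified characteristics to a hypothetical regimen recommendation."""
    regimen = []
    if analysis.scores.get("hydration", 10.0) < 5.0:
        regimen.append("apply a hydrating moisturizer twice daily")
    if analysis.scores.get("wrinkles", 0.0) > 6.0:
        regimen.append("use a cream formulated for fine lines in the evening")
    return regimen or ["maintain current routine"]


if __name__ == "__main__":
    fake_models = {"hydration": lambda img: 3.0, "wrinkles": lambda img: 7.5}
    print(recommend_regimen(classify_facial_skin(b"raw-jpeg-bytes", fake_models)))
```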
  • FIG. 1 is a schematic block diagram of an example system configuration by which the present general inventive concept can be embodied.
  • FIG. 2 is a flow diagram of a simple user interaction with an embodiment of the present general inventive concept.
  • FIG. 3 is a schematic block diagram of example data flow of an embodiment of the present general inventive concept.
  • FIG. 4 is a block diagram of crowdsourced training of machine learning models according to an embodiment of the present general inventive concept.
  • FIG. 5 is a diagram of an example client platform device on which the present general inventive concept can be embodied.
  • FIG. 6 is a flow diagram of example crowdsourced training of machine learning models according to an embodiment of the present general inventive concept.
  • FIG. 7 is a diagram illustrating a test operation in accordance with the crowdsourced machine learning model training.
  • exemplary is used herein to mean, “serving as an example, instance or illustration.” Any embodiment of construction, process, design, technique, etc., designated herein as exemplary is not necessarily to be construed as preferred or advantageous over other such embodiments. Particular quality or fitness of the examples indicated herein as exemplary is neither intended nor should be inferred.
  • FIG. 1 is a schematic block diagram of an exemplary facial health and beauty assistant (FHBA) system 100 comprising an FHBA client platform 110 and an FHBA service platform 120 communicatively coupled through a network 130.
  • FHBA client platform 110 is a smartphone, tablet computer or other mobile computing device, although the present invention is not so limited.
  • exemplary FHBA client platform 110 comprises a processor 112, memory 114, a camera 115, a user interface 116 and a communication interface 118 over which an FHBA client interface 150 may be implemented.
  • FHBA client interface 150 provides the primary portal through which a user accesses FHBA system 100.
  • FHBA service platform 120 comprises one or more server computers, each comprising a processor 122, a memory 124, a user interface 126 and a communication interface 128. These resources of FHBA service platform 120 may be utilized to implement an FHBA service interface 152, machine learning logic 154 and a storage memory 156.
  • Storage memory 156 represents a sufficient amount of volatile and persistent memory to embody the invention. Storage memory 156 may contain vast amounts of encoded human knowledge as well as space for the private profile of a single user. Storage memory 156 may further store processor instructions that, when executed by one or more processors 122, perform some task or procedure for embodiments of the invention. Storage memory 156 may further store user models (coefficients, weights, processor instructions, etc.) that are operable with machine learning logic 154 to prescribe a particular regimen for a user and track the user’s progress under the regimen.
  • Exemplary FHBA service interface 152 provides the infrastructure by which network access to FHBA services is both facilitated and controlled.
  • FHBA client interface 150 and FHBA service interface 152 communicate via a suitable communication link 145 using the signaling and data transport protocols for which communication interface 118 and communication interface 128 are constructed or otherwise configured.
  • FHBA service interface 152 may implement suitable Internet hosting services as well as authentication and other security mechanisms that allow access only to authorized users and protect the users’ private data. Additionally, FHBA service interface 152 may realize an application programming interface (API) that affords FHBA client interface 150 communication with, for example, machine learning logic 154.
  • API application programming interface
  • Machine learning logic 154 provides the infrastructure for embodiments of the invention to learn from and make predictions about data without being explicitly programmed to do so.
  • machine learning logic 154 implements one or more convolutional neural networks (CNNs), the models for which may be trained using open source datasets or crowdsourced data sets, as explained below.
  • CNNs convolutional neural networks
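  • As an illustrative sketch only, a small multi-output CNN of the kind machine learning logic 154 might employ could be expressed as follows, assuming PyTorch; the disclosure does not specify any particular architecture, layer sizes or framework.

```python
# Hypothetical sketch of a small multi-output CNN, assuming PyTorch; the
# disclosure does not specify any particular architecture, sizes or framework.
import torch
import torch.nn as nn


class SkinFeatureCNN(nn.Module):
    """Predicts one score per facial skin characteristic from a face crop."""

    def __init__(self, num_features: int = 9):  # e.g. apparent age ... crow's feet
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.backbone(x).flatten(1)
        return self.head(h)  # raw scores; rescale to the 1-10 rating range as needed


# Example: one 224x224 RGB face image -> 9 characteristic scores.
model = SkinFeatureCNN()
print(model(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 9])
```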
  • Other machine learning techniques may be used in conjunction with the present invention including, but not limited to, decision tree learning, association rule learning, artificial neural networks, deep learning, inductive logic programming, support vector machines, clustering, Bayesian networks, reinforcement learning, representation learning, similarity and metric learning, sparse dictionary learning, genetic algorithms, rule-based machine learning and learning classifiers. Additional techniques, such as those described in U.S. Patent No. 8,442,321 and the other patents cited herein, may also be employed.
  • Embodiments of the invention determine various regimens for a user based on images of the user taken by camera 115 on FHBA client platform 110.
  • the images of the user’s face are preferably obtained under conditions of uniform lighting that is consistent over time.
  • embodiments of the invention provide for a mirror device 140 that includes mirror surface 144 circumscribed by a ring illuminator 142. This configuration is intended to define a temporally constant standard of illumination. When the invention is so embodied, temporally varying characteristics in images of a user’s face are more readily recognized and labeled.
  • FIG. 2 is a flow diagram by which an example interaction with an embodiment of the invention can be explained.
  • the interaction of FIG. 2 is simple by design and is in no way intended to be limiting.
  • the description of FIG. 2 is intended to illustrate functionality of the configuration illustrated in FIG. 1. Further features of the invention, beyond those described with reference to FIG. 2, will be discussed below.
  • a user may generate an image of his face, such as by camera 115 of FHBA client platform 110. This may be achieved with or without the illumination standard discussed above.
  • the user’s image is sent to FHBA service platform 120. This may be achieved by suitable communication protocols shared between FHBA client platform 110 and FHBA service platform 120 to realize communication link 145.
  • Machine learning logic 154 may perform analyses that determine, among other things: apparent age, i.e., the subjective age of the user estimated from a visual appearance of the user’s face; evenness of facial skin tone (is there blotching, age/sun spots, acne scarring or other blemishes); the presence of stress as seen in under-eye puffiness, dark circles, overall tone drooping in eyelids/corners of the mouth, fine lines and eye redness; hydration level, often referred to as plump or slick, which presents as a lack of ashy-ness, skin flaking, dullness and fine lines; shine, a nonlinear parameter where the ideal is a moderate amount of shine; condition of pores, where a reduced appearance of pores is desirable as it provides a healthy, youthful and smooth skin texture; the presence of acne as characterized by red/inflamed pimples and scarring; and the presence of wrinkles, i.e., folds or creases in the skin, and the like.
  • process 200 may transition to operation 220, whereby the analysis results and the prescribed regimen (products and routines) and/or updates to the regimen are sent to the user via FHBA client interface 150.
  • process 200 may transition to operation 230, whereby FHBA service interface 152 sends a recommended regimen or updates to a regimen to FHBA client interface 150.
  • the user may follow the regimen as indicated in operation 235 and, in operation 240, it is determined whether a new interval has commenced. If so, process 200 reiterates from operation 210.
  • FHBA client interface 150 may access calendars and timers (as well as GPS) onboard FHBA client platform 110, as well as network-accessible calendars on network 130.
  • FHBA client interface 150 may remind the user to take a picture of his face, i.e., remind him of the new interval.
  • FHBA system 100 can determine from the images taken at each interval whether the recommended regimen is working and, if not, FHBA system 100 may revise the regimen, e.g., change a product, recommend further lifestyle changes, make a doctor’s appointment, etc.
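  • A minimal sketch of such interval-to-interval progress tracking is shown below; the scoring convention (0-10, lower is better) and the improvement threshold are assumptions for illustration only.

```python
# Hypothetical sketch of interval-to-interval progress tracking; the scoring
# convention (0-10, lower is better) and the threshold are assumptions.
from typing import Dict


def regimen_is_working(previous: Dict[str, float],
                       current: Dict[str, float],
                       min_improvement: float = 0.5) -> bool:
    """Return True if every tracked characteristic improved since the last interval."""
    tracked = previous.keys() & current.keys()
    return all(previous[k] - current[k] >= min_improvement for k in tracked)


# Example: wrinkles improved but acne worsened, so the regimen may be revised.
print(regimen_is_working({"wrinkles": 7.0, "acne": 4.0},
                         {"wrinkles": 6.0, "acne": 4.2}))  # False
```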
  • FIG. 3 is a diagram of data flow between an exemplary FHBA client interface 150 and services of FHBA service platform 120. It should be noted that, in FIG. 3, FHBA service interface 152 has been omitted to avoid unnecessary congestion in the figure. However, those having skill in the relevant arts will recognize the operation of an FHBA service interface 152 to control and facilitate the data flow depicted in FIG. 3.
  • machine learning logic 154 may comprise a skin analyzer 330, facial appearance progression generator 335 and a regimen recommendation generator 340 and may be communicatively coupled to a user account database 310 and a product database 320. Machine learning logic 154 may train and utilize machine learning models 370 to recommend regimens and to track the progress of the user under the regimen.
  • training may involve selecting a set of features, e.g., apparent age, evenness, stress, hydration, shine, pores, acne, wrinkles, sagging, crow’s feet, etc., and assigning labels to image data that reflects the presence or prominence of those features.
  • the assigning of labels may be performed by a subject matter expert or, as explained below, through crowdsourced data.
  • machine learning logic 154 may configure models 370 to predict the degree to which the features are present in a test image, which may change over time.
  • the present invention is not limited to a particular model representation, which may include binary models, multiclass classification models, regression models, etc.
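  • As one hedged example of a regression-style model of the kind mentioned above, crowdsourced ratings could be fit against per-image feature vectors roughly as follows; the use of scikit-learn and of precomputed feature vectors are assumptions not taken from the disclosure.

```python
# Minimal sketch of fitting a regression model to crowdsourced 1-10 ratings,
# assuming scikit-learn and precomputed per-image feature vectors (assumptions).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))    # placeholder features for 200 labeled images
y = rng.uniform(1, 10, size=200)  # placeholder "crow's feet" prominence ratings

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)

# Predict the degree to which the characteristic is present in a new image.
print(model.predict(rng.normal(size=(1, 64))))
```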
  • Exemplary user account database 310 contains the data of all users of FHBA system 100 in a secure manner. This includes user profile data, current and past user photos 357 for each user, current and past skin analyses 358 for each user, current and past product recommendations 362 and current and past routine recommendations 364 for each user.
  • Exemplary product database 320 contains the data of different products that can be used in a regimen.
  • Product database 320 may contain records reflecting the product names, active and inactive ingredients, label information, recommended uses, and so on.
  • Through product input 354, the user (and other users of FHBA system 100) may provide feedback on different products and may enter products not already in product database 320.
  • the present invention is not limited to particular products that can be entered in product database 320.
  • Skin analyzer 330 is constructed or is otherwise configured to classify various skin conditions or artifacts from imagery of a user’s face using machine learning techniques over models 370.
  • photographic images 352 of a user’s face are provided to skin analyzer 330 for analysis.
  • Skin analyzer 330 may implement image preprocessing mechanisms that include cropping, rotating, registering and filtering input images prior to analysis. After any such preprocessing, skin analyzer 330 may apply models 370 to the input image 357 to locate, identify and classify characteristics of the user’s facial skin.
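  • A short sketch of the preprocessing steps named above (cropping, rotating and filtering) is given below, assuming OpenCV; registration to facial landmarks is omitted and the parameters shown are illustrative.

```python
# Sketch of the preprocessing steps named above (rotate, crop, filter), assuming
# OpenCV; registration to landmarks is omitted and parameters are illustrative.
from typing import Optional, Tuple

import cv2
import numpy as np


def preprocess_face_image(image: np.ndarray,
                          angle_deg: float = 0.0,
                          crop_box: Optional[Tuple[int, int, int, int]] = None) -> np.ndarray:
    """Rotate, crop, and lightly denoise a BGR face image before analysis."""
    h, w = image.shape[:2]
    # Rotate about the image center (e.g. to level the eye line).
    rotation = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    image = cv2.warpAffine(image, rotation, (w, h))
    # Crop to the face region if a bounding box (x, y, width, height) is given.
    if crop_box is not None:
        x, y, bw, bh = crop_box
        image = image[y:y + bh, x:x + bw]
    # Mild Gaussian filtering to suppress sensor noise.
    return cv2.GaussianBlur(image, (3, 3), 0)


# Example on a synthetic image.
face = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
print(preprocess_face_image(face, angle_deg=5.0, crop_box=(100, 50, 300, 300)).shape)
```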
  • Facial appearance progression generator 335 may operate on the user’s facial images to portray how the user’s face would appear sometime in the future. Such progression may be in age, for which age progression techniques may be deployed, or may be in appearance resulting from adherence to a regimen.
  • a progressed image 356 may be provided to the user through FHBA client interface 150.
  • Regimen recommendation generator 340 may operate on analysis results 358 obtained from skin analyzer 330 towards prescribing a regimen to the user. Models 370 may be trained to predict what products and routines (treatment, cosmetic and lifestyle recommendations, etc.) would be effective in meeting the user’s goal with regard to facial skin characteristics identified in the skin analysis.
  • Regimen recommendation generator 340 may format the analysis results 358 of skin analyzer 330 as a query into, for example, product database 320 based on knowledge encoded on models 370.
  • product database 320 may return product data and metadata 366, and product recommendations 362 and routine recommendations 364 may be provided to FHBA client interface 150.
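  • One possible shape of such a query, assuming a simple SQLite-backed product database with invented table and column names, is sketched below.

```python
# Hypothetical sketch of querying a product database from analysis results;
# the SQLite schema, table and column names are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, targets TEXT, usage TEXT)")
conn.executemany(
    "INSERT INTO products VALUES (?, ?, ?)",
    [("Eye repair cream", "crow's feet", "apply nightly around the eyes"),
     ("Hydrating serum", "hydration", "apply morning and evening")],
)


def products_for(characteristic: str):
    """Return product recommendations targeting one classified characteristic."""
    rows = conn.execute(
        "SELECT name, usage FROM products WHERE targets = ?", (characteristic,))
    return rows.fetchall()


# A prominent crow's-feet finding in the analysis results drives the query.
print(products_for("crow's feet"))
```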
  • FIG. 4 is a diagram of such an embodiment of the invention.
  • users 410 are presented a set of training images 420 over which they are asked to characterize facial skin characteristics and/or facial features.
  • a suitable scale is constructed (e.g., integers 1-10) with which users can rate the severity or prominence of the feature. For example, each of users 410 is (over time) presented a large number of facial images and is walked through a set of questions regarding features and/or skin characteristics of the person in each image.
  • each user 410 is asked to rate the prominence of each of the features (e.g., apparent age, evenness, stress, hydration, shine, pores, acne, wrinkles, sagging, crow’s feet, etc.).
  • the answers to the questions may serve as labels used for training machine learning logic 154.
  • FIG. 5 depicts an exemplary FHBA client platform 110 in the form of a smartphone having a touchscreen 510 as user interface 116.
  • Exemplary FHBA client interface 150 is implemented on the computational resources of FHBA client platform 110 as discussed with reference to FIG. 1.
  • FHBA client interface 150 may present a photograph of a person’s face in image area 520 and may present via text 142: “On a scale from 1 to 10, where ‘1’ means ‘invisible’ and ‘10’ means ‘prominently present,’ how would you rate the presence of this person’s crow’s feet?”
  • a suitable user interface control 144 (a slider control is illustrated in FIG. 5) may be implemented on FHBA client interface 150 that allows a user to input his or her rating.
  • FIG. 6 is a flow diagram of a crowdsourced training process 600 with which the present invention may be embodied.
  • a training image may be provided to FHBA client interface 150.
  • a set of training images 420 may have been preselected as including illustrative examples of the skin characteristics of interest.
  • the user is presented with a first question in operation 620, and process 600 waits for an answer (rating) in operation 630.
  • Such a question might be, for example, “On a scale of 1 to 10, where ‘1’ is ‘invisible’ and ‘10’ is ‘highly prominent,’ how would you rate this model’s acne?”
  • the user’s answer may be formatted into a label suitable for machine training of machine learning logic 154 in operation 640.
  • in operation 650, it is determined whether all questions relating to the currently displayed image have been answered. If not, process 600 may transition back to operation 620, whereby the next question is presented. If all questions have been answered, as determined at operation 650, it is determined in operation 660 whether all training images have been presented. If not, process 600 may transition back to operation 610, whereby the next training image is presented. If all training images have been presented, as determined at operation 660, the labeled images may be used to train models 370 in operation 670.
  • the operations of process 600, e.g., presenting the next question per operations 620 and 650 and/or presenting the next image per operations 610 and 660, need not be performed in any one sitting.
  • the user may be prompted to answer a single question at a time (e.g., every time the user logs on) and it is only over time that all questions and images are presented to any one user.
  • users may be selected to answer all questions for all images in a single sitting. Over a large number of users and/or facial images, many labels may be generated for training models 370, where the statistical trends underlying such training reflect public views as opposed to those of a human expert.
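  • A compact sketch of how the ratings collected by process 600 might be aggregated into training labels follows; the use of a mean rating and the function name are illustrative assumptions.

```python
# Sketch of aggregating the crowdsourced ratings gathered by process 600 into
# training labels; the use of a mean rating is an assumption for illustration.
from collections import defaultdict
from statistics import mean


def aggregate_labels(ratings):
    """Turn per-user 1-10 ratings into one training label per (image, question).

    `ratings` is an iterable of (image_id, question, rating) tuples collected
    from many users, whether over time or in a single sitting.
    """
    buckets = defaultdict(list)
    for image_id, question, rating in ratings:
        buckets[(image_id, question)].append(rating)
    # The mean rating reflects the statistical trend of public views.
    return {key: mean(values) for key, values in buckets.items()}


print(aggregate_labels([("img1", "crow's feet", 7),
                        ("img1", "crow's feet", 8),
                        ("img1", "acne", 2)]))
```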
  • FIG. 7 illustrates an example test operation in accordance with the crowdsourced training discussed above.
  • a test image 710, i.e., a user’s own image, is provided to machine learning logic 154.
  • machine learning logic 154 may analyze the image per the models trained on the crowdsourced data 720.
  • machine learning logic 154 estimates that 80% of people surveyed would rate the user’s crow’s feet a 7 out of 10 in terms of prominence, as indicated at 722.
  • machine learning logic 154 may recommend a regimen (e.g., a cream specially formulated for crow’s feet and recommended application instructions) based on the severity score of 7.
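  • As an illustrative sketch of this last step, a predicted rating distribution could be reduced to a severity score that triggers a recommendation roughly as follows; the probabilities and the threshold are assumptions, not values from the disclosure.

```python
# Illustrative sketch of reducing a predicted rating distribution (as in FIG. 7)
# to a severity score that triggers a recommendation; the probabilities and the
# threshold below are assumptions, not values from the disclosure.
def severity_from_distribution(rating_probs: dict) -> int:
    """Return the rating the largest share of surveyed people would assign."""
    return max(rating_probs, key=rating_probs.get)


# e.g. the models estimate 80% of people would rate the crow's feet a 7 of 10.
predicted = {6: 0.1, 7: 0.8, 8: 0.1}
severity = severity_from_distribution(predicted)
if severity >= 6:
    print(f"severity {severity}: recommend a cream formulated for crow's feet")
```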
  • aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
  • the computer readable medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, e.g., an object oriented programming language such as Java, Smalltalk, C++ or the like, or a conventional procedural programming language, such as the “C” programming language or similar programming languages.
  • the program code may execute entirely on the user’s computer, partly on the user’s computer, as a stand-alone software package, partly on the user’s computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user’s computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • LAN local area network
  • WAN wide area network
  • Internet Service Provider for example, AT&T, MCI, Sprint, EarthLink, MSN, GTE, etc.
  • the computer systems of the present invention embodiments may alternatively be implemented by any type of hardware and/or other processing circuitry.
  • the various functions of the computer systems may be distributed in any manner among any quantity of software modules or units, processing or computer systems and/or circuitry, where the computer or processing systems may be disposed locally or remotely of each other and communicate via any suitable communications medium (e.g., LAN, WAN, Intranet, Internet, hardwire, modem connection, wireless, etc.).
  • any suitable communications medium e.g., LAN, WAN, Intranet, Internet, hardwire, modem connection, wireless, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

An image of a user depicting the user's facial skin is accepted by one or more processing circuits. Machine learning models stored in one or more memory circuits are applied to the image to classify facial skin characteristics. A regimen recommendation is provided to the user based on the classified facial skin characteristics.
PCT/US2019/012492 2018-01-05 2019-01-07 Machine-implemented facial health and beauty assistant WO2019136354A1 (fr)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP19705420.8A EP3718050A1 (fr) 2018-01-05 2019-01-07 Machine-implemented facial health and beauty assistant
JP2020536683A JP7407115B2 (ja) 2018-01-05 2019-01-07 Machine-implemented facial health and beauty assistant
CN201980017228.2A CN111868742A (zh) 2018-01-05 2019-01-07 Machine-implemented facial health and beauty assistant
KR1020207019230A KR102619221B1 (ko) 2018-01-05 2019-01-07 Machine-implemented facial health and beauty assistant

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201862614080P 2018-01-05 2018-01-05
US201862614001P 2018-01-05 2018-01-05
US62/614,001 2018-01-05
US62/614,080 2018-01-05

Publications (1)

Publication Number Publication Date
WO2019136354A1 true WO2019136354A1 (fr) 2019-07-11

Family

ID=65433729

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/012492 WO2019136354A1 (fr) 2018-01-05 2019-01-07 Machine-implemented facial health and beauty assistant

Country Status (5)

Country Link
EP (1) EP3718050A1 (fr)
JP (1) JP7407115B2 (fr)
KR (1) KR102619221B1 (fr)
CN (1) CN111868742A (fr)
WO (1) WO2019136354A1 (fr)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220000417A1 (en) * 2020-07-02 2022-01-06 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a skin area of a user for determining skin laxity
WO2022069659A3 (fr) * 2020-09-30 2022-06-16 Studies&Me A/S Method and system for determining the severity of a skin problem
US11419540B2 (en) 2020-07-02 2022-08-23 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a shaving stroke for determining pressure being applied to a user's skin
US11455747B2 (en) 2020-07-02 2022-09-27 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a user's body for determining a user-specific skin redness value of the user's skin after removing hair
WO2022243498A1 (fr) 2021-05-20 2022-11-24 Ica Aesthetic Navigation Gmbh Computer-based body part analysis methods and systems
US11544845B2 (en) 2020-07-02 2023-01-03 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a user's body before removing hair for determining a user-specific trapped hair value
US11734823B2 (en) 2020-07-02 2023-08-22 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a user's body for determining a user-specific skin irritation value of the user's skin after removing hair
US11741606B2 (en) 2020-07-02 2023-08-29 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a user's body after removing hair for determining a user-specific hair removal efficiency value
US11801610B2 (en) 2020-07-02 2023-10-31 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a user's body for determining a hair growth direction value of the user's hair
US11890764B2 (en) 2020-07-02 2024-02-06 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a user's body for determining a hair density value of a user's hair
WO2024075109A1 (fr) * 2022-10-05 2024-04-11 Facetrom Limited Attractiveness determination system and method

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102344700B1 (ko) * 2021-02-17 2021-12-31 주식회사 에프앤디파트너스 Clinical imaging apparatus

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030065636A1 (en) * 2001-10-01 2003-04-03 L'oreal Use of artificial intelligence in providing beauty advice
US20030065589A1 (en) * 2001-10-01 2003-04-03 Daniella Giacchetti Body image templates with pre-applied beauty products
US20030064350A1 (en) * 2001-10-01 2003-04-03 Gilles Rubinstenn Beauty advisory system and method
US8442321B1 (en) 2011-09-14 2013-05-14 Google Inc. Object recognition in images
US20140376819A1 (en) 2013-06-21 2014-12-25 Microsoft Corporation Image recognition by image search
US9015083B1 (en) 2012-03-23 2015-04-21 Google Inc. Distribution of parameter calculation for iterative optimization methods
US9324022B2 (en) 2014-03-04 2016-04-26 Signal/Sense, Inc. Classifying data with deep learning neural records incrementally refined through expert input
US9536293B2 (en) 2014-07-30 2017-01-03 Adobe Systems Incorporated Image assessment using deep convolutional neural networks
US20170270593A1 (en) * 2016-03-21 2017-09-21 The Procter & Gamble Company Systems and Methods For Providing Customized Product Recommendations

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4761924B2 (ja) * 2004-10-22 2011-08-31 株式会社 資生堂 Skin condition diagnosis system and beauty counseling system
US20070058858A1 (en) * 2005-09-09 2007-03-15 Michael Harville Method and system for recommending a product based upon skin color estimated from an image
JP5116965B2 (ja) * 2005-11-08 2013-01-09 株式会社 資生堂 Cosmetic medical diagnosis method, cosmetic medical diagnosis system, cosmetic medical diagnosis program, and recording medium on which the program is recorded
KR100734849B1 (ko) * 2005-11-26 2007-07-03 한국전자통신연구원 Face recognition method and apparatus
US8391639B2 (en) 2007-07-23 2013-03-05 The Procter & Gamble Company Method and apparatus for realistic simulation of wrinkle aging and de-aging
US20170330029A1 (en) * 2010-06-07 2017-11-16 Affectiva, Inc. Computer based convolutional processing for image analysis
TWI471117B (zh) * 2011-04-29 2015-02-01 Nat Applied Res Laboratoires Facial skin quality evaluation computing interface device usable on mobile devices
US9916538B2 (en) * 2012-09-15 2018-03-13 Z Advanced Computing, Inc. Method and system for feature detection
JP2014010750A (ja) * 2012-07-02 2014-01-20 Nikon Corp Device, system, method, and program for determining the application timing of a topical skin preparation
US9256963B2 (en) * 2013-04-09 2016-02-09 Elc Management Llc Skin diagnostic and image processing systems, apparatus and articles
US9760935B2 (en) * 2014-05-20 2017-09-12 Modiface Inc. Method, system and computer program product for generating recommendations for products and treatments
JP5950486B1 (ja) 2015-04-01 2016-07-13 みずほ情報総研株式会社 Aging prediction system, aging prediction method, and aging prediction program
JP2018531437A (ja) * 2015-06-15 2018-10-25 アミール,ハイム Systems and methods for adaptive skin treatment
US10002415B2 (en) * 2016-04-12 2018-06-19 Adobe Systems Incorporated Utilizing deep learning for rating aesthetics of digital images
TWI585711B (zh) * 2016-05-24 2017-06-01 泰金寶電通股份有限公司 Method for obtaining care information, method for sharing care information, and electronic device therefor
CN107123027B (zh) * 2017-04-28 2021-06-01 广东工业大学 Cosmetic recommendation method and system based on deep learning
CN107437073A (zh) * 2017-07-19 2017-12-05 竹间智能科技(上海)有限公司 Facial skin quality analysis method and system based on deep learning and generative adversarial networks
CN107480719B (zh) * 2017-08-17 2020-08-07 广东工业大学 Skin care product recommendation method and system based on skin characteristic evaluation

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220000417A1 (en) * 2020-07-02 2022-01-06 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a skin area of a user for determining skin laxity
US11419540B2 (en) 2020-07-02 2022-08-23 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a shaving stroke for determining pressure being applied to a user's skin
US11455747B2 (en) 2020-07-02 2022-09-27 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a user's body for determining a user-specific skin redness value of the user's skin after removing hair
US11544845B2 (en) 2020-07-02 2023-01-03 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a user's body before removing hair for determining a user-specific trapped hair value
US11734823B2 (en) 2020-07-02 2023-08-22 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a user's body for determining a user-specific skin irritation value of the user's skin after removing hair
US11741606B2 (en) 2020-07-02 2023-08-29 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a user's body after removing hair for determining a user-specific hair removal efficiency value
US11801610B2 (en) 2020-07-02 2023-10-31 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a user's body for determining a hair growth direction value of the user's hair
US11890764B2 (en) 2020-07-02 2024-02-06 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a user's body for determining a hair density value of a user's hair
US11896385B2 (en) 2020-07-02 2024-02-13 The Gillette Company Llc Digital imaging systems and methods of analyzing pixel data of an image of a shaving stroke for determining pressure being applied to a user's skin
WO2022069659A3 (fr) * 2020-09-30 2022-06-16 Studies&Me A/S Method and system for determining the severity of a skin problem
WO2022243498A1 (fr) 2021-05-20 2022-11-24 Ica Aesthetic Navigation Gmbh Computer-based body part analysis methods and systems
WO2024075109A1 (fr) * 2022-10-05 2024-04-11 Facetrom Limited Attractiveness determination system and method

Also Published As

Publication number Publication date
JP7407115B2 (ja) 2023-12-28
EP3718050A1 (fr) 2020-10-07
CN111868742A (zh) 2020-10-30
KR102619221B1 (ko) 2023-12-28
KR20200105480A (ko) 2020-09-07
JP2021510217A (ja) 2021-04-15

Similar Documents

Publication Publication Date Title
US11817004B2 (en) Machine-implemented facial health and beauty assistant
US10943156B2 (en) Machine-implemented facial health and beauty assistant
JP7407115B2 (ja) Machine-implemented facial health and beauty assistant
US11832958B2 (en) Automatic image-based skin diagnostics using deep learning
US11574739B2 (en) Systems and methods for formulating personalized skincare products
US11055762B2 (en) Systems and methods for providing customized product recommendations
US11227248B2 (en) Facilitation of cognitive conflict resolution between parties
US11488701B2 (en) Cognitive health state learning and customized advice generation
EP3699811A1 (fr) Assistant de beauté mis en uvre par machine pour prédire le vieillissement du visage
JPWO2020113326A5 (fr)
KR102445747B1 Method and apparatus for providing a total curation service based on skin analysis
US11854188B2 Machine-implemented acne grading
KR102404302B1 Operating method and program of a facial skin condition analysis server using artificial intelligence and big data
KR102151251B1 Method for predicting the time required for a hospital visit
US20240112492A1 Curl diagnosis system, apparatus, and method
US20240108280A1 Systems, device, and methods for curly hair assessment and personalization
CN117813661A Skin analysis system and method implementations
US20200051447A1 Cognitive tool for teaching generlization of objects to a person
FR3145227A1 System, apparatus and method for curl diagnosis
Cole Edge Computing for Facial Recognition & Emotion Detection

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19705420

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020536683

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019705420

Country of ref document: EP

Effective date: 20200702