WO2016001248A1 - Method for analyzing cosmetic routines of users and associated system - Google Patents

Method for analyzing cosmetic routines of users and associated system

Info

Publication number
WO2016001248A1
Authority
WO
WIPO (PCT)
Prior art keywords
cosmetic
video
interest
routine
user
Prior art date
Application number
PCT/EP2015/064887
Other languages
French (fr)
Inventor
Marie-Stéphanie CESBRON
Lydie LECUYER
Guillaume LEBOSSE
Uy KHOU
Original Assignee
L'oreal
Priority date
Filing date
Publication date
Application filed by L'oreal filed Critical L'oreal
Publication of WO2016001248A1 publication Critical patent/WO2016001248A1/en

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS


Landscapes

  • Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Engineering & Computer Science (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Image Analysis (AREA)

Abstract

The method comprises the following steps:
- capture and recording by an electronic device (12) of at least one video including a cosmetic routine of a user and spatiotemporal data associated with the cosmetic routine,
- routing of the video to a storage server (16),
- sequencing of the video by a processing application (70) in order to select, in the video, at least one sequence of cosmetic interest, and creation of a video sequence file on the basis of each sequence of cosmetic interest,
- association of at least one keyword of cosmetic interest with each video sequence file.

Description

Method for analyzing cosmetic routines of users and associated system
This invention relates to a method for analyzing cosmetic routines of users.
"Cosmetic routine" means gestures, expressions or actions usually carried out by a user when the user uses a cosmetic product in an environment.
The cosmetic product is for example a fluid product such as a liquid, a gel, a foam, a cream or a powder.
More generally, a "cosmetic product" is a product as defined in EC Regulation N° 1223/2009 of the European Parliament and the Council of November 30, 2009, relating to cosmetic products.
The method for analysis according to the invention is advantageously implemented using a capture device comprising a camera. The capture device is preferably mobile, in order to be displaced with the user, in order to capture cosmetic routines in any location.
A known capture device is described in document US 2003/0065256. Such a device generally comprises an electronic apparatus able to acquire a static image of the user and to send this image to a processing unit so that it can be analyzed. The image is then analyzed in order to establish a dermatological diagnosis, medical follow-up or to assess the efficacy over time of cosmetic products used by the user.
This device is therefore suited for analyzing the condition of a part of the body of a user at a given instant.
However, such a device is not able to carry out an analysis of cosmetic routines.
Indeed, the information collected with the device is insufficient to fully and reliably target the habits of a user, and the analysis carried out is limited to a local observation of one surface of the body.
One purpose of the invention is to obtain a method that makes it possible to enter data pertaining to a cosmetic routine associated with a user, in an environment, in a simple and reproducible manner, for the purposes of a subsequent analysis and/or a reproduction of this routine.
For this purpose, the invention relates to a method of the aforementioned type, the method comprising the following steps:
- capture and recording by an electronic device of at least one video including a cosmetic routine of a user and spatiotemporal data associated with the cosmetic routine,
- routing of the video to a storage server,
- sequencing of the video by a processing application in order to select, in the video, at least one sequence of cosmetic interest, and creation of a video sequence file on the basis of each sequence of cosmetic interest,
- association of at least one keyword of cosmetic interest with each video sequence file.
The method according to the invention can include one or more of the following features, taken alone or in any technically possible combination:
- the electronic device comprises a mirror and at least one camera suitable for filming a user of the mirror, the video being recorded by the camera;
- the electronic device comprises at least two cameras suitable for recording, respectively, a general view and a detail view of the user, the method comprising the simultaneous capture by each camera of two videos including the same cosmetic routine;
- the step of associating keywords comprises the inscription of at least one keyword of cosmetic interest into the metadata of each video sequence file;
- the keyword of cosmetic interest is chosen from a type of cosmetic action, a cosmetic operation performed during the routine, a portion of the body affected by the routine, and a type of product associated with the routine;
- the method comprises a step of associating at least one context data item with each video sequence, the context data being chosen from:
- spatiotemporal data recorded by the electronic device, in particular a date and/or time of capture, a capture length, a geographic location of the capture, and
- user and/or device identification data;
- the step of associating keywords of cosmetic interest is at least partially automated;
- the routing of video data involves the use of a secure transmission protocol between the electronic device and the storage server;
- the video data is compressed by the electronic device before being sent to the storage server and is then decompressed by the server;
- the method comprises the analysis of at least one video sequence in order to deduce at least one parameter of the cosmetic routine associated with the keyword of cosmetic interest;
- the step of analysis of a video sequence file comprises a phase of extracting a movement of at least one body part of the user performed in the video sequence, followed by a phase of recording parameters of cosmetic routines characteristic of the movement performed, in a movement database;
- the or each cosmetic routine parameter is exported to an automated system capable of reproducing the sequence of cosmetic interest;
- the analysis step comprises a video reconstruction phase in order to filter, on each image of the video, at least one area of interest and at least one background, the cosmetic routine parameter being extracted from the or each area of interest;
- the video analysis step comprises a phase of determining sensory parameters of the user, such as a dimensional modification of a body area or a change in color or temperature of a body area;
- the step of selecting sequences of cosmetic interest includes a phase of removing noisy sequences.
The invention also relates to a system for analyzing cosmetic routines, the system comprising:
- an electronic device for capture and recording of at least one video including a cosmetic routine of a user and spatiotemporal data associated with the cosmetic routine,
- a storage server to which the video data is routed,
- a processing application suitable for selecting, in the video, sequences of cosmetic interest, and creating a video sequence file from each sequence of interest, the processing application being suitable for associating at least one keyword of cosmetic interest with each video sequence.
The invention will be easier to understand in view of the following description, provided solely as an example and with reference to the appended drawings, wherein:
- figure 1 is a schematic representation of a first system for analyzing cosmetic routines according to the invention, with the system for analyzing cosmetic routines comprising an electronic capture device, as well as a storage server;
- figure 2 is a front schematic representation of the electronic capture device, with the electronic device comprising a mirror and a central unit;
- figure 3 is a rear schematic representation of the electronic capture device;
- figure 4 is a schematic representation of the central unit of the electronic capture device according to the invention;
- figure 5 is a schematic representation of an application of the central unit;
- figure 6 is a schematic representation of the storage server;
- figure 7 is a schematic representation of an application of the storage server;
- figure 8 is a flow chart of an example of the method according to the invention for analyzing cosmetic routines of users;
- figure 9 is a detailed view of the flow chart of figure 8.
Hereinafter, the term "video" designates a video signal, comprising a plurality of successive images, at a frequency higher than 1 Hz, and advantageously, the audio signal that is associated with it.
A system 10 for analyzing cosmetic routines according to the invention is shown in figure 1.
The cosmetic routine performed by the user comprises gestures, expressions or actions usually carried out by a user when the user uses a cosmetic product in a given environment.
This routine is for example an application routine for a care and/or makeup cosmetic product on a portion of the face.
As shown in figure 1, the system 10 according to the invention comprises an electronic device 12, advantageously mobile, able to acquire video data of a cosmetic routine and data on the characteristics of the environment in which the routine is performed, and a storage server 16 of the video data, located remotely from the electronic device 12, in order to collect, store and process the data collected by the device 12. The system 10 further comprises a unit for access 18 to the server, able to communicate with the server 16 and to send it instructions. It optionally comprises an automated system 20 for using the video data processed by the server 16.
In the particular example shown in figures 2 and 3, the electronic device 12 comprises a mirror 22, a lighting system 24, and at least one camera 26 fixed on the mirror 22.
It further comprises a central unit 28 connected to the camera 26, an activation control 29 and a power supply battery 30 able to supply the lighting system 24, the camera 26 and the central unit 28.
The mirror 22 comprises a fastening arrangement 23 that allows the user to position the mirror in different locations.
The lighting system 24 is advantageously located on the mirror 22 in such a way as to illuminate the area of the body of interest. The lighting system 24 comprises for example LEDs.
The camera 26 is of a size that is suitable for being fastened onto the mirror 22.
Advantageously, the electronic device 12 comprises at least two cameras 26 able to simultaneously capture two different views of the user performing a cosmetic routine, for example, a first view of the entire body of the user and a second detailed view of a portion of the body of the user. The cameras 26 preferably have separate depths of field and apertures. Optionally, the electronic device 12 is arranged in a water-resistant support, not shown in figure 1.
The central unit 28, advantageously mobile, is able to be detached from the other elements of the electronic device 12.
As shown in figure 4, the central unit 28 comprises a processor 34, a memory 38 comprising a software application 42 for data management, a man-machine interface 46 and a component for transmitting 47 data to the storage server 16.
The software application 42 is able to be executed by the processor 34 as controlled by the user via the man-machine interface 46. Alternatively, the software application 42 is able to be executed by a remote command received via the transmission component 47 or be executed automatically according to a schedule pre-recorded in the memory 38, for example with predefined durations each morning and/or each evening.
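By way of illustration only, the schedule-driven trigger could be sketched as follows in Python; the window times and helper names are hypothetical, since the patent specifies only pre-recorded durations each morning and/or evening.

```python
from datetime import datetime, time

# Hypothetical pre-recorded schedule stored in the memory 38: capture
# windows with predefined durations each morning and each evening.
CAPTURE_WINDOWS = [
    (time(7, 0), time(7, 30)),    # morning window
    (time(21, 0), time(21, 30)),  # evening window
]

def capture_should_run(now: datetime) -> bool:
    """Return True if the current time falls inside a scheduled capture window."""
    return any(start <= now.time() <= end for start, end in CAPTURE_WINDOWS)

if capture_should_run(datetime.now()):
    print("Scheduled window reached: starting the software application 42")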
As shown in figure 5, the software application 42 comprises a context software module 48, able to collect and record in the memory 38 context data including profile data of the user and spatiotemporal data relating to the video capture, and a module 51 for the management of each camera 26, able to activate and receive the video data of the camera 26 in order to store it in the memory 38.
According to the invention, the software application 42 comprises a compression module 52 in order to compress the video data recorded in the memory 38 and a module for transferring 54 data to the server 16 by the intermediary of the transmission component 47.
The context module 48 comprises a user sub-module 56 able to collect the profile data of the user, optionally entered directly by the user by means of the man-machine interface 46 or pre-recorded by a system manger, as well as a spatiotemporal sub-module 58 able to collect the spatiotemporal data relating to the video capture.
The profile data includes, for example, information on the age of the user, where the user lives, their lifestyle or their cosmetic preferences. It optionally includes an image of the face of the user, advantageously used in a method for identifying the face of the user.
The spatiotemporal data is, for example, the date, time, duration and location of the capture, for example characterized using GPS data. In certain cases, it includes data loaded from remote systems, in particular over the Internet, such as meteorological data. The management module 51 is able to control the activation of each camera 26 and the rate of the video stream captured by each camera 26. It is able to create and store in the memory 38 basic video files comprising the video signal acquired by each camera 26.
The management module 51 is able to be implemented automatically, for example based on temporal information or data from a sensor detecting the presence of the user or as controlled by the user, via the activation control 29.
The compression module 52 is able to compress the video data collected by each camera 26 present in the basic video files. It uses for example compression methods of the ZIP or RAR (Roshal ARchive) type.
The transfer module 54 is configured to implement the transfer of the video data, collected in the memory 38 and compressed by the compression module 52, to the server 16 according to a secure transfer protocol. The transfer protocol is, for example, a protocol according to the HTTPS standard.
During the transfer, the basic video files are advantageously associated with spatiotemporal data and with the user profile data corresponding to the cosmetic routine contained in the basic video files.
The data transfer is preferably carried out using a wireless transmission protocol according to the standards of the group IEEE 802.11 (Wi-Fi) or of the group IEEE 802.15 (Bluetooth) or according to a cellular telecommunication network protocol such as the protocols according to the GSM (Global System for Mobile Communications) standard or according to the UMTS (Universal Mobile Telecommunications System) or 4G technologies.
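As a minimal sketch of the compression and secure routing steps, assuming a ZIP-type method and an HTTPS endpoint (the endpoint URL, the field names and the use of the requests library are assumptions; the patent does not fix them):

```python
import zipfile
from pathlib import Path

import requests  # third-party HTTPS client, assumed for this sketch

# Hypothetical endpoint on the storage server 16.
UPLOAD_URL = "https://storage-server.example.com/api/basic-video-files"

def send_basic_video(video_path: Path, spatiotemporal: dict) -> None:
    """Compress a basic video file (ZIP-type method) and route it, together
    with its associated spatiotemporal data, to the server over HTTPS."""
    archive = video_path.with_suffix(".zip")
    with zipfile.ZipFile(archive, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        zf.write(video_path, arcname=video_path.name)
    with open(archive, "rb") as fh:
        response = requests.post(
            UPLOAD_URL,
            files={"video": fh},
            data=spatiotemporal,  # e.g. {"date": "...", "time": "...", "gps": "..."}
            timeout=60,
        )
    response.raise_for_status()  # fail loudly if the secure transfer was rejected
```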
As shown in figure 3, the activation control 29 is able to activate or deactivate the turning on of the lighting system 24, of the camera 26 and of the central unit 28.
The activation control 29 comprises, for example, an activation button that is accessible to the user. The activation control 29 is able to deactivate the lighting system 24 and the camera 26 after a delay or in the absence of activity during a predetermined period of time.
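The inactivity-based deactivation could be sketched with a resettable timer; the delay value and the deactivation hook below are hypothetical.

```python
import threading

INACTIVITY_DELAY_S = 120.0  # assumed predetermined period of time

def deactivate() -> None:
    # Placeholder: would switch off the lighting system 24 and the camera 26.
    print("Inactivity delay elapsed: lighting and camera deactivated")

timer = threading.Timer(INACTIVITY_DELAY_S, deactivate)
timer.start()

def on_user_activity() -> None:
    """Reset the delay each time user activity is detected."""
    global timer
    timer.cancel()
    timer = threading.Timer(INACTIVITY_DELAY_S, deactivate)
    timer.start()
```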
As shown in figures 1 and 6, the storage server 16 comprises a processor 66 and a memory 68 comprising at least one decompression application 62 able to decompress the video data that was compressed beforehand by the compression module 52, and a processing application 70 that can be run by the processor 66.
The server 16 is able to receive, store and process the data coming from several electronic devices 12 associated with one or a plurality of users. It is able to be controlled by one or a plurality of access units 18. The server 16 is for example a dedicated server or a shared server, of the "cloud" type, that can be accessed remotely by the intermediary of a wired and/or wireless telecommunication network.
As shown in figure 7, the processing application 70 comprises a transmission and control module 72 able to securely access data, processed or not, present on the server 16 in order to allow it to be viewed by a user. Access to the video data is, for example, carried out using the Ethernet protocol via the Internet network. Optionally, this access is protected via a login and a password.
The transmission and control module 72 is also able to route to the server 16 instructions from the access unit 18.
The processing application 70 also comprises a module for sequencing 74 video data coming from the basic video files, a module for associating context data 76 to the sequences performed by the sequencing module 74, and a module for associating keywords 78 of cosmetic interest to each sequence carried out. The processing application 70 advantageously further comprises a module 80 for analyzing sequences.
The sequencing module 74 is able to process the video data of each basic video file in order to suppress the noisy sequences and to select at least one video sequence of cosmetic interest within each basic video file.
This processing is performed automatically by a dedicated software application, and/or manually by an operator of the access unit 18.
The sequencing module 74 is able to create and record in the memory 68 a video sequence file comprising a video sequence of cosmetic interest coming from a basic video file, as well as metadata associated with the sequence.
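The patent does not specify how noisy sequences are detected; one plausible sketch uses OpenCV frame differencing as a motion heuristic (the threshold value is an assumption):

```python
import cv2  # OpenCV, assumed available for this sketch
import numpy as np

MOTION_THRESHOLD = 4.0  # mean absolute pixel difference; heuristic value, assumed

def select_sequences(basic_video: str) -> list[tuple[int, int]]:
    """Return (start_frame, end_frame) spans where motion is present, i.e.
    candidate sequences of cosmetic interest; low-motion spans are treated
    as noisy sequences and suppressed."""
    cap = cv2.VideoCapture(basic_video)
    spans, start, prev, frame_idx = [], None, None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            moving = float(np.mean(cv2.absdiff(gray, prev))) > MOTION_THRESHOLD
            if moving and start is None:
                start = frame_idx                  # a sequence of interest begins
            elif not moving and start is not None:
                spans.append((start, frame_idx))   # the sequence ends
                start = None
        prev = gray
        frame_idx += 1
    if start is not None:
        spans.append((start, frame_idx))
    cap.release()
    return spans
```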
The module for associating context data 76 is configured to associate with each video sequence file, the context data relative to the user and to the video capture.
The module for associating keywords 78 is configured to associate with each video sequence file, at least one keyword of cosmetic interest into metadata.
The keywords of cosmetic interest are chosen in particular from a type of cosmetic gesture (for example, habitual gestures such as rinsing a cleansing product, massaging with a cosmetic care product, applying makeup), a cosmetic operation carried out during the routine (for example, an association and a sequence of chaining various cosmetic products or recourse to accessories), a portion of the body affected by the routine (for example, the nose, the eyelids, the lips) or a type of product associated with the routine (for example, a makeup product, a care product or a commercial name of the product used). This association is carried out automatically, for example on the basis of the spatiotemporal data and/or the user profile associated with the basic video files, and/or manually by an operator by means of the access unit 18.
In the case where the association is carried out automatically, an algorithm for processing images and/or recognizing shapes is implemented in association with a computer index. The computer index in particular contains a library of images and of reference image sequences associated with keywords of cosmetic interest listing, in particular, types of cosmetic gestures, typical cosmetic operations, parts of the human body or types of cosmetic products.
Such an algorithm, in association with the computer index, makes it possible to identify characteristics in an image, for example a part of the body such as a face, or a type of cosmetic product. Advantageously, such an algorithm also makes it possible, in association with the computer index, to identify characteristics over successive image sequences, making it possible to identify a gesture or a cosmetic operation, for example an operation of applying makeup, an operation of applying a cleansing product or an operation of massaging with a cosmetic care product.
After identification, the images or sequences of images identified are associated with the keyword of cosmetic interest that refers to the characteristics of the image or of the reference sequence of images that allowed it to be identified.
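As an illustration of the automatic case, a minimal sketch using OpenCV ORB feature matching against a hypothetical reference library (the index contents, file names and match threshold are all assumptions, not the patent's stated algorithm):

```python
import cv2  # OpenCV, assumed available for this sketch

# Hypothetical computer index: reference images, each tagged with a keyword
# of cosmetic interest (body part, gesture, operation, product type...).
REFERENCE_INDEX = [
    ("face", "reference_face.png"),
    ("mascara_application", "reference_mascara.png"),
]

def identify_keyword(frame_path: str, min_matches: int = 25) -> str | None:
    """Match an image from a sequence against the reference library and
    return the keyword of the best-matching reference, if any."""
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
    _, frame_desc = orb.detectAndCompute(frame, None)
    best_keyword, best_count = None, min_matches
    for keyword, ref_path in REFERENCE_INDEX:
        ref = cv2.imread(ref_path, cv2.IMREAD_GRAYSCALE)
        _, ref_desc = orb.detectAndCompute(ref, None)
        if frame_desc is None or ref_desc is None:
            continue
        count = len(matcher.match(frame_desc, ref_desc))
        if count > best_count:  # more feature matches = closer to the reference
            best_keyword, best_count = keyword, count
    return best_keyword
```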
In the case where the association is carried out manually, the operator views the sequence of interest. The module for associating keywords 78 is able to provide the operator with a list of keywords of cosmetic interest to be selected, for example by means of the access unit 18.
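However the keywords are obtained, recording them as metadata of a video sequence file could be sketched as follows; the JSON sidecar format, file name and vocabulary are assumptions, since the patent does not impose a particular metadata container:

```python
import json
from pathlib import Path

def tag_sequence(sequence_file: Path, keywords: list[str], context: dict) -> None:
    """Record the keywords of cosmetic interest and the context data as
    metadata for a video sequence file, here as a JSON sidecar."""
    sidecar = sequence_file.with_suffix(".json")
    sidecar.write_text(json.dumps(
        {"keywords_of_cosmetic_interest": keywords, "context": context},
        indent=2,
    ))

# Hypothetical usage with an assumed file name and keyword vocabulary.
tag_sequence(
    Path("sequence_0001.mp4"),
    keywords=["mascara_application", "eyelashes"],
    context={"date": "2015-06-30", "location": "48.85,2.35", "user_id": "u42"},
)
```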
The analysis module 80, when it is present, comprises for example a sub-module 92 for determining sensory parameters of a sequence of cosmetic interest, a sub-module 94 for extracting movements in a sequence of cosmetic interest and possibly a sub-module 96 for video reconstruction.
The analysis module 80 is able to determine at least one cosmetic routine parameter associated with the keyword of cosmetic interest.
The cosmetic routine parameters are, for example, sensory parameters and/or movement parameters.
The sub-module 92 for determining sensory parameters is able to determine sensory parameters of the user, using an area of the body that is visible on the sequence. These sensory parameters are for example a dimensional modification of an area of the body, a change in color or temperature of an area of the body.
The sub-module 94 for extracting movements is configured to position, in each sequence of cosmetic interest, virtual sensors on areas of the body of the user, with a sensor being represented by at least one given point of movement of the area of the body, and to record with a time step the movement parameters describing the movements of these virtual sensors, in particular the positions and the trajectories of the sensors over time, in a movement database 116 located in the memory 68.
The movement parameters in particular describe the characteristics of the movement of a member of the user, for example the displacement of a forearm during the cosmetic routine, based on the position data of each virtual sensor determined in the sequence.
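A minimal sketch of the virtual-sensor extraction, using Lucas-Kanade optical flow from OpenCV as a stand-in for the sub-module 94 (the tracking method and the point count are assumptions; the patent does not state how the sensors are tracked):

```python
import cv2  # OpenCV, assumed available for this sketch

def extract_movements(sequence_path: str, time_step_s: float = 0.5) -> list[tuple]:
    """Track 'virtual sensors' (trackable points on the visible body areas)
    through a sequence of cosmetic interest and record (time, sensor_id, x, y)
    rows at each time step, ready for the movement database 116."""
    cap = cv2.VideoCapture(sequence_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    stride = max(1, int(round(fps * time_step_s)))
    ok, frame = cap.read()
    if not ok:
        return []
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Initial sensor placement: corner-like points (number of points assumed).
    sensors = cv2.goodFeaturesToTrack(prev, maxCorners=20, qualityLevel=0.3, minDistance=7)
    rows, frame_idx = [], 0
    while sensors is not None:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        sensors, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, sensors, None)
        frame_idx += 1
        if frame_idx % stride == 0:
            t = frame_idx / fps
            for sensor_id, (point, ok_pt) in enumerate(zip(sensors.reshape(-1, 2), status.ravel())):
                if ok_pt:  # the sensor was successfully tracked in this frame
                    rows.append((t, sensor_id, float(point[0]), float(point[1])))
        prev = gray
    cap.release()
    return rows
```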
The sub-module 96 for video reconstruction is configured to reconstruct on a new background each sequence of cosmetic interest and to record the reconstructed video in a reconstructed file in the memory 68. This reconstructed file can optionally be viewed by an operator by means of the access unit 18.
The sub-module 96 for video reconstruction is able to delimit on each image constituting the video at least one area comprising data from the basic video file, in particular in the areas of the body of the user that implement the cosmetic routine, and at least one background area, wherein the data of the basic video file have been replaced with a reconstructed background, for example monochrome.
The sub-module 96 for video reconstruction is for example able to retain solely the data that represents the face and/or a member of the user, by replacing the other data, in particular that illustrating the background behind the user with a reconstructed background.
The reconstructed video sequence contained in the reconstructed file then comprises the movements of the predefined areas of the body of the user, without the parasitic background of the initial video. The analysis of the cosmetic routine of the user is as such facilitated.
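The reconstruction itself could be sketched with background subtraction, replacing everything outside the moving body areas with a monochrome background; the MOG2 subtractor below is an assumption, not the patent's stated technique:

```python
import cv2  # OpenCV, assumed available for this sketch
import numpy as np

def reconstruct_video(sequence_path: str, output_path: str) -> None:
    """Re-render a sequence of cosmetic interest on a monochrome background,
    keeping only the moving body areas (a background-subtraction stand-in
    for the sub-module 96)."""
    cap = cv2.VideoCapture(sequence_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(output_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    background = np.full((h, w, 3), 128, dtype=np.uint8)  # monochrome grey background
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame) > 0  # foreground = areas of interest
        writer.write(np.where(mask[..., None], frame, background))
    cap.release()
    writer.release()
```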
The access unit 18, which can be seen in figure 1, is a man-machine interface able to be used by an operator to send instructions to the server 16, for example instructions to control the execution of the various modules of the processing application 70. Advantageously, the access unit 18 comprises a processor, a memory and software applications. The access unit 18 is for example a computer.
The automated system 20, which can be seen in figure 1, is able to receive the data of the reconstructed file via the intermediary of a transfer support, for example a USB key, or a telecommunication network, and to use this data to reproduce the movements of the predefined areas of the body carried out by the user during their cosmetic routine.
The automated system 20 is, for example, a robot.
A process for analyzing cosmetic routines that uses the system 10 shall now be described.
During an initial step 200 for entering the user profile, which can be seen in figure 8, the user enters the context data concerning their profile, in particular information such as their age, lifestyle or cosmetic preferences, using the man-machine interface 46. This information is collected by the user sub-module 56 of the context module 48, which records it in the memory 38. This user profile information can also be prerecorded by a manager of the system before remitting the equipment to the user.
In step 210, in preparation for the implementation of a cosmetic routine, the user positions herself opposite the electronic device 12 and uses the activation control 29 to start the lighting system 24, the camera 26 and the central unit 28. Optionally, if several users are likely to use the device 12, the user selects her user profile.
Alternatively, the presence of the user is detected automatically by the activation control 29.
In step 220, the user places herself facing the mirror 22, and performs her cosmetic routine. For example, the user performs a cosmetic routine for applying a cosmetic product on a portion of her body, for example mascara on her eyelashes.
Each camera 26 films the scene of the cosmetic routine and generates a video signal. The management module 51 creates basic video files based on the video signal and records these basic video files in the memory 38 of the device 12.
In parallel, the spatiotemporal sub-module 58 collects the spatiotemporal data relating to the video capture, for example the date, the time and the location of the capture, and records them in the memory 38.
Then, during a compression step 230, the video data contained in each basic video file and optionally the context data associated with the video data is compressed via the compression module 52.
Then, during a routing step 240, the basic video files are routed according to a secure transfer protocol implemented by the transfer module 54 to the server 16, then recorded in the memory 68 of the server 16.
During a decompression step 250, the video data is decompressed by the decompression application 62 of the server 16. The server 16 adds the basic video files to a video file database by associating in the database the basic video files with at least one piece of context data concerning each one of the basic video files.
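A minimal sketch of this video file database, assuming an SQLite store (the patent does not specify the database technology; the schema and column names are hypothetical):

```python
import sqlite3

# Hypothetical schema for the video file database on the server 16.
conn = sqlite3.connect("video_files.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS basic_video_files (
        id INTEGER PRIMARY KEY,
        file_path TEXT NOT NULL,
        user_id TEXT,
        capture_date TEXT,
        capture_time TEXT,
        capture_duration_s REAL,
        gps_location TEXT
    )
""")
# Each basic video file is associated with at least one piece of context data.
conn.execute(
    "INSERT INTO basic_video_files"
    " (file_path, user_id, capture_date, capture_time, capture_duration_s, gps_location)"
    " VALUES (?, ?, ?, ?, ?, ?)",
    ("videos/routine_cam1.mp4", "u42", "2015-06-30", "07:45", 312.0, "48.85,2.35"),
)
conn.commit()
```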
A sequencing step 270 is then implemented. The video data of the basic video files are divided into sequences by the sequencing module 74, automatically or by an operator as described hereinabove.
The sequencing module 74 suppresses the noisy sequences and selects in the video the sequences of cosmetic interest. The sequencing module associates with each sequence of cosmetic interest a video sequence file comprising the video data of the sequence of cosmetic interest and at least one piece of context data.
Then, an associating step 280 is implemented. During this step the module for associating context data 76 associates with each video sequence file, the context data relative to the user and to the video capture.
The module for associating keywords 78 then associates with each video sequence file, at least one keyword of cosmetic interest into metadata. The keywords are chosen from a type of cosmetic gesture, a cosmetic operation performed during the routine, a portion of the body affected by the routine or a type of product associated with the routine.
Each sequence of cosmetic interest is as such extracted, connected to its context via the intermediary of context data and associated with keywords of cosmetic interest.
A database of video sequences of cosmetic interest, accurately referenced, can be established very simply, in the natural context of users implementing cosmetic routines and recording these routines using the mobile electronic device 12, which is hardly intrusive.
It is then simple to isolate in this database of video sequences of cosmetic interest, one or a plurality of sequences relating to a particular context and/or cosmetic interest, so that they can be analyzed.
When the device 12 has at least two cameras 26 filming at different depths of field, it is furthermore possible to obtain two separate video sequences of the same cosmetic routine, which enriches the information available for later analysis.
Advantageously, an analysis step 290 is then implemented, based on at least one selected video sequence. During this step, the analysis module proceeds with an analysis of the video sequence file. This analysis step 290 comprises at least one phase chosen from a first phase 300 of determining sensory parameters, a second phase 310 of extracting and recording movements, and a third phase 320 of video reconstruction.
As shown in figure 9, during the phase 300, the sub-module 92 for determining sensory parameters determines the sensory parameters of the user, such as a dimensional modification of a body area, or a change in the color or temperature of a body area.
During the phase 310, which can be run in parallel with the phase 300, the sub-module 94 for extracting movements positions, in each sequence of cosmetic interest, virtual sensors on the body areas of the user, and records, at a fixed time step, the movement parameters describing the movements of these sensors in a movement database 116 located in the memory 68.
For example, a forearm is modeled by two sensors positioned at the two ends of the forearm, and the positions of these two sensors are recorded every half-second in the movement database 116.
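A sketch of this recording, where the pose estimator behind the virtual sensors is left as a stub (the description does not name one) and the movement database 116 is assumed to be a SQLite table:

    import sqlite3

    import cv2

    def detect_forearm_endpoints(frame):
        """Stub for a pose estimator returning the (x, y) pixel positions of the
        elbow and the wrist, i.e. the two virtual sensors of the forearm."""
        raise NotImplementedError

    def record_movements(video_path: str, db_path: str, step_s: float = 0.5) -> None:
        """Sample the two virtual sensors every step_s seconds into the movement database."""
        con = sqlite3.connect(db_path)
        con.execute("CREATE TABLE IF NOT EXISTS movements "
                    "(t REAL, sensor TEXT, x REAL, y REAL)")
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
        stride = max(1, int(step_s * fps))  # frames between two samples
        i = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if i % stride == 0:
                elbow, wrist = detect_forearm_endpoints(frame)
                t = i / fps
                con.execute("INSERT INTO movements VALUES (?, 'elbow', ?, ?)", (t, *elbow))
                con.execute("INSERT INTO movements VALUES (?, 'wrist', ?, ?)", (t, *wrist))
            i += 1
        cap.release()
        con.commit()
        con.close()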
During the phase 320, which can be seen in figure 8, the sub-module 96 for reconstruction reconstructs each sequence of cosmetic interest on a new background using the movement database 116, and records it in a reconstructed file 120 in the memory 68. This reconstructed file 120 is optionally viewed by an operator by means of the access unit 18.
It is thus possible to measure, simply, reliably and reproducibly, at least one parameter of cosmetic interest, for example a displacement, in particular of a cosmetic applicator, a frequency of repetition of a movement, a duration of a movement, etc., based on actual cosmetic routines performed by users and collected by an electronic device 12.
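To make this concrete, a sketch of how such parameters could be derived from the movement database of the previous phase; the stroke-frequency estimate (counting direction reversals along one axis) is a crude assumption for illustration, not the method of the description:

    import math
    import sqlite3

    def wrist_parameters(db_path: str) -> dict | None:
        """Derive parameters of cosmetic interest from the recorded wrist positions:
        total path length, movement duration, and an estimated stroke frequency."""
        con = sqlite3.connect(db_path)
        rows = con.execute("SELECT t, x, y FROM movements "
                           "WHERE sensor = 'wrist' ORDER BY t").fetchall()
        con.close()
        if len(rows) < 3:
            return None
        path = sum(math.dist(rows[i][1:], rows[i - 1][1:]) for i in range(1, len(rows)))
        duration = rows[-1][0] - rows[0][0]
        # each up-down stroke cycle produces two reversals of the y direction
        reversals = sum(
            1 for i in range(1, len(rows) - 1)
            if (rows[i][2] - rows[i - 1][2]) * (rows[i + 1][2] - rows[i][2]) < 0)
        return {"path_px": path,
                "duration_s": duration,
                "strokes_per_s": reversals / (2 * duration)}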
The or each parameter of cosmetic interest can be used after the analysis to compare cosmetic routines, and to optimize the cosmetic products and/or the cosmetic tools needed to implement the routine.
Optionally, during a step 340, the data of the file 120 is transferred to an automated system 20 via a transfer medium or a telecommunication network.
The automated system 20 uses this data, for example, to reproduce the movements performed by the user during her cosmetic routine.
In an advantageous embodiment, the automated system 20 is a robot comprising a mobile arm, able to apply a cosmetic product on a surface representing a body surface of a user, which can be displaced using a programmable automaton. The automaton displacing the arm is programmed on the basis of the parameter or parameters of cosmetic interest determined by the analysis module 80 of the processing application 70, in order to accurately reproduce a displacement observed during one or more cosmetic routines implemented by users.
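A sketch of such a replay, with a placeholder arm interface standing in for the programmable automaton; the real control interface, and the mapping from image pixels to the arm workspace, are not specified in the description:

    import sqlite3

    class ApplicatorArm:
        """Placeholder for the programmable automaton driving the mobile arm."""
        def move_to(self, x: float, y: float) -> None:
            print(f"moving applicator to ({x:.1f}, {y:.1f})")

    def replay_routine(db_path: str, arm: ApplicatorArm) -> None:
        """Drive the arm through the wrist trajectory stored in the movement
        database, so the robot repeats the user's recorded gesture."""
        con = sqlite3.connect(db_path)
        waypoints = con.execute("SELECT x, y FROM movements "
                                "WHERE sensor = 'wrist' ORDER BY t").fetchall()
        con.close()
        for x, y in waypoints:
            arm.move_to(x, y)  # a real arm would also respect the recorded timing

    replay_routine("routines.db", ApplicatorArm())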
Since each parameter of cosmetic interest results from an accurate analysis of sequences of cosmetic interest as defined hereinabove, the robot reproduces movements close or identical to those performed by an actual user implementing the routine.

Claims

1. - Method for analyzing cosmetic routines of users, the method comprising the following steps:
- capture and recording by an electronic device (12) of at least one video including a cosmetic routine of a user and spatiotemporal data associated with the cosmetic routine,
- routing of the video to a storage server (16),
- sequencing of the video by a processing application (70) in order to select, in the video, at least one sequence of cosmetic interest, and creation of a video sequence file on the basis of each sequence of cosmetic interest,
- association of at least one keyword of cosmetic interest with each video sequence file.
2. - Method according to claim 1, wherein the electronic device (12) comprises a mirror (22) and at least one camera (26) suitable for filming a user of the mirror (22), the video being recorded by the camera (26).
3. - Method according to claim 2, wherein the electronic device (12) comprises at least two cameras (26) suitable for recording, respectively, a general view and a detail view of the user, the method comprising the simultaneous capture by each camera (26) of two videos including the same cosmetic routine.
4. - Method according to any one of the previous claims, wherein the step of associating keywords comprises the inscription of at least one keyword of cosmetic interest into the metadata of each video sequence file.
5. - Method according to any one of the previous claims, wherein the keyword of cosmetic interest is chosen from a type of cosmetic action, a cosmetic operation performed during the routine, a portion of the body affected by the routine, and a type of product associated with the routine.
6. - Method according to any one of the previous claims, comprising a step of associating at least one context data item with each video sequence, the context data being chosen from:
- spatiotemporal data recorded by the electronic device (12), in particular a date and/or time of capture, a capture length, a geographic location of the capture, and
- user and/or device identification data.
7. - Method according to any one of the previous claims, wherein the step of associating keywords of cosmetic interest is at least partially automated.
8. - Method according to any one of the previous claims, wherein the routing of video data involves the use of a secure transmission protocol between the electronic device (12) and the storage server (16).
9. - Method according to any one of the previous claims, wherein the video data is compressed by the electronic device (12) before being sent to the storage server (16) and is then decompressed by the server (16).
10. - Method according to any one of the previous claims, including the analysis of at least one video sequence in order to deduce at least one parameter of the cosmetic routine associated with the keyword of cosmetic interest.
11. - Method according to claim 10, wherein the step of analysis of a video sequence file comprises a phase of extracting a movement of at least one body part of the user performed in the video sequence, followed by a phase of recording parameters of cosmetic routines characteristic of the movement performed, in a movement database (116).
12. - Method according to claim 11, wherein the or each cosmetic routine parameter is exported to an automated system (20) capable of reproducing the sequence of cosmetic interest.
13. - Method according to any one of claims 10 to 12, wherein the analysis step comprises a video reconstruction phase in order to filter, on each image of the video, at least one area of interest and at least one background, the cosmetic routine parameter being extracted from the or each area of interest.
14. - Method according to any one of claims 10 to 13, wherein the video analysis step comprises a phase of determining sensory parameters of the user, such as a dimensional modification of a body area or a change in color or temperature of a body area.
15. - Method according to any one of the previous claims, wherein the step of selecting sequences of cosmetic interest includes a phase of removing noisy sequences.
16. - System for analyzing cosmetic routines of a user, comprising:
- an electronic device (12) for capture and recording of at least one video including a cosmetic routine of a user and spatiotemporal data associated with the cosmetic routine,
- a storage server (16) to which the video data is routed,
- a processing application (70) suitable for selecting, in the video, sequences of cosmetic interest, and creating a video sequence file from each sequence of interest, the processing application (70) being suitable for associating at least one keyword of cosmetic interest with each video sequence.
PCT/EP2015/064887 2014-06-30 2015-06-30 Method for analyzing cosmetic routines of users and associated system WO2016001248A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR1456140 2014-06-30
FR1456140A FR3023110B1 (en) 2014-06-30 2014-06-30 METHOD FOR ANALYZING USER COSMETIC ROUTINES AND ASSOCIATED SYSTEM

Publications (1)

Publication Number Publication Date
WO2016001248A1 true WO2016001248A1 (en) 2016-01-07

Family

ID=52450225

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2015/064887 WO2016001248A1 (en) 2014-06-30 2015-06-30 Method for analyzing cosmetic routines of users and associated system

Country Status (2)

Country Link
FR (1) FR3023110B1 (en)
WO (1) WO2016001248A1 (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120257048A1 (en) * 2009-12-17 2012-10-11 Canon Kabushiki Kaisha Video information processing method and video information processing apparatus

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LUDOVIC FERY: "Visitez le nouveau centre de recherche capillaire de L'Oréal", 30 March 2012 (2012-03-30), pages 1 - 3, XP002738907, Retrieved from the Internet <URL:http://www.industrie-techno.com/visitez-le-nouveau-centre-de-recherche-capillaire-de-l-oreal.13003> [retrieved on 20150423] *
THIBAULT LE PELLEC, 28 April 2012 (2012-04-28), pages 1 - 2, XP002738908, Retrieved from the Internet <URL:http://www.meilleurcoiffeur.com/expert-zone/loreal-ouvre-un-centre-de-recherche-capillaire-a-st-ouen.html> [retrieved on 20150423] *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11257142B2 (en) 2018-09-19 2022-02-22 Perfect Mobile Corp. Systems and methods for virtual application of cosmetic products based on facial identification and corresponding makeup information
US11682067B2 (en) 2018-09-19 2023-06-20 Perfect Mobile Corp. Systems and methods for virtual application of cosmetic products based on facial identification and corresponding makeup information
US11093749B2 (en) * 2018-12-20 2021-08-17 L'oreal Analysis and feedback system for personal care routines
US11756298B2 (en) 2018-12-20 2023-09-12 L'oreal Analysis and feedback system for personal care routines
WO2020252498A1 (en) * 2019-06-10 2020-12-17 The Procter & Gamble Company Method of generating user feedback information to enhance product use results
JP2022535823A (en) * 2019-06-10 2022-08-10 ザ プロクター アンド ギャンブル カンパニー How to generate user feedback information to improve product usage results
US11544764B2 (en) 2019-06-10 2023-01-03 The Procter & Gamble Company Method of generating user feedback information to enhance product use results
JP7319393B2 (en) 2019-06-10 2023-08-01 ザ プロクター アンド ギャンブル カンパニー How to generate user feedback information to improve product usage results

Also Published As

Publication number Publication date
FR3023110B1 (en) 2017-10-13
FR3023110A1 (en) 2016-01-01

Similar Documents

Publication Publication Date Title
US10901508B2 (en) Fused electroencephalogram and machine learning for precognitive brain-computer interface for computer control
US11122206B2 (en) Personal care device with camera
US9894266B2 (en) Cognitive recording and sharing
US9399290B2 (en) Enhancing sensor data by coordinating and/or correlating data attributes
US10157324B2 (en) Systems and methods of updating user identifiers in an image-sharing environment
WO2011074206A1 (en) Video information processing method and video information processing apparatus
US10045076B2 (en) Entertainment content ratings system based on physical expressions of a spectator to scenes of the content
CN104007807B (en) Obtain the method and electronic equipment of user terminal use information
US11197639B2 (en) Diagnosis using a digital oral device
CN112785278B (en) 5G intelligent mobile ward round method and system based on edge cloud cooperation
WO2016001248A1 (en) Method for analyzing cosmetic routines of users and associated system
EP3785221A2 (en) Evaluation method for the hair transplant process using the image processing and robotic technologies and the system of the method
EP3566831A1 (en) Intelligent hood and control method therefor, and terminal
CN106559631A (en) Method for processing video frequency and device
US11497455B2 (en) Personalized monitoring of injury rehabilitation through mobile device imaging
KR20120092889A (en) Skin care management system and method thereof, and portable device supporting the same
CN105528077A (en) Theme setting method and device
US11769077B2 (en) Methods and systems to characterize the user of a personal care device
US11157549B2 (en) Emotional experience metadata on recorded images
WO2020227436A1 (en) Personal care device with camera
WO2015087323A1 (en) Emotion based 3d visual effects
AU2014361862A1 (en) Data-integrated interface and methods of reviewing electromyography and audio data
KR101741905B1 (en) System for providing razor through analyzing user's utilization pattern
Celestrini et al. Applying remote health monitoring to understand users’ QoE in multisensory applications in real-time
CN112598745B (en) Method and device for determining person-goods association event

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15734629

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15734629

Country of ref document: EP

Kind code of ref document: A1