CN110119868B - System and method for generating and analyzing user behavior indexes in makeup consultation conference


Info

Publication number
CN110119868B
Authority
CN
China
Prior art keywords
make-up
client device
user
generating
user behavior
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811599221.XA
Other languages
Chinese (zh)
Other versions
CN110119868A (en)
Inventor
李宛娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Publication of CN110119868A
Application granted
Publication of CN110119868B
Legal status: Active (current)
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00: Administration; Management
    • G06Q10/06: Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063: Operations research, analysis or management
    • G06Q10/0639: Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06393: Score-carding, benchmarking or key performance indicator [KPI] analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00: Commerce
    • G06Q30/02: Marketing; Price estimation or determination; Fundraising
    • G06Q30/0281: Customer communication at a business location, e.g. providing product or service information, consulting
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/14: Systems for two-way working
    • H04N7/15: Conference systems

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Marketing (AREA)
  • Finance (AREA)
  • Game Theory and Decision Science (AREA)
  • Accounting & Taxation (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Tourism & Hospitality (AREA)
  • Operations Research (AREA)
  • Image Analysis (AREA)
  • Telephonic Communication Services (AREA)

Abstract

A system and method for generating and analyzing user behavior metrics in a makeup consultation session includes detecting, with a server device, initiation of a videoconference session between a consultation device used by a makeup professional and a client device used by a user to receive makeup consultation from the makeup professional. The server device extracts data from the client device during the videoconference session, the data describing the user's behavior on the client device with respect to a plurality of suggested makeup effects transmitted to the client device by the makeup professional through the consultation device. The server device applies a plurality of weight values to the extracted data, generates one or more hesitation metrics based on the weight values, and causes the one or more hesitation metrics to be displayed in a user interface on the consultation device.

Description

System and method for generating and analyzing user behavior indexes in makeup consultation conference
Technical Field
The present disclosure relates generally to makeup consultation and, more particularly, to systems and methods for generating and analyzing user behavior metrics during a makeup consultation session.
Background
Although makeup professionals often assist individuals in applying cosmetics to achieve a desired look, it is sometimes difficult for individuals to provide feedback on the exact combination of cosmetics they currently prefer, especially when there may be only nuanced differences among the cosmetics recommended by the makeup professional. Accordingly, there is a need for an improved platform for providing feedback to makeup professionals to facilitate the recommendation and application of cosmetics during a makeup consultation.
Disclosure of Invention
According to one embodiment, a method for generating and analyzing user behavior metrics during a makeup consultation session includes detecting, with a server device, initiation of a videoconference session between a consultation device used by a makeup professional and a client device used by a user to receive makeup consultation from the makeup professional. The server device extracts data from the client device during the videoconference session, the data describing the user's behavior on the client device with respect to a plurality of suggested makeup effects transmitted to the client device by the makeup professional through the consultation device. The server device applies a plurality of weight values to the extracted data, generates one or more hesitation metrics based on the weight values, and causes the one or more hesitation metrics to be displayed in a user interface on the consultation device.
Preferably, extracting the data from the client device includes extracting data for a predetermined grouping of target events corresponding to user behavior on the client device.
Preferably, applying the weight values to the extracted data comprises applying a predetermined weight value to each of the target events in the grouping of target events.
Preferably, the grouping of target events comprises at least one of: a makeup effect category selected by the user on the client device; a change in an attribute of the selected makeup effect category made by the user on the client device; and a makeup effect category removed by the user on the client device.
Preferably, each makeup effect category corresponds to a makeup effect for a different facial feature.
Preferably, the change in the attribute of the selected makeup effect category comprises a change in the color of the makeup effect category.
Preferably, the change in the attribute of the selected makeup effect category comprises an enhancement or a reduction of the makeup effect.
Preferably, causing the one or more hesitation metrics to be displayed in the user interface on the consultation device comprises: ranking the one or more hesitation metrics; and causing the ranked one or more hesitation metrics to be displayed in the user interface on the consultation device.
Another embodiment is a system for generating and analyzing user behavior metrics during a makeup consultation session, including a memory storing a plurality of instructions and a processor coupled to the memory. The processor is configured by the instructions to detect initiation of a videoconference session between a consultation device used by a makeup professional and a client device used by a user to receive makeup consultation from the makeup professional. The processor is also configured to extract data from the client device during the videoconference session, the data describing the user's behavior on the client device with respect to a plurality of suggested makeup effects transmitted to the client device by the makeup professional through the consultation device. The processor is further configured to apply a plurality of weight values to the extracted data, generate one or more hesitation metrics based on the weight values, and cause the one or more hesitation metrics to be displayed in a user interface on the consultation device.
Preferably, the processor extracts the data from the client device by extracting data for a predetermined grouping of target events corresponding to user behavior on the client device.
Preferably, the processor applies the weight values to the extracted data by applying a predetermined weight value to each of the target events in the grouping of target events.
Preferably, the grouping of target events comprises at least one of: a makeup effect category selected by the user on the client device; a change in an attribute of the selected makeup effect category made by the user on the client device; and a makeup effect category removed by the user on the client device.
Preferably, each makeup effect category corresponds to a makeup effect for a different facial feature.
Preferably, the change in the attribute of the selected makeup effect category comprises a change in the color of the makeup effect category.
Preferably, the processor causes the one or more hesitation metrics to be displayed in the user interface on the consultation device by: ranking the one or more hesitation metrics; and causing the ranked one or more hesitation metrics to be displayed in the user interface on the consultation device.
Another embodiment is a non-transitory computer-readable storage medium for generating and analyzing user behavior metrics during a makeup consultation session, the storage medium storing a plurality of instructions executable by a computing device having a processor. When executed by the processor, the instructions cause the computing device to detect initiation of a videoconference session between a consultation device used by a makeup professional and a client device used by a user to receive makeup consultation from the makeup professional. The instructions further cause the computing device to extract data from the client device during the videoconference session, the data describing the user's behavior on the client device with respect to a plurality of suggested makeup effects transmitted to the client device by the makeup professional through the consultation device. The instructions further cause the computing device to apply a plurality of weight values to the extracted data, generate one or more hesitation metrics based on the weight values, and cause the one or more hesitation metrics to be displayed in a user interface on the consultation device.
Preferably, the grouping of target events comprises at least one of: a makeup effect category selected by the user on the client device; a change in an attribute of the selected makeup effect category made by the user on the client device; and a makeup effect category removed by the user on the client device.
Preferably, each makeup effect category corresponds to a makeup effect for a different facial feature.
Preferably, the change in the attribute of the selected makeup effect category comprises a change in the color of the makeup effect category.
For a further understanding of the nature and technical aspects of the present invention, reference should be made to the following detailed description and the accompanying drawings, which are provided for reference and illustration only and are not intended to limit the invention.
Drawings
Various embodiments of the present application may be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present application. Furthermore, in the drawings, like reference numerals designate corresponding parts throughout the several views.
Fig. 1 is a block diagram of a networked environment for generating and analyzing user behavior metrics during a makeup consultation session according to various embodiments of the present disclosure.
Fig. 2 is a schematic diagram of the computing devices shown in Fig. 1, according to various embodiments of the present disclosure.
Fig. 3 is a top-level flowchart illustrating an example of functionality implemented as part of the server device of Fig. 1 for generating and analyzing user behavior metrics during a makeup consultation session according to various embodiments of the present disclosure.
Fig. 4 is a diagram illustrating an example of a user selecting a makeup effect in a user interface provided on a display of the client device of Fig. 1, according to various embodiments of the present disclosure.
Fig. 5 is a diagram illustrating an example of a user adjusting an attribute of a makeup effect in a user interface provided on a display of the client device of Fig. 1, according to various embodiments of the present disclosure.
Fig. 6 is a diagram illustrating an example of a user adjusting another attribute of a makeup effect in a user interface provided on a display of the client device, according to various embodiments of the present disclosure.
Fig. 7 is a diagram illustrating the calculation of hesitation metrics for lipstick and eye shadow based on target events involving the user interfaces of Figs. 4-6, according to various embodiments of the present disclosure.
Fig. 8 is a diagram illustrating an example of a user removing a makeup effect in a user interface provided on a display of the client device of Fig. 1, according to various embodiments of the present disclosure.
Fig. 9 is a diagram illustrating the calculation of hesitation metrics for lipstick and eye shadow based on target events involving the user interfaces of Figs. 4-8, according to various embodiments of the present disclosure.
Fig. 10 is a diagram illustrating the calculation of color-related hesitation metrics based on target events involving the user interfaces of Figs. 5 and 6, according to various embodiments of the present disclosure.
Detailed Description
While makeup professionals often assist individuals in applying cosmetics to achieve a desired look, it is sometimes difficult for individuals to provide feedback on the exact combination of cosmetics they currently prefer, especially when there may be only nuanced differences among the recommended cosmetics. Furthermore, individuals attending makeup consultation sessions may not always feel comfortable providing feedback directly to the makeup professional. The present invention addresses these shortcomings of conventional makeup consultation platforms by analyzing the user's behavior with respect to the makeup effects suggested by the makeup professional during a makeup consultation session. Generating and analyzing these user behavior metrics helps the makeup professional suggest makeup effects to the user more effectively, thereby improving the quality and effectiveness of the makeup consultation session.
A description of a networked environment for implementing the techniques disclosed herein is given first, followed by a discussion of the operation of the components within the environment. Fig. 1 is a block diagram of a networked environment for generating and analyzing user behavior metrics during a makeup consultation session. The networked environment includes a server device 102, which may comprise a server computer or any other system providing computing capability. Alternatively, the server device 102 may employ a plurality of computing devices arranged, for example, in one or more server banks, computer banks, or other configurations. Such computing devices may be located in a single installation or may be distributed among many different geographical locations.
A user behavior analyzer 104 executes on a processor of the server device 102 and includes a conference monitor 106, a data extractor 108, a metric generator 110, and a user interface (UI) component 112. The conference monitor 106 is configured to detect the initiation of a videoconference session between a client device 122 used by a user and a consultation device 126 used by a makeup professional for conducting a makeup consultation session.
The data extractor 108 is configured to extract data from the client device 122 during the videoconference session, wherein the extracted data describes the user's behavior on the client device 122 with respect to suggested makeup effects transmitted by the makeup professional through the consultation device 126. The metric generator 110 is configured to apply a plurality of predetermined weight values to the extracted data and to generate one or more hesitation metrics. In the context of the present disclosure, a hesitation metric is a metric that generally reflects the user's preference (or dislike) and priorities with respect to the makeup effects suggested by the makeup professional. As described in more detail below, the weight values may correspond to various target events (or functions of the makeup application) corresponding to user behavior on the client device.
Target events may include, for example, a makeup effect category selected by the user on the client device 122, a change made by the user in an attribute of the selected makeup effect category, and/or a makeup effect category removed by the user. Each of these actions may be performed using a particular function of the application. Thus, for some embodiments, the weight values may be predetermined by the software manufacturer and may vary according to the function used. These weight values 117 may be retrieved by the metric generator 110 from a database 114, which stores definitions of the target events 116 and the corresponding weight values 117. The UI component 112 causes one or more hesitation metrics to be displayed in a user interface on the consultation device 126.
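By way of illustration only, the target event definitions 116 and weight values 117 stored in the database 114 can be modeled as a small lookup structure. The following Python sketch is a non-limiting assumption about how such definitions might be represented; the event names and weight values shown are hypothetical and are not prescribed by this disclosure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TargetEvent:
    """One user action on the client device 122 recorded by the data extractor 108."""
    category: str  # makeup effect category, e.g., "lipstick" or "eye_shadow"
    action: str    # e.g., "select_effect", "change_color", "change_intensity", "remove_effect"

# Hypothetical weight values 117, keyed by action type. In practice these are
# predetermined (e.g., by the software manufacturer) and stored in database 114
# alongside the target event definitions 116.
WEIGHTS = {
    "select_effect": 0.5,
    "change_color": 0.3,
    "change_intensity": 0.2,
}
```

Keying the weights by action type mirrors the description above, where a weight value is tied to the particular application function used to perform the action.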
The client device 122 and the consultation device 126 may be implemented as computing devices such as, but not limited to, smartphones, tablet computing devices, notebook computers, and the like. The server device 102, the client device 122, and the consultation device 126 are communicatively coupled through a network 120, such as the Internet, an intranet, an extranet, a wide area network (WAN), a local area network (LAN), a wired network, a wireless network, or any other suitable network, or any combination of two or more such networks.
A virtual makeup application 124 executes on a processor of the client device 122 and allows the user of the client device 122 to participate in a makeup consultation session with a makeup professional via the consultation device 126. The virtual makeup application 124 is also configured to receive suggested makeup effects from the consultation device 126 and to virtually apply such makeup effects to the facial region of a live video feed or digital image of the user of the client device 122. A consultation service 128 executing on a processor of the consultation device 126 allows the makeup professional to conduct makeup consultation sessions with the user of the client device 122 and to provide suggestions regarding various makeup effects for various facial features of the user.
Fig. 2 is a schematic block diagram of each of the server device 102, the client device 122, and the consultation device 126 of Fig. 1. Each of these computing devices 102, 122, 126 may be embodied in any of a wide variety of wired and/or wireless computing devices, such as a desktop computer, a portable computer, a dedicated server computer, a multiprocessor computing device, a smartphone, a tablet computer, and so forth. As shown in Fig. 2, each of the computing devices 102, 122, 126 includes a memory 214, a processing device 202, a number of input/output interfaces 204, a network interface 206, a display 208, a peripheral interface 211, and a mass storage device 226, each of which is connected by a local data bus 210.
The processing device 202 may include any custom-made or commercially available processor, a central processing unit (CPU) or an auxiliary processor among several processors associated with the computing devices 102, 122, 126, a semiconductor-based microprocessor (in the form of a microchip), a macroprocessor, one or more application-specific integrated circuits (ASICs), a plurality of suitably configured digital logic gates, or other well-known electrical configurations comprising discrete elements, both individually and in various combinations, that coordinate the overall operation of the computing system.
The memory 214 may include any one or a combination of volatile memory components (e.g., random-access memory (RAM) such as DRAM and SRAM) and nonvolatile memory components (e.g., ROM, hard drives, tape, CD-ROM). The memory 214 typically includes a native operating system 216, one or more native applications, emulation systems, or emulation applications for any of a variety of operating systems and/or emulated hardware platforms, emulated operating systems, and the like. For example, the applications may include application-specific software, which may comprise some or all of the components of the computing devices 102, 122, 126 depicted in Fig. 1. In accordance with such embodiments, the components are stored in the memory 214 and executed by the processing device 202, thereby causing the processing device 202 to perform the operations and functions disclosed herein. One of ordinary skill in the art will appreciate that the memory 214 may, and typically will, include other components that are omitted here for brevity. For some embodiments, the components in the computing device 102 may be implemented in hardware and/or software.
The input/output interfaces 204 provide interfaces for the input and output of data. For example, where the computing device 102 comprises a personal computer, these components may interface with one or more user input/output interfaces 204, which may include a keyboard or a mouse, as shown in Fig. 2. The display 208 may comprise a computer monitor, a plasma screen for a PC, a liquid crystal display (LCD) on a handheld device, a touch screen, or another display device.
In the context of the present disclosure, a non-transitory computer-readable medium stores a program for use by or in connection with an instruction execution system, apparatus, or device. More specific examples of the computer-readable medium include, by way of example and without limitation: a portable computer diskette, random-access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM, EEPROM, or flash memory), and a portable compact disc read-only memory (CD-ROM) (optical).
Reference is now made to Fig. 3, which illustrates a top-level flowchart of an example of functionality implemented as part of the server device of Fig. 1 for generating and analyzing user behavior metrics during a makeup consultation session in accordance with various embodiments of the present disclosure. It should be understood that the flowchart 300 of Fig. 3 merely provides an example of the many different types of functional arrangements that may be employed to implement the operation of the server device 102 depicted in Fig. 1. Alternatively, the flowchart 300 of Fig. 3 may be viewed as depicting an example of the steps of a method implemented in the server device 102 according to one or more embodiments.
Although the flowchart 300 of Fig. 3 shows a specific order of execution, it is understood that the order of execution may differ from that which is depicted. For example, the order of execution of two or more blocks may be scrambled relative to the order shown. Also, two or more blocks shown in succession in Fig. 3 may be executed concurrently or with partial concurrence. It is understood that all such variations are within the scope of the present disclosure.
At block 310, the server device 102 detects the initiation of a videoconference session between the consultation device 126 (Fig. 1) used by the makeup professional and the client device 122 (Fig. 1) used by the user to receive makeup consultation from the makeup professional. At block 320, the server device 102 extracts data from the client device 122 during the videoconference session, wherein the data describes the user's behavior on the client device with respect to suggested makeup effects transmitted to the client device 122 by the makeup professional through the consultation device 126.
For some embodiments, the server device 102 extracts data from the client device 122 by extracting data for a predetermined grouping of target events corresponding to user behavior on the client device 122. For some embodiments, the grouping of target events may include a makeup effect category selected by the user on the client device 122, where each makeup effect category corresponds to a makeup effect for a different facial feature. For example, a target event may comprise the user tapping the touchscreen of the client device 122 to select a particular makeup effect suggested by the makeup professional.
The grouping of target events may also include a change made by the user on the client device 122 to an attribute of the selected makeup effect category. For example, a target event may comprise the user tapping the touchscreen of the client device 122 to change the color or shade of a particular makeup effect (e.g., a lipstick) suggested by the makeup professional. The grouping of target events may also include a makeup effect category removed by the user on the client device 122. For example, a target event may comprise the user performing a gesture (e.g., a swipe gesture) to remove a particular makeup effect suggested by the makeup professional. As another example, a target event may comprise the user using user interface tools to enhance or reduce a makeup effect.
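One plausible way for the data extractor 108 to group such raw client events is to tally occurrences per (category, action) pair. The sketch below assumes, for illustration only, that the client device reports events as (category, action) tuples; this reporting format is an assumption rather than a requirement of this disclosure.

```python
from collections import Counter

def group_target_events(events):
    """Tally how many times each (category, action) target event occurred
    during the videoconference session."""
    counts = Counter()
    for category, action in events:
        counts[(category, action)] += 1
    return counts

# Hypothetical event stream mirroring the eye shadow example discussed below:
# three effect selections, three color selections, two intensity adjustments.
events = ([("eye_shadow", "select_effect")] * 3
          + [("eye_shadow", "change_color")] * 3
          + [("eye_shadow", "change_intensity")] * 2)
counts = group_target_events(events)
```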
At block 330, the server device 102 applies the weight values 117 (Fig. 1) to the extracted data and generates one or more hesitation metrics based on the weight values. For some embodiments, the server device 102 applies the weight values to the extracted data by applying a predetermined weight value to each target event in the grouping of target events.
At block 340, the server device 102 causes the one or more hesitation metrics to be displayed in a user interface on the consultation device. For some embodiments, this involves the server device 102 ranking the one or more hesitation metrics and causing the ranked hesitation metrics to be displayed in a user interface on the consultation device 126. Thereafter, the flow in Fig. 3 ends.
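Blocks 330 and 340 can be sketched as a weighted sum per makeup effect category, followed by normalization and ranking. This is a minimal, non-limiting illustration that reuses the Counter-style grouping and the hypothetical WEIGHTS table from the earlier sketches; the disclosure does not mandate this exact formulation.

```python
def category_score(counts, category, weights):
    """Block 330: weighted sum of target-event occurrences for one category."""
    return sum(n * weights.get(action, 0.0)
               for (cat, action), n in counts.items()
               if cat == category)

def hesitation_metrics(counts, categories, weights):
    """Blocks 330-340: normalize per-category scores into percentages,
    then rank them for display on the consultation device 126."""
    scores = {c: category_score(counts, c, weights) for c in categories}
    total = sum(scores.values())
    metrics = {c: (100.0 * s / total if total else 0.0) for c, s in scores.items()}
    return sorted(metrics.items(), key=lambda kv: kv[1], reverse=True)
```

For the hypothetical eye shadow stream tallied above, category_score returns 3 × 0.5 + 3 × 0.3 + 2 × 0.2 = 2.8, which matches the worked example discussed with Fig. 7 below.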
Fig. 4 is a diagram illustrating an example of a user selecting a makeup effect in a user interface 402 provided on a display of the client device 122 of Fig. 1, according to various embodiments of the present disclosure. As shown, the user interface 402 includes a virtual mirror window 404 that depicts a live video feed of the user's facial region. Alternatively, the virtual mirror window 404 may depict a still image of the user's facial region. The user interface 402 also includes a second window 406 that depicts a live video feed of the makeup professional.
The user interface 402 also includes graphical thumbnail representations 408, each corresponding to a particular makeup effect suggested by the makeup professional using the consultation device 126 (Fig. 1). In the example shown, graphical thumbnail representations for different effects (effect #1 through effect #5) are displayed. As further illustrated, effect #1 may correspond to a first makeup effect (e.g., applying a foundation to the facial region), effect #2 may correspond to a second makeup effect (e.g., applying a blush to the facial region), effect #3 may correspond to a third makeup effect (e.g., applying a lipstick), effect #4 may correspond to a fourth makeup effect (e.g., applying an eyeliner), effect #5 may correspond to a fifth makeup effect (e.g., applying a cosmetic to the eyelashes), and so on.
In the example shown, the user selects one of the makeup effects to try on. According to some embodiments, the selection of a makeup effect corresponds to a target event, and the data extractor 108 (Fig. 1) executing in the server device 102 obtains data relating to this event. In addition, this target event, comprising the selection of a makeup effect, may be assigned a value corresponding to a weight value 117 (Fig. 1). The percentages shown in Fig. 4 represent hesitation metrics that are subsequently provided to the makeup professional by the server device 102 to aid in suggesting makeup effects. For example, the selection of effect #1 corresponds to a hesitation metric of 25% with respect to all makeup effect selections made by the user. Likewise, the selection of effect #3 corresponds to a hesitation metric of 50%, the selection of effect #4 to a hesitation metric of 15%, and the selection of effect #5 to a hesitation metric of 10%, in each case with respect to all makeup effect selections made by the user. Note that hesitation metrics are not limited to percentage values and may comprise any type of metric that indicates a level of preference (e.g., an "A" rating versus a "C" rating, or a 5-star rating versus a 3-star rating).
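The percentages in Fig. 4 behave like normalized selection shares. The sketch below assumes, for illustration only, that each effect's hesitation metric here is simply its share of all selection events; the raw selection counts are hypothetical values chosen to reproduce the percentages in Fig. 4.

```python
# Hypothetical selection counts per suggested effect (effect #2 not selected here).
selections = {"effect_1": 5, "effect_3": 10, "effect_4": 3, "effect_5": 2}

total = sum(selections.values())  # = 20
shares = {effect: round(100 * n / total) for effect, n in selections.items()}
print(shares)  # {'effect_1': 25, 'effect_3': 50, 'effect_4': 15, 'effect_5': 10}
```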
Fig. 5 is a diagram illustrating an example of a user adjusting an attribute of a makeup effect in a user interface 502 provided on a display of the client device 122 of Fig. 1, according to various embodiments of the present disclosure. As shown, the user interface 502 includes a virtual mirror window 504 that depicts a live video feed of the user's facial region. Alternatively, the virtual mirror window 504 may depict a still image of the user's facial region. The user interface 502 also includes product data 506, which may include, for example, an image of the product corresponding to the makeup effect, a description of the product, pricing information for the product, and so on.
The user interface 502 also includes user interface controls that allow the user to change an attribute (e.g., the color) of the makeup effect. According to some embodiments, a change made by the user to an attribute of the makeup effect corresponds to a target event, and the data extractor 108 (Fig. 1) executing in the server device 102 obtains data relating to this event. In the example shown, the number of times the user selected color 4 among all color changes is recorded, as is the number of times the user selected color 6. (In the example shown, color 5 was not selected by the user.) Assume in this example that a first target event comprises the selection of a color in the first user interface and is assigned a predetermined weight value of 60%. A second target event comprises a change in an attribute of the makeup effect (e.g., a change in color intensity) and may likewise be assigned a weight value. A predetermined weight value of 20% is assigned in the third user interface to the target event comprising the attribute change.
Fig. 7 illustrates the calculation of hesitation metrics for lipstick and eye shadow based on target events involving the user interfaces of Figs. 4-6. A hesitation metric may be calculated as a function of the weight values and the corresponding target events. As shown, the user tried on the lipstick effect twice (once in color 4 and once in color 6), changed the lip area five times, and changed the intensity once. These target event counts are multiplied by the corresponding weight values. This example assumes that the score of the eye shadow effect is 2.8 (calculated below). On this basis, the hesitation metric for the lipstick effect is calculated while also taking the eye shadow effect into account. The makeup professional may use the hesitation metrics when making suggestions to the user.
For further illustration, consider the following additional example involving the eye shadow effect. Assume the user clicks on the eye shadow effect three times using the user interface of Fig. 4, and that this target event has a predetermined weight value of 50%. Assume the user also clicks three times on a particular eye shadow color (e.g., color 3) using the user interface of Fig. 5, and that this target event has a predetermined weight value of 30%. Finally, assume the user adjusts the intensity of the selected eye shadow twice using the user interface of Fig. 6, and that this target event has a predetermined weight value of 20%. From the occurrences of these target events, the eye shadow score can be calculated using the respective weight values. In this example, the eye shadow score is calculated as follows: eye shadow score = 3 × 0.5 + 3 × 0.3 + 2 × 0.2 = 2.8. Returning to the example of Fig. 7, the target events relating to the lipstick and eye shadow effects are then considered together. In this example, the hesitation metrics indicate to the makeup professional which of the two products the user is more interested in. As shown in Fig. 7, the hesitation metric for the lipstick is 54% and the hesitation metric for the eye shadow is 46%. From these metrics, the makeup professional can infer that the user's interest in the lipstick is greater than the user's interest in the eye shadow. The makeup professional may use these hesitation metrics when making suggestions to the user.
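The arithmetic of this example can be reproduced directly. The eye shadow score follows from the stated counts and weight values; the lipstick score of 3.3 is taken from the total discussed with Fig. 9 below, since its individual weight values are not fully spelled out in the text, and is therefore hard-coded here.

```python
eye_shadow_score = 3 * 0.5 + 3 * 0.3 + 2 * 0.2  # = 2.8, per the stated weights
lipstick_score = 3.3                             # per Fig. 9; components not given

total = eye_shadow_score + lipstick_score        # = 6.1
print(round(100 * lipstick_score / total))       # 54 -> lipstick hesitation metric (%)
print(round(100 * eye_shadow_score / total))     # 46 -> eye shadow hesitation metric (%)
```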
Fig. 8 is a diagram illustrating an example of a user removing a makeup effect in a user interface 802 provided on a display of the client device 122 of Fig. 1, according to various embodiments of the present disclosure. In the example shown, the user removes the makeup effect 808 suggested by the makeup professional. According to some embodiments, the removal of the makeup effect 808 by the user corresponds to a target event, and the data extractor 108 executing in the server device 102 obtains data relating to this event. In addition, this target event, comprising the removal of a makeup effect, may be assigned a value corresponding to a weight value 117 (Fig. 1). This value may then be used to generate one or more hesitation metrics, which are subsequently provided to the makeup professional by the server device 102 to aid in suggesting makeup effects.
Fig. 9 is a diagram illustrating the calculation of hesitation metrics for lipstick and eye shadow based on target events involving the user interfaces of Figs. 4-8, according to various embodiments of the present disclosure. As shown, the hesitation metrics change as a result of the removal of the eye shadow effect. Upon removal of the eye shadow effect (e.g., through a gesture performed by the user), the total score becomes 3.3. The hesitation metric for the eye shadow effect becomes 0%, and the hesitation metric for the lipstick effect becomes 100%. Based on these hesitation metrics, the makeup professional may decide to avoid suggesting eye shadow effects to the user in the future.
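Under the same normalization, removal of an effect can be modeled as zeroing that category's score before renormalizing. A minimal sketch, assuming removal simply excludes the category from further consideration:

```python
scores = {"lipstick": 3.3, "eye_shadow": 2.8}
scores["eye_shadow"] = 0.0       # the user removes the eye shadow effect (Fig. 8)

total = sum(scores.values())     # = 3.3, matching the total score in Fig. 9
metrics = {c: round(100 * s / total) for c, s in scores.items()}
print(metrics)                   # {'lipstick': 100, 'eye_shadow': 0}
```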
Fig. 10 is a diagram illustrating the calculation of color-related hesitation metrics based on target events involving the user interfaces of Figs. 5 and 6, according to various embodiments of the present disclosure. In the example shown, the number of selections made by the user for each color (colors 1-6) is recorded. (In the example shown, color 5 was not selected by the user.) Assume in this example that a first target event comprises the selection of a particular color in the first user interface and is assigned a predetermined weight value of 60%. A second target event comprises a change in an attribute of the makeup effect (e.g., a change in color intensity) and is assigned a predetermined weight value of 40%.
As shown in Fig. 10, the hesitation metric for each color is calculated from the number of times the user selected the color, the number of times the user adjusted the intensity of the color, and the corresponding weight values assigned to each target event. The hesitation metrics may then be used by the makeup professional to facilitate the recommendation of makeup products to the user. Based on the fact that color 6 has the highest hesitation percentage in the example shown in Fig. 10, the makeup professional can use this information to recommend a makeup product (such as a lipstick) in this particular color. Similarly, based on the fact that color 5 has the lowest hesitation percentage in this example, the makeup professional can use this information to avoid suggesting a makeup product in this particular color.
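The per-color metrics of Fig. 10 follow the same pattern using the stated 60%/40% weight values. In the sketch below, the per-color selection and intensity-adjustment counts are hypothetical, chosen only so that color 6 ranks highest and color 5 (never selected) ranks lowest, as in the figure.

```python
SELECT_WEIGHT, INTENSITY_WEIGHT = 0.6, 0.4  # stated weights for the two target events

# Hypothetical per-color counts: (times selected, times intensity adjusted)
color_events = {1: (1, 0), 2: (2, 1), 3: (1, 1), 4: (2, 0), 5: (0, 0), 6: (4, 3)}

scores = {color: sel * SELECT_WEIGHT + adj * INTENSITY_WEIGHT
          for color, (sel, adj) in color_events.items()}
total = sum(scores.values())
hesitation = {color: round(100 * s / total, 1) for color, s in scores.items()}

print(max(hesitation, key=hesitation.get))  # 6 -> color to recommend
print(min(hesitation, key=hesitation.get))  # 5 -> color to avoid suggesting
```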
It should be emphasized that the above-described embodiments of the present disclosure are merely possible examples of implementations set forth for a clear understanding of the principles of the disclosure. Many variations and modifications may be made to the above-described embodiments without departing substantially from the spirit and principles of the invention. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.
The foregoing disclosure describes only preferred embodiments of the present invention and is not intended to limit the scope of the claims. All equivalent technical changes made using the specification and drawings of the present invention are included within the scope of the claims.

Claims (21)

1. A method for generating and analyzing user behavior metrics during a makeup consultation session, implemented in a server device, comprising:
detecting initiation of a videoconference session between a consultation device used by a makeup professional and a client device used by a user to receive makeup consultation from the makeup professional;
extracting data from the client device during the videoconference session, the data describing a behavior of the user on the client device with respect to a plurality of suggested makeup effects transmitted by the makeup professional to the client device through the consultation device;
applying a plurality of weight values to the extracted data;
generating one or more hesitation metrics based on the plurality of weight values; and
causing the one or more hesitation metrics to be displayed in a user interface on the consultation device.
2. The method for generating and analyzing user behavior metrics during a makeup consultation session of claim 1, wherein extracting the data from the client device includes extracting data for a predetermined grouping of a plurality of target events corresponding to user behavior on the client device.
3. The method for generating and analyzing user behavior metrics during a makeup consultation session of claim 2, wherein applying the plurality of weight values to the extracted data includes applying a plurality of predetermined weight values to each of the target events in the grouping of the plurality of target events.
4. The method for generating and analyzing user behavior metrics during a makeup consultation session of claim 3, wherein the grouping of the plurality of target events includes at least one of:
a makeup effect category selected by the user on the client device;
a change made by the user on the client device in an attribute of the selected makeup effect category; and
a makeup effect category removed by the user on the client device.
5. The method for generating and analyzing user behavior metrics during a makeup consultation session of claim 4, wherein each of the makeup effect categories corresponds to a makeup effect for a different facial feature.
6. The method for generating and analyzing user behavior metrics during a makeup consultation session of claim 4, wherein the change in the attribute of the selected makeup effect category includes a change in the color of the makeup effect category.
7. The method for generating and analyzing user behavior metrics during a makeup consultation session of claim 4, wherein the change in the attribute of the selected makeup effect category includes an enhancement or a reduction of the makeup effect.
8. The method for generating and analyzing user behavior metrics during a makeup consultation session of claim 1, wherein causing the one or more hesitation metrics to be displayed in the user interface on the consultation device includes:
ranking the one or more hesitation metrics; and
causing the ranked one or more hesitation metrics to be displayed in the user interface on the consultation device.
9. A system for generating and analyzing user behavior metrics during a makeup consultation session, comprising:
a memory storing a plurality of instructions; and
a processor coupled to the memory and configured by the plurality of instructions to at least:
detect initiation of a videoconference session between a consultation device used by a makeup professional and a client device used by a user to receive makeup consultation from the makeup professional;
extract data from the client device during the videoconference session, the data describing a behavior of the user on the client device with respect to a plurality of suggested makeup effects transmitted by the makeup professional to the client device through the consultation device;
apply a plurality of weight values to the extracted data and generate one or more hesitation metrics based on the plurality of weight values; and
cause the one or more hesitation metrics to be displayed in a user interface on the consultation device.
10. The system for generating and analyzing user behavior metrics during a makeup consultation session of claim 9, wherein the processor extracts the data from the client device by extracting data for a predetermined grouping of a plurality of target events corresponding to user behavior on the client device.
11. The system for generating and analyzing user behavior metrics during a makeup consultation session of claim 10, wherein the processor applies the plurality of weight values to the extracted data by applying a plurality of predetermined weight values to each of the target events in the grouping of the plurality of target events.
12. The system for generating and analyzing user behavior metrics during a makeup consultation session of claim 11, wherein the grouping of the plurality of target events includes at least one of:
a makeup effect category selected by the user on the client device;
a change made by the user on the client device in an attribute of the selected makeup effect category; and
a makeup effect category removed by the user on the client device.
13. The system for generating and analyzing user behavior metrics during a makeup consultation session of claim 12, wherein each of the makeup effect categories corresponds to a makeup effect for a different facial feature.
14. The system for generating and analyzing user behavior metrics during a makeup consultation session of claim 12, wherein the change in the attribute of the selected makeup effect category includes a change in the color of the makeup effect category.
15. The system for generating and analyzing user behavior metrics during a makeup consultation session of claim 9, wherein the processor causes the one or more hesitation metrics to be displayed in the user interface on the consultation device by:
ranking the one or more hesitation metrics; and
causing the ranked one or more hesitation metrics to be displayed in the user interface on the consultation device.
16. A non-transitory computer-readable storage medium for generating and analyzing user behavior metrics during a makeup consultation session, the storage medium storing a plurality of instructions executable by a computing device having a processor, wherein the plurality of instructions, when executed by the processor, cause the computing device to at least:
detect initiation of a videoconference session between a consultation device used by a makeup professional and a client device used by a user to receive makeup consultation from the makeup professional;
extract data from the client device during the videoconference session, the data describing a behavior of the user on the client device with respect to a plurality of suggested makeup effects transmitted by the makeup professional to the client device through the consultation device;
apply a plurality of weight values to the extracted data and generate one or more hesitation metrics based on the plurality of weight values; and
cause the one or more hesitation metrics to be displayed in a user interface on the consultation device.
17. The non-transitory computer-readable storage medium for generating and analyzing user behavior metrics during a makeup consultation session of claim 16, wherein the plurality of instructions cause the computing device to extract the data from the client device by extracting data for a predetermined grouping of a plurality of target events corresponding to user behavior on the client device.
18. The non-transitory computer-readable storage medium for generating and analyzing user behavior metrics during a makeup consultation session of claim 17, wherein the plurality of instructions cause the computing device to apply the plurality of weight values to the extracted data by applying a plurality of predetermined weight values to each of the target events in the grouping of the plurality of target events.
19. The non-transitory computer-readable storage medium for generating and analyzing user behavior metrics during a makeup consultation session of claim 18, wherein the grouping of the plurality of target events includes at least one of:
a makeup effect category selected by the user on the client device;
a change made by the user on the client device in an attribute of the selected makeup effect category; and
a makeup effect category removed by the user on the client device.
20. The non-transitory computer-readable storage medium for generating and analyzing user behavior metrics during a makeup consultation session of claim 19, wherein each of the makeup effect categories corresponds to a makeup effect for a different facial feature.
21. The non-transitory computer-readable storage medium for generating and analyzing user behavior metrics during a makeup consultation session of claim 19, wherein the change in the attribute of the selected makeup effect category includes a change in the color of the makeup effect category.
CN201811599221.XA 2018-02-06 2018-12-26 System and method for generating and analyzing user behavior indexes in makeup consultation conference Active CN110119868B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862627010P 2018-02-06 2018-02-06
US62/627,010 2018-02-06

Publications (2)

Publication Number Publication Date
CN110119868A CN110119868A (en) 2019-08-13
CN110119868B (en) 2023-05-16

Family

Family ID: 67519796

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811599221.XA Active CN110119868B (en) 2018-02-06 2018-12-26 System and method for generating and analyzing user behavior indexes in makeup consultation conference

Country Status (1)

Country Link
CN (1) CN110119868B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112258242B (en) * 2020-11-02 2024-06-18 上海汽车集团股份有限公司 Form configuration item data pushing method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9460462B1 (en) * 2012-05-22 2016-10-04 Image Metrics Limited Monetization using video-based simulation of cosmetic products
CN106294489A (en) * 2015-06-08 2017-01-04 北京三星通信技术研究有限公司 Content recommendation method, Apparatus and system
US9674485B1 (en) * 2015-12-23 2017-06-06 Optim Corporation System and method for image processing
CN107122989A (en) * 2017-03-21 2017-09-01 浙江工业大学 A kind of multi-angle towards cosmetics mixes recommendation method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140280890A1 (en) * 2013-03-15 2014-09-18 Yahoo! Inc. Method and system for measuring user engagement using scroll dwell time


Also Published As

Publication number Publication date
CN110119868A (en) 2019-08-13

Similar Documents

Publication Publication Date Title
US10691932B2 (en) Systems and methods for generating and analyzing user behavior metrics during makeup consultation sessions
JP6650423B2 (en) Simulator and computer-readable storage medium
JP6243008B2 (en) Skin diagnosis and image processing method
US20190026013A1 (en) Method and system for interactive cosmetic enhancements interface
EP3522095A1 (en) Systems and methods for makeup consultation using an improved user interface
US8498456B2 (en) Method and system for applying cosmetic and/or accessorial enhancements to digital images
US11178956B1 (en) System, method and mobile application for cosmetic and skin analysis
US10373348B2 (en) Image processing apparatus, image processing system, and program
TWI657799B (en) Electronic apparatus and method for providing skin detection information thereof
US20220202168A1 (en) Digital makeup palette
JP2016081441A (en) Beauty care assist device, beauty care assist system, and beauty care assist method
US11776187B2 (en) Digital makeup artist
JP2023531264A (en) Systems and methods for improved facial attribute classification and its use
US11961169B2 (en) Digital makeup artist
CN110119868B (en) System and method for generating and analyzing user behavior indexes in makeup consultation conference
KR20200069695A (en) Apparatus and method for recommending cosmetic using skin analyzer
US20190053607A1 (en) Electronic apparatus and method for providing makeup trial information thereof
Masclet et al. A socio-cognitive analysis of evaluation and idea generation activities during co-creative design sessions supported by spatial augmented reality
US11321882B1 (en) Digital makeup palette
JP2023121082A (en) Information providing device, information providing method, and information providing program
WO2020116065A1 (en) Information processing device, cosmetics manufacturing device, and program
CN110134264A Systems, methods, and storage media for rendering on a computing device
CN112287744A (en) Method and system for implementing makeup effect suggestion and storage medium
CN110149301A System and method for event-based makeup consultation sessions
KR20230108886A (en) Artificial intelligence-based facial makeup processing method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant