KR20170097380A - Method and apparatus for emotion classification of smart device user - Google Patents

Method and apparatus for emotion classification of smart device user

Info

Publication number
KR20170097380A
Authority
KR
South Korea
Prior art keywords
emotion
smart device
application
information
user
Prior art date
Application number
KR1020160019040A
Other languages
Korean (ko)
Other versions
KR101783183B1 (en)
Inventor
조위덕
이종익
최선탁
Original Assignee
아주대학교산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 아주대학교산학협력단 filed Critical 아주대학교산학협력단
Priority to KR1020160019040A priority Critical patent/KR101783183B1/en
Publication of KR20170097380A publication Critical patent/KR20170097380A/en
Application granted granted Critical
Publication of KR101783183B1 publication Critical patent/KR101783183B1/en

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/024 Detecting, measuring or recording pulse rate or heart rate
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/68 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6887 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient mounted on external non-worn devices, e.g. non-medical devices
    • A61B5/6898 Portable consumer electronic devices, e.g. music players, telephones, tablet computers
    • G06F17/30705
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Veterinary Medicine (AREA)
  • Surgery (AREA)
  • Software Systems (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Psychiatry (AREA)
  • Cardiology (AREA)
  • Theoretical Computer Science (AREA)
  • Social Psychology (AREA)
  • Psychology (AREA)
  • Child & Adolescent Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Developmental Disabilities (AREA)
  • Educational Technology (AREA)
  • Physiology (AREA)
  • Multimedia (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Disclosed are a method and apparatus for classifying the emotional state of a smart device user. The method of classifying the emotional state of a smart device user according to an embodiment of the present invention includes: collecting, from a smart device, first application usage information including usage information for each application type; collecting, a plurality of times, emotion information including information on the emotional state of the smart device user for a predetermined time interval; and generating, based on the emotion information and the first application usage information for each predetermined time interval, an emotion state determination model indicating the relationship between the per-application-type usage information and the emotional state of the smart device user.

Description

METHOD AND APPARATUS FOR EMOTION CLASSIFICATION OF SMART DEVICE USER

The present invention relates to a method and apparatus for classifying the emotional state of a smart device user from the application usage pattern of the smart device and, more particularly, to a method and apparatus that calculate the relationship between usage information for each application type and the emotional state of the smart device user, and then use that relationship to classify the user's emotional state from application usage information alone.

As smart devices such as smart phones and tablets become widespread throughout the world, the use of smart devices by general users has surged, and research has been conducted on how users' emotional states are related to the use of such smart devices.

Conventionally, research has been conducted on classifying a user's emotional state by analyzing the user's voice, the text messages sent to the other party, and the user's facial expression captured by the camera while the user communicates with another person through a smart device. Such work is based on the assumption that a user's momentary emotion is highly likely to be reflected in the voice during a call, in the contents of transmitted messages, and in the facial expression of captured images, and that these cues can be applied commonly to a large number of users.

However, research on classifying a user's emotional state based on per-application-type usage information of individual users has not been conducted, and there is a growing need for a method that classifies the emotional state more effectively from each user's individual application usage pattern.

Korean Patent No. 10-1480668, entitled "Terminal equipped with an emotion recognition application using voice and control method thereof" (published Jan. 2, 2015), is known as related prior art.

The present invention provides a method and apparatus for calculating the relationship between per-application-type usage information and the emotional state of a smart device user, and for classifying the user's emotional state from per-application-type usage information by using that relationship.

The problems to be solved by the present invention are not limited to the above-mentioned problem (s), and another problem (s) not mentioned can be clearly understood by those skilled in the art from the following description.

In order to achieve the above object, the present invention provides a method of classifying emotional state of a smart device user, comprising: collecting first application use information including usage information for each of a plurality of application types from a smart device; Collecting emotion information including information on an emotion state for a predetermined time interval from a user of the smart device a plurality of times; And generating an emotion state determination model indicating a relationship between the usage information and the emotion state of the smart device user based on the application type, based on the emotion information and the first application usage information for each predetermined time period.

Preferably, the step of generating the emotion state determination model may comprise: generating a plurality of weak classifiers for classifying the emotional state, using a recognition rate and a threshold value for each item of per-application-type usage information calculated based on the emotion information and the first application usage information for each predetermined time interval; generating a strong classifier based on selected classifiers, which are at least one weak classifier whose recognition rate is equal to or higher than a recognition rate threshold among the generated weak classifiers; calculating an emotion state determination threshold, which is the result value of the strong classifier at which the recognition rate of the strong classifier is maximized; and generating the emotion state determination model using the strong classifier and the emotion state determination threshold.

Preferably, the resultant value of the strong classifier can be calculated by Equation (1).

[Equation 1]

S = Σ_{k=1}^{n} (Wa_k × Ws_k)

Here, S is the result value of the strong classifier for the predetermined time interval, n is the total number of selected classifiers, Ws_k is the emotion classification result (0 or 1) of the k-th selected classifier for the predetermined time interval, and Wa_k is the recognition rate of the k-th selected classifier.

Preferably, the first application usage information includes information on the usage frequency and usage time for each application type, and the recognition rate and threshold value for the per-application-type usage information include a recognition rate and a threshold value for the usage frequency of each application type and a recognition rate and a threshold value for the usage time of each application type, which are determined to be different from each other.

Preferably, the method further comprises: collecting second application usage information, which is usage information for each of the plurality of application types, after the generation of the emotion state determination model; And determining an emotion state of the smart device user corresponding to the second application usage information based on the second application use information and the emotion state determination model.

Preferably, the method further comprises, when the smart device user uses a screening application, which is a preset important application, selectively outputting to the smart device user a message confirming whether to perform the function of the screening application, based on the determined emotional state.

Advantageously, the method further comprises recommending to the smart device user the use of a predetermined application corresponding to the determined emotional state.

Advantageously, the step of collecting emotional information including information on the emotional state may comprise receiving a user input indicative of an emotional state from a user of the smart device.

Preferably, collecting emotion information including information on the emotional state comprises: measuring a pulse rate of the smart device user using a pulse sensor interlocked with the smart device; And determining an emotional state of the smart device user based on the measured pulse rate.

Preferably, the emotional state is classified as stress, excitement, comfort, or boredom based on a 2D emotion model, and the emotion state determination model can be generated for each classified type of emotional state.

In order to achieve the above object, the present invention provides an apparatus for classifying the emotional state of a smart device user, comprising: a collecting unit that collects, from a smart device, first application usage information including usage information for each of a plurality of application types, and collects, a plurality of times, emotion information including information on the emotional state of the smart device user for a predetermined time interval; and a model generation unit that generates an emotion state determination model indicating the relationship between the per-application-type usage information and the emotional state of the smart device user, based on the emotion information and the first application usage information for each predetermined time interval.

Preferably, the model generation unit includes: a weak classifier generation unit that generates a plurality of weak classifiers for classifying the emotional state, using the recognition rate and threshold value for each item of per-application-type usage information calculated based on the emotion information and the first application usage information for each predetermined time interval; a strong classifier generation unit that generates a strong classifier based on selected classifiers, which are at least one weak classifier whose recognition rate is equal to or higher than a recognition rate threshold among the generated weak classifiers; a threshold calculation unit that calculates an emotion state determination threshold, which is the result value of the strong classifier at which the recognition rate of the strong classifier is maximized; and a decision model generation unit that generates the emotion state determination model using the strong classifier and the emotion state determination threshold.

Preferably, the resultant value of the strong classifier can be calculated by Equation (1).

[Equation 1]

S = Σ_{k=1}^{n} (Wa_k × Ws_k)

Here, S is the result value of the strong classifier for the predetermined time interval, n is the total number of selected classifiers, Ws_k is the emotion classification result (0 or 1) of the k-th selected classifier for the predetermined time interval, and Wa_k is the recognition rate of the k-th selected classifier.

Preferably, the first application usage information includes information on the usage frequency and usage time for each application type, and the recognition rate and threshold value for the per-application-type usage information include a recognition rate and a threshold value for the usage frequency of each application type and a recognition rate and a threshold value for the usage time of each application type, which are determined to be different from each other.

Preferably, the apparatus further comprises an emotion determination unit that, after the generation of the emotion state determination model, determines the emotional state of the smart device user corresponding to second application usage information, which is usage information for each of the plurality of application types, based on the second application usage information and the emotion state determination model; in this case, the collecting unit may further collect the second application usage information.

Preferably, the apparatus further comprises an output unit that, when the smart device user uses a screening application, which is a preset important application, selectively outputs to the smart device user a message confirming whether to perform the function of the screening application, based on the determined emotional state.

Preferably, the apparatus may further include a recommendation unit for recommending to the smart device user the use of a predetermined application corresponding to the determined emotional state.

Advantageously, the collecting unit is capable of receiving a user input indicating an emotional state from the user of the smart device.

Preferably, the collecting unit includes a pulse measuring unit for measuring a pulse rate of the smart device user using a pulse sensor interlocked with the smart device; And a pulse analyzer for determining an emotional state of the smart device user based on the measured pulse rate.

Preferably, the type of the emotion state is classified into stress, excitement, comfort, or boredom based on the 2D emotion model, and the emotion state determination model may be generated for each type of the emotion state classified.

The present invention has the effect of being able to calculate the relationship between the usage information and the emotional state for each application type of the smart device user.

Further, the present invention has the effect of classifying the user's emotional state using only per-application-type usage information, by exploiting the relationship between the smart device user's per-application-type usage information and emotional state.

In addition, the present invention generates weak classifiers and a strong classifier based on the collected information and generates the emotion state determination model by calculating the emotion state determination threshold of the strong classifier, which makes it possible to classify the user's emotional state more precisely.

FIG. 1 is a flowchart illustrating a method of classifying a user's emotional state according to an exemplary embodiment of the present invention.
2 is a flowchart illustrating a method for generating an emotion state determination model according to an exemplary embodiment of the present invention.
FIG. 3 is a view for explaining a 2D emotion model according to an embodiment of the present invention.
4 is a view showing usage information for each application type stored using a collector program according to an embodiment of the present invention.
FIG. 5 is a diagram illustrating the threshold value and recognition rate of a weak classifier according to an embodiment of the present invention.
FIG. 6 is a view for explaining the criterion for selecting weak classifiers according to an embodiment of the present invention.
FIG. 7 is a diagram illustrating selected classifiers according to an embodiment of the present invention.
FIG. 8 is a view for explaining information on an emotional state according to an embodiment of the present invention.
9 is a diagram for explaining a method of calculating an emotion state determination threshold using a strong classifier according to an embodiment of the present invention.
FIG. 10 is a diagram illustrating an algorithm for generating an emotion state determination model using weak classifiers and a strong classifier according to an embodiment of the present invention.
11 is a view for explaining an emotion state classifying apparatus of a smart device user according to an embodiment of the present invention.
12 is a view for explaining a model generating unit according to an embodiment of the present invention.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the invention is not intended to be limited to the particular embodiments, but includes all modifications, equivalents, and alternatives falling within the spirit and scope of the invention. Like reference numerals are used for like elements in describing each drawing.

The terms first, second, A, B, etc. may be used to describe various elements, but the elements should not be limited by the terms. The terms are used only for the purpose of distinguishing one component from another. For example, without departing from the scope of the present invention, the first component may be referred to as a second component, and similarly, the second component may also be referred to as a first component. And / or < / RTI > includes any combination of a plurality of related listed items or any of a plurality of related listed items.

It is to be understood that when an element is referred to as being "connected" or "connected" to another element, it may be directly connected or connected to the other element, . On the other hand, when an element is referred to as being "directly connected" or "directly connected" to another element, it should be understood that there are no other elements in between.

The terminology used in this application is used only to describe a specific embodiment and is not intended to limit the invention. The singular expressions include plural expressions unless the context clearly dictates otherwise. In the present application, the terms "comprises" or "having" and the like are used to specify that there is a feature, a number, a step, an operation, an element, a component or a combination thereof described in the specification, But do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or combinations thereof.

Unless defined otherwise, all terms used herein, including technical or scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Terms such as those defined in commonly used dictionaries are to be interpreted as having a meaning consistent with the contextual meaning of the related art and are to be interpreted as either ideal or overly formal in the sense of the present application Do not.

Hereinafter, preferred embodiments according to the present invention will be described in detail with reference to the accompanying drawings.

A smart device according to the present invention refers to an electronic device whose functionality is not fixed and whose functions can be changed or extended through applications. Such smart devices include smartphones, tablets, phablets, PCs, smart watches, and other wearable devices.

FIG. 1 is a flowchart illustrating a method of classifying a user's emotional state according to an exemplary embodiment of the present invention.

In step S110, the emotion state classification apparatus collects first application use information including usage information for each application type from the smart device.

Here, the first application use information means usage information for each application type of the smart device user collected in the process of generating the emotion state determination model.

For example, the emotion state classification apparatus may continuously collect the first application usage information from the smart device using an application-type collector program until the emotion state determination model is generated. Referring to FIG. 4, the name of the application currently running on the topmost screen, its execution time, its end time, and the screen state (ON or OFF) can be collected and stored using the collector program.

At this time, the execution time and end time of the application may include the year, month, day, hour, minute, and second of the corresponding time.

In another embodiment, the first application usage information may include information on the frequency of use and the usage time of each application type.

The frequency and duration of use of an application can be understood as an important measure of how much the application has been used.

Meanwhile, information on the usage frequency and usage time of an application can be acquired using the collector program. More specifically, the execution-time and end-time records of a specific application stored by the collector program can be accumulated to obtain the usage time of the application, and the number of such records can be counted to obtain the usage frequency.
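The patent does not disclose the collector program's storage format beyond the fields named above, so the following is only a minimal sketch of this aggregation step; the record layout, the application-to-type mapping, and all names are assumptions for illustration.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical records as stored by the collector program:
# (application name, execution time, end time, screen state).
RAW_LOGS = [
    ("KakaoTalk",     "2014-06-01 09:00:10", "2014-06-01 09:03:40", "ON"),
    ("KakaoTalk",     "2014-06-01 12:15:00", "2014-06-01 12:16:30", "ON"),
    ("ChromeBrowser", "2014-06-01 20:05:00", "2014-06-01 20:35:00", "ON"),
]

# Hypothetical mapping from application name to application type.
APP_TYPE = {"KakaoTalk": "messenger", "ChromeBrowser": "browser"}

def aggregate_usage(logs, app_type_map):
    """Accumulate usage frequency and usage time (seconds) per application type."""
    frequency = defaultdict(int)
    usage_time = defaultdict(float)
    for name, start, end, screen in logs:
        if screen != "ON":               # skip records collected while the screen was off
            continue
        app_type = app_type_map.get(name, "other")
        t0 = datetime.strptime(start, "%Y-%m-%d %H:%M:%S")
        t1 = datetime.strptime(end, "%Y-%m-%d %H:%M:%S")
        frequency[app_type] += 1                          # one execution counted
        usage_time[app_type] += (t1 - t0).total_seconds()
    return dict(frequency), dict(usage_time)

if __name__ == "__main__":
    freq, dur = aggregate_usage(RAW_LOGS, APP_TYPE)
    print(freq)  # {'messenger': 2, 'browser': 1}
    print(dur)   # {'messenger': 300.0, 'browser': 1800.0}
```

Grouping the records by predetermined time interval (e.g., by day) before calling the same routine yields the per-interval features used in the following steps.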

In step S120, the emotion state classification apparatus collects emotion information including information on the emotion state for a predetermined time interval from the user of the smart apparatus a plurality of times.

Here, the predetermined time period may be a divided time period so that the user can easily determine the emotional state. For example, the predetermined time period may be one hour, two hours, one day, or the like, and may be variably set to be from 1 hour to 24 hours.

More specifically, the emotion state classification apparatus can collect the user's emotion information for the predetermined time interval over a plurality of repetitions. The reason for collecting the information a plurality of times is that the reliability of the classified emotional state increases as the amounts of collected emotion information and first application usage information become sufficiently large.

For example, referring to FIG. 8, the emotion state classification apparatus may set the predetermined time interval to one day and collect, for each of the 14 days from June 1, 2014 to June 14, 2014, emotion information indicating whether the user's emotional state was stress, excitement, comfort, or boredom.

In another embodiment, the emotional state can be classified as stress, excitement, comfort, or boredom based on a 2D emotion model.

The 2D emotion model, developed by James Russell, holds that emotional states can be distributed in a two-dimensional circular space, as shown in FIG. 3. The stress, excitement, comfort, and boredom indicated in red in FIG. 3 are the emotional states representing the four quadrants of that circular space.

In yet another embodiment, the emotional state classifier may receive user input indicative of an emotional state from a user of the smart device to collect emotional information.

For example, the emotion state classification apparatus can receive the emotion state of the user representing the predetermined time period from the user immediately after the predetermined time period passes. At this time, the user input may be for a predetermined emotion state such as stress, excitement, comfort or boredom.

In another embodiment, the emotion state classifier may measure the pulse rate of the smart device user using a pulse sensor interlocked with the smart device, and determine the emotional state of the user based on the measured pulse rate.

For example, instead of receiving information on the emotional state explicitly from the user of the smart device, the emotion state classification apparatus may measure the user's pulse rate within the predetermined time interval using a pulse sensor interlocked with the smart device and, by analyzing the measurement results, determine the user's emotional state to be one of stress, excitement, comfort, and boredom.

On the other hand, the pulse sensor may be used as an auxiliary emotional state determination means, and may be used in conjunction with receiving a user input indicative of an emotional state from a user. For example, when the user's emotion state determined using the pulse sensor and the emotion state received from the user do not match, the emotion state classification apparatus can request the user to re-verify the emotion state.
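The patent does not specify how the measured pulse rate maps onto the four emotion labels, so the following is only a speculative sketch of the auxiliary consistency check described above, treating the pulse rate as a rough arousal indicator of the 2D emotion model; the threshold value and the high/low-arousal groupings are assumptions.

```python
HIGH_AROUSAL = {"stress", "excitement"}  # assumed high-arousal quadrants of the 2D model
LOW_AROUSAL = {"comfort", "boredom"}     # assumed low-arousal quadrants
AROUSAL_BPM_THRESHOLD = 85               # hypothetical cut-off, not taken from the patent

def pulse_consistent_with_report(mean_bpm, reported_emotion):
    """Return True if the measured pulse rate is plausible for the reported emotion."""
    high = mean_bpm >= AROUSAL_BPM_THRESHOLD
    if reported_emotion in HIGH_AROUSAL:
        return high
    if reported_emotion in LOW_AROUSAL:
        return not high
    raise ValueError(f"unknown emotion label: {reported_emotion}")

# When the two sources disagree, the apparatus would ask the user to re-verify
# the emotional state, as described above.
if not pulse_consistent_with_report(92, "boredom"):
    print("Pulse and reported emotion disagree; ask the user to re-verify.")
```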

Finally, in step S130, the emotion state classification apparatus generates an emotion state determination model that indicates the relationship between the per-application-type usage information and the emotional state of the smart device user, based on the emotion information and the first application usage information for each predetermined time interval.

For example, the emotion state classifier may analyze the first application use information collected by the collector program for each predetermined time interval and the emotion information received from the user, and calculate the relationship between the usage information and the emotion state for each application type. Then, the emotion state determination model can be generated using the calculated relationship.

At this time, the reason the model is based on the emotion information and the first application usage information for each predetermined time interval is that analyzing emotion information and first application usage information belonging to the same time interval allows the relationship between them to be calculated accurately.

Further, when the predetermined time interval is set variably (for example, from two to eight hours in two-hour increments), the variable intervals can be divided into a common time unit (e.g., one hour or two hours) and the emotion state determination model can be generated on that basis.

Meanwhile, the method for generating the emotion state determination model will be described later in detail with reference to FIG. 2.

In another embodiment, the emotion state classification apparatus may generate an emotion state determination model for each type of emotional state collected from the smart device user.

That is, when the emotion state classification apparatus collects information on a plurality of types of user emotion states, the same number of emotion state decision models can be generated.

For example, when the user's emotional state is divided into the four types of stress, excitement, comfort, and boredom, four emotion state determination models can be generated in total, and the user's per-application-type usage information is input to each model to judge whether it can be classified into the corresponding emotional state.

More specifically, when four emotion state determination models are generated and three of them do not classify the input into their corresponding emotional states while the model for excitement does classify it into the excited state, the user's emotional state can be judged to be excitement.
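A minimal sketch of this one-model-per-emotion decision logic, assuming each trained determination model exposes a boolean decision for its own emotion; the model interfaces and feature names below are hypothetical stand-ins, not the patent's implementation.

```python
def classify_emotion(usage_features, models):
    """models: dict mapping an emotion label to a callable that returns True/False."""
    positives = [label for label, model in models.items() if model(usage_features)]
    if len(positives) == 1:
        return positives[0]  # exactly one model fires: accept its label
    return None              # zero or multiple matches: undecided

# Toy stand-ins for the four trained emotion state determination models.
models = {
    "stress":     lambda f: f["messenger_freq"] > 10,
    "excitement": lambda f: f["game_time"] > 3600,
    "comfort":    lambda f: f["music_time"] > 1800,
    "boredom":    lambda f: f["browser_freq"] > 20,
}
features = {"messenger_freq": 3, "game_time": 5000, "music_time": 0, "browser_freq": 5}
print(classify_emotion(features, models))  # excitement
```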

In another embodiment, after the emotion state determination model is generated, the emotion state classification apparatus collects second application usage information, which is usage information for each application type, and can determine the emotional state of the smart device user based on the second application usage information and the emotion state determination model.

Here, the second application use information means usage information for each application type of the smart device user collected to classify the emotional state of the user using the emotional state determination model after the emotional state determination model is created.

For example, the emotion state classification apparatus collects second application usage information for a certain time interval from the smart device, substitutes the collected second application usage information into the already generated emotion state determination model, and can thereby classify the user's emotional state as stress, excitement, comfort, or boredom.

At this time, the time interval for which the second application usage information is collected may have the same length as the predetermined time interval, or it may be shorter. If it is shorter, the accuracy of the classified emotional state may decrease correspondingly. However, using second application usage information for a shorter time interval may be necessary to make the emotion state classification more timely, for example when a specific task must be performed in response to the user's current emotional state.

In another embodiment, when the smart device user uses a screening application, which is a preset important application, the emotion state classification apparatus may selectively output to the smart device user a message confirming whether to perform the function of the screening application, based on the determined emotional state.

In other words, when the user uses the screening application of the smart device after the user's emotional state has been determined, a message can be selectively output to confirm whether to perform a function that is difficult to reverse, such as 1) sending money, 2) purchasing or selling services or goods, or 3) transmitting a message.

For example, a confirmation message may be output to prevent a user from purchasing an unnecessary item while shopping online, or from being fooled by voice phishing into electronically remitting a large amount of money. That is, an application including a purchase or remittance function is preset as a screening application, and when the emotional state of the smart device user is determined to be excitement, the emotion state classification apparatus can output a message reconfirming the user's intention before the purchase or remittance function is performed.

In another embodiment, the emotion state classifier may recommend to the smart device user the use of a predetermined application in response to the determined emotional state.

For example, when the emotional state of the smart device user is determined to be stress, the emotion state classification apparatus can recommend the use of an application containing humorous content that relieves stress. When the user's emotional state is determined to be boredom, the use of an entertaining game application can be recommended, and when it is determined to be excitement, the use of a music application that calms the mind can be recommended.

As described above, the method for classifying the emotional state of a smart device user according to an embodiment of the present invention calculates the relationship between the user's per-application-type usage information and emotional state, and has the effect of classifying the emotional state using only the per-application-type usage information.

2 is a flowchart illustrating a method for generating an emotion state determination model according to an exemplary embodiment of the present invention.

In step S210, the emotion state classification apparatus generates a plurality of weak classifiers for classifying the emotional state, using the recognition rate and threshold value for each item of per-application-type usage information calculated based on the emotion information and the first application usage information for each predetermined time interval.

Weak classifiers can be generated for each application type and for each kind of usage information, such as usage frequency or usage time. A weak classifier generated in this way determines whether the user is in a specific emotional state using the usage information of a specific application type. Referring to FIG. 7, it can be confirmed that a weak classifier is generated for each application type, one for the usage frequency and one for the usage time (cumulative time).

At this time, the weak classifier determines whether the user is in a specific emotional state based on its recognition rate and threshold. The threshold is the value against which the weak classifier compares the per-application-type usage information to determine the emotional state, and the recognition rate indicates the accuracy of that determination when the emotional state is decided from the comparison between the usage information and the threshold.

On the other hand, the recognition rate and the threshold value of the weak classifier can be determined differently depending on the usage frequency and the usage time of each application type.

FIG. 5 illustrates how the recognition rate and threshold value of a weak classifier are determined.

More specifically, in the upper-left graph of FIG. 5 the target samples (red dots), for which the emotional state is to be detected, lie toward the right of the graph, so the candidate threshold (black vertical dotted line) is moved from the left side of the graph to the right side while the recognition rate (accuracy) of the weak classifier is calculated for each candidate threshold. The threshold that correctly classifies the largest number of input data, out of the total number of data (the sum of the numbers of red and blue dots), is taken as the threshold of the weak classifier, and the maximum recognition rate obtained at that threshold is taken as the recognition rate of the weak classifier.

On the other hand, as shown in the right graph of FIG. 5, even when the position of the target (red dot) is shifted to the left of the graph, the recognition rate and the threshold value of the weak classifier can be determined in the same manner.
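A sketch of this threshold search for a single weak classifier, under the assumption that each training sample is one predetermined time interval described by a scalar feature (for example, the usage frequency of one application type) and a binary label for the target emotion; the function and variable names are illustrative, not the patent's implementation.

```python
def train_weak_classifier(values, labels):
    """
    values: one feature value per time interval; labels: 1 = target emotion, 0 = not.
    Tries every candidate threshold in both directions ('over': predict 1 when the
    value is above the threshold, 'under': predict 1 when it is below) and keeps the
    combination that classifies the largest share of intervals correctly.
    """
    n = len(values)
    best = {"threshold": None, "direction": None, "accuracy": 0.0}
    for threshold in sorted(set(values)):
        for direction in ("over", "under"):
            if direction == "over":
                preds = [1 if v > threshold else 0 for v in values]
            else:
                preds = [1 if v < threshold else 0 for v in values]
            accuracy = sum(p == y for p, y in zip(preds, labels)) / n
            if accuracy > best["accuracy"]:
                best = {"threshold": threshold, "direction": direction,
                        "accuracy": accuracy}
    return best

# Toy data: daily usage frequency of one application type and a daily stress label.
freq   = [0, 1, 1, 2, 3, 5, 6, 8]
stress = [0, 0, 0, 0, 1, 1, 1, 1]
print(train_weak_classifier(freq, stress))
# {'threshold': 2, 'direction': 'over', 'accuracy': 1.0}
```

The recorded 'over'/'under' direction corresponds to the Over/Under marking discussed below with reference to FIG. 7.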

In step S220, the emotion state classification apparatus generates a strong classifier based on selected classifiers, which are the at least one weak classifier whose recognition rate is equal to or higher than the recognition rate threshold among the generated weak classifiers.

The recognition rate threshold is a minimum reference value set to decide that the recognition rate (i.e., accuracy) of the emotional state determined by a weak classifier is significant. That is, a weak classifier whose recognition rate is below the recognition rate threshold is regarded as unreliable. Preferably, the recognition rate threshold may be 0.57, which can be taken to mean that the classification result of the weak classifier is significant.

For example, referring to FIG. 6, when the recognition rate threshold (Min Acc.) is 0.57, weak classifiers whose recognition rate is below 0.57 classify the emotional state too inaccurately to be useful, so only the weak classifiers whose recognition rate is equal to or higher than 0.57 are selected, as shown in FIG. 7, to generate the strong classifier.

Referring to FIG. 7, each selected classifier is marked as either Over or Under: a classifier marked Over outputs a recognition result of 1 when the input value is larger than its threshold (and 0 otherwise), while a classifier marked Under outputs 1 when the input value is smaller than its threshold (and 0 otherwise).

For example, selected classifier No. 1 in FIG. 7 uses the usage frequency of the AfreecaTV application with a threshold of 1.003 and is marked Over, so if the usage frequency is 2, the value is greater than 1.003 and the classifier outputs a recognition result of 1.

Meanwhile, the strong classifier can be defined as in Equation 1 below, and because it is built only from weak classifiers whose recognition rate is equal to or higher than the recognition rate threshold, its accuracy is further improved.

In addition, both the weak classifiers and the strong classifier can be generated based on an AdaBoost (adaptive boosting) algorithm.
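Continuing the sketch, selecting the weak classifiers whose recognition rate reaches the 0.57 threshold and accumulating their accuracy-weighted outputs as in Equation 1 might look as follows; the data structures and example numbers are assumptions rather than the patent's implementation.

```python
def build_strong_classifier(weak_classifiers, min_accuracy=0.57):
    """Keep only the weak classifiers whose recognition rate meets the threshold."""
    return [w for w in weak_classifiers if w["accuracy"] >= min_accuracy]

def strong_score(selected, features):
    """
    Equation 1: S = sum over k of Wa_k * Ws_k, where Ws_k is the 0/1 output of the
    k-th selected classifier and Wa_k is its recognition rate, used as a weight.
    """
    score = 0.0
    for w in selected:
        value = features[w["feature"]]
        if w["direction"] == "over":
            ws = 1 if value > w["threshold"] else 0
        else:
            ws = 1 if value < w["threshold"] else 0
        score += w["accuracy"] * ws
    return score

# Toy weak classifiers (feature name, direction, threshold, recognition rate).
weak = [
    {"feature": "afreecatv_freq", "direction": "over",  "threshold": 1.003, "accuracy": 0.64},
    {"feature": "game_time",      "direction": "over",  "threshold": 3600,  "accuracy": 0.71},
    {"feature": "call_freq",      "direction": "under", "threshold": 2,     "accuracy": 0.50},
]
selected = build_strong_classifier(weak)  # the 0.50 classifier is dropped
print(strong_score(selected, {"afreecatv_freq": 2, "game_time": 1200}))  # 0.64
```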

In step S230, the emotion state classification apparatus calculates the emotion state determination threshold, which is the result value of the strong classifier at which the recognition rate of the strong classifier is maximized.

For example, when a strong classifier is generated and the first application use information is input to the strong classifier every predetermined time interval, a strong classifier result value can be calculated for each predetermined time interval. It is possible to calculate the resultant value of the strong classifier that maximizes the recognition rate of the strong classifier based on the generated plurality of strong classifier result values, and the calculated result value becomes the emotion state determination threshold value.

Referring to FIG. 9, the horizontal axis of both graphs is the result value of the strong classifier, and the upper graph shows the result values together with the user's emotional state (a red x indicates a stress state and a blue dot indicates a non-stress state). Since the stress states lie mainly on the right side of the upper graph, the recognition rate (accuracy) is calculated while the candidate result value (i.e., threshold) of the strong classifier is moved from left to right. The maximum recognition rate obtained in this way is taken as the recognition rate of the strong classifier, and the threshold at that point is taken as the emotion state determination threshold of the strong classifier. In FIG. 9, when the emotion state determination threshold is 4.78, the recognition rate is 0.9286, because 13 of the 14 strong-classifier result values are then correctly separated into stress and non-stress states.
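A sketch of this decision-threshold search over the per-interval strong-classifier scores follows; the scores and labels are toy values, not the data of FIG. 9.

```python
def find_decision_threshold(scores, labels):
    """
    scores: strong-classifier result per time interval; labels: 1 = target emotion.
    Sweeps candidate thresholds and returns the one whose 'score >= threshold' rule
    classifies the most intervals correctly, together with that recognition rate.
    """
    best_threshold, best_acc = None, 0.0
    for threshold in sorted(set(scores)):
        preds = [1 if s >= threshold else 0 for s in scores]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_threshold, best_acc = threshold, acc
    return best_threshold, best_acc

# 13 toy intervals with one inseparable point, so the best rule gets 12 of 13 right.
scores = [1.2, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 4.78, 5.1, 5.6, 6.0, 6.4, 7.0]
labels = [0,   0,   0,   0,   0,   1,   1,   0,    1,   1,   1,   1,   1]
print(find_decision_threshold(scores, labels))  # (4.0, 0.923...)
```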

Finally, in step S240, the emotion state classifier generates an emotion state determination model using the strong classifier and the emotion state determination threshold.

That is, if a strong classifier is defined as shown in Equation (1) below and an emotion state determination threshold value for maximizing the recognition rate of a strong classifier is calculated, an emotion state determination model can be generated.

In other words, when the emotion state classification apparatus collects the second application use information and uses the strong classifier and the emotion state determination threshold as in Equation (1), the emotion state of the user can be classified from the second application use information.

For example, referring to FIG. 9, the result value of the strong classifier is calculated by substituting the collected second application usage information into Equation 1 below; if the calculated result value is larger than the emotion state determination threshold (4.78), the user is classified into the stress state, and the recognition rate (accuracy) of this classification is 0.9286.
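In code, this final classification step is simply a comparison of the strong-classifier result value with the stored threshold; a minimal sketch using the 4.78 value from the FIG. 9 example is shown below.

```python
def classify_interval(strong_score_value, decision_threshold=4.78):
    """Compare the strong-classifier result S for one interval with the stored
    emotion state determination threshold (4.78 in the FIG. 9 example)."""
    return strong_score_value >= decision_threshold

print(classify_interval(5.3))  # True:  classified into the target emotional state
print(classify_interval(2.1))  # False: not classified into that emotional state
```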

In another embodiment, the strong classifier may be defined as in Equation 1 below.

S = Σ_{k=1}^{n} (Wa_k × Ws_k)

Here, S is the result value of the strong classifier, n is the total number of selected classifiers, Ws_k is the recognition result (0 or 1) of the k-th selected classifier for the predetermined time interval, and Wa_k is the recognition rate of the k-th selected classifier.

As a result, the result value of the strong classifier can be understood, as shown in Equation 1, as the sum over all selected classifiers of Ws_k (the recognition result) multiplied by Wa_k (the recognition rate). In other words, Wa_k in Equation 1 acts as a weight applied to Ws_k.

FIG. 10 shows the whole process in which the emotion state classification apparatus extracts the usage time and usage frequency of each application type from the per-application usage information, generates a plurality of weak classifiers from that information, selects the weak classifiers whose recognition rate is equal to or higher than the recognition rate threshold, and generates the strong classifier from them.

As described above, the method for generating the emotion state determination model according to an embodiment of the present invention generates weak classifiers and a strong classifier and calculates the emotion state determination threshold of the strong classifier, and thereby has the effect of classifying the user's emotional state more precisely.

Referring to FIG. 11, the emotion state classification apparatus 1100 of a smart device user includes a collecting unit 1110 and a model generation unit 1120. The emotion state classification apparatus 1100 may further include an emotion determination unit (not shown), an output unit (not shown), and a recommendation unit (not shown).

The collecting unit 1110 collects, from the smart device, first application usage information including usage information for each application type, and collects, a plurality of times, emotion information including information on the emotional state of the smart device user for a predetermined time interval.

In another embodiment, the first application usage information may include information on the frequency of use and the usage time of each application type.

In another embodiment, the collection unit 1110 may receive user input indicative of an emotional state from a user of the smart device.

In another embodiment, the collecting unit 1110 may include a pulse measuring unit (not shown) for measuring the pulse rate of the smart device user using a pulse sensor interlocked with the smart device, and a pulse analysis unit (not shown) for determining the emotional state of the user based on the measured pulse rate.

The model generation unit 1120 generates an emotion state determination model that indicates a relationship between the application type usage information and the emotion state of the smart device user based on the emotion information and the first application usage information collected for each predetermined time period.

Details of the model generation unit 1120 will be described later with reference to FIG. 12.

After the generation of the emotion state determination model, the emotion determination unit (not shown) determines the emotional state of the smart device user corresponding to the second application usage information, which is usage information for each of the plurality of application types, based on the second application usage information and the generated emotion state determination model. At this time, the collecting unit 1110 can further collect the second application usage information.

When the smart device user uses a screening application, which is a preset important application, the output unit (not shown) can selectively output to the smart device user a message confirming whether to perform the function of the screening application, based on the determined emotional state.

Finally, the recommendation unit (not shown) can recommend to the smart device user the use of a predetermined application corresponding to the determined emotional state.

In another embodiment, the type of emotional state of the smart device user may be classified as stress, excitement, comfort, or boredom based on the 2D emotional model, and the emotional state determination model may be generated for each of the classified emotional state types.

Referring to FIG. 12, the model generation unit 1120 includes a weak classifier generation unit 1122, a strong classifier generation unit 1124, a threshold calculation unit 1126, and a decision model generation unit 1128.

The weak classifier generation unit 1122 generates a plurality of weak classifiers for classifying the emotional state, using the recognition rate and threshold value for each item of per-application-type usage information calculated based on the emotion information and the first application usage information for each predetermined time interval.

In another embodiment, when the first application usage information includes information on the usage frequency and usage time for each application type, the recognition rate and threshold value for the per-application-type usage information may be determined separately for the usage frequency and for the usage time of each application type, so that they differ from each other.

The strong classifier generation unit 1124 generates a strong classifier based on selected classifiers, which are the at least one weak classifier whose recognition rate is equal to or higher than the recognition rate threshold among the generated weak classifiers.

The threshold calculation unit 1126 calculates the emotion state determination threshold, which is the result value of the strong classifier at which the recognition rate of the strong classifier is maximized.

Finally, the decision model generation unit 1128 generates the emotion state decision model using the strong classifier and the emotion state determination threshold.

On the other hand, a strong classifier can be defined as shown in Equation (1), and therefore, the resultant value of the strong classifier can be calculated by Equation (1).

The above-described embodiments of the present invention can be written as a program executable on a computer and implemented on a general-purpose digital computer that runs the program using a computer-readable recording medium.

The computer-readable recording medium includes magnetic storage media (e.g., ROM, floppy disks, hard disks) and optical reading media (e.g., CD-ROMs, DVDs).

The present invention has been described with reference to the preferred embodiments. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the disclosed embodiments should be considered in an illustrative rather than a restrictive sense. The scope of the present invention is defined by the appended claims rather than by the foregoing description, and all differences within the scope of equivalents thereof should be construed as being included in the present invention.

Claims (20)

Collecting first application usage information including usage information for each application type from the smart device;
Collecting emotion information including information on an emotion state for a predetermined time interval from a user of the smart device a plurality of times; And
Generating an emotion state determination model that indicates a relationship between an application type usage information and an emotion state of the smart device user based on the emotion information and the first application usage information for each predetermined time period;
The method comprising the steps of:
The method according to claim 1,
The step of generating the emotion state determination model
Generating a plurality of weak classifiers for classifying emotional states by using recognition accuracy and a threshold value for each application type usage information calculated based on the emotion information and the first application use information for each predetermined time period;
Generating a strong classifier based on at least one weak classifier having a recognition rate equal to or higher than a recognition rate threshold among the generated plurality of weak classifiers;
Calculating an emotion state determination threshold, which is a result of the strong classifier, such that the recognition rate of the strong classifier is maximized; And
Generating the emotion state determination model using the strong classifier and the emotion state determination threshold;
The method comprising the steps of:
3. The method of claim 2,
Wherein the result of the strong classifier is calculated according to Equation (1).
[Equation 1]
S = Σ_{k=1}^{n} (Wa_k × Ws_k)

Here, S is the result value of the strong classifier, n is the total number of selected classifiers, Ws_k is 0 or 1 indicating the emotion classification result of the k-th selected classifier for the predetermined time interval, and Wa_k is the recognition rate of the k-th selected classifier.
3. The method of claim 2,
The first application use information includes
Information about usage frequency and usage time of each application type,
The recognition rate and the threshold value for the per-application-type usage information include a recognition rate and a threshold value for the usage frequency of each application type and a recognition rate and a threshold value for the usage time of each application type,
Wherein the recognition rate and the threshold value for the usage frequency of each application type are different from the recognition rate and the threshold value for the usage time of each application type.
The method according to claim 1,
After the generation of the emotion state determination model,
Collecting second application usage information that is usage information for each of a plurality of application types; And
Determining an emotion state of the smart device user corresponding to the second application use information based on the second application use information and the emotion state determination model
The method further comprising the steps of:
6. The method of claim 5,
When the smart device user uses a screening application that is a pre-set critical application,
Selectively outputting a message for confirming whether or not the function of the screening application is performed to the smart device user based on the determined emotion state;
The method further comprising the steps of:
6. The method of claim 5,
Recommending use of a predetermined application corresponding to the determined emotional state to the smart device user
The method further comprising the steps of:
The method according to claim 1,
The step of collecting the emotion information including the information on the emotion state
And receiving a user input indicating an emotional state from a user of the smart device.
The method according to claim 1,
The step of collecting the emotion information including the information on the emotion state
Measuring a pulse rate of the smart device user using a pulse sensor interlocked with the smart device; And
Determining an emotional state of the user of the smart device based on the measured pulse rate;
The method comprising the steps of:
The method according to claim 1,
The type of the emotion state is
Classified as stress, excitement, comfort, or boredom based on a 2D emotion model, and
The emotion state determination model
Is generated for each type of the classified emotional state.
A collecting unit collecting first application usage information including usage information for each application type from the smart device and collecting emotion information including information on the emotion state for a predetermined time period from the user of the smart device a plurality of times; And
A model generating unit that generates an emotion state determination model that indicates a relationship between an application type usage information and an emotion state of the smart device user based on the emotion information and the first application usage information for each predetermined time period;
An apparatus for classifying the emotional state of a smart device user.
12. The method of claim 11,
The model generation unit
A weak classifier generation unit for generating a plurality of weak classifiers for classifying emotional states by using recognition rates and thresholds for each item of per-application-type usage information calculated on the basis of the emotion information and the first application usage information for each predetermined time interval;
A strong classifier generation unit for generating a strong classifier based on selected classifiers, which are at least one weak classifier having a recognition rate equal to or higher than a recognition rate threshold among the plurality of generated weak classifiers;
A threshold value calculation unit for calculating an emotion state determination threshold value which is a result value of the strong classifier for maximizing the recognition rate of the strong classifier; And
A decision model generation unit for generating the emotion state determination model using the strong classifier and the emotion state determination threshold;
Containing
And the smart device user's emotional state classification device.
13. The method of claim 12,
And the result of the strong classifier is calculated by Equation (1).
[Equation 1]
S = Σ_{k=1}^{n} (Wa_k × Ws_k)

Here, S is the result value of the strong classifier, n is the total number of selected classifiers, Ws_k is 0 or 1 indicating the emotion classification result of the k-th selected classifier for the predetermined time interval, and Wa_k is the recognition rate of the k-th selected classifier.
13. The method of claim 12,
The first application use information includes
Information about usage frequency and usage time of each application type,
The recognition rate and the threshold value for the per-application-type usage information include a recognition rate and a threshold value for the usage frequency of each application type and a recognition rate and a threshold value for the usage time of each application type,
Wherein the recognition rate and the threshold value for the use frequency of each application type are different from the recognition rate and the threshold value for the usage time of each application type.
12. The method of claim 11,
After the generation of the emotion state determination model,
An emotion determination unit that determines the emotional state of the smart device user corresponding to second application usage information, which is usage information for each of the plurality of application types, based on the second application usage information and the emotion state determination model,
Further comprising:
And the collecting unit further collects the second application use information.
16. The method of claim 15,
When the smart device user uses a screening application that is a pre-set critical application,
An output unit for selectively outputting a message to the user of the smart device based on the determined emotion state to confirm whether the function of the screening application is performed;
The apparatus for classifying the emotional state of a smart device user.
16. The method of claim 15,
A recommendation section for recommending use of a predetermined application corresponding to the determined emotion state to the smart device user;
The apparatus for classifying the emotional state of a smart device user.
12. The method of claim 11,
The collecting unit
And receives a user input indicating an emotional state from a user of the smart device.
12. The method of claim 11,
The collecting unit
A pulse measuring unit for measuring a pulse rate of the smart device user using a pulse sensor interlocked with the smart device; And
A pulse analyzing unit for determining an emotional state of the user of the smart device based on the measured pulse rate,
The apparatus for classifying the emotional state of a smart device user.
12. The method of claim 11,
The type of the emotion state is
Based on the 2D emotion model, it is classified as stress, excitement, comfort or boredom,
The emotion state determination model
Is generated for each type of the classified emotional state.
KR1020160019040A 2016-02-18 2016-02-18 Method and apparatus for emotion classification of smart device user KR101783183B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020160019040A KR101783183B1 (en) 2016-02-18 2016-02-18 Method and apparatus for emotion classification of smart device user

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020160019040A KR101783183B1 (en) 2016-02-18 2016-02-18 Method and apparatus for emotion classification of smart device user

Publications (2)

Publication Number Publication Date
KR20170097380A (en) 2017-08-28
KR101783183B1 KR101783183B1 (en) 2017-10-23

Family

ID=59759676

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020160019040A KR101783183B1 (en) 2016-02-18 2016-02-18 Method and apparatus for emotion classification of smart device user

Country Status (1)

Country Link
KR (1) KR101783183B1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019182265A1 (en) * 2018-03-21 2019-09-26 엘지전자 주식회사 Artificial intelligence device and method for operating same
KR102300194B1 (en) * 2020-12-24 2021-09-10 주식회사 후원 Method and device for neuro/biofeedback serious game to reduce mental stress based on biomedcal signals
WO2021201924A1 (en) * 2020-04-01 2021-10-07 UDP Labs, Inc. Systems and methods for remote patient screening and triage


Also Published As

Publication number Publication date
KR101783183B1 (en) 2017-10-23

Similar Documents

Publication Publication Date Title
US9202121B2 (en) Liveness detection
EP3244372B1 (en) Effect generating device, effect generating method, and program
CN101378455B (en) Apparatus to specify image region of main subject from obtained image, and method to specify image region of main subject from obtained image
KR101574884B1 (en) Facial gesture estimating apparatus, controlling method, controlling program, and recording medium
KR101749706B1 (en) Method and system for expecting user's mood based on status information and biometric information acquired by using user equipment
US7362886B2 (en) Age-based face recognition
CN109800320A (en) A kind of image processing method, equipment and computer readable storage medium
CN108765131A (en) Credit authorization method, apparatus, terminal and readable storage medium storing program for executing based on micro- expression
KR101783183B1 (en) Method and apparatus for emotion classification of smart device user
CN109934704A (en) Information recommendation method, device, equipment and storage medium
EP3058873A1 (en) Device for measuring visual efficacy
KR20160029655A (en) Identification appartus and control method for identification appartus
CN110008673B (en) Identity authentication method and device based on face recognition
CN109286848B (en) Terminal video information interaction method and device and storage medium
CN107688790A (en) Human bodys' response method, apparatus, storage medium and electronic equipment
CN113111690B (en) Facial expression analysis method and system and satisfaction analysis method and system
CN108921178A (en) Obtain method, apparatus, the electronic equipment of the classification of image fog-level
CN109739354A (en) A kind of multimedia interaction method and device based on sound
KR20170110350A (en) Apparatus and Method for Measuring Concentrativeness using Personalization Model
CN110149531A (en) The method and apparatus of video scene in a kind of identification video data
CN107358280B (en) Book reading detection method and device for children
JP2011154130A (en) Voice identification device and voice identification system using the same
KR102114273B1 (en) Method for personal image diagnostic providing and computing device for executing the method
JP2021067468A (en) Method, device, program, and system for estimating soil quality
CN110301892A (en) A kind of detection method and Related product based on hand vein recognition

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant