US11315362B2 - Emotion-recognition-based service provision apparatus for vehicle and method of controlling the same - Google Patents

Emotion-recognition-based service provision apparatus for vehicle and method of controlling the same

Info

Publication number
US11315362B2
Authority
US
United States
Prior art keywords
emotion
time
driving situation
determining
currently generated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US17/084,010
Other versions
US20220036048A1 (en)
Inventor
Jin Mo Lee
Young Bin Min
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hyundai Motor Co
Kia Corp
Original Assignee
Hyundai Motor Co
Kia Motors Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hyundai Motor Co and Kia Motors Corp
Assigned to KIA MOTORS CORPORATION and HYUNDAI MOTOR COMPANY. Assignors: LEE, JIN MO; MIN, YOUNG BIN
Publication of US20220036048A1
Application granted
Publication of US11315362B2
Legal status: Active
Anticipated expiration

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 50/00 - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W 50/08 - Interaction between the driver and the control system
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V 20/597 - Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 - Facial expression recognition
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 40/00 - Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W 40/08 - Estimation or calculation of non-directly measurable driving parameters related to drivers or passengers
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 40/00 - Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W 40/10 - Estimation or calculation of non-directly measurable driving parameters related to vehicle motion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 40/00 - Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W 40/08 - Estimation or calculation of non-directly measurable driving parameters related to drivers or passengers
    • B60W 2040/0872 - Driver physiology

Definitions

  • The apparatus for providing an emotion-recognition-based service for a vehicle according to at least one embodiment of the present disclosure, configured as described above, may determine whether to provide a service by further considering the form of an emotion. In particular, the sequential emotion and the repetitive emotion may be differentiated depending on the form of an emotion, and a corresponding service may be provided.
  • The present disclosure can also be embodied as computer readable code stored on a computer readable recording medium, such as a non-transitory computer readable recording medium. The method, or the operations performed by individual components such as the emotion recognizer 110, the service provision determiner 120, or their sub-components, can be embodied as computer readable code stored on a memory implemented by, for example, such a recording medium.
  • The computer readable recording medium is any data storage device that can store data which can thereafter be read by a computer. Examples include a hard disk drive (HDD), a solid state drive (SSD), a silicon disc drive (SDD), read-only memory (ROM), random-access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices.
  • The emotion recognizer 110 and the service provision determiner 120 may be implemented as a computer, a processor, or a microprocessor; likewise, the preprocessor 112, the feature extractor 113, the emotion classifier 114, the basic emotion determiner 115, the driving situation determiner 121, and the emotion form determiner 122 may each, or together, be implemented as a computer, a processor, or a microprocessor. When the computer, processor, or microprocessor reads and executes the computer readable code stored in the recording medium, it may be configured to perform the above-described operations and method.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Mathematical Physics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Traffic Control Systems (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

A method of providing an emotion-recognition-based service for a vehicle includes monitoring an occupant and a driving situation; when an emotion of the occupant is recognized during the monitoring, comparing at least one of a first emotion, which is the type of the currently generated emotion, or a first driving situation, which is the driving situation at a first time at which the emotion is recognized, with at least one of a second emotion, which is the type of a previously recognized emotion, or a second driving situation, which is the driving situation at a second time at which the previously generated emotion was recognized; and, based on a result of the comparing, determining the currently generated emotion to be one of a transient emotion, a sequential emotion, and a repetitive emotion.

Description

This application claims the benefit of Korean Patent Application No. 10-2020-0094593, filed on Jul. 29, 2020, which is hereby incorporated by reference as if fully set forth herein.
TECHNICAL FIELD
The present disclosure relates to a method of controlling a vehicle function based on emotion recognition of a driver in the vehicle, and more particularly, to an apparatus for providing an emotion-recognition-based service for a vehicle and a method of controlling the same, which determine whether to provide a service depending on the form of the emotion of the driver.
BACKGROUND
Recently, research has been actively conducted into technology for determining the emotional state of a user in a vehicle. In addition, research has also been actively conducted into technology for inducing a positive emotion of a user in a vehicle based on the determined emotional state of the user.
However, conventional emotion-recognition-based services determine only whether the emotional state of a user in a vehicle is positive or negative, and merely provide feedback for adjusting the output of in-vehicle components based on that determination.
However, a driver emotion may be rapidly restored to a neutral emotion after an emotion occurs while traveling, emotions may change continuously, and a specific emotion may occur repeatedly and frequently. Thus, if the vehicle attempts to provide a service whenever an emotion is detected, even when the current emotion is merely transient, the need for the service is low if the driver emotion has already been restored to a neutral emotion by the time the service is provided.
SUMMARY
An object of the present disclosure is to provide an emotion-recognition-based service provision apparatus for a vehicle, and a method of controlling the same, that determine whether to provide a service by further considering the form of an emotion as well as the type of the emotion.
In particular, the present disclosure provides an emotion-recognition-based service provision apparatus for a vehicle, and a method of controlling the same, that effectively determine, depending on the form of an emotion, the emotions for which a service actually needs to be provided.
Additional advantages, objects, and features of the disclosure will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the disclosure. The objectives and other advantages of the disclosure may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
To achieve these objects and other advantages and in accordance with the purpose of the disclosure, as embodied and broadly described herein, a method of providing an emotion-recognition-based service for a vehicle is disclosed. The method includes monitoring an occupant and a driving situation; when an emotion of the occupant is recognized during the monitoring, comparing at least one of a first emotion, which is the type of the currently generated emotion, or a first driving situation, which is the driving situation at a first time at which the emotion is recognized, with at least one of a second emotion, which is the type of a previously generated emotion, or a second driving situation, which is the driving situation at a second time at which the previously generated emotion was recognized; and, based on a result of the comparing, determining the currently generated emotion to be one of a transient emotion, a sequential emotion, and a repetitive emotion.
In another aspect of the present disclosure, an apparatus for providing an emotion-recognition-based service for a vehicle includes an emotion recognizer configured to determine an emotion of an occupant; a driving situation determiner configured to monitor a driving situation; and an emotion form determiner configured to, when the emotion recognizer recognizes the emotion of the occupant, compare at least one of a first emotion, which is the type of the currently generated emotion, or a first driving situation, which is the driving situation at a first time at which the emotion is recognized, with at least one of a second emotion, which is the type of a previously generated emotion, or a second driving situation, which is the driving situation at a second time at which the previously generated emotion was recognized, and to determine the currently generated emotion to be one of a transient emotion, a sequential emotion, and a repetitive emotion based on a result of the comparing.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the disclosure and together with the description serve to explain the principle of the disclosure. In the drawings:
FIG. 1 is a diagram showing an example of an apparatus for providing an emotion-recognition-based service for a vehicle according to an embodiment of the present disclosure;
FIG. 2 is a flowchart showing an example of a procedure of providing an emotion-recognition-based service according to an embodiment of the present disclosure;
FIG. 3 is a diagram for explaining a reference for determining the form of an emotion according to an embodiment of the present disclosure;
FIG. 4 is a diagram showing an example of determination of the form of an emotion according to an embodiment of the present disclosure;
FIGS. 5A to 5C show experimental examples of determination of an emotion form according to an embodiment of the present disclosure;
FIG. 6 is a diagram for explaining a reference for determining the form of an emotion according to another embodiment of the present disclosure; and
FIG. 7 is a flowchart showing an example of a procedure of providing an emotion-recognition-based service according to another embodiment of the present disclosure.
DETAILED DESCRIPTION
Exemplary embodiments of the present disclosure are described in detail with reference to the accompanying drawings so that those of ordinary skill in the art may easily implement the same. However, the present disclosure may be implemented in various different forms, and is not limited to these embodiments. To clearly describe the present disclosure, parts not concerning the description are omitted from the drawings, and like reference numerals denote like elements throughout the specification.
In addition, when a certain part "includes" a certain component, this indicates that the part may further include other components rather than excluding them, unless there is disclosure to the contrary. The same reference numbers will be used throughout the drawings and the specification to refer to the same parts.
An embodiment of the present disclosure may provide a service when a specific form of emotion is recognized, by further considering the form of the emotion as well as its type, in order to provide the service efficiently, i.e., at a more significant time, in response to an emotion of a vehicle occupant such as the driver.
The form of the emotion to which embodiments of the present disclosure are applicable may include a transient emotion, a sequential emotion, and a repetitive emotion.
The sequential emotion may refer to the case in which a plurality of emotions occurs simultaneously or sequentially during a single driving situation event. An example is the case in which a driver expresses emotions of laughing, sadness, and regret, simultaneously or in sequence, while conversing. In this case, analyzing the emotion change over a unit time and determining that one emotion occurs while the event occurs may reflect the actual driver emotion better than treating the emotions as separate sequential occurrences. If a related service were provided immediately each time such an emotion is recognized, the user might rather be inconvenienced by receiving the service while the emotion is still changing; with regard to the sequential emotion, there is thus a need to provide the emotion recognition result and to execute the related service at the time at which the emotion terminates.
The repetitive emotion may refer to the case in which the same emotion occurs again after a short time interval. Examples include the case in which a driver repeatedly feels bored or sleepy irrespective of the driving situation after first feeling bored or sleepy due to continuous cruising, and the case in which the driver becomes sensitive to the movement of other vehicles irrespective of the driving situation after being surprised by the sudden stop of a preceding vehicle while traveling. Research in the field of artificial emotion modeling has observed the phenomenon in which an emotion, once it occurs, is expressed repeatedly even after its cause no longer exists, and the existing psychological finding that the memory of past stimulation can itself trigger a new emotion can likewise be observed in driving situations.
In addition, the transient emotion may refer to an individual emotion that does not correspond to emotions included in the repetitive emotion or the sequential emotion.
Hereinafter, an emotion-recognition-based service provision apparatus and a method of controlling the same according to an embodiment of the present disclosure will be described. In the following description, the target of emotion recognition is assumed to be the driver among the vehicle occupants, but this is for convenience of description; it will be apparent to one of ordinary skill in the art that an emotion-based service may equally be provided for an arbitrary occupant as the target of emotion recognition, irrespective of whether that occupant is driving the vehicle, to the extent consistent with the following description.
FIG. 1 is a diagram showing an example of an apparatus for providing an emotion-recognition-based service for a vehicle according to an embodiment of the present disclosure.
Referring to FIG. 1, the apparatus for providing an emotion-recognition-based service for a vehicle may include an emotion recognizer 110 for recognizing a driver emotion; a service provision determiner 120 for determining the form of an emotion based on the driver emotion recognized by the emotion recognizer 110, the driving situation, and the previous determination history, and for determining whether to provide a service depending on the form of the emotion; and an output unit 130 for providing a service in the vehicle according to the determination of the service provision determiner 120.
First, the emotion recognizer 110 may include an information acquirer 111, a preprocessor 112, a feature extractor 113, an emotion classifier 114, and a basic emotion determiner 115.
The information acquirer 111 may include a camera for acquiring an image containing at least the face of the driver. Depending on the emotion recognition method, the information acquirer 111 may further include a microphone for receiving a driver voice or a biosensor (e.g., a heart rate sensor) as well as the camera.
The preprocessor 112 may perform correction and noise removal on the information acquired by the information acquirer 111. For example, the preprocessor 112 may perform image-quality correction and noise removal on the raw image data acquired through the camera.
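As an illustration only, the correction and noise-removal step might look like the following minimal sketch, assuming OpenCV is available; the patent does not specify the preprocessor's actual pipeline, so the particular operations chosen here (grayscale conversion, histogram equalization, non-local-means denoising) are assumptions.

    import cv2

    def preprocess_frame(frame_bgr):
        """Simple quality correction and noise removal for one camera frame."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)  # work on luminance
        gray = cv2.equalizeHist(gray)                       # normalize illumination
        return cv2.fastNlMeansDenoising(gray, h=10)         # suppress sensor noise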
The feature extractor 113 may extract, from the preprocessed image, feature points that serve as the basis for determining an emotion, movement, and the like.
Technology for emotion feature extraction may broadly include three methods. One method is a holistic method of extracting a feature by modeling or expressing the intensity of pixel values in an image of the entire face. Another method is a geometrical method of extracting a feature by searching for the geometrical arrangement and position of features in the face. In addition, an active appearance model (AAM), which is formed by combining the holistic method and the geometrical method, may also be applied. One or more of these feature extraction methods may be implemented by the feature extractor 113.
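For the geometrical method, a feature vector can be built from distances between facial landmarks. The sketch below is a hypothetical illustration: the landmark index layout and the chosen point pairs are assumptions, and a real system would obtain the landmarks from a face landmark detector.

    import numpy as np

    def geometric_features(landmarks: np.ndarray) -> np.ndarray:
        """landmarks: (N, 2) array of (x, y) facial landmark coordinates."""
        left_eye, right_eye = landmarks[0], landmarks[1]     # assumed layout
        scale = np.linalg.norm(right_eye - left_eye) + 1e-9  # scale reference
        pairs = [(2, 3), (4, 5), (2, 4)]  # assumed expression-sensitive pairs
        # Distances normalized by inter-ocular distance, so the features do
        # not depend on how far the driver sits from the camera.
        return np.array([np.linalg.norm(landmarks[i] - landmarks[j]) / scale
                         for i, j in pairs])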
However, these feature extraction methods are merely exemplary, and the present disclosure is not limited thereto, and thus the feature extraction method may be any method as long as the method is a method for extracting a feature for determining an emotion.
The emotion classifier 114 may classify an expression in a given image into emotional states based on pattern classification of the extracted features.
The emotion classifier 114 may classify emotions using various methods including a Bayesian network using a predefined conditional probability, a K-nearest neighbor algorithm, an artificial neural network, or the like.
The basic emotion determiner 115 may finally determine the classified emotional state as one of a preset set of basic emotions. Here, the basic emotions may include the six emotions of fear, anger, happiness, disgust, sadness, and surprise that are classified based on facial expression, as proposed by P. Ekman. However, this is merely exemplary, and the type and number of basic emotions may be set differently. For example, the basic emotions according to the present embodiment may further include drowsiness, contempt, boredom, and the like in addition to P. Ekman's six basic emotions.
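To make the classifier-plus-determiner pairing concrete, here is a minimal sketch using the K-nearest-neighbor option named above, with the scikit-learn API assumed; the label set extends Ekman's six basic emotions in the way the text allows, and all names are illustrative rather than from the patent.

    from sklearn.neighbors import KNeighborsClassifier

    BASIC_EMOTIONS = ["fear", "anger", "happiness", "disgust", "sadness",
                      "surprise", "drowsiness", "contempt", "boredom"]

    def train_emotion_classifier(features, labels):
        """features: (n_samples, n_features); labels: indices into BASIC_EMOTIONS."""
        clf = KNeighborsClassifier(n_neighbors=5)
        clf.fit(features, labels)
        return clf

    def determine_basic_emotion(clf, feature_vector) -> str:
        """Map the classifier's decision to a preset basic-emotion name."""
        return BASIC_EMOTIONS[int(clf.predict([feature_vector])[0])]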
The service provision determiner 120 may include a driving situation determiner 121, an emotion form determiner 122, and a memory 123.
The driving situation determiner 121 may determine the driving situation, such as driving in a congested section, cruising, making a sudden stop, or driving through a curved section, based on the road or traffic situation and the driver's vehicle manipulation. The driving situation determiner 121 may also determine a driver situation irrespective of the traffic situation or vehicle manipulation (e.g., making a telephone call or conversing with occupants). To this end, the driving situation determiner 121 may acquire the information required to determine the driving situation from various vehicle sensors (e.g., a vehicle speed sensor or an acceleration sensor), a navigation system providing road and traffic-volume information, a head unit, or the like.
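A hypothetical rule-of-thumb version of such a determiner, using the sensor and navigation inputs listed above, could look as follows; the thresholds and situation names are illustrative assumptions, not values from the patent.

    def classify_driving_situation(speed_kph: float, accel_ms2: float,
                                   congestion_ahead: bool) -> str:
        """Coarse driving-situation label from speed, acceleration, and traffic."""
        if accel_ms2 < -6.0:
            return "sudden_stop"        # hard braking seen by the accel sensor
        if congestion_ahead and speed_kph < 20.0:
            return "congested_section"  # navigation traffic data plus low speed
        if speed_kph > 60.0 and abs(accel_ms2) < 0.5:
            return "cruising"           # steady speed over a sustained period
        return "normal_driving"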
When the emotion recognizer 110 recognizes a basic emotion, the emotion form determiner 122 may determine whether the currently recognized emotion corresponds to a transient emotion, a sequential emotion, or a repetitive emotion in consideration of at least one of the driving situation at the corresponding time, the previous emotion, the time at which the previous emotion was recognized, or the driving situation at the time the previous emotion was recognized. The emotion form determiner 122 may then determine whether to provide a service corresponding to an emotion included in the recognized emotion form, or to a representative emotion based on that form.
Whenever the emotion recognizer 110 recognizes an emotion, the memory 123 may store the type and form of the recognized emotion and the driving situation at the time the emotion was recognized.
When the service provision determiner 120 determines to provide a service corresponding to the recognized emotion, the output unit 130 may provide a corresponding form of service through a display 131 for providing a service through visual output, a speaker 132 for providing a service through audible output, a vibration output unit 133 for providing a service through vibration output, or the like.
For example, when a service corresponding to boredom is provided, the output unit 130 may output music having a fast tempo through the speaker 132. In another example, when a service corresponding to surprise is provided, the output unit 130 may provide an image for guidance of deep breathing through the display 131. Needless to say, the form in which service is provided is merely exemplary, and any service may be provided, as long as the service corresponds to the recognized emotion of the driver.
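One way to organize the emotion-to-service mapping of the output unit is a simple lookup table, sketched below; the boredom and surprise entries follow the examples just given, while the drowsiness entry and all identifiers are illustrative assumptions.

    SERVICE_MAP = {
        "boredom":    ("speaker", "play fast-tempo music"),
        "surprise":   ("display", "show deep-breathing guidance"),
        "drowsiness": ("actuator", "open a window or run the air conditioning"),
    }

    def provide_service(emotion: str) -> None:
        channel, action = SERVICE_MAP.get(emotion, ("display", "no service"))
        print(f"[{channel}] {action}")  # stand-in for display/speaker/vibration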
Hereinafter, a procedure of providing a service according to an embodiment will be described based on the aforementioned configuration of the apparatus.
FIG. 2 is a flowchart showing an example of a procedure of providing an emotion-recognition-based service according to an embodiment of the present disclosure.
Referring to FIG. 2, first, a driver and a driving situation may be monitored (S210). For example, an image containing the face of the driver may be captured, a driver emotion may be determined based on the image captured by the emotion recognizer 110, and the driving situation determiner 121 may monitor a driving situation.
When the emotion recognizer 110 determines that an emotion occurs while the driving situation is monitored (YES of S220), the emotion form determiner 122 may search the memory 123 and may determine whether the current emotion occurs within a preset time T after the previous emotion occurs (S230).
When the current emotion occurs after the preset time T has elapsed since the previous emotion (NO of S230), the emotion form determiner 122 may determine the form of the current emotion to be a transient emotion (S260), and may store the time at which the current emotion occurs, its type (i.e., the type of the basic emotion), and information on the driving situation at the corresponding time in the memory 123 (S290).
In contrast, when the current emotion occurs within the preset time T of the previous emotion (YES of S230), the emotion form determiner 122 may determine whether the driving situation in which the current emotion occurs is the same as the driving situation in which the previous emotion occurred (S240).
When, as the determination result, the current emotion occurs in the same driving situation within the preset time from the time at which the previous emotion occurred, the emotion form determiner 122 may determine that the previous emotion and the current emotion are included in a sequential emotion and may determine a representative emotion of the sequential emotion (S270). For example, the emotion form determiner 122 may generate one representative emotion type by combining information on the previous emotion and information on the new emotion. The representative emotion may be the emotion that occurs most frequently among the plurality of emotions included in the sequential emotion; when emotions have the same frequency, a positive emotion may be chosen as the representative emotion, or the representative emotion may be determined according to a preset priority among preset representative emotions, but the present disclosure is not limited thereto. For example, candidate representative emotions may first be determined based on the frequency at which each emotion occurs, the representative emotion may then be determined by additionally considering the average magnitude of the emotions for the respective candidates, and different weights may also be applied to different emotions for the respective driving situations.
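A minimal sketch of this representative-emotion selection, assuming the frequency rule with a positive-emotion tiebreak described above; the set of positive emotions is an assumption.

    from collections import Counter

    POSITIVE_EMOTIONS = {"happiness", "pleasure"}  # assumed positive set

    def representative_emotion(emotions: list) -> str:
        """Most frequent emotion wins; a positive emotion wins ties."""
        counts = Counter(emotions)
        top = max(counts.values())
        candidates = [e for e, c in counts.items() if c == top]
        for e in candidates:
            if e in POSITIVE_EMOTIONS:
                return e
        return candidates[0]

    # e.g. representative_emotion(["sadness", "pleasure", "pleasure", "regret"])
    # returns "pleasure"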
When determining that the current emotion is part of a sequential emotion, the emotion form determiner 122 may store the time at which the representative emotion occurs, the type of the representative emotion, and information on the driving situation at that time in the memory 123 (S290).
When the current emotion occurs within the preset time from the time at which the previous emotion occurred but the driving situations are different, the emotion form determiner 122 may determine whether the previous emotion and the current emotion are of the same type (S250).
When the previous emotion and the current emotion are determined to be of the same type (YES of S250), the emotion form determiner 122 may determine the current emotion to be a repetitive emotion and may provide a service corresponding to the type of the current emotion through the output unit 130 (S280). Needless to say, in the case of a repetitive emotion as well, information on the time at which the current emotion occurs, the type of the current emotion, and the driving situation at the corresponding time may be stored in the memory 123 (S290).
In contrast, when the current emotion occurs within the predetermined time after the previous emotion but both the driving situation and the type of the current emotion differ from those of the previous emotion (NO of S250), the current emotion may be processed as a transient emotion (S260), and the related information may be stored in the memory 123 (S290).
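The S230-S260 branch structure above condenses into a short decision function. The sketch below is one possible reading of the FIG. 2 flow, assuming each memory record stores the recognition time, emotion type, and driving situation; the threshold value and all names are assumptions.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class EmotionRecord:
        time: float       # seconds since the vehicle started traveling
        emotion: str      # basic emotion type
        situation: str    # driving situation at recognition time

    T = 60.0              # preset threshold time (assumed value)

    def classify_form(current: EmotionRecord,
                      previous: Optional[EmotionRecord]) -> str:
        if previous is None or current.time - previous.time > T:  # S230: NO
            return "transient"                                    # S260
        if current.situation == previous.situation:               # S240: YES
            return "sequential"                                   # S270
        if current.emotion == previous.emotion:                   # S250: YES
            return "repetitive"                                   # S280
        return "transient"                                        # S250: NO

    # e.g. surprise in a curve 30 s after surprise during a sudden stop:
    # classify_form(EmotionRecord(130.0, "surprise", "curve"),
    #               EmotionRecord(100.0, "surprise", "sudden_stop"))
    # returns "repetitive"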
Hereinafter, a method of determining the form of an emotion will be described in more detail with reference to FIGS. 3 to 5C.
FIG. 3 is a diagram for explaining a reference for determining the form of an emotion according to an embodiment of the present disclosure. In FIG. 3, the horizontal axis indicates the time that elapses after the vehicle starts traveling, and the vertical axis indicates the magnitude of an emotion.
FIG. 3 shows an example, in section 1, in which a driver becomes sensitive to the driving situation and repeatedly feels surprise after a surprise occurs due to, e.g., a sudden stop of the vehicle. A surprise 310 that occurs first may be processed as a transiently occurring emotion, but a surprise 320 that occurs again within a preset threshold time T is the same emotion as the surprise 310 arising in a different driving situation; accordingly, the surprise 310 and the surprise 320 may be processed as a repetitive emotion, and the vehicle may provide a service corresponding to the emotion and perform vehicle control at the time the emotion is repeatedly recognized.
The drawing also shows an example, in section 2, in which various emotions occur continuously within the threshold time T during a conversation between the driver and an occupant. In this case, all emotion changes in the driving situation in which the conversation continues may be integrated and processed as one emotion change. In section 2, the emotion of pleasure occurs most frequently, and thus the emotion changes may be processed as a single occurrence of pleasure as the representative emotion.
A surprise 330 also occurs in section 2, but it occurs after the threshold time T has elapsed since the last surprise 320; thus, the surprise 330 may not be processed as a repetitive emotion even though the driving situations differ from each other.
FIG. 4 is a diagram showing an example of determination of the form of an emotion according to an embodiment of the present disclosure.
Similarly to FIG. 3, in FIG. 4, the vertical axis indicates the magnitude of an emotion and the horizontal axis indicates a time that elapses after a vehicle starts travelling. However, compared with FIG. 3, in FIG. 4, a driving situation and an emotion form are further illustrated below the graph.
Referring to FIG. 4, even if contempt and surprise, which correspond to transient emotions, are recognized while traveling, vehicle control and service provision may not be performed.
In contrast, in the graph, when boredom (here, the representative emotion of a sequential emotion) is recognized and boredom is recognized again within the threshold time T, this may be processed as a repetitive emotion, and a service may be provided. In this case, because the driver feels boredom once in traffic congestion and then repeatedly feels boredom while cruising after the congestion, the provided service may be the recommendation and playback of music to relieve the boredom, but the present disclosure is not limited thereto.
FIGS. 5A to 5C show experimental examples of determination of an emotion form according to an embodiment of the present disclosure.
First, referring to FIG. 5A, traffic congestion occurs and a driver experiences contempt (contempt 1), and then repeated contempt (contempt 2) is detected even in a situation in which the traffic congestion no longer exists. When the contempt 1 and the contempt 2 occur within a threshold time, the contempt may be processed as a repetitive emotion, and a service corresponding to contempt may be provided.
Referring to FIG. 5B, when the driver feels drowsiness (drowsiness 1) in a cruising situation and repeated drowsiness (drowsiness 2) is then detected within a threshold time even in a situation that is not cruising, the drowsiness may be processed as a repetitive emotion, and a service corresponding to drowsiness (e.g., opening a window, operating the air conditioning, or playing multimedia) may be provided.
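By way of a non-limiting sketch, the correspondence between a repetitively recognized emotion and the service to be provided may be held in a simple lookup table; the entries below merely mirror the services named for boredom (FIG. 4) and drowsiness (FIG. 5B), and the names are hypothetical.

    # Hypothetical emotion-to-service table mirroring the examples above.
    SERVICES = {
        "boredom":    ["recommend and play music"],
        "drowsiness": ["open a window", "operate the air conditioning",
                       "play multimedia"],
    }

    def services_for(emotion_kind: str) -> list:
        # Return the service actions for a repetitively recognized emotion;
        # an empty list means no service is defined for that emotion.
        return SERVICES.get(emotion_kind, [])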
Referring to FIG. 5C, when conversation between a driver and an occupant continues, emotion changes are detected to occur sequentially. In this case, a representative emotion of the sequential emotion is extracted, and the sequential emotion is processed as the one representative emotion.
According to another embodiment of the present disclosure, the threshold time applied to the time interval between the previous emotion and the current emotion when processing the sequential emotion may be set differently from the threshold time applied when processing the repetitive emotion, which will be described with reference to FIGS. 6 and 7.
FIG. 6 is a diagram for explaining a reference for determining the form of an emotion according to another embodiment of the present disclosure.
Referring to FIG. 6, a first threshold time T1, which is the time difference between a previous emotion and a current emotion for processing the sequential emotion, may be set differently from a second threshold time T2, which is the time difference between a previous emotion and a current emotion for processing the repetitive emotion.
Here, the first threshold time may be set to be shorter than the second threshold time. This is because the sequential emotion generally occurs within a short time, whereas the repetitive emotion occurs at a longer time interval than the sequential emotion. As such, when T1 and T2 are set differently, the procedure of providing the emotion-recognition-based service described above with reference to FIG. 2 may be modified as shown in FIG. 7.
FIG. 7 is a flowchart showing an example of a procedure of providing an emotion-recognition-based service according to another embodiment of the present disclosure.
FIG. 7 differs from FIG. 2 in that operation S230 of FIG. 2 is divided into operations S230A and S230B in FIG. 7. FIG. 7 is otherwise similar to FIG. 2 and thus is described below only in terms of this difference.
Referring to FIG. 7, when an emotion occurs (YES of S220) while a driver and a driving situation are monitored (S210), whether the current emotion occurs within T1 after the previous emotion occurs may be determined (S230A). When the current emotion occurs within T1 (YES of S230A), whether the driving situation in which the current emotion occurs is the same as that of the previous emotion may be determined (S240), as in FIG. 2.
In contrast, when the current emotion occurs after T1 elapses from the previous emotion (NO of S230A), the emotion form determiner 122 may determine whether the current emotion occurs within T2 from the previous emotion (S230B).
When the current emotion occurs within T2 (YES of S230B), the emotion form determiner 122 may determine whether the current emotion is the same as the previous emotion (S250), and when T2 elapses (NO of S230B), the emotion form determiner 122 may process the current emotion as a transient emotion (S260).
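As a sketch only, the two-threshold flow of FIG. 7 can be expressed by extending the hypothetical classify_emotion function introduced earlier; t1 and t2 correspond to the first and second threshold times, with t1 set shorter than t2 as in FIG. 6.

    def classify_emotion_two_thresholds(prev, curr, t1: float, t2: float) -> str:
        # Assumes t1 < t2 and the hypothetical EmotionEvent fields above.
        dt = curr.time - prev.time
        if dt <= t1:                              # YES of S230A
            if curr.situation == prev.situation:  # S240: same driving situation
                return "sequential"
            if curr.kind == prev.kind:            # same emotion type (S250)
                return "repetitive"
            return "transient"
        if dt <= t2:                              # YES of S230B
            if curr.kind == prev.kind:            # same emotion type (S250)
                return "repetitive"
            return "transient"
        return "transient"                        # NO of S230B: processed as S260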
The remaining procedures are similar to those of FIG. 2, and thus a repeated description thereof will be omitted for clarity.
The apparatus for providing an emotion-recognition-based service for a vehicle related to at least one embodiment of the present disclosure as configured above may determine whether to provide a service by further considering the form of an emotion.
In particular, according to the present disclosure, the sequential emotion and the repetitive emotion may be differentiated depending on the form of an emotion, and a service corresponding thereto may be provided.
It will be appreciated by persons skilled in the art that the effects that could be achieved with the present disclosure are not limited to what has been particularly described hereinabove, and other advantages of the present disclosure will be more clearly understood from the detailed description.
The aforementioned present disclosure can also be embodied as computer readable code stored on a computer readable recording medium such as a non-transitory computer readable recording medium. For example, the method or the operations performed by individual components such as the emotion recognizer 110 or sub-components thereof and the service provision determiner 120 or sub-components thereof can be embodied as computer readable code stored on a memory implemented by, for example, a computer readable recording medium such as a non-transitory computer readable recording medium. The computer readable recording medium is any data storage device that can store data which can thereafter be read by a computer. Examples of the computer readable recording medium include a hard disk drive (HDD), a solid state drive (SSD), a silicon disc drive (SDD), read-only memory (ROM), random-access memory (RAM), CD-ROM, magnetic tapes, floppy disks, optical data storage devices, etc. The emotion recognizer 110 and the service provision determiner 120, each, or together, may be implemented as a computer, a processor, or a microprocessor. Alternatively, the preprocessor 112, the feature extractor 113, the emotion classifier 114, the basic emotion determiner 115, the driving situation determiner 121, and the emotion form determiner 122, each, or together, may be implemented as a computer, a processor, or a microprocessor. When the computer, the processor, or the microprocessor reads and executes the computer readable code stored in the computer readable recording medium, the computer, the processor, or the microprocessor may be configured to perform the above-described operations/method.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present disclosure without departing from the spirit or scope of the disclosure. Thus, it is intended that the present disclosure cover the modifications and variations of this disclosure provided they come within the scope of the appended claims and their equivalents.

Claims (17)

What is claimed is:
1. A method of providing an emotion-recognition-based service for a vehicle, the method comprising:
monitoring an occupant and a driving situation;
when recognizing an emotion of the occupant during the monitoring, comparing at least one of a first emotion as a type of a currently generated emotion or a first driving situation as a driving situation corresponding to a first time which is a time at which the emotion is recognized, with at least one of a second emotion as a type of an emotion that is previously recognized compared with the emotion or a second driving situation as a driving situation corresponding to a second time which is a time at which the previously generated emotion is recognized; and
based on a result of the comparing, determining the currently generated emotion as one of a transient emotion, a sequential emotion, and a repetitive emotion,
wherein the determining comprises:
determining whether the first time is within a threshold time from the second time, and
when the first time is within the threshold time from the second time, comparing the first driving situation with the second driving situation.
2. The method of claim 1, further comprising: when the currently generated emotion is determined to be the repetitive emotion, providing a service corresponding to the first emotion.
3. The method of claim 1, further comprising: when the currently generated emotion is determined to be the sequential emotion, determining a representative emotion among a plurality of emotions comprising at least the first emotion and the second emotion and constituting the sequential emotion.
4. The method of claim 1, wherein the determining further comprises: when the first time occurs after the threshold time from the second time, determining the currently generated emotion as the transient emotion.
5. The method of claim 1, wherein the determining further comprises: when the first driving situation and the second driving situation are the same, determining the currently generated emotion as the sequential emotion.
6. The method of claim 1, wherein the determining further comprises: when the first driving situation and the second driving situation are different from each other, comparing the first emotion with the second emotion.
7. The method of claim 6, wherein the determining further comprises: when the first emotion and the second emotion are the same, determining the currently generated emotion as the repetitive emotion.
8. The method of claim 6, wherein the determining further comprises: when the first emotion and the second emotion are different from each other, determining the currently generated emotion as the transient emotion.
9. The method of claim 1, wherein the determining further comprises determining whether the first time is within a preset first threshold time from the second time.
10. The method of claim 9, wherein the determining further comprises: when the first time occurs after the first threshold time from the second time, determining whether the first time is within a second threshold time longer than the first threshold time from the second time.
11. The method of claim 10, wherein the determining further comprises: when the first time occurs after the second threshold time from the second time, determining the currently generated emotion as the transient emotion.
12. The method of claim 10, wherein the determining further comprises: when the first time is within the second threshold time from the second time, comparing the first emotion with the second emotion.
13. The method of claim 12, wherein the determining further comprises:
when the first emotion and the second emotion are the same, determining the currently generated emotion as the repetitive emotion; and
when the first emotion and the second emotion are different from each other, determining the currently generated emotion as the transient emotion.
14. The method of claim 9, wherein the determining further comprises: when the first time is within the first threshold time from the second time, comparing the first driving situation with the second driving situation.
15. The method of claim 14, wherein the determining further comprises:
when the first driving situation and the second driving situation are the same, determining the currently generated emotion as the sequential emotion; and
when the first driving situation and the second driving situation are different from each other, comparing the first emotion with the second emotion and determining that the currently generated emotion is the repetitive emotion or the transient emotion according to whether the first emotion and the second emotion are the same.
16. A non-transitory computer-readable recording medium having recorded thereon a program for executing the method of claim 1.
17. An apparatus for providing an emotion-recognition-based service for a vehicle, the apparatus comprising:
an emotion recognizer configured to determine an emotion of an occupant;
a driving situation determiner configured to monitor a driving situation; and
an emotion form determiner configured to, when the emotion recognizer recognizes the emotion of the occupant, compare at least one of a first emotion as a type of a currently generated emotion or a first driving situation as a driving situation corresponding to a first time which is a time at which the emotion is recognized, with at least one of a second emotion as a type of an emotion that is previously recognized compared with the emotion or a second driving situation as a driving situation corresponding to a second time which is a time at which the previously generated emotion is recognized, to determine the currently generated emotion as one of a transient emotion, a sequential emotion, and a repetitive emotion depending on a result of the comparing, and to compare the first driving situation with the second driving situation when the first time is within a threshold time from the second time.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020200094593A KR20220014674A (en) 2020-07-29 2020-07-29 In-vehicle emotion based service providing device and method of controlling the same
KR10-2020-0094593 2020-07-29

Publications (2)

Publication Number Publication Date
US20220036048A1 (en) 2022-02-03
US11315362B2 (en) 2022-04-26

Family

ID=80004448

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/084,010 Active US11315362B2 (en) 2020-07-29 2020-10-29 Emotion-recognition-based service provision apparatus for vehicle and method of controlling the same

Country Status (3)

Country Link
US (1) US11315362B2 (en)
KR (1) KR20220014674A (en)
CN (1) CN114084146A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11574203B2 (en) * 2017-03-30 2023-02-07 Huawei Technologies Co., Ltd. Content explanation method and apparatus

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20240104975A1 (en) * 2022-09-27 2024-03-28 Bendix Commercial Vehicle Systems Llc System and method for detecting and evaluating bursts of driver performance events

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180303397A1 (en) * 2010-06-07 2018-10-25 Affectiva, Inc. Image analysis for emotional metric evaluation
KR101901417B1 (en) 2011-08-29 2018-09-27 한국전자통신연구원 System of safe driving car emotion cognitive-based and method for controlling the same
US20150345981A1 (en) * 2014-05-29 2015-12-03 GM Global Technology Operations LLC Adaptive navigation and location-based services based on user behavior patterns
US20160127641A1 (en) * 2014-11-03 2016-05-05 Robert John Gove Autonomous media capturing
US20160217321A1 (en) * 2015-01-23 2016-07-28 Shindig. Inc. Systems and methods for analyzing facial expressions within an online classroom to gauge participant attentiveness
US20170311863A1 (en) * 2015-02-13 2017-11-02 Omron Corporation Emotion estimation device and emotion estimation method
US20180050696A1 (en) * 2016-08-16 2018-02-22 Honda Motor Co., Ltd. Vehicle data selection system for modifying automated driving functionalities and method thereof
US20180144185A1 (en) * 2016-11-21 2018-05-24 Samsung Electronics Co., Ltd. Method and apparatus to perform facial expression recognition and training
US20190276036A1 (en) 2016-11-28 2019-09-12 Honda Motor Co., Ltd. Driving assistance device, driving assistance system, program, and control method for driving assistance device
JP6648304B2 (en) 2016-11-28 2020-02-14 本田技研工業株式会社 Driving support device, driving support system, program, and control method of driving support device
US10187690B1 (en) * 2017-04-24 2019-01-22 Gopro, Inc. Systems and methods to detect and correlate user responses to media content
US20190163965A1 (en) * 2017-11-24 2019-05-30 Genesis Lab, Inc. Multi-modal emotion recognition device, method, and storage medium using artificial intelligence
US10880601B1 (en) * 2018-02-21 2020-12-29 Amazon Technologies, Inc. Dynamically determining audience response to presented content using a video feed
US20200282980A1 (en) * 2019-03-07 2020-09-10 Honda Motor Co., Ltd. System and method for teleoperation service for vehicle

Also Published As

Publication number Publication date
US20220036048A1 (en) 2022-02-03
KR20220014674A (en) 2022-02-07
CN114084146A (en) 2022-02-25

Similar Documents

Publication Publication Date Title
JP4286860B2 (en) Operation content determination device
US11854550B2 (en) Determining input for speech processing engine
JP4728972B2 (en) Indexing apparatus, method and program
US10529328B2 (en) Processing speech signals in voice-based profiling
US10800043B2 (en) Interaction apparatus and method for determining a turn-taking behavior using multimodel information
US11315362B2 (en) Emotion-recognition-based service provision apparatus for vehicle and method of controlling the same
EP3618063B1 (en) Voice interaction system, voice interaction method and corresponding program
US11393249B2 (en) Apparatus and method of providing vehicle service based on individual emotion recognition
Erzin et al. Multimodal person recognition for human-vehicle interaction
Mower et al. A hierarchical static-dynamic framework for emotion classification
WO2022142614A1 (en) Dangerous driving early warning method and apparatus, computer device and storage medium
CN112307816A (en) In-vehicle image acquisition method and device, electronic equipment and storage medium
KR20190056520A (en) Analysis Method for Forward Concentration using a Facial Expression Recognition Technology
Tong et al. Automatic assessment of dysarthric severity level using audio-video cross-modal approach in deep learning
KR101950721B1 (en) Safety speaker with multiple AI module
KR20220014943A (en) Method and system for determining driver emotion in conjunction with driving environment
Pant et al. Driver's Companion-Drowsiness Detection and Emotion Based Music Recommendation System
US11450209B2 (en) Vehicle and method for controlling thereof
JP7511374B2 (en) Speech activity detection device, voice recognition device, speech activity detection system, speech activity detection method, and speech activity detection program
Noor et al. Audio visual emotion recognition using cross correlation and wavelet packet domain features
Malcangi et al. Evolving fuzzy-neural method for multimodal speech recognition
KR102350068B1 (en) Deception detection method using biometric information
Bedoya et al. Laughter detection based on the fusion of local binary patterns, spectral and prosodic features
KR102479400B1 (en) Real-time Lip Reading Interface System based on Deep Learning Model Using Video
KR101092489B1 (en) Speech recognition system and method

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: KIA MOTORS CORPORATION, KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LEE, JIN MO;MIN, YOUNG BIN;REEL/FRAME:054218/0841

Effective date: 20201021

Owner name: HYUNDAI MOTOR COMPANY, KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LEE, JIN MO;MIN, YOUNG BIN;REEL/FRAME:054218/0841

Effective date: 20201021

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE