WO2018119924A1 - Method and apparatus for adjusting a user's emotion (一种调节用户情绪的方法及装置) - Google Patents

Method and apparatus for adjusting a user's emotion (一种调节用户情绪的方法及装置)

Info

Publication number
WO2018119924A1
WO2018119924A1 (application PCT/CN2016/113149)
Authority
WO
WIPO (PCT)
Prior art keywords
user
data
terminal device
emotion
information
Prior art date
Application number
PCT/CN2016/113149
Other languages
English (en)
French (fr)
Inventor
李浩然
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to PCT/CN2016/113149 priority Critical patent/WO2018119924A1/zh
Priority to US16/473,946 priority patent/US11291796B2/en
Priority to CN201680080603.4A priority patent/CN108604246A/zh
Priority to EP16924971.1A priority patent/EP3550450A4/en
Publication of WO2018119924A1 publication Critical patent/WO2018119924A1/zh
Priority to HK18116138.9A priority patent/HK1257017A1/zh

Classifications

    • A - HUMAN NECESSITIES
        • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
            • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
                • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
                    • A61B 5/02 - Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood-pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
                        • A61B 5/0205 - Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
                        • A61B 5/021 - Measuring pressure in heart or blood vessels
                        • A61B 5/024 - Detecting, measuring or recording pulse rate or heart rate
                    • A61B 5/16 - Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
                        • A61B 5/165 - Evaluating the state of mind, e.g. depression, anxiety
                    • A61B 5/48 - Other medical applications
                        • A61B 5/4803 - Speech analysis specially adapted for diagnostic purposes
                        • A61B 5/486 - Bio-feedback
                    • A61B 5/74 - Details of notification to user or communication with user or patient; user input means
                        • A61B 5/7405 - using sound
                            • A61B 5/741 - using synthesised speech
                        • A61B 5/742 - using visual displays
            • A61M - DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
                • A61M 21/00 - Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
                    • A61M 21/02 - for inducing sleep or relaxation, e.g. by direct nerve stimulation, hypnosis, analgesia
                    • A61M 2021/0005 - by the use of a particular sense, or stimulus
                        • A61M 2021/0027 - by the hearing sense
                        • A61M 2021/0044 - by the sight sense
                            • A61M 2021/005 - images, e.g. video
                • A61M 2205/00 - General characteristics of the apparatus
                    • A61M 2205/35 - Communication
                        • A61M 2205/3576 - Communication with non-implanted data transmission devices, e.g. using external transmitter or receiver
                            • A61M 2205/3584 - using modem, internet or bluetooth
                    • A61M 2205/50 - with microprocessors or computers
                        • A61M 2205/52 - with memories providing a history of measured variating parameters of apparatus or patient
                • A61M 2230/00 - Measuring parameters of the user
                    • A61M 2230/04 - Heartbeat characteristics, e.g. ECG, blood pressure modulation
                        • A61M 2230/06 - Heartbeat rate only
                    • A61M 2230/30 - Blood pressure
    • G - PHYSICS
        • G06 - COMPUTING; CALCULATING OR COUNTING
            • G06F - ELECTRIC DIGITAL DATA PROCESSING
                • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
                    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
                        • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
                        • G06F 3/015 - Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
                • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
                • G06F 2203/00 - Indexing scheme relating to G06F3/00 - G06F3/048
                    • G06F 2203/01 - Indexing scheme relating to G06F3/01
                        • G06F 2203/011 - Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
            • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
                    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
                        • G06V 40/15 - Biometric patterns based on physiological signals, e.g. heartbeat, blood flow
                        • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
                            • G06V 40/174 - Facial expression recognition
                    • G06V 40/70 - Multimodal biometrics, e.g. combining information from different biometric modalities
    • H - ELECTRICITY
        • H04 - ELECTRIC COMMUNICATION TECHNIQUE
            • H04B - TRANSMISSION
                • H04B 1/00 - Details of transmission systems, not covered by a single one of groups H04B3/00 - H04B13/00; Details of transmission systems not characterised by the medium used for transmission
                    • H04B 1/38 - Transceivers, i.e. devices in which transmitter and receiver form a structural unit and in which at least one part is used for functions of transmitting and receiving
                        • H04B 1/3827 - Portable transceivers
                            • H04B 1/385 - Transceivers carried on the body, e.g. in helmets
            • H04M - TELEPHONIC COMMUNICATION
                • H04M 1/00 - Substation equipment, e.g. for use by subscribers
                    • H04M 1/72 - Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
                        • H04M 1/724 - User interfaces specially adapted for cordless or mobile telephones
                            • H04M 1/72448 - with means for adapting the functionality of the device according to specific conditions

Definitions

  • the present application relates to the field of information technology, and in particular, to a method and apparatus for adjusting user emotions.
  • the embodiment of the present application provides a method and apparatus for adjusting user emotions, and provides a convenient and effective method for adjusting people's emotions.
  • the embodiment of the present application provides a method for adjusting user emotions, including:
  • the terminal device acquires data for characterizing a physical condition of the user, where the data includes first data, and the first data is at least one parameter value detected for the user by a wearable device connected to the terminal device; the first data may be at least one of the following parameter values: pulse strength, heart rate value, blood pressure value, and the like.
  • the terminal device acquires emotion information determined based on the data; the terminal device performs an operation corresponding to the emotion information for adjusting the emotion of the user.
  • the above solution provides a convenient and effective way to adjust user emotions: no special equipment, such as a finger-clip probe, is needed; instead, parameter values for characterizing the user's physical condition are obtained in real time through the wearable device and sent to the terminal device. The terminal device determines the user's emotion based on the parameter values and then performs an operation to adjust it, providing convenience to the user with high real-time performance and operability.
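The wearable-to-terminal data flow described above can be sketched as a simple serialized message. The field names and JSON encoding below are illustrative assumptions; the application does not specify a transmission format.

```python
# Illustrative sketch of the data flow: the wearable device packages the detected
# parameter values and the terminal device receives them. Field names and the
# JSON encoding are assumptions, not specified by the application.
import json

def wearable_payload(heart_rate, blood_pressure, pulse_strength):
    """Serialize the detected parameter values for transmission to the terminal."""
    return json.dumps({
        "heart_rate": heart_rate,
        "blood_pressure": blood_pressure,
        "pulse_strength": pulse_strength,
    })

def terminal_receive(payload):
    """Deserialize on the terminal side into the 'first data' parameter dict."""
    return json.loads(payload)

first_data = terminal_receive(wearable_payload(72, 118, 40))
print(first_data["heart_rate"])  # 72
```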
  • the data acquired by the terminal device for characterizing the physical condition of the user may further include second data, where the second data is at least one parameter value of the user detected by the terminal device itself;
  • the terminal device obtains the emotion information determined based on the data, which may be implemented by: the terminal device acquiring the emotion information determined based on the first data and the second data.
  • the emotion is determined from both the data detected by the wearable device and the data detected by the terminal device itself, which improves the accuracy of the determination.
  • when the emotion information is determined, the determination may be executed by the terminal device, in which case the terminal device obtains the emotion information determined based on the data as follows:
  • the terminal device determines the emotion information according to a parameter value of each parameter in the data and a corresponding weight.
  • the weight corresponding to each parameter may be saved in a user parameter table. Specifically, the terminal device searches the user parameter table to obtain the weight corresponding to each of the at least one parameter, multiplies each parameter value by its corresponding weight, and sums the products to obtain the user's emotion value; the terminal device then generates the emotion information based on the emotion value.
  • the user parameter table may be pre-configured on the terminal device, or may be sent by a server.
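The weighted-sum computation described above can be sketched as follows. The parameter names, weight values, and the threshold mapping from emotion value to emotion label are assumptions for illustration; the application does not fix any of them.

```python
# Illustrative sketch of the weighted-sum emotion computation described above.
# Parameter names, weights, and the threshold mapping are assumed for the example.

def emotion_value(parameters, weight_table):
    """Multiply each parameter value by its weight and sum the products."""
    return sum(value * weight_table[name] for name, value in parameters.items())

def emotion_info(value, calm_threshold=100.0):
    """Map the numeric emotion value to a coarse emotion label (assumed mapping)."""
    return "calm" if value < calm_threshold else "agitated"

# A user parameter table pairing each parameter with its weight (assumed values).
weights = {"heart_rate": 0.6, "blood_pressure": 0.3, "pulse_strength": 0.1}
# Parameter values reported by the wearable device (assumed units).
data = {"heart_rate": 110, "blood_pressure": 130, "pulse_strength": 50}

value = emotion_value(data, weights)  # 0.6*110 + 0.3*130 + 0.1*50 = 110.0
print(emotion_info(value))            # agitated, since 110.0 >= 100.0
```

A real implementation would read the weights from a stored, per-user table (pre-configured or pushed by the server, as the text notes) rather than hard-coding them.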
  • alternatively, the emotion information may be determined by a server, in which case the terminal device obtains the emotion information determined based on the data as follows:
  • the terminal device sends the acquired data for characterizing the physical condition of the user to the server, and receives the emotion information returned by the server according to the data.
  • the server may generate the emotion information based on the user parameter table and the data, and then send the information to the terminal device, so that the terminal device receives the emotion information sent by the server.
  • the user parameter table may be updated as follows: the server updates the table based on the data for characterizing the user's physical condition received within the first set time period closest to the moment at which the data is currently received, together with the corresponding emotion information confirmed by the user.
  • having the server determine the emotion information reduces the computing resources occupied on the terminal device and improves operating efficiency.
  • if emotions were determined using a fixed parameter table, differences between users would not be taken into consideration, for example that one user's heart rate is relatively fast while another's is relatively slow; updating the parameter weights therefore improves the accuracy of the emotion judgment.
  • the operation performed by the terminal device to adjust the emotion of the user corresponding to the emotion information may be implemented as follows:
  • the terminal device prompts the current emotional state of the user.
  • in this way, the user can know whether the emotional state displayed by the terminal device is accurate.
  • the terminal device prompts the current emotional state of the user, which can be implemented as follows:
  • the terminal device prompts the user's current emotional state by voice or through the interface; or the terminal device sends the emotional state to the wearable device, and the wearable device prompts the user with the emotional state.
  • the above design prompts the user's emotional state through voice or interface, so that the user can know whether the terminal device displays the emotional state accurately, and can realize the interaction between the user and the terminal device, thereby realizing human-machine harmony.
  • the terminal device performs an operation for adjusting the user's emotion corresponding to the emotion information, and may also be implemented as follows:
  • the terminal device recommends activity information or interaction information for adjusting emotions to the user; the interaction information is used to interact with other users.
  • the server may determine activity information of nearby users who are in a good mood and send it to the terminal device, so that the terminal device receives the activity information sent by the server for recommendation to the user to adjust the emotion, and displays the activity information to the user. Because the ways in which nearby people adjust their emotions are often equally applicable to other users, considering how people nearby adjust their emotions increases the effectiveness of the adjustment.
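The recommendation step just described can be read as a filter over recent records of nearby users. The record fields and the "happy" emotion label below are assumptions for illustration, not values from the application.

```python
# Sketch of the server-side recommendation: among nearby users observed in the
# recent time period, collect the activities of those whose emotion is positive.
# Record fields and emotion labels are assumed for the example.

def recommend_activities(nearby_records, positive=("happy",)):
    """Return the distinct activities of nearby users whose emotion is positive."""
    seen, activities = set(), []
    for record in nearby_records:
        if record["emotion"] in positive and record["activity"] not in seen:
            seen.add(record["activity"])
            activities.append(record["activity"])
    return activities

records = [
    {"user": "A", "emotion": "happy", "activity": "running in the park"},
    {"user": "B", "emotion": "sad",   "activity": "working overtime"},
    {"user": "C", "emotion": "happy", "activity": "running in the park"},
]
print(recommend_activities(records))  # ['running in the park']
```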
  • after the terminal device prompts the user's current emotional state, the terminal device receives an operation instruction triggered by the user, where the operation instruction indicates that the user approves or modifies the emotion information; the terminal device then sends the user-approved emotion information or the modified emotion information to the server.
  • the server modifies the weight of each parameter in the user parameter table based on the corrected emotion information and the data used to represent the user's physical condition, thereby improving the accuracy with which the server determines the user's emotions.
  • the method may further include:
  • the terminal device detects an input instruction triggered by the user, and the input instruction carries activity information corresponding to an activity performed by the user in a second set time period that is closest to the current time;
  • the terminal device receives user information of other users sent by the server; the other users are users who, in the second set time period closest to the current time, performed the same activity as the user and have the same emotion information as the user.
  • the terminal device displays the user information to the user, so that the user interacts with other users corresponding to the user information.
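The matching of "other users" described above can be sketched as a join on activity and emotion. The field names and labels are illustrative assumptions.

```python
# Sketch of matching the user with others who performed the same activity in the
# recent period and share the same emotion information (fields are assumed).

def match_users(user, others):
    """Return names of others whose activity and emotion both equal the user's."""
    return [o["name"] for o in others
            if o["activity"] == user["activity"] and o["emotion"] == user["emotion"]]

me = {"name": "me", "activity": "hiking", "emotion": "sad"}
others = [
    {"name": "u1", "activity": "hiking",  "emotion": "sad"},
    {"name": "u2", "activity": "hiking",  "emotion": "happy"},
    {"name": "u3", "activity": "reading", "emotion": "sad"},
]
print(match_users(me, others))  # ['u1']
```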
  • the server uses data mining technology to synthesize the emotion information of users around the user together with the user's sensor data, and provides corresponding suggestions to the user. For example, if the user's mood is sad, the server comprehensively processes data from nearby users whose mood is happy and offers a suggestion (for example, if the server finds that nearby users in a happy mood have recently been running in a certain park, it advises the user to go running in that park).
  • the terminal device may store activity information corresponding to different emotion information, so that when performing the operation corresponding to the emotion information for adjusting the user's emotion, the terminal device may directly determine the activity information corresponding to the emotion information for recommendation to the user and display that activity information.
  • the terminal device adjusts the user's emotions by interacting with the user, making the terminal device more humanized, and embodying the harmony of man and machine.
  • the operation performed by the terminal device to adjust the emotion of the user corresponding to the emotion information may be implemented as follows:
  • the terminal device updates a theme, a wallpaper, or a ringtone of the terminal device based on the emotion information.
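Updating the theme, wallpaper, or ringtone from the emotion information amounts to a lookup from emotion label to device resources. The labels and resource names below are assumptions for illustration.

```python
# Sketch of adjusting the terminal's theme, wallpaper, or ringtone from the
# detected emotion; the emotion labels and resource names are assumptions.

SOOTHING = {"theme": "soft-blue", "wallpaper": "beach.jpg", "ringtone": "piano.ogg"}
DEFAULT = {"theme": "standard", "wallpaper": "city.jpg", "ringtone": "bell.ogg"}

def settings_for(emotion):
    """Pick soothing resources for negative emotions, defaults otherwise."""
    return SOOTHING if emotion in {"sad", "angry", "anxious"} else DEFAULT

print(settings_for("angry")["ringtone"])  # piano.ogg
```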
  • when the operation for adjusting the user's emotion corresponding to the emotion information is performed, the user's emotion may be adjusted in one way or in multiple ways, which makes the solution more practical and operable.
  • the embodiment of the present application further provides a method for adjusting user emotions, where the method is applied to a wearable device, including:
  • the wearable device detects at least one parameter value used to characterize the user's physical condition;
  • the wearable device transmits the at least one parameter value to the terminal device connected to the wearable device, so that the terminal device performs an operation for adjusting the user's emotion based on the at least one parameter value.
  • the above solution provides a convenient and effective way to adjust user emotions: no special equipment, such as a finger-clip probe, is needed; instead, parameter values for characterizing the user's physical condition are obtained in real time through the wearable device and sent to the terminal device, whereby the terminal device determines the user's emotion based on the parameter values and then performs an operation to adjust it, providing convenience to the user with high real-time performance and operability.
  • the wearable device receives activity information sent by the terminal device for recommendation to the user, and displays the activity information. Since the wearable device is worn by the user at all times, displaying the activity information through the wearable device offers better real-time performance.
  • the wearable device receives an instruction sent by the terminal device to update a theme, update a wallpaper, update a ringtone, update a prompt tone, or play music; the wearable device then updates the theme, wallpaper, ringtone, or prompt tone, or plays music, based on the instruction. Since the wearable device is worn by the user at all times, adjusting the ringtone, theme, wallpaper, and the like through the wearable device to adjust the user's emotions is more effective in real time.
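On the wearable side, handling these instructions is naturally a dispatch table keyed by instruction name. The instruction names and the logging stand-in for real device actions are assumptions for illustration.

```python
# Sketch of the wearable side dispatching an instruction from the terminal.
# Instruction names are assumed; appending to a log stands in for real actions.

def handle_instruction(instruction, payload, log):
    """Dispatch a terminal-issued instruction to the matching handler."""
    handlers = {
        "update_theme":     lambda p: log.append(f"theme set to {p}"),
        "update_wallpaper": lambda p: log.append(f"wallpaper set to {p}"),
        "update_ringtone":  lambda p: log.append(f"ringtone set to {p}"),
        "play_music":       lambda p: log.append(f"playing {p}"),
    }
    handlers[instruction](payload)

log = []
handle_instruction("play_music", "calm-playlist", log)
print(log[0])  # playing calm-playlist
```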
  • the embodiment of the present application further provides a method for adjusting user emotions, where the method includes:
  • the server searches the user parameter table corresponding to the user to obtain the weight corresponding to each of the at least one parameter, multiplies each parameter value by its corresponding weight, and sums the products to obtain the user's emotion value;
  • the user parameter table is updated based on the data for characterizing the user's physical condition that the server received within the first set time period closest to the moment at which the data is currently received, together with the corresponding emotion information confirmed by the user;
  • the server generates the emotion information based on the user's emotion value and sends the emotion information to the terminal device, where the emotion information is used to instruct the terminal device to perform the operation corresponding to the emotion information for adjusting the user's emotion.
  • having the server determine the emotion information reduces the computing resources occupied on the terminal device and improves operating efficiency.
  • if emotions were determined using a fixed parameter table, differences between users would not be taken into consideration, for example that one user's heart rate is relatively fast while another's is relatively slow; updating the parameter weights therefore improves the accuracy of the emotion judgment.
  • after the server sends the user's emotion value to the terminal device, the method further includes:
  • the server receives the emotion information that is sent by the terminal device and confirmed by the user.
  • the user parameter table is updated as follows:
  • the user parameter table is updated using the user's own data and the confirmed emotion, which improves the accuracy of determining the emotion information.
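The application only states that the weights are updated from the user's data and the confirmed emotion; it does not give an update rule. As one plausible illustration, a simple error-driven nudge (in the style of a least-mean-squares step) could look like this. The rule, learning rate, and parameter names are all assumptions.

```python
# Illustrative weight update for the user parameter table: nudge each weight by
# the confirmation error scaled by the observed parameter value and a small
# learning rate. The rule itself is an assumption, not taken from the application.

def update_weights(weights, data, predicted_value, confirmed_value, rate=0.0001):
    """Shift each weight by rate * error * parameter value (assumed rule)."""
    error = confirmed_value - predicted_value
    return {name: w + rate * error * data[name] for name, w in weights.items()}

weights = {"heart_rate": 0.6, "blood_pressure": 0.3}
data = {"heart_rate": 100, "blood_pressure": 120}
# Predicted 0.6*100 + 0.3*120 = 96.0, but the user-confirmed value was 106.0.
new = update_weights(weights, data, predicted_value=96.0, confirmed_value=106.0)
print(round(new["heart_rate"], 3))  # 0.7
```

Because the weights are personalized this way, a user with a naturally fast resting heart rate gradually stops being misread as agitated, which is the benefit the text claims.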
  • before the server generates the user's emotion value based on the user parameter table corresponding to the user and the data used to represent the user's physical condition, the method further includes:
  • the method further includes:
  • the server acquires user information of other users who are in the first emotional state and whose activity information is the same as that of the user;
  • the server sends the user information of the other user to the terminal device, so that the user interacts with other users corresponding to the user information.
  • the server recommends to the user other users who share the same interest, and the interaction between the two users adjusts both users' emotions simultaneously.
  • the embodiment of the present application further provides a device for adjusting user emotion, and a description of the corresponding effect of the device can be found in the method embodiment.
  • the device is applied to a terminal device, including:
  • a transceiver configured to receive first data sent by the wearable device connected to the terminal device; the first data is at least one parameter value detected by the wearable device for the user;
  • a processor configured to acquire data for characterizing a physical condition of the user, the data including the first data received by the transceiver, to acquire emotion information determined based on the data, and to perform an operation corresponding to the emotion information for adjusting the user's emotion.
  • the above solution provides a convenient and effective way to adjust user emotions: no special equipment, such as a finger-clip probe, is needed; instead, parameter values for characterizing the user's physical condition are obtained in real time through the wearable device and sent to the terminal device. The terminal device determines the user's emotion based on the parameter values and then performs an operation to adjust it, providing convenience to the user with high real-time performance and operability.
  • the data for characterizing the physical condition of the user further includes second data
  • the device further includes:
  • At least one sensor for detecting second data for characterizing a physical condition of the user, the second data comprising at least one parameter value;
  • when acquiring the emotion information determined based on the data, the processor is specifically configured to acquire emotion information determined based on the first data and the second data.
  • when acquiring the emotion information determined based on the data, the processor is specifically configured to:
  • the emotion information is determined based on parameter values of each parameter in the data and corresponding weights.
  • the transceiver is further configured to send the data acquired by the processor to a server, and receive the emotion information returned by the server according to the data.
  • when performing the operation corresponding to the emotion information for adjusting the user's emotion, the processor is specifically configured to:
  • the device may further include: a speaker for voice prompting;
  • the processor is specifically configured to send a voice prompt through the speaker to inform the user of the emotional state the user is currently in.
  • the device may further include: a display device, configured to display prompt information;
  • the processor is specifically configured to prompt the user with the current emotional state through an interface displayed on the display device.
  • the transceiver is further configured to send the emotional state to the wearable device, so that the wearable device prompts the user with the emotional state the user is currently in.
  • the processor is further configured to recommend, by using a display device, activity information or interaction information for adjusting emotions to the user;
  • the interactive information is used to interact with other users.
  • the first data includes at least one of the following parameter values: a heart rate value, a blood pressure value, or a pulse strength.
  • the second data includes at least one of the following parameter values: speech rate, speech intensity, pressing screen strength or facial expression;
  • the at least one sensor includes at least one of the following:
  • a voice receiver for detecting a speech rate and/or detecting a speech intensity
  • a pressure sensor for detecting the strength of pressing the screen
  • an image sensor for detecting facial expressions
  • the device may further include:
  • a transceiver configured to receive an operation instruction triggered by the user after the processor prompts the user with the current emotional state, and to send the user-approved emotion information or the modified emotion information to the server; the operation instruction is used to indicate that the user approves the emotion information or modifies the emotion information.
  • the embodiment of the present application further provides a device for adjusting user emotion; for a description of the corresponding effects of the device, refer to the method embodiment.
  • the device is applied to a wearable device, including:
  • At least one sensor for detecting at least one parameter value for characterizing a physical condition of the user
  • a transceiver configured to send the at least one parameter value to the terminal device, so that the terminal device performs an operation for adjusting the user's emotion based on the at least one parameter value.
  • the above solution provides a convenient and effective method for adjusting user emotions: no special equipment, such as a finger-clip probe, is needed; instead, parameter values for characterizing the user's physical condition are obtained in real time through the wearable device and sent to the terminal device.
  • the terminal device determines the user's emotion based on the parameter values and then performs an operation for adjusting the user's emotion, which is convenient for the user and offers high real-time performance and operability.
  • the transceiver is further configured to receive activity information sent by the terminal device for recommendation to the user;
  • a display device for displaying the activity information.
  • the transceiver is further configured to receive an instruction, sent by the terminal device, for updating a theme, updating a wallpaper, updating a prompt tone, updating a ringtone, or playing music;
  • the display device is further configured to display a wallpaper or a theme
  • the apparatus also includes a processor for updating a theme based on the instruction, updating a wallpaper, updating a prompt tone, updating a ringtone, or playing music;
  • the display device is further configured to display a wallpaper or theme updated by the processor
  • the apparatus also includes a speaker for issuing a prompt tone updated by the processor, or issuing a ringtone updated by the processor, or playing music.
  • the embodiment of the present application further provides a device for adjusting a user's emotions, where the device is applied to a terminal device, including:
  • a data collection module configured to acquire data for characterizing a physical condition of the user, where the data includes first data, and the first data is at least one parameter value detected, for the user, by the wearable device connected to the terminal device;
  • a data interaction module configured to acquire emotion information determined based on the data
  • an execution module configured to execute an operation corresponding to the emotion information for adjusting the emotion of the user.
  • the data collected by the data collection module for characterizing the physical condition of the user further includes second data, where the second data is at least one parameter value of the user detected by the terminal device.
  • the data interaction module is specifically configured to acquire emotion information determined based on the first data and the second data.
  • the data interaction module is specifically configured to:
  • the emotion information is determined based on parameter values of each parameter in the data and corresponding weights.
  • the data interaction module is further configured to:
  • the execution module is specifically configured to:
  • the execution module is specifically configured to prompt the user with the emotional state by voice or interface display, or to send the emotional state to the wearable device, so that the wearable device prompts the user with the emotional state the user is currently in.
  • the execution module is specifically configured to recommend activity information or interaction information for adjusting emotions to the user;
  • the interactive information is used to interact with other users.
  • the first data includes at least one of the following parameter values: a heart rate value, a blood pressure value, or a pulse strength.
  • the second data includes at least one of the following parameter values: speech rate, speech intensity, press screen strength, or facial expression.
  • the data interaction module is further configured to: after the execution module prompts the user with the current emotional state, receive an operation instruction triggered by the user, and send the user-approved emotion information or the modified emotion information to the server; the operation instruction is used to indicate that the user approves the emotion information or modifies the emotion information.
  • the embodiment of the present application further provides an apparatus for adjusting user emotions, including:
  • a detecting module configured to detect at least one parameter value used to represent a physical condition of the user
  • a sending module configured to send the at least one parameter value to a terminal device connected to the wearable device, so that the terminal device performs, for the user, based on the at least one parameter value The operation of emotional adjustment.
  • the device further includes:
  • a receiving module configured to receive activity information sent by the terminal device for recommending to the user
  • a display module for displaying the activity information.
  • the device further includes:
  • a receiving module configured to receive an instruction, sent by the terminal device, for updating a theme, updating a wallpaper, updating a prompt tone, updating a ringtone, or playing music
  • a processing module configured to update a theme, update a wallpaper, update a prompt tone, update a ringtone, or play music based on the instruction.
  • FIG. 1 is a schematic diagram of a system framework for adjusting user emotions according to an embodiment of the present application
  • FIG. 2A is a schematic diagram of a terminal device according to an embodiment of the present application.
  • FIG. 2B is a schematic diagram of a wearable device according to an embodiment of the present application.
  • FIG. 3 is a flowchart of a method for adjusting user emotions according to an embodiment of the present application
  • FIG. 4 is a schematic flowchart of a method for updating a user parameter table according to an embodiment of the present application
  • FIG. 5A to FIG. 5C are schematic diagrams showing user emotional state display according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of displaying a user emotion state selection interface according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a method for adjusting user emotions according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram of an interface display of a user input activity according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of activity information display according to an embodiment of the present application.
  • FIG. 10A is a schematic diagram of display of other user information according to an embodiment of the present application.
  • FIG. 10B is a schematic diagram of interaction between users according to an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a display of an emotional state of a wearable device according to an embodiment of the present disclosure.
  • FIG. 12 is a schematic diagram of an apparatus for adjusting user emotions according to an embodiment of the present disclosure.
  • FIG. 13 is a schematic diagram of another apparatus for adjusting user emotions according to an embodiment of the present application.
  • the above scheme can help the user to adjust the mood to some extent.
  • the above method for adjusting emotions requires professional acquisition equipment, does not consider differences between users, and its adjustment scheme does not change intelligently in real time. Therefore, for many users, the way of adjusting emotions is not intelligent and the effect is less obvious.
  • the embodiments of the present application provide a method and apparatus for adjusting user emotions, and provide a convenient and effective method for adjusting people's emotions.
  • the method and the device are based on the same inventive concept. Since the principles by which the method and the device solve the problem are similar, the implementations of the device and the method can be referred to each other, and repeated descriptions are omitted.
  • the system framework for adjusting user emotions applied in the embodiment of the present application includes at least one terminal device and a server.
  • the adjustment system framework of the user emotion shown in FIG. 1 includes a terminal device 110a and a terminal device 110b, and a server 120.
  • the terminal device 110a and the terminal device 110b are connected wirelessly, and the available wireless modes include, but are not limited to, various wireless short-distance communication modes, such as Bluetooth, Near Field Communication (NFC), ZigBee, infrared, and Wireless Fidelity (WiFi).
  • the wireless communication method that can be adopted may be mobile communication, including but not limited to second-generation, third-generation, fourth-generation, or fifth-generation mobile communication technology, or other wireless communication methods, such as WiFi or other wireless short-range communication.
  • the server of the embodiment of the present application may be a service computer, a mainframe computer, or the like.
  • the terminal device of the embodiment of the present application includes, but is not limited to, a personal computer, a handheld or laptop device, and a mobile device (such as a mobile phone, a tablet, a smart bracelet, a smart watch, or a personal digital assistant).
  • the following takes an example of interaction between a terminal device and a server, and specifically describes a method for adjusting user emotions.
  • the following description takes the smart phone 300 as an example of the terminal device, as shown in FIG. 2A.
  • the smartphone 300 includes a display device 310, a processor 320, and a memory 330.
  • the memory 330 can be used to store software programs and data, and the processor 320 executes various functional applications and data processing of the smartphone 300 by running software programs and data stored in the memory 330.
  • the memory 330 may mainly include a storage program area and a storage data area, where the storage program area may store an operating system, an application required for at least one function, and the like, and the storage data area may store data created according to the use of the smart phone 300 (e.g., audio data, a phone book, etc.). Moreover, the memory 330 can include high-speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • the processor 320 is the control center of the smartphone 300; it connects various parts of the entire terminal using various interfaces and lines, and performs the various functions of the smartphone 300 and processes data by running or executing the software programs and/or data stored in the memory 330, thereby monitoring the terminal as a whole.
  • the processor 320 may include one or more general-purpose processors, and may also include one or more digital signal processors (DSPs) for performing related operations to implement the embodiments of the present application.
  • DSPs digital signal processors
  • the smartphone 300 may also include an input device 340 for receiving input digital information, character information or contact touch/contactless gestures, and generating signal inputs related to user settings and function control of the smartphone 300, and the like.
  • the input device 340 may include a touch panel 341.
  • the touch panel 341, also referred to as a touch screen, can collect the user's touch operations on or near it (such as operations performed by the user on or near the touch panel 341 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connecting device according to a preset program.
  • the touch panel 341 can include two parts: a touch detection device and a touch controller.
  • the touch detection device detects the position touched by the user and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 320, and can also receive commands from the processor 320 and execute them. For example, the user clicks an image thumbnail on the touch panel 341 with a finger; the touch detection device detects the signal brought by the click and transmits it to the touch controller; the touch controller converts the signal into coordinates and sends them to the processor 320. The processor 320 determines the operation to be performed on the image (such as image enlargement or full-screen display of the image) according to the coordinates and the type of the signal (click or double-click), determines the memory space required to execute the operation, and, if the required memory space is smaller than the free memory, displays the enlarged image in full screen on the display panel 311 included in the display device, thereby realizing image display.
  • the touch panel 341 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • the input device 340 may also include other input devices 342, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons and switch buttons), a trackball, a mouse, a joystick, and the like.
  • the display device 310 includes a display panel 311 for displaying information input by the user or information provided to the user, and various menu interfaces of the terminal device 300, etc., which are mainly used for displaying images in the smart phone 300 in the embodiment of the present application.
  • the display panel 311 may be configured in the form of a liquid crystal display (LCD) or an organic light-emitting diode (OLED) display.
  • the touch panel 341 can cover the display panel 311 to form a touch display screen.
  • the smart phone 300 may further include a transceiver.
  • the transceiver includes a wireless communication module, such as a radio frequency (RF) circuit 380 for network communication with wireless network devices; it may also include a WiFi module 381 for WiFi communication with other devices, and may further include an infrared module or a Bluetooth module and the like.
  • the smartphone 300 can also include a speaker 390 for playing music, voice prompts, or beeping sounds and the like.
  • the system for adjusting user emotion includes a wearable device 300a, such as a wristband or a watch, in addition to the smart phone 300.
  • the wearable device 300a may include a transceiver 301a, where the transceiver 301a may include at least one of the following: an infrared module, a Bluetooth module, or a WiFi module. That is, the wearable device communicates with the smart phone 300 through the transceiver 301a.
  • the wearable device 300a may further include a radio frequency circuit, specifically connected to the server through the radio frequency circuit.
  • One or more sensors 302a may be included in the wearable device, such as a body temperature sensor, a pulse sensor, a blood pressure sensor, and the like.
  • the body temperature sensor is used to collect the user's body temperature
  • the pulse sensor is used to collect the user's pulse
  • the blood pressure sensor is used to collect the user's blood pressure.
  • the wearable device may further include a display device 303a for displaying prompt information, a picture, a theme, a wallpaper, or event information, and the like.
  • the wearable device 300a may also include a processor 304a for performing operations such as collection of various sensor data.
  • the wearable device 300a may also include a speaker 305a.
  • the speaker 305a is used for voice prompts, play sounds, play ringtones, and the like.
  • the wearable device 300a also includes a memory 306a that can be used to store data and the software programs executed by the processor 304a, and the like.
  • Data for characterizing a user's physical condition: data that directly or indirectly characterizes the user's physical state, including at least one parameter value of the user, such as heart rate, body temperature, pulse, blood pressure, and light perceived by the user.
  • Emotion information: used to reflect the user's current mood and health status. Emotions can be divided into normal emotions and abnormal emotions; for example, joy and happiness are normal emotions, while anger, sorrow, and fear are abnormal emotions. Health status is classified as healthy or unhealthy, and abnormal emotions also belong to the unhealthy status. Of course, the unhealthy status also covers the possibility that the user may become ill, such as hyperthermia and fever, or pauses between heartbeats, and so on.
  • the emotion information may include the emotion value of the user and/or the emotional state of the user. Emotional states include normal or abnormal emotions, and may include joy, sadness, happiness, anger, excitement, and the like.
  • different emotional states correspond to different ranges of emotion values. For example, when people are excited, their emotions fluctuate relatively strongly, so the range of emotion values corresponding to excitement is relatively high. For joy, mood fluctuations are relatively small, so the corresponding range of emotion values is relatively low.
  • the specific emotion value ranges may be configured according to statistics of case data and the actual situation, which is not specifically limited in the embodiment of the present application.
  • different emotional states may also correspond to different parameter value ranges, for example, when the blood pressure is greater than the first preset value, the pulse is within the preset range, and the body temperature is less than the second preset value, indicating fear.
  • Activity information: describes the activities that the user participates in during a certain time period, such as walking, running, or reading in the park; it can also be suggestions to the user, such as reminding the user not to get excited or to apply sunscreen.
  • User parameter table: includes the weights corresponding to different parameters. For example, the weight corresponding to body temperature is a1, the weight corresponding to pulse is a2, and so on.
  • The operation of adjusting user emotions: includes, but is not limited to, displaying activity information, displaying interaction information, playing music, popping up a picture, playing a voice, modifying the theme, modifying the wallpaper, interacting with other users, modifying the prompt tone, giving a vibration prompt, prompting the user's emotional state, and so on. It can also include updating the theme, wallpaper, or prompt tone of the wearable device, popping up a picture on the wearable device, playing a voice, and the like.
  • FIG. 3 is a schematic diagram of a method for adjusting user emotions.
  • the wristband or watch monitors the user's heart rate, blood pressure, body temperature, and pulse strength, and transmits them to the smartphone 300 via Bluetooth or infrared.
  • the smartphone 300 acquires data for characterizing the physical condition of the user.
  • the data includes first data, the first data being at least one parameter value detected by the wearable device connected to the terminal device for the user.
  • the first data includes heart rate values, blood pressure values, body temperature, pulse strength, and the like.
  • the wearable device may be a wristband or a watch, and the wristband or watch may periodically report the data for characterizing the physical condition of the user, for example, every hour.
  • the wristband or watch collects data at multiple time points within one cycle, calculates the average of the parameter values collected for the same parameter of the user at these time points, and sends the average to the smartphone 300.
  • the wristband or the watch can also collect the user's data at multiple time points in one cycle, and send it directly to the smart phone 300 without statistics.
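The per-cycle aggregation described above can be sketched as follows; this is an illustrative sketch, and the parameter names and sample values are assumptions rather than values taken from the patent:

```python
from statistics import mean

def aggregate_cycle(samples):
    """Average the values collected for each parameter over one
    reporting cycle before they are sent to the phone.

    `samples` maps a parameter name to the list of values collected
    at the sampling points within the cycle.
    """
    return {param: mean(values) for param, values in samples.items() if values}

# One reporting cycle with a few sampling points per parameter.
cycle = {
    "heart_rate": [72, 75, 78, 74],
    "body_temp": [36.5, 36.6, 36.4],
    "pulse_strength": [0.8, 0.9, 0.85, 0.95],
}
report = aggregate_cycle(cycle)  # averages to be reported to the smartphone
```

Alternatively, as the text notes, the raw per-time-point samples could be sent without aggregation and averaged on the phone instead.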
  • the smart phone 300 acquires emotion information determined based on the data.
  • the smart phone 300 performs an operation corresponding to the emotion information for adjusting the emotion of the user.
  • the number of parameters included in the data for characterizing the physical condition of the user is not specifically limited in the embodiment of the present application.
  • the above solution provides a convenient and effective method for adjusting user emotions: no special equipment, such as a finger-clip probe, is needed; instead, parameter values for characterizing the user's physical condition are obtained in real time through the wearable device and sent to the terminal device.
  • the terminal device determines the user's emotion based on the parameter values and then performs an operation for adjusting the user's emotion, which is convenient for the user and offers high real-time performance and operability.
  • in step S302, when the smartphone 300 acquires the emotion information determined based on the data for characterizing the physical condition of the user, the smartphone 300 may determine the emotion information based on the data itself.
  • the second data may be detected by the smart phone 300, so that the data acquired by the smart phone 300 for characterizing the physical condition of the user may further include second data, where the second data includes at least one parameter value of the user.
  • the second data may include at least one of the following parameter values: speech rate, speech intensity, pressing screen strength or facial expression, and the like.
  • the smart phone 300 may determine the emotion information according to the threshold corresponding to each parameter and the relationship between each parameter value and its threshold.
  • for example, the data includes only one parameter, such as body temperature, whose normal range is 36 to 37 degrees Celsius.
  • the smartphone 300 can count the body temperature value in one cycle T, such as one hour.
  • the smartphone 300 can count the average value of the user's body temperature within one hour, and if the user's body temperature average is not within the normal range, the user is determined to have a fever.
  • the smartphone 300 displays interactive information to the user, such as displaying "Your current body temperature is not within the normal range, please pay attention".
  • the data includes three parameters, such as body temperature, pulse and blood pressure, the normal range of blood pressure is 90-130 mmhg, and the normal range of pulse is 60-100 times per minute.
  • if the user's blood pressure is higher than the normal range (for example, greater than 130), the pulse is normal (for example, in the range of 60 to 100), and the body temperature is low (for example, less than 36.2), this may be caused by excessive fear of the user.
  • when the smartphone 300 determines that the user is in this state, it can play music for relieving the user's emotions, and can also change the displayed wallpaper, for example, to pictures of the sea, and the like.
  • the embodiments of the present application can configure a music library, a picture library, and the like for different emotions.
  • if the user's blood pressure is high, the pulse is fast, and the body temperature is high, this may be caused by excessive excitement of the user. When the smartphone 300 determines that the user is in this state, it can play music for relieving the user's emotions, such as light music with a slightly slower rhythm.
  • a threshold may be set for the state corresponding to each parameter. For example, when the user's blood pressure is greater than a first threshold, the user's blood pressure is considered high; the thresholds for the other parameters are similar and are not listed one by one here.
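A minimal sketch of this threshold-based mapping, using the example cut-offs given above (blood pressure above 130, pulse in 60 to 100, body temperature below 36.2 for fear); the cut-offs for excitement are assumptions added for illustration:

```python
def classify_emotion(blood_pressure, pulse, body_temp):
    """Map parameter values to an emotional state with fixed thresholds.
    The fear branch follows the example in the text; the excitement
    thresholds are illustrative assumptions."""
    pulse_normal = 60 <= pulse <= 100
    if blood_pressure > 130 and pulse_normal and body_temp < 36.2:
        return "fear"        # high blood pressure, normal pulse, low temperature
    if blood_pressure > 130 and pulse > 100 and body_temp > 37.0:
        return "excitement"  # all parameters elevated
    return "normal"
```

The terminal could then pick the corresponding adjustment operation (soothing music, a sea wallpaper, slower light music) based on the returned state.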
  • the smartphone 300 may determine the emotion information according to a parameter value of each parameter in the data and a corresponding weight.
  • the smartphone 300 can maintain a user parameter table, which includes the weights corresponding to different parameters. When determining the emotion information based on multiple parameter values, the terminal device may obtain the weight corresponding to each parameter of the at least one parameter by searching the user parameter table, multiply the parameter value of each parameter by its corresponding weight, and sum the results to obtain the emotion value of the user.
  • the user parameter table includes weights corresponding to different parameters.
  • for example, the weight corresponding to the volume of the voice is a1, the weight corresponding to the heart rate value is a2, and the weight corresponding to the frequency of the voice is a3.
  • the weights corresponding to other parameters are not enumerated in the embodiment of the present application.
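The weighted-sum computation and the mapping from emotion value to emotional state (see the value ranges discussed earlier) might look like the sketch below; the concrete weights and ranges are assumptions, since the patent leaves them to configuration:

```python
# Hypothetical user parameter table: weight a1 for voice volume,
# a2 for heart rate, a3 for voice frequency.
USER_PARAM_TABLE = {"voice_volume": 0.3, "heart_rate": 0.5, "voice_frequency": 0.2}

# Hypothetical emotion-value ranges per state; excitement gets the
# higher range, as suggested in the text.
STATE_RANGES = [("joy", 0, 50), ("excitement", 50, 1000)]

def emotion_value(parameters, table=USER_PARAM_TABLE):
    """Multiply each parameter value by its weight from the user
    parameter table and sum the results."""
    return sum(value * table[name] for name, value in parameters.items())

def emotion_state(value, ranges=STATE_RANGES):
    """Map an emotion value to the state whose range contains it."""
    for state, low, high in ranges:
        if low <= value < high:
            return state
    return "unknown"

v = emotion_value({"voice_volume": 40, "heart_rate": 70, "voice_frequency": 30})
# 40*0.3 + 70*0.5 + 30*0.2, i.e. about 53, which falls in the excitement range
```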
  • the user parameter table may be configured on the smart phone 300.
  • the user parameter table may also be sent by the server to the smartphone 300.
  • the server may periodically update the user parameter table and then send the updated user parameter table to the smartphone 300.
  • the smartphone 300 can display the determined emotion information to the user. If the emotion information cannot directly represent the emotional state, the smartphone determines the user's emotional state according to the emotion information and then displays that emotional state to the user, thereby prompting the user to confirm whether the emotional state is accurate.
  • the smart phone 300 may prompt the user with the emotional state by voice or interface display; the smart phone 300 may also send the emotional state to the wearable device, so that the wearable device prompts the user with the emotional state the user is currently in.
  • the emotional state can be displayed to the user for confirmation at regular intervals.
  • the process of updating the user parameter table is as shown in FIG. 4 .
  • after the smartphone 300 displays the emotion information to the user, the smartphone 300 receives an operation instruction triggered by the user, where the operation instruction is used to indicate that the user approves the emotion information or modifies the emotion information.
  • the user can also configure the smart phone 300 according to his/her own situation so that the prompt is not displayed and no operation instruction is triggered.
  • the smart phone 300 sends the user-approved emotion information or the modified emotion information to the server, so that the server receives the emotion information confirmed by the user.
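The confirmation message could be assembled as in the sketch below; the field names and the JSON encoding are assumptions, since the patent does not specify a wire format:

```python
import json

def build_confirmation(user_id, emotion, approved, corrected=None):
    """Build the message sent to the server after the user either
    approves the displayed emotion information or modifies it."""
    payload = {
        "user_id": user_id,
        # if the user corrected the emotion, report the corrected value
        "emotion": corrected if (not approved and corrected) else emotion,
        "approved": approved,
    }
    return json.dumps(payload)

# The user rejects "joy" and selects "sorrow" on the selection interface.
msg = build_confirmation("user-1", "joy", approved=False, corrected="sorrow")
```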
  • for example, the emotion information is joy.
  • the emotion information is displayed to the user through various expressions, such as the emotion information displayed on the display interface shown in FIG. 5A or 5B.
  • An icon that the user can confirm is also displayed in the display interface, such as "Yes" or "No” in FIG. 5A or 5B.
  • taking FIG. 5A as an example, when the user clicks the icon "No", the display interface corresponding to FIG. 6 can be displayed.
  • the user can select a corresponding emotional icon.
  • the emotion information calculated by the server 120 based on the uploaded data for characterizing the physical condition of the user may also be presented to the user by time period, in the form shown in Table 1.
  • the specific display form can be as shown in FIG. 5C.
  • Table 1: Time period 1: joy; Time period 2: joy; Time period 3: sorrow
  • the smartphone 300 can upload the corrected emotion information to the server, so that the server updates the saved user parameter table according to the emotion information confirmed by the user.
  • the server collects data of different users, including each user's uploaded data for characterizing the physical condition and the corresponding emotion information, and stores the data in the database according to the user ID.
  • the data of a single user is in the form of Table 2:
  • data 1 may include the strength of pressing the keyboard, the pressure value perceived by the screen pressure sensor, the heart rate value, and the pulse strength.
  • the data 2 may be the same as or different from the parameters included in the data 1, such as the data, the volume, the frequency, and the expression data of the voice may be included on the basis of all the parameters included in the data 1.
  • the data 3 includes the same parameters as the data 1 or the data 2, and may be different from the data 1 or the data 2. This is not specifically limited.
  • the server updates the user parameter table corresponding to the user by:
  • The server acquires the data, sent by the smartphone 300 within a first set time period, for characterizing the physical condition of the user, together with the user-confirmed emotion value corresponding to the data.
  • Based on this data and the corresponding emotion values, the server adjusts the weight of each parameter for characterizing the user's physical condition in the user parameter table corresponding to the user at the current time, thereby obtaining an updated user parameter table for the user.
  • the first set time period may be one week, two weeks, one day, and the like. That is, the server updates the user parameter table every first set time period.
  • After the server obtains the data sent by the smartphone 300 for characterizing the user's physical condition and the user-confirmed emotion values, it may first clean and deduplicate the data and the emotion values.
  • The server may periodically clean or deduplicate the data and emotion values used to characterize the user's physical condition, for example every day, every week, or every two weeks; the embodiment of the present application is not specifically limited herein.
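The cleaning and deduplication step described above could look like the following minimal Python sketch. The record fields (`user_id`, `timestamp`, `heart_rate`, `emotion_value`) and the plausible heart-rate bounds are illustrative assumptions, not part of the patent.

```python
from typing import Dict, List

def clean_and_deduplicate(records: List[Dict]) -> List[Dict]:
    """Drop records with missing fields or out-of-range values,
    then keep only the first record per (user_id, timestamp)."""
    seen = set()
    cleaned = []
    for rec in records:
        # Cleaning: require an emotion value and a heart rate inside an
        # assumed plausible physiological range.
        hr = rec.get("heart_rate")
        if rec.get("emotion_value") is None or hr is None:
            continue
        if not (30 <= hr <= 220):
            continue
        # Deduplication: at most one record per user per timestamp.
        key = (rec["user_id"], rec["timestamp"])
        if key in seen:
            continue
        seen.add(key)
        cleaned.append(rec)
    return cleaned
```

In practice the server would run this on each batch before feeding the records into the weight-update step.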
  • Based on the data for characterizing the user's physical condition and the corresponding emotion values, the server adjusts the weights of the different parameters included in the user parameter table corresponding to the user of the smartphone 300 at the current time.
  • The weights can be trained and adjusted by artificial intelligence algorithms.
  • Artificial intelligence algorithms can be, but are not limited to, neural networks, genetic algorithms, polynomial algorithms, and the like.
  • the updated user parameter table can be as shown in Table 3.
  • The parameter 1 in Table 3 may include the weights of the parameters of the user ID1, such as the volume of the voice, the frequency of the voice, the keystroke strength, the pressure value sensed by the screen pressure sensor, the heart rate value, the pulse strength, and the facial expression.
  • Similarly, the parameter 2 includes the weight of each parameter of the user ID2.
  • In the prior art, emotions are judged using a fixed parameter list, which does not take users' individual differences into account; for example, some users naturally have a faster heart rate, while others have a slower one.
  • Using artificial intelligence algorithms to update parameters can improve the accuracy of judging emotions.
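The patent names neural networks, genetic algorithms, and polynomial algorithms as candidate weight-update methods without fixing one. As a hedged illustration only, the sketch below updates per-parameter weights with one pass of gradient descent on the squared error between the weighted-sum emotion value and the user-confirmed emotion value; the dictionary layout and learning rate are assumptions.

```python
def update_weights(weights, samples, lr=0.01):
    """One gradient-descent pass over (parameter_values, confirmed_value)
    pairs. `weights` maps parameter name -> weight; after the pass the
    weighted sum moves toward the user-confirmed emotion values."""
    for params, confirmed in samples:
        # Predicted emotion value: weighted sum over all parameters.
        predicted = sum(params[name] * weights[name] for name in weights)
        error = predicted - confirmed
        # Gradient of 0.5 * error**2 w.r.t. each weight is error * value.
        for name in weights:
            weights[name] -= lr * error * params[name]
    return weights
```

A real deployment would, as the text notes, be free to substitute any other trainable model for this update rule.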
  • the smart phone 300 can also detect the user's parameter value through its built-in sensor.
  • The smartphone 300 then determines the emotion information based on both the parameter values received from the wristband or watch and the parameter values it detects itself.
  • the user's parameter value can be detected by a voice receiver built in the smartphone 300.
  • The smartphone 300 can start detecting the user's voice-related parameter values automatically or upon manual activation by the user.
  • For example, the smartphone may decide whether to enable detection of voice-related parameter values by detecting the altitude or the moving speed, but is not limited thereto.
  • Voice-related parameter values may include voice volume, speech rate, and speech intensity, among others. Taking driving as an example, when the user is in a driving state, the user's mood may be affected by other drivers or by the surrounding road conditions.
  • The user may become anxious, or even more agitated, and say something in an anxious tone; the speaking speed becomes faster and the pitch rises, so the smartphone 300 can determine the user's emotion from these cues.
  • In this case, the smartphone 300 can alert the user: "Be careful not to get too excited."
  • The smartphone 300 can also select soothing or gentle music from the music stored in the database to ease the user's agitation; if the user's current mood is sad, the smartphone 300 can play positive, brisk music to improve the user's mood.
  • The adjustment is not limited to playing music; the user's emotion may also be improved by popping up a picture or playing a voice clip, for example playing a voice joke or displaying a funny face when the user is in a sad mood.
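The emotion-to-action choices described above amount to a small lookup. A minimal Python sketch, with the emotion labels and action names as illustrative assumptions:

```python
# Hypothetical mapping from a detected emotion to an adjustment action.
ADJUSTMENTS = {
    "excited": {"action": "play_music", "style": "soothing"},
    "sad":     {"action": "play_music", "style": "brisk"},
}

def choose_adjustment(emotion: str) -> dict:
    """Pick an adjustment for the detected emotion; fall back to a
    voice joke for emotions without a dedicated music mapping."""
    return ADJUSTMENTS.get(emotion, {"action": "play_voice_joke"})
```

The table could equally map to picture pop-ups or theme changes, per the surrounding text.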
  • the terminal can detect information such as the user's voice frequency, voice rate, voice strength, or voice volume.
  • The user's emotion is then judged from the detected speech-related data together with other detected data related to the user's physical condition.
  • the detected voice-related data may be combined with the detected user's heart rate or pulse, etc. to determine whether the user's mood is fluctuating or to determine the user's specific emotional state.
  • The smartphone 300 can also obtain the user's facial expression data through its built-in image sensor and use the expression to judge the user's emotion; combining the voice-detection parameters with the facial expression data therefore determines the user's emotion with higher accuracy.
  • The embodiment of the present application can also set a trigger condition for turning on facial expression detection, such as turning it on when the user's heart rate is detected to be high or the pulse rate falls within an abnormal range. It can be understood that a similar trigger condition can also be set for voice detection.
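Such a trigger condition can be sketched as a simple predicate. The thresholds and the assumed normal pulse range below are placeholders, not values from the patent:

```python
def should_enable_expression_detection(heart_rate: float,
                                       pulse_rate: float,
                                       hr_threshold: float = 100.0,
                                       pulse_range=(50.0, 100.0)) -> bool:
    """Turn on the (comparatively power-hungry) image sensor only when
    the heart rate is high or the pulse rate leaves the assumed normal
    range, as the trigger condition above suggests."""
    lo, hi = pulse_range
    return heart_rate > hr_threshold or not (lo <= pulse_rate <= hi)
```

The same shape of predicate could gate voice detection, per the note above.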
  • The embodiment of the present application does not limit whether one, two, or more sensors are used to obtain the data for characterizing the user's physical condition.
  • For example, the user's emotion can be judged from the parameter values obtained by three sensors: the image sensor, the voice receiver, and the pressure sensor.
  • Suppose the input frequency detected by the pressure sensor is low and the pressure value is small, the volume collected by the voice receiver is low, and the expression information collected by the image sensor is normal.
  • The three together are then used to judge the emotion the user is in.
  • When performing an operation for adjusting the user's emotion, the corresponding operation may be chosen based on the current state of the terminal device and the wearable device, or on the activity the user is currently performing. For example, during a voice or video call, a voice prompt or picture prompt on the terminal device may interfere with the user's current operation and have little effect on adjusting the user's emotion, so the activity information may instead be displayed on the wearable device, or delivered through voice prompts or vibration alerts on the wearable device. As another example, if the user is rock climbing, working at height, or driving, the user may be unable to look at the screen, so displaying a picture or modifying the theme or wallpaper would have little effect; in that case, the user should be prompted by voice, by playing music, or by vibration alerts.
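The channel-selection logic described here reduces to a small decision function. A hedged Python sketch, with the activity names and channel labels as assumptions:

```python
def choose_prompt_channel(activity: str, in_call: bool) -> str:
    """Pick how to deliver the emotion-adjustment prompt so it does not
    interfere with what the user is currently doing."""
    if in_call:
        # Voice or picture prompts would disturb the call: use the wearable.
        return "wearable_vibration_or_display"
    if activity in {"driving", "climbing", "working_at_height"}:
        # The user cannot look at the screen: use audio instead.
        return "voice_prompt_or_music"
    return "screen_display"
```

A fuller implementation would also consult the wearable's own state (battery, connectivity) before routing the prompt.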
  • The server may also determine the user's emotion information and send to the terminal recommendation information, or information about other users with whom the user can interact.
  • Another embodiment of the method for adjusting the user's emotion provided by the embodiments of the present application is shown in the figure; the parts that are the same as in the embodiments corresponding to FIG. 3 and FIG. 4 are not described herein again.
  • the wristband or the watch monitors at least one parameter value of the user, and sends the data to the smart phone 300 through Bluetooth or infrared.
  • the at least one parameter value of the wristband or watch monitoring user may include, but is not limited to, at least one of the following: pulse strength, heart rate value, blood pressure value, body temperature, and the like.
  • the smart phone 300 monitors at least one parameter value of the user by using a built-in sensor.
  • The at least one parameter value monitored by the smartphone 300 may include, but is not limited to, at least one of: the volume of the voice, the frequency of the voice, the pressure value sensed by the screen pressure sensor, and facial expression data obtained from a photo of the user taken by the camera.
  • The at least one parameter value sent by the wristband or watch and received by the smartphone 300, together with the at least one parameter value monitored by the smartphone 300 itself, constitute the data for characterizing the user's physical condition.
  • the smartphone 300 transmits data for characterizing the physical condition of the user to the server.
  • the server determines the user's emotional information based on the data used to characterize the physical condition of the user. Specifically, when determining the emotional information of the user, it can be implemented by the methods described in S430 and S440.
  • the server sends the emotional information of the user to the smart phone 300.
  • the smartphone 300 displays the emotional information of the user to the user.
  • The smartphone 300 receives an operation instruction triggered by the user, where the operation instruction is used to indicate that the user approves the emotion information or modifies the emotion information.
  • Alternatively, the user can configure the smartphone 300 according to his/her own situation so that no operation instruction is triggered.
  • the smart phone 300 sends the emotion information approved by the user or the modified emotion information to the server.
  • The server receives the user-confirmed emotion information sent by the smartphone 300, and updates the user parameter table corresponding to the user according to the approved or modified emotion information and the data for characterizing the user's physical condition.
  • For the specific process of updating the user parameter table, refer to the embodiment shown in FIG. 4.
  • the method further includes:
  • Each user may input an activity corresponding to the current time period or to a time period displayed to the user, such as reading, running, or listening to music.
  • In the display interface of FIG. 8, after the user of the smartphone 300 inputs an activity in the input box, the user clicks the "Confirm" icon, so that the smartphone 300 receives the corresponding user-triggered input command.
  • the activity information of reading, running, listening to music, and the like input by the user is sent to the server.
  • the input command carries the current activity information of the user.
  • After the smartphone 300 acquires the data for characterizing the user's physical condition and transmits it to the server, the server may send indication information to the smartphone 300 to instruct the user to input the activity corresponding to the current time period.
  • The smartphone 300 can thus acquire the activity input by the user during the current time period, such as reading, running, or listening to music, and each terminal device sends the acquired activity information to the server.
  • The server determines a first emotional state corresponding to the user's emotion value, where different emotional states correspond to different ranges of emotion values; the emotional states may be as coarse as normal and abnormal emotions, or as fine as joy, anger, and the like.
  • the server acquires an activity recommended by other users in the second emotional state to the user of the terminal device.
  • The other users in the second emotional state may be users whose distance from the user of the terminal device is less than or equal to a preset distance threshold, and the activities may be those performed within the most recent time period, such as the last day or week.
  • For example, the first emotional state is an abnormal emotion and the second emotional state is a normal emotion: if the user is in an abnormal mood, such as grief, the server may select activities performed by users in a happy or joyful state to recommend to the user of the terminal device.
  • The embodiment of the present application does not specifically limit the configuration manner.
  • The second emotional state is determined based on the configured manner.
  • The terminal or the server may also perform smart screening according to the user's own activity. For example, if the user's amount of exercise is already high, such as a step count exceeding a preset threshold, then further exercise or walking may have little effect on adjusting the user's emotion, so when recommending an adjustment scheme, activities other than walking may be preferred.
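The nearby-user recommendation plus smart screening described above can be sketched as a single filter. The record fields, the 3 km default, and the step threshold are illustrative assumptions:

```python
def recommend_activities(nearby_users, max_distance_km=3.0,
                         user_steps=0, step_threshold=10000):
    """Collect recent activities of nearby users in a 'happy' state,
    skipping walking-like activities when the user already walks a lot
    (the smart-screening step described in the text)."""
    recommendations = []
    for other in nearby_users:
        if other["emotion"] != "happy":
            continue
        if other["distance_km"] > max_distance_km:
            continue
        for activity in other["recent_activities"]:
            if user_steps > step_threshold and activity in ("walking", "hiking"):
                continue  # low expected benefit: the user already walks a lot
            recommendations.append((other["name"], activity))
    return recommendations
```

The emotion labels ("happy") stand in for the second emotional state; the server could rank the surviving recommendations before sending them to the terminal.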
  • the server sends activity information of other users to the smart phone 300.
  • the smartphone 300 can also send activity information of other users to the bracelet or the watch.
  • the activity information of other users can be transmitted by the server to the wristband or the watch.
  • the smartphone 300 recommends the activity information of other users to the user of the smartphone 300.
  • For example, the current user's emotional state is "sad".
  • The server then looks for users who are in the "happy" state and whose distance from the user of the smartphone 300 is less than 3 kilometers; suppose that, after confirmation, the user Li Si meets these requirements. The server sends the activity information corresponding to the activities performed by Li Si in the second set time period closest to the current time to the smartphone 300, and the smartphone 300 displays the activity information to the user.
  • The received display information is, for example, the information displayed on the display interface shown in the figure.
  • the smartphone 300 can also transmit the activity information of other users to the wristband or the watch, so that the wristband or the watch displays the activity information of other users to the user.
  • In other words, the server uses data mining technology to synthesize the emotion information of the users around the user with the user's own sensor data, and provides corresponding suggestions to the user.
  • For example, the user's mood is sad; the server filters the surrounding users whose mood is happy, finds that one such user's activity in the latest time period is running in a certain park, and then suggests that the user go running in that park.
  • By adopting data mining technology and taking into account how the surrounding people adjust their emotions, the effectiveness of the adjustment is increased.
  • After the smartphone 300 displays the user's emotion information to the user in S470-S471, the user may trigger input of the activity information corresponding to the activities performed in the second set time period closest to the current time.
  • the smart phone 300 detects an input command triggered by a user of the smart phone 300, and the input command carries activity information corresponding to an activity performed by the user within a second set time period that is closest to the current time.
  • The second set time period may be one hour, two hours, one day, and the like, which is not specifically limited in the embodiment of the present application. Taking one hour as the second set time period as an example: if the user inputs activity information at 2:10, the server can look for users who were active from 1 o'clock to 2 o'clock.
  • The smartphone 300 transmits the received activity information to the server, so that the server receives the activity information corresponding to the activities performed by the user in the second set time period closest to the current time. After the server generates the emotion value of the user of the smartphone 300 based on the user parameter table corresponding to the user and the user situation data, the server may determine a first emotional state corresponding to that emotion value, where different emotional states correspond to different ranges of emotion values. The server then obtains the user information of other users who are in the first emotional state and have the same activity information as the user of the smartphone 300.
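Matching users who share both the emotional state and the activity is a straightforward filter. A minimal sketch, with the record fields as assumptions:

```python
def find_matching_users(all_users, emotion, activity, exclude_id):
    """Return users sharing both the emotional state and the recent
    activity, so the two can interact and improve their mood together."""
    return [u for u in all_users
            if u["id"] != exclude_id
            and u["emotion"] == emotion
            and u["activity"] == activity]
```

In the server's database this filter would typically also be restricted to users within the preset distance threshold, as in the earlier recommendation step.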
  • the method may further include:
  • the server sends the user information of the other user to the smart phone 300.
  • the smart phone 300 displays the user information to a user of the smart phone 300, so that the user of the smart phone 300 interacts with other users corresponding to the user information.
  • After the server receives the activity information of the user of the smartphone 300, the server searches the database for matching users around the user of the smartphone 300 and recommends them to the user.
  • For example, the user inputs "reading", and the current emotion of the user of the smartphone 300 determined by the server 120 is "sad"; the server therefore looks for nearby users whose emotion is also "sad" and whose current activity is also reading, and recommends them to the user of the smartphone 300, so that the two users can interact and improve their mood together. Suppose the user whose mood is "sad" and whose current activity is reading is "Xiaohong"; the smartphone 300 then recommends Xiaohong's information to its user, as in the display interface shown in FIG. 10A.
  • In FIG. 10A, the image in the display interface may be used to represent Xiaohong, and may be an avatar "AA" set by Xiaohong herself; this is not specifically limited in the embodiment of the present application.
  • The user of the smartphone 300 can interact with Xiaohong by clicking the "Xiaohong" icon, as shown in FIG. 10B, where "BB" in FIG. 10B represents the user of the smartphone 300.
  • the method may further include:
  • Based on the obtained emotion information, the smartphone 300 may further perform an operation for adjusting the user's emotion corresponding to the emotion information. For example, the smartphone 300 may display suggestions to the user, such as recommending a wallpaper, theme, or ringtone that can be changed; or it may directly perform the operation, such as modifying the displayed wallpaper, theme, or ringtone of the smartphone 300, modifying the prompt tone, or playing music. The following takes modifying the displayed wallpaper, theme, or ringtone of the smartphone 300 as an example.
  • The smartphone 300 may trigger the wristband or watch to update its theme, wallpaper, prompt tone, and the like; the smartphone 300 may also update the ringtone and play music.
  • For example, the emotion information of the user of the smartphone 300 is "sad", and the smartphone 300 modifies the wallpaper of the wristband or watch via infrared or Bluetooth, as shown in the figure.
  • the embodiment of the present application further provides a user emotion adjustment device, which may be implemented by the processor 320 in the smart phone 300.
  • the apparatus may include:
  • The data collection module 1201 is configured to acquire data for characterizing a physical condition of the user, where the data includes first data, the first data being at least one parameter value detected for the user by the wearable device connected to the terminal device;
  • a data interaction module 1202 configured to acquire emotion information determined based on the data
  • the executing module 1203 is configured to perform an operation corresponding to the emotion information for adjusting the emotion of the user.
  • Optionally, the data collected by the data collection module 1201 for characterizing the physical condition of the user further includes second data, where the second data is at least one parameter value of the user detected by the terminal device;
  • the data interaction module 1202 is specifically configured to acquire emotion information determined based on the first data and the second data.
  • the data interaction module 1202 is specifically configured to:
  • the emotion information is determined based on parameter values of each parameter in the data and corresponding weights.
  • the data interaction module 1202 is further configured to send the acquired data to a server, and receive the emotion information returned by the server according to the data.
  • the executing module 1203 is specifically configured to:
  • The executing module 1203 is specifically configured to prompt the user's emotional state by voice or by interface display, or to send the emotional state to the wearable device so that the wearable device prompts the user's emotional state.
  • the executing module 1203 is specifically configured to recommend activity information or interaction information for adjusting emotions to the user;
  • the interactive information is used to interact with other users.
  • the first data includes at least one of the following parameter values: a heart rate value, a blood pressure value, or a pulse strength.
  • the second data includes at least one of the following parameter values: a voice rate, a voice strength, a press screen strength, or a facial expression.
  • The data interaction module 1202 is further configured to: after the execution module 1203 prompts the user's emotional state, receive the operation instruction triggered by the user, and send the emotion information approved by the user, or the modified emotion information, to the server.
  • The operation instruction is used to indicate that the user approves the emotion information or modifies the emotion information.
  • It should be noted that each functional module in each embodiment of the present application may be integrated into one processing unit, may exist physically alone, or two or more modules may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • When implemented in hardware, the hardware implementation of the terminal can refer to FIG. 2A and its related description.
  • The terminal includes a transceiver configured to receive first data sent by the wearable device connected to the terminal device, the first data being at least one parameter value detected by the wearable device for the user.
  • the processor 320 is configured to acquire data for characterizing a physical condition of the user, where the data includes the first data received by the transceiver, acquire emotion information determined based on the data, and perform corresponding to the emotion information. An operation for adjusting the mood of the user.
  • the data for characterizing the physical condition of the user further includes second data
  • the device further includes:
  • When acquiring the emotion information determined based on the data, the processor 320 is specifically configured to acquire the emotion information determined based on the first data and the second data.
  • the processor 320 when acquiring the emotion information determined based on the data, is specifically used to:
  • the emotion information is determined based on parameter values of each parameter in the data and corresponding weights.
  • the transceiver is further configured to send the data acquired by the processor 320 to a server, and receive the emotion information returned by the server according to the data.
  • When performing the operation corresponding to the emotion information for adjusting the user's emotion, the processor 320 is specifically configured to:
  • the device may further include: a speaker 390 for voice prompting;
  • the processor 320 is specifically configured to send a voice through the speaker 390 to prompt the user to be in an emotional state.
  • the device may further include: a display device 310, configured to display prompt information;
  • the processor 320 is specifically configured to prompt the user to display an interface through the display device 310 to present an emotional state of the user.
  • the transceiver is further configured to send the emotional state to the wearable device, so that the wearable device prompts the user to be in an emotional state.
  • the processor 320 is further configured to: through the display device 330, recommend activity information or interaction information for adjusting emotions to the user;
  • the interactive information is used to interact with other users.
  • the first data includes at least one of the following parameter values: a heart rate value, a blood pressure value, or a pulse strength.
  • the second data includes at least one of the following parameter values: a voice rate, a voice strength, a pressing screen strength, or a facial expression;
  • The at least one sensor 370 includes at least one of: a voice receiver for detecting a voice rate and/or a voice strength; a pressure sensor for detecting a screen-pressing force; and an image sensor for detecting a facial expression.
  • The transceiver is further configured to: after the processor 320 prompts the user's emotional state, receive the user-triggered operation instruction, and send the emotion information approved by the user, or the modified emotion information, to the server; the operation instruction is used to indicate that the user approves the emotion information or modifies the emotion information.
  • The embodiment of the present application further provides an apparatus for adjusting a user's emotion, which is applied to a wearable device.
  • the device includes:
  • the detecting module 1301 is configured to detect at least one parameter value used to represent a physical condition of the user;
  • a sending module 1302 configured to send the at least one parameter value to a terminal device connected to the wearable device, so that the terminal device performs, based on the at least one parameter value, an operation for adjusting the user's emotion.
  • the device may further include:
  • the receiving module 1303 is configured to receive activity information that is sent by the terminal device and is recommended for the user.
  • the display module 1304 is configured to display the activity information.
  • The receiving module 1303 is configured to receive an instruction sent by the terminal device to update a theme, update a wallpaper, update a prompt tone, update a ringtone, or play music;
  • the processing module 1305 is configured to, based on the instruction, update the theme, update the wallpaper, update the prompt tone, update the ringtone, or play the music.
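The wearable side's instruction handling described above is a small dispatch. A hedged sketch, with the instruction `type` names and state keys as assumptions rather than anything fixed by the patent:

```python
def handle_instruction(instruction: dict, state: dict) -> dict:
    """Apply an update instruction from the terminal device to the
    wearable's state dictionary."""
    kind = instruction["type"]
    if kind in ("update_theme", "update_wallpaper",
                "update_prompt_tone", "update_ringtone"):
        # e.g. {"type": "update_wallpaper", "value": "calm_blue.png"}
        # stores state["wallpaper"] = "calm_blue.png"
        state[kind[len("update_"):]] = instruction["value"]
    elif kind == "play_music":
        state["now_playing"] = instruction["value"]
    return state
```

The display device and speaker described next would then render the updated theme/wallpaper or emit the new tone.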
  • It should be noted that each functional module in each embodiment of the present application may be integrated into one processing unit, may exist physically alone, or two or more modules may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the hardware implementation of the wearable device can be referred to FIG. 2B and its related description.
  • At least one sensor 302a for detecting at least one parameter value for characterizing a physical condition of the user
  • the transceiver 301a is configured to send the at least one parameter value to the terminal device, so that the terminal device performs an operation for the user mood adjustment based on the at least one parameter value.
  • the transceiver 301a is further configured to receive activity information that is sent by the terminal device and is recommended for the user.
  • the display device 303a is configured to display the activity information.
  • The transceiver 301a is further configured to receive an instruction sent by the terminal device to update a theme, update a wallpaper, update a prompt tone, update a ringtone, or play music;
  • the display device 303a is further configured to display a wallpaper or a theme
  • the device also includes a processor 304a for updating a theme based on the instruction, updating a wallpaper, updating a prompt tone, updating a ringtone, or playing music;
  • the display device 303a is further configured to display a wallpaper or theme updated by the processor 304a;
  • the apparatus also includes a speaker 305a for issuing a prompt tone updated by the processor 304a, or issuing a ringtone updated by the processor 304a, or playing music.
  • The above solution provides a convenient and effective method for adjusting a user's emotion: no special equipment, such as a finger-clip probe, is needed. Instead, parameter values for characterizing the user's physical condition are obtained in real time through the wearable device and sent to the terminal device, which determines the user's emotion based on the parameter values and then performs an operation to adjust the user's emotion. This provides convenience to the user, with high real-time performance and operability.
  • embodiments of the present application can be provided as a method, system, or computer program product.
  • the present application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment in combination of software and hardware.
  • the application can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • The computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising an instruction device that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.


Abstract

一种调节用户情绪的方法及装置,提供一种便利且有效的调节人们的情绪的方法。所述方法包括:终端设备获取用于表征用户身体情况的数据,所述数据包括第一数据,所述第一数据为与所述终端设备连接的可穿戴设备针对所述用户检测到的至少一个参数值;所述终端设备获取基于所述数据确定的情绪信息;所述终端设备执行所述情绪信息对应的用于调节所述用户情绪的操作。

Description

一种调节用户情绪的方法及装置 技术领域
本申请涉及信息技术领域,尤其涉及一种调节用户情绪的方法及装置。
背景技术
随着生活节奏的加快,人们情绪的波动也越来越大,而情绪对人们的生活和工作效率都有很大的影响,因此需要一种有效方法来随时调节人们的情绪。
现有没有一种便利且有效的调节人们的情绪的方法。
发明内容
本申请实施例提供一种调节用户情绪的方法及装置,提供一种便利且有效的调节人们的情绪的方法。
第一方面,本申请实施例提供了一种调节用户情绪的方法,包括:
所述终端设备获取用于表征用户身体情况的数据，其中，用于表征用户身体情况的数据中包括第一数据，所述第一数据为与所述终端设备连接的可穿戴设备针对所述用户检测到的至少一个参数值；第一数据中可以包括如下参数值中的至少一种：脉搏强度、心率值、血压值等等。所述终端设备获取基于所述数据确定的情绪信息；所述终端设备执行所述情绪信息对应的用于调节所述用户情绪的操作。
通过上述方案提供了一种便利有效的调节用户情绪的方法，不再需要专门的设备，比如指夹探头，而是通过可穿戴设备来实时获取用于表征用户身体情况的参数值，并发给终端设备，终端设备基于参数值来确定用户情绪，然后执行调节用户情绪的操作，给用户提供了便利性，并且实时性、可操作性较高。
在一种可能的设计中，为了提高确定用户情绪的精度，还可以获取所述终端设备检测到的第二数据，从而终端设备获取到的用于表征用户身体情况的数据中还可以包括第二数据，所述第二数据为所述终端设备检测的所述用户的至少一个参数值；
所述终端设备获取基于所述数据确定的情绪信息,具体可以通过如下方式实现:所述终端设备获取基于所述第一数据和所述第二数据确定的情绪信息。
上述设计,通过可穿戴设备以及终端设备自身检测到的两种数据来确定情绪,提高了确定的准确度。
在一种可能的设计中,在确定情绪信息时,可以由终端设备来执行,从而所述终端设备在获取基于所述数据确定的情绪信息时,可以通过如下方式实现:
所述终端设备根据所述数据中每个参数的参数值和对应的权重来确定所述情绪信息。
其中,每个参数对应的权重可以保存在用户参数表中。具体的,所述终端设备查找用户参数表获取至少一个参数中每个参数对应的权重,并分别将所述每个参数对应的参数值乘上其对应的权重并求和得到所述用户的情绪值;所述终端设备基于所述情绪值生成所述情绪信息。
其中,所述用户参数表可以为预配置在所述终端设备上的,还可以是由服务器发来的。
在一种可能的设计中,在确定情绪信息时,可以由服务器来执行,所述终端设备获取基于所述数据确定的情绪信息,可以通过如下方式实现:
所述终端设备将获取的用于表征用户身体情况的数据发送给服务器,并接收所述服务器根据所述数据返回的所述情绪信息。
具体的,服务器在确定情绪信息时,可以基于用户参数表以及所述数据生成情绪信息,然后发送给终端设备,从而所述终端设备接收所述服务器发送的所述情绪信息。
其中，所述用户参数表可以为：所述服务器基于距离当前接收到所述数据的时刻最近的第一设定时间段内，接收到的所述终端设备发送的用于表征用户身体情况的数据，以及由所述用户确认的所述用于表征用户身体情况的数据对应的情绪信息更新后得到的。
通过上述设计,由服务器来确定情绪信息,能够降低所占用的终端设备的计算资源,提高运行效率。另外现有技术中判断情绪都是使用固定的参数表,这样并没有考虑到用户的差异性,比如有的用户心率比较快,有的用户心率比较慢等,本申请实施例中使用人工智能算法来更新参数权重可以提高判断情绪的精度。
在一种可能的设计中,所述终端设备执行所述情绪信息对应的用于调节所述用户情绪的操作,可以通过如下方式实现:
所述终端设备提示所述用户当前所处的情绪状态。
通过上述设计使得用户能够得知终端设备显示情绪状态是否准确。
在一种可能的设计中,所述终端设备提示所述用户当前所处的情绪状态,可以通过如下方式实现:
所述终端设备通过语音或界面显示,来提示所述用户当前所处的情绪状态;或者,所述终端设备将所述情绪状态发送给所述可穿戴设备,通过所述可穿戴设备来提示所述用户当前所处的情绪状态。
上述设计,通过语音或者界面来提示用户的情绪状态,使得用户能够得知终端设备显示情绪状态是否准确,并且能够来实现用户与终端设备之间的交互,实现人机和谐。
在一种可能的设计中,所述终端设备执行所述情绪信息对应的用于调节所述用户情绪的操作,还可以通过如下方式实现:
所述终端设备给用户推荐调节情绪的活动信息或互动信息;
所述互动信息用于与其他用户进行互动。
具体的，服务器端可以确定周围情绪好的用户所进行的活动信息，并发送给终端设备，从而所述终端设备接收到所述服务器发送的用于推荐给所述用户来调节情绪的活动信息并显示所述活动信息给用户。由于周围人调节情绪的方法往往对其它用户也同样适用，因此通过考虑周围人调节情绪的方法，增加了调节的有效性。
在一种可能的设计中,所述终端设备提示所述用户当前所处的情绪状态之后,所述终端设备接收到所述用户触发的操作指令,所述操作指令用于指示所述用户认可所述情绪信息或者对所述情绪信息进行修改;从而所述终端设备将所述用户认可的情绪信息或者修改后的情绪信息发送给所述服务器。
通过上述设计,用户在得知情绪信息后,可以对显示的不正确的情绪信息进行更正,使得服务器基于更正的情绪信息以及用于表征用户身体情况的数据来修改用户参数表中各个参数的权重,从而提高服务器判断用户的情绪的准确度。
在一种可能的设计中,所述终端设备接收所述服务器发送的情绪信息之前,所述方法还可以包括:
所述终端设备检测到由所述用户触发的输入指令,所述输入指令携带所述用户在距离当前时刻最近的第二设定时间段所进行的活动对应的活动信息;
所述终端设备将所述活动信息发送给所述服务器;
所述终端设备接收到所述服务器发送的其它用户的用户信息;所述其它用户为在距离当前时刻最近的第二设定时间段所进行的活动与所述用户的活动相同的用户;其中,其它用户与所述用户的情绪信息相同。
所述终端设备将所述用户信息显示给所述用户,以便于所述用户与所述用户信息对应的其它用户进行互动。
通过上述实现方式，服务器利用数据挖掘技术综合用户周围的用户情绪信息和用户的传感器数据，给用户提供相应的建议。比如用户心情为伤心，则通过获取用户周围心情为开心的用户数据，综合处理，给该用户提供建议（例如服务器端筛选用户周围情绪为喜的用户最近时间段的活动为在某公园跑步，则向该用户建议去该公园跑步）。现有技术中仅考虑到调整个人的情绪，而现实中其他人，尤其是周围人调节情绪的方法往往对自己也有帮助，上述实现方式中在提供建议的时候通过数据挖掘技术考虑到了周围人调节情绪的方法，增加了调节的有效性。
在一种可能的设计中,终端设备内部可以存储不同的情绪信息对应的互动信息,从而所述终端设备执行所述情绪信息对应的用于调节所述用户情绪的操作时,所述终端设备可以直接确定所述情绪信息对应的用于推荐给所述用户的活动信息并显示所述活动信息。
通过上述设计,终端设备通过与用户互动来调节用户的情绪,使得终端设备更加人性化,体现了人机的和谐。
在一种可能的设计中,所述终端设备执行所述情绪信息对应的用于调节所述用户情绪的操作,可以通过如下方式实现:
所述终端设备基于所述情绪信息更新所述可穿戴设备的主题、壁纸或者铃声;或者,
所述终端设备基于所述情绪信息更新所述终端设备的主题、壁纸或者铃声。
上述设计，在执行所述情绪信息对应的用于调节所述用户情绪的操作时，可以通过一种方式或者多种方式来调节用户情绪，实用性和可操作性更高。
第二方面,本申请实施例还提供了一种调节用户情绪的方法,该方法应用于可穿戴设备,包括:
所述可穿戴设备检测用于表征用户身体情况的至少一个参数值;
所述可穿戴设备将所述至少一个参数值发送给与所述可穿戴设备连接的所述终端设备,以便于所述终端设备基于所述至少一个参数值执行针对所述用户情绪调节的操作。
通过上述方案提供了一种便利有效的调节用户情绪的方法，不再需要专门的设备，比如指夹探头，而是通过可穿戴设备来实时获取用于表征用户身体情况的参数值，并发给终端设备，从而终端设备基于参数值来确定用户情绪，然后执行调节用户情绪的操作，给用户提供了便利性，并且实时性、可操作性较高。
在一种可能的设计中,所述可穿戴设备接收所述终端设备发送的用于推荐给所述用户的活动信息;所述可穿戴设备显示所述活动信息。由于可穿戴设备随时携带在用户身上,从而通过可穿戴设备显示活动信息,实时性更高。
在一种可能的设计中，所述可穿戴设备接收所述终端设备发送的更新主题、更新壁纸、更新铃声、更新提示音或者播放音乐的指令；所述可穿戴设备基于所述指令更新主题、更新壁纸、更新提示音、更新铃声或者播放音乐。由于可穿戴设备随时携带在用户身上，从而通过可穿戴设备调节铃声、主题以及壁纸等等来调节用户的情绪，更实时有效。
第三方面,本申请实施例还提供了一种调节用户情绪的方法,该方法包括:
服务器接收所述终端设备发送的用于表征用户的身体情况的数据,所述数据包括至少一个参数值;
所述服务器查找所述用户对应的用户参数表获取至少一个参数中每个参数对应的权重,并分别将所述每个参数对应的参数值乘上其对应的权重并求和得到所述用户的情绪值;所述用户参数表为所述服务器基于距离当前接收到所述数据的时刻最近的第一设定时间段内,接收到的所述终端设备发送的用于表征用户的身体情况的数据、以及其对应的由所述用户确认的情绪信息更新后得到的;
所述服务器基于所述用户的情绪值生成情绪信息发送给所述终端设备,所述情绪信息用于指示所述终端设备执行所述情绪信息对应的用于调节所述用户情绪的操作。
通过上述设计,由服务器来确定情绪信息,能够降低所占用的终端设备的计算资源,提高运行效率。另外现有技术中判断情绪都是使用固定的参数表,这样并没有考虑到用户的差异性,比如有的用户心率比较快,有的用户心率比较慢等,本申请实施例中使用人工智能算法来更新参数权重可以提高判断情绪的精度。
在一种可能的设计中，所述服务器将所述用户的情绪值发送给所述终端设备后，还包括：
所述服务器接收所述终端设备发送的经过所述用户确认后的情绪信息。
在一种可能的设计中,所述用户参数表通过如下方式更新:
所述服务器获取在第一设定时间段内由所述终端设备发送的用于表征所述用户的身体情况的数据,以及其对应的由所述用户确认的情绪值;
基于确定的情绪值以及确认的情绪值对应的数据,调整当前时刻所述用户对应的用户参数表中包括的所述确认的情绪值对应的数据中的参数对应的权重得到更新后的用户参数表。
上述设计,通过用户自身的数据以及确定的情绪来更新用户参数表,提高了确认情绪信息的准确度。
在一种可能的设计中,所述服务器基于所述用户对应的用户参数表以及用于表征用户身体情况的数据生成所述用户的情绪值之前,所述方法还包括:
所述服务器接收所述终端设备发送的所述用户在距离当前时刻最近的第二设定时间段内所进行的活动对应的活动信息;
所述服务器基于所述用户对应的用户参数表以及接收到的用于表征用户身体情况的数据生成所述用户的情绪值之后,所述方法还包括:
所述服务器确定所述终端用户的情绪值对应的第一情绪状态;其中,不同的情绪状态对应不同的情绪值范围;
所述服务器获取处于所述第一情绪状态且与所述用户的活动信息相同的其它用户的用户信息;
所述服务器将所述其它用户的用户信息发送给所述终端设备,以便于所述用户与所述用户信息对应的其它用户进行互动。
通过上述设计,服务器将与用户有同样兴趣的其它用户推荐给该用户,通过两个用户进行互动来同时调节两个用户的情绪。
第四方面,基于与方法实施例同样的发明构思,本申请实施例还提供一种调节用户情绪的装置,该装置所对应的效果的描述可以参见方法实施例。所述装置应用于终端设备,包括:
收发器,用于接收所述终端设备连接的可穿戴设备发送的第一数据;第一数据为所述可穿戴设备针对所述用户检测到的至少一个参数值;
处理器,用于获取用于表征用户身体情况的数据,所述数据包括所述收发器接收到的所述第一数据,获取基于所述数据确定的情绪信息,并执行所述情绪信息对应的用于调节所述用户情绪的操作。
通过上述方案提供了一种便利有效的调节用户情绪的方法，不再需要专门的设备，比如指夹探头，而是通过可穿戴设备来实时获取用于表征用户身体情况的参数值，并发给终端设备，终端设备基于参数值来确定用户情绪，然后执行调节用户情绪的操作，给用户提供了便利性，并且实时性、可操作性较高。
在一种可能的设计中,所述用于表征用户身体情况的数据中还包括第二数据,所述装置还包括:
至少一个传感器,用于检测用于表征所述用户身体情况的第二数据,所述第二数据包括至少一个参数值;
所述处理器在获取基于所述数据确定的情绪信息时,具体用于:获取基于所述第一数据和所述第二数据确定的情绪信息。
在一种可能的设计中,所述处理器,在获取基于所述数据确定的情绪信息时,具体用于:
根据所述数据中每个参数的参数值和对应的权重来确定所述情绪信息。
在一种可能的设计中,所述收发器,还用于将所述处理器获取的所述数据发送给服务器,并接收所述服务器根据所述数据返回的所述情绪信息。
在一种可能的设计中,所述处理器,在执行所述情绪信息对应的用于调节所述用户情绪的操作时,具体用于:
提示所述用户当前所处的情绪状态。
在一种可能的设计中,所述装置还可以包括:扬声器,用于语音提示;
所述处理器,具体用于通过所述扬声器发出语音来提示所述用户当前所处的情绪状态。
在一种可能的设计中，所述装置还可以包括：显示设备，用于显示提示信息；
所述处理器,具体用于通过所述显示设备显示界面来提示所述用户当前所处的情绪状态。
在一种可能的设计中，所述收发器，还用于将所述情绪状态发送给所述可穿戴设备，以便于所述可穿戴设备提示所述用户当前所处的情绪状态。
在一种可能的设计中,所述处理器,还用于通过显示设备给用户推荐调节情绪的活动信息或互动信息;
所述互动信息用于与其他用户进行互动。
在一种可能的设计中,所述第一数据至少包括如下参数值的一种:心率值、血压值或脉搏强度。
在一种可能的设计中,所述第二数据至少包括如下参数值的一种:语音速率、语音强度、按压屏幕力度或面部表情;
所述至少一个传感器中包括以下至少一种:
语音受话器,用于检测语音速率和/或检测语音强度;
压力传感器,用于检测按压屏幕力度;
图像传感器,用于面部表情。
在一种可能的设计中,所述装置还可以包括:
收发器,用于在所述处理器提示所述用户当前所处的情绪状态之后,接收到所述用户触发的操作指令,并将所述用户认可的情绪信息或者修改后的情绪信息发送给所述服务器,所述操作指令用于指示所述用户认可所述情绪信息或者对所述情绪信息进行修改。
第五方面,基于与方法实施例同样的发明构思,本申请实施例还提供一种调节用户情绪的装置,该装置所对应的效果的描述可以参见方法实施例。所述装置应用于可穿戴设备,包括:
至少一个传感器,用于检测用于表征用户身体情况的至少一个参数值;
收发器，用于将所述至少一个参数值发送给所述终端设备，以便于所述终端设备基于所述至少一个参数值执行针对所述用户情绪调节的操作。
通过上述方案提供了一种便利有效的调节用户情绪的方法，不再需要专门的设备，比如指夹探头，而是通过可穿戴设备来实时获取用于表征用户身体情况的参数值，并发给终端设备，从而终端设备基于参数值来确定用户情绪，然后执行调节用户情绪的操作，给用户提供了便利性，并且实时性、可操作性较高。
在一种可能的设计中,所述收发器,还用于接收所述终端设备发送的用于推荐给所述用户的活动信息;
显示设备,用于显示所述活动信息。
在一种可能的设计中，所述收发器还用于接收所述终端设备发送的更新主题、更新壁纸、更新提示音、更新铃声或者播放音乐的指令；
所述显示设备还用于显示壁纸或者主题;
所述装置还包括处理器,用于基于所述指令更新主题、更新壁纸、更新提示音、更新铃声或者播放音乐;
所述显示设备,还用于显示所述处理器更新的壁纸或者主题;
所述装置还包括扬声器,用于发出所述处理器更新的提示音,或者发出所述处理器更新的铃声、或者播放音乐。
第六方面,本申请实施例还提供了一种调节用户情绪的装置,所述装置应用于终端设备,包括:
数据收集模块,用于获取用于表征用户身体情况的数据,所述数据包括第一数据,所述第一数据为与所述终端设备连接的可穿戴设备针对所述用户检测到的至少一个参数值;
数据交互模块,用于获取基于所述数据确定的情绪信息;
执行模块,用于执行所述情绪信息对应的用于调节所述用户情绪的操作。
在一种可能的设计中,所述数据收集模块获取到的用于表征用户身体情况的数据中还包括第二数据,所述第二数据为所述终端设备检测的所述用户的至少一个参数值;
所述数据交互模块,具体用于获取基于所述第一数据和所述第二数据确定的情绪信息。
在一种可能的设计中,所述数据交互模块,具体用于:
根据所述数据中每个参数的参数值和对应的权重来确定所述情绪信息。
在一种可能的设计中,所述数据交互模块还用于:
将获取的所述数据发送给服务器;并接收所述服务器根据所述数据返回的所述情绪信息。
在一种可能的设计中,所述执行模块,具体用于:
提示所述用户当前所处的情绪状态。
在一种可能的设计中,所述执行模块,具体用于通过语音或界面显示,来提示所述用户当前所处的情绪状态;或将所述情绪状态发送给所述可穿戴设备,通过所述可穿戴设备来提示所述用户当前所处的情绪状态。
在一种可能的设计中,所述执行模块,具体用于给用户推荐调节情绪的活动信息或互动信息;
所述互动信息用于与其他用户进行互动。
在一种可能的设计中,所述第一数据至少包括如下参数值的一种:心率值、血压值或脉搏强度。
在一种可能的设计中,所述第二数据至少包括如下参数值的一种:语音速率、语音强度、按压屏幕力度或面部表情。
在一种可能的设计中,所述数据交互模块,还用于在所述执行模块提示所述用户当前所处的情绪状态之后,接收到所述用户触发的操作指令,并将所述用户认可的情绪信息或者修改后的情绪信息发送给所述服务器;所述操作指令用于指示所述用户认可所述情绪信息或者对所述情绪信息进行修改。
第七方面,本申请实施例还提供了一种调节用户情绪的装置,包括:
检测模块,用于检测用于表征用户身体情况的至少一个参数值;
发送模块，用于将所述至少一个参数值发送给与所述可穿戴设备连接的终端设备，以便于所述终端设备基于所述至少一个参数值执行针对所述用户情绪调节的操作。
在一种可能的设计中,所述装置还包括:
接收模块,用于接收所述终端设备发送的用于推荐给所述用户的活动信息;
显示模块,用于显示所述活动信息。
在一种可能的设计中,所述装置还包括:
接收模块,用于接收所述终端设备发送的更新主题、更新壁纸、更新提示音、更新铃声或者播放音乐的指令;
处理模块，用于基于所述指令更新主题、更新壁纸、更新提示音、更新铃声或者播放音乐。
附图说明
图1为本申请实施例提供的调节用户情绪的系统框架示意图;
图2A为本申请实施例提供的终端设备示意图;
图2B为本申请实施例提供的可穿戴设备示意图;
图3为本申请实施例提供的调节用户情绪的方法流程图;
图4为本申请实施例提供的用户参数表更新方法流程示意图;
图5A~图5C为本申请实施例提供的用户情绪状态显示示意图;
图6为本申请实施例提供的用户情绪状态选择界面显示示意图;
图7为本申请实施例提供的调节用户情绪的方法示意图;
图8为本申请实施例提供的用户输入活动的界面显示示意图;
图9为本申请实施例提供的活动信息显示示意图;
图10A为本申请实施例提供的其它用户信息显示示意图;
图10B为本申请实施例提供的用户之间交互示意图;
图11为本申请实施例提供的可穿戴设备显示情绪状态示意图;
图12为本申请实施例提供的一种调节用户情绪的装置示意图;
图13为本申请实施例提供的另一种调节用户情绪的装置示意图。
具体实施方式
为了使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请作进一步地详细描述。
当前,可能有一些专门的训练系统,例如现有的心率变异性(Heart rate variability,HRV)系统,通过一些专用的采集装置来采集心率、血氧饱和度、脉搏强度等,并根据采集到的数据给出一些调节方案,例如放松训练、音乐调适和自主调节来帮助用户调节情绪。
上述的方案,或者与上述方案类似的其他方案,可一定程度上帮助用户调节情绪。但是,上述方案也存在很大的改进空间。具体的,上述的调节情绪的方法需要专业的采集装置来实现,并且没有考虑不同用户的差异性,给出的调节情绪的方案也不会实时智能变化。因此,对很多用户来说,调节情绪的方式会不智能,效果也会不明显。
基于此,本申请实施例提供一种调节用户情绪的方法及装置,提供一种便利且有效的调节人们的情绪的方法。其中,方法和装置是基于同一发明构思的,由于方法及装置解决问题的原理相似,因此装置与方法的实施可以相互参见,重复之处不再赘述。
本申请实施例应用的调节用户情绪的系统框架包括至少一个终端设备以及服务器。如图1所示,图1所示的用户情绪的调节系统框架包括终端设备110a以及终端设备110b,以及服务器120。
其中，终端设备110a与终端设备110b之间通过无线方式连接，可采用的无线方式包括但不限于各种无线短距离通信方式，比如：蓝牙、近场通信（英文：Near Field Communication，简称：NFC）、紫蜂（ZigBee）、红外、无线保真（英文：Wireless Fidelity，简称：WiFi）等。本申请实施例中对此不作具体限定。可以理解的是，终端设备110a与终端设备110b之间也可通过移动通信方式进行通信。终端设备110a以及终端设备110b在与服务器120连接时，可以通过无线通信技术连接，可采用的无线通信方式可以是移动通信，包括但不仅限于第二代移动通信技术、第三代移动通信技术、第四代移动通信技术、第五代移动通信技术等等，也可以是其他无线通信方式，如WiFi，或其他无线短距离通信方式，本申请实施例对此不作具体限定。
本申请实施例的服务器可以是服务计算机、大型计算机等等。本申请实施例的终端设备包括但不限于个人计算机、手持式或膝上型设备、移动设备(比如移动电话、平板电脑、智能手环、智能手表、个人数字助理等等),下面以智能移动终端为例对本申请实施例提供的方案进行具体描述。
下面以一个终端设备与服务器交互为例,具体说明调节用户情绪的方法。以终端设备为智能手机300为例,如图2A所示。智能手机300包括显示设备310、处理器320以及存储器330。存储器330可用于存储软件程序以及数据,处理器320通过运行存储在存储器330的软件程序以及数据,从而执行智能手机300的各种功能应用以及数据处理。存储器330可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序等;存储数据区可存储根据智能手机300的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器330可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。处理器320是智能手机300的控制中心,利用各种接口和线路连接整个终端的各个部分,通过运行或执行存储在存储器330内的软件程序和/或数据,执行智能手机300的各种功能和处理数据,从而对终端进行整体监控。处理器320可以包括一个或多个通用处理器,还可包括一个或多个数字信号处理器(英文:Digital Signal Processor,简称:DSP),用于执行相关操作,以实现本申请实施例所提供的技术方案。
智能手机300还可以包括输入设备340，用于接收输入的数字信息、字符信息或接触式触摸操作/非接触式手势，以及产生与智能手机300的用户设置以及功能控制有关的信号输入等。具体地，本申请实施例中，该输入设备340可以包括触控面板341。触控面板341，也称为触摸屏，可收集用户在其上或附近的触摸操作（比如用户使用手指、触笔等任何适合的物体或附件在触控面板341上或在触控面板341附近的操作），并根据预先设定的程式驱动相应的连接装置。可选的，触控面板341可包括触摸检测装置和触摸控制器两个部分。其中，触摸检测装置检测用户的触摸方位，并检测触摸操作带来的信号，将信号传送给触摸控制器；触摸控制器从触摸检测装置上接收触摸信息，并将它转换成触点坐标，再送给处理器320，并能接收处理器320发来的命令并加以执行。例如，用户在触控面板341上用手指单击一张图像缩略图，触摸检测装置检测到此次单击带来的这个信号，然后将该信号传送给触摸控制器，触摸控制器再将这个信号转换成坐标发送给处理器320，处理器320根据该坐标和该信号的类型（单击或双击）确定对该图像所执行的操作（如图像放大、图像全屏显示），然后，确定执行该操作所需要占用的内存空间，若需要占用的内存空间小于空闲内存，则将该放大后的图像全屏显示在显示设备包括的显示面板311上，从而实现图像显示。
触控面板341可以采用电阻式、电容式、红外线以及表面声波等多种类型实现。除了触控面板341,输入设备340还可以包括其他输入设备342,其他输入设备342可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆等中的一种或多种。
显示设备310,包括的显示面板311,用于显示由用户输入的信息或提供给用户的信息以及终端设备300的各种菜单界面等,在本申请实施例中主要用于显示智能手机300中图像。可选的,显示面板可以采用液晶显示器(英文:Liquid Crystal Display,简称:LCD)或有机发光二极管(英文:Organic Light-Emitting Diode,简称:OLED)等形式来配置显示面板311。在其他一些实施例中,触控面板341可覆盖显示面板311上,形成触摸显示屏。
除以上之外，智能手机300还可以包括用于给其他模块供电的电源350以及用于拍摄照片或视频的摄像头360。智能手机300还可以包括一个或多个传感器370，例如光线传感器、语音受话器、压力传感器、以及图像传感器等。光线传感器用于采集光线值；语音受话器用于采集语音的音量以及语音的频率值、语音速率或者语音强度等；压力传感器用于采集用户的触屏的力度值；图像传感器用于拍摄图像等等。
智能手机300还可以包括收发器,收发器包括无线通信模块,比如无线射频(英文:Radio Frequency,简称:RF)电路380,用于与无线网络设备进行网络通信,还可以包括WiFi模块381,用于与其他设备进行WiFi通信;还可以包括红外模块或者蓝牙模块等等。智能手机300还可以包括扬声器390,扬声器390用于播放音乐、语音提示或者发出提示音等等。
本申请实施例中，调节用户情绪的系统中除包括上述智能手机300以外，还包括可穿戴设备300a，比如手环或者手表。如图2B所示，可穿戴设备300a可以包括收发器301a，其中，收发器301a中可以包括如下至少一种：红外模块、蓝牙模块或者WiFi模块等。即，可穿戴设备通过收发器301a与所述智能手机300通信。另外，可穿戴设备300a还可以包括射频电路，具体通过射频电路与服务器连接。可穿戴设备中可以包括一个或者多个传感器302a，比如体温传感器、脉搏传感器以及血压传感器等等。体温传感器用于采集用户的体温，脉搏传感器用于采集用户的脉搏，血压传感器用于采集用户的血压。可穿戴设备还可以包括显示设备303a，显示设备303a用于显示提示信息、图片、主题、壁纸或者活动信息等等。可穿戴设备300a还可以包括处理器304a，用于执行各种传感器数据的收集等操作。可穿戴设备300a还可以包括扬声器305a。扬声器305a用于语音提示、播放音乐、播放铃声等。可穿戴设备300a还包括存储器306a，存储器306a可用于存储数据以及处理器304a执行的软件程序等。
在说明具体终端用户情绪的调节方法之前,先对本申请实施例涉及的各个名词进行解释说明。
1、用于表征用户身体情况的数据,可以是直接或间接表征用户的身体状态的数据,所述数据中包括用户的至少一个参数值,比如心率、体温、脉搏、血压、用户感受到的光线大小、用户触摸显示设备的力度、语音的音量、语音的频率、语音强度、语音速率或者面部表情等等。
2、情绪信息，用于反映用户的当前情绪以及健康状况。情绪可以分为正常情绪或者不正常情绪，比如喜、乐属于正常情绪，而怒、哀、恐惧属于不正常情绪，健康状况分为健康或者非健康等等。不正常情绪也属于非健康状态，当然非健康状态还包括用户可能出现病态，比如体温过高、发烧的情况，或者心脏跳动间停等等。情绪信息中可以包括用户的情绪值和/或用户所处的情绪状态。情绪状态可以包括正常情绪或者不正常情绪，还可以包括喜、悲伤、乐、生气、激动等等。
其中,不同的情绪状态对应不同的情绪值范围。比如人在激动的情绪下,情绪波动比较大,因此激动对应的情绪值范围相对比较高。对于喜来说,相对情绪波动较小,从而喜对应的情绪值范围相对比较低。具体情绪值范围设定,可根据各个情况数据所占的比重以及实际情况配置,本申请实施例对此不作具体限定。
另外,不同的情绪状态还可以对应不同的参数值范围,比如:在血压大于第一预设值,脉搏在预设范围内,体温小于第二预设值时,表示恐惧情绪。
3、活动信息,用于说明用户在某一个时间段所参与的活动,比如公园散步、跑步或者读书等等;还可以是给用户的一些建议,比如提醒用户注意情绪不要激动、或者提醒用户注意涂抹防晒霜等等。
4、互动信息,用于表征用户与其他用户之间的互动。
5、用户参数表,包括不同参数对应的权重。比如体温对应权重为a1,脉搏对应的权重为a2,等等。
6、调节用户情绪的操作，包括但不仅限于显示活动信息、显示互动信息、播放音乐、弹出图片、播放语音、修改主题、修改壁纸、与其他用户互动、修改提示音、震动提示、提示用户所处的情绪状态等等。还可以包括更新可穿戴设备的主题、壁纸、提示音，在可穿戴设备上弹出图片、播放语音等等。
参见图3所示,为一种调节用户情绪的方法示意图。
在用户佩戴手环以及手表过程中,手环或者手表监控用户的心率值、血压值、体温以及脉搏强度等,并通过蓝牙或者红外发送给到智能手机300。
S301,智能手机300获取用于表征用户身体情况的数据。
所述数据包括第一数据,所述第一数据为与所述终端设备连接的可穿戴设备针对所述用户检测到的至少一个参数值。第一数据中包括心率值、血压值、体温以及脉搏强度等等。
可选地，可穿戴设备可以是手环或者手表，手环或者手表可以周期性上报用于表征用户身体情况的数据，比如每隔一个小时上报一次。比如，手环或者手表在一个周期内多个时间点采集数据，然后计算多个时间点采集到的用户同一参数的参数值的平均值，并将平均值发给智能手机300。当然，手环或者手表还可以在一个周期内多个时间点采集用户的数据后，不作统计，直接发给智能手机300。
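以按周期求平均值后上报为例，上述统计过程可以示意如下（仅为帮助理解的示例性代码，函数名 aggregate_cycle 及参数名均为本文之外的假设，并非对实现方式的限定）：

```python
# 示例：可穿戴设备在一个上报周期内对同一参数的多个采样值求平均，
# 再将平均值上报给终端设备（周期长短、参数种类均为假设）。
from statistics import mean

def aggregate_cycle(samples: dict) -> dict:
    """samples: {参数名: [周期内多个时间点的采样值]}；返回各参数的平均值。"""
    return {name: mean(values) for name, values in samples.items()}

cycle_samples = {
    "heart_rate": [72, 75, 78],        # 心率值（次/分钟）
    "body_temp": [36.5, 36.6, 36.7],   # 体温（摄氏度）
}
report = aggregate_cycle(cycle_samples)
# report 中 heart_rate 的平均值为 75，body_temp 的平均值约为 36.6
```

若不作统计，则可将 cycle_samples 原样直接发给终端设备。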
S302,所述智能手机300获取基于所述数据确定的情绪信息。
S303,所述智能手机300执行所述情绪信息对应的用于调节所述用户情绪的操作。
其中,用于表征用户身体情况的数据中包括的参数数量本申请实施例不作具体限定。
通过上述方案提供了一种便利有效的调节用户情绪的方法，不再需要专门的设备，比如指夹探头，而是通过可穿戴设备来实时获取用于表征用户身体情况的参数值，并发给终端设备，终端设备基于参数值来确定用户情绪，然后执行调节用户情绪的操作，给用户提供了便利性，并且实时性、可操作性较高。
在步骤S302中,智能手机300获取基于用于表征用户身体情况的数据确定的情绪信息时,可以由智能手机300自己基于所述数据确定情绪信息。
另外,为了提高确定用户情绪的精度,还可以由智能手机300检测第二数据,从而智能手机300获取到的用于表征用户身体情况的数据中还可以包括第二数据,所述第二数据包括所述用户的至少一个参数值。所述第二数据可以包括如下参数值的至少一种:语音速率、语音强度、按压屏幕力度或面部表情等。
在一种可能的实现方式中,所述智能手机300获取基于所述数据确定的情绪信息时,可以由智能手机300根据每个参数对应的阈值,基于参数与阈值关系来确定情绪信息。比如数据中仅包括一个参数,比如体温,一般体温的正常范围为36~37摄氏度。智能手机300可以统计在一个周期T内的体温值,比如一个小时。智能手机300可以统计一个小时内用户体温的平均值,若用户的体温平均值不在正常范围内,则确定用户发烧。智能手机300向所述用户显示互动信息,比如显示“您的当前体温未在正常范围内,请注意”。
再比如数据中包括3个参数，比如体温、脉搏以及血压，血压的正常范围为90~130mmHg，脉搏的正常范围为：每分钟60~100次。假设用户的血压高于正常范围，比如：血压大于130，脉搏正常，比如脉搏处于60~100范围内、体温较低，比如体温小于36.2，该状况可能是由于用户过度害怕、恐惧造成的，智能手机300确定用户处于该状态时，可以播放用于缓解用户的情绪的音乐，还可以更改显示壁纸，比如大海的图片等等。本申请实施例可以针对不同的情绪配置音乐库，以及图片库等等。在用户血压偏高、脉搏较快、体温偏高的情况下，可能是由于用户过度兴奋导致的，智能手机300确定用户处于该状态下时，可以播放用于缓解用户的情绪的音乐，比如节奏稍慢的轻音乐。
需要说明的是,本申请实施例中针对每种参数对应的状态可以设置一个阈值,比如当用户血压大于第一阈值时,表示用户血压偏高,其它参数亦如此,本申请实施例中不再一一列举。
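上述基于阈值的判断逻辑可以示意如下（其中 130、60~100、36.2 等阈值沿用上文示例，函数名与返回的状态标签均为便于说明的假设，规则并非穷举）：

```python
def infer_state(blood_pressure: float, pulse: float, body_temp: float) -> str:
    """基于阈值规则判断用户的情绪/健康状态（示例规则）。"""
    # 血压高于正常范围、脉搏正常、体温偏低：可能为恐惧情绪
    if blood_pressure > 130 and 60 <= pulse <= 100 and body_temp < 36.2:
        return "恐惧"
    # 血压偏高、脉搏较快、体温偏高：可能为过度兴奋
    if blood_pressure > 130 and pulse > 100 and body_temp > 37.0:
        return "兴奋"
    # 体温不在 36~37 摄氏度的正常范围内：提示体温异常
    if not 36.0 <= body_temp <= 37.0:
        return "体温异常"
    return "正常"

infer_state(135, 80, 36.0)   # 恐惧
infer_state(140, 110, 37.5)  # 兴奋
```

终端设备可根据返回的状态标签选择对应的调节操作，比如播放舒缓音乐或显示提示信息。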
在另一种可能的实现方式中,智能手机300可以根据所述数据中每个参数的参数值和对应的权重来确定所述情绪信息。具体的,智能手机300端可以维护一个用户参数表,用户参数表中包括不同参数对应的权重。在基于多个参数值确定情绪信息,所述终端设备可以通过查找用户参数表获取至少一个参数中每个参数对应的权重,并分别将所述每个参数对应的参数值乘上其对应的权重并求和得到所述用户的情绪值。
用户参数表中包括不同参数对应的权重。比如,语音的音量的权重为a1, 心率值为a2,语音的频率值为a3,其它参数对应的权重本申请实施例不再一一列举。
具体的,根据存储的权重计算用户现在的情绪值,例如情绪值E=a1*语音的音量+a2*心率值+a3*语音的频率值+……。
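按照该公式，情绪值即各参数值与用户参数表中对应权重的加权和，可示意如下（权重数值与参数名仅为假设的示例值）：

```python
def emotion_value(params: dict, weight_table: dict) -> float:
    """情绪值 E = Σ(参数值 × 用户参数表中对应的权重)。"""
    return sum(value * weight_table[name] for name, value in params.items())

# 假设的用户参数表：{参数名: 权重}
user_weight_table = {"voice_volume": 0.3, "heart_rate": 0.5, "voice_freq": 0.2}
E = emotion_value(
    {"voice_volume": 60, "heart_rate": 80, "voice_freq": 200},
    user_weight_table,
)
# E = 60*0.3 + 80*0.5 + 200*0.2 = 98.0
```

终端设备随后可将情绪值 E 映射到对应的情绪值范围，确定用户所处的情绪状态。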
本申请实施例中，用户参数表可以是配置在智能手机300上的。用户参数表还可以是由服务器发送给智能手机300的。服务器可以周期性地更新用户参数表，然后将更新后的用户参数表发送给智能手机300。如果情绪信息能够直接表征情绪状态，则智能手机300可以将确定后的用户情绪信息显示给用户；如果情绪信息不能直接表征情绪状态，则根据情绪信息确定用户所处的情绪状态，然后将用户所处的情绪状态显示给用户，从而提醒用户确认情绪状态是否准确。在提示所述用户当前所处的情绪状态时，智能手机300可以通过语音或界面显示，来提示所述用户当前所处的情绪状态；智能手机300还可以将所述情绪状态发送给所述可穿戴设备，通过所述可穿戴设备来提示所述用户当前所处的情绪状态。具体的可以每隔一段时间将情绪状态显示给用户来确认。具体的，更新用户参数表的流程如图4所示。
S401,在智能手机300将所述用户的情绪信息显示给用户之后,所述智能手机300接收到所述用户触发的操作指令,所述操作指令用于指示所述用户认可所述情绪信息或者对所述情绪信息进行修改。
当然用户可以根据自身情况对智能手机300进行配置,不触发显示操作指令。
S402,所述智能手机300将用户认可的情绪信息或者修改后的情绪信息发送给所述服务器。从而所述服务器接收所述智能手机300发送的经过所述用户确认后的情绪信息。
例如,用户当时的情绪为高兴(喜),则此时情绪信息为喜。情绪信息在显示给用户可以通过多种表现形式,例如图5A或者5B所示的显示界面显示的情绪信息。显示界面中还显示用户可以确认的图标,例如图5A或者5B中的“是”、“否”。针对图5A为例,例如,用户点击图标“否”,则可以显示图 6对应的显示界面。从而用户可以选择一个对应的情绪图标。
另外,所述智能手机300在将所述情绪信息显示给用户时,可以按时间段向用户呈现服务器120根据上传的用于表征用户身体情况的数据计算出的情绪信息,形式如表1所示。具体显示形式可以如图5C所示。
表1
时间段1
时间段2
时间段3
经过用户根据自己的实际情况对情绪信息进行修正后，智能手机300可以将修正后的情绪信息上传到服务器，从而服务器基于用户确认的情绪信息更新保存的用户参数表。
服务器会收集不同用户的数据,包括上传的各个用户的用于表征用户身体情况的数据和情绪信息,按用户ID存储到数据库中,单个用户的数据如表2的形式:
表2
时间段1 数据1 情绪信息1(喜) 用户是否确认(是)
时间段2 数据2 情绪信息2(喜) 用户是否确认(否)
时间段3 数据3 情绪信息3(悲伤) 用户是否确认(是)
比如：数据1可以包括敲击键盘的力度、屏幕压力传感器感知到的压力值、心率值、脉搏强度。数据2可以与数据1包括的参数相同，也可以不同，比如数据2中在数据1包括的所有的参数的基础上，还可以包括语音的音量、频率以及表情数据。比如在统计数据1的周期内，用户没有进行通话，而在统计数据2的周期内，用户与其他用户进行语音通话，在语音通话过程中开启了摄像头，从而获取到了用户的表情数据。当然数据3包括的参数可以与数据1或者数据2相同，还可以与数据1或者数据2均不同，本申请实施例对此不作具体限定。
需要说明的是，用户是否确认字段如果是“是”，表示情绪信息判断正确，如果是“否”，表示用户尚未确认或更新。在存储时，只有在用户情绪信息发生变化时，才会另起一个时间段，如果情绪信息不变，则只更新时间段和用于表征用户身体情况的数据。即，如果服务器在数据库中查找到的距离当前时刻最近的时间段的情绪信息与当前确认的用户的情绪信息相同，则更新距离当前时刻最近的时间段的用于表征用户身体情况的数据以及时间段。
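上述“情绪不变时仅更新当前时间段、情绪变化时另起时间段”的存储规则可示意如下（记录的字段名与时间段的表示方式均为便于说明的假设）：

```python
def store_record(db: list, data, emotion: str, confirmed: bool) -> None:
    """按时间段存储数据：若最近一条记录的情绪信息与当前相同，
    只更新该记录的数据并延长其时间段；否则另起一个新的时间段记录。"""
    if db and db[-1]["emotion"] == emotion:
        db[-1]["data"] = data        # 更新用于表征身体情况的数据
        db[-1]["segments"] += 1      # 示意：延长该条记录覆盖的时间段
    else:
        db.append({"data": data, "emotion": emotion,
                   "confirmed": confirmed, "segments": 1})

records = []
store_record(records, "数据1", "喜", True)
store_record(records, "数据2", "喜", False)    # 情绪不变：仍是同一条记录
store_record(records, "数据3", "悲伤", True)   # 情绪变化：另起时间段
# 此时 records 中只有两条记录
```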
可选地,所述服务器通过如下方式对所述用户对应的用户参数表进行更新:
S403,所述服务器获取在第一设定时间段内由所述智能手机300发送的用于表征所述用户的身体情况的数据,以及经过所述用户确认的所述数据对应的情绪值。
S404，服务器基于所述用于表征所述用户的身体情况的数据以及所述用于表征所述用户的身体情况的数据对应的情绪值，调整当前时刻所述用户对应的用户参数表中包括的用于表征用户的身体情况的参数对应的权重得到更新后的所述用户对应的用户参数表。其中，第一设定时间段可以是一周、两周、一天等等。也就是服务器每隔第一设定时间段更新一次用户参数表。
可选地,所述服务器获取由所述智能手机300发送的用于表征用户的身体情况的数据,以及经过所述用户确认的情绪值后,可以先对用于表征用户的身体情况的数据以及情绪值进行清洗以及去重处理。所述服务器可以周期性对用于表征用户的身体情况的数据以及情绪值进行清洗或者去重处理。比如一周、两周、一天等等,本申请实施例在此不作具体限定。
具体的,服务器基于所述用于表征用户的身体情况的数据以及所述用于表征用户的身体情况的数据对应的情绪值,调整当前时刻所述智能手机300的用户对应的用户参数表中包括的不同参数对应的权值时,可以通过人工智能算法训练并调整权值。
人工智能算法可以但不仅限于神经网络、遗传算法以及多项式算法等等。
比如,更新后的用户参数表,可以如表3所示。
表3
用户ID1 参数1
用户ID2 参数2
表3中参数1可以包括用户ID1的语音的音量、语音的频率、敲击键盘的力度、屏幕压力传感器感知到的压力值、心率值、脉搏强度、面部表情等分别对应的权重。参数2中包括用户ID2的各个参数的权重。
现有技术中判断情绪都是使用固定的参数表,这样并没有考虑到用户的差异性,比如有的用户心率比较快,有的用户心率比较慢等。使用人工智能算法来更新参数可以提高判断情绪的精度。
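作为用人工智能算法更新权重的一个最简化示意（并非本申请所列神经网络、遗传算法或多项式算法本身），可以对预测情绪值与用户确认情绪值之间的平方误差做梯度下降，逐步调整用户参数表中各参数的权重：

```python
def update_weights(weights: dict, samples, lr: float = 0.1) -> dict:
    """weights: {参数名: 权重}；samples: [(参数值字典, 用户确认的情绪值)]。
    对误差 (E_pred - E_confirmed) 的平方做一步梯度下降。"""
    for params, target in samples:
        pred = sum(params[k] * weights[k] for k in weights)
        err = pred - target
        for k in weights:
            # ∂(err²)/∂a_k ∝ err·x_k，常数因子并入学习率 lr
            weights[k] -= lr * err * params[k]
    return weights

# 示例：单参数情形下，权重会逐步逼近能复现用户确认情绪值的取值
w = {"heart_rate": 0.0}
for _ in range(200):
    update_weights(w, [({"heart_rate": 1.0}, 1.0)])
# 迭代后 w["heart_rate"] 趋近于 1.0
```

实际部署时，服务器可在每个第一设定时间段内用该用户经确认的数据批量执行类似更新，从而体现用户间的差异性。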
在一种可能的实现方式中,智能手机300还可以通过自身内置的传感器检测用户的参数值。从而智能手机300基于接收到手环或者手表发送的参数值以及检测到的参数值来确定情绪信息。
比如，可以通过智能手机300内置的语音受话器来检测用户的参数值。当用户处于比较危险的环境时，比如攀岩、高空作业或者驾驶时，智能手机300能够自行启动或者通过人工启动来获取用户的语音相关的参数值。比如智能手机通过检测海拔高度或者移动速度来确定是否开启获取语音相关的参数值，但不仅限于此。相关的参数值可以包括语音音量、语音速率以及语音强度等等。以用户驾驶为例，当用户处于驾驶状态时，可能因为周围其他驾驶员或者路况而影响驾驶心情。比如出现堵车时，用户可能会出现比较焦虑的情绪状态，甚至处于较为激动的情绪状态，而说出一些比较急躁的话。用户处于情绪激动的状态时，说话语速会变快，声调也会升高，从而智能手机300可以通过该情况确定用户的情绪。当用户处于情绪激动时，智能手机300可以提醒用户“注意不要太激动”。智能手机300还可以从数据库中存储的音乐中选择舒缓型或者优美的音乐来改善用户的激动的情绪。如果用户的当前情绪比较悲伤，智能手机300可以播放积极、轻快的音乐，来改善用户的情绪。本申请实施例中并不限定播放音乐，还可以通过弹出图片、或者播放语音等方式来改善用户的情绪。比如当用户处于悲伤情绪时，播放一段语音笑话或者显示一个鬼脸等来改善用户的情绪。
另外,用户在语音通话过程中,终端可检测用户的语音频率、语音速率、语音强度或语音音量等信息。结合检测的语音相关的数据以及其他检测到的与用户身体情况相关的数据,一起来判断用户的情绪。例如,可结合检测的语音相关的数据,结合检测到的用户心率或脉搏等来判断用户情绪是否波动大,或判断用户具体处于什么情绪状态。类似的,用户在视频通话过程中,可通过智能手机300内置的图像传感器来获取用户的面部表情数据。通过表情来判断用户处于那种情绪。从而通过语音检测的参数与面部表情数据相结合来确定用户的情绪,准确度更高。
由于面部表情检测占用资源相对比较大,因此而本申请实施例还可以对开启面部表情检测设置触发条件,比如检测到用户心率偏高或者脉搏次数处于不正常范围内时,来开启面部表情检测。可以理解的是,语音检测也可以类似的触发条件。
当然本申请实施例中并不限定通过一种或两种或者多种传感器来获取用户的用于表征用户身体情况的数据。比如可以通过图像传感器、语音受话器以及压力传感器3者获取的参数值来判断用户情绪。
与不正常情绪相比，用户在正常情绪下通过输入面板输入信息时，压力传感器采集到的输入频度较低，压力值较小；通过语音受话器采集的音量较小；通过图像传感器采集得到的表情信息为正常。从而可以通过三者来判断用户所处的情绪。
本申请实施例中，在执行用于调节所述用户情绪的操作时，可以基于终端设备以及可穿戴设备当前的状态或者用户当前所进行的活动来执行对应的操作，比如在用户通话过程中或者视频通话过程中，若采用通过终端设备来进行语音提示或者图片提示，会影响用户当前的操作，且对调节用户情绪起到的作用可能不大，因此在此时可以通过可穿戴设备来显示活动信息、或者通过可穿戴设备进行语音提示或者震动提示等等方式。再比如，若用户处于攀岩、高空作业或者驾驶时，此时用户可能不方便查看屏幕，因此若采用显示图片、修改主题或者修改壁纸的方式，对调节用户情绪起到的作用不大，因此可以通过语音提示或者播放音乐或者震动提示等等方式来提示用户。
本申请实施例中，为了节省智能手机300的存储空间，可以由服务器端确定用户的情绪信息，并向终端发送推荐的信息或者可互动的其它用户的信息，具体参见图7所示，本申请实施例提供的另一种调节用户情绪的方法示意图，与图3以及图4所对应的实施例重复的地方，本申请实施例在此不再赘述。
S410,在用户佩戴手环以及手表过程中,手环或者手表监控用户的至少一个参数值,并通过蓝牙或者红外发送给到智能手机300。具体的手环或者手表的监控情况可以参考图3所对应的实施例,本申请实施例在此不再赘述。
手环或者手表监控用户的至少一个参数值可以但不限于包括以下至少一项:脉搏强度、心率值、血压值以及体温等等。
S420,智能手机300通过内置传感器监控用户的至少一个参数值;
智能手机300监控到的至少一个参数值可以包括但不仅限于以下至少一项：语音的音量、语音的频率、屏幕压力传感器感知到的压力值以及通过相机拍摄用户照片得到的用户表情数据等。
智能手机300接收到的手环或者手表发送的至少一个参数值以及自身监控到的至少一个参数值构成了用户的用于表征用户身体情况的数据。
S430,智能手机300将用于表征用户身体情况的数据发送给服务器。
S440，服务器基于所述用于表征用户身体情况的数据确定用户的情绪信息。具体的，在确定用户的情绪信息时可以通过前述实施例中所述的方式实现。
S450,服务器将所述用户的情绪信息发送给智能手机300。
S460,智能手机300将所述用户的情绪信息显示给用户。
可选的，S461，在智能手机300将所述用户的情绪信息显示给用户之后，所述智能手机300接收到所述用户触发的操作指令，所述操作指令用于指示所述用户认可所述情绪信息或者对所述情绪信息进行修改。
当然用户可以根据自身情况对智能手机300进行配置,不触发显示操作指令。
S462,所述智能手机300将用户认可的情绪信息或者修改后的情绪信息发送给所述服务器。从而所述服务器接收所述智能手机300发送的经过所述用户确认后的情绪信息,服务器根据用户认可的情绪信息或者修改后的情绪信息以及用于表征用户身体情况的数据来更新所述用户对应的用户参数表,具体方式可以参见图4所示的实施例。
可选地,所述方法还包括:
S470，在智能手机300将所述情绪信息显示给所述用户之后，各个用户（图7中以智能手机300的用户为例）可以输入在当前时段或者显示给用户的时段对应的活动，比如读书、跑步、听音乐等等。例如，如图8所示的显示界面，智能手机300的用户在图8所示的输入框中输入活动后，点击“确认”图标，从而S471，智能手机300接收到对应的用户触发的输入指令后，将用户输入的读书、跑步、听音乐等等活动信息发送给服务器。输入指令中携带用户当前的活动信息。或者智能手机300在获取其对应的用户的用于表征用户身体情况的数据后、将用于表征用户身体情况的数据发送给服务器之前，向用户显示指示信息，指示用户输入在当前时段对应的活动，从而智能手机300能够获取用户输入的在当前时段的活动，比如读书、跑步、听音乐等等。从而各个不同的终端设备将获取的对应的用户输入的读书、跑步、听音乐等等信息发送给服务器。
基于此，所述服务器基于用户对应的用户参数表以及用于表征用户身体情况的数据生成用户的情绪值后，所述服务器确定所述用户的情绪值对应的第一情绪状态；其中，不同的情绪状态对应不同的情绪值范围；其中情绪状态可以包括正常情绪以及不正常情绪，还可以包括喜、怒等等。所述服务器获取处于第二情绪状态的其它用户所进行的活动推荐给该终端设备的用户。可选的，该处于第二情绪状态的其他用户可以是距离该终端设备对应的用户的距离小于等于预设距离阈值的用户，所进行的活动可以是最近一段时间（比如一天或一周等时间段）内进行的合适的活动。比如第一情绪状态为不正常情绪，则第二情绪状态为正常情绪。如果用户处于不正常的情绪，如悲伤，那么服务器可以选择处于喜或乐状态的用户所进行的活动推荐给该终端设备对应的用户。本申请实施例针对配置方式不作具体限定。另外，第二情绪状态基于配置方式来确定。
另外，终端或服务器在给用户推荐调节情绪的活动时，还可以根据对应用户自己的活动进行智能筛选。例如，若该用户本身的运动量很高，如步数超过预设阈值，那么运动（或者说步行）对该用户情绪的调节作用可能较小，在给用户推荐调节方案时，可优先推荐除步行外的其他活动。
S472,所述服务器将其它用户的活动信息发送给智能手机300。
可选地,S472a,智能手机300还可以将其它用户的活动信息发送给手环或手表。在手环或者手表与服务器通过无线连接的情况下,可以由服务器将其它用户的活动信息发送给手环或者手表。
S473,智能手机300将其它用户的活动信息推荐给智能手机300的用户。
例如,当前用户的情绪状态是“悲伤”,从而服务器获取处于“喜”的用户并且处于“喜”的用户与智能手机300的用户之间的距离小于3公里,比如经过确认后,用户李四满足上述要求。从而所述服务器将李四在距离当前时刻最近的第二设定时间段内所进行的活动对应的活动信息发送给智能手机300,所述智能手机300将所述活动信息显示给用户,比如用户收到的显示信息为如图9所示的显示界面显示的信息。
如果手环或者手表带有显示功能,则智能手机300还可以将其它用户的活动信息发送到手环或者手表,从而手环或者手表将其它用户的活动信息显示给用户。
通过上述实现方式，服务器利用数据挖掘技术综合用户周围的用户情绪信息和用户的传感器数据，给用户提供相应的建议。比如用户心情为伤心，则通过获取用户周围心情为开心的用户数据，综合处理，给该用户提供建议（例如服务器端筛选用户周围情绪为喜的用户最近时间段的活动为在某公园跑步，则向该用户建议去该公园跑步）。现有技术中仅考虑到调整个人的情绪，而现实中其他人，尤其是周围人调节情绪的方法往往对自己也有帮助，上述实现方式中在提供建议的时候通过数据挖掘技术考虑到了周围人调节情绪的方法，增加了调节的有效性。
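上述“筛选周围情绪为喜的用户、并把其最近时间段的活动推荐给当前用户”的逻辑可以示意如下（用户记录的字段名以及 3 公里的距离阈值均为沿用上文示例的假设）：

```python
def recommend_activity(users, me_id: str, max_km: float = 3.0):
    """从距离不超过 max_km 且情绪为“喜”的其他用户中，
    取其最近时间段的活动作为给当前用户的建议。"""
    for u in users:
        if (u["id"] != me_id and u["emotion"] == "喜"
                and u["distance_km"] <= max_km):
            return u["recent_activity"]
    return None  # 周围没有合适的用户时不给出建议

users = [
    {"id": "李四", "emotion": "喜", "distance_km": 2.0,
     "recent_activity": "在某公园跑步"},
    {"id": "王五", "emotion": "哀", "distance_km": 1.0,
     "recent_activity": "读书"},
]
recommend_activity(users, "张三")  # 返回 "在某公园跑步"
```

服务器得到推荐的活动信息后，再下发给终端设备或可穿戴设备进行显示。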
在一种可能的实现方式中，在S470~S471中将用户的情绪信息显示给用户后，用户触发输入其在距离当前时刻最近的第二设定时间段内所进行的活动对应的活动信息，所述智能手机300检测到由所述智能手机300的用户触发的输入指令，所述输入指令携带用户在距离当前时刻最近的第二设定时间段内所进行的活动对应的活动信息。第二设定时间段可以是一个小时、两个小时、或者一天等等，本申请实施例不作具体限定。以一小时为第二设定时间段为例，比如当前用户在2点10分输入的活动信息，则服务器可以查找在1点到2点进行活动的用户。
智能手机300将接收到的活动信息发送给服务器，从而所述服务器接收所述智能手机300发送的所述用户在距离当前时刻最近的第二设定时间段内所进行的活动对应的活动信息；所述服务器基于所述用户对应的用户参数表以及所述用于表征用户身体情况的数据生成所述智能手机300的用户的情绪值之后，可以确定所述用户的情绪值对应的第一情绪状态；其中，不同的情绪状态对应不同的情绪值范围；所述服务器获取处于所述第一情绪状态且与所述智能手机300的用户的活动信息相同的其它用户的用户信息。
所述方法还可以包括:
S480,所述服务器将所述其它用户的用户信息发送给智能手机300。
S481,所述智能手机300将所述用户信息显示给所述智能手机300的用户,以便于所述智能手机300的用户与所述用户信息对应的其它用户进行互动。
服务器接收智能手机300的用户的活动信息后，在数据库中查找智能手机300的用户周围匹配的用户推荐给用户，比如用户输入的是“读书”，且服务器120判定的智能手机300的用户当前的情绪是“哀”，从而服务器查找该用户周围心情也为“哀”、当前活动也是读书的其它用户推荐给该智能手机300的用户，从而两个用户可以进行互动，改善情绪。比如查找到的满足心情为“哀”、当前活动为读书的用户为“小红”，智能手机300将小红的信息推荐给智能手机300的用户，比如图10A所示的显示界面，其中图10A所示的显示界面中的图像用于表示小红的头像，可以是小红自身设置的一个头像AA，本申请实施例对此不作具体限定。所述智能手机300的用户通过单击“小红”图标，可以与小红进行互动，比如图10B所示，图10B中“BB”用于表示智能手机300的用户。
可选地,所述方法还可以包括:
S490,所述智能手机300在获取到服务器发送的用户的情绪信息后,还可以执行所述情绪信息对应的用于调节所述用户情绪的操作。例如,所述智能手机300将给用户的建议显示给用户,例如:推荐给用户可修改的壁纸、主题或者铃声;还可以直接执行修改所述智能手机300的显示壁纸、主题或者铃声等操作,或者修改提示音或者播放音乐等操作,图7中以修改所述智能手机300的显示壁纸、主题或者铃声为例。
S491，所述智能手机300执行更新手环或者手表等设备的主题、壁纸、提示音等。或者所述智能手机300还可以执行更新提示音、播放音乐。
例如当前所述智能手机300的用户的情绪信息为“哀”,所述智能手机300通过红外或蓝牙修改手环或者手表的壁纸,如图11所示。
基于与方法实施例同样的发明构思，本申请实施例还提供一种调节用户情绪的装置，该装置应用于终端设备，具体可以由智能手机300中的处理器320来实现。如图12所示，所述装置可以包括：
数据收集模块1201,用于获取用于表征用户身体情况的数据,所述数据包括第一数据,所述第一数据为与所述终端设备连接的可穿戴设备针对所述用户检测到的至少一个参数值;
数据交互模块1202,用于获取基于所述数据确定的情绪信息;
执行模块1203,用于执行所述情绪信息对应的用于调节所述用户情绪的操作。
在一种可能的实现方式中,所述数据收集模块1201获取到的用于表征用户身体情况的数据中还包括第二数据,所述第二数据为所述终端设备检测的所述用户的至少一个参数值;
所述数据交互模块1202,具体用于获取基于所述第一数据和所述第二数据确定的情绪信息。
在一种可能的实现方式中,所述数据交互模块1202,具体用于:
根据所述数据中每个参数的参数值和对应的权重来确定所述情绪信息。
在一种可能的实现方式中,所述数据交互模块1202,还用于将获取的所述数据发送给服务器;并接收所述服务器根据所述数据返回的所述情绪信息。
在一种可能的实现方式中,所述执行模块1203,具体用于:
提示所述用户当前所处的情绪状态。
在一种可能的实现方式中,所述执行模块1203,具体用于通过语音或界面显示,来提示所述用户当前所处的情绪状态;或将所述情绪状态发送给所述可穿戴设备,通过所述可穿戴设备来提示所述用户当前所处的情绪状态。
在一种可能的实现方式中,所述执行模块1203,具体用于给用户推荐调节情绪的活动信息或互动信息;
所述互动信息用于与其他用户进行互动。
在一种可能的实现方式中,所述第一数据至少包括如下参数值的一种:心率值、血压值或脉搏强度。
在一种可能的实现方式中,所述第二数据至少包括如下参数值的一种:语音速率、语音强度、按压屏幕力度或面部表情。
在一种可能的实现方式中，所述数据交互模块1202，还用于在所述执行模块1203提示所述用户当前所处的情绪状态之后，接收到所述用户触发的操作指令，并将所述用户认可的情绪信息或者修改后的情绪信息发送给所述服务器；所述操作指令用于指示所述用户认可所述情绪信息或者对所述情绪信息进行修改。
本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,另外,在本申请各个实施例中的各功能模块可以集成在一个处理器中,也可以是单独物理存在,也可以两个或两个以上模块集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
采用硬件实现时,该终端的硬件实现可参考图2A及其相关描述。
收发器,用于接收所述终端设备连接的可穿戴设备发送的第一数据;第一数据为所述可穿戴设备针对所述用户检测到的至少一个参数值。
处理器320,用于获取用于表征用户身体情况的数据,所述数据包括所述收发器接收到的所述第一数据,获取基于所述数据确定的情绪信息,并执行所述情绪信息对应的用于调节所述用户情绪的操作。
在一种可能的设计中,所述用于表征用户身体情况的数据中还包括第二数据,所述装置还包括:
至少一个传感器370,用于检测用于表征所述用户身体情况的第二数据,所述第二数据包括至少一个参数值;
所述处理器320在获取基于所述数据确定的情绪信息时,具体用于:获取基于所述第一数据和所述第二数据确定的情绪信息。
在一种可能的设计中,所述处理器320,在获取基于所述数据确定的情绪信息时,具体用于:
根据所述数据中每个参数的参数值和对应的权重来确定所述情绪信息。
在一种可能的设计中,所述收发器,还用于将所述处理器320获取的所述数据发送给服务器,并接收所述服务器根据所述数据返回的所述情绪信息。
在一种可能的设计中,所述处理器320,在执行所述情绪信息对应的用于调节所述用户情绪的操作时,具体用于:
提示所述用户当前所处的情绪状态。
在一种可能的设计中,所述装置还可以包括:扬声器390,用于语音提示;
所述处理器320,具体用于通过所述扬声器390发出语音来提示所述用户当前所处的情绪状态。
在一种可能的设计中,所述装置还可以还包括:显示设备310,用于显示提示信息;
所述处理器320,具体用于通过所述显示设备310显示界面来提示所述用户当前所处的情绪状态。
在一种可能的设计中，所述收发器，还用于将所述情绪状态发送给所述可穿戴设备，以便于所述可穿戴设备提示所述用户当前所处的情绪状态。
在一种可能的设计中，所述处理器320，还用于通过显示设备310给用户推荐调节情绪的活动信息或互动信息；
所述互动信息用于与其他用户进行互动。
其中,所述第一数据至少包括如下参数值的一种:心率值、血压值或脉搏强度。所述第二数据至少包括如下参数值的一种:语音速率、语音强度、按压屏幕力度或面部表情;
所述至少一个传感器370中包括以下至少一种:语音受话器,用于检测语音速率和/或检测语音强度;压力传感器,用于检测按压屏幕力度;图像传感器,用于面部表情。
在一种可能的实现方式中,所述收发器还用于在所述处理器320提示所述用户当前所处的情绪状态之后,接收到所述用户触发的操作指令,并将所述用户认可的情绪信息或者修改后的情绪信息发送给所述服务器,所述操作指令用于指示所述用户认可所述情绪信息或者对所述情绪信息进行修改。
基于与方法实施例同样的发明构思，本申请实施例还提供了一种调节用户情绪的装置，所述装置应用于可穿戴设备，如图13所示，所述装置包括：
检测模块1301,用于检测用于表征用户身体情况的至少一个参数值;
发送模块1302,用于将所述至少一个参数值发送给与所述可穿戴设备连接的终端设备,以便于所述终端设备基于所述至少一个参数值执行针对所述 用户情绪调节的操作。
在一种可能的实现方式中,该装置还可以包括:
接收模块1303,用于接收所述终端设备发送的用于推荐给所述用户的活动信息;
显示模块1304,用于显示所述活动信息。
在一种可能的实现方式中,接收模块1303,用于接收所述终端设备发送的更新主题、更新壁纸、更新提示音、更新铃声或者播放音乐的指令;
处理模块1305,基于所述指令更新主题、更新壁纸、更新提示音、更新铃声或者播放音乐。
本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,另外,在本申请各个实施例中的各功能模块可以集成在一个处理器中,也可以是单独物理存在,也可以两个或两个以上模块集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
采用硬件实现时,该可穿戴设备的硬件实现可参考图2B及其相关描述。
至少一个传感器302a,用于检测用于表征用户身体情况的至少一个参数值;
收发器301a,用于将所述至少一个参数值发送给所述终端设备,以便于所述终端设备基于所述至少一个参数值执行针对所述用户情绪调节的操作。
在一种可能的实现方式中,所述收发器301a,还用于接收所述终端设备发送的用于推荐给所述用户的活动信息;
显示设备303a,用于显示所述活动信息。
在一种可能的实现方式中,所述收发器301a,还用于接收所述终端设备发送的更新主题、更新壁纸、更新提示音、更新铃声或者播放音乐的指令;
所述显示设备303a还用于显示壁纸或者主题;
所述装置还包括处理器304a,用于基于所述指令更新主题、更新壁纸、更新提示音、更新铃声或者播放音乐;
所述显示设备303a,还用于显示所述处理器304a更新的壁纸或者主题;
所述装置还包括扬声器305a,用于发出所述处理器304a更新的提示音,或者发出所述处理器304a更新的铃声、或者播放音乐。
通过上述方案提供了一种便利有效的调节用户情绪的方法，不再需要专门的设备，比如指夹探头，而是通过可穿戴设备来实时获取用于表征用户身体情况的参数值，并发给终端设备，终端设备基于参数值来确定用户情绪，然后执行调节用户情绪的操作，给用户提供了便利性，并且实时性、可操作性较高。
本领域内的技术人员应明白,本申请的实施例可提供为方法、系统、或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本申请是参照根据本申请实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的 处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
显然,本领域的技术人员可以对本申请实施例进行各种改动和变型而不脱离本申请实施例的精神和范围。这样,倘若本申请实施例的这些修改和变型属于本申请权利要求及其等同技术的范围之内,则本申请也意图包含这些改动和变型在内。

Claims (41)

  1. 一种调节用户情绪的方法,其特征在于,包括:
    终端设备获取用于表征用户身体情况的数据,所述数据包括第一数据,所述第一数据为与所述终端设备连接的可穿戴设备针对所述用户检测到的至少一个参数值;
    所述终端设备获取基于所述数据确定的情绪信息;
    所述终端设备执行所述情绪信息对应的用于调节所述用户情绪的操作。
  2. 如权利要求1所述的方法,其特征在于,终端设备获取到的用于表征用户身体情况的数据中还包括第二数据,所述第二数据为所述终端设备检测的所述用户的至少一个参数值;
    所述终端设备获取基于所述数据确定的情绪信息,具体包括:所述终端设备获取基于所述第一数据和所述第二数据确定的情绪信息。
  3. 如权利要求1或2所述的方法,其特征在于,所述终端设备获取基于所述数据确定的情绪信息,包括:
    所述终端设备根据所述数据中每个参数的参数值和对应的权重来确定所述情绪信息。
  4. 如权利要求1或2所述的方法,其特征在于,所述终端设备获取基于所述数据确定的情绪信息,包括:
    所述终端设备将获取的所述数据发送给服务器,并接收所述服务器根据所述数据返回的所述情绪信息。
  5. 如权利要求1-4任一项所述的方法,其特征在于,所述终端设备执行所述情绪信息对应的用于调节所述用户情绪的操作,包括:
    所述终端设备提示所述用户当前所处的情绪状态。
  6. 如权利要求5所述的方法,其特征在于,所述终端设备提示所述用户当前所处的情绪状态,包括:
    所述终端设备通过语音或界面显示，来提示所述用户当前所处的情绪状态；或
    所述终端设备将所述情绪状态发送给所述可穿戴设备,通过所述可穿戴设备来提示所述用户当前所处的情绪状态。
  7. 如权利要求1-4任一项所述的方法,其特征在于,所述终端设备执行所述情绪信息对应的用于调节所述用户情绪的操作,包括:
    所述终端设备给用户推荐调节情绪的活动信息或互动信息;
    所述互动信息用于与其他用户进行互动。
  8. 如权利要求1-7任一项所述的方法,其特征在于,所述第一数据至少包括如下参数值的一种:心率值、血压值或脉搏强度。
  9. 如权利要求2-8任一项所述的方法,其特征在于,所述第二数据至少包括如下参数值的一种:语音速率、语音强度、按压屏幕力度或面部表情。
  10. 如权利要求5-9任一项所述的方法,其特征在于,所述终端设备提示所述用户当前所处的情绪状态之后,所述方法还包括:
    所述终端设备接收到所述用户触发的操作指令,所述操作指令用于指示所述用户认可所述情绪信息或者对所述情绪信息进行修改;
    所述终端设备将所述用户认可的情绪信息或者修改后的情绪信息发送给所述服务器。
  11. 一种调节用户情绪的方法,其特征在于,包括:
    所述可穿戴设备检测用于表征用户身体情况的至少一个参数值;
    所述可穿戴设备将所述至少一个参数值发送给与所述可穿戴设备连接的终端设备,以便于所述终端设备基于所述至少一个参数值执行针对所述用户情绪调节的操作。
  12. 如权利要求11所述的方法,其特征在于,还包括:
    所述可穿戴设备接收所述终端设备发送的用于推荐给所述用户的活动信息;
    所述可穿戴设备显示所述活动信息。
  13. 如权利要求11或12所述的方法,其特征在于,还包括:
    所述可穿戴设备接收所述终端设备发送的更新主题、更新壁纸、更新提示音、更新铃声或者播放音乐的指令;
    所述可穿戴设备基于所述指令更新主题、更新壁纸、更新提示音、更新铃声或者播放音乐。
  14. 一种调节用户情绪的装置,其特征在于,包括:
    收发器,用于接收所述终端设备连接的可穿戴设备发送的第一数据;第一数据为所述可穿戴设备针对所述用户检测到的至少一个参数值;
    处理器,用于获取用于表征用户身体情况的数据,所述数据包括所述收发器接收到的所述第一数据,获取基于所述数据确定的情绪信息,并执行所述情绪信息对应的用于调节所述用户情绪的操作。
  15. 如权利要求14所述的装置,其特征在于,所述用于表征用户身体情况的数据中还包括第二数据,所述装置还包括:
    至少一个传感器,用于检测用于表征所述用户身体情况的第二数据,所述第二数据包括至少一个参数值;
    所述处理器在获取基于所述数据确定的情绪信息时,具体用于:获取基于所述第一数据和所述第二数据确定的情绪信息。
  16. 如权利要求14或15所述的装置,其特征在于,所述处理器,在获取基于所述数据确定的情绪信息时,具体用于:
    根据所述数据中每个参数的参数值和对应的权重来确定所述情绪信息。
  17. 如权利要求14或15所述的装置,其特征在于,所述收发器,还用于将所述处理器获取的所述数据发送给服务器,并接收所述服务器根据所述数据返回的所述情绪信息。
  18. 如权利要求14-17任一项所述的装置,其特征在于,所述处理器,在执行所述情绪信息对应的用于调节所述用户情绪的操作时,具体用于:
    提示所述用户当前所处的情绪状态。
  19. 如权利要求18所述的装置,其特征在于,还包括:
    扬声器,用于语音提示;
    所述处理器,具体用于通过所述扬声器发出语音来提示所述用户当前所处的情绪状态。
  20. 如权利要求18或19所述的装置,其特征在于,还包括:
    显示设备,用于显示提示信息;
    所述处理器,具体用于通过所述显示设备显示界面来提示所述用户当前所处的情绪状态。
  21. 如权利要求18-20任一项所述的装置，其特征在于，所述收发器，还用于将所述情绪状态发送给所述可穿戴设备，以便于所述可穿戴设备提示所述用户当前所处的情绪状态。
  22. 如权利要求14-21任一项所述的装置,其特征在于,所述处理器,还用于通过显示设备给用户推荐调节情绪的活动信息或互动信息;
    所述互动信息用于与其他用户进行互动。
  23. 根据权利要求14-22任一项所述的装置,其特征在于,所述第一数据至少包括如下参数值的一种:心率值、血压值或脉搏强度。
  24. 根据权利要求15-23任一项所述的装置,其特征在于,所述第二数据至少包括如下参数值的一种:语音速率、语音强度、按压屏幕力度或面部表情;
    所述至少一个传感器中包括以下至少一种:
    语音受话器,用于检测语音速率和/或检测语音强度;
    压力传感器,用于检测按压屏幕力度;
    图像传感器,用于面部表情。
  25. 如权利要求18-24任一项所述的装置,其特征在于,所述收发器,还用于在所述处理器提示所述用户当前所处的情绪状态之后,接收到所述用户触发的操作指令,并将所述用户认可的情绪信息或者修改后的情绪信息发送给所述服务器,所述操作指令用于指示所述用户认可所述情绪信息或者对所述情绪信息进行修改。
  26. 一种调节用户情绪的装置，其特征在于，所述装置应用于可穿戴设备，包括：
    至少一个传感器,用于检测用于表征用户身体情况的至少一个参数值;
    收发器,用于将所述至少一个参数值发送给所述终端设备,以便于所述终端设备基于所述至少一个参数值执行针对所述用户情绪调节的操作。
  27. 如权利要求26所述的装置,其特征在于,所述收发器,还用于接收所述终端设备发送的用于推荐给所述用户的活动信息;
    显示设备,用于显示所述活动信息。
  28. 如权利要求26或27所述的装置,其特征在于,所述收发器,还用于接收所述终端设备发送的更新主题、更新壁纸、更新提示音、更新铃声或者播放音乐的指令;
    所述显示设备还用于显示壁纸或者主题;
    所述装置还包括处理器,用于基于所述指令更新主题、更新壁纸、更新提示音、更新铃声或者播放音乐;
    所述显示设备,还用于显示所述处理器更新的壁纸或者主题;
    所述装置还包括扬声器,用于发出所述处理器更新的提示音,或者发出所述处理器更新的铃声、或者播放音乐。
  29. 一种调节用户情绪的装置,其特征在于,包括:
    数据收集模块,用于获取用于表征用户身体情况的数据,所述数据包括第一数据,所述第一数据为与所述终端设备连接的可穿戴设备针对所述用户检测到的至少一个参数值;
    数据交互模块,用于获取基于所述数据确定的情绪信息;
    执行模块,用于执行所述情绪信息对应的用于调节所述用户情绪的操作。
  30. 如权利要求29所述的装置,其特征在于,所述数据收集模块获取到的用于表征用户身体情况的数据中还包括第二数据,所述第二数据为所述终端设备检测的所述用户的至少一个参数值;
    所述数据交互模块,具体用于获取基于所述第一数据和所述第二数据确定的情绪信息。
  31. 如权利要求29或30所述的装置,其特征在于,所述数据交互模块,具体用于:
    根据所述数据中每个参数的参数值和对应的权重来确定所述情绪信息。
  32. 如权利要求29或30所述的装置,其特征在于,所述数据交互模块,还用于将获取的所述数据发送给服务器;并接收所述服务器根据所述数据返回的所述情绪信息。
  33. 如权利要求29-32任一项所述的装置,其特征在于,所述执行模块,具体用于:
    提示所述用户当前所处的情绪状态。
  34. 如权利要求33所述的装置,其特征在于,所述执行模块,具体用于通过语音或界面显示,来提示所述用户当前所处的情绪状态;或将所述情绪状态发送给所述可穿戴设备,通过所述可穿戴设备来提示所述用户当前所处的情绪状态。
  35. 如权利要求29-34任一项所述的装置,其特征在于,所述执行模块,具体用于给用户推荐调节情绪的活动信息或互动信息;
    所述互动信息用于与其他用户进行互动。
  36. 如权利要求29-35任一项所述的装置,其特征在于,所述第一数据至少包括如下参数值的一种:心率值、血压值或脉搏强度。
  37. 如权利要求30-36任一项所述的装置,其特征在于,所述第二数据至少包括如下参数值的一种:语音速率、语音强度、按压屏幕力度或面部表情。
  38. 如权利要求33-37任一项所述的装置,其特征在于,所述数据交互模块,还用于在所述执行模块提示所述用户当前所处的情绪状态之后,接收到所述用户触发的操作指令,并将所述用户认可的情绪信息或者修改后的情绪信息发送给所述服务器;所述操作指令用于指示所述用户认可所述情绪信息或者对所述情绪信息进行修改。
  39. 一种调节用户情绪的装置，其特征在于，所述装置应用于可穿戴设备，包括：
    检测模块,用于检测用于表征用户身体情况的至少一个参数值;
    发送模块,用于将所述至少一个参数值发送给与所述可穿戴设备连接的终端设备,以便于所述终端设备基于所述至少一个参数值执行针对所述用户情绪调节的操作。
  40. 如权利要求39所述的装置,其特征在于,还包括:
    接收模块,用于接收所述终端设备发送的用于推荐给所述用户的活动信息;
    显示模块,用于显示所述活动信息。
  41. 如权利要求39或40所述的装置,其特征在于,还包括:
    接收模块,用于接收所述终端设备发送的更新主题、更新壁纸、更新提示音、更新铃声或者播放音乐的指令;
    处理模块，用于基于所述指令更新主题、更新壁纸、更新提示音、更新铃声或者播放音乐。
PCT/CN2016/113149 2016-12-29 2016-12-29 一种调节用户情绪的方法及装置 WO2018119924A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
PCT/CN2016/113149 WO2018119924A1 (zh) 2016-12-29 2016-12-29 Method and apparatus for adjusting user emotion
US16/473,946 US11291796B2 (en) 2016-12-29 2016-12-29 Method and apparatus for adjusting user emotion
CN201680080603.4A CN108604246A (zh) 2016-12-29 2016-12-29 Method and apparatus for adjusting user emotion
EP16924971.1A EP3550450A4 (en) 2016-12-29 2016-12-29 METHOD AND DEVICE FOR ADJUSTING THE HUMOR BEHAVIOR OF A USER
HK18116138.9A HK1257017A1 (zh) 2016-12-29 2018-12-17 Method and apparatus for adjusting user emotion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/113149 WO2018119924A1 (zh) 2016-12-29 2016-12-29 Method and apparatus for adjusting user emotion

Publications (1)

Publication Number Publication Date
WO2018119924A1 true WO2018119924A1 (zh) 2018-07-05

Family

ID=62706569

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/113149 WO2018119924A1 (zh) 2016-12-29 2016-12-29 Method and apparatus for adjusting user emotion

Country Status (5)

Country Link
US (1) US11291796B2 (zh)
EP (1) EP3550450A4 (zh)
CN (1) CN108604246A (zh)
HK (1) HK1257017A1 (zh)
WO (1) WO2018119924A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109491499A (zh) * 2018-11-05 2019-03-19 广州创维平面显示科技有限公司 Electrical appliance control method and apparatus, electrical appliance, and medium
CN109567774A (zh) * 2019-01-17 2019-04-05 中国人民解放军陆军军医大学士官学校 Human emotion early-warning and regulation system and method
CN110471534A (zh) * 2019-08-23 2019-11-19 靖江市人民医院 Emotion-recognition-based information processing method and telemedicine management system
CN110825503A (zh) * 2019-10-12 2020-02-21 平安科技(深圳)有限公司 Theme switching method and apparatus, storage medium, and server
CN111528869A (zh) * 2020-06-10 2020-08-14 歌尔科技有限公司 Health management method and apparatus, and wearable device
CN113704504A (zh) * 2021-08-30 2021-11-26 平安银行股份有限公司 Chat-record-based emotion recognition method, apparatus, device, and storage medium

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6839818B2 (ja) * 2017-05-17 2021-03-10 パナソニックIpマネジメント株式会社 Content providing method, content providing apparatus, and content providing program
JP7073640B2 (ja) * 2017-06-23 2022-05-24 カシオ計算機株式会社 Electronic device, emotion information acquisition system, program, and emotion information acquisition method
WO2019073661A1 (ja) * 2017-10-13 2019-04-18 ソニー株式会社 Information processing apparatus, information processing method, information processing system, display apparatus, and reservation system
CN109903392B (zh) * 2017-12-11 2021-12-31 北京京东尚科信息技术有限公司 Augmented reality method and apparatus
EP3501385A1 (en) * 2017-12-21 2019-06-26 IMEC vzw System and method for determining a subject's stress condition
CN109669535A (zh) * 2018-11-22 2019-04-23 歌尔股份有限公司 Audio control method and system
CN109948780A (zh) * 2019-03-14 2019-06-28 江苏集萃有机光电技术研究所有限公司 Artificial-intelligence-based decision assistance method, apparatus, and device
US11786694B2 (en) * 2019-05-24 2023-10-17 NeuroLight, Inc. Device, method, and app for facilitating sleep
CN113780546B (zh) * 2020-05-21 2024-08-13 华为技术有限公司 Method for assessing female emotion, and related apparatus and device
CN112099743A (zh) * 2020-08-17 2020-12-18 数智医疗(深圳)有限公司 Interaction system, interaction device, and interaction method
US11350863B2 (en) * 2020-10-01 2022-06-07 Charles Isgar Worry stone device and system
RU2768551C1 (ru) * 2020-10-07 2022-03-24 Самсунг Электроникс Ко., Лтд. Method for locally generating and presenting a wallpaper stream, and computing device implementing it
CN112043253A (зh) * 2020-10-10 2020-12-08 上海健康医学院 Method for automatically determining user emotion from sensing data, and wristwatch
US20220157434A1 (en) * 2020-11-16 2022-05-19 Starkey Laboratories, Inc. Ear-wearable device systems and methods for monitoring emotional state
CN114666443A (zh) * 2020-12-22 2022-06-24 成都鼎桥通信技术有限公司 Emotion-based application running method and device
WO2022209499A1 (ja) * 2021-03-29 2022-10-06 ソニーグループ株式会社 Information processing system for displaying emotion information
CN113144374A (zh) * 2021-04-09 2021-07-23 上海探寻信息技术有限公司 Method and device for adjusting user state based on a smart wearable device
CN113397512B (zh) * 2021-06-08 2023-09-29 广东科谷智能科技有限公司 Environment adjustment method, adjustment system, and storage medium
US20230021336A1 (en) * 2021-07-12 2023-01-26 Isabelle Mordecai Troxler Methods and apparatus for predicting and preventing autistic behaviors with learning and AI algorithms
US11996179B2 (en) 2021-09-09 2024-05-28 GenoEmote LLC Method and system for disease condition reprogramming based on personality to disease condition mapping
CN114117116B (zh) * 2022-01-28 2022-07-01 中国传媒大学 Biometric-interaction-based music unlocking method and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104574088A (zh) * 2015-02-04 2015-04-29 华为技术有限公司 Payment authentication method and apparatus
CN105615901A (zh) * 2014-11-06 2016-06-01 中国移动通信集团公司 Emotion monitoring method and system
CN105871696A (zh) * 2016-05-25 2016-08-17 维沃移动通信有限公司 Information sending and receiving method and mobile terminal
CN106037635A (zh) * 2016-05-11 2016-10-26 南京邮电大学 Wearable-device-based intelligent early-warning system and method
CN106202860A (zh) * 2016-06-23 2016-12-07 南京邮电大学 Emotion adjustment service pushing method and wearable collaborative pushing system

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101337103B (zh) 2008-08-28 2013-03-20 北京阳光易德科技发展有限公司 Stress and emotion adjustment system and physiological signal collection apparatus
CN101437079B (zh) 2008-12-31 2014-06-11 华为终端有限公司 Method for relieving a mobile terminal user's emotion, and mobile terminal
US20120011477A1 (en) 2010-07-12 2012-01-12 Nokia Corporation User interfaces
CN102929660A (zh) 2012-10-09 2013-02-13 广东欧珀移动通信有限公司 Method for controlling mood themes of a terminal device, and terminal device
CN105607822A (zh) 2014-11-11 2016-05-25 中兴通讯股份有限公司 User interface theme switching method, apparatus, and terminal
US11392580B2 (en) * 2015-02-11 2022-07-19 Google Llc Methods, systems, and media for recommending computerized services based on an animate object in the user's environment
CN204950975U (zh) 2015-07-01 2016-01-13 京东方科技集团股份有限公司 Wearable electronic device
US20170143246A1 (en) * 2015-11-20 2017-05-25 Gregory C Flickinger Systems and methods for estimating and predicting emotional states and affects and providing real time feedback
CN105726045A (зh) 2016-01-28 2016-07-06 惠州Tcl移动通信有限公司 Emotion monitoring method and mobile terminal

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105615901A (zh) * 2014-11-06 2016-06-01 中国移动通信集团公司 Emotion monitoring method and system
CN104574088A (zh) * 2015-02-04 2015-04-29 华为技术有限公司 Payment authentication method and apparatus
CN106037635A (зh) * 2016-05-11 2016-10-26 南京邮电大学 Wearable-device-based intelligent early-warning system and method
CN105871696A (зh) * 2016-05-25 2016-08-17 维沃移动通信有限公司 Information sending and receiving method and mobile terminal
CN106202860A (зh) * 2016-06-23 2016-12-07 南京邮电大学 Emotion adjustment service pushing method and wearable collaborative pushing system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3550450A4 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109491499A (zh) * 2018-11-05 2019-03-19 广州创维平面显示科技有限公司 Electrical appliance control method and apparatus, electrical appliance, and medium
CN109491499B (zh) * 2018-11-05 2022-05-24 创维集团智能科技有限公司 Electrical appliance control method and apparatus, electrical appliance, and medium
CN109567774A (зh) * 2019-01-17 2019-04-05 中国人民解放军陆军军医大学士官学校 Human emotion early-warning and regulation system and method
CN109567774B (зh) * 2019-01-17 2024-05-03 中国人民解放军陆军军医大学士官学校 Human emotion early-warning and regulation system and method
CN110471534A (зh) * 2019-08-23 2019-11-19 靖江市人民医院 Emotion-recognition-based information processing method and telemedicine management system
CN110471534B (зh) * 2019-08-23 2022-11-04 靖江市人民医院 Emotion-recognition-based information processing method and telemedicine management system
CN110825503A (зh) * 2019-10-12 2020-02-21 平安科技(深圳)有限公司 Theme switching method and apparatus, storage medium, and server
CN110825503B (зh) * 2019-10-12 2024-03-19 平安科技(深圳)有限公司 Theme switching method and apparatus, storage medium, and server
CN111528869A (зh) * 2020-06-10 2020-08-14 歌尔科技有限公司 Health management method and apparatus, and wearable device
CN113704504A (зh) * 2021-08-30 2021-11-26 平安银行股份有限公司 Chat-record-based emotion recognition method, apparatus, device, and storage medium
CN113704504B (зh) * 2021-08-30 2023-09-19 平安银行股份有限公司 Chat-record-based emotion recognition method, apparatus, device, and storage medium

Also Published As

Publication number Publication date
US11291796B2 (en) 2022-04-05
EP3550450A1 (en) 2019-10-09
CN108604246A (zh) 2018-09-28
HK1257017A1 (zh) 2019-10-11
US20190336724A1 (en) 2019-11-07
EP3550450A4 (en) 2019-11-06

Similar Documents

Publication Publication Date Title
WO2018119924A1 (zh) Method and apparatus for adjusting user emotion
US20220386901A1 (en) Workout monitor interface
JP6833740B2 (ja) 身体活動及びトレーニングモニタ
US10504339B2 (en) Mobile device with instinctive alerts
WO2015085795A1 (zh) 一种穿戴式电子设备及其显示方法
US10620593B2 (en) Electronic device and control method thereof
US8519835B2 (en) Systems and methods for sensory feedback
EP3139261A1 (en) User terminal apparatus, system, and method for controlling the same
WO2017172551A1 (en) Digital assistant experience based on presence detection
WO2018090533A1 (zh) 一种基于用户状态的分析推荐方法和装置
KR20190061681A (ko) 생체 정보에 기반하여 외부 오디오 장치와 연동하여 동작하는 전자 장치 및 방법
US20190188604A1 (en) Machine learning system for predicting optimal interruptions based on biometric data colllected using wearable devices
EP3268882A1 (en) Wearable and detachable health parameter sensor
US20210272698A1 (en) Personality based wellness coaching
JP2015126869A (ja) 睡眠補助システム及び睡眠補助方法
JP6354144B2 (ja) 電子機器、方法及びプログラム
EP3067780A1 (en) Method for controlling terminal device, and wearable electronic device
WO2019132772A1 (en) Method and system for monitoring emotions
WO2019104779A1 (zh) 一种压力检测的方法及终端
EP3553788A2 (en) Vitals monitoring system
US20170287305A1 (en) Adaptive client device notification system and methods
KR102376874B1 (ko) 전자 장치 및 이의 녹음 제어 방법
KR20200068058A (ko) 인공지능 학습을 통해 자기 계발을 도와주는 방법 및 장치
KR20200068057A (ko) 인공지능 학습을 통해 자기 계발을 도와주는 방법 및 장치
KR20240054813A (ko) 전자 장치 및 이의 동작 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16924971

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2016924971

Country of ref document: EP

Effective date: 20190703