CN104939810B - Emotion control method and device - Google Patents

Emotion control method and device

Info

Publication number
CN104939810B
CN104939810B (application CN201410113624.4A)
Authority
CN
China
Prior art keywords
temper
state
acoustic information
emotion
audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410113624.4A
Other languages
Chinese (zh)
Other versions
CN104939810A (en)
Inventor
周中宝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Feixun Data Communication Technology Co Ltd
Original Assignee
Shanghai Feixun Data Communication Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Feixun Data Communication Technology Co Ltd filed Critical Shanghai Feixun Data Communication Technology Co Ltd
Priority to CN201410113624.4A priority Critical patent/CN104939810B/en
Publication of CN104939810A publication Critical patent/CN104939810A/en
Application granted granted Critical
Publication of CN104939810B publication Critical patent/CN104939810B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/02Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/024Detecting, measuring or recording pulse rate or heart rate
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/16Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
    • A61B5/165Evaluating the state of mind, e.g. depression, anxiety
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61MDEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
    • A61M21/00Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61MDEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
    • A61M21/00Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
    • A61M2021/0005Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus
    • A61M2021/0027Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus by the hearing sense

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Psychiatry (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Medical Informatics (AREA)
  • Hospice & Palliative Care (AREA)
  • Biophysics (AREA)
  • Cardiology (AREA)
  • Acoustics & Sound (AREA)
  • Psychology (AREA)
  • Child & Adolescent Psychology (AREA)
  • Pathology (AREA)
  • Hematology (AREA)
  • Anesthesiology (AREA)
  • Developmental Disabilities (AREA)
  • Educational Technology (AREA)
  • Physiology (AREA)
  • Social Psychology (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Reverberation, Karaoke And Other Acoustics (AREA)
  • Telephone Function (AREA)

Abstract

The invention provides an emotion control method and device, wherein the method includes: A) acquiring a pulse rate; B) when the pulse rate is higher than a preset threshold, acquiring an audio signal; C) detecting the sound intensity and speech tempo in the audio signal; D) judging from the sound intensity and tempo whether the user is in an emotionally out-of-control state; E) when the user is in the out-of-control state, stopping acquisition of the audio signal and playing a preset audio track. Through the above steps, the invention solves the problem of automatically detecting an out-of-control emotional state and automatically exercising emotion control according to that state.

Description

Emotion control method and device
Technical field
The present invention relates to the field of emotion control, and in particular to an emotion control method and device.
Background technology
People are generally rational, but when their emotions run out of control they often behave irrationally, and afterwards frequently realize they should not have become so agitated. How to automatically recognize an out-of-control emotional state and take corresponding mood-soothing measures when a person's temper is lost is a problem that remains to be solved.
The content of the invention
The problem solved by the present invention is automatically recognizing an out-of-control emotional state and taking corresponding mood-soothing measures.
To solve the above problems, the present invention provides an emotion control method, the method comprising:
A) acquiring a pulse rate;
B) when the pulse rate is higher than a preset threshold, acquiring an audio signal;
C) detecting the sound intensity and speech tempo in the audio signal;
D) judging from the sound intensity and tempo whether the user is in an emotionally out-of-control state;
E) when the user is in the out-of-control state, stopping acquisition of the audio signal and playing a preset audio track.
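Steps A) through E) can be sketched as a small control loop. This is only an illustrative sketch: the threshold values, the function names, and the two-feature AND rule in step D) are assumptions, since the patent does not specify any preset values or decision rule.

```python
# Illustrative preset values; the patent does not specify any thresholds.
PULSE_THRESHOLD_BPM = 100     # step B gate
INTENSITY_THRESHOLD = 0.2     # normalized RMS amplitude
TEMPO_THRESHOLD_SPS = 5.0     # speech bursts per second

def is_out_of_control(intensity, tempo,
                      intensity_thresh=INTENSITY_THRESHOLD,
                      tempo_thresh=TEMPO_THRESHOLD_SPS):
    """Step D: judge the state from sound intensity and tempo
    (here, assumed: both features must exceed their preset thresholds)."""
    return intensity > intensity_thresh and tempo > tempo_thresh

def run_once(read_pulse, record_audio, analyze, play_preset):
    """One pass of steps A-E, with the sensor and speaker backends
    injected as callables; returns True if the preset audio was played."""
    if read_pulse() <= PULSE_THRESHOLD_BPM:      # step A: pulse gate
        return False
    clip = record_audio()                        # step B: capture audio
    intensity, tempo = analyze(clip)             # step C: extract features
    if is_out_of_control(intensity, tempo):      # step D: judge state
        play_preset()                            # step E: play preset track
        return True
    return False
```

Injecting the sensor and speaker as callables keeps the flow testable without real hardware; a device would pass in its pulse sensor, microphone, analyzer, and speaker routines.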
Preferably, the method further comprises converting the audio signal into text, and step D) further comprises: judging from the text whether the user is in the out-of-control state.
Preferably, the method further comprises, between step B) and step D), a step of converting the audio signal into text, and step D) further comprises: judging from the text whether the user is in the out-of-control state.
Preferably, step E) further comprises determining, from the judgement in step D), the type of the out-of-control state, and playing the preset audio track corresponding to that type.
Preferably, the audio track includes the acquired audio signal.
Preferably, the method further comprises: when step E) finishes, returning to step A) and recording the number of times the audio track has been played; when the play count exceeds a preset value, consulting a preset contact list and sending a preset message to, or calling, a contact in the list.
The present invention also provides an emotion control device, the device comprising:
an acquiring unit for acquiring a pulse rate and acquiring an audio signal;
a processing unit, electrically connected to the acquiring unit, for sending an acquire-audio instruction to the acquiring unit when it judges that the pulse rate is higher than a preset threshold; for detecting the sound intensity and speech tempo in the audio signal and judging from them whether the user is in an emotionally out-of-control state; and, when the user is in the out-of-control state, for sending a stop-acquisition instruction to the acquiring unit; and
a playback unit, electrically connected to the processing unit, for playing a preset audio track when the processing unit sends the stop-acquisition instruction.
Preferably, the processing unit is further configured to convert the audio signal into text and to judge from the text whether the user is in the out-of-control state.
Preferably, the processing unit is further configured to determine the type of the out-of-control state and to send a corresponding preset play instruction to the playback unit according to the type; the playback unit receives the instruction and plays the corresponding preset audio track.
Preferably, the audio track includes the acquired audio signal.
Preferably, the processing unit is further configured to record the number of times the audio track has been played and, when the play count exceeds a preset value, to consult a preset contact list and send a preset message to, or call, a contact in the list.
The technical solution of the present invention has the following advantages:
1) Since emotional agitation typically first shows as an abnormal pulse, using the measured pulse as the trigger for audio acquisition reduces the number of other detection routines that must run concurrently.
2) Using the sound intensity and speech tempo detected in the audio signal increases the accuracy of judging the out-of-control state.
3) When an out-of-control state is detected, playing a preset audio track, such as soothing music, helps regulate the user's mood.
Further, using the text converted from the audio signal as one of the conditions for judging the out-of-control state makes the judgement more accurate.
Further, classifying out-of-control states and playing the audio track set for each class adds type-specific countermeasures and improves the emotion-control effect.
Further, the audio track may include the acquired audio signal; replaying the user's own voice to them helps achieve the emotion-control effect.
Further, recording the number of times the audio is played and, once the count reaches a preset value, dialing a preset contact's number or sending a preset message ensures that, when playback alone cannot rein in the out-of-control mood, others are automatically notified in time to soothe the user.
Brief description of the drawings
Fig. 1 is a flow diagram of an emotion control method of the present invention;
Fig. 2 is a structural diagram of an emotion control device of the present invention.
Detailed description of the embodiments
To help those skilled in the relevant art better understand the technical solution of the present invention, the solution is described below clearly and completely with reference to the drawings of the embodiments. The described embodiments are only some, not all, of the embodiments of the invention.
The present invention provides an emotion control method; as shown in Fig. 1, the steps of the method are as follows:
Step S1: acquire a pulse rate.
Step S2: when the pulse rate is higher than a preset threshold, acquire an audio signal.
Since emotional agitation typically first shows as an abnormal pulse, using the measured pulse as the trigger for audio acquisition reduces the number of other detection routines that must run concurrently.
Step S3: detect the sound intensity and speech tempo in the audio signal.
Step S4: judge from the sound intensity and tempo whether the user is in an emotionally out-of-control state.
Using the sound intensity and speech tempo detected in the audio signal increases the accuracy of judging the out-of-control state.
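One plain-Python way to realize step S3 is to take sound intensity as the RMS amplitude of the clip and tempo as the rate of energy bursts (rough syllable-like peaks) per second. The frame size and energy threshold below are illustrative assumptions, not values from the patent.

```python
import math

def sound_intensity(samples):
    """Sound intensity as RMS amplitude (samples assumed in [-1, 1])."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def speech_tempo(samples, rate, frame_ms=20, energy_thresh=0.05):
    """Rough tempo estimate: count bursts where short-frame mean energy
    rises above a threshold, and report bursts per second of audio."""
    frame = max(1, int(rate * frame_ms / 1000))
    bursts, in_burst = 0, False
    for i in range(0, len(samples), frame):
        e = sum(s * s for s in samples[i:i + frame]) / frame
        loud = e > energy_thresh
        if loud and not in_burst:   # count a burst only on its rising edge
            bursts += 1
        in_burst = loud
    return bursts / (len(samples) / rate)
```

A production implementation would likely use a proper voice-activity or syllable-rate detector, but this captures the two features the method relies on.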
Further, between step S2 and step S4 the method may also include a step of converting the audio signal into text, and step S4 may take the text-based judgement as one of the conditions for deciding whether the user is in an out-of-control state.
For example: whether the text contains preset common quarrel phrases may serve as one of the conditions for deciding whether the user's mood is out of control.
Embodiment one: when the user's pulse rate is higher than a preset value, the audio signal is acquired; when the sound intensity and tempo of the audio signal satisfy preset conditions, the state is determined to be out of control.
Embodiment two: when the user's pulse rate is higher than a preset value, the audio signal is acquired; when the sound intensity and tempo satisfy preset conditions, and preset common quarrel phrases are found in the text converted from the audio signal, the emotional state is judged to be out of control.
Compared with embodiment one, embodiment two adds a condition for judging the out-of-control state and therefore judges more accurately.
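Embodiment two can be sketched as the acoustic condition AND a keyword check on the speech-to-text transcript. The keyword list, thresholds, and the speech-to-text step itself are all assumptions here: the patent mentions "preset common quarrel phrases" without enumerating them, and any real system would plug in an actual speech recognizer.

```python
# Hypothetical quarrel-phrase list; the patent does not enumerate one.
QUARREL_KEYWORDS = {"shut up", "your fault", "never listen"}

def contains_quarrel_words(transcript, keywords=QUARREL_KEYWORDS):
    """Check the speech-to-text transcript for preset quarrel phrases."""
    lowered = transcript.lower()
    return any(k in lowered for k in keywords)

def judge_embodiment_two(intensity, tempo, transcript,
                         intensity_thresh=0.2, tempo_thresh=5.0):
    """Embodiment two: the acoustic condition of embodiment one AND a
    preset quarrel phrase found in the transcript."""
    acoustic = intensity > intensity_thresh and tempo > tempo_thresh
    return acoustic and contains_quarrel_words(transcript)
```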
Step S5: when the user is in the out-of-control state, stop acquiring the audio signal and play a preset audio track.
When an out-of-control state is detected, playing a preset audio track, such as soothing music, helps regulate the user's mood.
Further, the audio track may include the acquired audio signal; replaying the user's own voice to them helps achieve the emotion-control effect.
It should be emphasized that the played audio may also be a processed version of the acquired audio signal, for example the user's speech rendered in another dialect, or processed into a child-like voice effect.
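A very crude sketch of such a voice transformation: decimating the clip by an index factor speeds it up, which raises the pitch and gives a rough child-like effect. This is purely illustrative; a real implementation would use proper time-preserving pitch shifting.

```python
def raise_pitch(samples, factor=1.5):
    """Naive pitch raise by decimation: keeping every `factor`-th sample
    speeds playback up, raising the pitch (and shortening the clip).
    `factor` > 1 is assumed; quality is deliberately crude."""
    return [samples[int(i * factor)] for i in range(int(len(samples) / factor))]
```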
Further, step S5 may also include determining, from the judgement in step S4, the type of the out-of-control state, and playing the preset audio track corresponding to that type.
Classifying out-of-control states and playing the audio track set for each class adds type-specific countermeasures and improves the emotion-control effect.
Further, when step S5 finishes, the method returns to step S1 and records the number of times the audio track has been played; when the play count exceeds a preset value, a preset contact list is consulted and a preset message is sent to, or a call is placed to, a contact in the list.
When audio playback alone cannot rein in the out-of-control mood, this automatically notifies others in time so that they can soothe the user.
The present invention also provides an emotion control device; as shown in Fig. 2, the device includes an acquiring unit 1, a processing unit 2 and a playback unit 3, wherein:
the acquiring unit 1 acquires a pulse rate and acquires an audio signal;
the processing unit 2, electrically connected to the acquiring unit 1, sends an acquire-audio instruction to the acquiring unit 1 when it judges that the pulse rate is higher than a preset threshold; it detects the sound intensity and speech tempo in the audio signal and judges from them whether the user is in an emotionally out-of-control state; and when the user is in the out-of-control state, it sends a stop-acquisition instruction to the acquiring unit 1;
the playback unit 3, electrically connected to the processing unit 2, plays a preset audio track when the processing unit 2 sends the stop-acquisition instruction.
The processing unit 2 is further configured to convert the audio signal into text and to judge from the text whether the user is in the out-of-control state.
The processing unit 2 is further configured to determine the type of the out-of-control state and to send a corresponding preset play instruction to the playback unit 3 according to the type; the playback unit 3 receives the instruction and plays the corresponding preset audio track.
The audio track includes the acquired audio signal.
The processing unit 2 is further configured to record the number of times the audio track has been played and, when the play count exceeds a preset value, to consult a preset contact list and send a preset message to, or call, a contact in the list.
Although the present disclosure is as above, the invention is not limited to it. Any person skilled in the art may make various changes and modifications without departing from the spirit and scope of the invention; the protection scope of the invention is therefore defined by the claims.

Claims (8)

1. An emotion control method, characterized in that the method comprises:
A) acquiring a pulse rate;
B) when the pulse rate is higher than a preset threshold, acquiring an audio signal;
C) detecting the sound intensity and speech tempo in the audio signal;
D) judging from the sound intensity and tempo whether the user is in an emotionally out-of-control state;
E) when the user is in the out-of-control state, stopping acquisition of the audio signal and playing a preset audio track, the audio track including the audio signal.
2. The emotion control method according to claim 1, characterized in that the method further comprises, between step B) and step D), a step of converting the audio signal into text, and step D) further comprises: judging from the text whether the user is in the out-of-control state.
3. The emotion control method according to claim 1 or 2, characterized in that step E) further comprises determining, from the judgement in step D), the type of the out-of-control state, and playing the preset audio track corresponding to that type.
4. The emotion control method according to claim 1, characterized in that the method further comprises:
when step E) finishes, returning to step A) and recording the number of times the audio track has been played; when the play count exceeds a preset value, consulting a preset contact list and sending a preset message to, or calling, a contact in the list.
5. An emotion control device, characterized in that the device comprises:
an acquiring unit for acquiring a pulse rate and acquiring an audio signal;
a processing unit, electrically connected to the acquiring unit, for sending an acquire-audio instruction to the acquiring unit when it judges that the pulse rate is higher than a preset threshold; for detecting the sound intensity and speech tempo in the audio signal and judging from them whether the user is in an emotionally out-of-control state; and, when the user is in the out-of-control state, for sending a stop-acquisition instruction to the acquiring unit; and
a playback unit, electrically connected to the processing unit, for playing a preset audio track when the processing unit sends the stop-acquisition instruction, the audio track including the audio signal.
6. The emotion control device according to claim 5, characterized in that the processing unit is further configured to convert the audio signal into text and to judge from the text whether the user is in the out-of-control state.
7. The emotion control device according to claim 5 or 6, characterized in that the processing unit is further configured to determine the type of the out-of-control state and to send a corresponding preset play instruction to the playback unit according to the type, and the playback unit receives the instruction and plays the corresponding preset audio track.
8. The emotion control device according to claim 5, characterized in that the processing unit is further configured to record the number of times the audio track has been played and, when the play count exceeds a preset value, to consult a preset contact list and send a preset message to, or call, a contact in the list.
CN201410113624.4A 2014-03-25 2014-03-25 Emotion control method and device Active CN104939810B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410113624.4A CN104939810B (en) 2014-03-25 2014-03-25 Emotion control method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410113624.4A CN104939810B (en) 2014-03-25 2014-03-25 Emotion control method and device

Publications (2)

Publication Number Publication Date
CN104939810A CN104939810A (en) 2015-09-30
CN104939810B true CN104939810B (en) 2017-09-01

Family

ID=54155294

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410113624.4A Active CN104939810B (en) 2014-03-25 2014-03-25 Emotion control method and device

Country Status (1)

Country Link
CN (1) CN104939810B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105232063B (en) * 2015-10-22 2017-03-22 广东小天才科技有限公司 User mental health detection method and intelligent terminal
CN105380655B (en) * 2015-10-23 2017-03-08 广东小天才科技有限公司 Emotion early warning method and device of mobile terminal and mobile terminal
CN107038840A (en) * 2016-02-04 2017-08-11 中兴通讯股份有限公司 Information processing method and device for a wearable device, and wearable device
CN107993674A (en) * 2016-10-27 2018-05-04 中兴通讯股份有限公司 Emotion control method and device
CN107714056A (en) * 2017-09-06 2018-02-23 上海斐讯数据通信技术有限公司 Wearable device for intelligent mood analysis and method for intelligent mood analysis
CN109009170A (en) * 2018-07-20 2018-12-18 深圳市沃特沃德股份有限公司 Method and device for detecting mood
CN110433380A (en) * 2019-08-12 2019-11-12 上海亦蓁健康科技有限公司 Mood regulating device
CN112733548B (en) * 2020-12-30 2022-12-02 上海市杨浦区青少年科技站 Household atmosphere adjusting method, system and equipment

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3676969B2 (en) * 2000-09-13 2005-07-27 株式会社エイ・ジー・アイ Emotion detection method, emotion detection apparatus, and recording medium
US20070238934A1 (en) * 2006-03-31 2007-10-11 Tarun Viswanathan Dynamically responsive mood sensing environments
JP4941966B2 (en) * 2006-09-22 2012-05-30 国立大学法人 東京大学 Emotion discrimination method, emotion discrimination device, atmosphere information communication terminal
CN101822863A (en) * 2010-01-28 2010-09-08 深圳先进技术研究院 Emotion regulating device and method thereof
CN102485165A (en) * 2010-12-02 2012-06-06 财团法人资讯工业策进会 Physiological signal detection system and device capable of displaying emotions, and emotion display method
CN202313367U (en) * 2011-10-31 2012-07-11 苏州市职业大学 Emotion reminding machine
CN202619669U (en) * 2012-04-27 2012-12-26 浙江吉利汽车研究院有限公司杭州分公司 Driver emotion monitoring device
CN103111006A (en) * 2013-01-31 2013-05-22 江苏中京智能科技有限公司 Intelligent mood adjustment instrument
CN203314953U (en) * 2013-06-27 2013-12-04 武汉大学 Emotional watch

Also Published As

Publication number Publication date
CN104939810A (en) 2015-09-30

Similar Documents

Publication Publication Date Title
CN104939810B (en) Emotion control method and device
TWI590228B (en) Voice control system, electronic device having the same, and voice control method
US10013977B2 (en) Smart home control method based on emotion recognition and the system thereof
US8323191B2 (en) Stressor sensor and stress management system
Rocheron et al. Temporal envelope perception in dyslexic children
CN108419096A (en) Speech-sound intelligent playback method and system
CN101419795B (en) Audio signal detection method and device, and auxiliary oral language examination system
Lien et al. Effects of phonetic context on relative fundamental frequency
DE602007004061D1 (en) Estimation of own voice activity with a hearing aid system based on the relationship between direct sound and echo
US20160217322A1 (en) System and method for inspecting emotion recognition capability using multisensory information, and system and method for training emotion recognition using multisensory information
CN107625527B (en) Lie detection method and device
Van Stan et al. Integration of motor learning principles into real-time ambulatory voice biofeedback and example implementation via a clinical case study with vocal fold nodules
CN108630240A (en) A kind of chorus method and device
Soury et al. Stress detection from audio on multiple window analysis size in a public speaking task
Astolfi et al. Speech level parameters in very low and excessive reverberation measured with a contact-sensor-based device and a headworn microphone
Duyck et al. Improving accuracy in detecting acoustic onsets.
Noh et al. Smart home with biometric system recognition
CN105551504B (en) A kind of method and device based on crying triggering intelligent mobile terminal functional application
US20210030358A1 (en) State of discomfort determination device
CN107239822B (en) Information interaction method and system and robot
CN101999902A (en) Voiceprint lie detector and voiceprint lie detecting method
CN112583673B (en) Control method and device for awakening equipment
Mori et al. Between-frequency and between-ear gap detections and their relation to perception of stop consonants
JP2019207233A5 (en)
JP5834521B2 (en) Speech analyzer

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
PP01 Preservation of patent right
PP01 Preservation of patent right

Effective date of registration: 20180313

Granted publication date: 20170901

PD01 Discharge of preservation of patent

Date of cancellation: 20210313

Granted publication date: 20170901

PD01 Discharge of preservation of patent
PP01 Preservation of patent right

Effective date of registration: 20210313

Granted publication date: 20170901

PP01 Preservation of patent right
PD01 Discharge of preservation of patent

Date of cancellation: 20240313

Granted publication date: 20170901

PD01 Discharge of preservation of patent