CN110534135A - Method for assessing emotional characteristics based on language guidance and heart-rate response - Google Patents
Method for assessing emotional characteristics based on language guidance and heart-rate response
- Publication number
- CN110534135A CN110534135A CN201910995416.4A CN201910995416A CN110534135A CN 110534135 A CN110534135 A CN 110534135A CN 201910995416 A CN201910995416 A CN 201910995416A CN 110534135 A CN110534135 A CN 110534135A
- Authority
- CN
- China
- Prior art keywords
- heart rate
- mood
- event
- language
- heart rate variability (VarHR)
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/02—Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
- A61B5/024—Detecting, measuring or recording pulse rate or heart rate
- A61B5/02405—Determining heart rate variability
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/02—Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
- A61B5/024—Detecting, measuring or recording pulse rate or heart rate
- A61B5/02444—Details of sensor
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/63—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
Abstract
The present invention discloses a method for assessing emotional characteristics based on language guidance and heart-rate response. The assessment is based on standardized voice guidance together with the subject's heart-rate response and the speech features of their reply, and includes marking and recording heart-rate changes as well as marking and recording the speech features of the reply. The invention provides real-time feedback on, and effective marking of, significant heart-rate changes and speech features. Through standardized voice guidance, the subject is prompted to actively recall important past emotional events, establishing individual baselines for basic emotion versus heart-rate response and emotion versus speech-feature response. This quantifies an individual's basic emotions through heart rate and speech features, and allows an individualized feature library of emotion/heart-rate and emotion/speech-feature responses to be built. The method can be applied to the assessment of events and experiences such as trauma, different moods, self-negation and negation by others.
Description
Technical field
The present invention relates to psychological assessment methods, and in particular to a method for assessing emotional characteristics based on language guidance and heart-rate response.
Background technique
Emotion is closely linked to physiological and behavioral change, and in particular to heart-rate change and verbal expression. Past emotional experiences are stored in memory as events; when a subject mentions or narrates a past event, the associated emotion is reproduced, and the corresponding heart rate and verbal expression change, i.e., the speech features of the voice change. To use heart-rate responses or speech-feature changes to guide psychotherapy, the relationship between extreme emotional events on the one hand and heart-rate responses and speech features on the other must first be defined, establishing the basic relationship between an individual's emotion and heart rate, and between emotion and speech features. A method for assessing a subject's emotional characteristics based on standardized voice guidance together with the subject's heart-rate response and speech features is therefore highly necessary.
Summary of the invention
The present invention aims to provide a method for assessing emotional characteristics based on language guidance and heart-rate response, capable of identifying and recording the heart rate and speech features evoked by different extreme emotional events under standardized guidance voice.
To achieve the above objectives, the present invention adopts the following technical solution:
The method disclosed by the invention assesses a subject's emotional characteristics based on standardized voice guidance together with the subject's heart-rate response and the speech features of their reply. It includes marking and recording heart-rate changes, which comprises:
an automatic heart-rate marking method, in which a sensor combined with a data-processing device automatically displays and records every heart-rate value and automatically marks significant heart-rate changes; a significant heart-rate change includes a heart-rate change and a heart-rate-variability change.
A heart-rate change is a deviation of the heart rate from its normal value by more than a first predetermined value;
A heart-rate-variability change is a per-minute heart-rate variability greater than a second predetermined value;
With time as the horizontal axis and heart rate as the vertical axis, the time points at which the heart rate and the per-minute heart-rate variability change are displayed graphically;
The method further includes recording the heart rate and heart-rate changes over a period of time, including the proportion of the period occupied by different heart-rate ranges, as well as the average heart rate, maximum heart rate, minimum heart rate and heart-rate variability in different sub-periods;
The marking and recording of heart-rate changes are carried out under standardized voice guidance. The standardized voice guidance is used to purposefully evoke the subject's recollection of different emotional events they have experienced. The emotions include positive emotions and negative emotions: the positive emotions include happiness, joy, relaxation, pride and/or calm, and the negative emotions include fear, sadness, anger, remorse, shame and/or disgust;
The data-processing device includes an emotional-characteristics evaluation module, which performs the emotional-characteristics evaluation according to the markings and records.
Preferably, marking and recording heart-rate changes further includes marking and recording heart-rate changes at the moment an emotional event is inserted, the emotional event being an event recalled by the subject that affects their mood.
Preferably, the speech-feature change includes a speech-rate change and/or a sound amplitude or frequency change, and the speech rate and/or the amplitude and frequency of the sound are recorded; the speech rate is the speaker's average number of words per minute.
The speech rate is calculated as follows: the recognizer automatically identifies and counts the spoken words, obtains the total number of words Xn spoken by the subject after a segment of voice guidance and the corresponding speaking time Tn, and calculates the speech rate Sn = Xn/Tn;
The amplitude An and frequency Fn of the sound during this time are obtained by an audio-spectrum analysis module.
Preferably, the speech-rate change and/or the sound-intensity change are automatically identified by a speech-recognition device, which is connected to the data-processing device.
Preferably, the standardized voice guidance refers to guidance phrases that lead the subject, step by step and according to different emotional properties, to recall and associate extreme events they have experienced under different emotions; the extreme events include conflict events, traumatic events, growth events or happy events.
Preferably, the period of time is defined as follows: the 30 seconds before a standardized guidance phrase is played is tn-1, where n denotes the n-th segment; the playing of the standardized guidance phrase is tn0; the non-guidance stage tn1 is the stage after the phrase is played during which the subject does not speak; and the non-guidance stage tn2 is the stage after the phrase is played during which the subject speaks. Each period has corresponding heart-rate-response indices: average heart rate MeanHRtn-1, MeanHRtn0, MeanHRtn1, MeanHRtn2; maximum heart rate MaxHRtn-1, MaxHRtn0, MaxHRtn1, MaxHRtn2; minimum heart rate MinHRtn-1, MinHRtn0, MinHRtn1, MinHRtn2; and heart-rate variability VarHRtn-1, VarHRtn0, VarHRtn1, VarHRtn2. The speech features of the non-guidance stage tn2 include the speaking time tn2, average speech rate Sn, average sound amplitude An and average sound frequency Fn.
Further, the invention also includes recording, in both standardized voice-guidance scenes and non-guidance scenes, whether corresponding recollections and associations occur, and the associated emotional events and persons involved; emotional events are recorded via multiple-choice questions and text input. A time marker can also be inserted by key press in response to abnormal heart-rate changes fed back by the sensor.
Further, the invention also includes having the user evaluate and record the subjective intensity of the specific emotion evoked.
Further, the invention also includes calculating, from the subject's heart-rate features in the different periods under a guidance phrase that evokes the memory of a certain extreme emotional event, an objective intensity value Enx of the emotion as quantified by heart rate, using the following formulas:
EnMean0 = |MeanHRtn0 - MeanHRtn-1| / MeanHRtn-1
EnMean1 = |MeanHRtn1 - MeanHRtn-1| / MeanHRtn-1
EnMean2 = |MeanHRtn2 - MeanHRtn-1| / MeanHRtn-1
EnMax0 = |MaxHRtn0 - MaxHRtn-1| / MaxHRtn-1
EnMax1 = |MaxHRtn1 - MaxHRtn-1| / MaxHRtn-1
EnMax2 = |MaxHRtn2 - MaxHRtn-1| / MaxHRtn-1
EnMin0 = |MinHRtn0 - MinHRtn-1| / MinHRtn-1
EnMin1 = |MinHRtn1 - MinHRtn-1| / MinHRtn-1
EnMin2 = |MinHRtn2 - MinHRtn-1| / MinHRtn-1
EnVar0 = |VarHRtn0 - VarHRtn-1| / VarHRtn-1
EnVar1 = |VarHRtn1 - VarHRtn-1| / VarHRtn-1
EnVar2 = |VarHRtn2 - VarHRtn-1| / VarHRtn-1
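The heart-rate intensity formulas above all have the same shape: the relative change of a period's index against the baseline period tn-1. A purely illustrative Python sketch (not part of the original disclosure; the function name and example heart-rate values are assumptions, only the formula En = |index - baseline| / baseline comes from the text):

```python
# Hypothetical sketch of the heart-rate-based objective intensity values En.
# Only MeanHR is shown; the same formula applies to MaxHR, MinHR and VarHR.

def intensity(value, baseline):
    """En = |value - baseline| / baseline (relative change against period tn-1)."""
    return abs(value - baseline) / baseline

# Assumed example: mean heart rate (bpm) in the four periods of one guidance segment.
mean_hr = {"tn-1": 72.0, "tn0": 80.0, "tn1": 84.0, "tn2": 90.0}

en_mean = {
    period: intensity(mean_hr[period], mean_hr["tn-1"])
    for period in ("tn0", "tn1", "tn2")
}
print(en_mean["tn2"])  # |90 - 72| / 72 = 0.25
```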
Further, the invention also includes the following: under a guidance phrase that evokes the memory of a certain extreme emotional event, when the subject begins to express the emotional event in speech, the speaking time tn2, average speech rate Sn, average sound amplitude An and average sound frequency Fn are automatically extracted by the speech-recognition and audio-spectrum-analysis software; by comparison with the speech rate S0, sound amplitude A0 and sound frequency F0 measured while reading a standard text aloud, an objective intensity value Eny of the emotion as quantified by speech features is calculated with the following formulas:
EnSn = |Sn - S0| / S0
EnAn = |An - A0| / A0
EnFn = |Fn - F0| / F0
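A minimal illustrative sketch of the three voice-feature formulas (not part of the original disclosure; the example values are invented assumptions):

```python
# Hypothetical sketch: En = |measured - standard| / standard, per the three
# voice-feature formulas above. All numbers are invented for illustration.

def voice_intensity(measured, standard):
    """Relative deviation of a recall-stage voice feature from its read-aloud baseline."""
    return abs(measured - standard) / standard

en_sn = voice_intensity(150.0, 200.0)  # speech rate: 150 vs 200 words/min -> 0.25
en_an = voice_intensity(900.0, 600.0)  # amplitude (arbitrary units)       -> 0.5
en_fn = voice_intensity(240.0, 200.0)  # frequency in Hz                   -> 0.2
print(en_sn, en_an, en_fn)
```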
The beneficial effects of the present invention are as follows:
Under standardized voice guidance, the invention links extreme emotions (events) to heart-rate changes and emotions (events) to speech features; significant heart-rate changes receive real-time feedback and effective marking, and the speech features of the reply are recorded and marked promptly. This helps professionals (e.g. psychotherapists) or the subject quickly understand the connections between emotions (events) and heart rate, and between emotions (events) and speech features. The method can be applied to the assessment of events and experiences such as trauma, different moods, self-negation and negation by others.
Detailed description of the invention
Fig. 1 is a schematic flowchart of establishing an emotion-indicator library from heart rate and speech features under standard voice guidance.
Specific embodiment
To make the objectives, technical solutions and advantages of the present invention clearer, the invention is further described below with reference to the accompanying drawing.
As shown in Fig. 1, the present invention is based on heart-rate (pulse-rate) monitoring and speech recognition, and proposes a method for correlating heart-rate changes and speech features with emotional events under standardized voice guidance.
Under standardized voice guidance, the subject recalls and associates corresponding emotional events. The real-time heart-rate changes during this process, or the heart-rate patterns and changes in specific periods (before guidance, during guidance, after guidance but before the subject speaks, and while the subject speaks), indicate particular physiological states and changes and the associated patterns and changes of emotion, cognition or behavior (mental processes). Knowing how to identify emotions and their changes from the joint change of heart rate and thought is of great value to psychotherapy.
Under standardized voice guidance, the subject recalls and associates corresponding emotional events; when they describe them verbally (recounting the event and the experience), the speaking duration, speech rate, and the amplitude and frequency of the voice reflect the subject's patterns and changes of emotion, cognition or behavior. Knowing how to identify emotions and their changes from the joint change of speech features and thought is likewise of great value to psychotherapy.
Application: through standardized voice guidance, the subject actively recalls important past emotional events, establishing individual baselines of basic emotion versus heart-rate response and emotion versus speech-feature response. This quantifies an individual's basic emotions through heart rate and speech features, and allows an individualized feature library of emotion/heart-rate and emotion/speech-feature responses to be built. It can help professionals (e.g. psychotherapists) or the subject link the subject's heart-rate changes and speech features to emotional changes, link them to changes in behavior and cognition (memory, association, attention, perception, thinking, etc.), and link them to changes in the external environment (the stimulus: the standardized guidance phrase). It can be applied to the assessment of events and experiences such as trauma, different moods, self-negation and negation by others.
First step (automatic heart-rate marking): under standardized voice guidance (inviting the subject to recall an extreme emotional event and describe the experience verbally), the sensor together with the hardware and software automatically displays and records every heart-rate value and automatically marks significant heart-rate changes. This lays the foundation for establishing the basic connection between heart rate and the subject's psychological and physiological processes.
The heart rate is calculated from the RR interval: heart rate = 60 / RR interval (seconds), i.e. the beats per minute derived from the duration of each complete cardiac cycle.
Large changes in the beat-to-beat heart rate (e.g. a rise or fall of more than 5 beats/min) are marked automatically.
The per-minute heart-rate variability is calculated; when it exceeds 3 beats/min, the period before and after the large variability change is marked.
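The three automatic steps above (heart rate from the RR interval, marking beat-to-beat changes over 5 beats/min, flagging per-minute variability over 3 beats/min) can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation; in particular, defining per-minute variability as the max-min spread within a minute is an assumption, since the text does not fix a formula.

```python
# Hypothetical sketch of the "automatic heart-rate marking" step.
# Thresholds (5 bpm beat-to-beat, 3 bpm/min variability) follow the text;
# function names and the variability definition are illustrative assumptions.

def heart_rates_from_rr(rr_seconds):
    """Instantaneous heart rate per beat: HR = 60 / RR interval (seconds)."""
    return [60.0 / rr for rr in rr_seconds]

def mark_large_changes(hr, threshold=5.0):
    """Indices where beat-to-beat heart rate rises or falls by more than `threshold` bpm."""
    return [i for i in range(1, len(hr)) if abs(hr[i] - hr[i - 1]) > threshold]

def minute_variability(hr_per_minute_windows, threshold=3.0):
    """Per-minute variability (assumed: max - min within each minute); flag minutes above `threshold`."""
    var = [max(w) - min(w) for w in hr_per_minute_windows]
    flagged = [i for i, v in enumerate(var) if v > threshold]
    return var, flagged

rr = [0.80, 0.79, 0.71, 0.70, 0.82]   # RR intervals in seconds
hr = heart_rates_from_rr(rr)           # e.g. 60 / 0.80 = 75.0 bpm for the first beat
print(mark_large_changes(hr))          # beats where HR jumped by more than 5 bpm
```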
A systematic report of the heart rate and heart-rate changes over a period can be produced:
For the period around the voice guidance evoking a given emotional event (the n-th guidance): the 30 seconds before the standardized guidance phrase plays is tn-1; the playing of the phrase is tn0; non-guidance stage 1 (after the phrase plays, subject not speaking) is tn1; non-guidance stage 2 (after the phrase plays, subject speaking) is tn2. For each period the average heart rate, maximum heart rate, minimum heart rate and heart-rate variability are reported.
With time as the horizontal axis and heart rate as the vertical axis, the time points of significant changes in heart rate and heart-rate variability are displayed graphically.
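The period segmentation and per-period indices just described can be illustrated with a small sketch. Everything concrete here (the sample values, the period boundaries, and the max-min definition of VarHR) is an invented assumption for the example; only the four periods and the four indices come from the text.

```python
# Illustrative sketch of the per-period heart-rate indices (MeanHR, MaxHR,
# MinHR, VarHR) for the periods tn-1, tn0, tn1, tn2. All values are invented.

def period_stats(hr_samples):
    """Mean, max, min and variability (assumed: max - min) for one period's HR samples."""
    return {
        "MeanHR": sum(hr_samples) / len(hr_samples),
        "MaxHR": max(hr_samples),
        "MinHR": min(hr_samples),
        "VarHR": max(hr_samples) - min(hr_samples),
    }

# Timestamped samples: (seconds from guidance start, bpm); negative = baseline window.
samples = [(-20, 72), (-10, 74), (5, 80), (12, 83), (30, 78), (45, 90), (60, 88)]
boundaries = {"tn-1": (-30, 0), "tn0": (0, 15), "tn1": (15, 40), "tn2": (40, 70)}

report = {
    name: period_stats([bpm for t, bpm in samples if lo <= t < hi])
    for name, (lo, hi) in boundaries.items()
}
print(report["tn-1"]["MeanHR"])  # baseline mean over the 30 s before guidance
```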
Second step (manual heart-rate marking): based on the heart-rate monitoring, the psychotherapist or the subject can promptly mark significant heart-rate changes during the standardized voice guidance and the subject's verbal description.
Marking methods:
1) Pressing a control key or icon on a computer or mobile phone inserts a time marker;
2) After the standardized voice guidance, multiple-choice questions or typed text can record the emotional event (whether the subject spoke, whether the expression was clear, and the topic, persons, time and place of the statement);
3) The intensity of the emotion evoked after the standardized voice guidance (fear, sadness, anger, remorse, shame, disgust, happiness, joy, relaxation, pride, calm, etc.) can also be evaluated and recorded.
Display of markings: based on the marked time, the heart-rate outliers at different time points and the text (if text was entered, it can be edited) are displayed.
Based on the marked time, the average heart rate and heart-rate variability in the 30 seconds (or another set interval) before and after are calculated and displayed.
Third step (heart rate combined with automatic speech recognition): based on speech monitoring, the software automatically identifies the times at which the subject starts and stops speaking, the speech rate (words per minute) and changes in sound amplitude and frequency, and marks the changes.
The automatic speech-recognition program identifies and counts the spoken words, obtaining the total number of words Xn spoken by the subject after a segment of voice guidance and the corresponding speaking time Tn (i.e. tn2), and calculates the speech rate Sn = Xn/Tn.
The amplitude An and frequency Fn of the sound during this time are obtained by a spectrum-analysis program.
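The speech-rate formula Sn = Xn/Tn and the spectrum-analysis step can be illustrated with a small sketch. This is an assumption-laden example, not the disclosure's implementation: a real system would obtain Xn from a speech-recognition engine, and defining An as the mean absolute amplitude and Fn as the dominant FFT frequency are simplifying choices of this example.

```python
# Hypothetical sketch of the speech-feature step: Sn = Xn / Tn for speech rate,
# plus a basic FFT to pick a dominant frequency Fn and mean amplitude An.
import numpy as np

def speech_rate(total_words, speaking_time_minutes):
    """Sn = Xn / Tn (words per minute)."""
    return total_words / speaking_time_minutes

def dominant_frequency_and_amplitude(waveform, sample_rate):
    """Fn: non-DC frequency bin with the largest magnitude; An: mean absolute amplitude."""
    spectrum = np.abs(np.fft.rfft(waveform))
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum[1:]) + 1], float(np.mean(np.abs(waveform)))

sr = 8000
t = np.arange(sr) / sr                      # one second of audio at 8 kHz
wave = 0.5 * np.sin(2 * np.pi * 220 * t)    # a 220 Hz test tone
fn, an = dominant_frequency_and_amplitude(wave, sr)
print(round(fn))  # 220
```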
Of course, the present invention may have various other embodiments; for example, the standardized voice guidance may be replaced with standardized video guidance. Without departing from the spirit and substance of the invention, those skilled in the art can make various corresponding changes and modifications, all of which fall within the protection scope of the appended claims.
Claims (10)
1. A method for assessing emotional characteristics based on language guidance and heart-rate response, characterized in that the method assesses a subject's emotional characteristics based on standardized voice guidance together with the subject's heart-rate response and the speech features of their reply, and includes marking and recording heart-rate changes, wherein marking and recording heart-rate changes comprises:
an automatic heart-rate marking method, in which a sensor combined with a data-processing device automatically displays and records every heart-rate value and automatically marks significant heart-rate changes, a significant heart-rate change including a heart-rate change and a heart-rate-variability change;
the heart-rate change being a deviation of the heart rate from its normal value by more than a first predetermined value;
the heart-rate-variability change being a per-minute heart-rate variability greater than a second predetermined value;
with time as the horizontal axis and heart rate as the vertical axis, graphically displaying the time points at which the heart rate and the per-minute heart-rate variability change;
further including recording the heart rate and heart-rate changes over a period of time, including the proportion of the period occupied by different heart-rate ranges, and the average heart rate, maximum heart rate, minimum heart rate and heart-rate variability in different sub-periods;
the marking and recording of heart-rate changes being carried out under standardized voice guidance, the standardized voice guidance being used to purposefully evoke the subject's recollection of different emotional events they have experienced, the emotions including positive emotions and negative emotions, the positive emotions including happiness, joy, relaxation, pride and/or calm, and the negative emotions including fear, sadness, anger, remorse, shame and/or disgust;
the data-processing device including an emotional-characteristics evaluation module that performs emotional-characteristics evaluation according to the markings and records.
2. The method for assessing emotional characteristics based on language guidance and heart-rate response according to claim 1, characterized in that marking and recording heart-rate changes further includes marking and recording heart-rate changes at the moment an emotional event is inserted, the emotional event being an event recalled by the subject that affects their mood.
3. The method for assessing emotional characteristics based on language guidance and heart-rate response according to claim 1, characterized in that the speech-feature change includes a speech-rate change and/or a sound amplitude or frequency change, and the speech rate and/or the amplitude and frequency of the sound are recorded, the speech rate being the speaker's average number of words per minute;
the speech rate being calculated as follows: the recognizer automatically identifies and counts the spoken words, obtains the total number of words Xn spoken by the subject after a segment of voice guidance and the corresponding speaking time Tn, and calculates the speech rate Sn = Xn/Tn;
the amplitude An and frequency Fn of the sound during this time being obtained by an audio-spectrum analysis module.
4. The method for assessing emotional characteristics based on language guidance and heart-rate response according to claim 3, characterized in that the speech-rate change and/or the sound-intensity change are automatically identified by a speech-recognition device, the speech-recognition device being connected to the data-processing device.
5. The method for assessing emotional characteristics based on language guidance and heart-rate response according to claim 1, characterized in that the standardized voice guidance refers to guidance phrases that lead the subject, step by step and according to different emotional properties, to recall and associate extreme events they have experienced under different emotions, the extreme events including conflict events, traumatic events, growth events or happy events.
6. The method for assessing emotional characteristics based on language guidance and heart-rate response according to claim 1, characterized in that the period of time is defined as follows: the 30 seconds before a standardized guidance phrase is played is tn-1, where n denotes the n-th segment; the playing of the standardized guidance phrase is tn0; the non-guidance stage tn1 is the stage after the phrase is played during which the subject does not speak; and the non-guidance stage tn2 is the stage after the phrase is played during which the subject speaks; each period has corresponding heart-rate-response indices: average heart rate MeanHRtn-1, MeanHRtn0, MeanHRtn1, MeanHRtn2; maximum heart rate MaxHRtn-1, MaxHRtn0, MaxHRtn1, MaxHRtn2; minimum heart rate MinHRtn-1, MinHRtn0, MinHRtn1, MinHRtn2; and heart-rate variability VarHRtn-1, VarHRtn0, VarHRtn1, VarHRtn2; the speech features of the non-guidance stage tn2 including the speaking time tn2, average speech rate Sn, average sound amplitude An and average sound frequency Fn.
7. The method for assessing emotional characteristics based on language guidance and heart-rate response according to claim 1, characterized by further including recording, in both standardized voice-guidance scenes and non-guidance scenes, whether corresponding recollections and associations occur and the associated emotional events and persons involved, the emotional events being recorded via multiple-choice questions and text input; and a time marker being insertable by key press in response to abnormal heart-rate changes fed back by the sensor.
8. The method for assessing emotional characteristics based on language guidance and heart-rate response according to claim 6, characterized by further including having the user evaluate and record the subjective intensity of the specific emotion evoked.
9. The method for assessing emotional characteristics based on language guidance and heart-rate response according to claim 6, characterized by further including calculating, from the subject's heart-rate features in the different periods under a guidance phrase that evokes the memory of a certain extreme emotional event, an objective intensity value Enx of the emotion as quantified by heart rate, using the following formulas:
EnMean0 = |MeanHRtn0 - MeanHRtn-1| / MeanHRtn-1
EnMean1 = |MeanHRtn1 - MeanHRtn-1| / MeanHRtn-1
EnMean2 = |MeanHRtn2 - MeanHRtn-1| / MeanHRtn-1
EnMax0 = |MaxHRtn0 - MaxHRtn-1| / MaxHRtn-1
EnMax1 = |MaxHRtn1 - MaxHRtn-1| / MaxHRtn-1
EnMax2 = |MaxHRtn2 - MaxHRtn-1| / MaxHRtn-1
EnMin0 = |MinHRtn0 - MinHRtn-1| / MinHRtn-1
EnMin1 = |MinHRtn1 - MinHRtn-1| / MinHRtn-1
EnMin2 = |MinHRtn2 - MinHRtn-1| / MinHRtn-1
EnVar0 = |VarHRtn0 - VarHRtn-1| / VarHRtn-1
EnVar1 = |VarHRtn1 - VarHRtn-1| / VarHRtn-1
EnVar2 = |VarHRtn2 - VarHRtn-1| / VarHRtn-1
10. The method for assessing emotional characteristics based on language guidance and heart-rate response according to claim 6, characterized by further including the following: under a guidance phrase that evokes the memory of a certain extreme emotional event, when the subject begins to express the emotional event in speech, the speaking time tn2, average speech rate Sn, average sound amplitude An and average sound frequency Fn are automatically extracted by the speech-recognition and audio-spectrum-analysis software; by comparison with the speech rate S0, sound amplitude A0 and sound frequency F0 measured while reading a standard text aloud, an objective intensity value Eny of the emotion as quantified by speech features is calculated with the following formulas:
EnSn = |Sn - S0| / S0
EnAn = |An - A0| / A0
EnFn = |Fn - F0| / F0
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910995416.4A CN110534135A (en) | 2019-10-18 | 2019-10-18 | Method for assessing emotional characteristics based on language guidance and heart-rate response |
PCT/CN2020/121972 WO2021073646A1 (en) | 2019-10-18 | 2020-10-19 | Method for evaluating emotional characteristics based on language guidance and heart rate response |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910995416.4A CN110534135A (en) | 2019-10-18 | 2019-10-18 | Method for assessing emotional characteristics based on language guidance and heart-rate response |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110534135A true CN110534135A (en) | 2019-12-03 |
Family
ID=68672102
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910995416.4A Pending CN110534135A (en) | 2019-10-18 | 2019-10-18 | Method for assessing emotional characteristics based on language guidance and heart-rate response |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110534135A (en) |
WO (1) | WO2021073646A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111407255A (en) * | 2020-04-01 | 2020-07-14 | 四川大学华西医院 | Method for recording, evaluating and intervening emotion through heart rate change and voice |
WO2021073646A1 (en) * | 2019-10-18 | 2021-04-22 | 四川大学华西医院 | Method for evaluating emotional characteristics based on language guidance and heart rate response |
CN113509154A (en) * | 2020-04-10 | 2021-10-19 | 广东小天才科技有限公司 | Human body temperature determination method and device, computer equipment and storage medium |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115641837A (en) * | 2022-12-22 | 2023-01-24 | 北京资采信息技术有限公司 | Intelligent robot conversation intention recognition method and system |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080221401A1 (en) * | 2006-10-27 | 2008-09-11 | Derchak P Alexander | Identification of emotional states using physiological responses |
CN101822863A (en) * | 2010-01-28 | 2010-09-08 | 深圳先进技术研究院 | Emotion regulating device and method thereof |
CN106803423A (en) * | 2016-12-27 | 2017-06-06 | 智车优行科技(北京)有限公司 | Man-machine interaction sound control method, device and vehicle based on user emotion state |
CN106910514A (en) * | 2017-04-30 | 2017-06-30 | 上海爱优威软件开发有限公司 | Method of speech processing and system |
CN107212896A (en) * | 2017-04-05 | 2017-09-29 | 天津大学 | A kind of emotional stability overall evaluation system and information processing method |
CN107714056A (en) * | 2017-09-06 | 2018-02-23 | 上海斐讯数据通信技术有限公司 | A kind of wearable device of intellectual analysis mood and the method for intellectual analysis mood |
CN108307037A (en) * | 2017-12-15 | 2018-07-20 | 努比亚技术有限公司 | Terminal control method, terminal and computer readable storage medium |
CN109087670A (en) * | 2018-08-30 | 2018-12-25 | 西安闻泰电子科技有限公司 | Mood analysis method, system, server and storage medium |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2713881B1 (en) * | 2011-06-01 | 2020-10-07 | Koninklijke Philips N.V. | Method and system for assisting patients |
CN105374366A (en) * | 2015-10-09 | 2016-03-02 | 广东小天才科技有限公司 | Method and system for wearable device to identify meaning |
JP6761598B2 (en) * | 2016-10-24 | 2020-09-30 | 富士ゼロックス株式会社 | Emotion estimation system, emotion estimation model generation system |
CN108634969B (en) * | 2018-05-16 | 2021-03-12 | 京东方科技集团股份有限公司 | Emotion detection device, emotion detection system, emotion detection method, and storage medium |
CN109394209B (en) * | 2018-10-15 | 2021-07-06 | 汕头大学 | Personalized emotion regulation system and method for music therapy for pregnant women |
CN110534135A (en) * | 2019-10-18 | 2019-12-03 | 四川大学华西医院 | Method for assessing emotional characteristics based on language guidance and heart rate response |
CN111407255A (en) * | 2020-04-01 | 2020-07-14 | 四川大学华西医院 | Method for recording, evaluating and intervening emotion through heart rate change and voice |
- 2019
  - 2019-10-18: CN application CN201910995416.4A, published as CN110534135A, status: active, Pending
- 2020
  - 2020-10-19: WO application PCT/CN2020/121972, published as WO2021073646A1, status: active, Application Filing
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080221401A1 (en) * | 2006-10-27 | 2008-09-11 | Derchak P Alexander | Identification of emotional states using physiological responses |
CN101822863A (en) * | 2010-01-28 | 2010-09-08 | 深圳先进技术研究院 | Emotion regulating device and method thereof |
CN106803423A (en) * | 2016-12-27 | 2017-06-06 | 智车优行科技(北京)有限公司 | Man-machine interaction sound control method, device and vehicle based on user emotion state |
CN107212896A (en) * | 2017-04-05 | 2017-09-29 | 天津大学 | Emotional stability comprehensive evaluation system and information processing method |
CN106910514A (en) * | 2017-04-30 | 2017-06-30 | 上海爱优威软件开发有限公司 | Speech processing method and system |
CN107714056A (en) * | 2017-09-06 | 2018-02-23 | 上海斐讯数据通信技术有限公司 | Wearable device and method for intelligent emotion analysis |
CN108307037A (en) * | 2017-12-15 | 2018-07-20 | 努比亚技术有限公司 | Terminal control method, terminal and computer readable storage medium |
CN109087670A (en) * | 2018-08-30 | 2018-12-25 | 西安闻泰电子科技有限公司 | Mood analysis method, system, server and storage medium |
Non-Patent Citations (1)
Title |
---|
Jia Liping (贾丽萍): "Emotions and Behavior" (《情绪与行为》), 30 June 2019, Qingdao: Ocean University of China Press *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021073646A1 (en) * | 2019-10-18 | 2021-04-22 | 四川大学华西医院 | Method for evaluating emotional characteristics based on language guidance and heart rate response |
CN111407255A (en) * | 2020-04-01 | 2020-07-14 | 四川大学华西医院 | Method for recording, evaluating and intervening emotion through heart rate change and voice |
CN113509154A (en) * | 2020-04-10 | 2021-10-19 | 广东小天才科技有限公司 | Human body temperature determination method and device, computer equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
WO2021073646A1 (en) | 2021-04-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110534135A (en) | Method for assessing emotional characteristics based on language guidance and heart rate response | |
McFarland | Respiratory markers of conversational interaction | |
EP1423846B1 (en) | Method and apparatus for speech analysis | |
US9131053B1 (en) | Method and system for improving call-participant behavior through game mechanics | |
Heldner et al. | Pitch similarity in the vicinity of backchannels | |
Hall et al. | Fundamental frequency, jitter, and shimmer in preschoolers who stutter | |
Kreiman et al. | The relationship between acoustic and perceived intraspeaker variability in voice quality | |
CN111407255A (en) | Method for recording, evaluating and intervening emotion through heart rate change and voice | |
JP4587854B2 (en) | Emotion analysis device, emotion analysis program, program storage medium | |
Gallardo et al. | The nautilus speaker characterization corpus: Speech recordings and labels of speaker characteristics and voice descriptions | |
CN114664328A (en) | Voice guidance system and operation method thereof | |
US7308407B2 (en) | Method and system for generating natural sounding concatenative synthetic speech | |
Scherer et al. | Automatic verification of emotionally stressed speakers: The problem of individual differences | |
JP2012198726A (en) | Conversation support device and conversation support method | |
Baiat et al. | Topic change detection based on prosodic cues in unimodal setting | |
Campbell | Individual traits of speaking style and speech rhythm in a spoken discourse | |
Kurtić et al. | Fundamental frequency height as a resource for the management of overlap in talk-in-interaction | |
JP2018045692A (en) | Co-thinking support program, conversation support device and conversation support method | |
WO2018108284A1 (en) | Audio recording device for presenting audio speech missed due to user not paying attention and method thereof | |
US7092884B2 (en) | Method of nonvisual enrollment for speech recognition | |
Tao et al. | Cuempathy: A counseling speech dataset for psychotherapy research | |
Amino et al. | Effects of linguistic contents on perceptual speaker identification: Comparison of familiar and unknown speaker identifications | |
CN109672787A (en) | An intelligent device reminder method |
Welkowitz et al. | An automated system for the analyses of temporal speech patterns: description of the hardware and software | |
Mongkolnavin et al. | Prediction of Forthcoming Anger of Customer in Call Center Dialogs |
Legal Events
Code | Title | Description |
---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20191203 |