CN109550133B - Emotion pacifying method and system

Emotion pacifying method and system

Info

Publication number
CN109550133B
CN109550133B
Authority
CN
China
Prior art keywords
emotion
conversation
parents
user
soothing
Prior art date
Legal status
Expired - Fee Related
Application number
CN201811423682.1A
Other languages
Chinese (zh)
Other versions
CN109550133A (en)
Inventor
赵司源
林欣鑫
张若瑶
洪莫凡
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN201811423682.1A
Publication of CN109550133A
Application granted
Publication of CN109550133B
Legal status: Expired - Fee Related
Anticipated expiration

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61M DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
    • A61M21/00 Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
    • A61M2021/0005 Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus
    • A61M2021/0027 Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus by the hearing sense
    • A61M2230/00 Measuring parameters of the user
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A61B5/48 Other medical applications
    • A61B5/4803 Speech analysis specially adapted for diagnostic purposes
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7271 Specific aspects of physiological measurement analysis
    • A61B5/7275 Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
    • A61B5/7282 Event detection, e.g. detecting unique waveforms indicative of a medical condition

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biomedical Technology (AREA)
  • Physics & Mathematics (AREA)
  • Surgery (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Molecular Biology (AREA)
  • Psychiatry (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Psychology (AREA)
  • Physiology (AREA)
  • Artificial Intelligence (AREA)
  • Developmental Disabilities (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Educational Technology (AREA)
  • Child & Adolescent Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Social Psychology (AREA)
  • Acoustics & Sound (AREA)
  • Anesthesiology (AREA)
  • Hematology (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention discloses an emotion placating method, which specifically comprises the following steps: a. recording everyday conversation information in advance and extracting emotion categories from the conversation content; b. obtaining conversation content and judging whether the user is in an abnormal emotional state; if so, executing a soothing operation, and if not, making no response; c. judging the type of the abnormal emotion and selecting a soothing mode according to that type. Step a specifically comprises: setting a data acquisition device in an activity area where the user frequently converses, acquiring the user's everyday conversation information, manually selecting the conversation information interval corresponding to each emotion category, and completing the presetting. The data acquisition device is preset with an emotion recognition module for pre-judging the emotion category of the conversation content; if the judgment is correct, the matching is accepted, and if it is incorrect, the range of the conversation information interval is adjusted manually.

Description

Emotion pacifying method and system
Technical Field
The invention relates to the field of computers, in particular to an emotion placating method and system.
Background
Parents are a child's first teachers, and much of a parent's speech and behavior affects the child's body and mind. Some situations call for restraining mood swings: for example, when a child is still small and misbehaves, parents may say harsh things that hurt the child's self-esteem, and the child may become too afraid of the parents to speak up, making the child withdrawn and adversely affecting the later shaping of the child's character and personality.
Disclosure of Invention
In order to solve the above problems, the invention provides an emotion soothing method and system, addressing the problem that parents cannot be reminded in time when they scold a child, which can make the child introverted and adversely affect the later shaping of the child's personality.
To achieve this purpose, the invention provides the following technical solution:
an emotion placating method specifically comprises the following steps:
a. recording everyday conversation information in advance, and extracting emotion categories from the conversation content;
b. obtaining conversation content, and judging whether the user is in an abnormal emotional state; if so, executing a soothing operation, and if not, making no response;
c. judging the type of the abnormal emotion, and selecting a soothing mode according to the type of the abnormal emotion.
In some embodiments, step a specifically comprises:
setting a data acquisition device in an activity area where the user frequently converses, acquiring the user's everyday conversation information, manually selecting the conversation information interval corresponding to each emotion category, and completing the presetting;
in some embodiments, the data acquisition device is preset with an emotion recognition module, which is configured to pre-determine an emotion type in the session content, accept the matching content if the emotion type is determined correctly, and manually adjust the range of the session information interval if the emotion type is determined incorrectly.
In some embodiments, the emotion categories include one or more of angry, calm, sad, surprised, and happy.
In some embodiments, the recognition features include short-time energy, fundamental frequency, Mel-frequency cepstral coefficients, and formants.
In some embodiments, an angry state is determined when the mean fundamental frequency of the recognized speech is between 207.3 Hz and 248.9 Hz.
In some embodiments, the soothing modes include one or more of voice reminding, music, and light.
The invention also provides an emotion placating system, comprising:
the data acquisition unit is used for acquiring user voice information;
the control unit is used for receiving the user voice information so as to match and analyze emotion categories;
and the soothing unit is used for responding to the abnormal emotion of the user, determining a soothing mode according to the emotion of the user, and performing emotion soothing on the user.
In some embodiments, the control unit includes an emotion analyzing unit for pre-judging and classifying pre-recorded voice information of the user.
In some embodiments, the emotion soothing system further comprises a display unit for displaying pre-entered audio data;
and a touch control unit for manually matching emotion categories to intervals of the audio data.
By adopting the above technical solution, the invention has the following beneficial effects compared with the prior art:
the scheme can timely send out prompt tones when detecting that the emotion of parents is abnormal, and the parents can timely pay attention to the words of the parents so as to avoid adverse effects of improper words on children.
In this scheme, everyday audio is recorded in advance before use; after a period of time, the user actively selects and matches audio intervals to emotion categories. This avoids misjudgments and missed judgments caused by individual differences among users in the same emotional state, greatly improves recognition accuracy, and improves the user experience.
Drawings
FIG. 1 is a schematic illustration of a pacifying process in accordance with an embodiment of the present invention;
FIG. 2 is a block diagram of a system of an emotion soothing system according to an embodiment of the present invention;
FIG. 3 shows the fundamental frequencies of five common emotional states in the embodiment of the present invention.
Detailed Description
The following describes an emotion soothing method and system according to the present invention in further detail with reference to the accompanying drawings and specific embodiments. Advantages and features of the present invention will become apparent from the following description and from the claims. It is to be noted that the drawings are in a very simplified form and are not drawn to precise scale, their only purpose being to aid in conveniently and clearly explaining the embodiments of the invention.
The invention provides an emotion placating method, which comprises the following specific steps:
a. recording everyday conversation information in advance, and extracting emotion categories from the conversation content;
b. obtaining conversation content, and judging whether the user is in an abnormal emotional state; if so, executing a soothing operation, and if not, making no response;
c. judging the type of the abnormal emotion, and selecting a soothing mode according to the type of the abnormal emotion.
Specifically, referring to FIG. 1, before use the device is placed in a living area where the user frequently talks, and a period of conversation is collected to obtain the sound characteristics corresponding to the user's different emotion categories during that time, which increases reliability and accuracy in subsequent use. After the dialogue data are extracted, the user manually matches each dialogue interval with an emotion category. For example, if the user clearly knows which section of the pre-recorded content was happy, the user selects that dialogue interval and matches it to the happy category. This helps the device accurately extract the features of different emotions and avoids recognition errors caused by individual differences between users.
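For illustration only, the manual matching step described above can be modeled as a list of labeled time intervals. The following minimal Python sketch uses hypothetical names that are not part of the patent:

```python
# Hypothetical sketch of the manual interval-matching step (names invented
# for illustration; the patent does not prescribe a data structure).
from dataclasses import dataclass

EMOTIONS = ("calm", "sad", "surprised", "happy", "angry")

@dataclass
class LabeledInterval:
    start_s: float  # interval start within the pre-recorded audio, seconds
    end_s: float    # interval end, seconds
    emotion: str    # category chosen manually by the user

def match_interval(presets, start_s, end_s, emotion):
    """Record the user's manual match of a dialogue interval to an emotion."""
    if emotion not in EMOTIONS:
        raise ValueError(f"unknown emotion: {emotion}")
    if end_s <= start_s:
        raise ValueError("interval must have positive length")
    presets.append(LabeledInterval(start_s, end_s, emotion))

# Usage: the user knows seconds 12.0-20.5 of the recording were happy talk.
presets = []
match_interval(presets, 12.0, 20.5, "happy")
```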
In another embodiment, the pre-recording apparatus includes an emotion analyzing system that makes an initial evaluation of the user's emotion while speaking; the user then judges whether the analysis is accurate. If it is, the result is accepted; if not, it is adjusted manually.
After pre-recording is finished, the device enters the working state: it collects the user's dialogue information and judges the emotional state from it. When an angry state is detected, it promptly emits a prompt tone as a reminder, avoiding physical and psychological harm to others. If the user is not in an angry state, no operation is performed.
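A minimal sketch of this working loop, assuming hypothetical capture_chunk, classify_emotion, and sound_alert helpers (the patent leaves their implementation to the device):

```python
# Sketch of the working state: listen, judge, and soothe only when angry.
# capture_chunk, classify_emotion, and sound_alert are placeholders.
import time

def classify_emotion(audio_chunk) -> str:
    """Placeholder: would return one of the five categories using the
    acoustic features matched during pre-recording."""
    raise NotImplementedError

def sound_alert() -> None:
    """Placeholder: would drive the audio playing module."""
    raise NotImplementedError

def monitor(capture_chunk, poll_s: float = 1.0) -> None:
    while True:
        chunk = capture_chunk()               # audio from the microphone
        if classify_emotion(chunk) == "angry":
            sound_alert()                     # soothing operation (step b)
        # any other state: no response, per the method
        time.sleep(poll_s)
```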
FIG. 2 is a system block diagram of the emotion soothing system, which comprises a main control MCU module, a voice recognition system, a parent emotion recognition sensor, and an audio playing module, all integrated in one device. The system hardware uses an Arduino UNO development board as the MCU, whose core is the Atmel 8-bit ATmega328 microcontroller.
The power supply module supplies power to the whole system and comprises power supplies for the sensors, the audio playing system, the online voice recognition system, and the MCU (microcontroller unit) main control. The MCU main control module controls the operation of the whole system, including the audio playing system and the online voice recognition module. The system further comprises a PC-side host computer display system for displaying parent emotion data in real time and drawing a chart of parent emotional state changes. The system is modularized and connected like building blocks, which avoids tedious wiring, saves cost, and reduces system complexity. The specific connections between the modules can be adjusted according to actual conditions.
To avoid the tedious procedures of manual startup and shutdown, and the risk of forgetting to start the system when parents and children talk, the device adopts an active wake-up function. The system normally stays in a standby state; when it recognizes a parent's voice, the microphone begins uploading the collected sound to the cloud for API analysis, and when the cloud API judges the parent's voice to be in an abnormal state, the whole system starts working.
The voice recognition module uses a Raspberry Pi running an embedded Linux system; a Python program encodes the sound recorded by the microphone and uploads it to the cloud, where a speech recognition API is called for voice recognition.
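The patent does not name the cloud provider; as an assumption-laden sketch, the record-encode-upload step might look like the following, using the sounddevice and requests packages (the endpoint URL and API key are placeholders):

```python
# Hypothetical sketch: record from the microphone and post to a cloud
# speech API. The endpoint URL and API key are placeholders, not the
# provider used in the patent.
import io
import wave

import requests
import sounddevice as sd

SAMPLE_RATE = 16000
API_URL = "https://example.com/v1/speech:analyze"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                           # placeholder credential

def record_wav(seconds: float) -> bytes:
    """Record mono 16-bit PCM audio and return it as WAV bytes."""
    frames = sd.rec(int(seconds * SAMPLE_RATE), samplerate=SAMPLE_RATE,
                    channels=1, dtype="int16")
    sd.wait()
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)        # 16-bit samples
        w.setframerate(SAMPLE_RATE)
        w.writeframes(frames.tobytes())
    return buf.getvalue()

def analyze_speech(wav_bytes: bytes) -> dict:
    """Upload audio and return the cloud API's JSON analysis."""
    resp = requests.post(API_URL,
                         headers={"Authorization": f"Bearer {API_KEY}"},
                         files={"audio": ("clip.wav", wav_bytes, "audio/wav")})
    resp.raise_for_status()
    return resp.json()
```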
The parent emotion recognition module uses a microphone to collect the parent's speech, which is then encoded and transmitted to the online speech recognition system.
Speech emotion analysis mainly examines acoustic features, chiefly short-time energy, fundamental frequency, Mel-frequency cepstral coefficients, and formants. In a speech signal, the extracted fundamental frequency is essentially consistent with the vocal-cord vibration frequency of the speaker, so the fundamental frequency is used as the feature for speech emotion recognition. As FIG. 3 shows, the mean fundamental frequencies of speech in the five basic emotional states (calm, sad, surprised, happy, and angry) differ considerably, so the mean fundamental frequency distinguishes the five states well; when the mean fundamental frequency of the recognized speech is between 207.3 Hz and 248.9 Hz, an angry state is determined.
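The patent does not specify a pitch-extraction algorithm; one illustrative way to compute the mean fundamental frequency and apply the stated 207.3-248.9 Hz anger rule is librosa's pYIN tracker:

```python
# Sketch: estimate mean fundamental frequency (F0) and apply the
# 207.3-248.9 Hz anger rule from the description. Using librosa.pyin
# is an illustrative choice, not specified by the patent.
import numpy as np
import librosa

ANGRY_F0_RANGE_HZ = (207.3, 248.9)

def mean_f0(wav_path: str) -> float:
    y, sr = librosa.load(wav_path, sr=None, mono=True)
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"),   # ~65 Hz lower bound
        fmax=librosa.note_to_hz("C7"),      # ~2093 Hz upper bound
        sr=sr)
    voiced = f0[voiced_flag & ~np.isnan(f0)]   # keep voiced frames only
    return float(np.mean(voiced)) if voiced.size else float("nan")

def is_angry(wav_path: str) -> bool:
    lo, hi = ANGRY_F0_RANGE_HZ
    f0 = mean_f0(wav_path)
    return lo <= f0 <= hi

# Usage: trigger the soothing operation only for the angry state.
# if is_angry("parent_clip.wav"): sound_alert()
```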
The audio playing module is a small, inexpensive MP3 module that can directly drive a loudspeaker. The module integrates hardware decoding of MP3, WAV, and WMA, and its firmware supports TF-card storage with FAT16 and FAT32 file systems. Playing a specified piece of music can be accomplished with simple serial-port commands, without tedious low-level operations, making it convenient, stable, and reliable.
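A sketch of issuing such a serial play command with pyserial; the 10-byte frame below imitates common DFPlayer-style MP3 modules and is an assumption, since the patent does not identify the exact module:

```python
# Hypothetical sketch: tell a serial MP3 module to play a given track.
# The frame layout imitates common DFPlayer-style modules and is NOT
# taken from the patent; consult the actual module's datasheet.
import serial

def play_track(port: str, track: int) -> None:
    cmd = 0x03                      # "play track by number" on many modules
    hi, lo = (track >> 8) & 0xFF, track & 0xFF
    payload = [0xFF, 0x06, cmd, 0x00, hi, lo]
    checksum = (0 - sum(payload)) & 0xFFFF  # two's-complement checksum
    frame = bytes([0x7E, *payload, (checksum >> 8) & 0xFF,
                   checksum & 0xFF, 0xEF])
    with serial.Serial(port, baudrate=9600, timeout=1) as ser:
        ser.write(frame)

# Usage: play the first soothing track through the module on /dev/ttyUSB0.
# play_track("/dev/ttyUSB0", 1)
```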
The intelligent reminding system uses LEDs with 16 million colors as the light source, and the system displays different light colors according to the emotion it judges. The different ambient-light colors remind parents to pay attention to their own emotions and help improve the communication environment between parents and children.
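The emotion-to-color mapping could be as simple as a lookup table; the particular colors below are illustrative assumptions, as the patent only states that different emotions yield different light colors:

```python
# Illustrative emotion-to-color mapping for the reminder lamp.
# The specific RGB values are assumptions; the patent only states that
# different emotions produce different light colors.
EMOTION_COLORS = {
    "calm":      (0, 128, 255),   # soft blue
    "sad":       (64, 64, 160),   # muted indigo
    "surprised": (255, 200, 0),   # amber
    "happy":     (0, 200, 80),    # green
    "angry":     (255, 0, 0),     # red, the state that triggers soothing
}

def color_for(emotion: str) -> tuple:
    """Return the (R, G, B) value to show for a recognized emotion."""
    return EMOTION_COLORS.get(emotion, (255, 255, 255))  # white = unknown
```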
The PC host computer display system is programmed in LabVIEW and communicates with the device over RS485 or RS232. It records the parent's emotional condition at each moment on the PC and draws a chart of parent emotional state changes, reminding parents to watch their emotions at any time and improving the efficiency of communication between parents and children.
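The patent implements the host display in LabVIEW; purely as an illustration of the read-and-record idea over the serial link, an equivalent logger sketched in Python with pyserial could be:

```python
# Illustrative stand-in for the host-computer logger (the patent uses
# LabVIEW; this Python sketch only shows the read-and-record idea).
import csv
import time

import serial

def log_emotions(port: str, csv_path: str, baudrate: int = 9600) -> None:
    """Read one emotion label per line from the device and timestamp it."""
    with serial.Serial(port, baudrate=baudrate, timeout=5) as ser, \
         open(csv_path, "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            line = ser.readline().decode("utf-8", errors="replace").strip()
            if line:                      # e.g. "angry", "calm", ...
                writer.writerow([time.strftime("%Y-%m-%d %H:%M:%S"), line])
```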
The emotion prediction system feeds the parent's voice collected by the microphone to the cloud, calling both the speech recognition API and a machine-learning API for voice analysis. From information such as the frequency and time points of the parent's emotion changes, it can infer moments of large emotional fluctuation, achieving a predictive effect: parents are reminded in advance to pay attention to their emotions and can adjust their state of mind.
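A toy sketch of such a prediction from logged anger events; the "above-average hour" rule is an assumption, since the patent delegates the actual analysis to a cloud machine-learning API:

```python
# Toy stand-in for the prediction step: flag hours of day in which logged
# anger events occur more often than average. The rule is an assumption.
from collections import Counter
from datetime import datetime
from statistics import mean

def risky_hours(angry_timestamps):
    """Return hours of day where anger events cluster above the average."""
    counts = Counter(datetime.fromisoformat(t).hour for t in angry_timestamps)
    if not counts:
        return []
    avg = mean(counts.values())
    return sorted(h for h, c in counts.items() if c > avg)

# Usage: remind the parent ahead of historically volatile hours.
log = ["2018-11-26T18:05:00", "2018-11-26T18:40:00",
       "2018-11-27T18:10:00", "2018-11-27T08:00:00"]
print(risky_hours(log))  # [18]
```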
The voice used for the voice reminder is the child's real recorded voice, which increases the closeness between parents and children and helps the adult quickly cool down and calm down.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (6)

1. An emotion placating method is characterized by comprising the following steps:
a. pre-recording everyday conversation information and extracting emotion categories from the conversation content, wherein after a period of time the user actively selects audio intervals matching the emotion categories, specifically comprising:
setting a data acquisition device in an activity area where the user frequently converses, acquiring the user's everyday conversation information, manually selecting the conversation information interval corresponding to each emotion category, and completing the presetting;
the data acquisition device being internally preset with an emotion recognition module for pre-judging the emotion category of the conversation content, accepting the matching if the judgment is correct, and manually adjusting the range of the conversation information interval if the judgment is incorrect;
extracting the emotion categories, the emotion categories being distinguished by analyzing acoustic features including short-time energy, fundamental frequency, Mel-frequency cepstral coefficients, and formants;
b. obtaining conversation content and judging whether it reflects an abnormal emotional state; if so, executing a soothing operation, and if not, making no response; when the system recognizes the parent's voice, the microphone begins uploading the collected sound to the cloud for API analysis, and when the cloud API judges the parent's voice to be in an abnormal state, the whole system starts working;
c. judging the type of the abnormal emotion and selecting a soothing mode according to the type of the abnormal emotion; meanwhile, an intelligent reminding system using LEDs with 16 million colors as the light source displays different light colors for the different emotions it judges, so as to remind parents to pay attention to their own emotions and improve the communication environment between parents and children.
2. The emotion soothing method according to claim 1, wherein the emotion categories include one or more of angry, calm, sad, surprised, and happy.
3. The method according to claim 2, wherein an angry state is determined when the mean fundamental frequency of the recognized speech is between 207.3 Hz and 248.9 Hz.
4. An emotion soothing method according to claim 1, wherein the soothing manner includes one or more of voice reminding, music and light.
5. An emotion soothing system, comprising:
the data acquisition unit is used for acquiring the user's voice information and extracting emotion categories from conversation content, specifically: a data acquisition device is set in an activity area where the user frequently converses to acquire the user's everyday conversation information, the conversation information interval corresponding to each emotion category is selected manually, and the presetting is completed; the data acquisition device is internally preset with an emotion recognition module for pre-judging the emotion category of the conversation content, accepting the matching if the judgment is correct, and allowing manual adjustment of the range of the conversation information interval if the judgment is incorrect;
the control unit is used for receiving the user's voice information and matching and analyzing emotion categories, the emotion categories being extracted and distinguished by analyzing acoustic features including short-time energy, fundamental frequency, Mel-frequency cepstral coefficients, and formants;
the soothing unit is used for responding to the user's abnormal emotion, determining a soothing mode according to the user's emotion, and soothing the user's emotion; meanwhile, an intelligent reminding system using LEDs with 16 million colors as the light source displays different light colors for the different emotions judged, so as to remind parents to pay attention to their own emotions and improve the communication environment between parents and children; when the system recognizes the parent's voice, the microphone uploads the collected sound to the cloud for API analysis, and when the cloud API judges the parent's voice to be in an abnormal state, the whole system starts working.
6. An emotion soothing system according to claim 5, further comprising a display unit for displaying pre-entered audio data;
and a touch control unit for manually matching emotion categories to intervals of the audio data.
CN201811423682.1A 2018-11-26 2018-11-26 Emotion pacifying method and system Expired - Fee Related CN109550133B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811423682.1A CN109550133B (en) 2018-11-26 2018-11-26 Emotion pacifying method and system


Publications (2)

Publication Number Publication Date
CN109550133A (en) 2019-04-02
CN109550133B (en) 2021-05-11

Family

ID=65867702

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811423682.1A Expired - Fee Related CN109550133B (en) 2018-11-26 2018-11-26 Emotion pacifying method and system

Country Status (1)

Country Link
CN (1) CN109550133B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110136743A (en) * 2019-04-04 2019-08-16 平安科技(深圳)有限公司 Monitoring method of health state, device and storage medium based on sound collection


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106412312A (en) * 2016-10-19 2017-02-15 北京奇虎科技有限公司 Method and system for automatically awakening camera shooting function of intelligent terminal, and intelligent terminal
CN107714056A (en) * 2017-09-06 2018-02-23 上海斐讯数据通信技术有限公司 A kind of wearable device of intellectual analysis mood and the method for intellectual analysis mood

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104202718A (en) * 2014-08-05 2014-12-10 百度在线网络技术(北京)有限公司 Method and device for providing information for user
CN104288889A (en) * 2014-08-21 2015-01-21 惠州Tcl移动通信有限公司 Emotion regulation method and intelligent terminal
US20170102783A1 (en) * 2015-10-08 2017-04-13 Panasonic Intellectual Property Corporation Of America Method for controlling information display apparatus, and information display apparatus
CN106528859A (en) * 2016-11-30 2017-03-22 英华达(南京)科技有限公司 Data pushing system and method
CN106658129A (en) * 2016-12-27 2017-05-10 上海智臻智能网络科技股份有限公司 Emotion-based terminal control method and apparatus, and terminal
CN107066514A (en) * 2017-01-23 2017-08-18 深圳亲友科技有限公司 The Emotion identification method and system of the elderly
CN108594991A (en) * 2018-03-28 2018-09-28 努比亚技术有限公司 A kind of method, apparatus and computer storage media that help user to adjust mood
CN108549720A (en) * 2018-04-24 2018-09-18 京东方科技集团股份有限公司 It is a kind of that method, apparatus and equipment, storage medium are pacified based on Emotion identification

Also Published As

Publication number Publication date
CN109550133A (en) 2019-04-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210511

Termination date: 20211126