CN109550133A - Emotion soothing method and system - Google Patents
Emotion soothing method and system
- Publication number
- CN109550133A (application number CN201811423682.1A)
- Authority
- CN
- China
- Prior art keywords
- emotion
- classification
- soothing
- user
- dialogue information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61M—DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
- A61M21/00—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/165—Evaluating the state of mind, e.g. depression, anxiety
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4803—Speech analysis specially adapted for diagnostic purposes
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7271—Specific aspects of physiological measurement analysis
- A61B5/7275—Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7271—Specific aspects of physiological measurement analysis
- A61B5/7282—Event detection, e.g. detecting unique waveforms indicative of a medical condition
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61M—DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
- A61M21/00—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
- A61M2021/0005—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus
- A61M2021/0027—Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus by the hearing sense
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61M—DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
- A61M2230/00—Measuring parameters of the user
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Animal Behavior & Ethology (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Veterinary Medicine (AREA)
- Public Health (AREA)
- Heart & Thoracic Surgery (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Medical Informatics (AREA)
- Surgery (AREA)
- Psychiatry (AREA)
- Biophysics (AREA)
- Pathology (AREA)
- Physiology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Signal Processing (AREA)
- Artificial Intelligence (AREA)
- Psychology (AREA)
- Child & Adolescent Psychology (AREA)
- Anesthesiology (AREA)
- Hematology (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Developmental Disabilities (AREA)
- Educational Technology (AREA)
- Hospice & Palliative Care (AREA)
- Social Psychology (AREA)
- Acoustics & Sound (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
- Electrically Operated Instructional Devices (AREA)
Abstract
The invention discloses an emotion soothing method comprising the following steps: a. pre-recording everyday dialogue information and extracting emotion categories from the dialogue content; b. obtaining dialogue content and judging whether the emotion is in an abnormal state; if so, performing a soothing operation, and if not, making no response; c. judging the category of the abnormal emotion and selecting a soothing mode according to the abnormal emotion category. Step a specifically includes: placing a data acquisition device in an area where the user frequently converses, collecting the user's everyday dialogue information, and manually selecting the dialogue segment corresponding to each emotion category to complete the presets. An emotion recognition module preset in the data acquisition device pre-judges the emotion category of the dialogue content; the matched content is accepted if the judgment is correct, and the dialogue segment boundaries are adjusted manually if it is incorrect.
Description
Technical field
The present invention relates to the field of computing, and in particular to an emotion soothing method and system.
Background art
Parents are a child's first teachers, and much of a parent's demeanor affects the child profoundly. In some situations a parent's mood may swing inappropriately. For example, when a child is still small, a parent may say something harsh because of the child's behavior, hurting the child's self-esteem; the child, fearing the parent, dares not speak up. This can make the child introverted and adversely affect the later shaping of the child's personality.
Summary of the invention
To solve the above problems, the present invention provides an emotion soothing method and system, addressing the problem that a parent who loses their temper at a child cannot be reminded in time, which can make the child introverted and adversely affect the later shaping of the child's personality.
To achieve the above object, the present invention provides the following technical solutions:
An emotion soothing method, specifically comprising the following steps:
a. pre-recording everyday dialogue information and extracting emotion categories from the dialogue content;
b. obtaining dialogue content and judging whether the emotion is in an abnormal state; if so, performing a soothing operation, and if not, making no response;
c. judging the category of the abnormal emotion and selecting a soothing mode according to the abnormal emotion category.
In some embodiments, step a specifically includes: placing the data acquisition device in an area where the user frequently converses, collecting the user's everyday dialogue information, and manually selecting the dialogue segment corresponding to each emotion category to complete the presets.
In some embodiments, an emotion recognition module is preset in the data acquisition device for pre-judging the emotion category of the dialogue content; the matched content is accepted if the judgment is correct, and the dialogue segment boundaries are adjusted manually if it is incorrect.
In some embodiments, the emotion categories include one or more of angry, calm, sad, surprised, and happy.
In some embodiments, the recognition features include short-time energy, fundamental frequency, mel-frequency cepstral coefficients, and formants.
In some embodiments, when the mean fundamental frequency of the recognized sound is between 207.3 Hz and 248.9 Hz, the state is judged to be anger.
In some embodiments, the soothing mode includes one or more of a voice reminder, music, and light.
The present invention also provides an emotion soothing system, comprising:
a data acquisition unit for collecting the user's voice information;
a control unit for receiving the user's voice information and analyzing the matching emotion category;
a soothing unit for, in response to the user's emotion being abnormal, determining a soothing mode according to the user's emotion and soothing the user.
In certain embodiments, the control unit includes an emotion analysis unit that pre-judges and classifies the pre-recorded user voice information.
In certain embodiments, the emotion soothing system further includes a display unit for showing the pre-recorded audio data, and a touch control unit for manually matching emotion categories to segments of the audio data.
Compared with the prior art, the above technical solution has the following beneficial effects:
When a parent's abnormal emotion is detected, the scheme promptly issues a prompt tone so that the parent pays attention to their own words and deeds in time, preventing improper speech from harming the child.
Because everyday audio is recorded before use and the user actively matches audio segments to emotion categories, the scheme avoids misjudging the same emotional state because of individual differences between users, greatly improving recognition accuracy and the user experience.
Brief description of the drawings
Fig. 1 is a flow diagram of the soothing process in one embodiment of the invention;
Fig. 2 is a block diagram of an emotion soothing system in an embodiment of the invention;
Fig. 3 shows the fundamental frequencies under five common affective states in an embodiment of the invention.
Detailed description of the embodiments
The emotion soothing method and system proposed by the present invention are described in further detail below with reference to the drawings and specific embodiments. The advantages and features of the invention will become clearer from the following description and claims. Note that the drawings are in a highly simplified form and not to precise scale, serving only to illustrate the embodiments of the invention conveniently and clearly.
The present invention provides an emotion soothing method whose specific steps include:
a. pre-recording everyday dialogue information and extracting emotion categories from the dialogue content;
b. obtaining dialogue content and judging whether the emotion is in an abnormal state; if so, performing a soothing operation, and if not, making no response;
c. judging the category of the abnormal emotion and selecting a soothing mode according to the abnormal emotion category.
Specifically, referring to Fig. 1, before use the device is placed in the living area where the user often converses and records dialogue for a period of time, in order to capture the acoustic properties corresponding to the user's different emotion categories and thereby increase the reliability and accuracy of subsequent use. After the dialogue data has been extracted, the user manually matches each dialogue segment to an emotion category. For example, the user knows clearly which passage of the pre-recorded content was spoken in a happy state; the user then pairs that dialogue segment with the "happy" emotion category, helping the device accurately extract the characteristics of each emotion and avoiding recognition errors caused by the user's individual differences.
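The preset step above can be sketched in code. This is a minimal illustration, not taken from the patent: the segment boundaries, category names, and the `label_segment` helper are all hypothetical.

```python
# Hypothetical sketch of the manual preset step: the user assigns
# emotion categories to segments (start/end, in seconds) of the
# pre-recorded dialogue. All names and values are illustrative.

EMOTIONS = {"angry", "calm", "sad", "surprised", "happy"}

def label_segment(presets, start, end, emotion):
    """Record that the dialogue between `start` and `end` carries `emotion`."""
    if emotion not in EMOTIONS:
        raise ValueError(f"unknown emotion category: {emotion}")
    if end <= start:
        raise ValueError("segment end must come after its start")
    presets.append({"start": start, "end": end, "emotion": emotion})
    return presets

presets = []
label_segment(presets, 0.0, 4.2, "happy")
label_segment(presets, 4.2, 9.0, "calm")
```

Validating the boundaries at labeling time keeps the device's preset table free of overlapping or empty segments.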
In another embodiment, the pre-recording device contains an emotion analysis system that makes a preliminary judgment of the user's emotion while speaking; the user then checks whether the analysis is accurate, accepting it if so and adjusting it manually if not.
After pre-recording is complete, the device enters its working state: it collects the user's dialogue information, judges the emotional state from it, and promptly issues a prompt tone as a reminder when anger is detected, avoiding physical and psychological harm to others. If the state is not anger, nothing is done.
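The working cycle just described, which prompts only on anger and ignores everything else, can be sketched as follows; the emotion label and `play_prompt` are hypothetical stand-ins for the device's real modules.

```python
# Minimal sketch of the device's working behavior, assuming some
# emotion classifier exists upstream; `play_prompt` stands in for the
# real prompt-tone hardware. Only an angry state triggers a response.

def respond(emotion, play_prompt):
    """Issue a prompt tone only for the abnormal (angry) state."""
    if emotion == "angry":
        play_prompt()
        return "prompted"
    return "no response"   # all non-angry states are ignored

tones = []
result_angry = respond("angry", lambda: tones.append("tone"))
result_calm = respond("calm", lambda: tones.append("tone"))
```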
Fig. 2 is a block diagram of an emotion soothing system comprising a main-control MCU module, a speech recognition system, a parent emotion recognition sensor, and an audio playback module, all integrated inside a single device. The hardware uses an Arduino UNO development board as the master-control MCU, whose core is an Atmel 8-bit microcontroller, the ATmega328.
A power module supplies the whole system, including the sensor power, audio playback power, online speech recognition, and MCU master-control power subsystems. The MCU main-control module controls the operation of the whole system, including the audio playback system and the online speech recognition module. A PC host-computer display system is also included, which displays the parent's emotion data in real time and plots the parent's emotional-state curve. Through modular, building-block connections the system avoids tangled wiring, saving cost and reducing system complexity. The specific connections between the above modules can be adapted and redesigned according to the actual situation.
Active wake-up. To spare the user tedious steps such as manually switching the device on and off, and to avoid troubles such as the parent forgetting to start it during a dialogue, the system uses an active wake-up function. The system normally stays in standby; when it detects the parent's voice, the microphone starts up and the collected sound is uploaded to the cloud, where an API is called for analysis. When the cloud API judges that the parent's voice is in an abnormal state, the system starts working fully.
The speech recognition module uses a Raspberry Pi programmed in Python on an embedded Linux system. Sound recorded by the microphone is encoded and uploaded to the cloud, and the cloud's speech recognition API is called to perform recognition.
The parent emotion recognition module uses the microphone to collect the parent's current speech, then encodes it and transmits it to the online speech recognition system.
The analysis of speech emotion data is based mainly on acoustic features, chiefly short-time energy, fundamental frequency, mel-frequency cepstral coefficients, and formants. In a speech signal the fundamental frequency corresponds closely to the vocal-fold vibration frequency of interest, so the fundamental frequency is used as the characteristic for speech emotion recognition. As Fig. 3 shows, the mean fundamental frequencies of speech differ markedly across the five basic emotional states of calm, sadness, surprise, happiness, and anger, so the mean fundamental frequency can distinguish these five states well. When the mean fundamental frequency of the recognized sound is between 207.3 Hz and 248.9 Hz, the state is judged to be anger.
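As a sketch of the pitch-based decision: estimate the fundamental frequency (here by autocorrelation on a synthetic 220 Hz tone) and flag anger when the mean falls in the 207.3 to 248.9 Hz band. The autocorrelation estimator is an assumption; the description does not name a specific pitch algorithm.

```python
import math

def estimate_f0(samples, sample_rate, fmin=80.0, fmax=400.0):
    """Estimate the fundamental frequency by picking the lag with the
    highest autocorrelation between the lags for fmax and fmin."""
    lag_lo = int(sample_rate / fmax)
    lag_hi = int(sample_rate / fmin)
    best_lag, best_corr = lag_lo, float("-inf")
    for lag in range(lag_lo, lag_hi + 1):
        corr = sum(samples[i] * samples[i - lag]
                   for i in range(lag, len(samples)))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag

def is_angry(mean_f0):
    """Anger band taken from the description: 207.3 Hz to 248.9 Hz."""
    return 207.3 <= mean_f0 <= 248.9

# Synthetic voiced frame: a pure 220 Hz tone at an 8 kHz sample rate.
sr = 8000
tone = [math.sin(2 * math.pi * 220.0 * n / sr) for n in range(sr // 4)]
f0 = estimate_f0(tone, sr)
```

On real speech the estimate would be averaged over many voiced frames before applying the threshold, since the text compares the mean fundamental frequency, not a single frame.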
The audio playback module is a small, inexpensive MP3 module that can drive a loudspeaker directly. The module integrates hardware decoding of MP3, WAV, and WMA, supports TF-card drives in software, and supports the FAT16 and FAT32 file systems. Playing a specified track and other playback functions can be performed through simple serial commands, without tedious low-level operations; it is easy to use, stable, and reliable.
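To illustrate the "simple serial commands" mentioned above, here is a purely hypothetical command frame for selecting a track; the byte layout, command code, and checksum are assumptions, since the description does not specify the module's actual protocol.

```python
# Purely illustrative sketch of driving an MP3 module over a serial
# link: a short framed command selects a track to play. The start
# byte, command code, checksum, and end byte are all assumptions.

def play_track_frame(track_no):
    """Build a hypothetical 'play track N' serial frame."""
    if not 1 <= track_no <= 0xFFFF:
        raise ValueError("track number out of range")
    hi, lo = track_no >> 8, track_no & 0xFF
    payload = [0x7E, 0x03, hi, lo]            # start byte, command, track id
    checksum = sum(payload) & 0xFF
    return bytes(payload + [checksum, 0xEF])  # append checksum and end byte

frame = play_track_frame(5)
```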
The intelligent reminder system uses a 16-million-color LED as its light source, and the system displays a different light color for each emotion it detects. The different ambient colors remind the parent to pay attention to their mood, and at the same time can improve the communication atmosphere between parent and child.
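The light reminder can be sketched as a simple emotion-to-color lookup; the specific RGB values below are illustrative, since the description only requires that each emotion gets a different color.

```python
# Sketch of the light-reminder mapping: each detected emotion selects
# an RGB color for the ambient LED. The colors themselves are
# hypothetical; the text only requires that they differ per emotion.

EMOTION_COLORS = {
    "angry":     (255, 0, 0),
    "calm":      (0, 128, 255),
    "sad":       (64, 64, 160),
    "surprised": (255, 200, 0),
    "happy":     (0, 255, 64),
}

def led_color(emotion):
    """Return the RGB triple for an emotion, or off for unknown input."""
    return EMOTION_COLORS.get(emotion, (0, 0, 0))
```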
The PC host-computer display system is programmed in LabVIEW, and communication between the device and the host computer is realized over RS485 or RS232. The parent's emotional state is recorded on the PC at all times and plotted as an emotional-state curve, reminding the parent at any moment to pay attention to their mood and improving the efficiency of parent-child communication.
Emotion prediction system. While the parent's voice collected by the microphone is uploaded to the cloud for speech recognition, a machine-learning API is called at the same time for speech analysis. From information such as the frequency, count, and timing of the parent's emotional changes, it infers the points at which mood swings are likely to be larger, achieving a predictive effect: the parent is reminded in advance to pay attention to their mood and is given time to adjust psychologically.
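The prediction idea of inferring larger mood-swing points from the frequency and timing of past events can be sketched very simply; a real system would use the cloud machine-learning API mentioned above, and the hour-counting heuristic here is an assumption.

```python
from collections import Counter

# Hypothetical sketch of the prediction step: given the hours of day
# at which past abnormal-emotion events occurred, pick the hour with
# the most events as the likely mood-swing point.

def riskiest_hour(event_hours):
    """Return the hour of day with the most recorded abnormal events,
    or None when there is no history yet."""
    if not event_hours:
        return None
    counts = Counter(event_hours)
    return max(counts, key=lambda h: counts[h])

peak = riskiest_hour([19, 8, 19, 21, 19, 8])
```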
The sound played back is the child's own recorded real voice, which increases the intimacy between parent and child and helps the adult calm down quickly.
Specific embodiments of the present invention have been described above. It should be understood that the invention is not limited to the particular embodiments described; those skilled in the art can make various changes or modifications within the scope of the claims without affecting the substance of the invention. In the absence of conflict, the embodiments of the present application and the features within them may be combined with one another arbitrarily.
Claims (10)
1. An emotion soothing method, characterized in that it specifically comprises the following steps:
a. pre-recording everyday dialogue information and extracting emotion categories from the dialogue content;
b. obtaining dialogue content and judging whether the emotion is in an abnormal state; if so, performing a soothing operation, and if not, making no response;
c. judging the category of the abnormal emotion and selecting a soothing mode according to the abnormal emotion category.
2. The emotion soothing method according to claim 1, characterized in that step a specifically comprises: placing a data acquisition device in an area where the user frequently converses, collecting the user's everyday dialogue information, and manually selecting the dialogue segment corresponding to each emotion category to complete the presets.
3. The emotion soothing method according to claim 2, characterized in that an emotion recognition module is preset in the data acquisition device for pre-judging the emotion category of the dialogue content; the matched content is accepted if the judgment is correct, and the dialogue segment boundaries are adjusted manually if the judgment is incorrect.
4. The emotion soothing method according to claim 1, characterized in that the emotion categories comprise one or more of angry, calm, sad, surprised, and happy.
5. The emotion soothing method according to claim 1, characterized in that the recognition features comprise short-time energy, fundamental frequency, mel-frequency cepstral coefficients, and formants.
6. The emotion soothing method according to claim 5, characterized in that when the mean fundamental frequency of the recognized sound is between 207.3 Hz and 248.9 Hz, the state is judged to be anger.
7. The emotion soothing method according to claim 1, characterized in that the soothing mode comprises one or more of a voice reminder, music, and light.
8. An emotion soothing system, characterized in that it comprises:
a data acquisition unit for collecting the user's voice information;
a control unit for receiving the user's voice information and analyzing the matching emotion category;
a soothing unit for, in response to the user's emotion being abnormal, determining a soothing mode according to the user's emotion and soothing the user.
9. The emotion soothing system according to claim 8, characterized in that the control unit comprises an emotion analysis unit that pre-judges and classifies the pre-recorded user voice information.
10. The emotion soothing system according to claim 8, characterized in that the emotion soothing system further comprises a display unit for showing the pre-recorded audio data, and a touch control unit for manually matching emotion categories to segments of the audio data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811423682.1A CN109550133B (en) | 2018-11-26 | 2018-11-26 | Emotion pacifying method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109550133A true CN109550133A (en) | 2019-04-02 |
CN109550133B CN109550133B (en) | 2021-05-11 |
Family
ID=65867702
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811423682.1A Expired - Fee Related CN109550133B (en) | 2018-11-26 | 2018-11-26 | Emotion pacifying method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109550133B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110136743A (en) * | 2019-04-04 | 2019-08-16 | 平安科技(深圳)有限公司 | Monitoring method of health state, device and storage medium based on sound collection |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104202718A (en) * | 2014-08-05 | 2014-12-10 | 百度在线网络技术(北京)有限公司 | Method and device for providing information for user |
CN104288889A (en) * | 2014-08-21 | 2015-01-21 | 惠州Tcl移动通信有限公司 | Emotion regulation method and intelligent terminal |
CN106412312A (en) * | 2016-10-19 | 2017-02-15 | 北京奇虎科技有限公司 | Method and system for automatically awakening camera shooting function of intelligent terminal, and intelligent terminal |
CN106528859A (en) * | 2016-11-30 | 2017-03-22 | 英华达(南京)科技有限公司 | Data pushing system and method |
US20170102783A1 (en) * | 2015-10-08 | 2017-04-13 | Panasonic Intellectual Property Corporation Of America | Method for controlling information display apparatus, and information display apparatus |
CN106658129A (en) * | 2016-12-27 | 2017-05-10 | 上海智臻智能网络科技股份有限公司 | Emotion-based terminal control method and apparatus, and terminal |
CN107066514A (en) * | 2017-01-23 | 2017-08-18 | 深圳亲友科技有限公司 | The Emotion identification method and system of the elderly |
CN107714056A (en) * | 2017-09-06 | 2018-02-23 | 上海斐讯数据通信技术有限公司 | A kind of wearable device of intellectual analysis mood and the method for intellectual analysis mood |
CN108549720A (en) * | 2018-04-24 | 2018-09-18 | 京东方科技集团股份有限公司 | It is a kind of that method, apparatus and equipment, storage medium are pacified based on Emotion identification |
CN108594991A (en) * | 2018-03-28 | 2018-09-28 | 努比亚技术有限公司 | A kind of method, apparatus and computer storage media that help user to adjust mood |
- 2018-11-26: application CN201811423682.1A granted as patent CN109550133B (status: not active, expired due to fee non-payment)
Non-Patent Citations (2)
Title |
---|
Li Deyi, Yu Jian, Chinese Association for Artificial Intelligence: "Introduction to Artificial Intelligence" (CAAI New-Generation Information Technology Series), 31 August 2018 *
Wang Huapeng: "Forensic Speech Examination", 31 January 2017 *
Also Published As
Publication number | Publication date |
---|---|
CN109550133B (en) | 2021-05-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108320733B (en) | Voice data processing method and device, storage medium and electronic equipment | |
Tahon et al. | Towards a small set of robust acoustic features for emotion recognition: challenges | |
US9501743B2 (en) | Method and apparatus for tailoring the output of an intelligent automated assistant to a user | |
US10068573B1 (en) | Approaches for voice-activated audio commands | |
Nygaard et al. | Resolution of lexical ambiguity by emotional tone of voice | |
Dore | The development of speech acts | |
CN109243431A (en) | A kind of processing method, control method, recognition methods and its device and electronic equipment | |
McKechnie et al. | Automated speech analysis tools for children’s speech production: A systematic literature review | |
Schuller et al. | Medium-term speaker states—A review on intoxication, sleepiness and the first challenge | |
WO2017059815A1 (en) | Fast identification method and household intelligent robot | |
CN106228988A (en) | A kind of habits information matching process based on voiceprint and device | |
TW201923736A (en) | Speech recognition method, device and system | |
CN111696559B (en) | Providing emotion management assistance | |
CN110085211A (en) | Speech recognition exchange method, device, computer equipment and storage medium | |
CN109036395A (en) | Personalized speaker control method, system, intelligent sound box and storage medium | |
WO2020140840A1 (en) | Method and apparatus for awakening wearable device | |
JP6915637B2 (en) | Information processing equipment, information processing methods, and programs | |
MacArthur et al. | Beyond poet voice: sampling the (non-) performance styles of 100 American poets | |
CN111179965A (en) | Pet emotion recognition method and system | |
CN109039647A (en) | Terminal and its verbal learning method | |
CN109550133A (en) | Emotion soothing method and system | |
CN110808050A (en) | Voice recognition method and intelligent equipment | |
CN104754110A (en) | Machine voice conversation based emotion release method mobile phone | |
US20220101852A1 (en) | Conversation support device, conversation support system, conversation support method, and storage medium | |
CN113870857A (en) | Voice control scene method and voice control scene system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | | Granted publication date: 20210511; Termination date: 20211126 |