CN107463684A - Voice reply method and apparatus, computer device, and computer-readable storage medium - Google Patents
Voice reply method and apparatus, computer device, and computer-readable storage medium
- Publication number
- CN107463684A (application number CN201710676671.3A)
- Authority
- CN
- China
- Prior art keywords
- user
- terminal
- voice
- voice information
- current
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3329—Natural language query formulation or dialogue systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
- G06F16/3343—Query execution using phonetics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/335—Filtering based on additional data, e.g. user or group profiles
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Mathematical Physics (AREA)
- General Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Acoustics & Sound (AREA)
- Artificial Intelligence (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present invention provides a voice reply method and apparatus, a computer device, and a computer-readable storage medium. The voice reply method includes: acquiring voice information of a user of a terminal through a voice acquisition device of the terminal; acquiring current physiological parameters of the user; performing speech recognition on the voice information according to the current physiological parameters of the user to obtain output information; and outputting the output information. The present invention can reply in a more humanized way according to the current state of the user, achieving friendly interaction between a voice assistant and the user, so that the user feels a greater sense of reality and enjoys a better user experience.
Description
Technical field
The present invention relates to the technical field of intelligent speech, and in particular to a voice reply method and apparatus, a computer device, and a computer-readable storage medium.
Background art
In the prior art, a voice assistant can carry out intelligent interactions with a user, such as intelligent dialogue and real-time question answering, and can help the user solve certain problems. However, current voice assistants reply to the user based only on the voice instruction currently entered by the user or on the current scene, and ignore the user's current state. As a result, the voice assistant still feels like a mere "machine" to the user: its replies come across as cold and stiff, a lively interaction between the voice assistant and the user is not achieved, and the user experience is poor.
Summary of the invention
In view of the foregoing, it is necessary to provide a voice reply method and apparatus, a computer device, and a computer-readable storage medium that reply in a more humanized way according to the current state of the user, achieve friendly interaction between a voice assistant and the user, give the user a greater sense of reality, and bring a better user experience.
A voice reply method, the method comprising:
acquiring voice information of a user of a terminal through a voice acquisition device of the terminal;
acquiring current physiological parameters of the user;
performing speech recognition on the voice information according to the current physiological parameters of the user to obtain output information;
outputting the output information.
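Purely as an illustration of how these four steps fit together, and not as part of the patented disclosure, the flow could be orchestrated as in the following minimal Python sketch; every function name here is invented for the example.

```python
# Minimal sketch of the claimed four-step flow; all names below are hypothetical.
def reply_to_voice(capture_voice, read_physiology, recognize, output):
    voice_info = capture_voice()                      # step 1: voice acquisition device of the terminal
    physiology = read_physiology()                    # step 2: current physiological parameters of the user
    output_info = recognize(voice_info, physiology)   # step 3: recognition conditioned on the physiology
    output(output_info)                               # step 4: output the result
    return output_info

# Example wiring with stand-in callables:
reply_to_voice(
    capture_voice=lambda: "how is the weather",
    read_physiology=lambda: {"heart_rate": 72, "mood": "calm"},
    recognize=lambda text, phys: f"Clear and sunny, a good day to go out! (mood: {phys['mood']})",
    output=print,
)
```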
According to a preferred embodiment of the present invention, performing speech recognition on the voice information according to the current physiological parameters of the user to obtain output information comprises:
acquiring a preset correspondence among voice information, current physiological parameters, and output information;
obtaining, according to the preset correspondence, the output information corresponding to the voice information and matching the current physiological parameters.
According to a preferred embodiment of the present invention, performing speech recognition on the voice information according to the current physiological parameters of the user to obtain output information comprises:
acquiring a history usage record of the terminal;
obtaining, according to the history usage record, output information that matches the current physiological parameters of the user and the voice information.
According to a preferred embodiment of the present invention, outputting the output information comprises:
outputting reply information for the voice information.
According to a preferred embodiment of the present invention, acquiring the voice information of the user of the terminal through the voice acquisition device of the terminal comprises:
calling a microphone of the terminal through a voice assistant to acquire the voice information of the user of the terminal;
and performing speech recognition on the voice information according to the current physiological parameters of the user to obtain output information comprises:
performing, by the voice assistant, speech recognition on the voice information according to the current physiological parameters of the user to obtain the output information.
According to a preferred embodiment of the present invention, acquiring the voice information of the user of the terminal through the voice acquisition device of the terminal and acquiring the current physiological parameters of the user comprises:
while acquiring the voice information of the user of the terminal through the microphone of the terminal, acquiring the current physiological parameters of the user through a sensor of the terminal; or
while acquiring the voice information of the user of the terminal through the microphone of the terminal, acquiring a current heart rate parameter of the user through a heart rate sensor of the terminal.
A voice reply apparatus, the apparatus comprising:
an acquiring unit, configured to acquire voice information of a user of a terminal through a voice acquisition device of the terminal;
the acquiring unit being further configured to acquire current physiological parameters of the user;
a recognition unit, configured to perform speech recognition on the voice information according to the current physiological parameters of the user to obtain output information;
an output unit, configured to output the output information.
According to a preferred embodiment of the present invention, the recognition unit is specifically configured to:
acquire a preset correspondence among voice information, current physiological parameters, and output information;
obtain, according to the preset correspondence, the output information corresponding to the voice information and matching the current physiological parameters.
The recognition unit is further specifically configured to:
acquire a history usage record of the terminal;
obtain, according to the history usage record, output information that matches the current physiological parameters of the user and the voice information.
A computer device, the computer device comprising a processor, wherein the processor is configured to implement the steps of the voice reply method when executing a computer program stored in a memory.
A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the voice reply method.
As can be seen from the above technical solutions, the present invention acquires the voice information of the user of the terminal through the voice acquisition device of the terminal; acquires the current physiological parameters of the user; performs speech recognition on the voice information according to the current physiological parameters of the user to obtain output information; and outputs the output information. With the present invention, a more humanized reply can be given according to the current state of the user, friendly interaction between the voice assistant and the user is achieved, the user feels a greater sense of reality, and a better user experience is provided.
Brief description of the drawings
Fig. 1 is a flowchart of a preferred embodiment of the voice reply method of the present invention;
Fig. 2 is a functional block diagram of a preferred embodiment of the voice reply apparatus of the present invention;
Fig. 3 is a schematic structural diagram of a computer device of a preferred embodiment implementing the voice reply method of the present invention.
Description of main element symbols
Computer device | 1
Memory | 12
Processor | 13
Voice acquisition device | 14
Microphone | 140
Sensor | 15
Heart rate sensor | 150
Camera device | 16
Voice reply apparatus | 11
Acquiring unit | 100
Recognition unit | 101
Output unit | 102
Detailed description of the embodiments
In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
As shown in Fig. 1, it is a flowchart of a preferred embodiment of the voice reply method of the present invention. According to different requirements, the order of the steps in the flowchart may be changed, and some steps may be omitted.
The voice reply method is applied to one or more terminals. The terminal is a device capable of automatically performing numerical computation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like.
The terminal may be any electronic product capable of human-computer interaction with the user, for example a personal computer, a tablet computer, a smartphone, a personal digital assistant (PDA), a game console, an interactive Internet Protocol television (IPTV), a smart wearable device, and the like.
The network in which the terminal is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (VPN), and the like.
The terminal may include a voice acquisition device. The voice acquisition device may be used to collect voice information input by the user of the terminal, and the voice acquisition device may include at least one microphone.
The microphone is an energy conversion device that converts a sound signal into an electrical signal. The microphone can transfer the vibration signal of the sound to the diaphragm of the microphone, which in turn drives the magnet in the microphone to form a changing current; the changing current is then sent to the sound processing circuit in the microphone, thereby realizing enhanced processing of the sound.
The terminal may include a sensor. The sensor can collect the current physiological parameters of the user. The sensor may include a heart rate sensor, a body temperature sensor, and the like.
The heart rate sensor can collect the current heart rate parameter of the user.
The terminal may include a camera device. The camera device is a device that can obtain the current expression information and current limb actions of the user of the terminal.
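Purely to keep the later steps concrete, and not as part of the disclosure itself, the hardware enumerated above could be grouped in a small container like the following sketch; the field names are invented here.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class TerminalHardware:
    """Hypothetical grouping of the components enumerated above (names invented here)."""
    voice_acquisition_device: Any = None   # at least one microphone (reference numeral 140)
    heart_rate_sensor: Any = None          # reference numeral 150
    temperature_sensor: Any = None
    camera_device: Any = None              # captures expression information and limb actions (16)
```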
S10: acquiring voice information of the user of the terminal through the voice acquisition device of the terminal.
In at least one embodiment of the present invention, the voice acquisition device may be used to collect the voice information input by the user of the terminal, and the voice acquisition device may include at least one microphone.
In at least one embodiment of the present invention, the microphone is an energy conversion device that converts a sound signal into an electrical signal. The microphone can transfer the vibration signal of the sound to the diaphragm of the microphone, which in turn drives the magnet in the microphone to form a changing current; the changing current is then sent to the sound processing circuit in the microphone, thereby realizing enhanced processing of the sound.
In at least one embodiment of the present invention, an implementation of acquiring the voice information of the user of the terminal through the voice acquisition device of the terminal includes: calling a microphone of the terminal through a voice assistant to acquire the voice information of the user of the terminal.
In at least one embodiment of the present invention, the voice assistant is a speech recognition system capable of intelligent interactions with the user, such as intelligent dialogue and real-time question answering. The voice assistant can provide the user with help in daily life and with suggestions for entertainment and leisure, and can also carry out intelligent interactions such as intelligent chatting with the user.
In at least one embodiment of the present invention, the voice assistant communicates with the voice acquisition device, and the voice assistant can call the microphone in the voice acquisition device.
Preferably, when the terminal detects that the user of the terminal is inputting voice, the terminal starts the voice assistant; meanwhile, the voice assistant communicates with the voice acquisition device and calls at least one microphone in the voice acquisition device to collect the voice input by the user, thereby obtaining the voice information of the user of the terminal.
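As a rough, non-authoritative sketch of the trigger just described, the assistant could be started when speech is detected and then delegate the actual recording to one microphone of the acquisition device; everything below is invented for illustration.

```python
# Hypothetical trigger logic: once the terminal detects the user speaking, it starts
# the voice assistant, which calls a microphone of the voice acquisition device.
class VoiceAssistant:
    def collect(self, microphone):
        return microphone()              # delegate the recording to the microphone

def on_voice_detected(microphone):
    assistant = VoiceAssistant()         # the terminal starts the voice assistant
    return assistant.collect(microphone)

# Stand-in microphone that "records" a fixed utterance:
print(on_voice_detected(lambda: "where is the nearest drink shop"))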
S11: acquiring the current physiological parameters of the user.
In at least one embodiment of the present invention, the current physiological parameters include, but are not limited to, one or a combination of the following: heart rate, body temperature, respiratory rate, pulse, mood, and the like.
In at least one embodiment of the present invention, the terminal acquires the voice information of the user of the terminal through the voice acquisition device of the terminal and at the same time acquires the current physiological parameters of the user, mainly by acquiring the current physiological parameters of the user through a sensor of the terminal while acquiring the voice information of the user of the terminal through the microphone of the terminal.
In at least one embodiment of the present invention, while the voice information of the user of the terminal is being acquired through the microphone of the terminal, acquiring the current physiological parameters of the user through the sensor of the terminal further includes one or a combination of the following manners:
(1) While acquiring the voice information of the user of the terminal through the microphone of the terminal, acquiring the tone of the user through a sound sensor.
For example, while the terminal is acquiring the voice information of the user of the terminal through the microphone of the terminal, the tone of the voice input by the user may be acquired through a sound sensor. The tone of the voice input by the user includes, but is not limited to, the breath, the sound, and the impression on the listener of the voice acquired by the terminal. The terminal determines the current mood of the user from the tone. Specifically, when the terminal detects that the voice input by the user has a full tone and a high pitch, giving a lively impression, the terminal determines that the current mood of the user is happy.
(2) While acquiring the voice information of the user of the terminal through the microphone of the terminal, acquiring the current heart rate parameter of the user through a heart rate sensor of the terminal.
(3) While acquiring the voice information of the user of the terminal through the microphone of the terminal, acquiring the current pulse parameter of the user through a pulse sensor of the terminal. Since a person's pulse is substantially consistent with the heart rate, the pulse parameter serves a function similar to that of the heart rate parameter.
(4) While acquiring the voice information of the user of the terminal through the microphone of the terminal, acquiring the current respiratory rate of the user through a respiratory rate sensor of the terminal. The respiratory rate sensor can reflect the breathing condition of the user in real time, record the number of breaths of the user per unit time, and at the same time display the current respiratory rate of the user.
(5) While acquiring the voice information of the user of the terminal through the microphone of the terminal, acquiring the current body temperature of the user through a body temperature sensor of the terminal. The body temperature sensor can detect the body temperature of the user in real time and simultaneously display the body temperature of the user.
(6) While acquiring the voice information of the user of the terminal through the microphone of the terminal, acquiring the current mood of the user.
The current mood of the user may be acquired in various ways. For example, the terminal acquires the current expression information of the user and determines the current mood of the user according to the current expression information. Specifically, while acquiring the voice information, the terminal acquires the facial expression features of the user of the terminal through the camera device and analyzes the facial expression features to determine the current mood of the user. More specifically, the terminal acquires a facial picture of the user through the camera device, extracts the facial expression features of the user from the facial picture, inputs the facial expression features into a pre-trained classifier or regressor, and analyzes the prediction result of the classifier or regressor to determine the current mood of the user.
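One way, among many, to realise the "pre-trained classifier" mentioned above is sketched below using scikit-learn as a stand-in; the feature layout, the toy training data, and the labels are invented for illustration and are not taken from the patent.

```python
# Toy stand-in for the pre-trained expression classifier described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented facial-expression feature vectors (e.g. mouth-corner lift, brow distance) and mood labels.
X_train = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
y_train = np.array(["happy", "happy", "angry", "angry"])

emotion_clf = LogisticRegression().fit(X_train, y_train)

def current_emotion(facial_features):
    """Predict the user's current mood from extracted facial expression features."""
    return emotion_clf.predict([facial_features])[0]

print(current_emotion([0.85, 0.15]))  # -> "happy" on this toy data
```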
As another example, the terminal acquires the current limb actions of the user and determines the current mood of the user according to the current limb actions. Specifically, while acquiring the voice information, the terminal acquires the current limb actions of the user of the terminal through the camera device and inputs the current limb actions into a pre-trained classifier or regressor for analysis to determine the current mood of the user.
As yet another example, the terminal acquires both the current expression information and the current limb actions of the user and determines the current mood of the user from their combination. Specifically, while acquiring the voice information, the terminal acquires the current expression information and current limb actions of the user of the terminal through the camera device, and uses a pre-trained classifier or regressor to analyze the combination of the current expression information and the current limb actions to determine the current mood of the user.
It should be noted that, in other embodiments, the current physiological parameters and the manner of acquiring the current physiological parameters are not limited to the above; the present invention imposes no limitation here.
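The "acquire the physiological parameters while the microphone is recording" behaviour described throughout this step could, as one possible sketch only, be modelled with two concurrent workers; the device reads and values below are stand-ins, not part of the disclosure.

```python
# Illustrative concurrent acquisition: the microphone records while sensors are read.
import threading
import time

def record_voice(result):
    time.sleep(0.1)                       # pretend the microphone is recording
    result["voice"] = "hao lei"           # captured utterance (ambiguous, see step S12)

def read_sensors(result):
    result["physiology"] = {"heart_rate": 128, "breath_rate": 30}  # read during recording

result = {}
threads = [threading.Thread(target=record_voice, args=(result,)),
           threading.Thread(target=read_sensors, args=(result,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(result)
```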
S12: performing speech recognition on the voice information according to the current physiological parameters of the user to obtain output information.
In at least one embodiment of the present invention, before speech recognition is performed on the voice information according to the current physiological parameters of the user to obtain the output information, the terminal sets a preset correspondence among voice information, current physiological parameters, and output information.
Preferably, when the current physiological parameters show that the user of the terminal is in a calm state, the terminal sets the output information to be reply information based on the voice information (for example, when the terminal acquires the user's voice input "How is the weather right now?", the terminal sets the output information to be the reply based on that voice information: "Clear and sunny, a good day to go out!").
Alternatively, when the current physiological parameters show that the user of the terminal is in a special state, the terminal sets the output information to a special reply mode. For example, when the terminal acquires the user's current heart rate parameter or current respiratory rate parameter through the voice assistant and determines from that parameter that the user of the terminal is in a post-exercise state, the user's breathing is too fast, the respiratory rate is unstable, and the speech is not clear enough; in this case, the terminal may reply to the user with "No rush - take a rest and tell me again slowly".
It should be noted that the preset correspondence among the voice information, the current physiological parameters, and the output information may be preset by the designer of the terminal before the terminal leaves the factory, or may be customized by the user according to usage habits when the voice assistant is initialized; the present invention imposes no limitation here.
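A minimal table-lookup sketch of such a preset correspondence is given below; the keys and replies are made up, and a real system would of course match far more loosely than exact string equality.

```python
# Illustrative preset correspondence: (voice information, physiological state) -> output information.
PRESET_REPLIES = {
    ("how is the weather", "calm"):       "Clear and sunny, a good day to go out!",
    ("how is the weather", "exercising"): "No rush - catch your breath, then I'll tell you.",
}

def lookup_reply(voice_text, physiological_state):
    return PRESET_REPLIES.get((voice_text, physiological_state),
                              "Sorry, I did not catch that.")

print(lookup_reply("how is the weather", "exercising"))
```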
In at least one embodiment of the present invention, the manner of performing speech recognition on the voice information according to the current physiological parameters of the user to obtain the output information includes: acquiring the preset correspondence among voice information, current physiological parameters, and output information, and obtaining, according to the preset correspondence, the output information corresponding to the voice information and matching the current physiological parameters.
For example, when the terminal detects that the user says "You are so stupid, you big idiot" with a rather angry expression and tone, and the user also makes markedly negative limb actions such as shaking the head while speaking, the terminal can infer that the user is angry. In this case, the voice assistant may respond in an aggrieved, apologetic tone: "I'm sorry! Please give me another chance, master!"
In contrast, when the terminal detects that the user says "You are so stupid, you big idiot" with a smile, and the user also makes markedly positive limb actions such as nodding while speaking, the terminal can infer that the user is not really angry; the voice assistant may then respond in a livelier, more playful tone: "Hey! You're actually no less silly than I am!"
In at least one embodiment of the present invention, the history usage record of the terminal is acquired, and the output information that matches the current physiological parameters of the user and the voice information is obtained according to the history usage record.
For example, the terminal acquires the history usage record of the terminal, and when it detects that within a preset time range (for example, within one hour) the terminal has been running a weight-loss application, a fitness application, or a running-counter application, the terminal determines that the user is in a state of exercise and can output a matching sentence such as "Stop running for a moment and take a breath".
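A small sketch of that history-record check, under the stated assumptions (a one-hour window and invented application names), might look as follows.

```python
# Hypothetical check over the terminal's usage history, mirroring the example above.
import time

FITNESS_APPS = {"weight_loss", "fitness", "running_counter"}

def user_in_motion(usage_records, window_seconds=3600, now=None):
    """True if a fitness-type app was in use within the preset time window."""
    now = now or time.time()
    return any(r["app"] in FITNESS_APPS and now - r["timestamp"] <= window_seconds
               for r in usage_records)

records = [{"app": "running_counter", "timestamp": time.time() - 600}]
if user_in_motion(records):
    print("Stop running for a moment and take a breath.")
```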
In at least one embodiment of the present invention, the voice assistant performs speech recognition on the voice information according to the current physiological parameters of the user to obtain the output information.
For example, when the voice assistant recognizes that a word in the voice input by the user may be either A or its homophone B, the terminal can determine from the context whether the word is A or B, and it can also use the current physiological parameters of the user to help decide whether the word is A or B, and then obtain the corresponding output information. Specifically, when the terminal detects that the voice input by the user may be either "Okay then" or "So tired", and, based on the acquired current physiological parameters of the user, the terminal determines that the user is exercising, the terminal can determine that the voice input by the user is "So tired" and obtain the output information "You must be worn out - take a rest".
S13: outputting the output information.
In at least one embodiment of the present invention, the manner of outputting the output information includes, but is not limited to, one or a combination of the following: text display, speech reply, video reply, and the like; the present invention imposes no limitation here.
In at least one embodiment of the present invention, outputting the output information includes outputting reply information for the voice information.
For example, when the terminal determines that the acquired voice information is "Where is the nearest drink shop?" and the current physiological parameters of the user are in a stable state, the terminal determines that the user is in a calm state, and the terminal outputs the reply information for the voice information: "Turn left at the crossroads ahead and it is the first shop. Shall I open navigation for you?"
In summary, the present invention can acquire the voice information of the user of the terminal through the voice acquisition device of the terminal, acquire the current physiological parameters of the user, perform speech recognition on the voice information according to the current physiological parameters of the user to obtain output information, and output the output information. Therefore, the present invention can give a more humanized reply according to the current state of the user, achieve friendly interaction between the voice assistant and the user, give the user a greater sense of reality, and bring a better user experience.
As shown in Fig. 2, it is a functional block diagram of a preferred embodiment of the voice reply apparatus of the present invention. The voice reply apparatus 11 includes an acquiring unit 100, a recognition unit 101, and an output unit 102. A unit referred to in the present invention is a series of computer program segments that can be executed by the processor 13, can complete a fixed function, and are stored in the memory 12. In this embodiment, the function of each unit is described in detail below.
The acquiring unit 100 acquires the voice information of the user of the terminal through the voice acquisition device of the terminal.
In at least one embodiment of the present invention, the voice acquisition device may be used to collect the voice information input by the user of the terminal, and the voice acquisition device may include at least one microphone; the microphone is the energy conversion device that converts a sound signal into an electrical signal, as described above for step S10.
In at least one embodiment of the present invention, an implementation in which the acquiring unit 100 acquires the voice information of the user of the terminal through the voice acquisition device of the terminal includes: calling a microphone of the terminal through the voice assistant to acquire the voice information of the user of the terminal. The voice assistant is the speech recognition system described above, capable of intelligent dialogue and real-time question answering with the user, providing help in daily life and suggestions for entertainment and leisure, and carrying out intelligent chatting; it communicates with the voice acquisition device and can call the microphone in the voice acquisition device.
Preferably, when the terminal detects that the user of the terminal is inputting voice, the terminal starts the voice assistant; meanwhile, the voice assistant communicates with the voice acquisition device and calls at least one microphone in the voice acquisition device to collect the voice input by the user, so that the acquiring unit 100 obtains the voice information of the user of the terminal.
The acquiring unit 100 acquires the current physiological parameters of the user.
In at least one embodiment of the present invention, the current physiological parameters include, but are not limited to, one or a combination of the following: heart rate, body temperature, respiratory rate, pulse, mood, and the like.
In at least one embodiment of the present invention, the acquiring unit 100 acquires the voice information of the user of the terminal through the voice acquisition device of the terminal and at the same time acquires the current physiological parameters of the user, mainly by acquiring the current physiological parameters of the user through a sensor of the terminal while the voice information of the user of the terminal is being acquired through the microphone of the terminal.
In at least one embodiment of the present invention, while the acquiring unit 100 is acquiring the voice information of the user of the terminal through the microphone of the terminal, the manner of acquiring the current physiological parameters of the user through the sensor of the terminal further includes one or a combination of the manners (1) to (6) described above for step S11: acquiring the tone of the user through a sound sensor and determining the current mood of the user from the tone (for example, a full tone and high pitch giving a lively impression indicates that the user is happy); acquiring the current heart rate parameter of the user through the heart rate sensor of the terminal; acquiring the current pulse parameter of the user through a pulse sensor of the terminal, the pulse being substantially consistent with the heart rate; acquiring the current respiratory rate of the user through a respiratory rate sensor of the terminal, which reflects and displays the user's breathing in real time; acquiring the current body temperature of the user through a body temperature sensor of the terminal, which detects and displays the user's body temperature in real time; and acquiring the current mood of the user.
The acquiring unit 100 may acquire the current mood of the user in various ways, in the same manner as described above for step S11: it may acquire the current expression information of the user through the camera device while acquiring the voice information, extract the facial expression features of the user from a facial picture, input them into a pre-trained classifier or regressor, and analyze the prediction result to determine the current mood; it may acquire the current limb actions of the user through the camera device and analyze them with a pre-trained classifier or regressor; or it may acquire both the current expression information and the current limb actions of the user and analyze their combination with a pre-trained classifier or regressor to determine the current mood of the user.
It should be noted that, in other embodiments, the current physiological parameters and the manner of acquiring them are not limited to the above; the present invention imposes no limitation here.
The recognition unit 101 performs speech recognition on the voice information according to the current physiological parameters of the user to obtain output information.
In at least one embodiment of the present invention, before the recognition unit 101 performs speech recognition on the voice information according to the current physiological parameters of the user to obtain the output information, the terminal sets the preset correspondence among voice information, current physiological parameters, and output information.
Preferably, when the current physiological parameters show that the user of the terminal is in a calm state, the terminal sets the output information to be reply information based on the voice information (for example, for the voice input "How is the weather right now?", the reply based on that voice information is "Clear and sunny, a good day to go out!"). Alternatively, when the current physiological parameters show that the user of the terminal is in a special state, the terminal sets the output information to a special reply mode; for example, when the current heart rate parameter or current respiratory rate parameter acquired through the voice assistant indicates that the user of the terminal is in a post-exercise state, so that the user is breathing too fast, the respiratory rate is unstable, and the speech is not clear enough, the terminal may reply "No rush - take a rest and tell me again slowly".
It should be noted that the preset correspondence among the voice information, the current physiological parameters, and the output information may be preset by the designer of the terminal before the terminal leaves the factory, or may be customized by the user according to usage habits when the voice assistant is initialized; the present invention imposes no limitation here.
In at least one embodiment of the present invention, the manner in which the recognition unit 101 performs speech recognition on the voice information according to the current physiological parameters of the user to obtain the output information includes: acquiring the preset correspondence among voice information, current physiological parameters, and output information, and obtaining, according to the preset correspondence, the output information corresponding to the voice information and matching the current physiological parameters.
The same examples apply as described above for step S12: when the user says "You are so stupid, you big idiot" with an angry expression and tone and negative limb actions such as shaking the head, the terminal infers that the user is angry, and the voice assistant responds in an aggrieved, apologetic tone ("I'm sorry! Please give me another chance, master!"); when the user says the same words with a smile and positive limb actions such as nodding, the terminal infers that the user is not really angry, and the voice assistant responds in a livelier, more playful tone ("Hey! You're actually no less silly than I am!").
In at least one embodiment of the present invention, the acquiring unit 100 acquires the history usage record of the terminal, and output information that matches the current physiological parameters of the user and the voice information is obtained according to the history usage record. For example, when the acquiring unit 100 acquires the history usage record of the terminal and detects that within a preset time range (for example, within one hour) the terminal has been running a weight-loss application, a fitness application, or a running-counter application, the terminal determines that the user is in a state of exercise and can output a matching sentence such as "Stop running for a moment and take a breath".
In at least one embodiment of the present invention, the recognition unit 101 performs, through the voice assistant, speech recognition on the voice information according to the current physiological parameters of the user to obtain the output information. For example, when the voice assistant recognizes that a word in the voice input by the user may be either A or its homophone B, the terminal can determine from the context whether the word is A or B, or use the current physiological parameters of the user to help decide, and then obtain the corresponding output information. Specifically, when the recognition unit 101 detects that the voice input by the user may be either "Okay then" or "So tired", and determines from the acquired current physiological parameters that the user is exercising, the recognition unit 101 can determine that the voice input by the user is "So tired" and obtain the output information "You must be worn out - take a rest".
The output unit 102 outputs the output information.
In at least one embodiment of the present invention, the manner in which the output unit 102 outputs the output information includes, but is not limited to, one or a combination of the following: text display, speech reply, video reply, and the like; the present invention imposes no limitation here.
In at least one embodiment of the present invention, the output unit 102 outputting the output information includes outputting reply information for the voice information. For example, when the terminal determines that the acquired voice information is "Where is the nearest drink shop?" and the current physiological parameters of the user are in a stable state, the terminal determines that the user is in a calm state, and the output unit 102 outputs the reply information for the voice information: "Turn left at the crossroads ahead and it is the first shop. Shall I open navigation for you?"
In summary, the present invention can acquire the voice information of the user of the terminal through the voice acquisition device of the terminal, acquire the current physiological parameters of the user, perform speech recognition on the voice information according to the current physiological parameters of the user to obtain output information, and output the output information. Therefore, the present invention can give a more humanized reply according to the current state of the user, achieve friendly interaction between the voice assistant and the user, give the user a greater sense of reality, and bring a better user experience.
The above integrated unit implemented in the form of a software function module may be stored in a computer-readable storage medium. The above software function module is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform part of the steps of the method described in each embodiment of the present invention.
As shown in Fig. 3, it is a schematic structural diagram of a terminal of a preferred embodiment implementing the voice reply method of the present invention. The computer device 1 includes a memory 12, a processor 13, and a computer program stored in the memory 12 and executable on the processor 13, for example a program implementing steps S10, S11, S12, and S13.
Alternatively, when executing the computer program, the processor 13 implements the functions of the modules/units in the above apparatus embodiments, for example: acquiring the voice information of the user of the terminal through the voice acquisition device of the terminal; acquiring the current physiological parameters of the user; performing speech recognition on the voice information according to the current physiological parameters of the user to obtain output information; and outputting the output information.
Exemplarily, the computer program may be divided into one or more modules/units, which are stored in the memory 12 and executed by the processor 13 to complete the present invention. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program in the computer device 1. For example, the computer program may be divided into the acquiring unit 100, the recognition unit 101, and the output unit 102, whose specific functions are as follows:
the acquiring unit 100, configured to acquire the voice information of the user of the terminal through the voice acquisition device of the terminal;
the acquiring unit 100, further configured to acquire the current physiological parameters of the user;
the recognition unit 101, configured to perform speech recognition on the voice information according to the current physiological parameters of the user to obtain output information;
the output unit 102, configured to output the output information.
The computer device 1 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The computer device 1 may include, but is not limited to, the processor 13 and the memory 12.
A person skilled in the art will understand that the schematic diagram is only an example of the computer device 1 and does not constitute a limitation on it; the computer device 1 may include more or fewer components than illustrated, combine certain components, or have different components; for example, it may further include an input/output device, a network access device, a bus, and the like.
The processor 13 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor 13 is the control center of the computer device 1 and connects the various parts of the entire computer device 1 using various interfaces and lines.
The processor 13 is the operation core (Core) and control unit (Control Unit) of the computer device 1. The processor 13 may execute the operating system of the computer device 1 as well as the various installed application programs, program codes, and the like.
The memory 12 may be used to store the computer program and/or modules. The processor 13 implements the various functions of the computer device 1 by running or executing the computer programs and/or modules stored in the memory 12 and calling the data stored in the memory 12. The memory 12 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function (such as a sound playing function or an image playing function), and the data storage area may store data created according to the use of the device (such as audio data or a phone book). In addition, the memory 12 may include a high-speed random access memory, and may also include a non-volatile memory, such as a hard disk, an internal memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another solid-state storage device.
The memory 12 is used to store the program of the voice reply method and various data, and to realize high-speed, automatic access to programs or data during the operation of the computer device 1. The memory 12 may be an external memory and/or an internal memory of the computer device 1. Further, the memory 12 may be a circuit with a storage function that has no physical form within an integrated circuit, such as a RAM (random-access memory) or a FIFO (first in, first out) buffer; alternatively, the memory 12 may be a memory with a physical form, such as a memory stick or a TF card (Trans-flash Card).
If the integrated module/unit of the computer device 1 is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the processes in the methods of the above embodiments by instructing the relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of each of the above method embodiments may be implemented. The computer program includes computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunication signals.
The computer device 1 is a device capable of automatically performing numerical computation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like.
The computer device 1 also includes, but is not limited to, any electronic product capable of human-computer interaction with the user by means of a keyboard, a mouse, a remote control, a touch pad, a voice control device, or the like, for example a personal computer, a tablet computer, a smartphone, a personal digital assistant (PDA), a game console, an Internet Protocol television (IPTV), a smart wearable device, and the like.
The network in which the computer device 1 is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (VPN), and the like.
The processor 13 may execute the operating system of the computer device 1 and the various installed application programs, program codes, and the like, for example the voice reply apparatus 11.
The voice reply apparatus 11 acquires the voice information of the user of the terminal through the voice acquisition device of the terminal; acquires the current physiological parameters of the user; performs speech recognition on the voice information according to the current physiological parameters of the user to obtain output information; and outputs the output information. The present invention can reply in a more humanized way according to the current state of the user, achieving friendly interaction between the voice assistant and the user, so that the user feels a greater sense of reality and enjoys a better user experience.
With reference to Fig. 1, the memory 12 in the computer device 1 stores a plurality of instructions to implement a voice reply method, and the processor 13 can execute the plurality of instructions to: acquire the voice information of the user of the terminal through the voice acquisition device of the terminal; acquire the current physiological parameters of the user; perform speech recognition on the voice information according to the current physiological parameters of the user to obtain output information; and output the output information.
According to a preferred embodiment of the present invention, the processor 13 further executes instructions including:
acquiring the preset correspondence among voice information, current physiological parameters, and output information;
obtaining, according to the preset correspondence, the output information corresponding to the voice information and matching the current physiological parameters.
According to a preferred embodiment of the present invention, the processor 13 further executes instructions including:
acquiring the history usage record of the terminal;
obtaining, according to the history usage record, output information that matches the current physiological parameters of the user and the voice information.
According to a preferred embodiment of the present invention, the processor 13 further executes instructions including:
outputting reply information for the voice information.
According to a preferred embodiment of the present invention, the processor 13 further executes instructions including:
calling a microphone of the terminal through the voice assistant to acquire the voice information of the user of the terminal.
According to a preferred embodiment of the present invention, the processor 13 further executes instructions including:
performing, by the voice assistant, speech recognition on the voice information according to the current physiological parameters of the user to obtain the output information.
According to a preferred embodiment of the present invention, the processor 13 further executes instructions including:
while acquiring the voice information of the user of the terminal through the microphone of the terminal, acquiring the current physiological parameters of the user through a sensor of the terminal; or
while acquiring the voice information of the user of the terminal through the microphone of the terminal, acquiring the current heart rate parameter of the user through a heart rate sensor of the terminal.
Specifically, for the concrete manner in which the processor 13 implements the above instructions, reference may be made to the description of the relevant steps in the embodiment corresponding to Fig. 1, which is not repeated here.
In the several embodiments provided by the present invention, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of the modules is only a division by logical function, and other division manners are possible in actual implementation.
The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical units; that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software function module.
It is obvious to a person skilled in the art that the invention is not restricted to the details of above-mentioned one exemplary embodiment, Er Qie
In the case of without departing substantially from spirit or essential attributes of the invention, the present invention can be realized in other specific forms.
Therefore, no matter from the point of view of which point, embodiment all should be regarded as exemplary, and is nonrestrictive, sheet
The scope of invention limits by appended claims rather than described above, it is intended that will fall equivalency in claim
All changes in implication and scope are included in the present invention.Any attached associated diagram mark in claim should not be considered as limit
The involved claim of system.
Furthermore, it is to be understood that the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Multiple units or devices recited in a system claim may also be implemented by a single unit or device through software or hardware. Words such as "second" are used to denote names and do not indicate any particular order.
Finally, it should be noted that the above embodiments are merely illustrative of, and not restrictive of, the technical solutions of the present invention; although the present invention has been described in detail with reference to preferred embodiments, those skilled in the art should understand that the technical solutions of the present invention may be modified or equivalently substituted without departing from the spirit and scope of the technical solutions of the present invention.
Claims (10)
1. A voice replying method, characterized in that the method comprises:
acquiring the voice information of a user of a terminal through a voice acquisition device of the terminal;
acquiring the current physiological parameter of the user;
performing speech recognition on the voice information according to the current physiological parameter of the user to obtain output information; and
outputting the output information.
2. The voice replying method according to claim 1, characterized in that the performing speech recognition on the voice information according to the current physiological parameter of the user to obtain output information comprises:
acquiring a preset correspondence among voice information, current physiological parameters and output information; and
acquiring, according to the preset correspondence, the output information corresponding to the voice information that matches the current physiological parameter.
3. The voice replying method according to claim 1, characterized in that the performing speech recognition on the voice information according to the current physiological parameter of the user to obtain output information comprises:
acquiring a history usage record of the terminal; and
acquiring, according to the history usage record, output information that matches the current physiological parameter of the user and the voice information.
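The claims do not prescribe any particular data structure; purely as a hedged illustration, claims 2 and 3 can be read as two reply-selection strategies, sketched below in Python. Using a recognized intent as a stand-in for the voice information, a dictionary keyed by intent and a coarse physiological state, and a history record held as a list of dicts are all assumptions made only for this example.

```python
# Hypothetical sketch of the two selection strategies in claims 2 and 3.

PRESET_CORRESPONDENCE = {
    # (intent, physiological state) -> output information (claim 2)
    ("play_music", "calm"):    "Playing your favourite playlist.",
    ("play_music", "excited"): "You seem excited - how about something soothing?",
}


def classify_state(heart_rate):
    """Map the current physiological parameter to a coarse state label."""
    return "excited" if heart_rate > 100 else "calm"


def output_from_preset(intent, heart_rate):
    """Claim 2: look up the preset correspondence of voice information,
    current physiological parameter and output information."""
    return PRESET_CORRESPONDENCE.get((intent, classify_state(heart_rate)))


def output_from_history(intent, heart_rate, history):
    """Claim 3: choose the past reply whose recorded intent and heart rate
    best match the current voice information and physiological parameter."""
    candidates = [entry for entry in history if entry["intent"] == intent]
    if not candidates:
        return None
    best = min(candidates, key=lambda entry: abs(entry["heart_rate"] - heart_rate))
    return best["reply"]
```

For example, `output_from_history("play_music", 110, history)` would return the reply stored with the history entry whose recorded heart rate is closest to 110; either strategy could serve as the `choose_reply` step of the earlier flow sketch.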
4. The voice replying method according to claim 1, characterized in that the outputting the output information comprises:
outputting reply information for the voice information.
5. The voice replying method according to claim 1, characterized in that the acquiring the voice information of the user of the terminal through the voice acquisition device of the terminal comprises:
invoking, by a voice assistant, the microphone of the terminal to acquire the voice information of the user of the terminal;
and the performing speech recognition on the voice information according to the current physiological parameter of the user to obtain output information comprises:
performing, by the voice assistant, speech recognition on the voice information according to the current physiological parameter of the user to obtain output information.
6. The voice replying method according to any one of claims 1-5, characterized in that the acquiring the voice information of the user of the terminal through the voice acquisition device of the terminal and the acquiring the current physiological parameter of the user comprise:
while acquiring the voice information of the user of the terminal through the microphone of the terminal, acquiring the current physiological parameter of the user through a sensor of the terminal; or
while acquiring the voice information of the user of the terminal through the microphone of the terminal, acquiring the current heart rate parameter of the user through a heart rate sensor of the terminal.
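Claim 6 only requires that the physiological parameter be obtained while the voice information is being acquired; one loose way to overlap the two reads is sketched below. The sensor objects are the same hypothetical placeholders used in the earlier sketch, and the use of a background thread is an assumption, not something the claim specifies.

```python
# Hypothetical sketch of claim 6: sample the heart rate while the microphone records.
import threading


def acquire_concurrently(microphone, heart_rate_sensor):
    readings = {}

    def sample_heart_rate():
        readings["heart_rate"] = heart_rate_sensor.read()

    sampler = threading.Thread(target=sample_heart_rate)
    sampler.start()                          # heart rate read runs in parallel...
    readings["audio"] = microphone.record()  # ...with the voice recording
    sampler.join()
    return readings["audio"], readings["heart_rate"]
```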
7. A voice replying device, characterized in that the device comprises:
an acquiring unit, configured to acquire the voice information of a user of a terminal through a voice acquisition device of the terminal;
the acquiring unit being further configured to acquire the current physiological parameter of the user;
a recognition unit, configured to perform speech recognition on the voice information according to the current physiological parameter of the user to obtain output information; and
an output unit, configured to output the output information.
8. The voice replying device according to claim 7, characterized in that the recognition unit is specifically configured to:
acquire a preset correspondence among voice information, current physiological parameters and output information; and
acquire, according to the preset correspondence, the output information corresponding to the voice information that matches the current physiological parameter;
the recognition unit being further specifically configured to:
acquire a history usage record of the terminal; and
acquire, according to the history usage record, output information that matches the current physiological parameter of the user and the voice information.
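Claims 7 and 8 describe the device in terms of functional units; as a non-authoritative sketch only, those units can be modelled as small cooperating classes. The constructor arguments are the same hypothetical interfaces assumed in the earlier sketches, and nothing here reflects the actual structure of the disclosed device.

```python
# Hypothetical sketch of the device of claims 7-8 as three cooperating units.

class AcquiringUnit:
    def __init__(self, microphone, heart_rate_sensor):
        self.microphone = microphone
        self.heart_rate_sensor = heart_rate_sensor

    def acquire(self):
        # Voice information plus the current physiological parameter.
        return self.microphone.record(), self.heart_rate_sensor.read()


class RecognitionUnit:
    def __init__(self, recognize_speech, choose_reply):
        self.recognize_speech = recognize_speech
        self.choose_reply = choose_reply  # e.g. a preset-correspondence or history lookup

    def recognize(self, audio, heart_rate):
        return self.choose_reply(self.recognize_speech(audio), heart_rate)


class OutputUnit:
    def __init__(self, synthesize_speech):
        self.synthesize_speech = synthesize_speech

    def output(self, information):
        self.synthesize_speech(information)


class VoiceReplyingDevice:
    def __init__(self, acquiring_unit, recognition_unit, output_unit):
        self.acquiring_unit = acquiring_unit
        self.recognition_unit = recognition_unit
        self.output_unit = output_unit

    def run_once(self):
        audio, heart_rate = self.acquiring_unit.acquire()
        self.output_unit.output(self.recognition_unit.recognize(audio, heart_rate))
```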
9. A computer device, characterized in that the computer device comprises a processor, and the processor, when executing a computer program stored in a memory, implements the steps of the voice replying method according to any one of claims 1-6.
10. A computer-readable recording medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the voice replying method according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710676671.3A CN107463684A (en) | 2017-08-09 | 2017-08-09 | Voice replying method and device, computer installation and computer-readable recording medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710676671.3A CN107463684A (en) | 2017-08-09 | 2017-08-09 | Voice replying method and device, computer installation and computer-readable recording medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107463684A (en) | 2017-12-12 |
Family
ID=60548811
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710676671.3A Withdrawn CN107463684A (en) | 2017-08-09 | 2017-08-09 | Voice replying method and device, computer installation and computer-readable recording medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107463684A (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105374366A (en) * | 2015-10-09 | 2016-03-02 | 广东小天才科技有限公司 | Method and system for recognizing semantics of wearable device |
CN107393529A (en) * | 2017-07-13 | 2017-11-24 | 珠海市魅族科技有限公司 | Audio recognition method, device, terminal and computer-readable recording medium |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110275691A (en) * | 2018-03-15 | 2019-09-24 | 阿拉的(深圳)人工智能有限公司 | Automatic reply method, device, terminal and the storage medium that intelligent sound wakes up |
CN108984229A (en) * | 2018-07-24 | 2018-12-11 | 广东小天才科技有限公司 | Application program starting control method and family education equipment |
CN108984229B (en) * | 2018-07-24 | 2021-11-26 | 广东小天才科技有限公司 | Application program starting control method and family education equipment |
CN111192583A (en) * | 2018-11-14 | 2020-05-22 | 本田技研工业株式会社 | Control device, agent device, and computer-readable storage medium |
CN111190480A (en) * | 2018-11-14 | 2020-05-22 | 本田技研工业株式会社 | Control device, agent device, and computer-readable storage medium |
CN111192583B (en) * | 2018-11-14 | 2023-10-03 | 本田技研工业株式会社 | Control device, agent device, and computer-readable storage medium |
CN113544769A (en) * | 2019-04-10 | 2021-10-22 | 深圳迈瑞生物医疗电子股份有限公司 | Recording method of clinical events, medical device and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107463684A (en) | Voice replying method and device, computer installation and computer-readable recording medium | |
CN110688911B (en) | Video processing method, device, system, terminal equipment and storage medium | |
WO2020135194A1 (en) | Emotion engine technology-based voice interaction method, smart terminal, and storage medium | |
JP6755304B2 (en) | Information processing device | |
CN110534099A (en) | Voice wakes up processing method, device, storage medium and electronic equipment | |
CN110070879A (en) | A method of intelligent expression and phonoreception game are made based on change of voice technology | |
CN108922525B (en) | Voice processing method, device, storage medium and electronic equipment | |
CN110399837A (en) | User emotion recognition methods, device and computer readable storage medium | |
CN109145145A (en) | A kind of data-updating method, client and electronic equipment | |
CN110110169A (en) | Man-machine interaction method and human-computer interaction device | |
CN107452400A (en) | Voice broadcast method and device, computer installation and computer-readable recording medium | |
CN107393529A (en) | Audio recognition method, device, terminal and computer-readable recording medium | |
CN111696559B (en) | Providing emotion management assistance | |
WO2020253128A1 (en) | Voice recognition-based communication service method, apparatus, computer device, and storage medium | |
JP6391386B2 (en) | Server, server control method, and server control program | |
JP2024525119A (en) | System and method for automatic generation of interactive synchronized discrete avatars in real time | |
JP6860010B2 (en) | Information processing systems, information processing methods, and information processing programs | |
CN104036776A (en) | Speech emotion identification method applied to mobile terminal | |
US20150324352A1 (en) | Systems and methods for dynamically collecting and evaluating potential imprecise characteristics for creating precise characteristics | |
CN114127849A (en) | Speech emotion recognition method and device | |
CN110442867B (en) | Image processing method, device, terminal and computer storage medium | |
US20200176019A1 (en) | Method and system for recognizing emotion during call and utilizing recognized emotion | |
CN106774845A (en) | A kind of intelligent interactive method, device and terminal device | |
CN111291151A (en) | Interaction method and device and computer equipment | |
US20230049015A1 (en) | Selecting and Reporting Objects Based on Events |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WW01 | Invention patent application withdrawn after publication | |
Application publication date: 20171212 |