CN109741746A - Human-like robot voice interaction algorithm, emotion communication algorithm, and robot - Google Patents

Human-like robot voice interaction algorithm, emotion communication algorithm, and robot

Info

Publication number
CN109741746A
CN109741746A
Authority
CN
China
Prior art keywords
robot
user
voice
algorithm
personalizes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910096249.XA
Other languages
Chinese (zh)
Inventor
张峰
吴义坚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANGHAI YUANQU INFORMATION TECHNOLOGY Co Ltd
Original Assignee
SHANGHAI YUANQU INFORMATION TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2019-01-31
Filing date: 2019-01-31
Publication date: 2019-05-10
Application filed by SHANGHAI YUANQU INFORMATION TECHNOLOGY Co Ltd
Priority to CN201910096249.XA
Publication of CN109741746A
Legal status: Pending


Abstract

The invention discloses a human-like robot voice interaction algorithm, an emotion communication algorithm, and a robot, solving the problem that existing robot interaction is not natural and fluent enough. The robot of the invention has a continuous-speech listening-and-answering mode. In this mode the user can talk to the robot or chat with other people; the robot judges by algorithm whether the user is speaking to it before answering. Using a localization algorithm and a microphone-array algorithm, it captures only speech from the user's direction, reducing the influence of ambient noise so that it is not disturbed by noise. The emotion computation method used for the robot's answers makes the robot look at the user and, according to the computed emotion, give answers with emotion and human-like expressions, much as two people in conversation look at each other and show corresponding expressions and moods. Together these form a complete set of highly human-like human-machine voice interaction techniques that make interaction between a person and the robot as natural as interaction between people, more intelligent, and easy to apply.

Description

Human-like robot voice interaction algorithm, emotion communication algorithm, and robot
Technical field
The invention belongs to the field of intelligent robots, and in particular relates to a human-like robot voice interaction algorithm, an emotion communication algorithm, and a robot.
Background art
There are many artificial-intelligence robots with voice interaction on the market, but their interaction is not as natural and fluent as interaction between people. For example, to send a voice command to some robots the user must, much as in WeChat, press and hold a voice key while speaking and release it after finishing. As another example, some robots require a wake word (for example Tmall Genie) before every voice command, so the wake word must be used repeatedly for multiple commands and commands cannot be issued continuously, which makes interaction cumbersome.
In addition, when there is ambient noise, especially speech noise, the success rate of voice interaction drops.
Summary of the invention
The technical problem to be solved by the present invention is to provide a human-like robot voice interaction algorithm, an emotion communication algorithm, and a robot, solving the problem that existing robot interaction is not natural and fluent enough.
The present invention adopts the following technical scheme to solve the above technical problem:
Human-like robot voice interaction algorithm: the robot has a voice wake-up algorithm based on an embedded chip and a continuous speech recognition system based on large-scale computation. After the robot is powered on:
First, the voice wake-up algorithm on the embedded chip is started;
Second, after the user has used the wake word, the robot uses a sound source localization algorithm to determine the direction and distance of the user, turns its head towards the user, and starts face recognition and tracking as well as the continuous speech recognition system;
Third, the robot recognizes the user's subsequent voice commands and the user's movement, responds to the voice commands, and always keeps its head facing the user;
Then, after the continuous speech recognition system has started, the robot enters the continuous-speech listening-and-answering mode, recognizes speech with an effective-speech detection algorithm, and interacts with the user;
Finally, if no user speech is detected within a preset time, the robot enters sleep mode.
The method for recognizing the user's subsequent voice commands is as follows:
The robot checks the time at which a subsequent voice command was received. If a user voice command V was received in the interval between the user saying the wake word and the start of the continuous speech recognition system, then, once the continuous speech recognition system has started, the robot first processes voice command V and then processes the voice commands issued after the system started.
After the robot's continuous-speech listening-and-answering mode has started, the user can issue voice commands directly; the robot automatically recognizes the user's voice commands within the complex voice command system and responds.
The effective-speech detection algorithm comprises the following steps:
Step 1: the voice wake-up algorithm judges whether the voice command received by the robot is the wake word; if so, the next utterance is by default regarded as speech addressed to the robot; otherwise, step 2 is executed;
Step 2: the robot recognizes the text of the user's utterance and generates a text sequence A1, A2, A3, ..., An, where An is a Chinese character, a pinyin unit, or a foreign-language token;
Step 3: by calculating whether the multivariate probability P(A1, A2, A3, ..., An) reaches a threshold, the robot detects whether the text sequence A1, A2, A3, ..., An is effective speech; if it is effective speech, step 4 is executed; otherwise the robot does not respond;
Step 4: using the semantic processing module and the text of the user's utterance, the robot judges whether the user is speaking to it; if so, it gives a corresponding answer; otherwise it does not respond.
The robot uses a microphone array to suppress interference from ambient noise, and applies a hybrid voice-and-image algorithm to process the user's command information, as follows:
Step a: the robot recognizes the user's voice, obtains the user's direction A with the sound source localization algorithm, turns its head to face direction A, starts the camera, and performs face recognition and tracking;
Step b: the robot judges in real time from the incoming sound whether the user is speaking; if the user speaks, it obtains the direction of the current utterance, records the newest voice direction as direction B, and records the previous voice direction as direction A;
Step c: the robot continuously performs face recognition and tracking; the newest face-tracking direction is direction C;
Step d: the robot judges the direction of the user: if face tracking of direction C has not been interrupted, the user's direction is taken to be direction C; if face tracking has been interrupted, the user's direction is taken to be direction B;
Step e: the robot detects and judges the user's direction in real time and checks whether its head is facing the user; if not, it turns its head towards the detected direction;
Step f: steps b to e are repeated until the robot enters standby sleep mode.
Human-like robot emotion communication algorithm, comprising the following steps:
Step A: establish a user attribute table containing binary (yes/no) categories and level categories, actively or passively collect the user list information, and store it;
Step B: receive the user's voice information, compare the content of the received voice information with the user attribute table, and judge whether there is matching content; if so, execute step C; otherwise, give an ordinary response;
Step C: analyse the voice information issued by the user:
if it is voice information of a binary-category attribute, issue an answer voice with emotion according to the likes and dislikes in the user list information;
if it is voice information of a level-category attribute, judge the degree of difference between the user's voice information and the user attribute, process the user's voice information according to the degree of difference, and give a corresponding answer;
if the user's voice information contains both binary-category and level-category attribute information, execute step D:
Step D: judge whether the difference value of the level-category attribute exceeds a threshold; if it does, process with the level-category method; otherwise, process with the binary-category method.
The method of processing the user's voice information according to the degree of difference in step C is as follows:
Calculate the difference between the obtained level-category attribute value and the attribute threshold; if the difference is positive, the robot answers in a happy voice; if the difference is negative, the robot answers in an encouraging voice.
A highly human-like voice interaction robot with emotion communication comprises a robot body and a control system. An image recognition device, a speech recognition device, and a display device are arranged on the robot body, and the head of the robot body can rotate freely relative to the body. The robot control system comprises a voice interaction module, a speech processing module, an image processing module, and a robot control module. The voice interaction module receives the user's voice information and issues corresponding responses to the user; the speech processing module processes the received voice information using the human-like robot voice interaction algorithm described above; the image processing module processes the received image information; the robot control module controls the head and body motion of the robot according to the processed voice and image information and displays the corresponding information.
The speech recognition device comprises a microphone array.
The display device is a display screen mounted on the robot's head; according to the user's voice information and identity information, a humanized expression is shown on the display screen.
Compared with the prior art, the invention has the following advantages:
1. The robot of the invention has a continuous-speech listening-and-answering mode. In this mode the user can talk to the robot, chat with other people, or do other things. The robot judges by algorithm whether the user is speaking to it before answering. Using the localization algorithm and the microphone array algorithm, it captures only speech from the user's direction, reducing the influence of ambient noise and avoiding noise interference.
2. The hybrid voice-and-image algorithm calculates the direction of the user, so that the robot can align its head, perform directional speech recognition with the microphone array, and judge whether the user is talking to it. The robot responds only when it is being addressed, much as in person-to-person conversation: the other party's name does not have to precede every sentence, and the conversation can be interrupted and resumed.
3. The emotion computation method used for the robot's answers makes the robot look at the user and, according to the computed emotion, give answers with emotion and human-like expressions, much as two people in conversation look at each other and show corresponding expressions and moods.
4. Together these form a complete set of highly human-like human-machine voice interaction techniques that make interaction between a person and the robot as natural as interaction between people, more intelligent, and easy to apply.
Specific embodiments
The structure and working process of the invention are further described below.
Human-like robot voice interaction algorithm: the robot has a voice wake-up algorithm based on an embedded chip and a continuous speech recognition system based on large-scale computation. After the robot is powered on:
First, the voice wake-up algorithm on the embedded chip is started. This algorithm runs continuously and monitors whether the user has used the wake word.
Second, after the user has used the wake word, the robot uses a sound source localization algorithm to determine the direction and distance of the user, turns its head towards the user, and starts face recognition and tracking as well as the continuous speech recognition system;
Third, the robot recognizes the user's subsequent voice commands and the user's movement, responds to the voice commands, and always keeps its head facing the user;
Then, after the continuous speech recognition system has started, the robot enters the continuous-speech listening-and-answering mode, recognizes speech with an effective-speech detection algorithm, and interacts with the user;
Finally, if no user speech is detected within a preset time, the robot enters sleep mode.
Specific embodiment one
The robot technology based on highly human-like human-machine voice interaction realizes voice interaction through the following flow:
1. The robot is powered on and starts the voice wake-up algorithm in the embedded chip; this algorithm runs continuously and monitors whether the user has used the wake word.
2. After the user has used the wake word, the robot first applies the sound source localization algorithm to the user's voice to determine the direction and distance of the user, turns its head towards the user according to that direction, starts the camera, and starts face recognition and tracking.
Meanwhile, the large-scale continuous speech recognition system starts and recognizes the user's subsequent voice commands. If the user produced additional speech V after saying the wake word but before the large-scale continuous speech recognition system started, then once the system has started it processes speech V with priority. In this way the user can say the wake word, wait for the machine to respond, and then give further voice commands, or can say the wake word and the voice command in one breath, which is more convenient and faster.
If the user moves, the face recognition and tracking algorithm keeps monitoring the user's position so that the robot's head stays aimed at the user.
3. Once the large-scale continuous speech recognition system has started, the robot enters the continuous-speech listening-and-answering mode. In this mode the user does not need to wake the robot with the wake word before each voice command and can simply speak; at the same time the user can talk to the robot, chat with other people, or do other things. The robot uses the effective-speech detection algorithm to judge whether the user is speaking to it. If the user has not spoken to the robot for N minutes, the large-scale continuous speech recognition system stops in order to save computation; to use voice commands again the user must wake the robot with the wake word. (A state-machine sketch of this flow is given below.)
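The following Python sketch illustrates one way the wake / continuous-listening / sleep flow described above could be organized as a simple state machine. It is only an illustration: the robot driver object and its methods (detect_wake_word, localize_speaker, start_continuous_asr, is_addressed_to_robot, and so on) are assumed names rather than anything specified in the patent, and the timeout value is arbitrary since the text only speaks of "N minutes".

```python
import time

SLEEP, LISTENING = "sleep", "listening"
N_MINUTES = 5  # assumed value; the patent only says "N minutes"

class InteractionLoop:
    """Minimal state machine for the wake-word / continuous-listening flow."""

    def __init__(self, robot):
        self.robot = robot              # hypothetical robot driver object
        self.state = SLEEP
        self.last_addressed = time.time()

    def step(self, audio_frame):
        if self.state == SLEEP:
            # The embedded wake-word algorithm runs continuously while asleep.
            if self.robot.detect_wake_word(audio_frame):
                direction, distance = self.robot.localize_speaker(audio_frame)
                self.robot.turn_head(direction)
                self.robot.start_face_tracking()
                self.robot.start_continuous_asr()   # large-scale ASR system
                self.state = LISTENING
                self.last_addressed = time.time()
        else:
            text = self.robot.recognize(audio_frame)
            # Effective-speech detection: answer only speech addressed to the robot.
            if text and self.robot.is_addressed_to_robot(text):
                self.robot.answer(text)
                self.last_addressed = time.time()
            elif time.time() - self.last_addressed > N_MINUTES * 60:
                # No speech directed at the robot for N minutes: stop ASR and sleep.
                self.robot.stop_continuous_asr()
                self.state = SLEEP
```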
The method for recognizing the user's subsequent voice commands is as follows:
The robot checks the time at which a subsequent voice command was received. If a user voice command V was received in the interval between the user saying the wake word and the start of the continuous speech recognition system, then, once the continuous speech recognition system has started, the robot first processes voice command V and then processes the voice commands issued after the system started.
After the robot's continuous-speech listening-and-answering mode has started, the user can issue voice commands directly; the robot automatically recognizes the user's voice commands within the complex voice command system and responds. (A sketch of this ordering rule follows below.)
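As a minimal sketch under assumptions, the ordering rule for a command V captured between the wake word and the start of the recognition system could be expressed with two queues; the class and field names (CommandQueue, pending_buffer, live_buffer) are illustrative and not taken from the patent.

```python
from collections import deque

class CommandQueue:
    """Orders voice commands so that speech V captured between the wake word
    and the start of continuous speech recognition is processed first."""

    def __init__(self):
        self.pending_buffer = deque()   # audio captured before ASR started (voice V)
        self.live_buffer = deque()      # audio captured after ASR started
        self.asr_started = False

    def on_audio(self, segment):
        target = self.live_buffer if self.asr_started else self.pending_buffer
        target.append(segment)

    def on_asr_started(self):
        self.asr_started = True

    def next_command(self):
        # Commands issued before ASR start are handled before later commands.
        if self.pending_buffer:
            return self.pending_buffer.popleft()
        if self.live_buffer:
            return self.live_buffer.popleft()
        return None
```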
The effective-speech detection algorithm comprises the following steps:
Step 1: the voice wake-up algorithm judges whether the voice command received by the robot is the wake word; if so, the next utterance is by default regarded as speech addressed to the robot; otherwise, step 2 is executed;
Step 2: the robot recognizes the text of the user's utterance and generates a text sequence A1, A2, A3, ..., An, where An is a Chinese character, a pinyin unit, or a foreign-language token;
Step 3: by calculating whether the multivariate probability P(A1, A2, A3, ..., An) reaches a threshold, the robot detects whether the text sequence A1, A2, A3, ..., An is effective speech; if it is effective speech, step 4 is executed; otherwise the robot does not respond;
Step 4: using the semantic processing module and the text of the user's utterance, the robot judges whether the user is speaking to it; if so, it gives a corresponding answer; otherwise it does not respond.
Specific embodiment two
1) The robot recognizes the text the user says and generates a text sequence A1, A2, A3, ..., An (here, for Chinese, An can be a Chinese character, for example "中" or "国", or a pinyin unit, for example "zh" or "ong1");
2) By calculating whether the multivariate probability P(A1, A2, A3, ..., An) reaches a threshold, the robot detects whether the text sequence A1, A2, A3, ..., An is effective speech. For example, with binary conditional probabilities P(A2|A1), P(A3|A2), ... obtained from large text corpora, P(A1, A2, A3, ..., An) = P(A1) * P(A2|A1) * P(A3|A2) * ... * P(An|An-1); ternary and higher-order conditional probabilities are handled analogously;
3) If the robot detects that the text sequence is not speech, it does not respond;
4) If the robot detects that the text sequence is speech, the semantic processing module judges whether the user is speaking to the robot (for example, whether the similarity between the detected text sequence and the robot's preset questions reaches a threshold); if it determines that the user is not speaking to the robot, it does not respond; if it determines that the user is speaking to the machine, it gives a corresponding answer;
5) If the wake-up algorithm detects that the user has used the wake word, the user's next utterance is directly regarded as speech addressed to the robot. (A sketch of this detection flow follows below.)
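A sketch of the bigram form of the probability check just described. The toy unigram/bigram tables and the threshold are assumed illustrative values, not data from the patent; a real system would estimate them from a large text corpus.

```python
import math

# Toy model; in practice P(Ai | Ai-1) would be estimated from a large text corpus.
UNIGRAM = {"zh": 0.05, "ong1": 0.04, "guo2": 0.03}
BIGRAM = {("zh", "ong1"): 0.30, ("ong1", "guo2"): 0.20}

LOG_PROB_THRESHOLD = -12.0   # assumed threshold separating speech from non-speech

def sequence_log_prob(tokens, floor=1e-6):
    """log P(A1..An) = log P(A1) + sum_i log P(Ai | Ai-1), using a small floor
    probability for unseen tokens or token pairs."""
    if not tokens:
        return float("-inf")
    logp = math.log(UNIGRAM.get(tokens[0], floor))
    for prev, cur in zip(tokens, tokens[1:]):
        logp += math.log(BIGRAM.get((prev, cur), floor))
    return logp

def is_effective_speech(tokens):
    return sequence_log_prob(tokens) >= LOG_PROB_THRESHOLD

print(is_effective_speech(["zh", "ong1", "guo2"]))   # plausible sequence -> True
print(is_effective_speech(["qq", "xx", "zz"]))       # unseen noise tokens -> False
```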
The robot uses a microphone array to suppress interference from ambient noise, and applies a hybrid voice-and-image algorithm to process the user's command information, as follows:
Step a: the robot recognizes the user's voice, obtains the user's direction A with the sound source localization algorithm, turns its head to face direction A, starts the camera, and performs face recognition and tracking;
Step b: the robot judges in real time from the incoming sound whether the user is speaking; if the user speaks, it obtains the direction of the current utterance, records the newest voice direction as direction B, and records the previous voice direction as direction A;
Step c: the robot continuously performs face recognition and tracking; the newest face-tracking direction is direction C;
Step d: the robot judges the direction of the user: if face tracking of direction C has not been interrupted, the user's direction is taken to be direction C; if face tracking has been interrupted, the user's direction is taken to be direction B;
Step e: the robot detects and judges the user's direction in real time and checks whether its head is facing the user; if not, it turns its head towards the detected direction;
Step f: steps b to e are repeated until the robot enters standby sleep mode. (A sketch of this direction-fusion logic follows below.)
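The direction-fusion rule in steps a to f can be summarized in a few functions: prefer the face-tracking direction C while tracking is uninterrupted, otherwise fall back to the most recent voice direction B. This is a sketch under assumptions; the data structure and all names are illustrative rather than part of the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DirectionState:
    voice_prev: Optional[float] = None    # direction A: previous utterance (degrees)
    voice_latest: Optional[float] = None  # direction B: latest utterance
    face: Optional[float] = None          # direction C: latest face-tracking result
    face_track_ok: bool = False           # True while face tracking is uninterrupted

def on_voice(state: DirectionState, direction: float) -> None:
    # Step b: shift the latest voice direction into "previous" and record the new one.
    state.voice_prev = state.voice_latest
    state.voice_latest = direction

def on_face(state: DirectionState, direction: Optional[float]) -> None:
    # Step c: update the face-tracking direction; None means tracking was interrupted.
    state.face = direction
    state.face_track_ok = direction is not None

def user_direction(state: DirectionState) -> Optional[float]:
    # Step d: prefer uninterrupted face tracking (C), else fall back to voice (B).
    return state.face if state.face_track_ok else state.voice_latest

def head_target(state: DirectionState, head_direction: float, tolerance: float = 5.0):
    # Step e: if the head is not facing the user, return the direction to turn to.
    target = user_direction(state)
    if target is not None and abs(target - head_direction) > tolerance:
        return target
    return None
```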
In addition, the robot can use voiceprint recognition and face recognition to give different answers to different users according to their age and gender, providing more humanized communication.
The robot can have a screen and, according to the user's question and identity, give feedback by showing humanized expressions on the screen.
Human-like robot emotion communication algorithm, comprising the following steps:
Step A: establish a user attribute table containing binary (yes/no) categories and level categories, actively or passively collect the user list information, and store it;
Step B: receive the user's voice information, compare the content of the received voice information with the user attribute table, and judge whether there is matching content; if so, execute step C; otherwise, give an ordinary response;
Step C: analyse the voice information issued by the user:
if it is voice information of a binary-category attribute, issue an answer voice with emotion according to the likes and dislikes in the user list information;
if it is voice information of a level-category attribute, judge the degree of difference between the user's voice information and the user attribute, process the user's voice information according to the degree of difference, and give a corresponding answer;
if the user's voice information contains both binary-category and level-category attribute information, execute step D:
Step D: judge whether the difference value of the level-category attribute exceeds a threshold; if it does, process with the level-category method; otherwise, process with the binary-category method.
The method of processing the user's voice information according to the degree of difference in step C is as follows:
Calculate the difference between the obtained level-category attribute value and the attribute threshold; if the difference is positive, the robot answers in a happy voice; if the difference is negative, the robot answers in an encouraging voice.
Specific embodiment three
1) Establish an attribute table for the user containing two classes of attributes. One class is binary (yes/no), for example things the user likes and things the user dislikes. The other class is level categories, for example language proficiency (e.g. kindergarten year three corresponds to level 30), mathematics proficiency (e.g. primary school year two corresponds to level 20), or swimming. The attribute table is built in two ways: first, by providing options for the user to fill in; second, by extracting information about the user from the robot's daily voice interaction and building the table automatically.
2) Analyse the user's current question (including the current question and the ones before it) and check whether it contains any content from the user attribute table; if not, answer in a normal tone.
3) If the user's question contains a binary-category attribute, give an answer voice with emotion according to the user's likes and dislikes.
4) If the user's question contains a level-category attribute, judge the degree and sign of the difference between the user's question and the user attribute, and then process according to the difference value.
For example, the user attribute is primary school year-two mathematics, level 20, but in an earlier interaction the user correctly answered a year-three mathematics problem, whose attribute value is 30; the difference between the two exceeds the threshold and is positive, so the robot uses a happy voice in its subsequent answer.
As another example, the user says "I have learned to cook a new dish, Kung Pao chicken." The dish is not yet registered in the user's attributes, so the level defaults to 0; from the user's statement the robot identifies that "Kung Pao chicken" corresponds to cooking level 10; the difference exceeds the threshold and is positive, so the robot can use a happy voice in its subsequent answer. If instead the user says "The Kung Pao chicken I made was not very tasty", the robot identifies the level of this statement as 5 while the level in the user's attributes is 10; the difference exceeds the threshold and is negative, so the robot can use an encouraging tone in its subsequent answer.
5) If the user's question contains both a binary-category attribute and a level-category attribute, judge whether the difference value of the level-category attribute exceeds the threshold; if it does, use the level-category processing, otherwise use the binary-category processing.
6) Special user statements are given additional priority. For example, if the user says "I am very happy today", the robot can answer with a happy voice: "Master, I am very happy too." (A sketch of this emotion-selection logic follows below.)
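The following sketch puts embodiment three into code: a user attribute table with binary (like/dislike) entries and level-category entries, a difference check against a threshold, and selection of a happy or encouraging reply tone. The threshold value and the dictionary layout are assumptions chosen so that the worked examples above come out as described; nothing here is the patent's actual implementation.

```python
DIFF_THRESHOLD = 4  # assumed; the patent only requires the difference to "exceed a threshold"

user_attributes = {
    "binary": {"broccoli": "dislike", "football": "like"},  # yes/no (like/dislike) attributes
    "level":  {"math": 20, "language": 30, "cooking": 10},  # level-category attributes
}

def binary_tone(topic: str) -> str:
    # Binary-category attribute: emotion follows the user's registered likes/dislikes.
    return "happy" if user_attributes["binary"].get(topic) == "like" else "sympathetic"

def level_tone(topic: str, observed_level: int) -> str:
    # Level-category attribute: compare the observed level with the stored value
    # (default 0 for a topic not yet registered).
    stored = user_attributes["level"].get(topic, 0)
    diff = observed_level - stored
    if abs(diff) <= DIFF_THRESHOLD:
        return "neutral"
    return "happy" if diff > 0 else "encouraging"

def choose_tone(topic, observed_level=None, is_binary=False, is_level=False) -> str:
    # Step D: with both attribute types present, use level processing only when its
    # difference exceeds the threshold; otherwise fall back to the binary rule.
    if is_binary and is_level:
        stored = user_attributes["level"].get(topic, 0)
        if abs(observed_level - stored) > DIFF_THRESHOLD:
            return level_tone(topic, observed_level)
        return binary_tone(topic)
    if is_level:
        return level_tone(topic, observed_level)
    return binary_tone(topic)

print(level_tone("math", 30))     # year-3 problem answered: 30 vs 20 -> "happy"
print(level_tone("cooking", 5))   # "not very tasty": 5 vs 10 -> "encouraging"
```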
A highly human-like voice interaction robot with emotion communication comprises a robot body and a control system. An image recognition device, a speech recognition device, and a display device are arranged on the robot body, and the head of the robot body can rotate freely relative to the body. The robot control system comprises a voice interaction module, a speech processing module, an image processing module, and a robot control module. The voice interaction module receives the user's voice information and issues corresponding responses to the user; the speech processing module processes the received voice information using the human-like robot voice interaction algorithm described above; the image processing module processes the received image information; the robot control module controls the head and body motion of the robot according to the processed voice and image information and displays the corresponding information. (A module-level sketch follows below.)
The speech recognition device comprises a microphone array.
The display device is a display screen mounted on the robot's head; according to the user's voice information and identity information, a humanized expression is shown on the display screen.
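A minimal sketch of how the control-system modules described above could be wired together. All class names, method names, and returned fields are assumptions made for illustration; the patent does not specify these interfaces.

```python
class VoiceInteractionModule:
    """Receives the user's voice and issues the corresponding spoken response."""
    def respond(self, text: str) -> None:
        print("robot says:", text)

class SpeechProcessingModule:
    """Would apply the human-like voice interaction algorithm to incoming audio."""
    def process(self, audio) -> dict:
        return {"text": "", "addressed_to_robot": False, "voice_direction": None}

class ImageProcessingModule:
    """Would run face recognition and tracking on camera frames."""
    def process(self, frame) -> dict:
        return {"face_direction": None, "tracking_ok": False}

class RobotControlModule:
    """Moves the head and body and drives the head-mounted display screen."""
    def act(self, speech: dict, image: dict) -> None:
        target = image["face_direction"] if image["tracking_ok"] else speech["voice_direction"]
        if target is not None:
            print("turn head towards", target)
        print("show matching expression on the display screen")

def control_cycle(audio, frame, speech_mod, image_mod, control_mod, voice_mod):
    # One pass of the control system: process voice and image, move the robot,
    # and answer only when the user addressed the robot.
    speech = speech_mod.process(audio)
    image = image_mod.process(frame)
    control_mod.act(speech, image)
    if speech["addressed_to_robot"]:
        voice_mod.respond("corresponding answer")
```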

Claims (10)

1. A human-like robot voice interaction algorithm, characterized in that: the robot has a voice wake-up algorithm based on an embedded chip and a continuous speech recognition system based on large-scale computation, and after the robot is powered on,
first, the voice wake-up algorithm on the embedded chip is started;
second, after the user has used the wake word, the robot uses a sound source localization algorithm to determine the direction and distance of the user, turns its head towards the user, and starts face recognition and tracking as well as the continuous speech recognition system;
third, the robot recognizes the user's subsequent voice commands and the user's movement, responds to the voice commands, and always keeps its head facing the user;
then, after the continuous speech recognition system has started, the robot enters the continuous-speech listening-and-answering mode, recognizes speech with an effective-speech detection algorithm, and interacts with the user;
finally, if no user speech is detected within a preset time, the robot enters sleep mode.
2. The human-like robot voice interaction algorithm according to claim 1, characterized in that the method for recognizing the user's subsequent voice commands is as follows:
the robot checks the time at which a subsequent voice command was received; if a user voice command V was received in the interval between the user saying the wake word and the start of the continuous speech recognition system, then, once the continuous speech recognition system has started, the robot first processes voice command V and then processes the voice commands issued after the system started.
3. The human-like robot voice interaction algorithm according to claim 1, characterized in that: after the robot's continuous-speech listening-and-answering mode has started, the user can issue voice commands directly, and the robot automatically recognizes the user's voice commands within the complex voice command system and responds.
4. The human-like robot voice interaction algorithm according to claim 1, characterized in that the effective-speech detection algorithm comprises the following steps:
step 1: the voice wake-up algorithm judges whether the voice command received by the robot is the wake word; if so, the next utterance is by default regarded as speech addressed to the robot; otherwise, step 2 is executed;
step 2: the robot recognizes the text of the user's utterance and generates a text sequence A1, A2, A3, ..., An, where An is a Chinese character, a pinyin unit, or a foreign-language token;
step 3: by calculating whether the multivariate probability P(A1, A2, A3, ..., An) reaches a threshold, the robot detects whether the text sequence A1, A2, A3, ..., An is effective speech; if it is effective speech, step 4 is executed; otherwise the robot does not respond;
step 4: using the semantic processing module and the text of the user's utterance, the robot judges whether the user is speaking to it; if so, it gives a corresponding answer; otherwise it does not respond.
5. The human-like robot voice interaction algorithm according to claim 1, characterized in that: the robot uses a microphone array to suppress interference from ambient noise, and applies a hybrid voice-and-image algorithm to process the user's command information, as follows:
step a: the robot recognizes the user's voice, obtains the user's direction A with the sound source localization algorithm, turns its head to face direction A, starts the camera, and performs face recognition and tracking;
step b: the robot judges in real time from the incoming sound whether the user is speaking; if the user speaks, it obtains the direction of the current utterance, records the newest voice direction as direction B, and records the previous voice direction as direction A;
step c: the robot continuously performs face recognition and tracking; the newest face-tracking direction is direction C;
step d: the robot judges the direction of the user: if face tracking of direction C has not been interrupted, the user's direction is taken to be direction C; if face tracking has been interrupted, the user's direction is taken to be direction B;
step e: the robot detects and judges the user's direction in real time and checks whether its head is facing the user; if not, it turns its head towards the detected direction;
step f: steps b to e are repeated until the robot enters standby sleep mode.
6. A human-like robot emotion communication algorithm, characterized by comprising the following steps:
step A: establish a user attribute table containing binary (yes/no) categories and level categories, actively or passively collect the user list information, and store it;
step B: receive the user's voice information, compare the content of the received voice information with the user attribute table, and judge whether there is matching content; if so, execute step C; otherwise, give an ordinary response;
step C: analyse the voice information issued by the user:
if it is voice information of a binary-category attribute, issue an answer voice with emotion according to the likes and dislikes in the user list information;
if it is voice information of a level-category attribute, judge the degree of difference between the user's voice information and the user attribute, process the user's voice information according to the degree of difference, and give a corresponding answer;
if the user's voice information contains both binary-category and level-category attribute information, execute step D:
step D: judge whether the difference value of the level-category attribute exceeds a threshold; if it does, process with the level-category method; otherwise, process with the binary-category method.
7. The human-like robot emotion communication algorithm according to claim 6, characterized in that the method of processing the user's voice information according to the degree of difference in step C is as follows:
calculate the difference between the obtained level-category attribute value and the attribute threshold; if the difference is positive, the robot answers in a happy voice; if the difference is negative, the robot answers in an encouraging voice.
8. A highly human-like voice interaction robot with emotion communication, characterized by comprising a robot body and a control system, wherein an image recognition device, a speech recognition device, and a display device are arranged on the robot body, and the head of the robot body can rotate freely relative to the body; the robot control system comprises a voice interaction module, a speech processing module, an image processing module, and a robot control module; the voice interaction module receives the user's voice information and issues corresponding responses to the user; the speech processing module processes the received voice information using the human-like robot voice interaction algorithm according to any one of claims 1 to 5; the image processing module processes the received image information; and the robot control module controls the head and body motion of the robot according to the processed voice and image information and displays the corresponding information.
9. The highly human-like voice interaction robot with emotion communication according to claim 8, characterized in that the speech recognition device comprises a microphone array.
10. The highly human-like voice interaction robot with emotion communication according to claim 8, characterized in that the display device is a display screen mounted on the robot's head, and a humanized expression is shown on the display screen according to the user's voice information and identity information.
CN201910096249.XA 2019-01-31 2019-01-31 Human-like robot voice interaction algorithm, emotion communication algorithm, and robot Pending CN109741746A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910096249.XA CN109741746A (en) 2019-01-31 2019-01-31 Human-like robot voice interaction algorithm, emotion communication algorithm, and robot


Publications (1)

Publication Number Publication Date
CN109741746A (en) 2019-05-10

Family

ID=66366964

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910096249.XA Pending CN109741746A (en) Human-like robot voice interaction algorithm, emotion communication algorithm, and robot

Country Status (1)

Country Link
CN (1) CN109741746A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106292732A (en) * 2015-06-10 2017-01-04 上海元趣信息技术有限公司 Intelligent robot rotating method based on sound localization and Face datection
CN105654943A (en) * 2015-10-26 2016-06-08 乐视致新电子科技(天津)有限公司 Voice wakeup method, apparatus and system thereof
CN106782554A (en) * 2016-12-19 2017-05-31 百度在线网络技术(北京)有限公司 Voice awakening method and device based on artificial intelligence
CN106875945A (en) * 2017-03-09 2017-06-20 广东美的制冷设备有限公司 Sound control method, device and air-conditioner
CN107256707A (en) * 2017-05-24 2017-10-17 深圳市冠旭电子股份有限公司 A kind of audio recognition method, system and terminal device
CN107199572A (en) * 2017-06-16 2017-09-26 山东大学 A kind of robot system and method based on intelligent auditory localization and Voice command

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110349577A (en) * 2019-06-19 2019-10-18 深圳前海达闼云端智能科技有限公司 Man-machine interaction method, device, storage medium and electronic equipment
CN110310644A (en) * 2019-06-28 2019-10-08 广州云蝶科技有限公司 Wisdom class board exchange method based on speech recognition
CN111192597A (en) * 2019-12-27 2020-05-22 浪潮金融信息技术有限公司 Processing method of continuous voice conversation in noisy environment
CN111312243A (en) * 2020-02-14 2020-06-19 北京百度网讯科技有限公司 Equipment interaction method and device
CN111312243B (en) * 2020-02-14 2023-11-14 北京百度网讯科技有限公司 Equipment interaction method and device
CN112185388A (en) * 2020-09-14 2021-01-05 北京小米松果电子有限公司 Speech recognition method, device, equipment and computer readable storage medium
CN112185388B (en) * 2020-09-14 2024-04-09 北京小米松果电子有限公司 Speech recognition method, device, equipment and computer readable storage medium
CN112420045A (en) * 2020-12-11 2021-02-26 奇瑞汽车股份有限公司 Automobile-mounted voice interaction system and method
CN112908325A (en) * 2021-01-29 2021-06-04 中国平安人寿保险股份有限公司 Voice interaction method and device, electronic equipment and storage medium
CN112908325B (en) * 2021-01-29 2022-10-28 中国平安人寿保险股份有限公司 Voice interaction method and device, electronic equipment and storage medium
CN113284490A (en) * 2021-04-23 2021-08-20 歌尔股份有限公司 Control method, device and equipment of electronic equipment and readable storage medium

Similar Documents

Publication Publication Date Title
CN109741746A (en) Human-like robot voice interaction algorithm, emotion communication algorithm, and robot
CN108000526B (en) Dialogue interaction method and system for intelligent robot
US11017779B2 (en) System and method for speech understanding via integrated audio and visual based speech recognition
CN108108340B (en) Dialogue interaction method and system for intelligent robot
US11430439B2 (en) System and method for providing assistance in a live conversation
US9635178B2 (en) Coordinating voice calls between representatives and customers to influence an outcome of the call
US9501743B2 (en) Method and apparatus for tailoring the output of an intelligent automated assistant to a user
CN116547746A (en) Dialog management for multiple users
US20190371318A1 (en) System and method for adaptive detection of spoken language via multiple speech models
CN110326261A (en) Determine that the speaker in audio input changes
US11594224B2 (en) Voice user interface for intervening in conversation of at least one user by adjusting two different thresholds
CN107870994A (en) Man-machine interaction method and system for intelligent robot
CN109101663A (en) A kind of robot conversational system Internet-based
CN105810200A (en) Man-machine dialogue apparatus and method based on voiceprint identification
CN104538043A (en) Real-time emotion reminder for call
US20220101856A1 (en) System and method for disambiguating a source of sound based on detected lip movement
CN106815321A (en) Chat method and device based on intelligent chat robots
Matsusaka et al. Conversation robot participating in group conversation
WO2021162675A1 (en) Synthesized speech audio data generated on behalf of human participant in conversation
CN105912111A (en) Method for ending voice conversation in man-machine interaction and voice recognition device
CN114821744A (en) Expression recognition-based virtual character driving method, device and equipment
CN111063346A (en) Cross-media star emotion accompany interaction system based on machine learning
US20220021762A1 (en) A command based interactive system and a method thereof
CN106251717A (en) Intelligent robot speech follow read learning method and device
CN110502609A (en) A kind of method, apparatus and company robot of adjusting mood

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 2019-05-10)