CN104252287A - Interaction device and method for improving expression capability based on interaction device - Google Patents
Info
- Publication number
- CN104252287A CN104252287A CN201410449741.8A CN201410449741A CN104252287A CN 104252287 A CN104252287 A CN 104252287A CN 201410449741 A CN201410449741 A CN 201410449741A CN 104252287 A CN104252287 A CN 104252287A
- Authority
- CN
- China
- Prior art keywords
- picture
- voice
- unit
- word
- interactive device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention discloses an interaction device and a method for improving expression capability based on the interaction device. The interaction device comprises a display unit, a sound acquisition unit, a voice recognition processing unit and a sounding unit. The display unit displays pictures; the sound acquisition unit acquires sound signals; the voice recognition processing unit recognizes words from the sound signal, compares the recognized words with keywords, and, when the recognized words do not match the keywords, controls the sounding unit to output a prompt voice corresponding to the picture; the sounding unit outputs the guiding voice corresponding to the picture and the corresponding prompt voice. By displaying a picture on the display unit, processing sound through the sound acquisition unit and the voice recognition processing unit, and controlling the sounding unit to respond, comprehensive interaction between the interaction device and the user is achieved.
Description
Technical field
The present invention relates to the field of human-computer interaction, and in particular to an interactive device and a method for improving expression capability based on the interactive device.
Background art
Human-computer interaction is an important research direction in the field of smart devices, and its application in education and teaching is of great significance. However, current interaction-based educational devices, particularly those aimed at expression capability, picture-description capability and communication capability, mainly provide one-way communication: for example, pictures and textual explanations are presented to the user one-way, without receiving any feedback; or only simple yes/no feedback can be received, or the feedback is realized by manual operation. The interactivity is weak, and repeated two-way interaction between both sides is not achieved.
Summary of the invention
The present invention proposes an interactive device and a method for improving expression capability based on the interactive device. By providing a storage unit that stores pictures and related voices, displaying a picture on the display unit, processing sound through the sound collection unit and the voice recognition processing unit, and controlling the sounding unit to respond, comprehensive interaction between the interactive device and the user is achieved.
To realize the above design, the present invention adopts the following technical solutions:
In one aspect, an interactive device is provided, comprising: a display unit, a sound collection unit, a voice recognition processing unit and a sounding unit, wherein:
the display unit is configured to display a picture;
the sound collection unit is configured to collect a sound signal;
the voice recognition processing unit is configured to recognize words from the sound signal, compare the recognized words with a keyword, and, when the recognized words do not match the keyword, control the sounding unit to output a prompt voice corresponding to the picture;
the sounding unit is configured to output a guiding voice corresponding to the picture and the corresponding prompt voice.
Further, the storage unit is also configured to store excitation voices;
the voice recognition processing unit is further configured to, when the recognized words match the keyword, control the sounding unit to output an excitation voice.
Further, the pictures are classified and stored by content difficulty.
Further, the device also comprises a control module, and the control module is configured to detect the selected content difficulty and picture.
Further, the control module is a touch control module.
In another aspect, a method for improving expression capability based on an interactive device is provided, comprising:
the display unit displays a picture, and the sounding unit outputs a guiding voice;
the sound collection unit collects a sound signal;
the voice recognition processing unit recognizes words from the sound signal, compares the recognized words with a keyword, and, when the recognized words do not match the keyword, controls the sounding unit to output a prompt voice corresponding to the picture;
the sounding unit outputs the prompt voice.
Further, when the recognized words match the keyword, the voice recognition processing unit controls the sounding unit to output an excitation voice.
Further, the pictures are classified and stored by content difficulty.
Further, before the display unit displays the picture and the sounding unit outputs the guiding voice, the method also comprises:
the control module detects the selected content difficulty and picture.
Further, the control module is a touch control module.
The beneficial effects of the present invention are: by providing a storage unit that stores pictures and related voices, displaying a picture on the display unit, processing sound through the sound collection unit and the voice recognition processing unit, and controlling the sounding unit to respond, comprehensive interaction between the interactive device and the user is achieved.
Brief description of the drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those of ordinary skill in the art can derive other drawings from the content of the embodiments and these drawings without creative effort.
Fig. 1 is a block diagram of a first embodiment of an interactive device provided by an embodiment of the present invention.
Fig. 2 is a block diagram of a second embodiment of an interactive device provided by an embodiment of the present invention.
Fig. 3 is a flowchart of a first embodiment of a method for improving expression capability based on an interactive device provided by an embodiment of the present invention.
Fig. 4 is a flowchart of a second embodiment of a method for improving expression capability based on an interactive device provided by an embodiment of the present invention.
Detailed description of the embodiments
To make the technical problems solved by the present invention, the technical solutions adopted and the technical effects achieved clearer, the technical solutions of the embodiments of the present invention are described in further detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Please refer to Fig. 1, which is a block diagram of the first embodiment of an interactive device provided by an embodiment of the present invention. The interactive device in this embodiment is mainly used for interaction between a machine and a person, and is especially suitable for interactive guidance in students' learning.
The interactive device comprises: a display unit 120, a sound collection unit 130, a voice recognition processing unit 140 and a sounding unit 150, wherein:
The display unit 120 is configured to display a picture.
In terms of hardware, the display unit 120 may be any kind of display screen capable of visual display.
The sound collection unit 130 is configured to collect a sound signal.
The sound collection unit 130 may specifically be any kind of microphone.
The voice recognition processing unit 140 is configured to recognize words from the sound signal, compare the recognized words with a keyword, and, when the recognized words do not match the keyword, control the sounding unit 150 to output a prompt voice corresponding to the picture.
The voice recognition processing unit 140 is specifically a processor, such as a CPU (Central Processing Unit) or an MCU (Micro Control Unit), which recognizes the sound signal and performs the judgment processing after recognition.
The sounding unit 150 is configured to output a guiding voice corresponding to the picture and the corresponding prompt voice.
The sounding unit 150 comprises a loudspeaker or another sound-producing device.
In summary, by displaying a picture on the display unit 120, processing sound through the sound collection unit 130 and the voice recognition processing unit 140, and controlling the sounding unit 150 to respond, comprehensive interaction between the interactive device and the user is achieved.
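The interaction just described — display a picture, play its guiding voice, collect sound, recognize words, compare them with a keyword, and prompt on a mismatch — can be summarized in the minimal Python sketch below. All names (Picture, recognize_words, and so on) and the stubbed audio/ASR back ends are illustrative assumptions; the patent does not prescribe an API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Picture:
    image_path: str                     # shown by the display unit 120
    guide_voice: str                    # guiding voice played with the picture
    prompt_voice: str                   # hint played when no keyword is matched
    keywords: List[str] = field(default_factory=list)

def show(picture: Picture) -> None:
    # Stand-in for the display unit 120.
    print(f"[display] {picture.image_path}")

def play(voice: str) -> None:
    # Stand-in for the sounding unit 150 (loudspeaker).
    print(f"[speaker] {voice}")

def capture_audio() -> bytes:
    # Stand-in for the sound collection unit 130 (microphone driver).
    return b""

def recognize_words(signal: bytes) -> List[str]:
    # Stand-in for the voice recognition processing unit 140; a real device
    # would run a speech recognition engine on `signal` here.
    return ["man"]                      # simulated result for the example picture

def interact_once(picture: Picture) -> bool:
    """One round of interaction; returns True if a keyword was matched."""
    show(picture)
    play(picture.guide_voice)
    words = recognize_words(capture_audio())
    if any(word in picture.keywords for word in words):
        return True
    play(picture.prompt_voice)          # mismatch: output the picture's prompt voice
    return False
```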
Please refer to Fig. 2, which is a block diagram of the second embodiment of an interactive device provided by an embodiment of the present invention. The interactive device comprises: a display unit 120, a sound collection unit 130, a voice recognition processing unit 140 and a sounding unit 150, wherein:
The display unit 120 is configured to display a picture.
The sound collection unit 130 is configured to collect a sound signal.
The voice recognition processing unit 140 is configured to recognize words from the sound signal, compare the recognized words with a keyword, and, when the recognized words do not match the keyword, control the sounding unit 150 to output a prompt voice corresponding to the picture.
After finishing the processing of the voice corresponding to the current picture, the voice recognition processing unit 140 can load the next picture and complete the interaction process based on that picture.
The sounding unit 150 is configured to output a guiding voice corresponding to the picture and the corresponding prompt voice.
The prompt voice is output when the recognized words do not match the keyword, and can be regarded as a further, more detailed hint about the content of the picture.
Further, the storage unit 110 is also configured to store excitation voices.
An excitation voice is output when the recognized words match the keyword. Unlike the guiding voice and the prompt voice, which correspond to particular pictures, excitation voices need not correspond to a picture; a number of excitation voices, such as "Excellent" and "Keep it up", are prepared in the storage unit 110 as a whole.
The voice recognition processing unit 140 is further configured to, when the recognized words match the keyword, control the sounding unit 150 to output an excitation voice.
The playback of excitation voices is likewise controlled by the voice recognition processing unit 140.
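Building on the helper names from the earlier sketch, the excitation branch could look as follows. The example phrases and the random selection policy are assumptions; the patent only requires that excitation voices be stored and played on a match.

```python
import random

# Hypothetical shared pool of excitation voices; not tied to any particular picture.
EXCITATION_VOICES = ["Excellent!", "Keep it up!"]

def respond(matched: bool, picture: Picture) -> None:
    # The voice recognition processing unit 140 drives the sounding unit 150:
    # an excitation voice on a match, the picture's prompt voice otherwise.
    if matched:
        play(random.choice(EXCITATION_VOICES))
    else:
        play(picture.prompt_voice)
```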
Further, the pictures are classified and stored by content difficulty.
To match the cognitive development of the user, the pictures are classified by content difficulty, and the interaction generally proceeds from easy to difficult.
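One way to organize such difficulty-graded storage is sketched below, reusing the Picture class from the first sketch. The difficulty labels and sample pictures are hypothetical and only illustrate the easy-to-difficult ordering described above.

```python
# Hypothetical picture library grouped by content difficulty.
PICTURE_LIBRARY = {
    "easy": [
        Picture("man.png",
                "Children, is the person you see now a man or a woman?",
                "Look at his strong features and his Adam's apple, then think again.",
                ["man"]),
    ],
    "hard": [
        Picture("park.png",
                "What are the children in the picture doing?",
                "Look at the ball at their feet.",
                ["playing", "football"]),
    ],
}

def pictures_for(difficulty: str) -> List[Picture]:
    # The control module 110 would supply `difficulty` (e.g. from a touch
    # selection); interaction then proceeds through the group in order.
    return PICTURE_LIBRARY.get(difficulty, [])
```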
Further, the device also comprises a control module 110, and the control module 110 is configured to detect the selected content difficulty and picture.
When the device is used next time, content that the user has already interacted with can be skipped; in that case the control module 110 is used to confirm a new content difficulty and picture.
Further, the control module 110 is a touch control module.
The touch control module can be integrated with the display device, so that a single touch display screen realizes both the function of the display unit 120 and the function of the touch control module.
In summary, by displaying a picture on the display unit 120, processing sound through the sound collection unit 130 and the voice recognition processing unit 140, and controlling the sounding unit 150 to respond, comprehensive interaction between the interactive device and the user is achieved. The control module 110 further improves the user's control over the interaction process.
The following are embodiments of a method for improving expression capability based on an interactive device, provided by embodiments of the present invention. The method embodiments are implemented on the basis of the above embodiments of the interactive device and belong to the same concept; for details not described in the method embodiments, reference can be made to the above embodiments of the interactive device.
Please refer to Fig. 3, which is a flowchart of the first embodiment of a method for improving expression capability based on an interactive device according to the present invention. As shown in the figure, the method comprises:
Step S101: display a picture and output a guiding voice corresponding to the picture.
For example, the main subject of the picture is a man, and the corresponding guiding voice is "Children, is the person you see now a man or a woman?" The keyword corresponding to this guiding voice is "man". The display unit 120 displays the picture while the sounding unit 150 outputs the guiding voice, guiding the student user to observe the picture.
Step S102: collect a sound signal.
The sound collection unit 130 collects the sound signal uttered by the user in response to the guiding voice and, after collection, sends the sound signal to the voice recognition processing unit 140 for subsequent processing.
Step S103: recognize words from the sound signal, and compare the recognized words with the keyword.
The voice recognition processing unit 140 performs speech recognition on the sound signal; specific speech recognition techniques are relatively mature and are not repeated here.
Step S104: when the recognized words do not match the keyword, output the prompt voice corresponding to the picture.
After the words are recognized they can be compared with the keyword; if none of the recognized words is identical to the keyword, the match is considered unsuccessful. For example, when the keyword "man" is compared with the recognized words and no identical word is found, the voice recognition processing unit 140 controls the sounding unit 150 to output the corresponding prompt voice, for example "Look at his strong features and his Adam's apple, then think again".
After the prompt voice is output, the process can in fact restart from step S102: a new sound signal is collected and speech recognition is performed on it. This is routine processing in the interaction and is not elaborated here.
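The repetition of steps S102-S104 can be sketched as a simple retry loop, reusing the helpers from the device sketch. The round limit is an assumption, since the patent leaves the repetition policy open.

```python
def guide_until_matched(picture: Picture, max_rounds: int = 3) -> bool:
    # Step S101: show the picture and play its guiding voice.
    show(picture)
    play(picture.guide_voice)
    for _ in range(max_rounds):
        # Steps S102-S103: collect sound and recognize words from it.
        words = recognize_words(capture_audio())
        if any(word in picture.keywords for word in words):
            return True                 # keyword matched, guidance finished
        # Step S104: mismatch, output the prompt voice and try again.
        play(picture.prompt_voice)
    return False
```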
In summary, by displaying a picture and outputting the corresponding guiding voice, collecting sound, performing speech recognition on the collected sound and responding to the recognition result, comprehensive interaction between the interactive device and the user is achieved, and the interactive experience of the user is improved.
Please refer to Fig. 4, which is a flowchart of the second embodiment of a method for improving expression capability based on an interactive device according to the present invention. As shown in the figure, the method comprises:
Step S201: detect the selected content difficulty and picture.
In this embodiment, the control module 110 is preferably a touch control module; correspondingly, the selected content difficulty and picture are a content difficulty and picture selected by a touch operation. Other means of control, such as mechanical keys, can also be chosen, which is determined by the hardware carrier implementing the technical solution: a tablet computer, a PC or a mechanical-key intelligent terminal, for example, each realizes it differently.
Step S202: display the picture and output the guiding voice corresponding to the picture.
In the storage unit 110, the pictures are classified and stored by content difficulty. To match the cognitive development of the user, the pictures are classified by content difficulty, and the interaction generally proceeds from easy to difficult.
For example, the main subject of the picture is a man, and the corresponding guiding voice is "Children, is the person you see now a man or a woman?" The keyword corresponding to this guiding voice is "man". The display unit 120 displays the picture while the sounding unit 150 outputs the guiding voice, guiding the student user to observe the picture.
Step S203: collect a sound signal.
The sound collection unit 130 collects the sound signal uttered by the user in response to the guiding voice and, after collection, sends the sound signal to the voice recognition processing unit 140 for subsequent processing.
Step S204: recognize words from the sound signal, and compare the recognized words with the keyword.
The voice recognition processing unit 140 performs speech recognition on the sound signal; specific speech recognition techniques are relatively mature and are not repeated here. After the words are recognized they can be compared with the keyword; if none of the recognized words is identical to the keyword, the match is considered unsuccessful. For example, when the keyword "man" is compared with the recognized words and no identical word is found, the voice recognition processing unit 140 controls the sounding unit 150 to output the corresponding prompt voice, for example "Look at his strong features and his Adam's apple, then think again".
Step S205: when the recognized words do not match the keyword, output the prompt voice corresponding to the picture.
After the prompt voice is output, the process can in fact restart from step S203: a new sound signal is collected and speech recognition is performed on it. This is routine processing in the interaction and is not elaborated here.
Step S206: when the recognized words match the keyword, output an excitation voice.
When one of the recognized words matches the keyword, the guidance for the current picture is considered finished, and an excitation voice is output to the user.
The specific content of the excitation voice has already been explained in the previous embodiment.
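Tying the pieces together, a full session of this second method embodiment (steps S201-S206) might look like the sketch below; the helper names come from the earlier sketches and are assumptions rather than the patent's API.

```python
def run_session(difficulty: str) -> None:
    # Step S201: the selected difficulty determines which picture group is used.
    for picture in pictures_for(difficulty):
        # Steps S202-S205: guide the user on this picture until a keyword matches.
        matched = guide_until_matched(picture)
        # Step S206: on a match, reward the user with an excitation voice,
        # then move on to the next picture.
        if matched:
            play(random.choice(EXCITATION_VOICES))

# Hypothetical usage: run_session("easy")
```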
In summary, by displaying a picture and outputting the corresponding guiding voice, collecting sound, performing speech recognition on the collected sound and responding to the recognition result, comprehensive interaction between the interactive device and the user is achieved, and the interactive experience of the user is improved. The step of selecting the content difficulty and picture further improves the user's control over the interaction process, and the excitation given upon a successful match further enriches the interaction effect.
The above is only a preferred embodiment of the present invention. For those of ordinary skill in the art, changes can be made in specific implementations and application scope according to the idea of the present invention, and this description should not be construed as limiting the present invention.
Claims (10)
1. An interactive device, characterized by comprising: a display unit, a sound collection unit, a voice recognition processing unit and a sounding unit, wherein:
the display unit is configured to display a picture;
the sound collection unit is configured to collect a sound signal;
the voice recognition processing unit is configured to recognize words from the sound signal, compare the recognized words with a keyword, and, when the recognized words do not match the keyword, control the sounding unit to output a prompt voice corresponding to the picture;
the sounding unit is configured to output a guiding voice corresponding to the picture and the corresponding prompt voice.
2. The interactive device according to claim 1, characterized in that the storage unit is further configured to store excitation voices;
the voice recognition processing unit is further configured to, when the recognized words match the keyword, control the sounding unit to output an excitation voice.
3. The interactive device according to claim 1, characterized in that the pictures are classified and stored by content difficulty.
4. The interactive device according to claim 3, characterized by further comprising a control module, wherein the control module is configured to detect the selected content difficulty and picture.
5. The interactive device according to claim 4, characterized in that the control module is a touch control module.
6. A method for improving expression capability based on an interactive device, characterized by comprising:
displaying a picture and outputting a guiding voice corresponding to the picture;
collecting a sound signal;
recognizing words from the sound signal, and comparing the recognized words with a keyword;
when the recognized words do not match the keyword, outputting a prompt voice corresponding to the picture.
7. The method according to claim 6, characterized in that when the recognized words match the keyword, an excitation voice is output.
8. The method according to claim 6, characterized in that the pictures are classified and stored by content difficulty.
9. The method according to claim 8, characterized in that before displaying the picture and outputting the guiding voice corresponding to the picture, the method further comprises:
detecting the selected content difficulty and picture.
10. The method according to claim 9, characterized in that the selected content difficulty and picture comprise: a content difficulty and picture selected by a touch operation, or a content difficulty and picture selected according to the matching probability within a predetermined period.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410449741.8A CN104252287A (en) | 2014-09-04 | 2014-09-04 | Interaction device and method for improving expression capability based on interaction device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410449741.8A CN104252287A (en) | 2014-09-04 | 2014-09-04 | Interaction device and method for improving expression capability based on interaction device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104252287A true CN104252287A (en) | 2014-12-31 |
Family
ID=52187262
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410449741.8A Pending CN104252287A (en) | 2014-09-04 | 2014-09-04 | Interaction device and method for improving expression capability based on interaction device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104252287A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106781734A (en) * | 2016-12-29 | 2017-05-31 | 上海清之泓教育科技有限公司 | For the mobile terminal of foreign language teaching |
CN109260733A (en) * | 2018-09-12 | 2019-01-25 | 苏州颗粒智能玩具有限公司 | A kind of educational toy with interaction function |
CN110503938A (en) * | 2019-08-30 | 2019-11-26 | 北京太极华保科技股份有限公司 | The recognition methods of machine conversational language and device, identification engine switching method and device |
CN110867187A (en) * | 2019-10-31 | 2020-03-06 | 北京大米科技有限公司 | Voice data processing method and device, storage medium and electronic equipment |
CN111512642A (en) * | 2017-12-28 | 2020-08-07 | 索尼公司 | Display apparatus and signal generating apparatus |
CN111710213A (en) * | 2020-06-05 | 2020-09-25 | 河南艺树教育科技有限公司 | Quantifiable music teaching system |
CN113377982A (en) * | 2021-06-22 | 2021-09-10 | 读书郎教育科技有限公司 | System and method for instructing students to tell picture story |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1619530A (en) * | 2003-11-18 | 2005-05-25 | 英业达股份有限公司 | Multi information prompted type writing phonetic learning system and method |
CN1622143A (en) * | 2005-01-07 | 2005-06-01 | 华南理工大学 | Multimedia Chinese character electronic tracing book |
CN101366015A (en) * | 2005-10-13 | 2009-02-11 | K·K·K·侯 | Computer-aided method and system for guided teaching and learning |
US7526735B2 (en) * | 2003-12-15 | 2009-04-28 | International Business Machines Corporation | Aiding visual search in a list of learnable speech commands |
CN101944297A (en) * | 2010-08-30 | 2011-01-12 | 深圳市莱科电子技术有限公司 | Parent-child interactive education system and method |
CN102077260A (en) * | 2008-06-27 | 2011-05-25 | 悠进机器人股份公司 | Interactive learning system using robot and method of operating the same in child education |
CN102446428A (en) * | 2010-09-27 | 2012-05-09 | 北京紫光优蓝机器人技术有限公司 | Robot-based interactive learning system and interaction method thereof |
CN102682768A (en) * | 2012-04-23 | 2012-09-19 | 天津大学 | Chinese language learning system based on speech recognition technology |
CN202615639U (en) * | 2012-04-16 | 2012-12-19 | 黄进明 | Touch type click-to-read machine |
CN202838711U (en) * | 2012-07-06 | 2013-03-27 | 北京千家悦网络科技有限公司 | Device for interacting via language and interaction system |
- 2014-09-04: application CN201410449741.8A (CN) filed; published as CN104252287A, legal status Pending
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1619530A (en) * | 2003-11-18 | 2005-05-25 | 英业达股份有限公司 | Multi information prompted type writing phonetic learning system and method |
US7526735B2 (en) * | 2003-12-15 | 2009-04-28 | International Business Machines Corporation | Aiding visual search in a list of learnable speech commands |
CN1622143A (en) * | 2005-01-07 | 2005-06-01 | 华南理工大学 | Multimedia Chinese character electronic tracing book |
CN101366015A (en) * | 2005-10-13 | 2009-02-11 | K·K·K·侯 | Computer-aided method and system for guided teaching and learning |
CN102077260A (en) * | 2008-06-27 | 2011-05-25 | 悠进机器人股份公司 | Interactive learning system using robot and method of operating the same in child education |
CN101944297A (en) * | 2010-08-30 | 2011-01-12 | 深圳市莱科电子技术有限公司 | Parent-child interactive education system and method |
CN102446428A (en) * | 2010-09-27 | 2012-05-09 | 北京紫光优蓝机器人技术有限公司 | Robot-based interactive learning system and interaction method thereof |
CN202615639U (en) * | 2012-04-16 | 2012-12-19 | 黄进明 | Touch type click-to-read machine |
CN102682768A (en) * | 2012-04-23 | 2012-09-19 | 天津大学 | Chinese language learning system based on speech recognition technology |
CN202838711U (en) * | 2012-07-06 | 2013-03-27 | 北京千家悦网络科技有限公司 | Device for interacting via language and interaction system |
Non-Patent Citations (1)
Title |
---|
朱佩荣: "阿莫纳什维利的教学法(下)" [Amonashvili's Teaching Method (Part II)], 《外国教育资料》 [Foreign Education Materials] *
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106781734A (en) * | 2016-12-29 | 2017-05-31 | 上海清之泓教育科技有限公司 | For the mobile terminal of foreign language teaching |
CN111512642A (en) * | 2017-12-28 | 2020-08-07 | 索尼公司 | Display apparatus and signal generating apparatus |
CN111512642B (en) * | 2017-12-28 | 2022-04-29 | 索尼公司 | Display apparatus and signal generating apparatus |
US11579833B2 (en) | 2017-12-28 | 2023-02-14 | Sony Corporation | Display apparatus and signal generation apparatus |
CN109260733A (en) * | 2018-09-12 | 2019-01-25 | 苏州颗粒智能玩具有限公司 | A kind of educational toy with interaction function |
CN110503938A (en) * | 2019-08-30 | 2019-11-26 | 北京太极华保科技股份有限公司 | The recognition methods of machine conversational language and device, identification engine switching method and device |
CN110867187A (en) * | 2019-10-31 | 2020-03-06 | 北京大米科技有限公司 | Voice data processing method and device, storage medium and electronic equipment |
CN110867187B (en) * | 2019-10-31 | 2022-07-12 | 北京大米科技有限公司 | Voice data processing method and device, storage medium and electronic equipment |
CN111710213A (en) * | 2020-06-05 | 2020-09-25 | 河南艺树教育科技有限公司 | Quantifiable music teaching system |
CN113377982A (en) * | 2021-06-22 | 2021-09-10 | 读书郎教育科技有限公司 | System and method for instructing students to tell picture story |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104252287A (en) | Interaction device and method for improving expression capability based on interaction device | |
CN106056996B (en) | A kind of multimedia interactive tutoring system and method | |
CN108000526B (en) | Dialogue interaction method and system for intelligent robot | |
KR20180025121A (en) | Method and apparatus for inputting information | |
CN105320726A (en) | Reducing the need for manual start/end-pointing and trigger phrases | |
CN102945120B (en) | A kind of based on the human-computer interaction auxiliary system in children's application and exchange method | |
US11062708B2 (en) | Method and apparatus for dialoguing based on a mood of a user | |
CN103546790A (en) | Language interaction method and language interaction system on basis of mobile terminal and interactive television | |
CN107564522A (en) | A kind of intelligent control method and device | |
CN112102828A (en) | Voice control method and system for automatically broadcasting content on large screen | |
CN107491286A (en) | Pronunciation inputting method, device, mobile terminal and the storage medium of mobile terminal | |
CN103903613A (en) | Information processing method and electronic device | |
CN112735418A (en) | Voice interaction processing method and device, terminal and storage medium | |
CN111197841A (en) | Control method, control device, remote control terminal, air conditioner, server and storage medium | |
CN109032345A (en) | Apparatus control method, device, equipment, server-side and storage medium | |
CN110825164A (en) | Interaction method and system based on wearable intelligent equipment special for children | |
CN111968641B (en) | Voice assistant awakening control method and device, storage medium and electronic equipment | |
CN111524507A (en) | Voice information feedback method, device, equipment, server and storage medium | |
CN106653020A (en) | Multi-business control method and system for smart sound and video equipment based on deep learning | |
CN112242143B (en) | Voice interaction method and device, terminal equipment and storage medium | |
CN103095927A (en) | Displaying and voice outputting method and system based on mobile communication terminal and glasses | |
CN114489331A (en) | Method, apparatus, device and medium for interaction of separated gestures distinguished from button clicks | |
CN105425942A (en) | Method and system for pushing learning content to display interface | |
CN105893345A (en) | Information processing method and electronic equipment | |
CN112837672B (en) | Method and device for determining conversation attribution, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20141231 |