CN109346077B - Voice system suitable for portable intelligent equipment and use method thereof - Google Patents
Voice system suitable for portable intelligent equipment and use method thereof
- Publication number
- CN109346077B (application CN201811296524.4A)
- Authority
- CN
- China
- Prior art keywords
- voice
- interface
- chat
- recognition
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Images
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72448—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
- H04M1/72454—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2201/00—Electronic components, circuits, software, systems or apparatus used in telephone systems
- H04M2201/40—Electronic components, circuits, software, systems or apparatus used in telephone systems using speech recognition
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2250/00—Details of telephonic subscriber devices
- H04M2250/12—Details of telephonic subscriber devices including a sensor for measuring a physical value, e.g. temperature or motion
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2250/00—Details of telephonic subscriber devices
- H04M2250/22—Details of telephonic subscriber devices including a touch pad, a touch sensor or a touch detector
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2250/00—Details of telephonic subscriber devices
- H04M2250/74—Details of telephonic subscriber devices with voice recognition means
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Computational Linguistics (AREA)
- Environmental & Geological Engineering (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Telephone Function (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention relates to a voice system suitable for a portable intelligent device and a use method thereof. The use method comprises a hardware-sensing method, a software one-key method, a trajectory-recognition method, and a tapping method. The invention has the following advantages: the voice system can be activated in several ways and is therefore usable on different occasions; the different methods can be combined, making the voice system simpler and more convenient to use; and it provides separate recognition of chat speech and of semantic commands, making speech recognition more accurate.
Description
Technical Field
The invention relates to the field of voice recognition, in particular to a voice system suitable for portable intelligent equipment and a using method thereof.
Background
Smart devices have become indispensable electronic products in modern life, with the smartphone as the representative example. Smartphones satisfy people's need for anytime, anywhere interconnection: shopping, payment, reading, following the news, playing games, voice and video calls, and doing business are all accomplished through them. As technology develops, smart living becomes ever more widespread and new techniques emerge endlessly; among them, intelligent voice interaction systems are the ones closest to people's daily lives.
Existing voice interaction systems mainly offer two methods of use. In the first, the user finds a voice application entry point by touch and opens a voice interface to start speech recognition; in the second, speech recognition is awakened by sensing a specific sound while the device is already awake. The first method requires waking the system, unlocking it, and then starting speech recognition through touch operations, which is cumbersome. The second requires a specific voice and voiceprint to be sensed and recognized before recognition wakes; although the waiting time is short, repeated operations require repeated recognition and wake-ups. Moreover, existing voice systems can only recognize speech and cannot recognize sounds other than human speech. In addition, current voice systems are mainly oriented to system operation and cannot be used for voice input to applications, particularly chat software.
Disclosure of Invention
The invention mainly solves the above problems by providing a voice system suitable for portable intelligent devices, and a use method thereof, that offers multiple wake-up modes, wakes conveniently and quickly, and combines voice chat with voice operation.
The technical solution adopted by the invention is a voice system suitable for a portable intelligent device, comprising a main control module, a wake-up module, a sound receiving module, a distance sensing module, a gesture track recognition module, a knocking module, and a one-key voice button displayed on the screen. The sound receiving module, distance sensing module, gesture track recognition module, knocking module, and wake-up module are all connected to the main control module, and the distance sensing module is arranged near the sound receiving module.
The main control module controls the device to perform various operations, and the wake-up module wakes the device. The sound receiving module and the distance sensing module work together: the distance sensing module senses the distance between the phone and the user's body, and when the distance falls within a certain range, the sound receiving module starts receiving voice. The gesture track recognition module senses and recognizes the trajectory along which the user moves the phone after picking it up, and judges from that trajectory whether the user wants to use the voice system. By default, the one-key voice button is displayed only on the device's lock-screen interface, but according to the user's needs it can be set to be displayed on the phone interface at all times.
The invention also provides a use method of the voice system suitable for the portable intelligent device, comprising a hardware-sensing method, a software one-key method, a trajectory-recognition method, and a tapping method. Each method can wake and use the voice system on its own, and the different methods can be combined to make the voice system more convenient, efficient, and accurate to use.
As a preferable scheme of the above scheme, the hardware induction using method includes the following steps:
s01: the user awakens the portable intelligent device;
s02: the main control module triggers the distance sensor;
s03: the distance sensor senses whether an object exists nearby, if so, a signal is sent to the main control module, and the main control module triggers the sound receiver and wakes up the voice system; if not, continuing to detect;
s04: receiving the voice by the sound receiver;
s05: stopping receiving the voice when the distance sensor detects that the object is far away;
s06: the speech system performs speech recognition.
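The hardware-sensing loop above (s01–s06) can be sketched in Python. This is an illustrative sketch only: the sensor, recorder, and recognizer interfaces and the 5 cm threshold are assumptions standing in for the unspecified device hardware, not part of the patent.

```python
NEAR_THRESHOLD_CM = 5.0  # assumed value for the "certain range" in the description

class ProximityGatedCapture:
    """Proximity-gated voice capture: record while an object is near the receiver."""

    def __init__(self, sensor, recorder, recognizer):
        self.sensor = sensor          # object with distance_cm() -> float
        self.recorder = recorder      # object with start() and stop() -> audio
        self.recognizer = recognizer  # callable(audio) -> recognition result

    def run_once(self):
        # s03: poll until an object comes near the sound receiver
        while self.sensor.distance_cm() > NEAR_THRESHOLD_CM:
            pass
        self.recorder.start()         # s04: receive voice
        # s05: stop receiving once the object moves away again
        while self.sensor.distance_cm() <= NEAR_THRESHOLD_CM:
            pass
        audio = self.recorder.stop()
        return self.recognizer(audio)  # s06: perform speech recognition
```

In a real device the busy-wait loops would be replaced by sensor callbacks, but the gating logic is the same.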
As a preferable mode of the above, the software one-key using method includes the steps of:
s11: the user wakes up the portable intelligent equipment, and the portable intelligent equipment displays a screen locking interface;
s12: displaying a one-key voice button on a screen locking interface of the portable intelligent equipment;
s13: pressing a one-key voice button by a user, and preparing the voice system to receive voice;
s14: the voice system receives voice;
s15: the user releases the one-key voice button, and the voice system stops receiving voice;
s16: the speech system performs speech recognition.
As a preferable solution of the above solution, the track identification using method includes the steps of:
s21: the gesture track recognition sensor recognizes the moving track of the portable intelligent device;
s22: matching the identified moving track of the portable intelligent device with a preset track;
s23: if the matching is successful, starting a voice system; if the matching fails, no processing is carried out;
s24: the voice system receives voice;
s25: the voice system performs voice recognition and executes.
The trajectory-recognition method can be combined with the hardware-sensing method, making the voice system's wake-up and voice reception more accurate.
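The matching in s22–s23 can be sketched as a pointwise comparison between the sensed motion trace and a preset template (e.g. "raise phone to chin"). The patent does not specify a matching algorithm; mean Euclidean error over equal-length resampled traces, and the tolerance value, are assumptions for illustration.

```python
import math

def trace_matches(trace, template, tol=0.5):
    """Return True if a sensed motion trace matches a preset trajectory template.

    Both arguments are lists of (x, y, z) samples, assumed already resampled
    to the same length. `tol` is an assumed mean-distance threshold.
    """
    if len(trace) != len(template):
        return False  # s23: matching fails, no processing
    err = sum(math.dist(a, b) for a, b in zip(trace, template)) / len(trace)
    return err <= tol  # s23: success starts the voice system
```

A production system would more likely use dynamic time warping or a trained gesture classifier, but the accept/reject branch of s23 is the same.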
As a preferable mode of the above, the knocking using method includes the steps of:
s31: the user taps the screen according to a certain rhythm;
s32: the knocking module receives a knocking signal;
s33: the main control module matches the received knocking rhythm with a preset knocking rhythm;
s34: if the matching succeeds, the main control module performs the operation corresponding to the preset knocking rhythm; if it fails, no operation is performed and the user is prompted. The knocking module receives the sound or vibration produced when the user taps the screen, and the user can assign different tapping rhythms to different phone operations.
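One plausible way to implement the rhythm matching in s33–s34 is to compare the intervals between taps against a preset interval pattern. The representation and tolerance below are assumptions; the patent does not specify how rhythms are encoded.

```python
def rhythm_matches(tap_times, preset_intervals, tol=0.25):
    """Return True if the gaps between tap timestamps match a preset rhythm.

    `tap_times` are tap timestamps in seconds; `preset_intervals` is the
    stored rhythm as a list of inter-tap gaps. `tol` (seconds) is an assumed
    per-interval tolerance for human timing jitter.
    """
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    if len(intervals) != len(preset_intervals):
        return False  # wrong number of taps: s34 failure branch
    return all(abs(i - p) <= tol for i, p in zip(intervals, preset_intervals))
```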
As a preferable solution of the above solution, the speech recognition includes the steps of:
s41: receiving and analyzing voice;
s42: judging whether preset semantic recognition keyword commands exist in the initial content and the tail end content of the analyzed content, if so, analyzing the semantics, and operating according to the voice; if not, detecting the current interface state;
s43: if the current interface of the equipment is a chat software interface, performing chat voice recognition; and if the current display interface of the equipment is a non-chat software interface, performing voice sentence recognition.
All interfaces within the chat software count as chat software interfaces, including the main chat-software interface, chat window interfaces, and message popups. The semantic recognition keywords are used to distinguish whether input speech is intended for voice chat or for command recognition, and can be defined by the user.
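The dispatch in s42–s43 can be sketched as follows. The English keywords "cancel" and "over" follow Example 1 later in the description (the original keywords would be Chinese), and the current interface state is passed in as a plain string rather than queried from a real device — both are assumptions for illustration.

```python
SEMANTIC_KEYWORDS = ("cancel", "over")  # user-definable per the description

def dispatch(text, interface):
    """Route parsed speech per s42-s43: keyword command, chat, or sentence."""
    words = text.split()
    head, tail = words[0], words[-1]
    # s42: a keyword at the start or end marks a semantic command
    if head in SEMANTIC_KEYWORDS or tail in SEMANTIC_KEYWORDS:
        return "execute_semantics"
    # s43: otherwise branch on the current interface state
    return "chat_recognition" if interface == "chat" else "sentence_recognition"
```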
As a preferable aspect of the foregoing solution, the chat speech recognition includes the steps of:
s51: judge whether the current interface of the device is a chat window; if so, judge whether the chat software supports sending voice messages. If it does, send the voice directly to the contact corresponding to the chat window or chat-message popup; if it does not, convert the voice instruction into text content and send the text to that contact;
s52: if not, judging whether preset contact keywords exist in the initial content and the end content of the analyzed content;
s53: if so, sending the content except the contact name in the voice content to the contact; if not, judging whether the initial content and the end content of the analyzed content have names or not;
s54: if yes, a selection interface appears for the user to select and record the contact person; if not, the semantics are analyzed, and the operation is carried out according to the voice.
The preset contact keywords are the contacts in the phone book or in the chat software; a "name" refers to a word in the speech that resembles a person's name or nickname.
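The decision tree of s51–s54 can be sketched as a routing function. The contact lookup and name detection below are deliberately simplified stand-ins (a set-membership test and a capitalized-word heuristic) for the phonebook/chat-contact matching the description implies; all names are assumptions.

```python
def chat_route(text, in_chat_window, supports_voice, contacts):
    """Route chat speech per s51-s54 and return the action to take."""
    if in_chat_window:
        # s51: already in a conversation, send to the current contact
        return "send_voice" if supports_voice else "send_as_text"
    words = text.split()
    head, tail = words[0], words[-1]
    # s52/s53: a known contact keyword at start or end addresses that contact
    if head in contacts or tail in contacts:
        return "send_to_contact"
    # s53/s54: something that merely looks like a name needs user confirmation
    if head.istitle() or tail.istitle():
        return "ask_user_to_pick_contact"
    # s54: otherwise fall through to semantic command execution
    return "execute_semantics"
```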
As a preferable solution of the above solution, the speech sentence recognition includes the steps of:
s61: judging whether the equipment is currently in a system interface;
s62: if so, analyzing the content semantics and operating according to the semantics; if not, detecting whether the current interface software supports receiving voice;
s63: if yes, sending the voice to the software; and if not, prompting.
The device interface state comprises a chat software interface, a system interface and other software interfaces, wherein the chat software interface comprises a chat interface and a non-chat interface.
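The sentence-recognition branch of s61–s63 is a short routing step; in this sketch the interface state and each app's voice support are passed as parameters rather than queried from a real system, which is an assumption for illustration.

```python
def sentence_route(interface, app_accepts_voice):
    """Route a voice sentence per s61-s63."""
    if interface == "system":
        # s62: on the system interface, analyze semantics and operate directly
        return "execute_semantics"
    if app_accepts_voice:
        # s63: the foreground app accepts voice, so hand the audio over
        return "forward_voice_to_app"
    # s63: otherwise prompt the user that voice is unsupported here
    return "prompt_unsupported"
```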
As a preferable scheme of the above scheme, the chat interface includes interfaces that provide an input window and can send information to one or more persons, and interfaces with a chat-message popup.
The invention has the following advantages: the voice system can be activated in several ways and is therefore usable on different occasions; the different methods can be combined, making the voice system simpler and more convenient to use; and it provides separate recognition of chat speech and of semantic commands, making speech recognition more accurate.
Drawings
FIG. 1 is a block diagram of an embodiment of the present invention.
FIG. 2 is a flow chart of a hardware sensing method of the present invention.
FIG. 3 is a flow chart of a software one-key usage method of the present invention.
FIG. 4 is a flow chart of a method for using trajectory recognition in the present invention.
FIG. 5 is a flow chart of a tapping method of use in the present invention.
FIG. 6 is a flow chart of speech recognition according to the present invention.
FIG. 7 is a flow chart of the chat speech recognition of the present invention.
FIG. 8 is a flow chart of speech sentence recognition according to the present invention.
In the figures: 1, main control module; 2, wake-up module; 3, sound receiving module; 4, distance sensing module; 5, gesture track recognition module; 6, knocking module.
Detailed Description
The technical solution of the present invention is further described below by way of examples with reference to the accompanying drawings.
Example 1:
This embodiment provides a voice system suitable for a portable intelligent device. As shown in Fig. 1, it comprises a main control module 1, a wake-up module 2, a sound receiving module 3, a distance sensing module 4, a gesture track recognition module 5, a knocking module 6, and a one-key voice button displayed on the screen. The sound receiving module, distance sensing module, gesture track recognition module, knocking module, and wake-up module are all connected to the main control module, and the distance sensing module is arranged near the sound receiving module. In this embodiment, the distance sensing module additionally has a biological recognition function to reduce the probability of misjudgment, the wake-up module provides a raise-to-wake function for the phone, and the knocking module receives the vibration produced by tapping and converts it into an electrical signal sent to the main control module.
Correspondingly, the embodiment provides a voice system using method suitable for portable intelligent equipment, which comprises a hardware induction using method, a software one-key using method, a track recognition using method and a knocking using method.
As shown in fig. 2, the hardware sensing using method includes the following steps:
s01: the user wakes the intelligent device using the raise-to-wake function;
s02: the main control module triggers the distance sensor;
s03: the distance sensor senses whether an object exists nearby, if so, a signal is sent to the main control module, and the main control module triggers the sound receiver and wakes up the voice system; if not, continuing to detect;
s04: receiving the voice by the sound receiver;
s05: stopping receiving the voice when the distance sensor detects that the object is far away;
s06: the speech system performs speech recognition.
As shown in fig. 3, the software one-key using method includes the following steps:
s11: the user wakes the intelligent device using the raise-to-wake function, and the portable intelligent device displays the lock-screen interface;
s12: displaying a one-key voice button on a screen locking interface of the portable intelligent equipment;
s13: pressing a one-key voice button by a user, and preparing the voice system to receive voice;
s14: the voice system receives voice;
s15: the user releases the one-key voice button, and the voice system stops receiving voice;
s16: the speech system performs speech recognition.
In this embodiment, the one-key voice button is displayed only on the lock-screen interface and disappears after the device is unlocked and enters the main interface.
As shown in fig. 4, the track recognition using method includes the following steps:
s21: the gesture track recognition sensor recognizes the moving track of the portable intelligent device;
s22: matching the identified moving track of the portable intelligent device with a preset track;
s23: if the matching is successful, starting a voice system; if the matching fails, no processing is carried out;
s24: the voice system receives voice;
s25: the voice system performs voice recognition and executes.
In use, the phone is usually picked up and moved to a suitable position. The trajectory of picking up the phone and holding it in front of the chest differs markedly from the trajectory of picking it up and raising it to the chin, so the latter can serve as the preset trajectory for starting the voice system.
As shown in fig. 5, the tapping use method includes the steps of:
s31: the user taps the screen according to a certain rhythm;
s32: the knocking module receives a knocking signal;
s33: the main control module matches the received knocking rhythm with a preset knocking rhythm;
s34: if the matching is successful, the main control module performs operation corresponding to a preset knocking rhythm; if the failure occurs, the processing is not carried out and prompt is carried out at the same time.
In this embodiment, the user can assign different operations to different tapping rhythms according to preference. For example, the user can assign a rhythm of several continuous taps, a short pause, and then two taps to the phone's remote-control function: the phone then pops up a small window on the current interface for remote control, which the user can switch to a large window. After the user taps out the rhythm assigned to starting the voice system, the phone starts the voice system and the user can input speech. The tapping rhythm is not limited to these examples; it may be several consecutive taps, taps at preset intervals, taps interleaved with pauses, and so on.
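A registry tying preset rhythms to operations, as this embodiment describes, might look like the sketch below. The specific interval values and action names are purely illustrative assumptions; only the rhythm-to-operation mapping idea comes from the description.

```python
# Hypothetical user-configured rhythm table: inter-tap intervals (seconds) -> action.
TAP_ACTIONS = {
    (0.2, 0.2, 0.6): "open_remote_control_window",  # assumed example rhythm
    (0.3, 0.3): "start_voice_system",               # assumed example rhythm
}

def action_for(intervals, tol=0.1):
    """Return the configured action for a tapped rhythm, or None on no match."""
    for preset, action in TAP_ACTIONS.items():
        if len(preset) == len(intervals) and all(
            abs(a - b) <= tol for a, b in zip(preset, intervals)
        ):
            return action
    return None  # s34 failure branch: no operation, prompt the user
```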
As shown in fig. 6, the speech recognition includes the following steps:
s41: receiving and analyzing voice;
s42: judging whether preset semantic recognition keyword commands exist in the initial content and the tail end content of the analyzed content, if so, analyzing the semantics, and operating according to the voice; if not, detecting the current interface state;
s43: if the current interface of the equipment is a chat software interface, performing chat voice recognition; and if the current display interface of the equipment is a non-chat software interface, performing voice sentence recognition.
In this embodiment, two semantic recognition keywords are set: "cancel" and "over". When the user speaks "call A, cancel, call B", the voice system ignores the speech before "cancel", analyzes the semantics after it, and calls B accordingly. When the user speaks "open the address book, cancel", the voice system ignores the input entirely. When the user appends "over" to a voice instruction, the voice system directly analyzes the semantics of the instruction and operates accordingly.
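The keyword handling in this example can be sketched as a small preprocessing function. The English words "cancel" and "over" stand in for the original (presumably Chinese) keywords, and whole-word handling is simplified for illustration.

```python
def apply_keywords(text):
    """Apply the 'cancel'/'over' rules: return the command to execute, or None.

    'cancel' discards everything spoken before it (a bare trailing 'cancel'
    aborts the whole input); a trailing 'over' marks the command complete.
    """
    if "cancel" in text:
        remainder = text.rsplit("cancel", 1)[1].strip()
        return remainder or None  # "... cancel" with nothing after it aborts
    if text.endswith("over"):
        return text[: -len("over")].strip()
    return text
```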
As shown in fig. 7, the chat speech recognition includes the following steps:
s51: judging whether the current interface of the equipment is in a chat interface, if so, judging whether the chat software supports sending voice information, and if the chat software supports sending voice information, directly sending the voice to the chat interface and a contact corresponding to a popup window of the chat information; if the chat software does not support sending voice information, converting the voice instruction into text content and sending the text content to a chat interface and a contact corresponding to the chat information popup window;
s52: if not, judging whether preset contact keywords exist in the initial content and the end content of the analyzed content;
s53: if so, sending the content except the contact name in the voice content to the contact; if not, judging whether the initial content and the end content of the analyzed content have names or not;
s54: if yes, a selection interface appears for the user to select and record the contact person; if not, the semantics are analyzed, and the operation is carried out according to the voice.
As shown in fig. 8, the speech sentence recognition includes the following steps:
s61: judging whether the equipment is currently in a system interface;
s62: if so, analyzing the content semantics and operating according to the semantics; if not, detecting whether the current interface software supports receiving voice;
s63: if yes, sending the voice to the software; and if not, prompting.
Example 2:
Compared with Embodiment 1, this embodiment combines the hardware-sensing method with the trajectory-recognition method. When the user picks up the phone and raises it to the chin, matching the trajectory preset for starting the voice system, the voice system starts but does not yet receive voice commands. Only when the distance sensor senses an object close to the phone does the voice system begin receiving voice commands, and when the distance sensor senses the object moving away from the phone, the voice system stops receiving voice and starts speech recognition.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be employed by those skilled in the art without departing from the spirit or ambit of the invention as defined in the appended claims.
Claims (8)
1. A voice system using method suitable for portable intelligent equipment is characterized by comprising the following steps: the method comprises a hardware induction using method, a software one-key using method, a track identification using method and a knocking using method;
the voice system comprises a main control module (1), a wake-up module (2), a sound receiving module (3), a distance sensing module (4), a gesture track recognition module (5), a knocking module (6) and a one-key voice button displayed on a screen, wherein the sound receiving module, the distance sensing module, the gesture track recognition module, the knocking module and the wake-up module are all connected with the main control module, and the distance sensing module is arranged near the sound receiving module;
the speech system is capable of performing speech recognition, the speech recognition comprising the steps of:
s41: receiving and analyzing voice;
s42: judging whether preset semantic recognition keyword commands exist in the initial content and the tail end content of the analyzed content, if so, analyzing the semantics, and operating according to the semantics; if not, detecting the current interface state;
s43: if the current interface of the equipment is a chat software interface, performing chat voice recognition; if the current display interface of the equipment is a non-chat software interface, carrying out voice sentence recognition;
the chat voice recognition is used for judging whether the received speech belongs to chat content; the speech sentence recognition is used for judging whether the received speech is a command to perform an operation.
2. The method of claim 1, wherein the method comprises: the hardware induction using method comprises the following steps:
s01: the user awakens the portable intelligent device;
s02: the main control module triggers the distance sensor;
s03: the distance sensor senses whether an object exists nearby, if so, a signal is sent to the main control module, and the main control module triggers the sound receiver and wakes up the voice system; if not, continuing to detect;
s04: receiving the voice by the sound receiver;
s05: stopping receiving the voice when the distance sensor detects that the object is far away;
s06: the speech system performs speech recognition.
3. The method of claim 1, wherein the method comprises: the software one-key using method comprises the following steps:
s11: the user wakes up the portable intelligent equipment, and the portable intelligent equipment displays a screen locking interface;
s12: displaying a one-key voice button on a screen locking interface of the portable intelligent equipment;
s13: pressing a one-key voice button by a user, and preparing the voice system to receive voice;
s14: the voice system receives voice;
s15: the user releases the one-key voice button, and the voice system stops receiving voice;
s16: the speech system performs speech recognition.
4. The method of claim 1, wherein the method comprises: the track identification and use method comprises the following steps:
s21: the gesture track recognition sensor recognizes the moving track of the portable intelligent device;
s22: matching the identified moving track of the portable intelligent device with a preset track;
s23: if the matching is successful, starting a voice system; if the matching fails, no processing is carried out;
s24: the voice system receives voice;
s25: the voice system performs voice recognition and executes.
5. The method of claim 1, wherein the method comprises: the knocking use method comprises the following steps:
s31: the user taps the screen according to a certain rhythm;
s32: the knocking module receives a knocking signal;
s33: the main control module matches the received knocking rhythm with a preset knocking rhythm;
s34: if the matching succeeds, the main control module performs the operation corresponding to the preset knocking rhythm; if it fails, no operation is performed and the user is prompted.
6. The method of claim 1, wherein the method comprises: the chat voice recognition comprises the following steps:
s51: judging whether the current interface of the equipment is in a chat interface, if so, judging whether the chat software supports sending voice information, and if the chat software supports sending voice information, directly sending the voice to the chat interface and a contact corresponding to a popup window of the chat information; if the chat software does not support sending voice information, converting the voice instruction into text content and sending the text content to a chat interface and a contact corresponding to the chat information popup window;
s52: if not, judging whether preset contact keywords exist in the initial content and the end content of the analyzed content;
s53: if so, sending the content except the contact name in the voice content to the contact; if not, judging whether the initial content and the end content of the analyzed content have names or not; the name is a person name or a nickname which does not belong to the contact keyword;
s54: if yes, a selection interface appears for the user to select and record the contact person; if not, the semantics are analyzed, and the operation is carried out according to the semantics.
7. The method of claim 1, wherein the method comprises: the speech sentence recognition comprises the following steps:
s61: judging whether the equipment is currently in a system interface;
s62: if so, analyzing the content semantics and operating according to the semantics; if not, detecting whether the current interface software supports receiving voice;
s63: if yes, sending the voice to the software; and if not, prompting.
8. The method of claim 6, wherein the method further comprises: the chat interface comprises an interface which is provided with an input window and can send information to a single person or a plurality of persons and an interface with a chat information popup window.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811296524.4A CN109346077B (en) | 2018-11-01 | 2018-11-01 | Voice system suitable for portable intelligent equipment and use method thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109346077A CN109346077A (en) | 2019-02-15 |
CN109346077B true CN109346077B (en) | 2022-03-25 |
Family
ID=65313341
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811296524.4A Active CN109346077B (en) | 2018-11-01 | 2018-11-01 | Voice system suitable for portable intelligent equipment and use method thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109346077B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110568832A (en) * | 2019-09-27 | 2019-12-13 | 海尔优家智能科技(北京)有限公司 | Remote controller, coordinator, intelligent household equipment and remote control system |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103248751A (en) * | 2012-02-13 | 2013-08-14 | 联想(北京)有限公司 | Electronic device and method for realizing functional control thereof |
CN103269395A (en) * | 2013-04-22 | 2013-08-28 | 聚熵信息技术(上海)有限公司 | Speech control method and device based on screen locking state |
CN104657105A (en) * | 2015-01-30 | 2015-05-27 | 腾讯科技(深圳)有限公司 | Method and device for starting voice input function of terminal |
CN105551487A (en) * | 2015-12-07 | 2016-05-04 | 北京云知声信息技术有限公司 | Voice control method and apparatus |
CN106297801A (en) * | 2016-08-16 | 2017-01-04 | 北京云知声信息技术有限公司 | Method of speech processing and device |
CN107193914A (en) * | 2017-05-15 | 2017-09-22 | 广东艾檬电子科技有限公司 | A kind of pronunciation inputting method and mobile terminal |
CN107592415A (en) * | 2017-08-31 | 2018-01-16 | 努比亚技术有限公司 | Voice transmitting method, terminal and computer-readable recording medium |
CN108270922A (en) * | 2018-01-19 | 2018-07-10 | 西安蜂语信息科技有限公司 | Method of speech processing and device |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8588377B2 (en) * | 2007-03-02 | 2013-11-19 | Cisco Technology, Inc. | Method and system for grouping voice messages |
US9280981B2 (en) * | 2013-02-27 | 2016-03-08 | Blackberry Limited | Method and apparatus for voice control of a mobile device |
- 2018-11-01: CN CN201811296524.4A patent/CN109346077B/en (active)
Also Published As
Publication number | Publication date |
---|---|
CN109346077A (en) | 2019-02-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108735209B (en) | Wake-up word binding method, intelligent device and storage medium | |
US10796694B2 (en) | Optimum control method based on multi-mode command of operation-voice, and electronic device to which same is applied | |
US7158871B1 (en) | Handwritten and voice control of vehicle components | |
CN109712621B (en) | Voice interaction control method and terminal | |
US20060074658A1 (en) | Systems and methods for hands-free voice-activated devices | |
CN108388786A (en) | Unlocked by fingerprint method and device | |
CN108108142A (en) | Voice information processing method, device, terminal device and storage medium | |
CN108345781A (en) | Unlocked by fingerprint method and device | |
US20090153366A1 (en) | User interface apparatus and method using head gesture | |
CN108712566B (en) | Voice assistant awakening method and mobile terminal | |
CN107919138B (en) | Emotion processing method in voice and mobile terminal | |
CN107870674B (en) | Program starting method and mobile terminal | |
CN105489220A (en) | Method and device for recognizing speech | |
CN107845386B (en) | Sound signal processing method, mobile terminal and server | |
CN111833872B (en) | Voice control method, device, equipment, system and medium for elevator | |
WO1995025326A1 (en) | Voice/pointer operated system | |
CN109302528B (en) | Photographing method, mobile terminal and computer readable storage medium | |
CN109847348B (en) | Operation interface control method, mobile terminal and storage medium | |
CN108763475B (en) | Recording method, recording device and terminal equipment | |
JP2004214895A (en) | Auxiliary communication apparatus | |
CN110780751B (en) | Information processing method and electronic equipment | |
CN106775377A (en) | The control method of gesture identifying device, equipment and gesture identifying device | |
CN109346077B (en) | Voice system suitable for portable intelligent equipment and use method thereof | |
CN111367483A (en) | Interaction control method and electronic equipment | |
CN110262767B (en) | Voice input wake-up apparatus, method, and medium based on near-mouth detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||