CN103077716A - Auxiliary starting device and speech control system and method - Google Patents
Auxiliary starting device and speech control system and method
- Publication number
- CN103077716A CN103077716A CN 201210593061 CN201210593061A CN103077716A CN 103077716 A CN103077716 A CN 103077716A CN 201210593061 CN201210593061 CN 201210593061 CN 201210593061 A CN201210593061 A CN 201210593061A CN 103077716 A CN103077716 A CN 103077716A
- Authority
- CN
- China
- Prior art keywords
- mobile terminal apparatus
- wireless transmission module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Mobile Radio Communication Systems (AREA)
- Telephone Function (AREA)
Abstract
The invention relates to an auxiliary starting device, a speech control system and a method. The auxiliary starting device is used to assist in starting a speech system of a mobile terminal device. The mobile terminal device comprises a first wireless transmission module and the speech system, which is coupled to the first wireless transmission module. The auxiliary starting device comprises a second wireless transmission module and a trigger module. The second wireless transmission module is matched with the first wireless transmission module, and the trigger module is coupled to the second wireless transmission module. When the trigger module is triggered, the second wireless transmission module connects to the first wireless transmission module of the mobile terminal device through a wireless communication protocol to start the speech system of the mobile terminal device.
Description
Technical field
The present invention relates to technology for activating a voice function, and in particular to an auxiliary actuating apparatus, a speech control system and a speech control method.
Background technology
With the development of technology, mobile terminal apparatuses equipped with a voice system have become increasingly popular. Such a voice system uses speech understanding technology to let the user communicate with the mobile terminal apparatus. For instance, the user only needs to state a request to the mobile terminal apparatus, such as looking up a train schedule, checking the weather, or making a phone call, and the system takes a corresponding action according to the user's voice signal. The action may be answering the user's question by voice, or driving the mobile terminal apparatus to operate according to the user's instruction.
However, during the technical development of voice systems, several problems remain to be solved, such as the data security of combining voice with a cloud server, and the convenience of starting the voice system.
Regarding the data security of combining voice with a cloud server, the current approach combines the voice interactive system with cloud technology, handing over the speech processing, which is complicated and requires powerful computing capability, to the cloud server. Such a scheme can significantly reduce the hardware cost of the mobile terminal apparatus. However, for operations that rely on the address list, such as making calls or sending text messages, the address list must be uploaded to the cloud server to find the call or message recipient, so the confidentiality of the address list becomes an important issue. Although the cloud server may use an encrypted connection and transmit data instantly without storing it, it is difficult to dispel the user's worry about this practice.
On the other hand, regarding the convenience of starting the voice system, it is currently started mostly by touching an application shown on the screen of the mobile terminal apparatus, or by pressing a physical button of the mobile terminal apparatus. Both designs require starting from the mobile terminal apparatus itself, which is quite inconvenient in some situations. For example, while driving with the mobile terminal apparatus in a pocket or handbag, or while cooking in the kitchen and needing to dial the mobile phone left in the living room to ask a friend for recipe details, the user cannot immediately reach the mobile terminal apparatus, yet still wishes to have the voice system opened.
In addition, the speakerphone (hands-free amplification) function of the mobile terminal apparatus has a similar problem. At present the user can operate the phone with a finger to start the speakerphone function, or hold the phone close to the ear with one hand. But when the user cannot immediately reach the mobile terminal apparatus and still needs the speakerphone function, a design that must be started from the mobile terminal apparatus itself causes inconvenience.
Therefore, how to improve these shortcomings has become an issue to be solved.
Summary of the invention
The invention provides an auxiliary actuating apparatus, a speech control system and a speech control method, which allow a user to wirelessly start the voice dialogue function provided by a mobile terminal apparatus through the auxiliary actuating apparatus, thereby improving convenience of use.
The present invention proposes a speech control system, which comprises an auxiliary actuating apparatus and a mobile terminal apparatus. The auxiliary actuating apparatus serves to assist in opening a voice system of the mobile terminal apparatus, and comprises a first wireless transmission module. The mobile terminal apparatus comprises a second wireless transmission module and the voice system. The second wireless transmission module is matched with the first wireless transmission module, and the voice system is coupled to the second wireless transmission module. When the auxiliary actuating apparatus is triggered, the first wireless transmission module links to the second wireless transmission module of the mobile terminal apparatus through a wireless communication protocol to start the voice system of the mobile terminal apparatus.
The present invention further proposes an auxiliary actuating apparatus for assisting in opening a voice system of a mobile terminal apparatus. The mobile terminal apparatus comprises a first wireless transmission module and the voice system coupled to the first wireless transmission module. The auxiliary actuating apparatus comprises a second wireless transmission module and a trigger module. The second wireless transmission module is matched with the first wireless transmission module, and the trigger module is coupled to the second wireless transmission module. When the trigger module is triggered, the second wireless transmission module links to the first wireless transmission module of the mobile terminal apparatus through a wireless communication protocol to start the voice system of the mobile terminal apparatus.
The present invention also proposes a speech control method, applicable to a mobile terminal apparatus and an auxiliary actuating apparatus, wherein the auxiliary actuating apparatus assists in opening a voice system of the mobile terminal apparatus. The auxiliary actuating apparatus and the mobile terminal apparatus respectively have a first wireless transmission module and a second wireless transmission module that are matched with each other. In the speech control method, the first wireless transmission module receives a trigger signal; the first wireless transmission module is then started and sends a wireless transmission signal; the second wireless transmission module then receives the wireless transmission signal to start the voice system.
Based on the above, when the mobile terminal apparatus receives the wireless transmission signal from the auxiliary actuating apparatus and starts the voice system, the mobile terminal apparatus can begin to receive the user's voice signal, so that the voice signal is parsed by the speech understanding module. Thus, the user wirelessly starts the voice dialogue function provided by the mobile terminal apparatus through the auxiliary actuating apparatus, improving convenience of use.
To make the above features and advantages of the present invention more apparent and comprehensible, embodiments are described in detail below with reference to the accompanying drawings.
Description of drawings
Fig. 1 is a block diagram of a speech control system according to an embodiment of the invention.
Fig. 2 is a block diagram of a speech control system according to another embodiment of the invention.
Fig. 3 is a flowchart of a speech control method according to an embodiment of the invention.
Fig. 4 is a block diagram of a voice interactive system according to an embodiment of the invention.
Fig. 5 is a schematic diagram of a voice communication flow for a voice interactive system according to an embodiment of the invention.
Fig. 6 is a system schematic diagram of a mobile terminal apparatus according to an embodiment of the invention.
Fig. 7 is a flowchart of a method for automatically starting the call speakerphone function of a mobile terminal apparatus according to an embodiment of the invention.
[Description of main element symbols]
100,200: speech control system
110: auxiliary actuating apparatus
112,122: wireless transmission module
114: trigger module
116: wireless charging battery
1162: battery unit
1164: wireless charging module
120,220,420: mobile terminal apparatus
121,426: voice system
124,610: speech sampling module
126: speech synthesis module
127: voice output interface
128,424: communication module
130,410: (cloud) server
132: speech understanding module
1322: speech recognition module
1324: speech processing module
400: voice interactive system
412,422,660: processing unit
414: communication module
428: storage unit
429: address list
430: display unit
620: input unit
630: dialing unit
640: receiver
650: loudspeaker device
670: earphone
S302 ~ S312, S501 ~ S519, S710 ~ S770: step
DRC: call receive data
DTC: call transmit data
SAI: input audio signal
SAO: output audio signal
SIO: input operation signal
Embodiment
Although current mobile terminal apparatuses can provide a voice system that lets the user communicate with them by voice, the user must still start the voice system from the mobile terminal apparatus itself. Therefore, in situations where the user cannot immediately reach the mobile terminal apparatus but wants the voice system opened, the user's need often cannot be satisfied immediately. For this reason, the present invention proposes a device that assists in opening the voice system and a corresponding method, allowing the user to open the voice system more conveniently. To make the content of the invention clearer, embodiments are given below as examples by which the invention can indeed be implemented.
Fig. 1 is a block diagram of a speech control system according to an embodiment of the invention. Referring to Fig. 1, the speech control system 100 comprises an auxiliary actuating apparatus 110, a mobile terminal apparatus 120 and a server 130. In the present embodiment, the auxiliary actuating apparatus 110 can start the voice system of the mobile terminal apparatus 120 through a wireless transmission signal, so that the mobile terminal apparatus 120 communicates with the server 130 according to a voice signal.
Specifically, the auxiliary actuating apparatus 110 comprises a first wireless transmission module 112 and a trigger module 114, wherein the trigger module 114 is coupled to the first wireless transmission module 112. The first wireless transmission module 112 is, for example, a device supporting a communication protocol such as wireless fidelity (Wi-Fi), Worldwide Interoperability for Microwave Access (WiMAX), Bluetooth, ultra-wideband (UWB) or radio-frequency identification (RFID); it can send a wireless transmission signal to correspond with another wireless transmission module and establish a wireless link. The trigger module 114 is, for example, a button or key. In the present embodiment, after the user presses the trigger module 114 and a trigger signal is generated, the first wireless transmission module 112 receives the trigger signal and is started; the first wireless transmission module 112 then sends a wireless transmission signal to the mobile terminal apparatus 120. In one embodiment, the auxiliary actuating apparatus 110 may be a Bluetooth earphone.
It should be noted that although some hands-free headsets/microphones currently also have a design for starting certain functions of the mobile terminal apparatus 120, in another embodiment of the invention the auxiliary actuating apparatus 110 can be different from such a headset/microphone. Such a headset/microphone, connected to the mobile terminal apparatus, replaces the earpiece/microphone of the mobile terminal apparatus 120 for listening and talking, and its start-up capability is an additional design. In contrast, the auxiliary actuating apparatus 110 of the present application is used "only" to open the voice system of the mobile terminal apparatus 120 and has no listening/talking function, so its internal circuit design can be simplified and its cost is lower. In other words, with respect to such a hands-free headset/microphone, the auxiliary actuating apparatus 110 is a separate device; that is, the user may own both a hands-free headset/microphone and the auxiliary actuating apparatus 110 of the present application at the same time.
In addition, the body of the auxiliary actuating apparatus 110 can be an article that the user can reach conveniently, such as an accessory (a ring, watch, earring, necklace or glasses), various portable articles, or a mounted component, for example a driving accessory disposed on a steering wheel, without being limited thereto. That is to say, the auxiliary actuating apparatus 110 is a "lifestyle" device whose built-in configuration allows the user to easily touch the trigger module 114 to open the voice system. For instance, when the body of the auxiliary actuating apparatus 110 is a ring, the user can easily move a finger to press the trigger module 114 of the ring to trigger it. On the other hand, when the body of the auxiliary actuating apparatus 110 is a driving accessory, the user can easily trigger the trigger module 114 of the driving accessory while driving. Moreover, compared with the discomfort of wearing a headset/microphone for listening and talking, the auxiliary actuating apparatus 110 of the present application can open the voice system of the mobile terminal apparatus 120, and even further open the speakerphone function (described in detail later), so that the user need not wear a headset/microphone and can still listen and talk directly through the mobile terminal apparatus 120. Furthermore, because these "lifestyle" articles are ones the user would wear or use anyway, there is no discomfort in use and no adaptation period is needed. For instance, when cooking in the kitchen and needing to dial the mobile phone left in the living room, a user wearing an auxiliary actuating apparatus 110 in the form of a ring, necklace or watch can simply touch the ring, necklace or watch to open the voice system and ask a friend for recipe details. Although some headsets/microphones with a start-up capability can also achieve this, not every cooking session requires calling a friend, so wearing a headset/microphone at all times just to be able to control the mobile terminal apparatus would be quite inconvenient for the user.
In other embodiments, the auxiliary actuating apparatus 110 may also be configured with a wireless charging battery 116 to drive the first wireless transmission module 112. Furthermore, the wireless charging battery 116 comprises a battery unit 1162 and a wireless charging module 1164, wherein the wireless charging module 1164 is coupled to the battery unit 1162. Here, the wireless charging module 1164 can receive energy supplied from a wireless power supply (not illustrated) and convert the energy into electric power to charge the battery unit 1162. Thus, the first wireless transmission module 112 of the auxiliary actuating apparatus 110 can be conveniently charged through the wireless charging battery 116.
On the other hand, the mobile terminal apparatus 120 is, for example, a cell phone, a personal digital assistant (PDA) phone, a smart phone, a pocket PC with communication software installed, a tablet PC or a notebook computer. The mobile terminal apparatus 120 can be any portable mobile device with communication capability, and its scope is not limited here. In addition, the mobile terminal apparatus 120 can use an Android operating system, a Microsoft operating system, a Linux operating system or the like, without being limited thereto.
The mobile terminal apparatus 120 comprises a second wireless transmission module 122, which can be matched with the first wireless transmission module 112 of the auxiliary actuating apparatus 110 and adopts a corresponding wireless communication protocol (such as Wi-Fi, WiMAX, Bluetooth, UWB or RFID) to establish a wireless link with the first wireless transmission module 112. It should be noted that the terms "first" wireless transmission module 112 and "second" wireless transmission module 122 merely indicate that the wireless transmission modules are disposed in different devices, and are not intended to limit the invention.
In other embodiments, the mobile terminal apparatus 120 further comprises a voice system 121 coupled to the second wireless transmission module 122, so that after the user triggers the trigger module 114 of the auxiliary actuating apparatus 110, the voice system 121 can be started wirelessly through the first wireless transmission module 112 and the second wireless transmission module 122. In one embodiment, the voice system 121 can comprise a speech sampling module 124, a speech synthesis module 126 and a voice output interface 127. The speech sampling module 124 receives the voice signal from the user and is, for example, a microphone or another audio-receiving device. The speech synthesis module 126 can query a speech synthesis database which, for example, records text and its corresponding speech, so that the speech synthesis module 126 can find the speech corresponding to a specific text message and synthesize speech for that message. Afterwards, the speech synthesis module 126 outputs the synthesized speech through the voice output interface 127 to be played to the user. The voice output interface 127 is, for example, a loudspeaker or an earphone.
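The database query performed by the speech synthesis module can be pictured as a simple lookup. The Python sketch below is only an assumed illustration; the table contents and function names are invented for demonstration and do not describe the patent's actual synthesis method.

```python
# Illustrative sketch: the speech synthesis module 126 looks up the speech
# corresponding to each word of a text message in a speech synthesis database
# and concatenates it for the voice output interface 127. The database
# contents and function names are assumptions for illustration only.
SPEECH_DB = {          # text fragment -> stand-in for its recorded speech
    "30": b"<waveform for 'thirty'>",
    "degrees": b"<waveform for 'degrees'>",
    "Celsius": b"<waveform for 'Celsius'>",
}

def synthesize(message: str) -> bytes:
    """Concatenate the stored speech of each word of the message."""
    return b" ".join(SPEECH_DB.get(word, b"<spelled out>") for word in message.split())

if __name__ == "__main__":
    print(synthesize("30 degrees Celsius"))   # played through the output interface
```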
In addition, the mobile terminal apparatus 120 may also be configured with a communication module 128. The communication module 128 is, for example, an element capable of transmitting and receiving wireless signals, such as a radio-frequency transceiver. Furthermore, the communication module 128 allows the user to answer or make calls, or use other services provided by a telecommunication operator, through the mobile terminal apparatus 120. In the present embodiment, the communication module 128 can receive a response message from the server 130 through the Internet and, according to this response message, establish a call connection between the mobile terminal apparatus 120 and at least one electronic device, wherein the electronic device is, for example, another mobile terminal apparatus (not illustrated).
The server 130 is, for example, a web server or a cloud server, and has a speech understanding module 132. In the present embodiment, the speech understanding module 132 comprises a speech recognition module 1322 and a speech processing module 1324, wherein the speech processing module 1324 is coupled to the speech recognition module 1322. Here, the speech recognition module 1322 can receive the voice signal transmitted from the speech sampling module 124 and convert the voice signal into a plurality of semantic segments (such as words or phrases). The speech processing module 1324 can then parse, according to these semantic segments, the meaning they represent (such as an intention, time or place), and thereby determine the meaning expressed in the voice signal. In addition, the speech processing module 1324 can also generate a corresponding response message according to the parsing result. In the present embodiment, the speech understanding module 132 can be implemented by a hardware circuit composed of one or several logic gates, or implemented by computer program code. It is worth mentioning that, in another embodiment, the speech understanding module 132 can be configured in a mobile terminal apparatus 220, as in the speech control system 200 shown in Fig. 2.
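The division of labor between the speech recognition module 1322 and the speech processing module 1324 can be pictured with a minimal sketch. The Python fragment below is only an illustrative approximation; the keyword rules, function names and canned responses are assumptions for demonstration, not the patent's actual implementation.

```python
# Illustrative sketch only: a toy "speech understanding module" that maps
# recognized text segments to an intention and a response message.
# The keyword rules below are assumptions for demonstration purposes.

def recognize(voice_signal: str) -> list[str]:
    """Stand-in for the speech recognition module 1322:
    split an utterance into semantic segments."""
    return voice_signal.replace("?", "").split()

def understand(segments: list[str]) -> dict:
    """Stand-in for the speech processing module 1324:
    derive an intention and build a response message."""
    text = " ".join(segments)
    if "temperature" in text:
        return {"intent": "weather_query", "response": "30 degrees Celsius"}
    if "call" in text:
        target = segments[-1]   # naive assumption: last word names the contact
        return {"intent": "dial_request", "target": target,
                "response": f"Please confirm: call {target}?"}
    return {"intent": "unknown", "response": "Sorry, please repeat."}

if __name__ == "__main__":
    for utterance in ["what is the temperature today?", "call Lao Wang"]:
        print(understand(recognize(utterance)))
```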
The speech control method is described below with reference to the above speech control system 100. Fig. 3 is a flowchart of a speech control method according to an embodiment of the invention. Referring to Fig. 1 and Fig. 3 together, in step S302 the auxiliary actuating apparatus 110 sends a wireless transmission signal to the mobile terminal apparatus 120. In detail, when the first wireless transmission module 112 of the auxiliary actuating apparatus 110 is triggered by receiving a trigger signal, the auxiliary actuating apparatus 110 sends the wireless transmission signal to the mobile terminal apparatus 120. Specifically, when the trigger module 114 of the auxiliary actuating apparatus 110 is pressed by the user, the trigger module 114 is triggered by the trigger signal and causes the first wireless transmission module 112 to send the wireless transmission signal to the second wireless transmission module 122 of the mobile terminal apparatus 120, so that the first wireless transmission module 112 links to the second wireless transmission module 122 through a wireless communication protocol. The auxiliary actuating apparatus 110 is used only to open the voice system of the mobile terminal apparatus 120 and has no listening/talking function, so its internal circuit design can be simplified and its cost is lower. In other words, with respect to a hands-free headset/microphone attached to a general mobile terminal apparatus 120, the auxiliary actuating apparatus 110 is another device; that is, the user may own both a hands-free headset/microphone and the auxiliary actuating apparatus 110 of the present application at the same time.
It is worth mentioning that the body of the auxiliary actuating apparatus 110 can be an article that the user can reach conveniently, such as various portable articles (a ring, watch, earring, necklace or glasses) or a mounted component, for example a driving accessory disposed on a steering wheel, without being limited thereto. That is to say, the auxiliary actuating apparatus 110 is a "lifestyle" device whose built-in configuration allows the user to easily touch the trigger module 114 to open the voice system 121. Therefore, the auxiliary actuating apparatus 110 of the present application can open the voice system 121 of the mobile terminal apparatus 120, and even further open the speakerphone function (described in detail later), so that the user need not wear a headset/microphone and can still listen and talk directly through the mobile terminal apparatus 120. In addition, because these "lifestyle" articles are ones the user would wear or use anyway, there is no discomfort in use.
In addition, the first wireless transmission module 112 and the second wireless transmission module 122 can each be in a sleep mode or a work mode. In the sleep mode, the wireless transmission module is off, that is, it does not receive or detect wireless transmission signals and cannot link with other wireless transmission modules. In the work mode, the wireless transmission module is on, that is, it constantly detects wireless transmission signals or can send wireless transmission signals at any time, and can link with other wireless transmission modules. Here, when the trigger module 114 is triggered while the first wireless transmission module 112 is in the sleep mode, the trigger module 114 wakes the first wireless transmission module 112 up so that it enters the work mode, sends the wireless transmission signal to the second wireless transmission module 122, and links to the second wireless transmission module 122 of the mobile terminal apparatus 120 through the wireless communication protocol.
On the other hand, to prevent the first wireless transmission module 112 from staying in the work mode and consuming too much power, if the trigger module 114 is not triggered again within a preset time after the first wireless transmission module 112 enters the work mode (for example, 5 minutes), the first wireless transmission module 112 enters the sleep mode from the work mode and stops linking with the second wireless transmission module 122 of the mobile terminal apparatus 120.
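The sleep/work-mode behavior described above can be summarized as a small state machine. The sketch below is a hypothetical illustration only: the 5-minute timeout is the example value from this paragraph, and the class and method names are invented for clarity; it is not firmware for any particular Bluetooth or Wi-Fi chipset.

```python
# Illustrative sketch: wake-on-trigger behavior of the first wireless
# transmission module 112, with an idle timeout back to sleep mode.
import time

IDLE_TIMEOUT_S = 5 * 60   # example preset time from the embodiment (5 minutes)

class WirelessModule:
    def __init__(self):
        self.mode = "sleep"          # "sleep" = radio off, "work" = radio on
        self.last_trigger = None

    def on_trigger(self):
        """Called when the trigger module 114 is pressed."""
        if self.mode == "sleep":
            self.mode = "work"       # wake the module up
        self.last_trigger = time.monotonic()
        self.send_wake_signal()

    def send_wake_signal(self):
        # In a real device this would transmit the wireless signal that
        # starts the voice system on the mobile terminal apparatus.
        print("wireless transmission signal sent")

    def tick(self):
        """Periodic housekeeping: fall back to sleep after the idle timeout."""
        if self.mode == "work" and self.last_trigger is not None:
            if time.monotonic() - self.last_trigger > IDLE_TIMEOUT_S:
                self.mode = "sleep"  # stop linking, save power

if __name__ == "__main__":
    m = WirelessModule()
    m.on_trigger()
    m.tick()
    print(m.mode)
```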
Afterwards, in step S304, the second wireless transmission module 122 of the mobile terminal apparatus 120 receives the wireless transmission signal to start the voice system 121. Then, in step S306, when the second wireless transmission module 122 detects the wireless transmission signal, the mobile terminal apparatus 120 starts the voice system 121, and the speech sampling module 124 of the voice system 121 can begin to receive a voice signal, for example "What is the temperature today?", "Call Lao Wang." or "Please look up a phone number."
In step S308, the speech sampling module 124 sends the voice signal to the speech understanding module 132 in the server 130, so that the speech understanding module 132 parses the voice signal and generates a response message. Furthermore, the speech recognition module 1322 in the speech understanding module 132 receives the voice signal from the speech sampling module 124 and divides it into a plurality of semantic segments, and the speech processing module 1324 then performs speech understanding on these semantic segments to generate a response message for responding to the voice signal.
In another embodiment of the invention, the mobile terminal apparatus 120 can further receive the response message generated by the speech processing module 1324, and accordingly output the response message through the voice output interface 127 or perform the operation designated in the response message. In step S310, the speech synthesis module 126 of the mobile terminal apparatus 120 receives the response message generated by the speech understanding module 132 and performs speech synthesis according to the content of the response message (such as words or phrases) to produce a voice response. And in step S312, the voice output interface 127 receives and outputs this voice response.
For example, when the user presses the trigger module 114 of the auxiliary actuating apparatus 110, the first wireless transmission module 112 sends a wireless transmission signal to the second wireless transmission module 122, so that the mobile terminal apparatus 120 starts the speech sampling module 124 of the voice system 121. Here, assuming the voice signal from the user is a question, for example "What is the temperature today?", the speech sampling module 124 receives this voice signal and sends it to the speech understanding module 132 in the server 130 for parsing, and the speech understanding module 132 sends the response message produced by the parsing back to the mobile terminal apparatus 120. Assuming the content of the response message generated by the speech understanding module 132 is "30 ℃", the speech synthesis module 126 synthesizes the message "30 ℃" into a voice response, and the voice output interface 127 plays this voice response to the user.
In another embodiment, assume the voice signal from the user is an imperative sentence, for example "Call Lao Wang." The speech understanding module 132 can then recognize this imperative sentence as a request to dial Lao Wang. In addition, the speech understanding module 132 can generate a new response message, for example "Please confirm: call Lao Wang?", and send this new response message to the mobile terminal apparatus 120. Here, the speech synthesis module 126 synthesizes the new response message into a voice response and plays it to the user through the voice output interface 127. Further, when the user replies with an affirmative answer such as "yes", the speech sampling module 124 similarly receives and transmits this voice signal to the server 130 for the speech understanding module 132 to parse. After the speech understanding module 132 finishes parsing, it records a dialing command in the response message and sends it to the mobile terminal apparatus 120. At this time, the communication module 128 can look up the telephone number of "Lao Wang" according to the contact information recorded in a call database, and establish a call connection between the mobile terminal apparatus 120 and another electronic device, that is, dial "Lao Wang".
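From the mobile terminal's point of view, steps S302 to S312 reduce to a simple request/response loop. The sketch below is a hedged illustration under the assumption that the cloud server exposes some parsing service; every function name is a placeholder standing in for hardware or a remote call, not an actual API of the patent.

```python
# Illustrative sketch of the client-side flow of Fig. 3 (steps S302-S312).
# All functions are placeholders standing in for hardware and the cloud
# speech understanding service; none of them is a real API.

def receive_trigger() -> bool:
    return True                              # S302: wireless trigger from device 110

def sample_voice() -> str:
    return "what is the temperature today"   # S306: speech sampling module 124

def parse_on_server(voice: str) -> str:
    # S308: the speech understanding module 132 on server 130 returns a
    # response message; a canned answer stands in for it here.
    return "30 degrees Celsius"

def synthesize(text: str) -> bytes:
    return text.encode()                     # S310: speech synthesis module 126

def play(audio: bytes) -> None:
    print("playing:", audio.decode())        # S312: voice output interface 127

if __name__ == "__main__":
    if receive_trigger():                    # voice system 121 is started
        voice = sample_voice()
        response = parse_on_server(voice)
        play(synthesize(response))
```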
In other embodiments, besides the above speech control system 100, the speech control system 200 or other similar systems can also be used to perform the above operating method, and the invention is not limited to the above embodiments.
In summary, in the speech control system and method of the present embodiment, the auxiliary actuating apparatus can wirelessly open the voice function of the mobile terminal apparatus. Moreover, the body of this auxiliary actuating apparatus can be a conveniently reachable, "lifestyle" article of the user, such as an accessory (a ring, watch, earring, necklace or glasses), various portable articles, or a mounted component, for example a driving accessory disposed on a steering wheel, without being limited thereto. Thus, compared with the discomfort of additionally wearing a hands-free headset/microphone, opening the voice system of the mobile terminal apparatus 120 with the auxiliary actuating apparatus 110 of the present application is more convenient.
It should be noted that the above server 130 with the speech understanding module may be a web server or a cloud server, and a cloud server may involve the issue of the user's privacy. For example, the user would need to upload the complete address list to the cloud server in order to complete operations related to the address list, such as making calls or sending text messages. Even if the cloud server uses an encrypted connection and transmits data instantly without storing it, it is still difficult to eliminate the user's concern. Accordingly, another speech control method and a corresponding voice interactive system are provided below, in which the mobile terminal apparatus can carry out the voice interaction service with the cloud server without uploading the complete address list. To make the content of the invention clearer, embodiments are given below as examples by which the invention can indeed be implemented.
Fig. 4 is a block diagram of a voice interactive system according to an embodiment of the invention. Referring to Fig. 4, the voice interactive system 400 can comprise a cloud server 410 and a mobile terminal apparatus 420, which can be connected to each other. The voice interactive system 400 carries out the voice interaction service through the cloud server 410. That is, speech recognition is processed by the cloud server 410, which has powerful computing capability, thereby reducing the data processing load of the mobile terminal apparatus 420 and also improving the accuracy and speed of speech recognition.
The mobile terminal apparatus 420 comprises a processing unit 422, a communication module 424, a voice system 426 and a storage unit 428. In one embodiment, the mobile terminal apparatus 420 is also configured with a display unit 430. The processing unit 422 is coupled to the communication module 424, the voice system 426, the storage unit 428 and the display unit 430. The storage unit 428 further stores an address list 429.
The processing unit 422 is hardware with computing capability (such as a chipset or a processor) for controlling the overall operation of the mobile terminal apparatus 420. The processing unit 422 is, for example, a central processing unit (CPU), or another programmable microprocessor, digital signal processor (DSP), programmable controller, application-specific integrated circuit (ASIC), programmable logic device (PLD) or other similar device.
The communication module 424 is, for example, a network card, which can communicate with the cloud server 410 via wired or wireless transmission. The voice system 426 comprises at least a sound-receiving device such as a microphone, to convert sound into an electronic signal. The storage unit 428 is, for example, a random access memory (RAM), a read-only memory (ROM), a flash memory or a magnetic disk storage device. The display unit 430 is, for example, a liquid crystal display (LCD) or a touch screen with a touch module.
On the other hand, the cloud server 410 is a physical host with powerful computing capability, or a super virtual machine composed of a group of physical hosts, used to perform large-scale tasks. Here, the cloud server 410 comprises a processing unit 412 and a communication module 414, and the communication module 414 of the cloud server 410 is coupled to its processing unit 412. The communication module 414 communicates with the communication module 424 of the mobile terminal apparatus 420; it is, for example, a network card, which can communicate with the mobile terminal apparatus 420 via wired or wireless transmission.
In addition, the processing unit 412 of the cloud server 410 has more powerful computing capability and is, for example, composed of a multi-core CPU or a CPU array formed by a plurality of CPUs. The processing unit 412 of the cloud server 410 comprises at least, for example, the speech understanding module 132 shown in Fig. 1. The processing unit 412 can parse the voice signal received from the mobile terminal apparatus 420 through the speech understanding module, and the cloud server 410 sends the parsing result to the mobile terminal apparatus 420 through the communication module 414, so that the mobile terminal apparatus 420 can perform a corresponding action according to the result.
The voice communication flow of the voice interactive system is described below with reference to Fig. 4.
Fig. 5 is a schematic diagram of a voice communication flow for a voice interactive system according to an embodiment of the invention. Referring to Fig. 4 and Fig. 5 together, in step S501 the mobile terminal apparatus 420 receives a first voice signal through the voice system 426, and in step S503 sends the first voice signal to the cloud server 410 through the communication module 424. Here, the mobile terminal apparatus 420 receives the first voice signal from the user through, for example, the microphone or other elements of the voice system 426. For instance, assuming the mobile terminal apparatus 420 is a mobile phone and the user says "Call Lao Wang" to the phone, the voice system 426 receives the voice signal "Call Lao Wang" and then sends it to the cloud server 410 through the communication module 424. In one embodiment, the voice system 426 can be started by the auxiliary actuating apparatus shown in Fig. 1 to Fig. 3.
Then, in step S505, in the cloud server 410, the processing unit 412 parses the first voice signal using the speech understanding module, and in step S507 the processing unit 412 sends the communication target obtained from the first voice signal to the mobile terminal apparatus 420 through the communication module 414. Taking the first voice signal "Call Lao Wang" as an example, the processing unit 412 of the cloud server 410 parses the first voice signal using the speech understanding module and thereby obtains a communication instruction and a communication target. That is, the speech understanding module parses the first voice signal into "call" and "Lao Wang", so the processing unit 412 of the cloud server 410 can determine that the communication instruction is a dialing instruction and that the communication target is "Lao Wang", and send them to the mobile terminal apparatus 420 through the communication module 414.
Then, in step S509, in the mobile terminal apparatus 420, the processing unit 422 searches the address list 429 in the storage unit 428 according to the communication target and obtains a selection list that matches the communication target. For example, the processing unit 422 of the mobile terminal apparatus 420 finds several contact entries containing "Wang" while searching the address list, thereby generates the selection list, and displays it on the display unit 430 for the user to choose from.
For instance, Table 1 shows an example of a selection list obtained by searching the address list for contact entries matching the communication target "Lao Wang". In this example, four matching contact entries are found, and the contact names in those entries, namely "Wang Congming", "Wang Wu", "Wang Anshi" and "Wang Wei", are written into the selection list.
Table 1
1. Wang Congming
2. Wang Wu
3. Wang Anshi
4. Wang Wei
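How step S509 might look in practice can be sketched as a simple substring search over the local address list. This is only an assumed illustration: the surname-based matching rule, the sample data and the decision to include only names (no phone numbers) follow the example of Table 1 and the privacy discussion of this embodiment, not an actual implementation.

```python
# Illustrative sketch of step S509: search the local address list 429 for
# entries matching the communication target and build a selection list that
# contains only contact names (no phone numbers are uploaded).
ADDRESS_LIST = {                  # hypothetical local address list
    "Wang Congming": "0911-111111",
    "Wang Wu":       "0922-222222",
    "Wang Anshi":    "0933-333333",
    "Wang Wei":      "0944-444444",
    "Li Si":         "0955-555555",
}

def build_selection_list(target: str) -> list[dict]:
    """Return numbered entries whose name matches the communication target."""
    key = target.replace("Lao ", "")          # "Lao Wang" -> surname "Wang"
    matches = [name for name in ADDRESS_LIST if key in name]
    return [{"number": i, "name": name} for i, name in enumerate(matches, 1)]

if __name__ == "__main__":
    print(build_selection_list("Lao Wang"))
    # -> the four "Wang" entries of Table 1, numbered 1 to 4
```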
If the user then speaks to the mobile terminal apparatus 420, as shown in step S511, the mobile terminal apparatus 420 receives a second voice signal through the voice system 426. When the mobile terminal apparatus 420 receives the second voice signal, in step S513 it sends the second voice signal and the selection list to the cloud server 410 simultaneously through the communication module 424. For example, when the user, after viewing the selection list, says "the 1st" or "Wang Congming" to the mobile terminal apparatus 420 and thereby forms the second voice signal, the mobile terminal apparatus 420 sends the selection list together with the second voice signal to the cloud server 410.
In addition, the user may also say something else entirely; that is, regardless of what the user says, as long as the mobile terminal apparatus 420 receives the second voice signal, it sends the second voice signal and the selection list to the cloud server 410 simultaneously.
It is worth mentioning that, in the present application, the "complete" address list is not uploaded to the cloud server 410; only the entries matching the communication target are uploaded, in the form of the "selection list", for the second round of voice signal parsing. In other words, only "part" of the contact data is uploaded. In one embodiment, the selection list uploaded by the mobile terminal apparatus 420 to the cloud server 410 can include only the contact names, without telephone numbers or other information. The content of the uploaded selection list can be set according to the user's needs.
In addition, it should be noted that, in the present application, the second voice signal and the selection list are sent to the cloud server 410 simultaneously. Compared with existing communication methods that do not upload the address list but must parse each voice signal and compare each list in separate passes, that is, one step carries only one piece of information, the voice interaction method of the present application is faster.
Then, in the cloud server 410, the processing unit 412 parses the second voice signal using the speech understanding module, as shown in step S515. For example, if the speech understanding module parses the content of the second voice signal as "the 3rd", the processing unit 412 of the cloud server 410 can further look up the 3rd contact entry in the selection list received from the mobile terminal apparatus 420. Taking Table 1 as an example, the 3rd contact entry is "Wang Anshi".
It should be noted that, owing to the design of the speech understanding module 132 shown in Fig. 1, the user does not need to speak the complete content of a selection list entry as the second voice signal, such as "the 1st, Wang Congming"; speaking only part of the selection list content, such as "the 1st" or "Wang Congming", as the second voice signal, uploaded together with the selection list to the speech understanding module 132 of the cloud server, is enough to parse out the select target. In other words, the selection list content comprises a plurality of entries, each entry has at least a number and the content corresponding to that number (such as a name or telephone number), and the second voice signal corresponds to the number or to part of the content corresponding to that number.
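The matching described here, where the second voice signal may carry either an entry number or part of an entry's content, can be pictured as follows. The sketch is an assumed illustration of the idea only; parsing "the 3rd" style ordinals with a regular expression and matching names by substring are simplifications, not the patent's actual parsing logic.

```python
# Illustrative sketch of step S515: resolve the select target from the
# second voice signal, which may name either an entry number or part of
# an entry's content in the uploaded selection list.
import re

SELECTION_LIST = [                     # the selection list of Table 1
    {"number": 1, "name": "Wang Congming"},
    {"number": 2, "name": "Wang Wu"},
    {"number": 3, "name": "Wang Anshi"},
    {"number": 4, "name": "Wang Wei"},
]

def resolve_select_target(second_voice_text: str):
    # Case 1: the utterance contains an ordinal such as "the 3rd".
    m = re.search(r"(\d+)", second_voice_text)
    if m:
        n = int(m.group(1))
        for entry in SELECTION_LIST:
            if entry["number"] == n:
                return entry
    # Case 2: the utterance contains part of an entry's name.
    text = second_voice_text.lower().strip()
    for entry in SELECTION_LIST:
        name = entry["name"].lower()
        if name in text or text in name:
            return entry
    return None

if __name__ == "__main__":
    print(resolve_select_target("the 3rd"))        # -> Wang Anshi
    print(resolve_select_target("Wang Congming"))  # -> Wang Congming
```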
Afterwards, in step S517, the cloud server 410 sends the communication instruction and the select target to the mobile terminal apparatus 420 through its communication module 414. In other embodiments, the cloud server 410 may also transmit the communication instruction to the mobile terminal apparatus 420 for storage right after parsing the first voice signal in step S505, and transmit the select target later; the time point at which the communication instruction is transmitted is not limited here.
After the mobile terminal apparatus 420 receives the communication instruction and the select target, in step S519 the mobile terminal apparatus 420 performs, through its processing unit 422, the communication operation corresponding to the communication instruction on the select target. The communication instruction is an instruction that needs to use the address list content, such as a dialing instruction or a messaging instruction, and is obtained by the cloud server 410 based on the first voice signal. For example, if the content of the first voice signal is "Call Lao Wang", the cloud server 410 determines from "call" that the communication instruction is a dialing instruction. As another example, if the content of the first voice signal is "Send a text message to Lao Wang", the cloud server 410 determines from "send a text message" that the communication instruction is a messaging instruction. The select target, on the other hand, is obtained by the cloud server 410 based on the second voice signal and the selection list. Taking the selection list of Table 1 as an example, if the content of the second voice signal is "the 3rd", the cloud server 410 determines that the select target is "Wang Anshi". The corresponding communication operation is, for example, dialing the select target, or opening a messaging interface to send a text message to the select target.
It should be noted that the selection list obtained by the mobile terminal apparatus 420 in step S509 can include only contact names, without telephone numbers or other information. Therefore, when the mobile terminal apparatus 420 receives the communication instruction and the select target from the cloud server 410, the processing unit 422 of the mobile terminal apparatus 420 retrieves the telephone number corresponding to the select target from the address list and performs the communication operation corresponding to the communication instruction according to that telephone number.
In addition, in other embodiments, the selection list obtained by the mobile terminal apparatus 420 in step S509 may include both the contact names and the telephone numbers, or other information as well. In that case, in step S515 the processing unit 412 of the cloud server 410 can obtain the telephone number of the select target based on the second voice signal and the selection list, and in step S517 send the communication instruction and the telephone number to the mobile terminal apparatus 420. Accordingly, in step S519 the mobile terminal apparatus 420 performs the communication operation corresponding to the communication instruction according to the telephone number.
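Step S519 then reduces to a dispatch on the communication instruction, with the phone number looked up locally when the selection list contained names only. The following is a hedged sketch with placeholder functions; it does not correspond to any real telephony or messaging API.

```python
# Illustrative sketch of step S519: the processing unit 422 looks up the
# phone number of the select target in the local address list 429 and
# dispatches on the communication instruction. The dial/send functions are
# placeholders, not a real telephony or messaging API.
ADDRESS_LIST = {"Wang Anshi": "0933-333333"}

def dial(number: str) -> None:
    print("dialing", number)

def send_text_message(number: str, body: str) -> None:
    print("sending to", number, ":", body)

def execute_communication(instruction: str, select_target: str, body: str = "") -> None:
    number = ADDRESS_LIST[select_target]      # the number never left the device
    if instruction == "dial":
        dial(number)
    elif instruction == "message":
        send_text_message(number, body)

if __name__ == "__main__":
    execute_communication("dial", "Wang Anshi")
```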
In summary, the present application simultaneously uploads the selection list produced from the first voice signal and the second voice signal, from which the select target is obtained, to the cloud server with powerful computing capability to perform the speech understanding procedure, and this selection list comprises only part of the address list. Therefore, the speech control system of the present application can achieve both higher processing efficiency and better security.
On the other hand, it should be noted that although the above auxiliary actuating apparatus solves the problem that the user cannot immediately reach the mobile terminal apparatus but needs to use the voice system, so that the user can interact with the mobile terminal apparatus through speech understanding technology, the speakerphone function currently still has to be started from the mobile terminal apparatus itself. When the user cannot immediately reach the mobile terminal apparatus but needs the speakerphone function, a design that must be started from the mobile terminal apparatus itself causes inconvenience. For this reason, the present invention proposes a method for opening the speakerphone function and a corresponding device, allowing the user to open the speakerphone function more conveniently. To make the content of the invention clearer, embodiments are given below as examples by which the invention can indeed be implemented.
Fig. 6 is a system schematic diagram of a mobile terminal apparatus according to an embodiment of the invention. Referring to Fig. 6, in the present embodiment the mobile terminal apparatus 600 comprises a voice system, an input unit 620, a dialing unit 630, a receiver 640, a loudspeaker device 650 and a processing unit 660. In another embodiment of the invention, the mobile terminal apparatus 600 can also comprise an earphone 670. The mobile terminal apparatus 600 can be a mobile phone or another similar electronic device; it is similar to the mobile terminal apparatus 120 of Fig. 1, and its details, which can be found in the foregoing description, are not repeated here. The processing unit 660 is coupled to the speech sampling module 610, the input unit 620, the dialing unit 630, the receiver 640, the loudspeaker device 650 and the earphone 670. The voice system comprises the speech sampling module 610, which converts sound into an input audio signal SAI and can be a microphone or a similar electronic component. In other words, the speech sampling module 610 can be regarded as part of the voice system, which is similar to the voice system 121 of Fig. 1; its details, which can be found in the foregoing description, are not repeated here. The input unit 620 provides an input operation signal SIO corresponding to the user's operation, and can be a keyboard, a touch panel or a similar electronic component. The dialing unit 630 performs the dialing function under the control of the processing unit 660. The receiver 640, the loudspeaker device 650 and the earphone 670 convert the output audio signal SAO provided by the processing unit 660 into sound, and can therefore be regarded as voice output interfaces. The loudspeaker device 650 is, for example, a speaker, and the earphone 670 can be at least one of a wired earphone and a wireless earphone.
As can be seen from the foregoing, the voice function can be opened by pressing a physical button of the mobile terminal apparatus, by operating the screen, or by using the auxiliary actuating apparatus of the present invention. Assuming the voice function has been opened, when the user speaks to the mobile terminal apparatus 600, the sound is converted into the input audio signal SAI by the speech sampling module 610. The processing unit 660 performs content matching according to the input audio signal SAI against information such as contact names or telephone numbers in the address list; when the information in the address list matches the input audio signal SAI, the processing unit 660 opens the dialing function of the dialing unit 630 and the loudspeaker device 650, so that after the connection is established the user can talk with the contact. In detail, the processing unit 660 converts the input audio signal SAI into an input word string and compares the input word string with the contact names and telephone numbers in the address list. When the input word string matches one of the contact names or telephone numbers, the processing unit 660 opens the dialing function of the dialing unit 630. Conversely, when the input word string matches none of the contact names and telephone numbers, the processing unit 660 does not open the dialing function of the dialing unit 630.
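The comparison of the input word string against the address list can be sketched as below. The contact data and the matching rule are assumptions for illustration only; they stand in for the content matching described in this paragraph rather than reproduce it.

```python
# Illustrative sketch: the processing unit 660 converts the input audio
# signal SAI into an input word string and compares it against contact
# names and telephone numbers before opening the dialing function.
ADDRESS_LIST = {"Lao Wang": "0911-111111", "Wang Anshi": "0933-333333"}

def matches_address_list(input_string: str):
    """Return the matching phone number, or None if nothing matches."""
    for name, number in ADDRESS_LIST.items():
        if name.lower() in input_string.lower() or number in input_string:
            return number
    return None

def on_input_speech(input_string: str) -> None:
    number = matches_address_list(input_string)
    if number is not None:
        print("open dialing function of dialing unit 630, call", number)
        print("enable loudspeaker device 650 for the call")  # speakerphone
    else:
        print("dialing function not opened")

if __name__ == "__main__":
    on_input_speech("call Lao Wang")
    on_input_speech("what time is it")
```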
In other words, in the present embodiment, when the processing unit 660 confirms that the input audio signal SAI matches the content in the address list, the processing unit 660 provides an enable signal in order to automatically open the call speakerphone function of the mobile terminal apparatus 600. Specifically, the processing unit 660 automatically supplies the enable signal to the loudspeaker device 650, converts the input audio signal SAI into call transmit data DTC, and transmits the call transmit data DTC to the contact (another mobile terminal apparatus, not illustrated) through the dialing unit 630. At the same time, the processing unit 660 receives call receive data DRC through the dialing unit 630 and provides the output audio signal SAO to the loudspeaker device 650 according to the call receive data DRC, so that the output audio signal SAO is converted into sound and output as amplified speech.
It is worth mentioning that at present the speakerphone function is still started from the mobile terminal apparatus itself; when the user cannot immediately reach the mobile terminal apparatus but needs the speakerphone function, such a design causes inconvenience. In the present embodiment, by contrast, once the voice system is open, the act of dialing by voice further opens the speakerphone function, which is convenient for the user's call.
In another embodiment, when both the loudspeaker device 650 and the earphone 670 are connected to the mobile terminal apparatus 600 (that is, both the loudspeaker device 650 and the earphone 670 are coupled to the processing unit), and the input provided to the processing unit 660 is the input audio signal SAI, the processing unit 660 can, according to the user's setting, make the earphone 670 the first-priority talking mode (the preset value) and the loudspeaker device 650 the second-priority talking mode, or alternatively set the loudspeaker device 650 as the first-priority talking mode (the preset value) and the earphone 670 as the second-priority talking mode.
In addition, in another embodiment, when the user provides the input operation signal SIO through the input unit 620, it indicates that the user does not have the problem of being unable to reach the mobile terminal apparatus immediately. Therefore, after the processing unit 660 performs address list matching according to the input operation signal SIO, the output audio signal SAO is sent through the processing unit 660 and the dialing unit 630 to a voice output interface such as the loudspeaker device 650, the receiver 640 or the earphone 670, depending on the output interface preset by the user (the preset value).
For instance, when the user says "Call Lao Wang" to the mobile terminal apparatus, the speech sampling module 610 receives this sound and converts it into the input audio signal SAI, which is parsed by the speech understanding module to obtain a communication instruction (for example, making a call) and a communication target (for example, Lao Wang), and then a select target (for example, Wang Anshi). Because the communication instruction was parsed from "voice", the processing unit 660 automatically raises the enable signal to open the loudspeaker device 650 for the subsequent amplified call. That is to say, after the dialing unit completes the dialing, the user can talk with Lao Wang directly through the loudspeaker device. Alternatively, in another example, when the user says "Answer the call" to the mobile terminal apparatus, the speech sampling module 610 receives this sound and converts it into the input audio signal SAI, which is parsed by the speech understanding module to obtain a communication instruction (for example, answering the call). Because the communication instruction was parsed from "voice", the processing unit 660 automatically raises the enable signal to open the loudspeaker device 650 so that the user can talk with the caller directly through the loudspeaker device. The configuration of the speech understanding module and the related details have been described above and are not repeated here. In addition, with respect to the communication target and the finally obtained select target, the embodiment can adopt the aforementioned method using the cloud server or other similar methods, which are not repeated here. Of course, as mentioned above, when the loudspeaker device 650 and the earphone 670 coexist, the processing unit 660 can, according to the user's setting, make the earphone 670 the first-priority talking mode and the loudspeaker device 650 the second-priority talking mode.
In another example, if the user selects "Wang Anshi" from the address list by a button or by touch control on a display unit 430 similar to that of Fig. 4, the input operation signal SIO is provided through the input unit 620. The processing unit 660 then matches the address book data according to the input operation signal SIO, and, through the processing unit 660, the dialing unit 630, and the user's setting, sends the output audio signal SAO to a voice output interface such as the public address equipment 650, the receiver 640, or the earphone 670, so that the user can talk with Wang Anshi.
Based on the above, an automatic starting method of the conversation sound amplification function of a mobile terminal apparatus can be summarized. Fig. 7 is a flowchart of the automatic starting method of the conversation sound amplification function of the mobile terminal apparatus according to an embodiment of the invention. Referring to Fig. 7, in the present embodiment, it is first determined whether the processing unit 660 of the mobile terminal apparatus 600 is to enable the dialing function (step S710). In other words, the input operation signal SIO from the input unit 620 or the input speech signal SAI from the speech sampling module 610 may not be related to dialing; it may correspond to another operation, such as enabling a computer function in the mobile terminal apparatus or querying the weather through the voice system. When the processing unit 660 determines from the input signal that the dialing function of the dialing unit 630 is to be enabled, that is, the input signal is related to a dialing action, the result of step S710 is "yes", and step S720 is executed; otherwise, when the processing unit 660 determines from the input signal that the dialing function will not be enabled, that is, the result of step S710 is "no", the automatic starting method of the conversation sound amplification function ends.
Then, in step S720, it is determined whether the processing unit 660 receives an input speech signal SAI for enabling the dialing function. When the processing unit 660 receives, from the speech sampling module 610, the input speech signal SAI for enabling the dialing function, that is, the result of step S720 is "yes", it is detected whether the processing unit 660 is connected to the earphone 670 (step S730). When the processing unit 660 is connected to the earphone 670, that is, the result of step S730 is "yes", the processing unit 660 automatically provides an enabling signal to start the earphone and sends the output audio signal SAO to the earphone 670 (step S740); otherwise, when the processing unit 660 is not connected to the earphone 670, that is, the result of step S730 is "no", the processing unit 660 automatically provides an enabling signal to start the public address equipment 650 and sends the output audio signal SAO to the public address equipment 650 of the mobile terminal apparatus 600, thereby enabling the conversation sound amplification function of the mobile terminal apparatus 600 (step S750). It is worth mentioning that, when the processing unit 660 receives the input speech signal for enabling the dialing function, the above steps S730 to S750 are performed under the condition that the user has set the earphone 670 as the preferred voice output interface (assuming both the public address equipment 650 and the earphone 670 are connected). In other embodiments, the user may instead set the public address equipment 650 as the preferred voice output interface. Of course, when only one of the earphone 670 and the public address equipment 650 is connected, the connected device may be set as the preferred voice output interface. Those skilled in the art may modify the above implementation steps according to their requirements.
On the other hand, when the processing unit 660 does not receive, from the speech sampling module 610, an input speech signal SAI for enabling the dialing function, that is, the result of step S720 is "no", it is then detected whether the processing unit 660 is connected to the earphone 670 (step S760). Specifically, if the processing unit 660 does not receive the input speech signal SAI from the speech sampling module 610 but is still to enable the dialing function, it means that the processing unit 660 receives the input operation signal SIO from the input unit 620 and that this input operation signal SIO is related to a dialing action. When the processing unit 660 is connected to the earphone 670, that is, the result of step S760 is "yes", the processing unit 660 automatically provides an enabling signal to start the earphone 670 and sends the output audio signal SAO to the earphone 670 (step S740). Otherwise, when the processing unit 660 is not connected to the earphone 670, that is, the result of step S760 is "no", the processing unit 660 sends the output audio signal SAO to one of the public address equipment and the receiver according to a preset value (step S770). The order of the above steps is for illustration only, and the embodiments of the invention are not limited thereto. It is worth mentioning that, when the result of step S760 is "yes" and the output audio signal SAO is therefore provided to the earphone 670, this is under the condition that the user has set the earphone 670 as the preferred voice output interface (assuming the receiver 640, the public address equipment 650, and the earphone 670 are all connected). In other embodiments, the user may instead set the receiver 640 or the public address equipment 650 as the preferred voice output interface. Of course, when only one of the receiver 640, the public address equipment 650, and the earphone 670 is connected, the connected device may be set as the preferred voice output interface. Those skilled in the art may modify the above implementation steps according to their requirements.
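The decision flow of steps S710 to S770 can be condensed, purely as an illustrative sketch, into a single routing function. The `InputEvent` and `AudioRoute` types below are assumptions made for this example and do not appear in the patent.

```kotlin
enum class AudioRoute { HEADSET, SPEAKER, RECEIVER }

data class InputEvent(val isDialRequest: Boolean, val fromVoice: Boolean)

// Returns the output interface to enable, or null when the input is unrelated to dialing.
fun routeOutgoingAudio(
    event: InputEvent,
    headsetConnected: Boolean,
    presetOutput: AudioRoute          // preset value used for key-press / touch dialing
): AudioRoute? {
    if (!event.isDialRequest) return null                            // S710 "no": end
    return if (event.fromVoice) {
        // S720 "yes": voice-initiated dialing. Prefer the headset if connected (S730/S740),
        // otherwise automatically enable the speaker for hands-free talking (S750).
        if (headsetConnected) AudioRoute.HEADSET else AudioRoute.SPEAKER
    } else {
        // S720 "no": dialing via the input operation signal. Use the headset if connected
        // (S760/S740), otherwise fall back to the preset output, speaker or receiver (S770).
        if (headsetConnected) AudioRoute.HEADSET else presetOutput
    }
}
```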
In summary, with the mobile terminal apparatus and the automatic starting method of its conversation sound amplification function according to the embodiments of the invention, when the processing unit receives an input speech signal for enabling the dialing function, it not only enables the dialing function but also automatically enables the sound amplification function and outputs the audio signal to the public address equipment. Thus, when the user cannot immediately reach the mobile terminal apparatus but needs the sound amplification function, the sound amplification function can be started through the voice system, improving the ease of use of the mobile terminal apparatus.
Although the invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Those skilled in the art may make slight changes and modifications without departing from the spirit and scope of the invention, and the protection scope of the invention shall be defined by the appended claims.
Claims (23)
1. A speech control system, comprising:
an auxiliary actuating apparatus for assisting in starting a voice system of a mobile terminal apparatus, the auxiliary actuating apparatus comprising a first wireless transport module; and
the mobile terminal apparatus, comprising:
a second wireless transport module, matched with the first wireless transport module; and
the voice system, coupled to the second wireless transport module,
wherein when the auxiliary actuating apparatus is triggered, the first wireless transport module links with the second wireless transport module of the mobile terminal apparatus through a wireless communication protocol, so as to start the voice system of the mobile terminal apparatus.
2. The speech control system as claimed in claim 1, wherein the auxiliary actuating apparatus is only used to assist in starting the voice system of the mobile terminal apparatus and does not have the function of an earphone, a microphone, or a combination of the two.
3. The speech control system as claimed in claim 1, further comprising a hands-free headset connected to the mobile terminal apparatus.
4. The speech control system as claimed in claim 1, further comprising a hands-free microphone connected to the mobile terminal apparatus.
5. The speech control system as claimed in claim 1, wherein the auxiliary actuating apparatus further comprises a trigger module coupled to the first wireless transport module.
6. The speech control system as claimed in claim 5, wherein when the trigger module is triggered, if the first wireless transport module is in a sleep mode, the trigger module wakes up the first wireless transport module, so that the first wireless transport module enters an operating mode and links with the second wireless transport module of the mobile terminal apparatus through the wireless communication protocol.
7. The speech control system as claimed in claim 6, wherein if the trigger module is not triggered within a preset time after the first wireless transport module enters the operating mode, the first wireless transport module returns from the operating mode to the sleep mode and stops linking with the second wireless transport module of the mobile terminal apparatus.
8. The speech control system as claimed in claim 1, wherein the auxiliary actuating apparatus is a Bluetooth earphone.
9. The speech control system as claimed in claim 1, wherein the body of the auxiliary actuating apparatus is the body of an ornament.
10. The speech control system as claimed in claim 9, wherein the ornament comprises a ring, a wristwatch, an earring, a necklace, or glasses.
11. The speech control system as claimed in claim 1, wherein the mobile terminal apparatus uses the Android operating system.
12. An auxiliary actuating apparatus for assisting in starting a voice system of a mobile terminal apparatus, the mobile terminal apparatus comprising a first wireless transport module and the voice system coupled to the first wireless transport module, and the auxiliary actuating apparatus comprising:
a second wireless transport module, matched with the first wireless transport module; and
a trigger module, coupled to the second wireless transport module,
wherein when the trigger module is triggered, the second wireless transport module links with the first wireless transport module of the mobile terminal apparatus through a wireless communication protocol, so as to start the voice system of the mobile terminal apparatus.
13. The auxiliary actuating apparatus as claimed in claim 12, wherein the auxiliary actuating apparatus is only used to assist in starting the voice system of the mobile terminal apparatus and does not have the function of an earphone, a microphone, or a combination of the two.
14. The auxiliary actuating apparatus as claimed in claim 12, wherein when the trigger module is triggered, if the second wireless transport module is in a sleep mode, the trigger module wakes up the second wireless transport module, so that the second wireless transport module enters an operating mode and links with the first wireless transport module of the mobile terminal apparatus through the wireless communication protocol.
15. The auxiliary actuating apparatus as claimed in claim 14, wherein if the trigger module is not triggered within a preset time after the second wireless transport module enters the operating mode, the second wireless transport module returns from the operating mode to the sleep mode and stops linking with the first wireless transport module of the mobile terminal apparatus.
16. The auxiliary actuating apparatus as claimed in claim 12, wherein the auxiliary actuating apparatus is a Bluetooth earphone.
17. The auxiliary actuating apparatus as claimed in claim 12, wherein the body of the auxiliary actuating apparatus is the body of an ornament.
18. The auxiliary actuating apparatus as claimed in claim 17, wherein the ornament comprises a ring, a wristwatch, an earring, a necklace, or glasses.
19. The auxiliary actuating apparatus as claimed in claim 12, wherein the mobile terminal apparatus uses the Android operating system.
20. the method for a speech control, be applicable to a mobile terminal apparatus and an auxiliary actuating apparatus, this auxiliary actuating apparatus is in order to an auxiliary voice system of opening this mobile terminal apparatus, this auxiliary actuating apparatus and this mobile terminal apparatus have respectively one first wireless transport module and one second wireless transport module of mutual coupling, and the method for this speech control comprises:
This first wireless transport module receives a trigger pip;
Start this first wireless transport module, and send a wireless signal transmission; And
This second wireless transport module receives this wireless signal transmission, to start this voice system.
21. the method for speech control as claimed in claim 20, wherein this assistant starting equipment does not possess the function of earphone, microphone or the combination of the two only in order to auxiliary this voice system of opening this mobile terminal apparatus.
22. the method for speech control as claimed in claim 20, wherein this auxiliary actuating apparatus is a bluetooth earphone.
23. the method for speech control as claimed in claim 20, wherein this mobile terminal apparatus uses Android operating system.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201210593061 CN103077716A (en) | 2012-12-31 | 2012-12-31 | Auxiliary starting device and speed control system and method |
CN201711423107.7A CN108124043A (en) | 2012-12-31 | 2013-05-17 | Auxiliary actuating apparatus, speech control system and its method |
CN2013101832176A CN103280086A (en) | 2012-12-31 | 2013-05-17 | Auxiliary starter, voice control system and method thereof |
TW102121753A TWI633484B (en) | 2012-12-31 | 2013-06-19 | Activation assisting apparatus, speech operation system and method thereof |
US13/923,383 US8934886B2 (en) | 2012-12-31 | 2013-06-21 | Mobile apparatus and method of voice communication |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN 201210593061 CN103077716A (en) | 2012-12-31 | 2012-12-31 | Auxiliary starting device and speed control system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103077716A true CN103077716A (en) | 2013-05-01 |
Family
ID=48154224
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 201210593061 Pending CN103077716A (en) | 2012-12-31 | 2012-12-31 | Auxiliary starting device and speed control system and method |
CN2013101832176A Pending CN103280086A (en) | 2012-12-31 | 2013-05-17 | Auxiliary starter, voice control system and method thereof |
CN201711423107.7A Pending CN108124043A (en) | 2012-12-31 | 2013-05-17 | Auxiliary actuating apparatus, speech control system and its method |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2013101832176A Pending CN103280086A (en) | 2012-12-31 | 2013-05-17 | Auxiliary starter, voice control system and method thereof |
CN201711423107.7A Pending CN108124043A (en) | 2012-12-31 | 2013-05-17 | Auxiliary actuating apparatus, speech control system and its method |
Country Status (2)
Country | Link |
---|---|
CN (3) | CN103077716A (en) |
TW (1) | TWI633484B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103838159A (en) * | 2014-03-17 | 2014-06-04 | 联想(北京)有限公司 | Control method and device and electronic equipment |
CN104134442A (en) * | 2014-08-15 | 2014-11-05 | 广东欧珀移动通信有限公司 | Method and device for starting voice services |
CN106714023A (en) * | 2016-12-27 | 2017-05-24 | 广东小天才科技有限公司 | Bone conduction earphone-based voice awakening method and system and bone conduction earphone |
WO2018023417A1 (en) * | 2016-08-02 | 2018-02-08 | 张阳 | Information pushing method for use when making telephone call, and glasses |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103728906B (en) * | 2014-01-13 | 2017-02-01 | 江苏惠通集团有限责任公司 | Intelligent home control device and method |
TWI712944B (en) * | 2019-11-28 | 2020-12-11 | 睿捷國際股份有限公司 | Sound-based equipment surveillance method |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU2003272871A1 (en) * | 2002-10-18 | 2004-05-04 | Beijing Kexin Technology Co., Ltd. | Portable digital mobile communication apparatus, method for controlling speech and system |
CN1831937A (en) * | 2005-03-08 | 2006-09-13 | 台达电子工业股份有限公司 | Method and device for voice identification and language comprehension analysing |
TWI407765B (en) * | 2009-05-08 | 2013-09-01 | Htc Corp | Mobile device, power saving method and computer executable medium |
CN101923539B (en) * | 2009-06-11 | 2014-02-12 | 珠海市智汽电子科技有限公司 | Man-machine conversation system based on natural language |
US20100330908A1 (en) * | 2009-06-25 | 2010-12-30 | Blueant Wireless Pty Limited | Telecommunications device with voice-controlled functions |
CN101951553B (en) * | 2010-08-17 | 2012-10-10 | 深圳市车音网科技有限公司 | Navigation method and system based on speech command |
CN102006373B (en) * | 2010-11-24 | 2015-01-28 | 深圳市车音网科技有限公司 | Vehicle-mounted service system and method based on voice command control |
CN102779509B (en) * | 2011-05-11 | 2014-12-03 | 联想(北京)有限公司 | Voice processing equipment and voice processing method |
CN102821424B (en) * | 2011-06-09 | 2015-02-18 | 中磊电子股份有限公司 | Auxiliary mobile data distribution method, communication device and mobile device |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103838159A (en) * | 2014-03-17 | 2014-06-04 | 联想(北京)有限公司 | Control method and device and electronic equipment |
CN103838159B (en) * | 2014-03-17 | 2017-03-01 | 联想(北京)有限公司 | A kind of control method, device and electronic equipment |
CN104134442A (en) * | 2014-08-15 | 2014-11-05 | 广东欧珀移动通信有限公司 | Method and device for starting voice services |
WO2018023417A1 (en) * | 2016-08-02 | 2018-02-08 | 张阳 | Information pushing method for use when making telephone call, and glasses |
CN106714023A (en) * | 2016-12-27 | 2017-05-24 | 广东小天才科技有限公司 | Bone conduction earphone-based voice awakening method and system and bone conduction earphone |
CN106714023B (en) * | 2016-12-27 | 2019-03-15 | 广东小天才科技有限公司 | Bone conduction earphone-based voice awakening method and system and bone conduction earphone |
Also Published As
Publication number | Publication date |
---|---|
CN108124043A (en) | 2018-06-05 |
TWI633484B (en) | 2018-08-21 |
CN103280086A (en) | 2013-09-04 |
TW201426531A (en) | 2014-07-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103095813A (en) | Voice interaction system, mobile terminal device and voice communication method | |
CN103077716A (en) | Auxiliary starting device and speed control system and method | |
CN113572731B (en) | Voice communication method, personal computer, terminal and computer readable storage medium | |
CN202160254U (en) | Human face identification building intercom system | |
CN103237142A (en) | Method for generating automobile control instruction through Bluetooth earphone and cell phone | |
CN203524952U (en) | Guide system based on video communication | |
CN104104789A (en) | Voice answering method and mobile terminal device | |
EP3735646B1 (en) | Using auxiliary device case for translation | |
CN104168044A (en) | Method for controlling mobile terminal through multifunctional Bluetooth watch | |
CN104023302A (en) | Programmable digital hearing aid system and method thereof | |
US8934886B2 (en) | Mobile apparatus and method of voice communication | |
CN103517170A (en) | Remote-control earphone with built-in cellular telephone module | |
US8321227B2 (en) | Methods and devices for appending an address list and determining a communication profile | |
CN106023566A (en) | Voice-recognition-based Bluetooth remote control device, system and method | |
CN103281450B (en) | Mobile terminal apparatus and automatically open the method for voice output interface of this device | |
CN203118191U (en) | Mobile terminal remote control device | |
CN104952457A (en) | Device and method for digital hearing aiding and voice enhancing processing | |
CN108399918A (en) | Smart machine connection method, smart machine and terminal | |
CN109831766B (en) | Data transmission method, Bluetooth equipment assembly and Bluetooth communication system | |
CN203219360U (en) | Intelligent mobile phone with one-key press key function | |
CN102014179A (en) | Discrete mobile phone | |
CN206162095U (en) | Intelligence house robot with voice interaction function | |
CN106464288A (en) | Method to achieve intercom and smart bracelet | |
CN103136924A (en) | Multifunctional remote control unit | |
CN203387581U (en) | Radio with handset functions |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20130501 |