CN104104789A - Voice answering method and mobile terminal device

Voice answering method and mobile terminal device

Info

Publication number
CN104104789A
CN104104789A (application CN201310291083.XA)
Authority
CN
China
Prior art keywords
voice
mobile terminal
terminal apparatus
call
mode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310291083.XA
Other languages
Chinese (zh)
Inventor
寻亮
张国峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Via Technologies Inc
Original Assignee
Via Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Via Technologies Inc
Priority to CN201310291083.XA (this application, published as CN104104789A)
Priority to CN201710903738.2A (published as CN107613132A)
Priority to TW102125584A (published as TWI535258B)
Publication of CN104104789A
Legal status: Pending

Landscapes

  • Telephonic Communication Services (AREA)
  • Telephone Function (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention provides a voice answering method and a mobile terminal device, wherein the mobile terminal device has a normal mode and a first mode. The voice answering method comprises the following steps: switching from the normal mode to the first mode; when an incoming call is received in the first mode, issuing a voice notification and starting to receive a voice signal; parsing the voice signal to obtain a speech recognition result; and performing a response operation according to the speech recognition result.

Description

Voice answering method and mobile terminal device
Technical field
The invention relates to a speech control technology, and more particularly to a voice answering method that automatically turns on a hands-free system, and to a mobile terminal device using the method.
Background technology
With the development of technology, mobile terminal devices with voice systems have become increasingly common. Such a voice system uses speech understanding technology to let the user communicate with the mobile terminal device. For example, the user simply states a request to the device, such as looking up a train schedule, checking the weather or placing a call, and the system takes a corresponding action according to the user's voice signal. The action may be answering the user's question by voice, or driving the mobile terminal device to act according to the user's instruction.
As for how the voice system is started, at present it is mostly triggered through an application shown on the screen of the mobile terminal device, or through a physical button provided on the device. The user must therefore touch the screen or the physical button directly to start the voice system on the device itself. In some situations this design is quite inconvenient: for example while driving, or while cooking in the kitchen and needing to dial the mobile phone located in the living room to ask a friend for recipe details, the user cannot touch the mobile terminal device immediately, yet needs the voice system to be turned on. Furthermore, once a voice dialogue has been started, there is the question of how to carry out repeated, completely hands-free interaction that better matches the natural rules of human conversation. In other words, at present the user must still start the voice system of the mobile terminal device by hand, and cannot operate completely free of the hands.
Accordingly, how to improve the above shortcomings has become an issue to be urgently resolved.
Summary of the invention
The invention provides a voice answering method and a mobile terminal device. When the mobile terminal device receives an incoming call, it automatically turns on its hands-free system, so that the user can easily communicate with the device by voice and the device can respond to the incoming call according to what the user says; the user therefore no longer needs to participate manually during the dialogue. In this way, the invention achieves completely hands-free interaction, is more convenient to use, and provides voice service quickly.
The invention proposes a voice answering method for a mobile terminal device having a normal mode and a first mode. The voice answering method includes the following steps. The device switches from the normal mode to the first mode. When an incoming call is received in the first mode, a voice notification is issued and reception of a voice signal is started. The voice signal is parsed to obtain a speech recognition result. A corresponding response operation is performed according to the speech recognition result.
The invention further proposes a mobile terminal device, which includes a voice output unit, a voice receiving unit, a language understanding module and a communication unit. The voice output unit issues a voice notification. The voice receiving unit receives a voice signal. The language understanding module is coupled to the voice receiving unit and parses the voice signal. The communication unit is coupled to the voice output unit and the language understanding module, and receives an incoming call and performs a response operation. The mobile terminal device switches from the normal mode to the first mode; when the communication unit receives an incoming call, the communication unit issues the voice notification through the voice output unit and starts the voice receiving unit to receive the voice signal. The language understanding module parses the voice signal to obtain a speech recognition result, and the communication unit performs the corresponding response operation according to the speech recognition result.
Based on the above, when the mobile terminal device receives an incoming call in the first mode, it automatically issues a voice notification to query the user, and the user can respond by voice according to the notification. The device then performs the corresponding response operation according to what the user says. In this way, the mobile terminal device automatically turns on its hands-free system to provide voice service quickly, so that the user can operate the device by voice more conveniently and easily; when the device receives an incoming call, the user can respond without any manual operation.
To make the above features and advantages of the invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
Fig. 1 is a block diagram of a mobile terminal device according to an embodiment of the invention.
Fig. 2 is a flowchart of a voice answering method according to an embodiment of the invention.
Fig. 3 is a block diagram of a mobile terminal device according to an embodiment of the invention.
Fig. 4 is a flowchart of a speech control method according to an embodiment of the invention.
Fig. 5 is a flowchart of a speech control method according to an embodiment of the invention.
[symbol description]
100, 300: mobile terminal device
104, 304: auxiliary device
106, 306: semantic database
110, 310: voice output unit
120, 320: voice receiving unit
130, 330: language understanding module
140, 340: communication unit
350: voice wake-up module
A1: voice response
C: incoming call
V1, V2, V3: voice signals
SD: speech recognition result
SO: voice notification
SI: voice signal
S202, S204, S206, S208: steps of the voice answering method
S402, S404, S406, S408, S410, S412, S414, S502, S504, S506, S508, S510: steps of the speech control method
Embodiment
Although present-day mobile terminal devices can provide a voice system that lets the user communicate with the device by voice, the user must still start that voice system through the device itself. A user who cannot touch the device immediately but needs the voice system to be turned on therefore often cannot have that need met at once. Furthermore, even if the speech dialogue system can be woken up, current mobile devices still require frequent participation of the hands during the dialogue; for example, after the user finishes one question and wants to ask another, the speech dialogue system must be opened again by hand, which is extremely inconvenient. For this reason, the invention proposes a voice answering method, a speech control method and a mobile terminal device that let the user turn on the voice system more easily. Further, the invention frees the user from manual operation throughout the dialogue, making the dialogue more convenient, rapid and natural. To make the content of the invention clearer, embodiments are given below as examples by which the invention can indeed be implemented.
Fig. 1 is a block diagram of a mobile terminal device according to an embodiment of the invention. Referring to Fig. 1, the mobile terminal device 100 has a voice output unit 110, a voice receiving unit 120, a language understanding module 130 and a communication unit 140. The mobile terminal device 100 is, for example, a cell phone, a personal digital assistant (PDA) phone, a smart phone, a pocket PC, a tablet PC or a notebook computer with communication software installed. The mobile terminal device 100 may be any portable mobile device with a communication function, and its scope is not limited here. In addition, the mobile terminal device 100 may use an Android operating system, a Microsoft operating system, a Linux operating system or the like, and is not limited to the above. In the present embodiment, the mobile terminal device 100 receives an incoming call C through the communication unit 140. When the communication unit 140 receives the incoming call C, the mobile terminal device 100 automatically issues a voice notification SO through the voice output unit 110 to ask the user how to respond. The mobile terminal device 100 then receives a voice signal SI from the user through the voice receiving unit 120, and parses the voice signal SI through the language understanding module 130 to produce a speech recognition result SD. Finally, the mobile terminal device 100 performs a corresponding response operation according to the speech recognition result SD through the communication unit 140. The modules and units mentioned above are described below.
The voice output unit 110 is, for example, a speaker. The voice output unit 110 has a sound amplification function and is used to output the voice notification and the voice of the calling party. Specifically, when the mobile terminal device 100 receives the incoming call C, it issues the voice notification SO through the voice output unit 110, for example to inform the user of the source of the incoming call C (the calling party) or to ask the user whether to answer the incoming call C. For example, the communication unit 140 may announce, through the voice output unit 110, the telephone number of the incoming call C, or may further look up the caller's name in the contact list according to that number, although the invention is not limited thereto. For instance, the communication unit 140 may announce information about the incoming call C through the voice output unit 110 such as "Wang Daming is calling, answer now?", "X company is calling, answer now?", "Incoming call from 0922-123564, answer now?" or "Incoming call from 886922-123564, answer now?". In addition, if the incoming call C does not provide a telephone number, the communication unit 140 may issue a default voice notification SO through the voice output unit 110, for example "This is an unknown call, answer now?". On the other hand, after the user answers the incoming call C, the user also hears the other party through the voice output unit 110.
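For illustration only, the following minimal Python sketch shows one way the voice notification described above could be assembled from the caller number and an address book; the function name build_notification, the contacts dictionary and the exact wording are assumptions made for this sketch, not part of the disclosure.

```python
def build_notification(caller_number, contacts):
    """Compose a voice notification for an incoming call.

    contacts maps phone numbers to contact names (an address-book stand-in).
    """
    if not caller_number:                                  # caller ID withheld
        return "This is an unknown call, answer now?"
    name = contacts.get(caller_number)
    if name:                                               # number found in the contact list
        return f"{name} is calling, answer now?"
    return f"Incoming call from {caller_number}, answer now?"


contacts = {"0922-123564": "Wang Daming"}
print(build_notification("0922-123564", contacts))  # Wang Daming is calling, answer now?
print(build_notification("0933-000111", contacts))  # Incoming call from 0933-000111, answer now?
print(build_notification(None, contacts))           # This is an unknown call, answer now?
```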
The voice receiving unit 120 is, for example, a microphone, and is used to receive the user's voice so as to obtain the voice signal SI from the user.
The language understanding module 130 is coupled to the voice receiving unit 120, and parses the voice signal SI received by the voice receiving unit 120 to obtain a speech recognition result. In particular, the language understanding module 130 may include a speech recognition module and a speech processing module (not illustrated). The speech recognition module receives the voice signal SI transmitted from the voice receiving unit 120 and converts it into a plurality of segmented semantics (words or phrases, for example). The speech processing module parses the meaning represented by these segmented semantics (such as an intention, a time or a place) so as to determine what the voice signal SI means, and may also produce a corresponding response according to the parsed result.
Further, in natural language understanding under a computer system architecture, the statements in the voice signal SI are usually extracted with a fixed-word method so as to parse the command or intention they convey (such as answering the incoming call C, rejecting the incoming call C or sending a text message) and thereby determine the meaning of the voice signal SI, from which the speech recognition result is obtained. In the present embodiment, the speech processing module of the language understanding module 130 may query a semantic database 106 to determine which command the segmented semantics divided from the voice signal SI correspond to, the semantic database 106 recording the relations between various segmented semantics and various commands. In the present embodiment, the speech processing module of the language understanding module 130 may also determine, from the segmented semantics, which part of the voice signal SI is the content with which the user wishes to respond to the incoming call C.
For instance, when the user replies with a voice signal SI such as "OK", "answer" or "pick up", which indicates answering the incoming call C, the language understanding module 130 queries the semantic database 106 for the command corresponding to "OK", "answer" or "pick up", and parses the voice signal SI as representing answering the incoming call C. In another embodiment, when the user replies with a voice signal SI such as "don't answer", "no" or "not now", which indicates refusing the incoming call C, the language understanding module 130 queries the semantic database 106 for the command corresponding to "don't answer", "no" or "not now", and parses the voice signal SI as representing rejecting the incoming call C.
In another embodiment, when the user replies with a voice signal SI such as "Don't answer it; tell him I'll call him back when I get to the office", which indicates sending a message in response to the incoming call C, the language understanding module 130 queries the semantic database 106 for the command corresponding to "don't answer" and parses the voice signal SI as representing rejecting the incoming call C. In addition, the language understanding module 130 determines through the semantic database 106 that "tell him" is a command representing sending a message, and accordingly performs a communication operation, for example producing a communication signal (such as sending a text message) according to that command. The language understanding module 130 also determines that the speech following "tell him" is the response content to be sent (here, "I'll call him back when I get to the office").
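As an illustration of the kind of lookup just described, the short Python sketch below maps a few segmented phrases to call-handling commands and extracts the response content that follows a "tell him"-style phrase; the table contents, function name and command labels are assumptions of this sketch, not the actual semantic database 106.

```python
# A toy stand-in for the semantic database 106: segmented phrases mapped to
# call-handling commands. The phrase list and command names are invented here.
SEMANTIC_DB = {
    "don't answer": "REJECT", "not now": "REJECT", "no": "REJECT",
    "tell him": "SEND_MESSAGE", "tell her": "SEND_MESSAGE",
    "pick up": "ANSWER", "answer": "ANSWER", "ok": "ANSWER",
}

def parse_call_reply(utterance):
    """Return (command, response_content) for the user's reply to an incoming call."""
    text = utterance.lower()
    # Longer phrases first, so "don't answer" is not shadowed by "answer".
    for phrase, command in sorted(SEMANTIC_DB.items(), key=lambda kv: -len(kv[0])):
        if phrase in text:
            if command == "SEND_MESSAGE":
                # Treat whatever follows the trigger phrase as the response content.
                return command, text.split(phrase, 1)[1].strip()
            return command, None
    return "UNKNOWN", None

print(parse_call_reply("OK"))                                     # ('ANSWER', None)
print(parse_call_reply("Don't answer it"))                        # ('REJECT', None)
print(parse_call_reply("Tell him I'll call back at the office"))  # ('SEND_MESSAGE', "i'll call back at the office")
```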
It should be noted that, in the present embodiment, the language understanding module 130 may be implemented as a hardware circuit composed of one or several logic gates, or may be implemented as computer program code. It is worth mentioning that, in another embodiment, the language understanding module may also be configured in a cloud server. That is, the mobile terminal device 100 may be connected to a cloud server (not illustrated) in which a language understanding module is provided. In this way, the mobile terminal device 100 can send the received voice signal SI to the language understanding module in the cloud server for parsing, and then obtain the speech recognition result from the cloud server.
The communication unit 140 is coupled to the voice receiving unit 120 and the language understanding module 130, and is used to receive the incoming call C and perform a communication operation. Specifically, after receiving the incoming call C, the communication unit 140 may, according to the user's voice (described in detail below), answer the incoming call C, reject the incoming call C, transmit a default voice response to respond to the incoming call C, or transmit an answer signal such as a text message or a voice response to respond to the incoming call C, wherein the answer signal carries the response content with which the user wishes to respond to the incoming call C.
It should be described here that the mobile terminal device 100 of the present embodiment has a normal mode and a first mode. The first mode is, for example, an in-vehicle mode entered when the mobile terminal device 100 is located in a moving vehicle. More specifically, in the first mode, when the mobile terminal device 100 receives the incoming call C, it automatically issues a voice notification (for example, the source of the incoming call) to ask the user whether to answer the incoming call C; in other words, the mobile terminal device 100 automatically turns on its hands-free system to interact with the user by voice. By comparison, the normal mode is, for example, the state of the mobile terminal device 100 when it is not in the in-vehicle mode. That is, in the normal mode the mobile terminal device 100 does not automatically issue a voice notification asking the user whether to answer the incoming call C, cannot respond according to the user's voice signal, and does not automatically turn on its hands-free system.
Thus, when the mobile terminal device 100 has switched to the first mode and receives an incoming call, it issues a voice notification to the user, so that the user can send a voice signal to the mobile terminal device 100 by voice, and the mobile terminal device 100 can respond to the incoming call according to what the user says (for example, a communication operation such as answering or rejecting the call).
It should be noted that the mobile terminal device 100 of the present embodiment can switch from the normal mode to the first mode automatically. In particular, when the mobile terminal device 100 is connected to an auxiliary device 104, it switches from the normal mode to the first mode. On the other hand, when the mobile terminal device 100 is not connected to the auxiliary device 104, it switches from the first mode back to the normal mode. Here, the mobile terminal device 100 may be paired with the auxiliary device 104; when the mobile terminal device 100 is connected to the auxiliary device 104 by wireless transmission or by an electrical connection, the mobile terminal device 100 automatically switches to the first mode.
In addition, in another embodiment, when the mobile terminal device 100 is located in a moving vehicle, the mobile terminal device 100 may also sense the speed of the vehicle and decide whether to switch to the first mode according to that speed. For example, when the speed of the vehicle exceeds a threshold value, the mobile terminal device 100 switches from the normal mode to the first mode; on the other hand, when the speed of the vehicle does not exceed the threshold value, the mobile terminal device 100 switches from the first mode back to the normal mode. In this way, the user can operate the mobile terminal device 100 by voice more conveniently.
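The following small Python sketch restates the two switching conditions above, namely a connection to the auxiliary device 104 and a vehicle speed above a threshold; the function name, parameter names and the particular threshold value are assumptions for illustration only.

```python
SPEED_THRESHOLD_KMH = 20.0   # placeholder value; the disclosure only speaks of "a threshold"

def select_mode(connected_to_auxiliary_device, vehicle_speed_kmh=None):
    """Return 'first' (hands-free / in-vehicle mode) or 'normal'.

    Mirrors the two switching conditions described above: a wireless or
    electrical connection to the auxiliary device 104, or a sensed vehicle
    speed above a threshold.
    """
    if connected_to_auxiliary_device:
        return "first"
    if vehicle_speed_kmh is not None and vehicle_speed_kmh > SPEED_THRESHOLD_KMH:
        return "first"
    return "normal"

print(select_mode(True))                          # first  (connected to the in-car auxiliary device)
print(select_mode(False, vehicle_speed_kmh=60))   # first  (speed above the threshold)
print(select_mode(False, vehicle_speed_kmh=0))    # normal
```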
Fig. 2 is a flowchart of a voice answering method according to an embodiment of the invention. Referring to Fig. 1 and Fig. 2, in step S202 the mobile terminal device 100 switches from the normal mode to the first mode. With the mobile terminal device 100 in the first mode, as shown in step S204, when the communication unit 140 receives the incoming call C, the communication unit 140 issues the voice notification SO through the voice output unit 110 and starts the voice receiving unit 120 to receive the voice signal SI. From the voice notification SO the user learns the source of the incoming call C and can operate the communication unit 140 by voice to respond to the incoming call C. Therefore, when the communication unit 140 receives the incoming call C, it starts the voice receiving unit 120 to receive the voice signal SI from the user.
In step S206, the language understanding module 130 parses the voice signal SI received by the voice receiving unit 120 to obtain a speech recognition result. Here, the language understanding module 130 receives the voice signal SI from the voice receiving unit 120, divides the voice signal SI into a plurality of segmented semantics, and performs natural language understanding on the segmented semantics to recognize the response information in the voice signal SI.
Then, in step S208, the communication unit 140 performs a corresponding communication operation according to the speech recognition result parsed by the language understanding module 130. In the present embodiment, because the user can, by voice, instruct the mobile terminal device 100 to answer, to reject the incoming call C, to send a message or to take another action in response to the incoming call C, the language understanding module 130 can determine the command in the voice signal SI after parsing it, and the communication unit 140 can perform the communication operation according to that command. The communication operation performed by the communication unit 140 may be answering the incoming call C, rejecting the incoming call C, transmitting a default voice response to respond to the incoming call C, or transmitting an answer signal such as a text message or a voice response to respond to the incoming call C, wherein the answer signal carries the response content with which the user wishes to respond to the incoming call C.
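Purely as an illustration of the order of steps S202 to S208, the Python sketch below walks one incoming call through a toy device object; the class, its method names and the hard-coded reply are stand-ins invented for this sketch and are not the actual units 110 to 140.

```python
class ToyDevice:
    """Minimal stand-in for the mobile terminal device 100; all behaviour is canned."""

    def __init__(self, contacts):
        self.mode = "normal"
        self.contacts = contacts

    def announce(self, number):                      # plays the role of voice output unit 110
        name = self.contacts.get(number, number)
        print(f"[speaker] {name} is calling, answer now?")

    def listen(self):                                # plays the role of voice receiving unit 120
        return "Don't answer, tell him I'll call back later"

    def parse(self, utterance):                      # plays the role of language understanding module 130
        text = utterance.lower()
        if "tell him" in text:
            return "REJECT_AND_MESSAGE", text.split("tell him", 1)[1].strip()
        return ("REJECT", None) if "don't" in text else ("ANSWER", None)

    def respond(self, command, content):             # plays the role of communication unit 140
        print(f"[call handling] {command}: {content}")


def handle_incoming_call(device, caller_number):
    device.mode = "first"                     # S202: switch from normal mode to first mode
    device.announce(caller_number)            # S204: voice notification SO, then start listening
    signal = device.listen()
    command, content = device.parse(signal)   # S206: parse SI to get the speech recognition result
    device.respond(command, content)          # S208: perform the corresponding communication operation


handle_incoming_call(ToyDevice({"0922-123564": "Wang Daming"}), "0922-123564")
```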
To help those skilled in the art further understand the communication operations performed by the communication unit 140 in the present embodiment, further embodiments are given below, again with reference to the mobile terminal device 100 of Fig. 1.
When the mobile terminal device 100 has switched to the first mode (for example, the mobile terminal device 100 is located in a moving vehicle and enters the in-vehicle mode), suppose the communication unit 140 receives the incoming call C and issues the voice notification SO "Wang Daming is calling, answer now?" through the voice output unit 110. In the present embodiment, if the user replies with the voice signal SI "OK", the communication unit 140 answers the incoming call C.
On the other hand, if the user replies with the voice signal SI "don't answer", the communication unit 140 rejects the incoming call C. In one embodiment, the communication unit 140 may also transmit the default voice response "The number you have dialed cannot be answered at the moment; please call again later, or leave a message after the beep" to respond to the incoming call C.
In addition, if the user replies with the voice signal SI "Don't answer it; tell him I'll call him back when I get to the office", the communication unit 140 rejects the incoming call C, obtains the response content "I'll call him back when I get to the office" from the speech recognition result, and sends a text message to respond to the incoming call C, the text message recording, for example, "I'm in a meeting, I'll call back later".
In this way, when the mobile terminal device 100 enters the in-vehicle mode, it automatically asks the user whether to answer the incoming call C, so that the user can directly operate the mobile terminal device 100 by voice to answer, reject or perform other communication operations.
It should also be noted that the present embodiment does not limit the user to responding to the incoming call C by voice. In other embodiments, the user may press a button (not illustrated) provided on the mobile terminal device 100 to make the communication unit 140 answer or reject the call. Alternatively, the user may operate the answering or rejection of the communication unit 140 through an auxiliary operation device (not illustrated) connected to the mobile terminal device 100, for example a portable device with a Bluetooth function or a wireless transmission function.
According to the above, the mobile terminal device 100 can switch from the normal mode to the first mode automatically. When the communication unit 140 receives an incoming call in the first mode, the voice output unit 110 issues a voice notification to ask the user. When the user utters a voice signal, the language understanding module 130 parses the voice signal, and the communication unit 140 performs a corresponding communication operation according to the speech recognition result obtained by the language understanding module 130. Thus, the mobile terminal device can provide voice service more quickly: when the mobile terminal device 100 is in the first mode, for example in a moving vehicle, the user can easily respond to the incoming call by voice according to the voice notification issued by the mobile terminal device 100. In this way, the user can operate the mobile terminal device more conveniently.
Fig. 3 is a block diagram of a mobile terminal device according to an embodiment of the invention. Referring to Fig. 3, the mobile terminal device 300 has a voice output unit 310, a voice receiving unit 320, a language understanding module 330 and a voice wake-up module 350. The mobile terminal device 300 of the present embodiment is similar to the mobile terminal device 100 of Fig. 1, the difference being that the mobile terminal device 300 further has the voice wake-up module 350.
The voice wake-up module 350 determines whether a voice signal carrying identification information has been received. In the present embodiment, as long as the voice wake-up module 350 has not received a voice signal with the identification information, the voice output unit 310, the voice receiving unit 320 and the language understanding module 330 may remain in a standby or off state, and the mobile terminal device 300 does not interact with the user by voice. When the voice wake-up module 350 receives a voice signal with the identification information, the mobile terminal device 300 starts the voice receiving unit 320 to receive the subsequent voice signal and parses it with the language understanding module 330; that is, the mobile terminal device 300 interacts with the user by voice according to that voice signal and can also perform a response operation corresponding to the voice signal. Therefore, in the present embodiment, the user can directly speak the voice carrying the identification information (for example a specific word such as a name) to wake up the voice interaction function of the mobile terminal device 300. In addition, the voice wake-up module 350 of the present embodiment may be implemented as a hardware circuit composed of one or several logic gates, or may be implemented as computer program code.
It is worth mentioning that, because the voice receiving unit 320 is started only after the voice wake-up module 350 recognizes the identification information, the language understanding module 330 avoids parsing non-speech signals (for example noise). Moreover, since the voice wake-up module 350 only has to recognize the message corresponding to the identification information (for example, the message corresponding to the identification information "Xiao Qian") in order to determine that the received voice signal carries the identification information, the voice wake-up module 350 does not need natural language understanding capability and has lower power consumption. Thus, when the user does not provide a voice signal with the identification information, the mobile terminal device 300 does not start the voice interaction function; the mobile terminal device 300 is therefore not only easy to operate by voice but also saves power.
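By way of illustration, the Python sketch below shows the kind of single-pattern check the voice wake-up module described above performs; the wake phrase is the disclosure's example name in romanized form, and plain text matching is used here only to keep the sketch short, whereas the disclosure describes matching a preset sound within a particular audio or energy range.

```python
WAKE_WORD = "xiao qian"   # the disclosure's example wake name, romanized; any preset word works

def matches_wake_word(heard_text):
    """Lightweight check standing in for the voice wake-up module 350.

    Only one preset pattern has to be matched, so no natural-language
    understanding is needed and the module can stay in a low-power state.
    """
    return WAKE_WORD in heard_text.lower()

print(matches_wake_word("Xiao Qian, are you there?"))  # True  -> start voice receiving unit 320
print(matches_wake_word("Xiao Wang, hello"))           # False -> voice interaction stays off
```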
Therefore, in the present embodiment, the mobile terminal device 300 uses the voice wake-up module 350 to determine whether a voice signal matching the identification information (hereinafter voice signal V1) has been received. If so, the mobile terminal device 300 starts the voice receiving unit 320 to receive audio, and the language understanding module 330 determines whether the voice receiving unit 320 receives another voice signal (hereinafter voice signal V2) after the voice signal V1. If the language understanding module 330 determines that the voice receiving unit 320 has received the voice signal V2, the language understanding module 330 parses the voice signal V2 to obtain a speech recognition result and determines whether the speech recognition result carries executable request information. If the speech recognition result carries executable request information, the mobile terminal device 300 performs the response operation through the language understanding module 330 and ends the voice interaction function.
However, if the voice receiving unit 320 does not receive another voice signal V2 after the voice signal V1, or if the speech recognition result obtained by parsing the voice signal V2 carries no executable request information, the mobile terminal device 300 enters a voice dialogue mode through the language understanding module 330 so as to communicate with the user by voice. In the voice dialogue mode, the language understanding module 330 automatically issues a voice response to ask for the user's request (that is, the user's intention). The language understanding module 330 then determines whether the voice signal uttered by the user matches end-of-dialogue information or carries executable request information. If it does, the voice dialogue mode is ended, or is ended after the corresponding executable request has been carried out; if not, the language understanding module 330 continues the voice dialogue mode until the voice signal uttered by the user matches the end-of-dialogue information or carries executable request information.
The speech control method is described below with reference to the mobile terminal device 300. Fig. 4 is a flowchart of a speech control method according to an embodiment of the invention. Referring to Fig. 3 and Fig. 4, in step S402 the voice wake-up module 350 determines whether a voice signal matching the identification information (hereinafter voice signal V1) has been received. Specifically, the identification information may be a default sound corresponding to a specific word (for example a name), where the default sound lies within a particular audio range or a particular energy range. That is, the voice wake-up module 350 determines whether a default sound within the particular audio range or energy range has been received, and thereby determines whether a voice signal V1 carrying the identification information has been received. In the present embodiment, the user may set this identification information in advance through the system of the mobile terminal device 300, for example by providing in advance the default sound corresponding to the identification information, and the voice wake-up module 350 determines whether the voice signal V1 carries the identification information by comparing the voice signal V1 with the default sound. For instance, suppose the identification information is the default sound corresponding to the name "Xiao Qian"; the voice wake-up module 350 then determines whether a voice signal V1 containing "Xiao Qian" has been received.
If the voice wake-up module 350 does not receive a voice signal V1 matching the identification information, then, as shown in step S404, the mobile terminal device 300 does not start the voice interaction function. Because the voice wake-up module 350 has not received a voice signal V1 matching the identification information, the voice receiving unit 320 stays in an off or sleeping state and does not receive voice signals, and the language understanding module 330 in the mobile terminal device 300 has no subsequent voice signal to parse. For instance, suppose the identification information is "Xiao Qian"; if the user does not say "Xiao Qian" but says something else such as "Xiao Wang", the voice wake-up module 350 cannot receive a voice signal V1 matching "Xiao Qian", and the voice interaction function of the mobile terminal device 300 is not started.
In step S406, when the voice wake-up module 350 determines that the voice signal V1 matches the identification information, the mobile terminal device 300 starts the voice receiving unit 320 to receive audio. The language understanding module 330 then determines, from the audio received by the voice receiving unit 320, whether the voice receiving unit 320 receives another voice signal (hereinafter voice signal V2) after the voice signal V1. In the present embodiment, the language understanding module 330 determines whether the energy of the audio received by the voice receiving unit 320 exceeds a set point. If the energy of the audio does not reach the set point, the language understanding module 330 regards the audio as noise and determines that the voice receiving unit 320 has not received a voice signal V2; if the energy of the audio reaches the set point, the language understanding module 330 determines that the voice receiving unit 320 has received a voice signal V2 and performs the subsequent steps according to this voice signal V2.
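The following short Python sketch illustrates the energy comparison just described; the set-point value and the mean-square energy measure are placeholders chosen for this sketch, since the disclosure only specifies that the audio energy is compared against a set point.

```python
def received_followup_speech(samples, energy_set_point=0.01):
    """Decide whether a follow-up voice signal V2 was received.

    As described above, audio whose energy stays below a set point is treated
    as noise rather than as a voice signal V2.
    """
    if not samples:
        return False
    energy = sum(s * s for s in samples) / len(samples)   # mean-square energy of the captured audio
    return energy >= energy_set_point

print(received_followup_speech([0.001, -0.002, 0.001]))   # False: treated as noise, no V2
print(received_followup_speech([0.3, -0.4, 0.2, 0.5]))    # True: treated as voice signal V2
```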
If the language understanding module 330 determines that the voice receiving unit 320 has not received a voice signal V2, then, as shown in step S408, the language understanding module 330 enters the voice dialogue mode. In the voice dialogue mode, the language understanding module 330 issues a voice response through the voice output unit 310, and continues to receive and parse further voice signals from the user through the voice receiving unit 320, producing another voice response or a response operation accordingly, until the language understanding module 330 determines that a voice signal matches the end-of-dialogue information, or until the mobile terminal device 300 has completed the user's command or request. The detailed steps of the voice dialogue mode are described later (as shown in Fig. 5).
If the language understanding module 330 determines that the voice receiving unit 320 has received the voice signal V2, then, as shown in step S410, the language understanding module 330 parses the voice signal V2 to obtain a speech recognition result. The language understanding module 330 receives the voice signal V2 from the voice receiving unit 320, divides the voice signal V2 into a plurality of segmented semantics, and performs natural language understanding on the segmented semantics to recognize the content of the voice signal V2. Like the language understanding module 130 of Fig. 1, the language understanding module 330 of the present embodiment may extract the statements in the voice signal V2 with a fixed-word method so as to parse the command or intention they convey (such as an imperative sentence or an interrogative sentence), determine the meaning of the voice signal V2 and thereby obtain the speech recognition result. The language understanding module 330 may query a semantic database 306 to determine which command the segmented semantics divided from the voice signal V2 correspond to, the semantic database 306 recording the relations between various segmented semantics and various commands.
Then, as shown in step S412, the language understanding module 330 determines whether the speech recognition result carries executable request information. Specifically, executable request information refers, for example, to information that lets the mobile terminal device 300 complete a requested operation. That is, according to the executable request information in the speech recognition result, the language understanding module 330 lets the mobile terminal device 300 perform an action, which the mobile terminal device 300 may complete, for example, through one or more application programs. For instance, when the voice signal V2 is "call Wang Daming for me", "look up tomorrow's weather in Taipei for me" or "what time is it now", the voice signal V2 carries executable request information; after parsing such a voice signal V2, the language understanding module 330 can make the mobile terminal device 300 place a call to Wang Daming, go online to look up and report tomorrow's weather in Taipei, or look up and report the current time.
On the other hand, if the speech recognition result carries no executable request information, the language understanding module 330 cannot determine the user's intention from the speech recognition result and therefore cannot let the mobile terminal device 300 complete a requested operation. For instance, when the voice signal V2 is "make a call for me", "look up the weather for me" or "now", the language understanding module 330 cannot, after parsing the voice signal V2, make the mobile terminal device 300 complete the requested operation; that is, the language understanding module 330 cannot determine from the voice signal V2 whom to call or which place's weather or which time to look up, and cannot act on a sentence whose meaning is incomplete.
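To illustrate the distinction drawn in steps S410 to S412, the Python sketch below checks whether a parsed result carries enough information to be executed; the intent names, slot names and slot requirements are assumptions made for this sketch and are not defined in the disclosure.

```python
# Assumed slot requirements for a few request types (for illustration only).
REQUIRED_SLOTS = {
    "call":    ["contact"],         # "call Wang Daming for me"
    "weather": ["place", "date"],   # "look up tomorrow's weather in Taipei"
    "time":    [],                  # "what time is it now" needs no extra slot
}

def is_executable(parsed_request):
    """True if the speech recognition result carries executable request information,
    i.e. the intent is known and every slot that intent needs has been filled."""
    intent = parsed_request.get("intent")
    if intent not in REQUIRED_SLOTS:
        return False
    return all(parsed_request.get(slot) for slot in REQUIRED_SLOTS[intent])

print(is_executable({"intent": "call", "contact": "Wang Daming"}))                   # True
print(is_executable({"intent": "call"}))                                             # False: whom to call?
print(is_executable({"intent": "weather", "place": "Taipei", "date": "tomorrow"}))   # True
```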
When the speech recognition result carries executable request information, then, as shown in step S414, the language understanding module 330 performs the response operation, and the mobile terminal device 300 stops receiving further voice signals (hereinafter voice signal V3), thereby turning off the voice interaction function of the mobile terminal device 300.
Specifically, when the executable request information is an operation command, the language understanding module 330 starts the operation function corresponding to the operation command. For example, when the executable request information is "turn down the brightness of the screen", the language understanding module 330 sends a brightness adjustment signal to the system of the mobile terminal device 300 to turn down the brightness of the screen. In addition, when the executable request information is an interrogative sentence, the language understanding module 330 issues a voice response corresponding to the interrogative sentence. The language understanding module 330 may recognize one or more keywords in the interrogative sentence, query a search engine for the corresponding answer according to these keywords, and then output the voice response through the voice output unit 310. For example, when the executable request information is "what will the temperature in Taipei be tomorrow", the language understanding module 330 sends a request signal to query the corresponding answer through the search engine, and outputs the voice response "the temperature in Taipei tomorrow will be 26 degrees" through the voice output unit 310.
It should be described here that, because the executable request information lets the mobile terminal device 300 complete the requested operation, after the language understanding module 330 performs the response operation the voice receiving unit 320 returns to an off or sleeping state and does not receive further voice signals V3. Further, once the voice receiving unit 320 has stopped receiving the voice signal V3, if the user wishes to have the mobile terminal device 300 perform a requested operation by voice, the user must again call out the voice carrying the identification information, so that the voice wake-up module 350 can make the determination and the voice receiving unit 320 can be started again.
When the speech recognition result carries no executable request information, then, as shown in step S408, the language understanding module 330 enters the voice dialogue mode (the detailed steps of the voice dialogue mode are described later, as shown in Fig. 5). Here, the language understanding module 330 issues a voice response through the voice output unit 310 according to the voice signal V2, and continues to receive further voice signals through the voice receiving unit 320. That is, the language understanding module 330 continues to receive and parse the voice signals from the user, producing further voice responses or response operations accordingly, until the language understanding module 330 determines that a voice signal matches the end-of-dialogue information, or until the mobile terminal device 300 has completed the user's command or request. Thus, in the present embodiment, the user only has to utter the voice signal carrying the identification information to communicate with the mobile terminal device 300 by voice easily. After the mobile terminal device 300 has closed the voice receiving unit 320 again, it automatically reopens the voice interaction function upon receiving the voice signal carrying the identification information, so the user's hands are fully freed, and the user can converse with the mobile terminal device 300 and operate it entirely by voice to perform the corresponding response operations.
To help those skilled in the art further understand the voice dialogue mode performed by the language understanding module 330, further embodiments are given below as examples, again with reference to the mobile terminal device 300 of Fig. 3.
Fig. 5 is a flowchart of a speech control method according to an embodiment of the invention. Referring to Fig. 3, Fig. 4 and Fig. 5, when the language understanding module 330 enters the voice dialogue mode (step S408 of Fig. 4), then in step S502 of Fig. 5 the language understanding module 330 produces a voice response (hereinafter voice response A1) and outputs it through the voice output unit 310. Because the language understanding module 330 enters the voice dialogue mode either because no voice signal V2 was received (step S406 of Fig. 4) or because the received voice signal V2 carries no executable request information (step S412 of Fig. 4), the language understanding module 330 at this point automatically issues the voice response A1 to ask for the user's request (that is, the user's intention).
For instance, when the voice receiving unit 320 has not received a voice signal V2, the language understanding module 330 may issue, through the voice output unit 310, a prompt such as "What is it?" or "What can I do for you?", although the invention is not limited thereto. In addition, when the voice signal V2 received by the language understanding module 330 carries no executable request information, the language understanding module 330 may issue, through the voice output unit 310, a prompt such as "Which place's weather do you mean?", "Whose phone number do you mean?" or "What do you mean?", although the invention is not limited thereto.
It should be noted that the language understanding module 330 may also find a voice response that matches the voice signal V2 which carries no executable request information. In other words, the language understanding module 330 may enter a voice chat mode to communicate with the user. The language understanding module 330 realizes the voice chat mode through the semantic database 306. Specifically, the semantic database 306 may record a plurality of candidate answers, and the language understanding module 330 chooses one of these candidate answers as the voice response according to their priorities. For example, the language understanding module 330 may determine the priorities of the candidate answers according to general usage habits, or according to the user's preferences or habits. It is worth mentioning that the semantic database 306 may also record the content of the voice responses previously output by the language understanding module 330, and the voice response may be produced according to that previous content. The above ways of selecting the voice response are given by way of illustration, and the present embodiment is not limited thereto.
After the language understanding module 330 outputs the voice response through the voice output unit 310, in step S504 the language understanding module 330 determines whether the voice receiving unit 320 receives yet another voice signal (hereinafter voice signal V4). This step is similar to step S406 of Fig. 4, and reference may be made to the foregoing description.
When the voice receiving unit 320 receives the voice signal V4, then, as shown in step S506, the language understanding module 330 determines whether the voice signal V4 matches end-of-dialogue information, or whether the voice signal V4 carries executable request information. The end-of-dialogue information is, for example, a specific word representing the end of the dialogue; that is, the language understanding module 330 parses the voice signal V4, and if the specific word is parsed out, it determines that the voice signal V4 matches the end-of-dialogue information. For instance, when the voice signal V4 matches end-of-dialogue information such as "goodbye" or "that's all", the voice receiving unit 320 stops receiving voice signals. On the other hand, if the voice signal V4 carries executable request information, the language understanding module 330 performs the response operation corresponding to the executable request information, the language understanding module 330 ends the voice dialogue mode, and the voice receiving unit 320 likewise stops receiving voice signals. This is similar to step S414 of Fig. 4, and reference may be made to the foregoing description.
In step S506, if the voice signal V4 matches the end-of-dialogue information or carries executable request information, then, as shown in step S508, the language understanding module 330 ends the voice dialogue mode and stops receiving subsequent voice signals, whereby the mobile terminal device 300 ends the voice communication with the user. That is, if the user then wishes to operate the mobile terminal device 300 by voice, the user must say the voice signal carrying the identification information (for example the name "Xiao Qian") before the mobile terminal device 300 restarts the voice interaction.
In addition, in step S506, if the voice signal V4 neither matches the end-of-dialogue information nor carries executable request information, the method returns to step S502 and the language understanding module 330 continues to issue voice responses through the voice output unit 310 to ask the user.
On the other hand, returning to step S504, when the voice receiving unit 320 does not receive a voice signal V4, then, as shown in step S510, the language understanding module 330 determines whether the number of times that no voice signal V4 has been received within a preset time exceeds a preset number of times. Specifically, if no voice signal V4 is received within the preset time, the language understanding module 330 records one occurrence. When the recorded number of occurrences does not exceed the preset number, the method returns to step S502 and the language understanding module 330 continues to issue voice responses through the voice output unit 310 to ask for the user's intention. The language understanding module 330 may produce the voice response after the preset time during which the voice receiving unit 320 received no voice signal V4 has elapsed; such a voice response is, for example, a question such as "Are you still there?" or "What can I do for you?", although the invention is not limited thereto.
Otherwise, in step S510, when the recorded number of occurrences exceeds the preset number, then, as shown in step S508, the language understanding module 330 ends the voice dialogue mode and the voice receiving unit 320 stops receiving subsequent voice signals; that is, the mobile terminal device 300 ends the voice communication with the user to finish the voice interaction.
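The Python sketch below ties the dialogue-mode steps of Fig. 5 together as a simple loop; the end phrases, the maximum number of unanswered prompts, the resetting of that counter after a reply, and the callback-style interface are all assumptions introduced for this sketch.

```python
END_PHRASES = ("goodbye", "that's all")   # assumed wording for the end-of-dialogue information
MAX_SILENT_ROUNDS = 3                     # the "preset number of times"; the value is a placeholder

def dialogue_mode(ask, listen, execute):
    """Loop of Fig. 5: keep prompting (S502) until a reply ends the dialogue or
    carries an executable request (S506 -> S508), or until too many prompts in a
    row go unanswered within the preset time (S510 -> S508)."""
    silent_rounds = 0
    while True:
        ask()                                  # S502: issue a voice response such as "What can I do for you?"
        reply = listen()                       # S504: wait for another voice signal V4
        if reply is None:
            silent_rounds += 1                 # S510: one more round without a reply
            if silent_rounds >= MAX_SILENT_ROUNDS:
                return "ended: no reply"       # S508: end the voice dialogue mode
            continue
        silent_rounds = 0                      # resetting on a reply is an assumption of this sketch
        if any(p in reply.lower() for p in END_PHRASES):
            return "ended: end-of-dialogue information"    # S506 -> S508
        if execute(reply):
            return "ended: executable request performed"   # S506 -> S508
        # otherwise the reply is neither; ask again (back to S502)

# Example run with canned replies standing in for the microphone.
replies = iter([None, "umm", "what's the weather in Taipei tomorrow"])
print(dialogue_mode(
    ask=lambda: print("[speaker] What can I do for you?"),
    listen=lambda: next(replies, None),
    execute=lambda text: "weather" in text,
))
# -> ended: executable request performed
```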
It is worth mentioning that, after the mobile terminal device 300 has ended the voice interaction function, the user may not only call out the voice signal carrying the identification information to communicate with the mobile terminal device 300, but may also use the auxiliary device 304 to send a wireless transmission signal from the auxiliary device 304 to the mobile terminal device 300 to start the voice interaction function. The mobile terminal device 300 then starts the voice receiving unit 320 to receive voice signals.
According to the above, the mobile terminal device 300 of the present embodiment can start its voice interaction function according to a voice signal matching the identification information, and can thereby provide voice service more quickly. When the mobile terminal device 300 has not started its voice interaction function, the voice wake-up module 350 listens for a voice signal matching the identification information. When the voice wake-up module 350 receives such a voice signal, the voice receiving unit 320 is started to receive another voice signal following it. The language understanding module 330 then either makes a response operation according to that further voice signal and ends the voice interaction function of the mobile terminal device 300, or issues a voice response according to that further voice signal so as to obtain the user's intention or converse with the user, until end-of-dialogue information is parsed out or a response operation is made. Thus, the user only has to utter the voice signal carrying the identification information to communicate with the mobile terminal device 300 by voice easily, and can keep both hands completely free during the communication, because the mobile terminal device 300 automatically reopens voice input after each dialogue turn. In this way, the user can operate the mobile terminal device 300 more conveniently.
To sum up, in the voice answering method and mobile terminal device of the invention, the mobile terminal device can switch from the normal mode to the first mode automatically. When the mobile terminal device receives an incoming call in the first mode, it issues a voice notification to ask the user, so that the user can utter a voice signal and operate the mobile terminal device to respond by voice. The mobile terminal device then parses the voice signal from the user and performs the corresponding response operation according to the speech recognition result obtained by the parsing. Thus, the user can easily respond to the incoming call by voice according to the voice notification issued by the mobile terminal device.
In addition, in the speech control method and mobile terminal device of the invention, the mobile terminal device can start its voice interaction function according to a voice signal matching the identification information. When the mobile terminal device has not started its voice interaction function, if it receives a voice signal matching the identification information, it receives another voice signal following that voice signal. The mobile terminal device then either makes a response operation according to that further voice signal and ends the voice interaction function, or issues a voice response according to that further voice signal so as to obtain the user's intention or converse with the user, until end-of-dialogue information is parsed out or a response operation is made. Thus, the user only has to utter the voice signal carrying the identification information to communicate with the mobile terminal device by voice easily, and can keep both hands completely free during the communication, because the mobile terminal device automatically reopens voice input after each dialogue turn. Moreover, the mobile terminal device can end the voice interaction according to what the user says, and can thereby provide voice service more quickly. Accordingly, the voice answering method, speech control method and mobile terminal device of the invention let the user operate the mobile terminal device more conveniently.
Although the invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Those skilled in the art may make minor changes and modifications without departing from the spirit and scope of the invention, and the protection scope of the invention is therefore defined by the appended claims.

Claims (12)

1. A voice answering method, for a mobile terminal apparatus having a normal mode and a first mode, the method comprising:
when the mobile terminal apparatus is connected to an auxiliary device, switching the mobile terminal apparatus from the normal mode to the first mode;
when an incoming call is received in the first mode, sending a verbal announcement and starting to receive a voice signal;
parsing the voice signal to obtain a speech recognition result;
performing a corresponding call operation according to the speech recognition result; and
when the mobile terminal apparatus is not connected to the auxiliary device, switching the mobile terminal apparatus from the first mode back to the normal mode.
2. The voice answering method as claimed in claim 1, wherein the mobile terminal apparatus is used in a moving vehicle, and the voice answering method further comprises:
when the speed of the vehicle exceeds a threshold value, switching the mobile terminal apparatus from the normal mode to the first mode; and
when the speed of the vehicle does not exceed the threshold value, switching the mobile terminal apparatus from the first mode to the normal mode.
3. The voice answering method as claimed in claim 1, wherein the first mode is a mode in which the mobile terminal apparatus is used in a moving vehicle.
4. The voice answering method as claimed in claim 1, wherein the step of performing the corresponding call operation comprises:
answering the incoming call or rejecting the incoming call, wherein the step of rejecting the incoming call comprises transmitting a default voice reply in response to the incoming call.
5. The voice answering method as claimed in claim 1, further comprising:
obtaining response content from the speech recognition result, and generating a reply signal according to the response content in response to the incoming call.
6. The voice answering method as claimed in claim 1, further comprising:
receiving a control signal from an auxiliary operation device, so as to answer or reject the incoming call.
7. A mobile terminal apparatus, comprising:
a voice output unit, configured to send a verbal announcement;
a voice receiving unit, configured to receive a voice signal;
a language understanding module, coupled to the voice receiving unit and configured to parse the voice signal; and
an incoming-call communication unit, coupled to the voice output unit and the language understanding module and configured to receive an incoming call and perform a call operation, wherein when the mobile terminal apparatus is connected to an auxiliary device, the mobile terminal apparatus switches from a normal mode to a first mode; when the incoming-call communication unit receives the incoming call in the first mode, the incoming-call communication unit sends the verbal announcement through the voice output unit and activates the voice receiving unit to receive the voice signal, the language understanding module parses the voice signal to obtain a speech recognition result, and the incoming-call communication unit performs the corresponding call operation according to the speech recognition result; and when the mobile terminal apparatus is not connected to the auxiliary device, the mobile terminal apparatus switches from the first mode back to the normal mode.
8. The mobile terminal apparatus as claimed in claim 7, wherein the mobile terminal apparatus is used in a moving vehicle; when the speed of the vehicle exceeds a threshold value, the mobile terminal apparatus switches from the normal mode to the first mode, and when the speed of the vehicle does not exceed the threshold value, the mobile terminal apparatus switches from the first mode to the normal mode.
9. The mobile terminal apparatus as claimed in claim 7, wherein the first mode is a mode in which the mobile terminal apparatus is used in a moving vehicle.
10. The mobile terminal apparatus as claimed in claim 7, wherein the incoming-call communication unit answers the incoming call or rejects the incoming call according to the speech recognition result, and when the incoming-call communication unit rejects the incoming call, it transmits a default voice reply in response to the incoming call.
11. The mobile terminal apparatus as claimed in claim 7, wherein the incoming-call communication unit obtains response content from the speech recognition result, and generates a reply signal according to the response content in response to the incoming call.
12. The mobile terminal apparatus as claimed in claim 7, wherein the incoming-call communication unit receives a control signal from an auxiliary operation device, so as to answer or reject the incoming call.
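Read together, claims 7 to 12 describe four cooperating components. The Python sketch below mirrors that structure purely for illustration; all class and method names (VoiceOutputUnit, IncomingCallUnit, on_incoming_call, and so on) are assumptions and are not part of the claims.

# Hypothetical component layout mirroring claims 7-12; names are assumed.

class VoiceOutputUnit:
    def speak(self, text: str) -> None:
        print("[voice out] " + text)            # sends the verbal announcement


class VoiceReceivingUnit:
    def __init__(self, scripted_reply: str):
        self.scripted_reply = scripted_reply    # canned input for the example

    def listen(self) -> str:
        return self.scripted_reply              # receives a voice signal


class LanguageUnderstandingModule:
    def parse(self, voice_signal: str) -> str:
        return voice_signal.strip().lower()     # speech recognition result


class IncomingCallUnit:
    # Receives an incoming call and performs the call operation.

    def __init__(self, out, mic, nlu):
        self.out, self.mic, self.nlu = out, mic, nlu

    def on_incoming_call(self, caller: str, first_mode: bool) -> str:
        if not first_mode:
            return "handled in the normal mode"
        self.out.speak("Incoming call from " + caller + ". Answer or reject?")
        result = self.nlu.parse(self.mic.listen())
        if "answer" in result:
            return "answered"
        return "rejected with a default voice reply"


if __name__ == "__main__":
    unit = IncomingCallUnit(VoiceOutputUnit(),
                            VoiceReceivingUnit("answer it"),
                            LanguageUnderstandingModule())
    print(unit.on_incoming_call("Bob", first_mode=True))

In the claimed apparatus the voice receiving unit would capture live audio rather than a scripted string; the canned input above only keeps the sketch self-contained.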
CN201310291083.XA 2013-04-10 2013-07-11 Voice answering method and mobile terminal device Pending CN104104789A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201310291083.XA CN104104789A (en) 2013-04-10 2013-07-11 Voice answering method and mobile terminal device
CN201710903738.2A CN107613132A (en) 2013-04-10 2013-07-11 Voice answering method and mobile terminal apparatus
TW102125584A TWI535258B (en) 2013-04-10 2013-07-17 Voice answering method and mobile terminal apparatus

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN 201310122236 CN103220423A (en) 2013-04-10 2013-04-10 Voice answering method and mobile terminal device
CN201310122236.8 2013-04-10
CN201310291083.XA CN104104789A (en) 2013-04-10 2013-07-11 Voice answering method and mobile terminal device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN201710903738.2A Division CN107613132A (en) 2013-04-10 2013-07-11 Voice answering method and mobile terminal apparatus

Publications (1)

Publication Number Publication Date
CN104104789A true CN104104789A (en) 2014-10-15

Family

ID=48817867

Family Applications (3)

Application Number Title Priority Date Filing Date
CN 201310122236 Pending CN103220423A (en) 2013-04-10 2013-04-10 Voice answering method and mobile terminal device
CN201310291083.XA Pending CN104104789A (en) 2013-04-10 2013-07-11 Voice answering method and mobile terminal device
CN201710903738.2A Pending CN107613132A (en) 2013-04-10 2013-07-11 Voice answering method and mobile terminal apparatus

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN 201310122236 Pending CN103220423A (en) 2013-04-10 2013-04-10 Voice answering method and mobile terminal device

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201710903738.2A Pending CN107613132A (en) 2013-04-10 2013-07-11 Voice answering method and mobile terminal apparatus

Country Status (2)

Country Link
CN (3) CN103220423A (en)
TW (1) TWI535258B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105472152A (en) * 2015-12-03 2016-04-06 广东小天才科技有限公司 Method and system for automatically answering call for intelligent terminal
CN107465805A (en) * 2017-06-28 2017-12-12 深圳天珑无线科技有限公司 A kind of incoming call answering method, the device and communication terminal with store function
CN108810244A (en) * 2017-04-27 2018-11-13 丰田自动车株式会社 Speech dialogue system and information processing unit

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103929532A (en) * 2014-03-18 2014-07-16 联想(北京)有限公司 Information processing method and electronic equipment
CN104464723B (en) * 2014-12-16 2018-03-20 科大讯飞股份有限公司 A kind of voice interactive method and system
CN104683584B (en) * 2015-03-06 2017-08-25 广东欧珀移动通信有限公司 A kind of convenient call method of mobile terminal and system
CN105049591A (en) * 2015-05-26 2015-11-11 腾讯科技(深圳)有限公司 Method and device for processing incoming call
CN105007375A (en) * 2015-07-20 2015-10-28 广东小天才科技有限公司 Method and device for automatically answering external calls
CN105810194B (en) * 2016-05-11 2019-07-05 北京奇虎科技有限公司 Speech-controlled information acquisition methods and intelligent terminal under standby mode
TWI639115B (en) 2017-11-01 2018-10-21 塞席爾商元鼎音訊股份有限公司 Method of detecting audio inputting mode
CN108880993A (en) * 2018-07-02 2018-11-23 广东小天才科技有限公司 A kind of voice instant communicating method, system and mobile terminal
CN108847236A (en) * 2018-07-26 2018-11-20 珠海格力电器股份有限公司 The analysis method and device of the method for reseptance and device of voice messaging, voice messaging
CN110060678B (en) * 2019-04-16 2021-09-14 深圳欧博思智能科技有限公司 Virtual role control method based on intelligent device and intelligent device
CN112995929A (en) * 2019-11-29 2021-06-18 长城汽车股份有限公司 Short message sending method and device and vehicle
CN111191005A (en) * 2019-12-27 2020-05-22 恒大智慧科技有限公司 Community query method and system, community server and computer readable storage medium
CN111160002B (en) 2019-12-27 2022-03-01 北京百度网讯科技有限公司 Method and device for analyzing abnormal information in output spoken language understanding

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1494299A (en) * 2002-10-30 2004-05-05 英华达(上海)电子有限公司 Device and method for converting speech sound input into characters on handset
CN101657033A (en) * 2008-08-22 2010-02-24 环达电脑(上海)有限公司 Portable communication apparatus and method with voice control
CN102843471A (en) * 2012-08-17 2012-12-26 广东欧珀移动通信有限公司 Method for intelligently controlling answer mode of mobile phone and mobile phone
CN103139396A (en) * 2013-03-28 2013-06-05 上海斐讯数据通信技术有限公司 Implementation method of contextual model and mobile terminal

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101211504A (en) * 2006-12-31 2008-07-02 康佳集团股份有限公司 Method, system and apparatus for remote control for TV through voice
US8165886B1 (en) * 2007-10-04 2012-04-24 Great Northern Research LLC Speech interface system and method for control and interaction with applications on a computing system
TW201013635A (en) * 2008-09-24 2010-04-01 Mitac Int Corp Intelligent voice system and method thereof
CN202413790U (en) * 2011-12-15 2012-09-05 浙江吉利汽车研究院有限公司 Automobile self-adapting speech prompting system
CN102932595A (en) * 2012-10-22 2013-02-13 北京小米科技有限责任公司 Method and device for sound-control photographing and terminal
CN103024177A (en) * 2012-12-13 2013-04-03 广东欧珀移动通信有限公司 Mobile terminal driving mode operation method and mobile terminal

Also Published As

Publication number Publication date
CN103220423A (en) 2013-07-24
TW201440482A (en) 2014-10-16
TWI535258B (en) 2016-05-21
CN107613132A (en) 2018-01-19

Similar Documents

Publication Publication Date Title
CN104104790A (en) Voice control method and mobile terminal device
CN104104789A (en) Voice answering method and mobile terminal device
CN107895578B (en) Voice interaction method and device
CN101971250B (en) Mobile electronic device with active speech recognition
CN1220176C (en) Method for training or adapting to phonetic recognizer
CN106201424B (en) A kind of information interacting method, device and electronic equipment
CN108108142A (en) Voice information processing method, device, terminal device and storage medium
AU2019246868A1 (en) Method and system for voice activation
WO2017128775A1 (en) Voice control system, voice processing method and terminal device
CN110473555B (en) Interaction method and device based on distributed voice equipment
US20050124322A1 (en) System for communication information from a server via a mobile communication device
CN113705943B (en) Task management method and system based on voice intercom function and mobile device
KR20140067687A (en) Car system for interactive voice recognition
US8321227B2 (en) Methods and devices for appending an address list and determining a communication profile
CN105007365A (en) Method and apparatus for dialing extension number
CN104575496A (en) Method and device for automatically sending multimedia documents and mobile terminal
KR20150088532A (en) Apparatus for providing service during call and method for using the apparatus
US20110183725A1 (en) Hands-Free Text Messaging
CN103188633A (en) Vehicle-mounted communication system
CN101588415A (en) Voice service method and voice service system
KR20040008990A (en) Voice recognition key input wireless terminal, method for using voice in place of key input in wireless terminal, and recording medium therefore
CN110602325B (en) Voice recommendation method and device for terminal
CN110839169B (en) Intelligent equipment remote control device and control method based on same
CN107889085A (en) Voice signal is input to method, electronic installation and the computer of intelligent apparatus
CN111343226A (en) Vehicle and control method of vehicle

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20141015)