CN106601242A - Executing method and device of operation event and terminal - Google Patents
Executing method and device of operation event and terminal
- Publication number
- CN106601242A CN106601242A CN201510673325.0A CN201510673325A CN106601242A CN 106601242 A CN106601242 A CN 106601242A CN 201510673325 A CN201510673325 A CN 201510673325A CN 106601242 A CN106601242 A CN 106601242A
- Authority
- CN
- China
- Prior art keywords
- scene mode
- instruction
- voice operation
- operation events
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/04—Programme control other than numerical control, i.e. in sequence controllers or logic controllers
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification
- G10L17/22—Interactive procedures; Man-machine interfaces
- G10L17/24—Interactive procedures; Man-machine interfaces the user being prompted to utter a password or a predefined phrase
Abstract
The present invention provides a method, a device, and a terminal for executing operation events. The method comprises: receiving a voice operation instruction from a terminal user; opening a scene mode corresponding to the voice operation instruction, wherein the scene mode comprises two or more operation events; and executing the operation events included in the scene mode. The technical scheme of the invention solves the problem in the related art that a single voice instruction can execute only one operation, which results in a poor user experience; it thereby reduces the operational complexity of the terminal and greatly improves the user experience.
Description
Technical field
The present invention relates to the communications field, and in particular to a method, a device, and a terminal for executing operation events.
Background technology
With the development of speech recognition technology, more and more mobile terminals integrate a voice control system, and a user can operate the mobile terminal by speaking instructions. However, existing systems are single-instruction systems: the user speaks one action command, such as "call a certain number", and after successful recognition the mobile terminal performs that single action of dialing the number.

Because one instruction performs only one action, a user in a given scene may want the terminal to do more than one thing. For example, when the user gets into a car, he may want the phone to open Bluetooth, play music, open navigation, and even adjust the volume to a level suitable for the car. Under the existing voice control system these actions require at least four voice instructions. This makes operation feel tedious, degrades the voice-control user experience, and ultimately renders the voice control system ineffective.

For the problem in the related art that a single voice instruction can execute only one operation, causing a poor user experience, no effective solution has yet been proposed.
Summary of the invention
To solve the above technical problem, the present invention provides a method, a device, and a terminal for executing operation events.

According to one aspect of the invention, a method for executing operation events is provided, comprising: receiving a voice operation instruction from a terminal user; opening a scene mode corresponding to the voice operation instruction, wherein the scene mode comprises two or more operation events; and executing the operation events included in the scene mode.

Preferably, before opening the scene mode corresponding to the voice operation instruction, the method further comprises: configuring the correspondence between voice operation instructions and scene modes.

Preferably, before receiving the voice operation instruction of the terminal user, the method further comprises: detecting a specified event; and opening a speech mode when triggered by the specified event, wherein the voice operation instruction of the terminal user is received in the speech mode.

Preferably, before receiving the voice operation instruction of the terminal user, the method further comprises: configuring the operation events included in the scene mode.

Preferably, the operation events comprise at least one of the following: adjusting the volume to a specified value, opening a navigation function, opening a music player, and calling a designated contact.

Preferably, the voice operation instruction comprises a single voice operation instruction.
According to another aspect of the invention, a device for executing operation events is also provided, comprising: a receiving module, configured to receive a voice operation instruction from a terminal user; a first opening module, configured to open a scene mode corresponding to the voice operation instruction, wherein the scene mode comprises two or more operation events; and an executing module, configured to execute the operation events included in the scene mode.

Preferably, the device further comprises: a first configuration module, connected to the first opening module, configured to configure the correspondence between voice operation instructions and scene modes.

Preferably, the device further comprises: a detection module, configured to detect a specified event; and a second opening module, connected to the detection module, configured to open a speech mode when triggered by the specified event, wherein the voice operation instruction of the terminal user is received in the speech mode.

Preferably, the device further comprises: a second configuration module, configured to configure the operation events included in the scene mode.

According to another aspect of the invention, a terminal is also provided, comprising the device for executing operation events described in any of the above items.
With the present invention, a voice operation instruction is received and a corresponding scene mode including multiple operation events is opened, so that a single voice operation instruction can control the terminal to execute multiple operation events. This solves the problem in the related art that a single voice instruction can execute only one operation, causing a poor user experience; it thereby reduces the operational complexity of the terminal and greatly improves the user experience.
Description of the drawings
The accompanying drawings described here provide a further understanding of the present invention and constitute a part of this application. The schematic embodiments of the invention and their description explain the invention and do not unduly limit it. In the drawings:
Fig. 1 is a flowchart of the method for executing operation events according to an embodiment of the present invention;
Fig. 2 is a structural block diagram of the device for executing operation events according to an embodiment of the present invention;
Fig. 3 is another structural block diagram of the device for executing operation events according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a terminal according to an example of the present invention;
Fig. 5 is a flowchart of a voice control method based on scene modes according to a preferred embodiment of the present invention.
Specific embodiments
The present invention is described in detail below with reference to the drawings and in conjunction with the embodiments. It should be noted that, where no conflict arises, the embodiments of this application and the features in the embodiments may be combined with each other.

Other features and advantages of the present invention will be set forth in the following description, and will in part become apparent from the description or be understood by practicing the invention. The objects and other advantages of the invention may be realized and obtained by the structures particularly pointed out in the written description, the claims, and the accompanying drawings.

To help those skilled in the art better understand the solution of the present invention, the technical solutions in the embodiments of the invention are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
An embodiment of the present invention provides a method for executing operation events. Fig. 1 is a flowchart of the method for executing operation events according to the embodiment. As shown in Fig. 1, the method comprises the following steps:
Step S102: receiving a voice operation instruction from a terminal user;
Step S104: opening a scene mode corresponding to the voice operation instruction, wherein the scene mode comprises two or more operation events;
Step S106: executing the operation events included in the scene mode.
Through the above steps, a voice operation instruction is received and a corresponding scene mode including multiple operation events is opened, so that a single voice operation instruction can control the terminal to execute multiple operation events. This solves the problem in the related art that a single voice instruction can execute only one operation, causing a poor user experience; it thereby reduces the operational complexity of the terminal and greatly improves the user experience.

It should be noted that the executing body of steps S102 to S106 may be a terminal such as a mobile phone or a tablet computer, but is not limited thereto.
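As a hedged illustration only (the patent gives no code, and the names `execute_instruction` and `scene_modes` are hypothetical), steps S102 to S106 can be sketched as a lookup from a single instruction to a list of operation events:

```python
from typing import Callable, Dict, List

# Hypothetical sketch of steps S102-S106: a single recognized voice
# instruction selects a scene mode, and every operation event in that
# mode is executed. All names are illustrative, not from the patent.
def execute_instruction(instruction: str,
                        scene_modes: Dict[str, List[Callable[[], str]]]) -> List[str]:
    events = scene_modes.get(instruction)   # S104: open the matching scene mode
    if events is None:
        return []                           # no scene mode matches this instruction
    return [event() for event in events]    # S106: execute all operation events

scene_modes = {
    "I am in the car": [lambda: "bluetooth opened",
                        lambda: "music playing",
                        lambda: "navigation opened"],
}
results = execute_instruction("I am in the car", scene_modes)  # S102: instruction received
```

A single spoken phrase thus fans out into several operation events, which is the core of the claimed scheme.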
Optionally, before step S104 is executed, that is, before the scene mode corresponding to the voice operation instruction is opened, the following technical solution may also be executed: configuring the correspondence between voice operation instructions and scene modes. For example, the voice operation instruction "I am in the car" can be configured so that, according to the correspondence, a pre-configured scene mode is found and the multiple operation events in that scene mode are executed, which may be: opening music, opening navigation, and so on.
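A minimal sketch of this configuration step, assuming a simple in-memory mapping (the function name `configure_correspondence` is hypothetical, not from the patent):

```python
# Hypothetical sketch: configure the correspondence between a voice
# operation instruction and a pre-configured scene mode before use.
correspondence = {}

def configure_correspondence(instruction, scene_mode, operation_events):
    correspondence[instruction] = {"scene_mode": scene_mode,
                                   "events": list(operation_events)}

configure_correspondence("I am in the car", "driving",
                         ["open music", "open navigation"])
```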
In fact, not every received voice operation instruction will open a scene mode. Before the terminal receives the voice operation instruction of the user, a specified event is also detected; triggered by the specified event, the speech mode is opened, and the voice operation instruction of the terminal user is received in the speech mode. That is, the user needs to open the speech mode in advance; only then, when the terminal receives the specified voice operation instruction, can the scene mode be opened and the multiple operation events executed.
In an optional example, before receiving the voice operation instruction of the terminal user, the method also includes configuring the operation events included in the scene mode. The configured operation events may include multiple executable events and can be set entirely according to the user's requirements, wherein the operation events comprise at least one of the following: adjusting the volume to a specified value, opening a navigation function, opening a music player, and calling a designated contact.
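The four listed operation events could be modeled, purely as an illustration (the patent prescribes no data model), as an enumeration plus a describing helper:

```python
from enum import Enum

# Illustrative only: the patent lists these four operation events but
# does not specify how they are represented.
class OperationEvent(Enum):
    ADJUST_VOLUME = "adjust volume to a specified value"
    OPEN_NAVIGATION = "open navigation function"
    OPEN_MUSIC_PLAYER = "open music player"
    CALL_CONTACT = "call a designated contact"

def describe(event: OperationEvent, argument=None) -> str:
    # Render an event, optionally with its parameter (e.g. a volume level).
    return event.value if argument is None else f"{event.value}: {argument}"
```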
The above execution process of the embodiment is illustrated below with an example, which is not intended to limit the embodiment of the present invention.

A terminal user can summarize the scene modes he is often in and add them, such as "I am in the car", "I am taking a bath", or "I am in a meeting". The terminal offers various executable actions for the user to choose from, and the user selects a different set of actions for each scene mode. For example, for the scene mode "I am in the car" the user selects: open Bluetooth, play music, adjust the music volume to level 10, and open navigation. When the user is in the "I am in the car" scene, he says the instruction "I am in the car"; after the terminal's speech recognition module recognizes it and the match succeeds, the terminal automatically opens Bluetooth, plays music, adjusts the volume to level 10, and opens navigation for the user.
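The in-car example above, expressed as data plus a small dispatcher (a sketch only; the action tuples are an assumed representation):

```python
# Assumed representation of the "I am in the car" scene mode: each
# action is an (action_type, action_object) pair, as in the example.
in_car_actions = [("open", "bluetooth"),
                  ("play", "music"),
                  ("set_volume", 10),
                  ("open", "navigation")]

def run_scene(actions):
    # Execute every configured action in order, returning a log of
    # what the terminal would do.
    return [f"{action_type} {action_object}" for action_type, action_object in actions]

log = run_scene(in_car_actions)
```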
With the above technical solution of the embodiment, when the user is in a certain scene, a single instruction conveniently makes the terminal perform the multiple specified actions. This avoids the situation where, each time the user enters the scene, he must issue several voice instructions to make the terminal perform the actions one by one; that makes operation tedious on the one hand, and on the other hand costs time on every voice interaction, making efficiency very low. Both aspects substantially reduce the user's enthusiasm for using the voice control system. With the technical solution provided by the embodiment, a process that originally needed multiple interactions needs only one. Assuming the number of interactions is N, the interaction efficiency is improved by a factor of N; if the misrecognition rate that is hard to avoid in each interaction, and the interaction failures it causes, are also considered, the efficiency gain is even greater. This significantly improves the user experience of the voice control system.
In the technical solutions of the above embodiment and example, the user can add custom scene modes, and of course the terminal can also preset various common scene modes. The user adds custom executable actions, and the terminal can likewise preset various common executable actions. The user selects and configures multiple actions from the executable actions for a custom scene mode, and the terminal saves the user-defined scene modes and their corresponding actions to an instruction database.
In a given scene mode, the user starts the speech recognition application by a voice wake-up instruction, an earphone keystroke, or other hardware means. Once the speech recognition application is in speech recognition mode, the user says the instruction corresponding to the scene mode. The speech recognition application recognizes the user instruction and matches it against the existing scene mode instructions. If the match succeeds, each action required by the instruction is executed; if the match fails, the user is prompted to re-enter the instruction, and if the number of failures exceeds a specified number, the flow ends.
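The match-or-retry behavior can be sketched as follows, with an assumed failure limit of three attempts (the limit and the function name are illustrative):

```python
# Sketch of the match/retry logic: each failed match prompts the user
# again; after `max_failures` failures the flow ends with no match.
def match_instruction(spoken_attempts, known_instructions, max_failures=3):
    failures = 0
    for spoken in spoken_attempts:
        if spoken in known_instructions:
            return spoken              # match succeeded: caller executes the actions
        failures += 1                  # match failed: user is prompted to re-enter
        if failures >= max_failures:
            break                      # too many failures: the flow terminates
    return None
```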
It should be noted that, for brevity, each of the foregoing method embodiments is expressed as a series of action combinations. Those skilled in the art should understand, however, that the present invention is not limited by the described order of actions, because according to the present invention some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also understand that the embodiments described in this specification are preferred embodiments, and the actions and modules involved are not necessarily essential to the invention.
This embodiment also provides a device for executing operation events, which is used to implement the above embodiments and preferred implementations; what has already been explained is not repeated. The modules involved in the device are described below. As used below, the term "module" may be a combination of software and/or hardware that realizes a predetermined function. Although the devices described in the following embodiments are preferably realized in software, realization in hardware, or in a combination of software and hardware, is also possible and conceivable. Fig. 2 is a structural block diagram of the device for executing operation events according to the embodiment of the present invention. As shown in Fig. 2, the device includes:
a receiving module 20, configured to receive a voice operation instruction from a terminal user;
a first opening module 22, connected to the receiving module 20, configured to open a scene mode corresponding to the voice operation instruction, wherein the scene mode comprises two or more operation events; and
an executing module 24, configured to execute the operation events included in the scene mode.
Through the combined use of the above modules, a voice operation instruction is received and a corresponding scene mode including multiple operation events is opened, so that a single voice operation instruction can control the terminal to execute multiple operation events. This solves the problem in the related art that a single voice instruction can execute only one operation, causing a poor user experience; it thereby reduces the operational complexity of the terminal and greatly improves the user experience.
Fig. 3 is another structural block diagram of the device for executing operation events according to the embodiment of the present invention. As shown in Fig. 3, the device further includes: a first configuration module 26, connected to the first opening module 22, configured to configure the correspondence between voice operation instructions and scene modes.

As shown in Fig. 3, the device also includes: a detection module 28, configured to detect a specified event; a second opening module 30, connected to the detection module 28, configured to open the speech mode when triggered by the specified event, wherein the voice operation instruction of the terminal user is received in the speech mode; and a second configuration module 32, connected to the first opening module 22, configured to configure the operation events included in the scene mode.
An embodiment of the present invention also provides a terminal, comprising the device for executing operation events described in any of the above items.
To help understand the device for executing operation events, an example of the present invention provides a terminal. Fig. 4 is a schematic structural diagram of the terminal according to this example. As shown in Fig. 4, the terminal includes:
a scene mode customization unit 40: through this unit the user adds custom scene modes according to a rule; a scene mode name is usually a phrase with semantics, for recognition by the voice recognition unit;

an executable action customization unit 42: through this unit the user adds custom executable actions according to a rule; the rule divides an executable action into an action type and an action object, and the combination of the two can be translated into behavior the terminal can understand;

a scene mode configuration unit 44 (equivalent to the second configuration module 32 in the above embodiment): through this unit the user selects and configures the actions to be performed for a custom scene mode;

a scene mode storage unit 46: a many-to-many database model, used to store the many-to-many relationship between scene modes and executable actions;
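One possible realization of this many-to-many model, sketched with SQLite (the table and column names are assumptions, not taken from the patent):

```python
import sqlite3

# Assumed schema: a join table links scene modes to executable
# actions, so one mode can hold many actions and one action can
# belong to many modes.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE scene_mode (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE action (id INTEGER PRIMARY KEY,
                     action_type TEXT, action_object TEXT);
CREATE TABLE mode_action (mode_id INTEGER REFERENCES scene_mode(id),
                          action_id INTEGER REFERENCES action(id),
                          PRIMARY KEY (mode_id, action_id));
""")
conn.execute("INSERT INTO scene_mode VALUES (1, 'I am in the car')")
conn.executemany("INSERT INTO action VALUES (?, ?, ?)",
                 [(1, "open", "bluetooth"), (2, "open", "navigation")])
conn.executemany("INSERT INTO mode_action VALUES (1, ?)", [(1,), (2,)])
rows = conn.execute("""
    SELECT a.action_type, a.action_object
    FROM mode_action ma JOIN action a ON a.id = ma.action_id
    WHERE ma.mode_id = 1 ORDER BY a.id
""").fetchall()
```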
a speech recognition wake-up unit 48 (equivalent to the second opening module 30 of the above embodiment): used to wake up the speech recognition application; the wake-up can be realized by a voice wake-up instruction, an earphone keystroke, a terminal button, or other hardware means;

a voice recognition unit 50 (equivalent to the receiving module 20 of the above embodiment): receives the user's scene mode instruction, recognizes it, and matches it against the custom scene mode names stored in the scene mode storage unit; if the match succeeds, each action required by the instruction is executed; if the match fails, the user is prompted to re-enter the instruction, and if the number of failures exceeds a specified number, the flow ends;

an action execution unit 52 (equivalent to the executing module 24 of the above embodiment): after a scene mode instruction matches successfully, the action execution unit parses out, according to the rule, the action type and action object of each action required by the instruction, and according to the parsing result initiates instructions the terminal can understand, so that the terminal completes the execution of each action.
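A sketch of the parsing rule the action execution unit applies, assuming (as an illustration only) that each stored action is a phrase whose first word is the action type and whose remainder is the action object:

```python
# Assumed rule: "open navigation" -> type "open", object "navigation".
def parse_action(action_phrase: str):
    action_type, _, action_object = action_phrase.partition(" ")
    return action_type, action_object

parsed = [parse_action(p) for p in ("open bluetooth", "play music")]
```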
To better understand the method for executing operation events, it is described in detail below with a preferred embodiment. Fig. 5 is a flowchart of the voice control method based on scene modes according to the preferred embodiment of the present invention. As shown in Fig. 5, the method comprises the following steps:
Step S502: the user adds custom scene modes; of course, the terminal can also preset various common scene modes, and this manner is included in the present invention.
Step S504: the user adds custom executable actions; of course, the terminal can also preset various common executable actions, and this manner is included in the present invention.
Step S506: the user selects and configures multiple actions from the executable actions for a custom scene mode; scene modes and executable actions are in a many-to-many relationship, so during configuration the executable actions can be selected with the scene mode as the subject, or the scene mode can be selected with the executable action as the subject, and both manners are included in the present invention.
Step S508: the terminal saves the user-defined scene modes and their corresponding actions to an instruction database; the saving method includes but is not limited to database storage, file storage, and the like.
Step S510: in a given scene mode, the user starts the speech recognition application by a voice wake-up instruction, an earphone keystroke, or other hardware means; once the speech recognition application is in speech recognition mode, the user says the instruction corresponding to the scene mode.
Step S512: the speech recognition application recognizes the user instruction and matches it against the existing scene mode instructions; if the match succeeds, each action required by the instruction is executed; if the match fails, the user is prompted to re-enter the instruction, and if the number of failures exceeds a specified number, the flow ends.
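The flow of Fig. 5, condensed into one hedged sketch (every name here is illustrative; the patent defines the steps, not this code):

```python
# End-to-end sketch of steps S502-S512: configure, wake, match, execute.
instruction_db = {}                               # S508: instruction database

def add_scene_mode(name, actions):                # S502/S504/S506: user configuration
    instruction_db[name] = list(actions)

def handle_utterance(spoken, speech_mode_open, max_failures=3, failures=0):
    # S510: instructions are only accepted once the speech mode is open.
    if not speech_mode_open:
        return "speech mode not open", failures
    # S512: match against the configured scene mode instructions.
    if spoken in instruction_db:
        return [f"execute: {a}" for a in instruction_db[spoken]], failures
    failures += 1
    return ("flow ends" if failures >= max_failures else "please re-enter"), failures

add_scene_mode("I am in the car", ["open bluetooth", "play music"])
result, _ = handle_utterance("I am in the car", speech_mode_open=True)
```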
In summary, the embodiment of the present invention achieves the following technical effect: the problem in the related art that a single voice instruction can execute only one operation, causing a poor user experience, is solved; the operational complexity of the terminal is reduced, the user experience is greatly improved, and the application scenarios of the terminal are extended.
In another embodiment, software is also provided, which is used to execute the technical solutions described in the above embodiments and preferred implementations.

In another embodiment, a storage medium is also provided, in which the above software is stored. The storage medium includes but is not limited to: an optical disc, a floppy disk, a hard disk, a rewritable memory, and the like.
It should be noted that the terms "first", "second", and so on in the specification, claims, and above drawings are used to distinguish similar objects, not to describe a specific order or sequence. It should be understood that objects so used are interchangeable where appropriate, so that the embodiments of the invention described here can be implemented in orders other than those illustrated or described here. Furthermore, the terms "comprising" and "having", and any variations of them, are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that contains a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units not expressly listed or inherent to the process, method, product, or device.
Obviously, those skilled in the art should understand that each of the above modules or steps of the present invention can be realized with a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they can be realized with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps shown or described can be performed in an order different from that here, or they can each be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module. Thus, the present invention is not restricted to any specific combination of hardware and software.
The above are only preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (11)
1. A method for executing operation events, characterized by comprising:
receiving a voice operation instruction from a terminal user;
opening a scene mode corresponding to the voice operation instruction, wherein the scene mode comprises two or more operation events; and
executing the operation events included in the scene mode.
2. The method according to claim 1, characterized in that, before opening the scene mode corresponding to the voice operation instruction, the method further comprises:
configuring the correspondence between voice operation instructions and scene modes.
3. The method according to claim 1, characterized in that, before receiving the voice operation instruction of the terminal user, the method further comprises:
detecting a specified event; and
opening a speech mode when triggered by the specified event, wherein the voice operation instruction of the terminal user is received in the speech mode.
4. The method according to claim 1, characterized in that, before receiving the voice operation instruction of the terminal user, the method further comprises:
configuring the operation events included in the scene mode.
5. The method according to any one of claims 1 to 4, characterized in that the operation events comprise at least one of the following: adjusting the volume to a specified value, opening a navigation function, opening a music player, and calling a designated contact.
6. The method according to any one of claims 1 to 4, characterized in that the voice operation instruction comprises a single voice operation instruction.
7. A device for executing operation events, characterized by comprising:
a receiving module, configured to receive a voice operation instruction from a terminal user;
a first opening module, configured to open a scene mode corresponding to the voice operation instruction, wherein the scene mode comprises two or more operation events; and
an executing module, configured to execute the operation events included in the scene mode.
8. The device according to claim 7, characterized in that the device further comprises:
a first configuration module, connected to the first opening module, configured to configure the correspondence between voice operation instructions and scene modes.
9. The device according to claim 7, characterized in that the device further comprises:
a detection module, configured to detect a specified event; and
a second opening module, connected to the detection module, configured to open a speech mode when triggered by the specified event, wherein the voice operation instruction of the terminal user is received in the speech mode.
10. The device according to claim 7, characterized in that the device further comprises:
a second configuration module, configured to configure the operation events included in the scene mode.
11. A terminal, characterized by comprising the device according to any one of claims 7 to 10.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510673325.0A CN106601242A (en) | 2015-10-16 | 2015-10-16 | Executing method and device of operation event and terminal |
PCT/CN2015/098022 WO2016184095A1 (en) | 2015-10-16 | 2015-12-21 | Operation event execution method and apparatus, and terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510673325.0A CN106601242A (en) | 2015-10-16 | 2015-10-16 | Executing method and device of operation event and terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106601242A true CN106601242A (en) | 2017-04-26 |
Family
ID=57319349
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510673325.0A Withdrawn CN106601242A (en) | 2015-10-16 | 2015-10-16 | Executing method and device of operation event and terminal |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106601242A (en) |
WO (1) | WO2016184095A1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107277225A (en) * | 2017-05-04 | 2017-10-20 | 北京奇虎科技有限公司 | Method, device and the smart machine of Voice command smart machine |
CN109117233A (en) * | 2018-08-22 | 2019-01-01 | 百度在线网络技术(北京)有限公司 | Method and apparatus for handling information |
CN110164426A (en) * | 2018-02-10 | 2019-08-23 | 佛山市顺德区美的电热电器制造有限公司 | Sound control method and computer storage medium |
CN113401134A (en) * | 2021-06-10 | 2021-09-17 | 吉利汽车研究院(宁波)有限公司 | Contextual model self-defining method and device, electronic equipment and storage medium |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108132768A (en) * | 2016-12-01 | 2018-06-08 | 中兴通讯股份有限公司 | The processing method of phonetic entry, terminal and network server |
CN110754097B (en) * | 2017-08-18 | 2022-06-07 | Oppo广东移动通信有限公司 | Call control method, device, terminal equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102833421A (en) * | 2012-09-17 | 2012-12-19 | 东莞宇龙通信科技有限公司 | Mobile terminal and reminding method |
CN202798881U (en) * | 2012-07-31 | 2013-03-13 | 北京播思软件技术有限公司 | Apparatus capable of controlling running of mobile equipment by using voice command |
CN104866181A (en) * | 2015-06-08 | 2015-08-26 | 北京金山安全软件有限公司 | Method and device for executing multi-operation event |
CN105739940A (en) * | 2014-12-08 | 2016-07-06 | 中兴通讯股份有限公司 | Storage method and device |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS6117005B2 (en) * | 1979-12-21 | 1986-05-06 | Matsushita Electric Ind Co Ltd | |
JP2002283259A (en) * | 2001-03-27 | 2002-10-03 | Sony Corp | Operation teaching device and operation teaching method for robot device and storage medium |
CN103197571A (en) * | 2013-03-15 | 2013-07-10 | 张春鹏 | Control method, device and system |
- 2015-10-16 CN CN201510673325.0A patent/CN106601242A/en not_active Withdrawn
- 2015-12-21 WO PCT/CN2015/098022 patent/WO2016184095A1/en active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN202798881U (en) * | 2012-07-31 | 2013-03-13 | 北京播思软件技术有限公司 | Apparatus capable of controlling running of mobile equipment by using voice command |
CN102833421A (en) * | 2012-09-17 | 2012-12-19 | 东莞宇龙通信科技有限公司 | Mobile terminal and reminding method |
CN105739940A (en) * | 2014-12-08 | 2016-07-06 | 中兴通讯股份有限公司 | Storage method and device |
CN104866181A (en) * | 2015-06-08 | 2015-08-26 | 北京金山安全软件有限公司 | Method and device for executing multi-operation event |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107277225A (en) * | 2017-05-04 | 2017-10-20 | 北京奇虎科技有限公司 | Method, device and the smart machine of Voice command smart machine |
CN107277225B (en) * | 2017-05-04 | 2020-04-24 | 北京奇虎科技有限公司 | Method and device for controlling intelligent equipment through voice and intelligent equipment |
CN110164426A (en) * | 2018-02-10 | 2019-08-23 | 佛山市顺德区美的电热电器制造有限公司 | Sound control method and computer storage medium |
CN110164426B (en) * | 2018-02-10 | 2021-10-26 | 佛山市顺德区美的电热电器制造有限公司 | Voice control method and computer storage medium |
CN109117233A (en) * | 2018-08-22 | 2019-01-01 | 百度在线网络技术(北京)有限公司 | Method and apparatus for handling information |
US11474779B2 (en) | 2018-08-22 | 2022-10-18 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for processing information |
CN113401134A (en) * | 2021-06-10 | 2021-09-17 | 吉利汽车研究院(宁波)有限公司 | Contextual model self-defining method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
WO2016184095A1 (en) | 2016-11-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106601242A (en) | Executing method and device of operation event and terminal | |
US11302302B2 (en) | Method, apparatus, device and storage medium for switching voice role | |
US10838765B2 (en) | Task execution method for voice input and electronic device supporting the same | |
US10839806B2 (en) | Voice processing method and electronic device supporting the same | |
CN103646646B (en) | A kind of sound control method and electronic equipment | |
CN104599669A (en) | Voice control method and device | |
US20190267001A1 (en) | System for processing user utterance and controlling method thereof | |
KR100679043B1 (en) | Apparatus and method for spoken dialogue interface with task-structured frames | |
CN102292766B (en) | Method and apparatus for providing compound models for speech recognition adaptation | |
EP2521121B1 (en) | Method and device for voice controlling | |
CN106658129A (en) | Emotion-based terminal control method and apparatus, and terminal | |
CN106356059A (en) | Voice control method, device and projector | |
CN107210040A (en) | The operating method of phonetic function and the electronic equipment for supporting this method | |
CN109326289A (en) | Exempt to wake up voice interactive method, device, equipment and storage medium | |
CN104978964B (en) | Phonetic control command error correction method and system | |
CN109584875A (en) | A kind of speech ciphering equipment control method, device, storage medium and speech ciphering equipment | |
CN107018228B (en) | Voice control system, voice processing method and terminal equipment | |
EP3734596A1 (en) | Server for determining target device based on speech input of user and controlling target device, and operation method of the server | |
CN105975063B (en) | A kind of method and apparatus controlling intelligent terminal | |
CN109637548A (en) | Voice interactive method and device based on Application on Voiceprint Recognition | |
CN103426429B (en) | Sound control method and device | |
CN109656512A (en) | Exchange method, device, storage medium and terminal based on voice assistant | |
CN110246499A (en) | The sound control method and device of home equipment | |
JP2020003774A (en) | Method and apparatus for processing speech | |
CN109445879A (en) | Method, storage medium and the equipment of monitor video are shown with suspended window |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication |
Application publication date: 20170426 |