CN110459222A - Voice control method, voice control apparatus and terminal device - Google Patents
Voice control method, voice control apparatus and terminal device
- Publication number
- CN110459222A CN110459222A CN201910844308.7A CN201910844308A CN110459222A CN 110459222 A CN110459222 A CN 110459222A CN 201910844308 A CN201910844308 A CN 201910844308A CN 110459222 A CN110459222 A CN 110459222A
- Authority
- CN
- China
- Prior art keywords
- control
- voice recognition
- recognition data
- voice
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Landscapes
- Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The application relates to the field of voice control technology and provides a voice control method, a voice control apparatus, a terminal device, and a computer-readable storage medium. The voice control method includes: obtaining voice recognition data of a user, the voice recognition data being obtained by performing speech recognition on voice data input by the user; if the voice recognition data includes preset information, obtaining, according to the preset information, multiple control instructions associated with the voice recognition data, where the preset information includes a preset segmentation marker and/or a preset instruction; and controlling a designated terminal to execute the control operation corresponding to each of the multiple control instructions. The voice control method can improve control efficiency when a user performs multiple operations on a terminal device by voice.
Description
Technical field
The application belongs to the field of voice control technology, and in particular relates to a voice control method, a voice control apparatus, a terminal device, and a computer-readable storage medium.
Background technique
With the continuous development of speech recognition technology, terminal devices such as smart speakers and mobile terminals now offer various applications based on speech recognition, such as voice assistants.
In routine use, when a user wants to issue multiple instructions to a terminal device such as a smart speaker, the user generally needs to say the speaker's preset wake-up word multiple times, waking the speaker once for each voice instruction issued. Moreover, waking the speaker by wake-up word may fail, forcing the user to retry repeatedly. As a result, control efficiency is low when a user performs multiple operations on a terminal device by voice.
Summary of the invention
Embodiments of the application provide a voice control method, a voice control apparatus, a terminal device, and a computer-readable storage medium that can improve control efficiency when a user performs multiple operations on a terminal device by voice.
In a first aspect, an embodiment of the application provides a voice control method, including:
obtaining voice recognition data of a user, the voice recognition data being obtained by performing speech recognition on voice data input by the user;
if the voice recognition data includes preset information, obtaining, according to the preset information, multiple control instructions associated with the voice recognition data, where the preset information includes a preset segmentation marker and/or a preset instruction; and
controlling a designated terminal to execute the control operation corresponding to each of the multiple control instructions.
In a second aspect, an embodiment of the application provides a voice control apparatus, including:
a first obtaining module, configured to obtain voice recognition data of a user, the voice recognition data being obtained by performing speech recognition on voice data input by the user;
a second obtaining module, configured to obtain, if the voice recognition data includes preset information, multiple control instructions associated with the voice recognition data according to the preset information, where the preset information includes a preset segmentation marker and/or a preset instruction; and
a control module, configured to control a designated terminal to execute the control operation corresponding to each of the multiple control instructions.
In a third aspect, an embodiment of the application provides a terminal device, including a memory, a processor, a display, and a computer program stored in the memory and executable on the processor, where the processor implements the voice control method of the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the application provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the voice control method of the first aspect.
Compared with the prior art, the embodiments of the application have the following beneficial effects. Voice recognition data of a user can be obtained; if the voice recognition data includes preset information, where the preset information includes a preset segmentation marker and/or a preset instruction, then multiple control instructions associated with the voice recognition data can be obtained according to that marker and/or instruction, so that a terminal device such as a smart speaker or mobile terminal can execute multiple control operations according to the obtained voice recognition data. The user can thus control the terminal device to perform multiple operations by issuing a single voice instruction, without repeatedly waking a terminal device such as a smart speaker to issue multiple voice instructions separately. The embodiments of the application can therefore improve control efficiency when a user performs multiple operations on a terminal device by voice, and improve the user's interactive experience.
Detailed description of the invention
To explain the technical solutions in the embodiments of the application more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the accompanying drawings described below show only some embodiments of the application; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a voice control method provided by an embodiment of the application;
Fig. 2 is a schematic flowchart of another voice control method provided by an embodiment of the application;
Fig. 3 is a schematic flowchart of another voice control method provided by an embodiment of the application;
Fig. 4 is a schematic diagram of information exchange provided by an embodiment of the application;
Fig. 5 is a schematic structural diagram of a voice control apparatus provided by an embodiment of the application;
Fig. 6 is a schematic structural diagram of a terminal device provided by an embodiment of the application.
Detailed description of the embodiments
In the following description, specific details such as particular system structures and techniques are set forth for illustration rather than limitation, in order to provide a thorough understanding of the embodiments of the application. However, it will be clear to those skilled in the art that the application may also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, apparatuses, circuits, and methods are omitted so as not to obscure the description of the application with unnecessary details.
It should be understood that, when used in the specification and the appended claims, the term "comprising" indicates the presence of the described features, integers, steps, operations, elements, and/or components, but does not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or sets thereof.
It should also be understood that the term "and/or" used in the specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes these combinations.
As used in the specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
References to "one embodiment", "some embodiments", and the like in the specification mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, the phrases "in one embodiment", "in some embodiments", "in some other embodiments", "in other embodiments", and the like appearing in different places in the specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments" unless otherwise specifically emphasized. The terms "include", "comprise", "have", and their variants all mean "including but not limited to" unless otherwise specifically emphasized.
The voice control method provided by the embodiments of the application can be applied to terminal devices such as servers, smart speakers, mobile phones, tablet computers, wearable devices, in-vehicle devices, augmented reality (AR)/virtual reality (VR) devices, laptops, ultra-mobile personal computers (UMPC), netbooks, and personal digital assistants (PDA). The embodiments of the application do not place any restriction on the specific type of terminal device.
Specifically, Fig. 1 shows a flowchart of a voice control method provided by an embodiment of the application. The voice control method can be applied to terminal devices such as a server, a smart speaker, or a mobile phone.
In some embodiments, the voice control method can be applied to a server (such as a cloud server), and the server can communicate with other terminals, such as smart speakers, mobile phones, and wearable devices, to transfer information. For example, a smart speaker can receive the user's voice data, perform speech recognition on that voice data using speech recognition technology to obtain voice recognition data, and then transmit the voice recognition data to the designated server, which can execute the voice control method according to the voice recognition data. Speech recognition technology is also referred to as automatic speech recognition (ASR). Of course, in some embodiments, the voice control method can also be executed locally by a terminal device such as a smart speaker or a wearable device. The type of terminal device applying the voice control method is not restricted here.
As shown in Fig. 1, the voice control method includes:
Step S101: obtain voice recognition data of the user, the voice recognition data being obtained by performing speech recognition on voice data input by the user.
In the embodiments of the application, the voice data can be data obtained by a terminal device such as a smart speaker receiving the user's voice signal through a sensing device such as a microphone. In some embodiments, a terminal device such as a smart speaker may start obtaining the voice data input by the user after the speaker detects a preset wake-up word; it may also start obtaining the voice data input by the user after detecting the user pressing or clicking a specific physical or virtual key. The trigger can be configured according to the actual application scenario.
There are many possible speech recognition methods; for example, speech recognition can be implemented based on a hidden Markov model (HMM), an artificial neural network, and so on. Illustratively, the voice recognition data can include text information corresponding to the voice data.
It should be noted that, in the embodiments of the application, the step of performing speech recognition on the voice data input by the user can be completed by the terminal device executing the voice control method; alternatively, another terminal may perform speech recognition on the voice data input by the user to obtain the voice recognition data and then transmit it to the terminal device executing the voice control method.
Step S102: if the voice recognition data includes preset information, obtain, according to the preset information, multiple control instructions associated with the voice recognition data, where the preset information includes a preset segmentation marker and/or a preset instruction.
In the embodiments of the application, the preset information may include information preset by developers and may also include information customized in advance by the user. There are many ways to obtain the multiple control instructions associated with the voice recognition data according to the preset information; for example, the multiple control instructions can be determined by applying at least one of keyword extraction, semantic recognition, and preset table lookup to the voice recognition data.
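The branch in step S102 between the two kinds of preset information might be sketched as follows. This is a minimal illustration, not the patent's implementation: the marker words, the "good night" preset phrase, and the control-instruction names are all assumed for the example, and the fallback stands in for the keyword-extraction or semantic-recognition processing mentioned above.

```python
import re

# Hypothetical preset information; real values would be configured by
# developers or customized by the user (see step S102).
PRESET_MARKERS = ["and then", "then", "next"]
PRESET_INSTRUCTIONS = {
    "good night": ["dim_lights", "play_sleep_music"],  # user-defined mapping
}

def get_control_instructions(text: str) -> list[str]:
    """Derive the control instructions associated with recognized text."""
    # Case 1: the whole text matches a user-defined preset instruction.
    if text in PRESET_INSTRUCTIONS:
        return list(PRESET_INSTRUCTIONS[text])
    # Case 2: the text contains a preset segmentation marker; split on the
    # markers and treat each remaining segment as one instruction.
    pattern = "|".join(re.escape(m) for m in PRESET_MARKERS)
    segments = [s.strip(" ,.") for s in re.split(pattern, text) if s.strip(" ,.")]
    if len(segments) > 1:
        return segments
    # Fallback: treat the whole text as a single instruction.
    return [text.strip()]
```

For instance, "turn on the light, then play music" would yield two instructions, while "good night" would resolve through the preset-instruction table.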
The preset segmentation marker may include a preset segmentation word, a preset segmentation sentence, and so on. Illustratively, the preset segmentation marker may include words that serve a connecting function in the corresponding language, such as "then", "and then", "next", and "next step". It may also include other preset words; for example, the user can set a personal habitual phrase as the preset segmentation marker, so that the user can connect multiple voice instructions with that phrase, and the language used when issuing multiple voice instructions better matches the user's personal habits. In some embodiments, the terminal device executing the voice control method can split the voice recognition data by the preset segmentation marker to obtain multiple segmentation results, and obtain the control instruction corresponding to each segmentation result.
In some embodiments, the preset instruction can correspond to multiple preset control instructions, so that the user can control the terminal device to execute the corresponding multiple control operations through the voice corresponding to the preset instruction. Illustratively, the user may set one or more preset instructions, and the multiple control instructions corresponding to each preset instruction, through a mobile terminal such as a mobile phone.
Specifically, the mobile terminal can receive one or more preset instructions input by the user and the multiple control instructions corresponding to each preset instruction, and establish an instruction mapping table between the preset instructions and the control instructions. The preset instruction can represent the predetermined instruction issued by the user, and the control instructions corresponding to the preset instruction can represent the instructions actually implemented for that predetermined instruction. After the terminal device detects that the voice recognition data includes the preset instruction, it can quickly obtain the multiple control instructions the user actually wants executed by querying the instruction mapping table, and then carry out the subsequent control operations.
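A minimal in-memory sketch of such an instruction mapping table is shown below. The class and method names are assumptions for illustration; a real implementation would also persist the table and synchronize it between the mobile terminal and the executing device.

```python
class InstructionMappingTable:
    """Maps one user-defined preset instruction to the multiple control
    instructions that are actually to be implemented for it."""

    def __init__(self) -> None:
        self._table: dict[str, list[str]] = {}

    def register(self, preset: str, controls: list[str]) -> None:
        """Record (or overwrite) the mapping for one preset instruction."""
        self._table[preset] = list(controls)

    def lookup(self, recognized_text: str) -> list[str]:
        """Return the control instructions for recognized text, or an
        empty list if the text is not a registered preset instruction."""
        return list(self._table.get(recognized_text, []))
```

A lookup that returns an empty list signals that the recognized text is not a preset instruction, so the method falls through to the other processing paths.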
In some embodiments, if the voice recognition data does not include preset information, the control instruction associated with the voice recognition data can be determined in another preset way; for example, the voice recognition data can be processed as a whole through keyword extraction, semantic recognition, preset table lookup, and so on, to determine the associated control instruction.
Step S103: control the designated terminal to execute the control operation corresponding to each of the multiple control instructions.
In the embodiments of the application, the designated terminal can be a smart speaker, a mobile phone, a wearable device, a server, and so on.
It should be noted that the designated terminal can be the terminal device executing the voice control method, or other equipment; if the designated terminal is other equipment, the terminal device executing the voice control method can transmit the control instructions to the designated terminal through wireless or wired communication, so as to control the designated terminal to execute the control operation corresponding to each of the multiple control instructions. The specific choice of designated terminal can be made according to the actual application scenario.
Optionally, in some embodiments, the designated terminal is a smart speaker.
Obtaining the voice recognition data of the user, the voice recognition data being obtained by performing speech recognition on the voice data input by the user, includes:
a cloud server obtaining the voice recognition data of the user from the smart speaker, the voice recognition data being obtained by the smart speaker performing speech recognition on the voice data after receiving the voice data input by the user.
Controlling the designated terminal to execute the control operation corresponding to each of the multiple control instructions includes:
the cloud server sending the multiple control instructions associated with the voice recognition data to the smart speaker, so that the smart speaker executes the control operation corresponding to each of the multiple control instructions.
In the embodiments of the application, the cloud server may obtain the user's voice recognition data from the smart speaker. The cloud server can communicate with the smart speaker wirelessly and/or by wire; the specific communication mode can be configured according to the actual application scenario.
By storing the preset information on the cloud server, judging there whether the voice recognition data includes the preset information, and, if it does, obtaining the multiple control instructions associated with the voice recognition data according to the preset information, the cloud server can take on at least part of the computation. This reduces the data storage and computation pressure on the local device (such as a smart speaker) and improves the running speed. Storing the preset information on the cloud server also makes it easier for developers to maintain it.
It should be noted that, in some other embodiments, the voice control method can also be executed on the smart speaker; in that case, the smart speaker itself obtains the voice recognition data and obtains the multiple control instructions associated with it according to the preset information.
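The speaker-to-cloud division of labor described above can be sketched as a single round trip. The function names and the toy `derive`/`execute` callables below are illustrative assumptions, not the patent's interfaces.

```python
def cloud_round_trip(recognized_text, derive_instructions, execute_on_speaker):
    """The smart speaker uploads its recognition text; the cloud server
    derives the control instructions; the speaker executes each one."""
    instructions = derive_instructions(recognized_text)         # runs on the cloud
    return [execute_on_speaker(inst) for inst in instructions]  # runs locally
```

In a deployment, `derive_instructions` would be a network call to the cloud server and `execute_on_speaker` would trigger the speaker's actual control operations.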
In the embodiments of the application, voice recognition data of a user can be obtained; if the voice recognition data includes preset information, where the preset information includes a preset segmentation marker and/or a preset instruction, then multiple control instructions associated with the voice recognition data can be obtained according to that marker and/or instruction, so that a terminal device such as a smart speaker or mobile terminal can execute multiple control operations according to the obtained voice recognition data. The user can thus control the terminal device to perform multiple operations by issuing a single voice instruction, without repeatedly waking a terminal device such as a smart speaker to issue multiple voice instructions separately. The embodiments of the application can improve control efficiency when a user performs multiple operations on a terminal device by voice, and improve the user's interactive experience.
Fig. 2 shows a flowchart of another voice control method provided by an embodiment of the application. The voice control method includes:
Step S201: obtain voice recognition data of the user, the voice recognition data being obtained by performing speech recognition on voice data input by the user.
Step S202: if the voice recognition data includes a preset segmentation marker, split the voice recognition data according to the preset segmentation marker to obtain multiple segmentation results of the voice recognition data.
The preset segmentation marker may include a preset segmentation word, a preset segmentation sentence, and so on. Illustratively, the preset segmentation marker may include words that serve a connecting function in the corresponding language, such as "then", "and then", "next", and "next step". It may also include other preset words; for example, the user can set a personal habitual phrase as the preset segmentation marker, so that the user can connect multiple voice instructions with that phrase, and the language used when issuing multiple voice instructions better matches the user's personal habits.
In the embodiments of the application, the terminal device executing the voice control method can split the voice recognition data by the preset segmentation marker to obtain multiple segmentation results. Specifically, each preset segmentation marker can be used as a splitting node, dividing the voice recognition data into multiple parts, that is, into multiple segmentation results.
Illustratively, splitting the voice recognition data according to the preset segmentation marker to obtain the multiple segmentation results of the voice recognition data may include:
deleting each preset segmentation marker in the voice recognition data, so as to intercept the multiple segmentation results from the voice recognition data.
In this case, after each preset segmentation marker in the voice recognition data is deleted, the remaining voice recognition data can be divided into multiple parts, yielding each segmentation result.
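The delete-and-intercept approach can be sketched with a regular expression that removes every marker and keeps what remains. The example marker list is an assumption; sorting markers by length guards against a short marker pre-empting a longer one that contains it.

```python
import re

def split_recognition_text(text: str, markers: list[str]) -> list[str]:
    """Delete every preset segmentation marker from the text and keep the
    remaining pieces, in order, as the segmentation results."""
    # Longer markers first, so "and then" is not pre-empted by "then".
    ordered = sorted(markers, key=len, reverse=True)
    pattern = "|".join(re.escape(m) for m in ordered)
    segments = (s.strip(" ,.") for s in re.split(pattern, text))
    return [s for s in segments if s]
```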
Step S203: obtain the control instruction corresponding to each of the multiple segmentation results.
In the embodiments of the application, the control instruction corresponding to each segmentation result can be determined by at least one of keyword extraction, semantic recognition, preset table lookup, and so on.
Step S204: control the designated terminal to execute the control operation corresponding to each of the multiple control instructions.
Step S201 and step S204 of this embodiment are the same as or similar to step S101 and step S103 above, and details are not described here again.
In some embodiments, optionally, controlling the designated terminal to execute the control operation corresponding to each of the multiple control instructions includes:
determining, according to the chronological order of the multiple segmentation results in the voice recognition data, the execution order of the control instructions corresponding to the segmentation results; and
controlling the designated terminal to execute the control operation corresponding to each control instruction according to the execution order.
In the embodiments of the application, the segmentation results may be associated with one another or independent of one another. To avoid the computation needed to determine the associations between segmentation results, and to execute each control operation more easily, the execution order of the control instructions can be determined directly from the order of the corresponding segmentation results.
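Ordering by position in the original recognition text can be sketched with a single sort. Using `str.find` as the sort key is an assumption for illustration; a real implementation might instead track each segment's offset while splitting.

```python
def order_by_position(recognized_text: str, segments: list[str]) -> list[str]:
    """Return the segments sorted by where each first appears in the
    recognized text, i.e. the order in which the user spoke them."""
    return sorted(segments, key=recognized_text.find)
```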
The embodiments of the application thus provide a way to split voice recognition data: with the preset segmentation marker, the voice recognition data can be split quickly and accurately. In some embodiments, the preset segmentation marker can be set in advance to match the attributes of the language itself and the user's speech habits; the user can then connect voice instructions by speaking the segmentation marker, easily controlling the designated terminal to execute multiple control operations. This simplifies the voice control process and improves control efficiency when the user performs multiple operations on the terminal device by voice.
Fig. 3 shows a flowchart of another voice control method provided by an embodiment of the application. The voice control method includes:
Step S301: obtain voice recognition data of the user, the voice recognition data being obtained by performing speech recognition on voice data input by the user.
Step S302: if the voice recognition data includes a preset instruction, obtain, according to a preset instruction mapping table, the multiple control instructions corresponding to the preset instruction.
In the embodiments of the application, the mapping relation information between preset instructions and multiple control instructions is pre-recorded in the instruction mapping table; therefore, the multiple control instructions corresponding to the preset instruction can be determined according to the preset instruction mapping table.
For example, the instruction mapping table may include the preset instruction "go to bed early and get up early" and the two control instructions corresponding to it, where one of the two control instructions instructs the smart speaker to play pre-sleep music for 10 minutes at 11 p.m., and the other instructs the smart speaker to play a preset ringtone at 7 a.m.
In this case, when it is detected that the voice recognition data includes "go to bed early and get up early", the smart speaker correspondingly performs the two control operations corresponding to those two control instructions. Of course, the preset instruction may also be set to other content; the above is only an illustrative example, not a limitation on this scheme.
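The "go to bed early and get up early" example could be represented as a mapping entry like the following. The dictionary keys and action names are illustrative assumptions; only the phrase, the two times, and the 10-minute duration come from the example above.

```python
INSTRUCTION_MAPPING_TABLE = {
    "go to bed early and get up early": [
        # Play pre-sleep music for 10 minutes at 11 p.m.
        {"action": "play_presleep_music", "at": "23:00", "duration_min": 10},
        # Play the preset ringtone at 7 a.m.
        {"action": "play_preset_ringtone", "at": "07:00"},
    ],
}

def control_instructions_for(phrase: str) -> list[dict]:
    """Look up the control instructions mapped to a preset instruction."""
    return INSTRUCTION_MAPPING_TABLE.get(phrase, [])
```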
In some embodiments, the user may preset the instruction mapping table through a mobile terminal such as a mobile phone and transmit the instruction mapping table to the terminal device (such as a server or a smart speaker) executing the voice control method; alternatively, developers may preset the instruction mapping table and store it in advance in the terminal device executing the voice control method.
For example, in some embodiments, before step S302 is executed, the mobile terminal can receive the mapping relation information of each preset instruction and each control instruction input by the user, establish the instruction mapping table recording the mapping relations between the preset instructions and the control instructions, and then transmit the instruction mapping table to the terminal device executing the voice control method; alternatively, the mapping relation information of each preset instruction and each control instruction input by the user can be transmitted to the terminal device, and the terminal device establishes, according to the mapping relation information, the instruction mapping table recording the mapping relations between the preset instructions and the control instructions.
The preset instruction can represent the predetermined instruction issued by the user, and the control instructions corresponding to the preset instruction can represent the instructions actually implemented for that predetermined instruction. In addition, the user can also change the mapping relation information of each preset instruction and each control instruction through the mobile terminal; after the mapping relation information changes, the mobile terminal can upload the changed mapping relation information of each preset instruction and each control instruction to the terminal device executing the voice control method, so that the terminal device updates the instruction mapping table in time.
Step S303: control the designated terminal to execute the control operation corresponding to each of the multiple control instructions.
The step S301 and step S303 of the present embodiment are same or similar with above-mentioned step S101 and step S103 respectively,
Details are not described herein again.
In some embodiments, optionally, before the multiple control instructions corresponding to the preset instruction are obtained according to the preset instruction mapping table, the method further includes:
receiving the mapping relation information between each preset instruction and each control instruction input by the user on a mobile terminal, and generating the instruction mapping table recording the mapping relation information, where each preset instruction corresponds to multiple control instructions.
It should be noted that, in the embodiments of the present application, the mapping relation information may be received by the mobile terminal itself from the user's input, or a device other than the mobile terminal, such as a server, may receive the mapping relation information through information transmission with the mobile terminal.
In this embodiment, the user may customize, through the mobile terminal, the preset instructions and the multiple control instructions corresponding to each preset instruction, so that personalized voice services are provided for the user. Moreover, in some application scenarios, because the mapping relations are user-defined, the voice instruction issued by the user during subsequent voice control may differ from the instructions actually executed, which achieves a certain encryption effect. In addition, because multiple control instructions are set for one preset instruction, the user can control the corresponding designated terminal to perform multiple control operations by issuing a single instruction, which improves control efficiency.
In some embodiments, optionally, the instruction mapping table further records the query count of each preset instruction.
The voice control method further includes:
if the voice recognition data includes a preset instruction, incrementing the query count of that preset instruction in the instruction mapping table by one; and
adjusting the order of the preset instructions in the instruction mapping table according to the query counts of the preset instructions.
In the embodiments of the present application, by adjusting the order of the preset instructions in the instruction mapping table according to their query counts, the preset instructions that are queried more often can be placed closer to the front of the table, which improves search efficiency in subsequent queries of the instruction mapping table.
A specific example is used below to illustrate an information interaction process of the embodiments of the present application. Fig. 4 is a schematic diagram of one such example.
In this example, a smart speaker receives the mapping relation information between each preset instruction and each control instruction input by the user, and transmits the mapping relation information to a server; the server establishes, according to the mapping relation information, the instruction mapping table recording the mapping relations between the preset instructions and the control instructions.
The smart speaker obtains the voice data input by the user, performs speech recognition on the voice data after detecting a wake-up word, and obtains the voice recognition data of the user; the smart speaker then sends the voice recognition data to the server. If the server detects that the voice recognition data includes a preset instruction, the server obtains, according to the instruction mapping table, the multiple control instructions corresponding to the preset instruction, and sends them to the smart speaker to instruct the smart speaker to respectively execute the control operation corresponding to each of the multiple control instructions.
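The server-side branch of this interaction can be sketched as follows. This is a minimal illustrative Python sketch; the mapping table contents, function name, and instruction names are all assumptions made for the example, not details from the patent:

```python
# Hypothetical instruction mapping table held by the server,
# previously built from the user's uploaded mapping relation information.
MAPPING_TABLE = {
    "movie mode": ["dim_lights", "close_curtains", "turn_on_tv"],
}

def handle_recognition(voice_recognition_data):
    """Check whether the recognized text contains a registered preset
    instruction; if so, return the associated control instructions,
    which would be sent back to the smart speaker for execution."""
    for preset, controls in MAPPING_TABLE.items():
        if preset in voice_recognition_data:
            return controls
    return []  # no preset instruction detected

print(handle_recognition("please switch to movie mode"))
```

In the interaction of Fig. 4, the returned list would be transmitted to the smart speaker, which then executes each control instruction's operation in turn.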
It should be noted that the above specific example is merely illustrative and does not limit the information interaction approaches in the embodiments of the present application.
In the embodiments of the present application, if the voice recognition data includes a preset instruction, the multiple control instructions corresponding to the preset instruction are obtained according to the preset instruction mapping table. By presetting the instruction mapping table, the user can control the terminal device to perform multiple operations by issuing a single voice instruction to the terminal device, without repeatedly waking up a terminal device such as a smart speaker to issue multiple voice instructions separately. This substantially improves control efficiency when the user controls the terminal device by voice to perform multiple operations, and improves the user's interactive experience.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation processes of the embodiments of the present application.
Corresponding to the voice control method described in the foregoing embodiments, Fig. 5 shows a structural block diagram of a voice control apparatus provided by an embodiment of the present application. For ease of description, only the parts related to the embodiments of the present application are shown.
Referring to Fig. 5, the voice control apparatus 5 includes:
a first obtaining module 501, configured to obtain the voice recognition data of a user, where the voice recognition data is obtained by performing speech recognition on the voice data input by the user;
a second obtaining module 502, configured to, if the voice recognition data includes preset information, obtain, according to the preset information, multiple control instructions associated with the voice recognition data, where the preset information includes a preset segmentation mark and/or a preset instruction; and
a control module 503, configured to control a designated terminal to respectively execute the control operation corresponding to each control instruction in the multiple control instructions.
Optionally, the second obtaining module 502 specifically includes:
a splitting unit, configured to, if the voice recognition data includes a preset segmentation mark, split the voice recognition data according to the preset segmentation mark to obtain multiple segmentation results of the voice recognition data; and
a first obtaining unit, configured to obtain the control instruction corresponding to each segmentation result in the multiple segmentation results.
Optionally, the control module 503 specifically includes:
a determining unit, configured to determine, according to the chronological order of the multiple segmentation results in the voice recognition data, the execution order of the control instructions corresponding to the segmentation results; and
a control unit, configured to control the designated terminal to execute the control operation corresponding to each control instruction according to the execution order.
Optionally, the second obtaining module 502 is specifically configured to:
if the voice recognition data includes a preset instruction, obtain, according to a preset instruction mapping table, the multiple control instructions corresponding to the preset instruction.
Optionally, the voice control apparatus 5 further includes:
a generating module, configured to receive the mapping relation information between each preset instruction and each control instruction input by the user on a mobile terminal, and generate the instruction mapping table recording the mapping relation information, where each preset instruction corresponds to multiple control instructions.
Optionally, the instruction mapping table further records the query count of each preset instruction;
the voice control apparatus 5 further includes:
a processing module, configured to, if the voice recognition data includes a preset instruction, increment the query count of that preset instruction in the instruction mapping table by one; and
an adjusting module, configured to adjust the order of the preset instructions in the instruction mapping table according to the query counts of the preset instructions.
Optionally, the designated terminal is a smart speaker;
the first obtaining module 501 is specifically configured to:
obtain, by a cloud server, the voice recognition data of the user from the smart speaker, where the voice recognition data is obtained by the smart speaker by performing speech recognition on the voice data after receiving the voice data input by the user; and
the control module 503 is specifically configured to:
send, by the cloud server, the multiple control instructions associated with the voice recognition data to the smart speaker, so that the smart speaker respectively executes the control operation corresponding to each control instruction in the multiple control instructions.
In the embodiments of the present application, the voice recognition data of the user can be obtained. If the voice recognition data includes preset information, where the preset information includes a preset segmentation mark and/or a preset instruction, multiple control instructions associated with the voice recognition data can be obtained according to the preset segmentation mark and/or the preset instruction, so that a terminal device such as a smart speaker or a mobile terminal can execute the corresponding multiple control operations according to the obtained voice recognition data. In this case, the user can control the terminal device to perform multiple operations by issuing a single voice instruction to the terminal device, without repeatedly waking up a terminal device such as a smart speaker to issue multiple voice instructions separately. The embodiments of the present application can therefore improve control efficiency when the user controls the terminal device by voice to perform multiple operations, and improve the user's interactive experience.
It should be noted that, because the information interaction and execution processes between the above apparatus/units are based on the same concept as the method embodiments of the present application, their specific functions and technical effects can be found in the method embodiment section and are not repeated here.
It is clear to those skilled in the art that, for convenience and brevity of description, the division into the above functional units and modules is merely an example. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units in the embodiments may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are merely for distinguishing them from each other and are not intended to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
Fig. 6 is a structural schematic diagram of a terminal device provided by an embodiment of the present application. As shown in Fig. 6, the terminal device 6 of this embodiment includes: at least one processor 60 (only one is shown in Fig. 6), a memory 61, and a computer program 62 stored in the memory 61 and executable on the at least one processor 60. When the processor 60 executes the computer program 62, the steps in any of the above voice control method embodiments are implemented.
The terminal device 6 may be a computing device such as a smart speaker, a wearable device, a desktop computer, a notebook, a palmtop computer, or a cloud server. The terminal device may include, but is not limited to, the processor 60 and the memory 61. Those skilled in the art can understand that Fig. 6 is merely an example of the terminal device 6 and does not constitute a limitation on the terminal device 6, which may include more or fewer components than shown, a combination of certain components, or different components. For example, the terminal device may further include an input device, an output device, a network access device, and the like. The input device may include a keyboard, a trackpad, a fingerprint sensor (for acquiring the user's fingerprint information and fingerprint direction information), a microphone, a camera, and the like; the output device may include a display, a loudspeaker, and the like.
The processor 60 may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
In some embodiments, the memory 61 may be an internal storage unit of the terminal device 6, such as a hard disk or memory of the terminal device 6. In other embodiments, the memory 61 may be an external storage device of the terminal device 6, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the terminal device 6. Further, the memory 61 may include both the internal storage unit and the external storage device of the terminal device 6. The memory 61 is configured to store the operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory 61 may also be used to temporarily store data that has been output or is to be output.
In addition, although not shown, the terminal device 6 may further include a network connection module, such as a Bluetooth module, a Wi-Fi module, or a cellular network module; details are not described here.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, the steps in the above method embodiments are implemented.
An embodiment of the present application further provides a computer program product. When the computer program product runs on a mobile terminal, the mobile terminal implements the steps in the above method embodiments.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of the present application may be implemented by a computer program instructing relevant hardware. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, the steps of the above method embodiments may be implemented. The computer program includes computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or apparatus capable of carrying the computer program code to the apparatus/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electric carrier signal, a telecommunication signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In some jurisdictions, according to legislation and patent practice, the computer-readable medium may not be an electric carrier signal or a telecommunication signal.
In the above embodiments, the description of each embodiment has its own emphasis. For parts that are not described or recorded in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art may realize that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the apparatus/network device embodiments described above are merely illustrative: the division of the modules or units is merely a logical functional division, and there may be other division manners in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
The above embodiments are merely intended to describe the technical solutions of the present application rather than to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or make equivalent replacements to some of the technical features therein; such modifications or replacements do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be included within the protection scope of the present application.
Claims (10)
1. A voice control method, comprising:
obtaining voice recognition data of a user, the voice recognition data being obtained by performing speech recognition on voice data input by the user;
if the voice recognition data includes preset information, obtaining, according to the preset information, multiple control instructions associated with the voice recognition data, wherein the preset information includes a preset segmentation mark and/or a preset instruction; and
controlling a designated terminal to respectively execute a control operation corresponding to each control instruction in the multiple control instructions.
2. The voice control method according to claim 1, wherein if the voice recognition data includes preset information, obtaining, according to the preset information, multiple control instructions associated with the voice recognition data comprises:
if the voice recognition data includes a preset segmentation mark, splitting the voice recognition data according to the preset segmentation mark to obtain multiple segmentation results of the voice recognition data; and
obtaining a control instruction corresponding to each segmentation result in the multiple segmentation results.
3. The voice control method according to claim 2, wherein controlling the designated terminal to respectively execute the control operation corresponding to each control instruction in the multiple control instructions comprises:
determining, according to the chronological order of the multiple segmentation results in the voice recognition data, an execution order of the control instructions corresponding to the segmentation results; and
controlling the designated terminal to execute the control operation corresponding to each control instruction according to the execution order.
4. The voice control method according to claim 1, wherein if the voice recognition data includes preset information, obtaining, according to the preset information, multiple control instructions associated with the voice recognition data comprises:
if the voice recognition data includes a preset instruction, obtaining, according to a preset instruction mapping table, the multiple control instructions corresponding to the preset instruction.
5. The voice control method according to claim 4, further comprising, before obtaining the multiple control instructions corresponding to the preset instruction according to the preset instruction mapping table:
receiving mapping relation information between each preset instruction and each control instruction input by the user on a mobile terminal, and generating the instruction mapping table recording the mapping relation information, wherein each preset instruction corresponds to multiple control instructions.
6. The voice control method according to claim 5, wherein the instruction mapping table further records a query count of each preset instruction; and
the voice control method further comprises:
if the voice recognition data includes the preset instruction, incrementing the query count of the preset instruction in the instruction mapping table by one; and
adjusting an order of the preset instructions in the instruction mapping table according to the query counts of the preset instructions.
7. The voice control method according to any one of claims 1 to 6, wherein the designated terminal is a smart speaker;
obtaining the voice recognition data of the user comprises:
obtaining, by a cloud server, the voice recognition data of the user from the smart speaker, the voice recognition data being obtained by the smart speaker by performing speech recognition on the voice data after receiving the voice data input by the user; and
controlling the designated terminal to respectively execute the control operation corresponding to each control instruction in the multiple control instructions comprises:
sending, by the cloud server, the multiple control instructions associated with the voice recognition data to the smart speaker, so that the smart speaker respectively executes the control operation corresponding to each control instruction in the multiple control instructions.
8. A voice control apparatus, comprising:
a first obtaining module, configured to obtain voice recognition data of a user, the voice recognition data being obtained by performing speech recognition on voice data input by the user;
a second obtaining module, configured to, if the voice recognition data includes preset information, obtain, according to the preset information, multiple control instructions associated with the voice recognition data, wherein the preset information includes a preset segmentation mark and/or a preset instruction; and
a control module, configured to control a designated terminal to respectively execute a control operation corresponding to each control instruction in the multiple control instructions.
9. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the voice control method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the voice control method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910844308.7A CN110459222A (en) | 2019-09-06 | 2019-09-06 | Sound control method, phonetic controller and terminal device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110459222A true CN110459222A (en) | 2019-11-15 |
Family
ID=68491114
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910844308.7A Pending CN110459222A (en) | 2019-09-06 | 2019-09-06 | Sound control method, phonetic controller and terminal device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110459222A (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110996374A (en) * | 2019-12-17 | 2020-04-10 | 腾讯科技(深圳)有限公司 | Wireless network control method, device, equipment and medium |
CN111124512A (en) * | 2019-12-10 | 2020-05-08 | 珠海格力电器股份有限公司 | Awakening method, device, equipment and medium for intelligent equipment |
CN111369993A (en) * | 2020-03-03 | 2020-07-03 | 珠海格力电器股份有限公司 | Control method, control device, electronic equipment and storage medium |
CN111638928A (en) * | 2020-05-21 | 2020-09-08 | 北京百度网讯科技有限公司 | Operation guiding method, device, equipment and readable storage medium of application program |
CN112086097A (en) * | 2020-07-29 | 2020-12-15 | 广东美的白色家电技术创新中心有限公司 | Instruction response method of voice terminal, electronic device and computer storage medium |
CN112155485A (en) * | 2020-09-14 | 2021-01-01 | 江苏美的清洁电器股份有限公司 | Control method, control device, cleaning robot and storage medium |
CN112242140A (en) * | 2020-10-13 | 2021-01-19 | 中移(杭州)信息技术有限公司 | Intelligent device control method and device, electronic device and storage medium |
CN113031746A (en) * | 2019-12-09 | 2021-06-25 | Oppo广东移动通信有限公司 | Display screen area refreshing method, storage medium and electronic equipment |
CN113160808A (en) * | 2020-01-22 | 2021-07-23 | 广州汽车集团股份有限公司 | Voice control method and system and voice control equipment |
CN113555019A (en) * | 2021-07-21 | 2021-10-26 | 维沃移动通信(杭州)有限公司 | Voice control method and device and electronic equipment |
WO2023093280A1 (en) * | 2021-11-29 | 2023-06-01 | Oppo广东移动通信有限公司 | Speech control method and apparatus, electronic device, and storage medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5652897A (en) * | 1993-05-24 | 1997-07-29 | Unisys Corporation | Robust language processor for segmenting and parsing-language containing multiple instructions |
CN101281745A (en) * | 2008-05-23 | 2008-10-08 | 深圳市北科瑞声科技有限公司 | Interactive system for vehicle-mounted voice |
CN103514201A (en) * | 2012-06-27 | 2014-01-15 | 阿里巴巴集团控股有限公司 | Method and device for querying data in non-relational database |
CN105791931A (en) * | 2016-02-26 | 2016-07-20 | 深圳Tcl数字技术有限公司 | Smart television and voice control method of the smart television |
CN106384591A (en) * | 2016-10-27 | 2017-02-08 | 乐视控股(北京)有限公司 | Method and device for interacting with voice assistant application |
CN107680589A (en) * | 2017-09-05 | 2018-02-09 | 百度在线网络技术(北京)有限公司 | Voice messaging exchange method, device and its equipment |
CN108694946A (en) * | 2018-05-09 | 2018-10-23 | 四川斐讯信息技术有限公司 | A kind of speaker control method and system |
CN109766487A (en) * | 2018-12-26 | 2019-05-17 | 郑州云海信息技术有限公司 | The method and device of page access anticipation is carried out based on middleware |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110459222A (en) | Sound control method, phonetic controller and terminal device | |
US10847139B1 (en) | Crowd sourced based training for natural language interface systems | |
JP6738445B2 (en) | Long-distance extension of digital assistant service | |
US11030412B2 (en) | System and method for chatbot conversation construction and management | |
US9946511B2 (en) | Method for user training of information dialogue system | |
WO2018213740A1 (en) | Action recipes for a crowdsourced digital assistant system | |
CN107610698A (en) | A kind of method for realizing Voice command, robot and computer-readable recording medium | |
US8938388B2 (en) | Maintaining and supplying speech models | |
US20160027440A1 (en) | Selective speech recognition for chat and digital personal assistant systems | |
KR20180070684A (en) | Parameter collection and automatic dialog generation in dialog systems | |
TW201826112A (en) | Voice-based interaction method and apparatus, electronic device, and operating system | |
KR20180121758A (en) | Electronic apparatus for processing user utterance and controlling method thereof | |
CN107430623A (en) | Offline syntactic model for the dynamic updatable of resource-constrained off-line device | |
CN104969289A (en) | Voice trigger for a digital assistant | |
CN107430855A (en) | The sensitive dynamic of context for turning text model to voice in the electronic equipment for supporting voice updates | |
CN105489221A (en) | Voice recognition method and device | |
KR20190046631A (en) | System and method for natural language processing | |
CN108701127A (en) | Electronic equipment and its operating method | |
US20200265843A1 (en) | Speech broadcast method, device and terminal | |
CN108632653A (en) | Voice management-control method, smart television and computer readable storage medium | |
CN110010125A (en) | A kind of control method of intelligent robot, device, terminal device and medium | |
CN107591150A (en) | Audio recognition method and device, computer installation and computer-readable recording medium | |
US20220283831A1 (en) | Action recipes for a crowdsourced digital assistant system | |
CN111312233A (en) | Voice data identification method, device and system | |
JP7436077B2 (en) | Skill voice wake-up method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20191115 |