CN108899025A - Terminal equipment control method, equipment and storage medium - Google Patents
- Publication number
- CN108899025A (application CN201810822087.9A)
- Authority
- CN
- China
- Prior art keywords
- voice information
- earphone
- sample voice
- terminal device
- control instruction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/06—Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
- G10L15/063—Training
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/06—Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
- G10L15/063—Training
- G10L2015/0638—Interactive procedures
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Abstract
Embodiments of the present invention provide a terminal device control method, a device, and a storage medium. The method includes: an earphone collects sample voice information; the earphone determines feature information in the sample voice information according to the sample voice information; a predetermined acoustic model is trained according to the feature information in the sample voice information; the trained predetermined acoustic model recognizes target voice information to obtain a control instruction; and the earphone sends the control instruction to a terminal device, so that the terminal device controls an application installed on the terminal device according to the control instruction. By collecting sample voice information through the earphone and using it to train the predetermined acoustic model, the embodiments of the present invention enable the model to recognize a user's voice information and obtain a control instruction, which is then sent to the terminal device. This achieves control of applications on the terminal device, allowing the user to open an application without touching the terminal device's screen.
Description
Technical field
Embodiments of the present invention relate to the field of communication technology, and in particular to a terminal device control method, a device, and a storage medium.
Background technique
With the development of communication technology, terminal devices have become indispensable communication tools in daily life. A terminal device can have applications (Applications, APPs) installed, through which a user can access corresponding network resources.

Normally, the user must touch the screen of the terminal device to open an installed application. Sometimes, however, the user may be walking, cycling, or may have their hands occupied, and therefore cannot touch the screen of the terminal device to open an application.
Summary of the invention
Embodiments of the present invention provide a terminal device control method, a device, and a storage medium, so that a user can open an application without touching the screen of the terminal device.
In a first aspect, an embodiment of the present invention provides a terminal device control method, including:
an earphone collects sample voice information;
the earphone determines feature information in the sample voice information according to the sample voice information;
the earphone trains a predetermined acoustic model according to the feature information in the sample voice information;
the earphone recognizes target voice information using the trained predetermined acoustic model to obtain a control instruction;
the earphone sends the control instruction to a terminal device, so that the terminal device controls an application installed on the terminal device according to the control instruction.
In a second aspect, an embodiment of the present invention provides an earphone, including:
an acquisition module, configured to collect sample voice information;
a determining module, configured to determine feature information in the sample voice information according to the sample voice information;
a training module, configured to train a predetermined acoustic model according to the feature information in the sample voice information;
a recognition module, configured to recognize target voice information using the trained predetermined acoustic model to obtain a control instruction;
a sending module, configured to send the control instruction to a terminal device, so that the terminal device controls an application installed on the terminal device according to the control instruction.
In a third aspect, an embodiment of the present invention provides an earphone, including: a memory; a processor; and a computer program; wherein the computer program is stored in the memory and is configured to be executed by the processor to implement the method described in the first aspect.

In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, the computer program being executed by a processor to implement the method described in the first aspect.
In the terminal device control method, device, and storage medium provided by the embodiments of the present invention, an earphone collects sample voice information and uses it to train a predetermined acoustic model, so that the model can recognize a user's voice information and obtain a control instruction. The control instruction is sent to the terminal device, achieving control of applications on the terminal device and allowing the user to open an application without touching the device's screen.
Detailed description of the invention
Fig. 1 is a schematic diagram of an application scenario provided by an embodiment of the present invention;
Fig. 2 is a flowchart of a terminal device control method provided by an embodiment of the present invention;
Fig. 3 is a flowchart of a terminal device control method provided by an embodiment of the present invention;
Fig. 4 is a flowchart of a terminal device control method provided by another embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an earphone provided by an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an earphone provided by an embodiment of the present invention.
The above drawings show specific embodiments of the present disclosure, which are described in more detail below. These drawings and their written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the concepts of the disclosure to those skilled in the art by reference to specific embodiments.
Specific embodiment
Exemplary embodiments are described in detail here, with examples illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this disclosure; rather, they are merely examples of devices and methods consistent with some aspects of the disclosure as detailed in the appended claims.
The terminal device control method provided by the present invention is applicable to the communication system shown in Fig. 1. As shown in Fig. 1, the communication system includes an access network device 11 and a terminal device 12. It should be noted that the communication system shown in Fig. 1 is applicable to different network standards, for example, Global System for Mobile Communications (GSM), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), Long Term Evolution (LTE), and future standards such as 5G. Optionally, the above communication system may be a system in the Ultra-Reliable and Low Latency Communications (URLLC) transmission scenario of a 5G communication system.

Optionally, the access network device 11 may be a base station (Base Transceiver Station, BTS) and/or a base station controller in GSM or CDMA; a base station (NodeB, NB) and/or a Radio Network Controller (RNC) in WCDMA; an evolved base station (Evolutional Node B, eNB or eNodeB) in LTE; a relay station or access point; or a base station (gNB) in a future 5G network, among others. The present invention is not limited in this respect.
The terminal device 12 may be a wireless terminal or a wired terminal. A wireless terminal may be a device that provides voice and/or other service data connectivity to a user, a handheld device with a wireless connection function, or another processing device connected to a wireless modem. A wireless terminal may communicate with one or more core network devices via a Radio Access Network (RAN). The wireless terminal may be a mobile terminal, such as a mobile phone (or "cellular" phone) or a computer with a mobile terminal, for example a portable, pocket-sized, handheld, computer-built-in, or vehicle-mounted mobile device, which exchanges voice and/or data with the radio access network. As further examples, the wireless terminal may be a Personal Communication Service (PCS) phone, a cordless phone, a Session Initiation Protocol (SIP) phone, a Wireless Local Loop (WLL) station, or a Personal Digital Assistant (PDA). A wireless terminal may also be called a system, a Subscriber Unit, a Subscriber Station, a Mobile Station, a Mobile, a Remote Station, a Remote Terminal, an Access Terminal, a User Terminal, a User Agent, or a User Device/User Equipment; no limitation is imposed here. Optionally, the terminal device 12 may also be a device such as a smartwatch or a tablet computer.
The terminal device control method provided by the present invention is intended to solve the above technical problems of the prior art.

The technical solutions of the present invention and how they solve the above technical problems are described in detail below with specific embodiments. The following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. The embodiments of the present invention are described below with reference to the drawings.
Fig. 2 is a flowchart of a terminal device control method provided by an embodiment of the present invention. Addressing the above technical problems of the prior art, an embodiment of the present invention provides a terminal device control method, with the following specific steps:

Step 201: an earphone collects sample voice information.

The earphone in this embodiment may specifically be a Bluetooth earphone with a recording function, through which it can record a user's voice information. The Bluetooth earphone can record the voice information of multiple different users, and the voice information of each user can be used as sample data for training the predetermined acoustic model. In addition, the Bluetooth earphone also has a playback function, through which recorded voice information can be played back to check whether the recording is clear.
Step 202: the earphone determines feature information in the sample voice information according to the sample voice information.

After the Bluetooth earphone has collected the different sample voice information of multiple users, it can determine the feature information in the sample voice information; this feature information may specifically be Mel-frequency cepstral coefficient (MFCC) features.
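As an illustrative sketch only (not the patent's implementation), the front end of MFCC extraction — pre-emphasis, framing, and windowing, before the mel filterbank, log, and DCT stages — can be outlined in plain Python. The 16 kHz sample rate and the 25 ms / 10 ms frame sizes are assumed values:

```python
import math

def preemphasize(signal, alpha=0.97):
    """Boost high frequencies: y[n] = x[n] - alpha * x[n-1]."""
    return [signal[0]] + [signal[n] - alpha * signal[n - 1]
                          for n in range(1, len(signal))]

def frame_signal(signal, frame_len=400, hop=160):
    """Split the signal into overlapping frames (25 ms / 10 ms at 16 kHz)."""
    return [signal[start:start + frame_len]
            for start in range(0, len(signal) - frame_len + 1, hop)]

def hamming(frame):
    """Apply a Hamming window to one frame before spectral analysis."""
    n = len(frame)
    return [frame[i] * (0.54 - 0.46 * math.cos(2 * math.pi * i / (n - 1)))
            for i in range(n)]

# A 1-second 440 Hz tone at 16 kHz yields 98 windowed frames of 400 samples.
signal = [math.sin(2 * math.pi * 440 * t / 16000) for t in range(16000)]
frames = [hamming(f) for f in frame_signal(preemphasize(signal))]
```

A complete MFCC pipeline would follow each windowed frame with an FFT, a mel-scale filterbank, a logarithm, and a discrete cosine transform to obtain the cepstral coefficients.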
Step 203: the earphone trains a predetermined acoustic model according to the feature information in the sample voice information.

Optionally, the Bluetooth earphone may construct a stacked deep autoencoder network (Deep Autoencoder, DAE) and use the sample voice information it recorded as training data. Using both unsupervised and supervised training, the model takes the MFCC features of the training data as basic features and extracts the corresponding deep speech features. An acoustic model based on a Hidden Markov Model (HMM) is then trained repeatedly, and finally the previously recorded voice information is used as verification data to verify the recognition results of the acoustic model.
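As an illustration of how an HMM-based model assigns a best hidden-state sequence to an observation sequence, here is a minimal Viterbi decoder. The two-state "silence"/"speech" model and all probabilities are invented for the example; a real acoustic model of the kind described here would operate on MFCC-derived features rather than discrete symbols:

```python
def viterbi(observations, states, start_p, trans_p, emit_p):
    """Return the most likely hidden-state path for an observation sequence."""
    best = [{s: (start_p[s] * emit_p[s][observations[0]], [s]) for s in states}]
    for obs in observations[1:]:
        layer = {}
        for s in states:
            # Extend the best predecessor path into state s.
            prob, path = max(
                (best[-1][prev][0] * trans_p[prev][s] * emit_p[s][obs],
                 best[-1][prev][1] + [s])
                for prev in states)
            layer[s] = (prob, path)
        best.append(layer)
    return max(best[-1].values())[1]

# Toy two-state model distinguishing "speech" from "silence" frames.
states = ("silence", "speech")
start_p = {"silence": 0.6, "speech": 0.4}
trans_p = {"silence": {"silence": 0.7, "speech": 0.3},
           "speech": {"silence": 0.2, "speech": 0.8}}
emit_p = {"silence": {"low": 0.9, "high": 0.1},
          "speech": {"low": 0.2, "high": 0.8}}
path = viterbi(["low", "high", "high", "low"], states, start_p, trans_p, emit_p)
```

Running this on the four-frame energy sequence yields the path silence, speech, speech, silence, i.e. the decoder brackets the high-energy frames as speech.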
Step 204: the earphone recognizes target voice information using the trained predetermined acoustic model to obtain a control instruction.

After the training of the HMM-based acoustic model is completed, the Bluetooth earphone can use the trained acoustic model to recognize target voice information and obtain a control instruction. For example, if the target voice information is "open WeChat", the Bluetooth earphone obtains the control instruction "open WeChat" after recognizing the target voice information. Optionally, the control instruction includes an AT instruction.

Optionally, AT commands are sent from Terminal Equipment (TE) or Data Terminal Equipment (DTE) to a Terminal Adapter (TA) or Data Circuit-terminating Equipment (DCE). The size of an AT instruction can be preset; for example, the instruction must contain the two characters "A" and "T", and may additionally contain up to 1056 other characters, including the final null character of the instruction.

Optionally, each AT command line contains one AT instruction. For URC instructions or response messages reported from the Bluetooth earphone to the mobile phone, at most one is likewise allowed per line; reporting multiple instructions or responses in a single line is not permitted. Optionally, an AT instruction ends with a carriage return. For example, the control instruction for opening WeChat through the Bluetooth earphone may specifically be AT+WeixinOpen.
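Following the AT-command formatting described above (one instruction per line, terminated by a carriage return, with a bounded body length), a framing helper might look like the following sketch. The helper name is hypothetical, and the length check reuses the 1056-character figure from the description:

```python
MAX_BODY = 1056  # per the description: up to 1056 characters beyond "AT"

def build_at_command(action):
    """Frame an action name as one AT command line ending in a carriage return."""
    body = "+" + action
    if len(body) + 1 > MAX_BODY:  # +1 accounts for the trailing terminator
        raise ValueError("AT command body too long")
    return "AT" + body + "\r"

cmd = build_at_command("WeixinOpen")  # the "open WeChat" instruction
```

For the example above, `cmd` is the single line `AT+WeixinOpen` followed by a carriage return, matching the one-instruction-per-line convention.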
The voice-information recognition process of the Bluetooth earphone in this embodiment may specifically be as shown in Fig. 3. When voice is input to the Bluetooth earphone, the earphone first preprocesses the voice information, for example by removing noise from it, and then performs feature extraction on the voice information to extract its feature information. The predetermined acoustic model is trained according to this feature information, producing a corresponding model library, which may include the trained predetermined acoustic model. When the Bluetooth earphone again collects voice information, i.e. voice to be recognized, it obtains the feature information of that voice through preprocessing and feature extraction, performs pattern matching on the voice to be recognized using the trained predetermined acoustic model, and obtains a recognition result, which may specifically be a control instruction.
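The Fig. 3 pipeline — preprocess, extract features, match against a model library — can be caricatured with a toy nearest-template recognizer. All feature vectors and the second command name below are invented for illustration; the actual scheme described here would perform pattern matching with the trained HMM acoustic model rather than template distances:

```python
def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class ToyRecognizer:
    """Toy model library: map a feature vector to its nearest stored template."""
    def __init__(self):
        self.templates = {}  # feature tuple -> control instruction

    def train(self, features, instruction):
        self.templates[tuple(features)] = instruction

    def recognize(self, features):
        best = min(self.templates, key=lambda t: euclidean(t, features))
        return self.templates[best]

rec = ToyRecognizer()
rec.train([0.9, 0.1, 0.4], "AT+WeixinOpen")  # sample for "open WeChat"
rec.train([0.2, 0.8, 0.5], "AT+MusicPlay")   # hypothetical second command
instr = rec.recognize([0.85, 0.15, 0.42])    # noisy query near the first sample
```

The query vector lies closest to the "open WeChat" template, so the recognizer returns the corresponding AT instruction.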
Step 205: the earphone sends the control instruction to a terminal device, so that the terminal device controls an application installed on the terminal device according to the control instruction.

Optionally, sending the control instruction to the terminal device includes: the earphone sends the control instruction to the terminal device via the Bluetooth protocol.

For example, the Bluetooth earphone can communicate with the terminal device via the Bluetooth protocol. After the Bluetooth earphone obtains the corresponding control instruction from the user's voice information, it sends the control instruction to the terminal device via the Bluetooth protocol, so that the terminal device controls an application installed on it according to the control instruction. For example, if the user's voice information is "open WeChat", the Bluetooth earphone obtains the control instruction "open WeChat" after recognizing the voice information and sends it to the terminal device via the Bluetooth protocol, so that the terminal device opens its installed WeChat application.
In the embodiments of the present invention, the earphone collects sample voice information and uses it to train a predetermined acoustic model, so that the model can recognize the user's voice information and obtain a control instruction. The control instruction is sent to the terminal device, achieving control of applications on the terminal device and allowing the user to open an application without touching the device's screen.
Fig. 4 is a flowchart of a terminal device control method provided by another embodiment of the present invention. On the basis of the above embodiments, the earphone trains the predetermined acoustic model according to the feature information in the sample voice information through the following steps:

Step 401: the earphone determines deep speech features in the sample voice information according to the feature information in the sample voice information.

Optionally, determining the deep speech features includes: the earphone extracts the deep speech features in the sample voice information from its feature information using a stacked deep autoencoder network model in deep learning.

Step 402: the earphone trains the predetermined acoustic model according to the deep speech features in the sample voice information.
This embodiment may construct a stacked deep autoencoder network (Deep Autoencoder, DAE) and train it as a deep learning model. The deep learning model uses greedy layer-wise unsupervised pre-training: the network weights are first initialized, and layer-wise pre-training is then performed with a large amount of unlabeled data, i.e., the structure of the original input data is learned in an unsupervised way. The initial weights obtained by this pre-training lie closer to the region of the optimal solution of the objective, so that in the subsequent training, fine-tuning the whole network with a small amount of labeled data yields better results.

A network formed by stacking multiple autoencoders is called a deep stacked autoencoder network, which is an unsupervised model structure. A classifier is added on the topmost coding layer of the autoencoder: non-supervised training with the multiple stacked autoencoders (AEs) is performed first to determine the initial network values (reducing the requirements on data); an output layer (the classifier) is then attached, and the network is fine-tuned top-down with labeled data using the error back-propagation (BP) algorithm, thus forming a mixed model combining supervised and unsupervised learning.
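The core of greedy pre-training — each autoencoder learning to reconstruct its own input before the stack is fine-tuned — can be illustrated with a minimal autoencoder trained by gradient descent. The data, layer sizes, learning rate, and the use of a linear (rather than sigmoid) code layer are all simplifications made for this sketch:

```python
import random

random.seed(1)

def matvec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

def train_autoencoder(data, n_hidden=2, lr=0.01, epochs=200):
    """Train one linear autoencoder to reconstruct its input (squared error)."""
    n = len(data[0])
    w1 = [[random.uniform(-0.5, 0.5) for _ in range(n)] for _ in range(n_hidden)]
    w2 = [[random.uniform(-0.5, 0.5) for _ in range(n_hidden)] for _ in range(n)]
    losses = []
    for _ in range(epochs):
        total = 0.0
        for x in data:
            h = matvec(w1, x)    # encode: hidden code layer
            y = matvec(w2, h)    # decode: reconstruction of the input
            err = [yi - xi for yi, xi in zip(y, x)]
            total += sum(e * e for e in err)
            # Back-propagate the reconstruction error through both layers.
            dh = [sum(w2[i][j] * 2 * err[i] for i in range(n))
                  for j in range(n_hidden)]
            for i in range(n):
                for j in range(n_hidden):
                    w2[i][j] -= lr * 2 * err[i] * h[j]
            for j in range(n_hidden):
                for k in range(n):
                    w1[j][k] -= lr * dh[j] * x[k]
        losses.append(total)
    return w1, losses

data = [[1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0], [1.0, 1.0, 0.0, 0.0]]
w1, losses = train_autoencoder(data)
```

After pre-training, `w1` plays the role of one encoder layer of the stack: its hidden codes would become the training inputs for the next autoencoder, and the falling reconstruction loss is what "learning the structure of the input data" means here.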
Unsupervised mode: in the unsupervised stacked autoencoder model, the weight parameters of each autoencoder are first randomly initialized, and the network weights are then trained by stochastic gradient descent with layer-by-layer training. The original 24-dimensional MFCC features, frame by frame, pass through the first autoencoder to the first intermediate coding layer (the first hidden layer, H1); that is, the original feature space is mapped into the new feature space formed by the hidden-layer nodes, yielding the feature representation at the first hidden layer. Similarly, each further pass through an autoencoder can be regarded as another nonlinear feature mapping, combined and reconstructed in that coding layer. Finally, the intermediate code values of the last autoencoder serve as the new features extracted by this deep model.
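The layer-by-layer encoding pass just described — a 24-dimensional MFCC frame mapped through successive hidden layers, with the last code layer taken as the deep feature — can be sketched as follows. The weights here are random (i.e., untrained) and the 16- and 8-node layer sizes are assumed, so the sketch shows only the forward mapping, not the stochastic-gradient training:

```python
import math
import random

random.seed(0)

def dense(x, weights, bias):
    """One encoder layer: sigmoid(W x + b), a nonlinear feature mapping."""
    return [1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(row, x)) + b)))
            for row, b in zip(weights, bias)]

def random_layer(n_in, n_out):
    w = [[random.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)]
    b = [0.0] * n_out
    return w, b

# Encoder stack 24 -> 16 -> 8: each hidden code feeds the next autoencoder.
sizes = [24, 16, 8]
layers = [random_layer(sizes[i], sizes[i + 1]) for i in range(len(sizes) - 1)]

frame = [random.uniform(-1, 1) for _ in range(24)]  # one 24-dim MFCC frame
code = frame
for w, b in layers:
    code = dense(code, w, b)  # deep feature = output of the last code layer
```

The final `code` is the 8-dimensional deep feature for the frame; in the scheme described here, such codes would replace the raw MFCCs as input to the HMM acoustic model.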
Supervised mode: discriminative training comprehensively considers the mutual influence between training samples of different classes, so as to adjust the boundaries between classes. If the acoustic features themselves can also be made discriminative to some degree, this is certain to have a positive effect on improving recognition accuracy. The supervised model structure adds an output layer (i.e., binary coding) on top of the last coding layer of the unsupervised model, associating the unsupervised features with their class labels. With minimizing the loss function as the criterion, the parameters of the whole network are retuned by the BP algorithm, finally yielding discriminative features at layer H4.
Feature extraction: when the stacked deep autoencoder network model in deep learning of the present invention extracts deep speech features, the extracted MFCC features serve as the input data at the model's network nodes, for example the 24-dimensional MFCC features extracted from one person's single pronunciation of a word, where the number of speech frames of that pronunciation is 168.
In the embodiments of the present invention, the earphone collects sample voice information and uses it to train a predetermined acoustic model, so that the model can recognize the user's voice information and obtain a control instruction. The control instruction is sent to the terminal device, achieving control of applications on the terminal device and allowing the user to open an application without touching the device's screen.
Fig. 5 is a schematic structural diagram of an earphone provided by an embodiment of the present invention. The earphone provided by this embodiment can execute the processing flow provided by the terminal device control method embodiments. As shown in Fig. 5, the earphone 50 includes an acquisition module 51, a determining module 52, a training module 53, a recognition module 54, and a sending module 55. The acquisition module 51 is configured to collect sample voice information; the determining module 52 is configured to determine feature information in the sample voice information according to the sample voice information; the training module 53 is configured to train a predetermined acoustic model according to the feature information in the sample voice information; the recognition module 54 is configured to recognize target voice information using the trained predetermined acoustic model to obtain a control instruction; and the sending module 55 is configured to send the control instruction to a terminal device, so that the terminal device controls an application installed on the terminal device according to the control instruction.

Optionally, the training module 53 includes a determination unit 531 and a training unit 532. The determination unit 531 is configured to determine deep speech features in the sample voice information according to the feature information in the sample voice information; the training unit 532 is configured to train the predetermined acoustic model according to the deep speech features in the sample voice information.

Optionally, the determination unit 531 is specifically configured to extract the deep speech features in the sample voice information from its feature information using a stacked deep autoencoder network model in deep learning.

Optionally, the control instruction includes an AT instruction.

Optionally, the sending module 55 is specifically configured to send the control instruction to the terminal device via the Bluetooth protocol.

The earphone of the embodiment shown in Fig. 5 can be used to execute the technical solutions of the above method embodiments; its implementation principles and technical effects are similar and are not repeated here.
Fig. 6 is a schematic structural diagram of an earphone provided by an embodiment of the present invention. The earphone provided by this embodiment can execute the processing flow provided by the terminal device control method embodiments. As shown in Fig. 6, the earphone 60 includes a memory 61, a processor 62, a computer program, and a communication interface 63; the computer program is stored in the memory 61 and is configured to be executed by the processor 62 to perform the terminal device control method described in the above embodiments.

The earphone of the embodiment shown in Fig. 6 can be used to execute the technical solutions of the above method embodiments; its implementation principles and technical effects are similar and are not repeated here.

In addition, this embodiment also provides a computer-readable storage medium on which a computer program is stored, the computer program being executed by a processor to implement the terminal device control method described in the above embodiments.
In the several embodiments provided by the present invention, it should be understood that the disclosed devices and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative. The division into units is only a division by logical function, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.

The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment's solution.

In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.

The above integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute some of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.

Those skilled in the art will clearly understand that, for convenience and brevity of description, only the division into the above functional modules is used as an example; in practical applications, the above functions can be assigned to different functional modules as needed, i.e., the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above. For the specific working process of the device described above, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements for some or all of the technical features therein, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (12)
1. A terminal device control method, comprising:
collecting, by an earphone, sample voice information;
determining, by the earphone according to the sample voice information, feature information in the sample voice information;
training, by the earphone, a preset acoustic model according to the feature information in the sample voice information;
recognizing, by the earphone, target voice information using the trained preset acoustic model to obtain a control instruction; and
sending, by the earphone, the control instruction to a terminal device, so that the terminal device controls, according to the control instruction, an application program installed on the terminal device.
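Read as a pipeline, claim 1 amounts to: collect samples, derive features, train an on-device model, recognize a spoken command, and forward the resulting instruction. The following is a minimal runnable sketch of that flow, not the patent's implementation: the `Earphone` class, the checksum-style feature stub, and the `AT+...` command names are all illustrative assumptions.

```python
def _stub_feature(s):
    # Placeholder for real acoustic feature extraction: an actual earphone
    # would compute e.g. filter-bank features from microphone samples.
    return sum(map(ord, s)) % 97

class Earphone:
    """Illustrative sketch of the claimed method; all names are hypothetical."""

    def __init__(self):
        self.model = None  # the "preset acoustic model", trained on-device

    def determine_features(self, sample_voices):
        # Step 2: derive feature information from the sample voice information.
        return [_stub_feature(s) for s in sample_voices]

    def train_model(self, features, instructions):
        # Step 3: train the preset acoustic model. A dict stands in for a
        # real acoustic model mapping features to control instructions.
        self.model = dict(zip(features, instructions))

    def recognize(self, target_voice):
        # Step 4: recognize target voice information -> control instruction.
        return self.model.get(_stub_feature(target_voice))

    def send_to_terminal(self, instruction, transport):
        # Step 5: hand the control instruction to the terminal device
        # (transport stands in for the Bluetooth link of claim 5).
        transport.append(instruction)

# "Train" on two spoken commands, then recognize one and send it on.
earphone = Earphone()
samples = ["open music", "pause music"]
earphone.train_model(earphone.determine_features(samples),
                     ["AT+OPEN_MUSIC", "AT+PAUSE_MUSIC"])
link = []  # stand-in for the Bluetooth link to the phone
earphone.send_to_terminal(earphone.recognize("open music"), link)
print(link)  # ['AT+OPEN_MUSIC']
```

The point of the sketch is the division of labor the claim fixes: recognition happens on the earphone, and only the compact control instruction crosses the link to the terminal device.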
2. The method according to claim 1, wherein the training, by the earphone, of the preset acoustic model according to the feature information in the sample voice information comprises:
determining, by the earphone, deep voice features in the sample voice information according to the feature information in the sample voice information; and
training, by the earphone, the preset acoustic model according to the deep voice features in the sample voice information.
3. The method according to claim 2, wherein the determining, by the earphone, of the deep voice features in the sample voice information according to the feature information in the sample voice information comprises:
extracting, by the earphone according to the feature information in the sample voice information, the deep voice features in the sample voice information using a stacked deep autoencoder network model in deep learning.
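The stacked deep autoencoder of claim 3 can be sketched as greedy layer-wise training followed by a forward pass through the trained encoders, whose top-layer activations serve as the deep voice features. The NumPy sketch below is an illustrative assumption, not the patent's model: the layer sizes, learning rate, and the `extract_deep_features` name are invented for the example.

```python
import numpy as np

def train_autoencoder_layer(X, hidden_dim, epochs=200, lr=0.1, seed=0):
    """Train one autoencoder layer (sigmoid encoder/decoder, squared error)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, hidden_dim))   # encoder weights
    b1 = np.zeros(hidden_dim)
    W2 = rng.normal(0, 0.1, (hidden_dim, d))   # decoder weights
    b2 = np.zeros(d)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sig(X @ W1 + b1)            # encode
        Xr = sig(H @ W2 + b2)           # decode (reconstruction)
        # Backpropagate the squared reconstruction error
        dXr = (Xr - X) * Xr * (1 - Xr)
        dH = (dXr @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ dXr / n; b2 -= lr * dXr.mean(0)
        W1 -= lr * X.T @ dH / n;  b1 -= lr * dH.mean(0)
    return W1, b1

def extract_deep_features(X, layer_dims=(32, 16, 8)):
    """Greedy layer-wise training of a stacked autoencoder; returns the
    top-layer encoding as the 'deep voice features' of claim 3."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    H = X
    for dim in layer_dims:
        W, b = train_autoencoder_layer(H, dim)
        H = sig(H @ W + b)              # feed this encoding to the next layer
    return H

# Example: 100 frames of 40-dim shallow features (e.g. filter-bank energies)
frames = np.random.default_rng(1).random((100, 40))
deep = extract_deep_features(frames)
print(deep.shape)  # (100, 8)
```

Each layer is trained to reconstruct the output of the layer below it; once trained, only the encoder halves are kept, so the feature extractor is a cheap chain of matrix multiplications suitable for an earphone-class device.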
4. The method according to any one of claims 1-3, wherein the control instruction comprises an AT command.
5. The method according to claim 4, wherein the sending, by the earphone, of the control instruction to the terminal device comprises:
sending, by the earphone, the control instruction to the terminal device via a Bluetooth protocol.
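Claims 4-5 describe delivering the control instruction as an AT command over a Bluetooth link. The sketch below shows only the conventional framing (an ASCII line beginning with `AT`, terminated by CR-LF) against a generic byte stream; the command name `+PLAYCTRL` is hypothetical. On Linux, the same `send_instruction` call would work unchanged with an RFCOMM socket from Python's `socket` module.

```python
import io

def frame_at_command(command: str) -> bytes:
    """Frame a control instruction as an AT command line.

    AT commands are ASCII lines that start with "AT" and end with CR-LF;
    the specific command used in the demo below is hypothetical.
    """
    if not command.startswith("AT"):
        command = "AT" + command
    return command.encode("ascii") + b"\r\n"

def send_instruction(transport, command: str) -> int:
    """Write one framed AT command to any writable byte stream.

    On Linux, `transport` could be a connected RFCOMM socket created via
    socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM,
    socket.BTPROTO_RFCOMM); here a BytesIO stands in so the sketch runs
    anywhere.
    """
    return transport.write(frame_at_command(command))

link = io.BytesIO()  # stand-in for the Bluetooth RFCOMM link to the phone
send_instruction(link, "+PLAYCTRL=1")
print(link.getvalue())  # b'AT+PLAYCTRL=1\r\n'
```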
6. An earphone, comprising:
an acquisition module, configured to collect sample voice information;
a determining module, configured to determine feature information in the sample voice information according to the sample voice information;
a training module, configured to train a preset acoustic model according to the feature information in the sample voice information;
a recognition module, configured to recognize target voice information using the trained preset acoustic model to obtain a control instruction; and
a sending module, configured to send the control instruction to a terminal device, so that the terminal device controls, according to the control instruction, an application program installed on the terminal device.
7. The earphone according to claim 6, wherein the training module comprises a determining unit and a training unit;
the determining unit is configured to determine deep voice features in the sample voice information according to the feature information in the sample voice information; and
the training unit is configured to train the preset acoustic model according to the deep voice features in the sample voice information.
8. The earphone according to claim 7, wherein the determining unit is specifically configured to:
extract, according to the feature information in the sample voice information, the deep voice features in the sample voice information using a stacked deep autoencoder network model in deep learning.
9. The earphone according to any one of claims 6-8, wherein the control instruction comprises an AT command.
10. The earphone according to claim 9, wherein the sending module is specifically configured to:
send the control instruction to the terminal device via a Bluetooth protocol.
11. An earphone, comprising:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method according to any one of claims 1-5.
12. A computer-readable storage medium, on which a computer program is stored, the computer program being executed by a processor to implement the method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810822087.9A CN108899025A (en) | 2018-07-24 | 2018-07-24 | Terminal equipment control method, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108899025A true CN108899025A (en) | 2018-11-27 |
Family
ID=64352444
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810822087.9A Pending CN108899025A (en) | 2018-07-24 | 2018-07-24 | Terminal equipment control method, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108899025A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106909813A (en) * | 2015-12-23 | 2017-06-30 | Beijing Qihoo Technology Co., Ltd. | Method and device for controlling application program |
CN106372653A (en) * | 2016-08-29 | 2017-02-01 | Communication University of China | Stack type automatic coder-based advertisement identification method |
CN107591152A (en) * | 2017-08-30 | 2018-01-16 | Baidu Online Network Technology (Beijing) Co., Ltd. | Sound control method, device and its equipment based on earphone |
CN108108142A (en) * | 2017-12-14 | 2018-06-01 | Guangdong OPPO Mobile Telecommunications Co., Ltd. | Voice information processing method, device, terminal device and storage medium |
CN108305626A (en) * | 2018-01-31 | 2018-07-20 | Baidu Online Network Technology (Beijing) Co., Ltd. | The sound control method and device of application program |
Non-Patent Citations (1)
Title |
---|
Zhou Huiqiong: "Design of an Isolated-Word Speech Recognition System Based on Deep Learning", China Masters' Theses Full-text Database, Information Science and Technology Series * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110944315A (en) * | 2019-12-14 | 2020-03-31 | 华为技术有限公司 | Data processing method, terminal device, Bluetooth device and storage medium |
WO2021114952A1 (en) * | 2019-12-14 | 2021-06-17 | 华为技术有限公司 | Data processing method, terminal device, bluetooth device, and storage medium |
CN111405105A (en) * | 2020-03-20 | 2020-07-10 | 深圳市未艾智能有限公司 | Method and apparatus for controlling bluetooth headset, and storage medium |
CN111405105B (en) * | 2020-03-20 | 2022-03-29 | 深圳市未艾智能有限公司 | Method and apparatus for controlling bluetooth headset, and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2821992B1 (en) | Method for updating voiceprint feature model and terminal | |
CN103247291B (en) | A kind of update method of speech recognition apparatus, Apparatus and system | |
CN111261144B (en) | Voice recognition method, device, terminal and storage medium | |
CN102842306B (en) | Sound control method and device, voice response method and device | |
CN104185868B (en) | Authentication voice and speech recognition system and method | |
CN108520743A (en) | Sound control method, smart machine and the computer-readable medium of smart machine | |
CN104052846B (en) | Game application in voice communication method and system | |
CN106537493A (en) | Speech recognition system and method, client device and cloud server | |
CN108182944A (en) | Control the method, apparatus and intelligent terminal of intelligent terminal | |
CN108701453A (en) | Modularization deep learning model | |
US7392184B2 (en) | Arrangement of speaker-independent speech recognition | |
CN105719659A (en) | Recording file separation method and device based on voiceprint identification | |
CN105393302A (en) | Multi-level speech recognition | |
CN107767861A (en) | voice awakening method, system and intelligent terminal | |
CN105489221A (en) | Voice recognition method and device | |
CN104407834A (en) | Message input method and device | |
CN103797761A (en) | Communication method, client, and terminal | |
CN108922521A (en) | A kind of voice keyword retrieval method, apparatus, equipment and storage medium | |
CN104766608A (en) | Voice control method and voice control device | |
JP2020067658A (en) | Device and method for recognizing voice, and device and method for training voice recognition model | |
CN105975063B (en) | A kind of method and apparatus controlling intelligent terminal | |
EP2747464A1 (en) | Sent message playing method, system and related device | |
CN108899025A (en) | Terminal equipment control method, equipment and storage medium | |
CN109729067A (en) | Voice punch card method, device, equipment and computer storage medium | |
CN107240396A (en) | Speaker adaptation method, device, equipment and storage medium |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20181127 |