CN104899087B - Speech recognition method and system for third-party applications - Google Patents

Speech recognition method and system for third-party applications

Info

Publication number
CN104899087B
CN104899087B (application CN201510334239.7A)
Authority
CN
China
Prior art keywords
terminal
client
voice
speech
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510334239.7A
Other languages
Chinese (zh)
Other versions
CN104899087A (en)
Inventor
王夏鸣
胡浩
赵志翔
陶涛
童勇勇
崔阿鹏
储双双
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
iFlytek Co Ltd
Original Assignee
iFlytek Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by iFlytek Co Ltd filed Critical iFlytek Co Ltd
Priority to CN201510334239.7A priority Critical patent/CN104899087B/en
Publication of CN104899087A publication Critical patent/CN104899087A/en
Application granted granted Critical
Publication of CN104899087B publication Critical patent/CN104899087B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Telephonic Communication Services (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a speech recognition method and system for third-party applications. The method includes: an auxiliary client configured on a first terminal obtains a voice-input instruction initiated by a primary client configured on a second terminal; the auxiliary client generates a background recording request according to the voice-input instruction and transfers it to the operating system of the first terminal, so as to request that the first terminal call the recording device of the second terminal to record; the auxiliary client then controls, through the first terminal, the second terminal to recognize the voice information obtained by recording, so that the primary client can process the speech recognition result. The invention simplifies the user operations required for third-party speech recognition and improves the efficiency of third-party speech recognition.

Description

Speech recognition method and system for third-party applications
Technical field
Embodiments of the present invention relate to application software and network communication technology, and in particular to a speech recognition method and system for third-party applications.
Background technology
Version 8.0 of the iPhone operating system (iOS 8) supports third-party keyboard input methods, but because of the system's permission rules, a third-party keyboard has no permission to access the iPhone's microphone. It therefore cannot offer a recording function on the keyboard and cannot support speech recognition in a third-party input method.
The existing speech recognition scheme for iOS 8 third-party keyboards works as follows: when the user wants to enter text by speech recognition, the user first taps a button in the third-party keyboard's interface to jump to the speech recognition main program provided by iOS, and performs speech recognition there (this main program, developed as part of iOS, has permission to access the microphone and accept voice input). After recognition, the user must manually return to the application hosting the keyboard, long-press the text field, bring up the system paste menu, and paste the text to complete the input.
The existing iOS 8 third-party speech recognition scheme is thus cumbersome, with a long interaction flow. The complete flow takes seven steps in all: 1. tap the microphone -> 2. jump to the main program -> 3. input speech -> 4. copy the recognized content -> 5. manually return to the original application -> 6. long-press the text field -> 7. tap paste.
Invention content
The present invention provides a speech recognition method and system for third-party applications, in order to realize a simple third-party speech recognition scheme.
In a first aspect, an embodiment of the present invention provides a speech recognition implementation method for a third-party application, including:
an auxiliary client configured on a first terminal obtains a voice-input instruction initiated by a primary client configured on a second terminal;
the auxiliary client generates a background recording request according to the voice-input instruction and transfers it to the operating system of the first terminal, so as to request that the first terminal call the recording device of the second terminal to record;
the auxiliary client controls, through the first terminal, the second terminal to recognize the voice information obtained by recording, so that the primary client can process the speech recognition result.
In a second aspect, an embodiment of the present invention further provides a speech recognition implementation system for a third-party application, including:
an auxiliary client and a primary client, the auxiliary client being configured in a first terminal and the primary client in a second terminal; the auxiliary client includes:
an instruction acquisition module, configured to obtain the voice-input instruction initiated by the primary client;
a recording control module, configured to generate a background recording request according to the voice-input instruction and transfer it to the operating system of the first terminal, so as to request that the first terminal call the recording device of the second terminal to record;
a speech recognition control module, configured to control, through the first terminal, the second terminal to recognize the voice information obtained by recording, so that the primary client can process the speech recognition result;
the primary client includes:
an instruction initiation module, configured to initiate the voice-input instruction;
a result processing module, configured to process the speech recognition result.
In the present invention, after the auxiliary client configured on the first terminal obtains the voice-input instruction of the primary client configured on the second terminal, it generates a background recording request and calls the second terminal to record based on that request, so that through background recording the primary client effectively obtains recording permission from the second terminal. The auxiliary client controls, through the first terminal, the second terminal to recognize the recorded voice information and output the result, which the primary client then processes further, realizing third-party speech recognition. In the prior art, seven steps must be executed: 1. tap the microphone -> 2. jump to the main program -> 3. input speech -> 4. copy the recognized content -> 5. manually return to the original application -> 6. long-press the text field -> 7. tap paste. In the present invention, the user only needs to tap the microphone (triggering the voice-input instruction) and speak; speech recognition completes without jumping to the main program, copying the recognized content, manually returning to the original application, long-pressing the text field, or tapping paste. This simplifies the user operations of third-party speech recognition and improves the efficiency of third-party speech recognition.
Description of the drawings
Fig. 1 is a flowchart of a speech recognition implementation method for a third-party application in Embodiment 1 of the present invention;
Fig. 2 is a flowchart of a speech recognition implementation method for a third-party application in Embodiment 2 of the present invention;
Fig. 3 is a flowchart of another speech recognition implementation method for a third-party application in Embodiment 2 of the present invention;
Fig. 4 is a structural diagram of a speech recognition implementation system for a third-party application in Embodiment 3 of the present invention;
Fig. 5 is a structural diagram of another speech recognition implementation system for a third-party application in Embodiment 3 of the present invention;
Fig. 6 is a structural diagram of yet another speech recognition implementation system for a third-party application in Embodiment 3 of the present invention.
Specific embodiments
The present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here only explain the present invention and do not limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the entire structure.
Embodiment one
Fig. 1 is a flowchart of the speech recognition implementation method for a third-party application provided by Embodiment 1 of the present invention. This embodiment applies to the case of performing speech recognition through a third-party application on iOS. The method is executed cooperatively by a first terminal configured with an auxiliary client (for example an Apple Watch) and a second terminal configured with a primary client (for example an iPhone), and specifically includes the following steps:
Step 110: the auxiliary client configured on the first terminal obtains the voice-input instruction initiated by the primary client configured on the second terminal.
Preferably, the first terminal is a portable intelligent wearable device, such as a smartwatch or smart glasses, and the second terminal is an electronic device with greater processing capability than the first terminal, such as a smartphone or tablet computer.
In this embodiment of the present invention, an application with speech recognition needs, such as a third-party keyboard, is installed in the primary client. Taking a third-party keyboard as an example, a record button is configured in the keyboard, and the primary client listens for interactions with it: if the user presses the record button, a press event is detected; if the user releases it, a release event is detected. A press event triggers a voice-input start instruction, and a release event triggers a voice-input stop instruction. Both the start and stop instructions are voice-input instructions. Alternatively, the instructions may be issued through voice activity detection with silence suppression (VAD), or by tapping the button once to start voice input and tapping it again to stop.
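The press-and-release and tap-to-toggle behaviors described above amount to a small state machine. The following Python model is purely illustrative, not platform code; the class, event names, and instruction strings are assumptions made for the sketch.

```python
class RecordButton:
    """Models the primary client's record button (illustrative names).

    In press-and-hold mode, pressing issues a voice-input start
    instruction and releasing issues a stop instruction; in toggle
    mode, each tap alternates between start and stop.
    """

    def __init__(self, toggle=False):
        self.toggle = toggle
        self.recording = False

    def on_event(self, event):
        """Return the voice-input instruction triggered by a button event,
        or None if the event triggers nothing in the current mode."""
        if self.toggle:
            if event == "tap":
                self.recording = not self.recording
                return ("voice_input_start" if self.recording
                        else "voice_input_stop")
        else:
            if event == "press":
                self.recording = True
                return "voice_input_start"
            if event == "release":
                self.recording = False
                return "voice_input_stop"
        return None
```

A VAD-driven variant would replace the button events with speech/silence transitions detected in the audio stream, but emit the same two instructions.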
The first terminal obtains the voice-input instruction initiated by the primary client by communicating with the second terminal. Communication between the first terminal and the second terminal can be realized through WatchKit on iOS, for example using a WatchKit monitoring thread, which handles communication between the iPhone and the Apple Watch.
Step 120: the auxiliary client generates a background recording request according to the voice-input instruction and transfers it to the operating system of the first terminal, so as to request that the first terminal call the recording device of the second terminal to record.
If a voice-input instruction initiated by the primary client is obtained, the auxiliary client generates the corresponding background recording request: on receiving a voice-input start instruction, it generates a background recording start request; on receiving a voice-input stop instruction, it generates a background recording stop request.
After the auxiliary client generates the background recording request (a recording start request or a recording stop request), the request is transferred locally on the first terminal to its operating system, which has permission to schedule background recording on the second terminal. The operating system of the first terminal then calls the recording device of the second terminal to record.
Step 130: the auxiliary client controls, through the first terminal, the second terminal to recognize the voice information obtained by recording, so that the primary client can process the speech recognition result.
When speech recognition is needed, the auxiliary client notifies the operating system of the first terminal, which controls the second terminal to recognize the recorded voice information. The recognition itself can be realized by existing speech recognition technology, which is not described further here.
With the technical solution of this embodiment, the operating system of the first terminal can call the second terminal to record, so that a third-party application can realize recording through a background operation. In the prior art, seven steps must be executed: 1. tap the microphone -> 2. jump to the main program -> 3. input speech -> 4. copy the recognized content -> 5. manually return to the original application -> 6. long-press the text field -> 7. tap paste. In this embodiment, the user only needs to tap the microphone (triggering the voice-input instruction) and speak, without jumping to the main program, copying the recognized content, manually returning to the original application, long-pressing the text field, or tapping paste. This simplifies the user operations of third-party speech recognition on iOS and improves the efficiency of third-party speech recognition.
Embodiment two
This embodiment further provides a speech recognition implementation method for a third-party application, as a specific elaboration of Embodiment 1. As shown in Fig. 2, step 110, in which the auxiliary client configured on the first terminal obtains the voice-input instruction initiated by the primary client configured on the second terminal, includes:
Step 110': the auxiliary client configured on the first terminal monitors, through a monitoring thread, a shared region configured in the second terminal, to obtain the voice-input instruction written into the shared region by the primary client.
Because the second terminal has a larger storage capacity than the first terminal, a dedicated storage region for data sharing with the first terminal, called the shared region, can be allocated in the second terminal. After the first terminal establishes a connection with the second terminal, the auxiliary client can monitor the shared region through a monitoring thread (for example a WatchKit monitoring thread); whenever new data is stored into the shared region, the auxiliary client reads it.
Correspondingly, step 130, in which the auxiliary client controls, through the first terminal, the second terminal to recognize the voice information obtained by recording so that the primary client processes the speech recognition result, includes:
Step 130': the auxiliary client controls, through the first terminal, the second terminal to recognize the recorded voice information and write the speech recognition result into the shared region, so that the primary client can process it.
After the second terminal performs speech recognition, it writes the recognition result into the shared region. The primary client configured on the second terminal reads the result from the shared region and processes it.
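The shared-region exchange described above (instruction written by the primary client, consumed by the auxiliary client's monitoring thread, result written back for the primary client) can be modeled in a few lines. This is an illustrative sketch only: a plain in-memory dict stands in for the shared storage region, and none of the names correspond to an actual iOS API.

```python
class SharedRegion:
    """Stand-in for the dedicated shared storage region on the phone."""

    def __init__(self):
        self._slots = {}

    def write(self, key, value):
        self._slots[key] = value

    def read(self, key):
        # Consume the entry once read, as a monitoring thread would.
        return self._slots.pop(key, None)


def round_trip(shared, recognize):
    """One instruction/result cycle through the shared region.

    `recognize` stands in for the recognition performed on the phone
    under the watch's control.
    """
    # Primary client writes the instruction into the shared region.
    shared.write("instruction", "voice_input_start")
    # Auxiliary client's monitoring thread picks the instruction up...
    instruction = shared.read("instruction")
    assert instruction == "voice_input_start"
    # ...recognition runs, and the result is written back.
    shared.write("result", recognize())
    # Primary client reads the result for processing.
    return shared.read("result")
```

In a real App Group on iOS, the shared region would be a shared container or suite-scoped defaults, but the read/write protocol is the same shape.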
This embodiment further provides a speech recognition implementation method for a third-party application, elaborating the above: in step 130, the auxiliary client's control of the second terminal, through the first terminal, to recognize the voice information obtained by recording can be implemented in either of the following ways:
1. the auxiliary client controls the second terminal, through the first terminal, to send the voice information to a server for recognition and to receive the speech recognition result; or
2. the auxiliary client controls the second terminal, through the first terminal, to recognize the voice information locally.
The first terminal controls the second terminal to perform speech recognition through a control thread (for example a WatchKit control thread). Whether to use server recognition or local recognition can be determined according to the processing capability of the second terminal and its network conditions.
With the technical solution provided in this embodiment, using a server for recognition on the second terminal realizes the speech recognition function with fewer system resources and improves the resource utilization of the second terminal. Recognizing the voice information locally in the client makes speech recognition independent of the server, avoids failing to obtain the recognition result because of network faults, and improves the reliability of speech recognition.
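The trade-off described above suggests a simple selection policy. The sketch below is purely illustrative; the function name and the decision inputs (network availability, local capability) are assumptions for the example, not part of the patent's disclosure.

```python
def choose_recognition_mode(network_ok, local_capable):
    """Pick server-side or on-device recognition (hypothetical policy).

    Server recognition saves device resources when the network is
    available; local recognition keeps working when it is not, at the
    cost of the device's own processing capability.
    """
    if network_ok:
        return "server"
    if local_capable:
        return "local"
    raise RuntimeError("no recognition path available")
```

A real client might also weigh latency or battery state, but the fallback ordering (server when reachable, local otherwise) matches the reliability argument above.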
An embodiment of the present invention further provides a speech recognition implementation method for a third-party application, elaborating the above: in step 110, the primary client's initiation of the voice-input instruction includes:
the primary client of a third-party input method receives the voice-input start instruction and voice-input stop instruction entered by the user in the input method interface, and writes them into the shared region.
One realization of the third-party application is a third-party input method, whose auxiliary client is configured on the first terminal and whose primary client is configured in the second terminal. As one realization: in the input method interface of the primary client, the user triggers the voice-input start instruction by pressing the corresponding function button and triggers the voice-input stop instruction by releasing it. The corresponding function button may be, for example, a record button whose icon is a loudspeaker, or a record button with a red circular icon.
With the technical solution provided in this embodiment, the voice-input start and stop instructions entered by the user in the third-party input method interface can be received and sent to the first terminal through the shared region, so that voice input is triggered from within the third-party input method interface.
An embodiment of the present invention further provides a speech recognition implementation method for a third-party application, elaborating the above: in step 140, the primary client's processing of the speech recognition result includes:
the primary client reads the speech recognition result from the shared region and displays it in the text box of the input method interface.
The shared region provides data read and write operations for the primary client and the auxiliary client. After the second terminal recognizes the voice information, the speech recognition result is written into the shared region; the primary client reads it from the shared region and displays it in the text box of the input method interface, converting the voice information input by the user into text.
It should be noted that in the above embodiments the first terminal is a smartwatch, the second terminal is a smartphone, and the operating system is an iOS operating system.
The above embodiments are described in detail below through a usage scenario:
In this usage scenario, the first terminal is a smartwatch (Apple Watch) and the second terminal is a smartphone (iPhone). A third-party input method application (APP) is installed on both the smartwatch and the smartphone; the third-party input method on the smartphone is the primary client, and the one on the smartwatch is the auxiliary client. The user pairs the smartphone with the smartwatch and starts the third-party input method application on both.
As shown in Fig. 3, in this usage scenario the third-party input method on the smartphone realizes voice input through the following steps:
Step 301: when the user starts the third-party input method on the smartphone and the smartwatch, the auxiliary client configured on the smartwatch starts a WatchKit monitoring thread in the background of the smartwatch to monitor the shared region.
Step 302: the user presses the voice-input function key on the keyboard of the primary client's third-party input method.
Step 303: when the user presses the voice-input function key, the primary client initiates a voice-input start instruction and writes it into the shared region. The voice-input function key bears a microphone icon.
Step 304: the auxiliary client reads the voice-input start instruction from the shared region and generates a background recording start request according to it.
Step 305: the auxiliary client transfers the recording start request to the operating system of the first terminal.
Step 306: after receiving the recording start request, the operating system of the first terminal calls the recording device of the second terminal to start recording.
Step 307: the recording device prompts the user to input voice information.
Step 308: the user inputs voice information according to the recording device's prompt. After finishing, the user releases the voice-input function key on the keyboard of the primary client's third-party input method.
Step 309: when the user releases the voice-input function key, the primary client initiates a voice-input stop instruction and writes it into the shared region.
Step 310: the auxiliary client reads the voice-input stop instruction initiated by the primary client from the shared region.
Step 311: the auxiliary client generates a background recording stop request according to the voice-input stop instruction and transfers it to the operating system of the first terminal.
Step 312: after receiving the recording stop request, the operating system of the first terminal calls the recording device of the second terminal to stop recording.
Step 313: the operating system of the first terminal controls the operating system of the second terminal to recognize the voice information obtained by recording.
The recognition can be initiated either by the auxiliary client sending a recognition request to the operating system of the first terminal, which then controls the operating system of the second terminal to perform speech recognition, or by the operating system of the first terminal controlling the second terminal's operating system to perform speech recognition directly after receiving the recording stop request.
Step 314: the operating system of the second terminal writes the speech recognition result into the shared region.
Step 315: the primary client reads the speech recognition result from the shared region and processes it.
In the above usage scenario, the user performs voice input in the third-party keyboard on the smartphone simply by tapping the voice-input function key. Compared with the prior art, which requires exiting the third-party keyboard, recording with the smartphone, and copying the recording result back into the third-party application, the technical solution provided in this embodiment simplifies user operations and is more convenient to use.
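The fifteen steps above can be condensed into an end-to-end simulation. The Python below is an illustrative model only: plain objects stand in for the watch, the phone, and the shared region, the method names are not WatchKit APIs, and `recognize` is a trivial stand-in for real speech recognition.

```python
class Phone:
    """Plays the second terminal: records audio and recognizes it."""

    def __init__(self):
        self.audio = None

    def start_recording(self):
        self.audio = []

    def capture(self, chunk):
        self.audio.append(chunk)

    def stop_recording(self):
        return " ".join(self.audio)

    def recognize(self, audio):
        return audio.upper()  # stand-in for real speech recognition


class Watch:
    """Plays the first terminal: turns instructions read from the shared
    region into recording requests and drives the phone's recognition."""

    def __init__(self, phone, shared):
        self.phone, self.shared = phone, shared

    def poll(self):
        instruction = self.shared.pop("instruction", None)
        if instruction == "start":
            self.phone.start_recording()            # steps 304-306
        elif instruction == "stop":
            audio = self.phone.stop_recording()     # steps 310-312
            self.shared["result"] = self.phone.recognize(audio)  # 313-314


def scenario():
    shared = {}                         # stand-in for the shared region
    phone = Phone()
    watch = Watch(phone, shared)
    shared["instruction"] = "start"     # step 303: user presses the key
    watch.poll()
    phone.capture("hello world")        # steps 307-308: user speaks
    shared["instruction"] = "stop"      # step 309: user releases the key
    watch.poll()
    return shared.pop("result")         # step 315: primary client reads
```

Running `scenario()` walks the full instruction/recording/recognition/result loop with no step requiring the user to leave the keyboard, which is the point of the scheme.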
Embodiment three
An embodiment of the present invention further provides a speech recognition implementation system for a third-party application, for realizing the above methods. As shown in Fig. 4, the system includes:
an auxiliary client 41 and a primary client 51, the auxiliary client 41 being configured in a first terminal 4 and the primary client 51 in a second terminal 5. As shown in Fig. 5, the auxiliary client 41 includes:
an instruction acquisition module 411, configured to obtain the voice-input instruction initiated by the primary client 51;
a recording control module 412, configured to generate a background recording request according to the voice-input instruction and transfer it to the operating system of the first terminal 4, so as to request that the first terminal 4 call the recording device of the second terminal 5 to record;
a speech recognition control module 413, configured to control, through the first terminal 4, the second terminal 5 to recognize the voice information obtained by recording, so that the primary client 51 can process the speech recognition result.
As shown in Fig. 6, the primary client 51 includes:
an instruction initiation module 511, configured to initiate the voice-input instruction;
a result processing module 512, configured to process the speech recognition result.
Further, the instruction acquisition module 411 is specifically configured to monitor, through a monitoring thread, the shared region configured in the second terminal 5, to obtain the voice-input instruction written into the shared region by the primary client 51;
and the speech recognition control module 413 is specifically configured to control the second terminal 5, through the first terminal 4, to write the speech recognition result into the shared region.
Further, the speech recognition control module 413 is specifically configured to:
control the second terminal 5, through the first terminal 4, to send the voice information to a server for recognition and to receive the speech recognition result; or
control the second terminal 5, through the first terminal 4, to recognize the voice information locally.
Further, the instruction initiation module 511 is specifically configured to:
receive the voice-input start instruction and voice-input stop instruction entered by the user in the input method interface, and write them into the shared region.
Further, the result processing module 512 is specifically configured to:
read the speech recognition result from the shared region in the primary client 51 and display it in the text box of the input method interface.
Further, the first terminal 4 is a smartwatch, the second terminal 5 is a smartphone, and the operating system is an iOS operating system.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will appreciate that the present invention is not limited to the specific embodiments described here, and that various obvious changes, readjustments, and substitutions can be made without departing from the protection scope of the present invention. Therefore, although the present invention has been described in further detail through the above embodiments, it is not limited to them; other equivalent embodiments may be included without departing from the inventive concept, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A speech recognition implementation method for a third-party application, characterized by including:
an auxiliary client configured on a first terminal monitors, through a monitoring thread, a shared region configured in a second terminal, to obtain a voice-input instruction written into the shared region by a primary client configured on the second terminal;
the auxiliary client generates a background recording request according to the voice-input instruction and transfers it to the operating system of the first terminal, so as to request that the first terminal call the recording device of the second terminal to record;
the auxiliary client controls, through the first terminal, the second terminal to recognize the voice information obtained by recording, and writes the speech recognition result into the shared region, so that the primary client can process the speech recognition result.
2. The method according to claim 1, characterized in that the auxiliary client's controlling, through the first terminal, the second terminal to recognize the voice information obtained by recording includes:
the auxiliary client controls the second terminal, through the first terminal, to send the voice information to a server for recognition and to receive the speech recognition result; or
the auxiliary client controls the second terminal, through the first terminal, to recognize the voice information locally.
3. The method according to claim 1, characterized in that the primary client's initiating the voice-input instruction includes:
the primary client of a third-party input method receives a voice-input start instruction and a voice-input stop instruction entered by the user in the input method interface, and writes them into the shared region.
4. The method according to claim 3, characterized in that the primary client's processing the speech recognition result includes:
the primary client reads the speech recognition result from the shared region and displays it in the text box of the input method interface.
5. The method according to claim 1, characterized in that the first terminal is a smartwatch, the second terminal is a smartphone, and the operating system is an iOS operating system.
6. system is realized in a kind of speech recognition of third-party application, which is characterized in that including:
Auxiliary client and primary client, the auxiliary client are configured in first terminal, and the primary client is configured at the In two terminals;The auxiliary client includes:
Instruction acquisition module, for by monitoring thread, the shared region being configured in the second terminal being monitored, to obtain State the speech-input instructions in primary client write-in shared region;
Recording control module for generating backstage recording request according to the speech-input instructions, and is transferred to described first eventually The operating system at end is recorded with the sound pick-up outfit for asking the first terminal to call the second terminal;
Speech recognition controlled module, for controlling the voice messaging that the second terminal obtains recording by the first terminal It is identified, the second terminal is controlled by the first terminal, the shared region is written into voice recognition result, for The primary client handles voice recognition result;
The primary client includes:
Initiation module is instructed, for initiating the speech-input instructions;
Result treatment module, the speech recognition result for handling.
7. The system according to claim 6, wherein the speech recognition control module is specifically configured to:
control, through the first terminal, the second terminal to send the voice information to a server for recognition and to receive the speech recognition result; or
control, through the first terminal, the second terminal to perform local recognition on the voice information.
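Claim 7's either/or (server-side recognition versus on-device recognition) amounts to a simple dispatch. The sketch below is an assumption-laden illustration: both the `server` callable and `local_recognize` are invented stubs standing in for whatever engines the second terminal actually uses.

```python
def recognize(voice_info, server=None):
    """Dispatch as in claim 7: send to a server when one is available,
    otherwise fall back to local (on-device) recognition."""
    if server is not None:
        return server(voice_info)        # server-side recognition, result returned
    return local_recognize(voice_info)   # local recognition on the terminal

def local_recognize(voice_info):
    # Hypothetical local engine: here it merely labels the audio it was handed.
    return f"local:{len(voice_info)} bytes"

print(recognize(b"audio", server=lambda v: "server transcript"))  # prints "server transcript"
print(recognize(b"audio"))                                        # prints "local:5 bytes"
```

Keeping the dispatch in one place means the primary client reading the shared region never needs to know which path produced the result.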
8. The system according to claim 6, wherein the instruction initiation module is specifically configured to:
receive a voice input start instruction and a voice input stop instruction entered by the user in the input method interface, and write them into the shared region.
9. The system according to claim 8, wherein the result processing module is specifically configured to:
read the speech recognition result from the shared region in the primary client and display it in a text box of the input method interface.
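The primary-client side of claims 8 and 9 (writing start/stop instructions, then reading the result back into the input-method text box) can be sketched as below. Everything here is illustrative: the dict-based shared region, the handler names, and the `text_box` string are placeholders, not the actual input-method extension API.

```python
class PrimaryClient:
    """Sketch of claims 8-9: the instruction initiation module writes start/stop
    instructions to the shared region; the result processing module reads the
    recognition result back and shows it in the input-method text box."""
    def __init__(self, shared_region):
        self.shared = shared_region
        self.text_box = ""               # stands in for the input-method text box

    def on_user_tap_start(self):
        self.shared["instruction"] = "voice_input_start"

    def on_user_tap_stop(self):
        self.shared["instruction"] = "voice_input_stop"

    def process_result(self):
        result = self.shared.get("result", "")
        self.text_box += result          # display in the text box of the interface
        return result

shared = {}
client = PrimaryClient(shared)
client.on_user_tap_start()
shared["result"] = "dictated text"       # written by the auxiliary/recognition side
client.on_user_tap_stop()
client.process_result()
print(client.text_box)                   # prints "dictated text"
```

Note that the primary client only ever touches the shared region: this is what lets a sandboxed third-party input method receive dictation it could not record itself.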
10. The system according to claim 6, wherein the first terminal is a smartwatch, the second terminal is a smartphone, and the operating system is the iOS operating system.
CN201510334239.7A 2015-06-16 2015-06-16 The speech recognition method and system of third-party application Active CN104899087B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510334239.7A CN104899087B (en) 2015-06-16 2015-06-16 The speech recognition method and system of third-party application


Publications (2)

Publication Number Publication Date
CN104899087A CN104899087A (en) 2015-09-09
CN104899087B true CN104899087B (en) 2018-08-24

Family

ID=54031765

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510334239.7A Active CN104899087B (en) 2015-06-16 2015-06-16 The speech recognition method and system of third-party application

Country Status (1)

Country Link
CN (1) CN104899087B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105895134A (en) * 2016-05-10 2016-08-24 安徽声讯信息技术有限公司 Recording device with remote recording and cloud transliteration control functions and implementation method thereof
CN106201015B (en) * 2016-07-08 2019-04-19 百度在线网络技术(北京)有限公司 Pronunciation inputting method and device based on input method application software
CN107016998B (en) * 2017-03-20 2020-08-18 奇酷互联网络科技(深圳)有限公司 Method and system for voice recording between devices
CN107463539A (en) * 2017-07-20 2017-12-12 北京云知声信息技术有限公司 A kind of information stickup method and device
CN109966750B (en) * 2019-03-29 2020-12-22 浙江传媒学院 Sound control splicing toy
CN113157351B (en) * 2021-03-18 2022-06-07 福建马恒达信息科技有限公司 Voice plug-in construction method for quickly calling form tool

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101739284A (en) * 2008-11-20 2010-06-16 联想(北京)有限公司 Computer and information processing method
CN102945673A (en) * 2012-11-24 2013-02-27 安徽科大讯飞信息科技股份有限公司 Continuous speech recognition method with speech command range changed dynamically
CN103730116A (en) * 2014-01-07 2014-04-16 苏州思必驰信息科技有限公司 System and method for achieving intelligent home device control on smart watch

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7574356B2 (en) * 2004-07-19 2009-08-11 At&T Intellectual Property Ii, L.P. System and method for spelling recognition using speech and non-speech input



Similar Documents

Publication Publication Date Title
CN104899087B (en) The speech recognition method and system of third-party application
WO2019246551A1 (en) Facilitated conference joining
WO2017024875A1 (en) Voice information feedback method, device and television
US11762629B2 (en) System and method for providing a response to a user query using a visual assistant
US20180103376A1 (en) Device and method for authenticating a user of a voice user interface and selectively managing incoming communications
CN104866172A (en) Fault feedback method and fault feedback device
US9088655B2 (en) Automated response system
JP2006119625A (en) Verb error recovery in speech recognition
KR102535790B1 (en) Methods and apparatus for managing holds
CN110389697B (en) Data interaction method and device, storage medium and electronic device
US20160259525A1 (en) Method and apparatus for acquiring and processing an operation instruction
CN106228047B (en) A kind of application icon processing method and terminal device
CN107680592A (en) A kind of mobile terminal sound recognition methods and mobile terminal and storage medium
CN111353771A (en) Method, device, equipment and medium for remotely controlling payment
CN117472321A (en) Audio processing method and device, storage medium and electronic equipment
CN115150501A (en) Voice interaction method and electronic equipment
CN103973870B (en) Information processing device and information processing method
CN110634478A (en) Method and apparatus for processing speech signal
CN115118820A (en) Call processing method and device, computer equipment and storage medium
EP4027630A1 (en) Group calling system, group calling method, and program
CN113852835A (en) Live broadcast audio processing method and device, electronic equipment and storage medium
CN109343761B (en) Data processing method based on intelligent interaction equipment and related equipment
US11722572B2 (en) Communication platform shifting for voice-enabled device
CN116250833B (en) Hearing detection method and device based on mobile equipment
US20230032167A1 (en) Agent assist design - autoplay

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant