CN107767864A - Method, apparatus and mobile terminal based on voice sharing information - Google Patents
- Publication number
- CN107767864A CN107767864A CN201610710046.1A CN201610710046A CN107767864A CN 107767864 A CN107767864 A CN 107767864A CN 201610710046 A CN201610710046 A CN 201610710046A CN 107767864 A CN107767864 A CN 107767864A
- Authority
- CN
- China
- Prior art keywords
- information
- user
- target object
- target application
- identifier
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
Abstract
The present invention provides a method, an apparatus, and a mobile terminal for sharing information by voice. The method includes: displaying a target page, and receiving a share instruction input by a user; obtaining voice data according to the share instruction, the voice data including at least information about a target object to be shared with; determining, based at least on the voice data, the target object to be shared with, and determining a target application to be used for sharing; and sending information about the target page to the target object through the target application. According to the invention, a cumbersome sharing flow is avoided, bringing great convenience to the user.
Description
Technical field
The present invention relates to voice control technology, and in particular to a method, an apparatus, and a mobile terminal for sharing information by voice.
Background technology
With the rapid development of Internet technology, terminals of all kinds have become indispensable communication tools in daily life, and their functions keep growing richer; for example, people can share information through a terminal.
In the prior art, sharing information from a terminal proceeds as follows: tap the share button on the target page, select one of the displayed sharing channels (for example the contact list, WeChat, or Weibo), then select the specific recipient within that channel, and finally tap confirm to send the information of the target page to that recipient. Such a flow requires at least three operations; the sharing process is cumbersome and greatly inconveniences the user.
Summary of the invention
The present invention provides a method, an apparatus, and a mobile terminal for sharing information by voice, so as to solve the problem that the sharing flow in the prior art is cumbersome.
In one aspect, the present invention provides a method for sharing information by voice, including:
displaying a target page, and receiving a share instruction input by a user;
obtaining voice data according to the share instruction, the voice data including at least information about a target object to be shared with;
determining, based at least on the voice data, the target object to be shared with, and determining a target application to be used for sharing;
sending information related to the target page to the target object through the target application.
In another aspect, the present invention provides an apparatus for sharing information by voice, including:
a display module, configured to display a target page;
a receiving module, configured to receive a share instruction input by a user;
an obtaining module, configured to obtain voice data according to the share instruction, the voice data including at least information about a target object to be shared with;
a determining module, configured to determine, based at least on the voice data, the target object to be shared with, and to determine a target application to be used for sharing;
a sending module, configured to send information related to the target page to the target object through the target application.
In a further aspect, the present invention provides an apparatus for sharing information by voice, including an input device, a processor, and a display screen, wherein:
the processor is configured to control the display screen to display a target page;
the input device is configured to receive a share instruction input by a user, and to obtain voice data according to the share instruction, the voice data including at least information about a target object to be shared with;
the processor is further configured to determine, based at least on the voice data, the target object to be shared with, to determine a target application to be used for sharing, and to send information related to the target page to the target object through the target application.
In yet another aspect, the present invention provides a mobile terminal including the apparatus described in any of the foregoing aspects.
In the present invention, when a user wants to share the information of a target page, uttering a voice that contains the information of the target object is enough for the information related to the target page to be sent to the target object through the target application. In this way, the target page is shared with as little manual operation as possible, a complex sharing flow is avoided, and great convenience is brought to the user.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present invention, and those of ordinary skill in the art may derive other drawings from them without creative effort.
Figure 1A is a schematic flowchart of a method for sharing information by voice provided by one embodiment of the present invention;
Figure 1B is a schematic flowchart of a method for sharing information by voice provided by another embodiment of the present invention;
Figure 2A is a schematic flowchart of a method for sharing information by voice provided by yet another embodiment of the present invention;
Figures 2B to 2G are schematic diagrams of the pages displayed at each step of the method for sharing information by voice provided by that embodiment;
Figure 3A is a schematic flowchart of a method for sharing information by voice provided by a further embodiment of the present invention;
Figures 3B and 3C are schematic diagrams of the pages displayed at each step of the method for sharing information by voice provided by that embodiment;
Figure 4A is a schematic structural diagram of an apparatus for sharing information by voice provided by one embodiment of the present invention;
Figure 4B is a schematic structural diagram of an apparatus for sharing information by voice provided by a further embodiment of the present invention;
Figure 5A is a schematic structural diagram of an apparatus for sharing information by voice provided by another embodiment of the present invention;
Figure 5B is a schematic structural diagram of an apparatus for sharing information by voice provided by a further embodiment of the present invention;
Figure 6 is a schematic structural diagram of an apparatus for sharing information by voice provided by yet another embodiment of the present invention.
Detailed description of the embodiments
Exemplary embodiments are described in detail here, and examples thereof are illustrated in the accompanying drawings. Where the following description refers to the drawings, unless otherwise indicated, the same numerals in different drawings represent the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; rather, they are merely examples of apparatuses and methods consistent with some aspects of the invention as detailed in the appended claims.
Embodiment one
This embodiment provides a method for sharing information by voice, so as to share information by means of voice. The execution subject of this embodiment is an apparatus for sharing information by voice.
As shown in Figure 1A, which is a schematic flowchart of the method according to this embodiment, the method includes:
Step 111: displaying a target page, and receiving a share instruction input by a user.
The target page is the web page currently displayed on the terminal screen. The share instruction of this embodiment is used to share information, for example information related to the target page.
The user may input the share instruction in many ways: for example, by pressing or tapping a preset button on the target page; by performing a preset operation on the target page, such as long-pressing it; by performing a preset operation on the terminal displaying the target page, such as shaking the terminal; or by uttering a voice command. The specific way can be chosen according to actual needs and is not repeated here.
Step 112: obtaining voice data according to the share instruction, the voice data including at least information about the target object to be shared with.
The information about the target object is some information of the target object. The voice data is the sound uttered by the user, and the target object contained in the voice data is the recipient to whom the user intends to send the target page. The information of the target object may be, for example, the target object's account in the target application, user name, avatar, or custom nickname.
Optionally, if the share instruction is triggered by the user pressing a preset button on the target page, the voice uttered while the user keeps pressing the button can be captured as the voice data; if the share instruction is input in another way, the voice input by the user within a preset time after the instruction is recognized can be captured as the voice data.
Step 113: determining, based at least on the voice data, the target object to be shared with, and determining the target application to be used for sharing.
Once the voice data has been obtained in the previous step, those skilled in the art will appreciate that the required information content can be extracted from it in many ways; in one implementation, the target object to be shared with is extracted and/or determined from the voice data. This solution does not restrict the specific technique used to determine the target object from the voice data: depending on the implementation environment (such as, but not limited to, hardware, software, or security constraints), any of a variety of existing methods may be chosen, for example audio retrieval based on the voice content.
For example, the information of the target object to be shared with can be obtained from the voice data by keyword spotting. Keyword spotting recognizes given words, i.e. keywords, in continuous speech, while ignoring everything else in the speech, including other words and various non-speech sounds such as breathing, coughing, music, and background noise. Specifically, the voice data is first front-end processed to remove the influence of noise and of different speakers, and features are then extracted to recognize the keywords. In practice, keyword spotting can be performed with a filler model, with Hidden Markov Models (HMM), or with an On-Line Garbage (OLG) model; the choice depends on actual needs and is not limited here.
In other implementations, the required information content can be extracted from the voice data with other audio processing techniques: for example, the voice data can be converted into text with an automatic speech recognition (ASR) engine, and the keywords can then be cut out with a word-segmentation technique. This solution does not limit the technique and does not elaborate further here.
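As a non-limiting sketch of the text-based variant just described, the Python fragment below spots a known application name and contact name in an ASR transcript by simple substring matching. The vocabularies, function name, and example sentence are assumptions for illustration, not part of the claimed method:

```python
# Hypothetical vocabularies; in a real system these would be collected from
# the applications installed on the terminal and their contact lists.
KNOWN_APPS = {"wechat", "weibo", "sms"}
KNOWN_CONTACTS = {"wang xiaohua", "campus belle"}

def extract_share_targets(transcript: str):
    """Spot a known application name and contact name in an ASR transcript.

    Returns (app, contact); either element is None when nothing is spotted."""
    text = transcript.lower()
    app = next((a for a in KNOWN_APPS if a in text), None)
    contact = next((c for c in KNOWN_CONTACTS if c in text), None)
    return app, contact

app, contact = extract_share_targets("Share this with Wang Xiaohua on WeChat")
# app == "wechat", contact == "wang xiaohua"
```

A production system would of course use a proper word-segmentation step rather than raw substring search, but the flow (transcribe, then match against known names) is the same.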
As an example, when the apparatus recognizes several target objects from the voice data at once, the required target object can be determined by the user's selection: for example, each recognized object is displayed for the user to choose from. Alternatively, it can be determined by the objects' priorities, e.g. the object with the highest priority is taken as the final target object. It is also possible to take all of the recognized objects as the final target objects.
For example, suppose the user's voice input is "Wang Xiaohua" or "share with Wang Xiaohua", and the apparatus recognizes three objects: "Wang Xiaohua", "Campus Belle", and "Sweet Ring". The apparatus may present the most frequently contacted object to the user first, or provide all candidates for the user to choose from. Of course, the final target object may also be determined in other ways, as actually needed.
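The candidate-resolution strategies above (user choice versus priority, with contact frequency standing in for priority) can be sketched as follows; the function and data are hypothetical:

```python
def resolve_candidate(candidates, contact_frequency, auto_select=True):
    """Pick a final target object from several recognized candidates.

    candidates: contact names recognized from the voice data.
    contact_frequency: name -> number of past contacts, standing in for the
    priority information mentioned above.
    auto_select=True picks the most frequently contacted candidate;
    auto_select=False returns all candidates for the user to choose from.
    """
    if not auto_select:
        return candidates
    return max(candidates, key=lambda c: contact_frequency.get(c, 0))

freq = {"Wang Xiaohua": 42, "Campus Belle": 7, "Sweet Ring": 0}
best = resolve_candidate(["Wang Xiaohua", "Campus Belle", "Sweet Ring"], freq)
# best == "Wang Xiaohua"
```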
In another implementation, the apparatus may obtain in advance all the information of every object in the candidate applications, and, upon receiving the information of the target object, match it against the pre-existing objects in the target application. In one exemplary scenario, the apparatus obtains the contact information in an instant messaging (IM) application and matches it against the objects "Wang Xiaohua", "Campus Belle", and "Sweet Ring" from the user's voice data in the previous example; it determines that two pre-existing objects, "Wang Xiaohua" and "Campus Belle", match, and offers these two objects to the user to choose from. In another exemplary scenario, the apparatus converts the information of each object in the candidate applications into text in advance and, once the information of the target object is determined, matches it against all the corresponding text. This solution is not elaborated further here.
Building on the above steps, after the target object is determined, the target application is matched. If exactly one application matches, step 114 is performed. If several applications match, one of them may be selected at random as the target application, or the most frequently used one may be selected, before performing step 114; of course, the target application may also be selected in other ways, as actually needed.
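A minimal sketch of this application-selection step, assuming usage counts are available as a dictionary (the names are illustrative only):

```python
import random

def select_target_app(matched_apps, usage_count=None):
    """Choose a target application from the apps matched for the target object.

    One match -> use it directly; several matches -> prefer the most used,
    or fall back to a random choice when no usage statistics are available.
    """
    if len(matched_apps) == 1:
        return matched_apps[0]
    if usage_count:
        return max(matched_apps, key=lambda a: usage_count.get(a, 0))
    return random.choice(matched_apps)

assert select_target_app(["WeChat"]) == "WeChat"
assert select_target_app(["WeChat", "Weibo"],
                         {"WeChat": 120, "Weibo": 3}) == "WeChat"
```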
Optionally, the target application can also be determined manually by the user. Specifically, after the apparatus for sharing information by voice determines the target object, it obtains the applications related to that object, displays a first identifier for each of them, and, in response to the user's selection of a first identifier, determines the application corresponding to the selected identifier as the target application. For example, if the information of the target object is its name, say "Wang Xiaohua", and the target object uses that name in three applications such as Weibo, WeChat, and Taobao, the apparatus recognizes from the name that the object appears in these applications and displays the identifiers or names of the three applications; the user selects one of them as the target application, for example by tapping the name of one application to send a first confirmation, whereupon the application corresponding to the tapped name becomes the target application.
Step 114: sending information related to the target page to the target object through the target application.
The information related to the target page may include at least one of: a link to the target page, a snapshot of the target page, and user-defined information about the target page. The user corresponding to the target object can open the target page through the link, or learn about it from the snapshot. In addition, the user who input the share instruction can customize the information of the target page, for example by editing text on the snapshot or by recording voice data, so that the target user corresponding to the target object can see or hear the custom information; that is, the custom information can be presented as voice and/or text.
Specifically, after step 112 and before step 113, the method further includes:
displaying an editing prompt, and obtaining the custom information input by the user while the editing prompt is displayed, the custom information including custom voice information or custom text information;
generating the information of the target page carrying the custom information.
The user can edit the information of the target page to be sent, either by voice or by text: for example, custom voice information is recorded while the editing prompt is displayed, or custom text is typed into the input box of the editing prompt. The apparatus then generates, from the custom information and the information of the target page, the message to be sent to the target object.
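One way to picture the generated message is as a small record combining the page information with the optional custom voice or text; the field names below are assumptions for illustration, not the patent's own format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SharePayload:
    """Message sent to the target object; the field names are illustrative."""
    link: str                              # link to the target page
    snapshot: Optional[bytes] = None       # snapshot (screenshot) of the page
    custom_text: Optional[str] = None      # text typed at the editing prompt
    custom_voice: Optional[bytes] = None   # voice recorded at the editing prompt

p = SharePayload("https://example.com/page", custom_text="Check this out")
# p.custom_text == "Check this out"; p.custom_voice is None
```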
Optionally, after step 111, the method further includes:
receiving a cancel instruction sent by the user, and stopping, according to the cancel instruction, the operations triggered by the share instruction.
In many cases the share instruction may have been triggered by mistake, and the user does not actually want to share the information related to the target page. For the user's convenience, a cancel instruction can be issued, for example by tapping a preset button on the target page, by performing a preset operation on the target page such as tapping it repeatedly, by performing a preset operation on the terminal displaying the target page such as shaking it, or by uttering a voice command; the specific way can be chosen as actually needed and is not repeated here. Once the apparatus for sharing information by voice recognizes the cancel instruction, it terminates the operations triggered by the share instruction, i.e. terminates the sharing operation.
Optionally, determining the target object to be shared with based at least on the voice data includes:
matching, in the target application, multiple objects related to the information of the target object;
displaying a second identifier for each of the multiple objects;
in response to the user's selection of a second identifier, determining the target object corresponding to the selected second identifier.
Specifically, this last step includes:
in response to an identifier selected by the user by voice input or a tap, determining the target object corresponding to the selected second identifier; or
in response to multiple second identifiers selected by the user, determining the multiple target objects corresponding to the multiple second identifiers.
That is, the user may select one target object or several.
For example, suppose the target object is named "Wang Xiaohua" in the target application, and the user speaks only part of the name, "Xiaohua". The apparatus for sharing information by voice matches the partial name and finds several related objects, for example "Li Xiaohua", "Zhao Xiaohua", and "Wang Xiaohua"; it then displays the second identifiers of these objects, for example their avatars, to the user. The user taps the avatar of Wang Xiaohua, and the apparatus determines from the selected avatar that Wang Xiaohua is the target object.
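The partial-name matching in this example can be sketched as a case-insensitive substring search over the contact list; the helper below is illustrative only:

```python
def match_contacts(partial_name, contacts):
    """Return every contact whose name contains the spoken partial name.

    Several matches are expected for a common given name; the caller then
    shows each match's second identifier (e.g. avatar) for the user to pick.
    """
    key = partial_name.strip().lower()
    return [c for c in contacts if key in c.lower()]

contacts = ["Li Xiaohua", "Zhao Xiaohua", "Wang Xiaohua", "Campus Belle"]
matches = match_contacts("Xiaohua", contacts)
# matches == ["Li Xiaohua", "Zhao Xiaohua", "Wang Xiaohua"]
```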
According to this embodiment, when the user wants to share the information of a target page, uttering a voice that contains the information of the target object is enough for the information of the target page to be sent to the target object through the target application. In this way, the target page is shared with as little manual operation as possible, a complex sharing flow is avoided, and great convenience is brought to the user.
Embodiment two
This embodiment provides a method for sharing information by voice, so as to share information by means of voice. The execution subject of this embodiment is an apparatus for sharing information by voice.
As shown in Figure 1B, which is a schematic flowchart of the method according to this embodiment, the method includes:
Step 101: displaying a target page, and receiving a share instruction input by a user.
The target page is the web page currently displayed on the terminal screen. The share instruction of this embodiment is used to share information, for example information related to the target page.
The user may input the share instruction in many ways: for example, by pressing or tapping a preset button on the target page; by performing a preset operation on the target page, such as long-pressing it; by performing a preset operation on the terminal displaying the target page, such as shaking the terminal; or by uttering a voice command. The specific way can be chosen according to actual needs and is not repeated here.
Step 102: obtaining voice data according to the share instruction, the voice data including at least the information of the target application and of the target object.
The voice data is the sound uttered by the user; the target application contained in the voice data is the application through which the user intends to share the target page with another user, and the target object is the recipient to whom the user intends to send the target page.
The target application of this embodiment is one of the following: SMS, or a social application. The social application can be WeChat, Weibo, DingTalk, or another application such as Taobao, Alipay, Facebook, Twitter, or Instagram; any application that can exchange information with other objects can serve as the target application.
Optionally, if the share instruction is triggered by the user pressing a preset button on the target page, the voice uttered while the user keeps pressing the button can be captured as the voice data; if the share instruction is input in another way, the voice input by the user within a preset time after the instruction is recognized can be captured as the voice data.
The information of the target application is information about the target application, and the information of the target object is information about the target object. The user inputs the voice data right after inputting the share instruction, and the apparatus can obtain the information of the target application and of the target object by keyword spotting. For example, the apparatus can obtain in advance the information of candidate applications and of the objects within them, e.g. by collecting the names of popular applications; it then analyzes the voice data, and, once the target application is recognized, matches the user's voice data within that application to identify the target object.
Step 103: sending, according to the voice data, information related to the target page to the target object through the target application.
The information related to the target page may include at least one of: a link to the target page, a snapshot of the target page, and user-defined information about the target page. The user corresponding to the target object can open the target page through the link, or learn about it from the snapshot. In addition, the user who input the share instruction can customize the information of the target page, for example by editing text on the snapshot or by recording voice data, so that the target user corresponding to the target object can see or hear the custom information; that is, the custom information can be presented as voice and/or text.
Specifically, after step 102 and before step 103, the method further includes:
displaying an editing prompt, and obtaining the custom information input by the user while the editing prompt is displayed, the custom information including custom voice information or custom text information;
generating the information of the target page carrying the custom information.
The user can edit the information of the target page to be sent, either by voice or by text: for example, custom voice information is recorded while the editing prompt is displayed, or custom text is typed into the input box of the editing prompt. The apparatus then generates, from the custom information and the information of the target page, the message to be sent to the target object.
Optionally, after step 101, the method further includes:
receiving a cancel instruction sent by the user, and stopping, according to the cancel instruction, the operations triggered by the share instruction.
In many cases the share instruction may have been triggered by mistake, and the user does not actually want to share the information related to the target page. For the user's convenience, a cancel instruction can be issued, for example by tapping a preset button on the target page, by performing a preset operation on the target page such as tapping it repeatedly, by performing a preset operation on the terminal displaying the target page such as shaking it, or by uttering a voice command; the specific way can be chosen as actually needed and is not repeated here. Once the apparatus for sharing information by voice recognizes the cancel instruction, it terminates the operations triggered by the share instruction, i.e. terminates the sharing operation.
Optionally, after step 102 and before step 103, the method further includes:
obtaining multiple applications related to the information of the target application;
displaying a third identifier for each of the multiple applications;
in response to the user's selection of a third identifier, determining the target application corresponding to the selected third identifier.
For example, suppose the user's terminal has several applications, two of which are named "Tmall" and "Tiantian Express" (both names begin with "Tian"), and the user speaks only the partial name "Tian". The apparatus for sharing information by voice matches the partial name and finds two candidate applications, "Tmall" and "Tiantian Express", and displays these candidates to the user, for example by showing their identifiers. The user taps the identifier of Tmall, sending a third confirmation; the apparatus recognizes the third confirmation and determines that "Tmall" is the target application.
Next, after the target application is determined, the following can also be performed:
matching, in the target application, multiple objects related to the information of the target object;
displaying a second identifier for each of the multiple objects;
in response to the user's selection of a second identifier, determining the target object corresponding to the selected second identifier.
For example, suppose the target object is named "Wang Xiaohua" in the target application, and the user speaks only the partial name "Xiaohua". The apparatus for sharing information by voice matches the partial name and finds several objects, for example "Li Xiaohua", "Zhao Xiaohua", and "Wang Xiaohua", and displays them to the user, for example by showing their avatars. The user taps the avatar of Wang Xiaohua; the apparatus recognizes the selection and determines that Wang Xiaohua is the target object.
According to this embodiment, when the user wants to share the information of a target page, uttering a voice that contains the target application and the target object is enough for the information of the target page to be sent to the target object through the target application. In this way, the target page is shared with as little manual operation as possible, a complex sharing flow is avoided, and great convenience is brought to the user.
Embodiment three
This embodiment further supplements the method for sharing information by voice of embodiment two.
As shown in Figure 2A, which is a schematic flowchart of the method according to this embodiment, the method includes:
Step 201: displaying a target page, receiving a share instruction sent by the user by pressing a preset button on the target page, and performing step 202.
The target page is the page currently displayed on the terminal screen, for example a web page. The share instruction of this embodiment is used to share information, for example information related to the target page.
A virtual button can be provided at a preset position on the target page; the share instruction is triggered when the user presses the virtual button.
Step 202: if it is recognized that the time for which the user continuously presses the preset button exceeds a first preset threshold, display first prompt information, acquire the first speech data input by the user while the first prompt information is displayed, take the first speech data as the information of the intended application, and perform step 203.
Speech data is the sound uttered by the user. The intended application mentioned in the speech data is the application through which the information of the target page is to be shared with the target user, and the destination object is the account used in the intended application by the target user to whom the information of the target page is to be sent.
The intended application of this embodiment is one of the following applications: SMS, a social networking application. The social networking application may be WeChat, Weibo, DingTalk or the like, or may be another application such as Taobao, Alipay, Facebook, Twitter or Instagram; any application capable of exchanging information with other objects can serve as the intended application.
The first preset threshold of this embodiment can be set as actually needed, for example 0.5 seconds. The first prompt information is then displayed on the terminal screen; it reminds the user to input the information of the intended application, so that the user knows to input it.
Step 203: display second prompt information, acquire the second speech data input by the user while the second prompt information is displayed, take the second speech data as the information of the destination object, and perform step 204.
The second prompt information is displayed, to prompt the user to input the information of the destination object, after the first prompt information has been displayed for a preset time, or after it is recognized that the user has not input voice for longer than a preset upper time limit.
Step 204: analyze the speech data to obtain the information of the intended application and the information of the destination object, and perform step 205.
For example, after the user releases the pressed preset button, the device may analyze the speech data acquired while the user was continuously pressing the preset button, to obtain the information of the intended application and the destination object. Specifically, the device may store the speech data at different positions according to the first prompt information and the second prompt information, so as to distinguish the information of the intended application from the information of the destination object.
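Steps 202–204 amount to a two-stage prompted capture: whatever is recorded while each prompt is shown is stored separately, so the two utterances can later be attributed to the intended application and the destination object respectively. A minimal sketch, with the microphone and speech recognizer stubbed out by a dictionary; the prompt strings are assumptions:

```python
# Sketch of the two-prompt capture of steps 202-204; record_fn stands in
# for the real microphone + recognizer.

def capture_prompted_speech(prompts, record_fn):
    """Show each prompt in turn and keep what was recorded under it."""
    buffers = {}
    for prompt in prompts:
        buffers[prompt] = record_fn(prompt)  # separate storage per prompt
    return buffers

PROMPT_APP = "please input the name of the intended application"
PROMPT_TARGET = "please input the name of the destination object"

# Stub standing in for speech capture and recognition.
fake_recordings = {PROMPT_APP: "WeChat", PROMPT_TARGET: "Xiao Fang"}

buffers = capture_prompted_speech([PROMPT_APP, PROMPT_TARGET],
                                  fake_recordings.get)
app_name = buffers[PROMPT_APP]        # information of the intended application
target_name = buffers[PROMPT_TARGET]  # information of the destination object
```

Storing each utterance under its own prompt is what lets step 204 tell the two pieces of information apart without any further parsing.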
Step 205: if it is recognized that the terminal has installed the intended application, query whether the destination object exists in the intended application; if the query result is yes, perform step 206, otherwise perform step 207.
Before step 205 is performed, it may first be determined whether the terminal has installed the intended application; if so, step 205 is performed. Otherwise, the intended application may be automatically downloaded over the network and installed before step 205 is performed, or information prompting the user that the intended application is not installed may be displayed.
Step 206: send the information related to the target page to the destination object.
For example, the device opens the dialog interface corresponding to the destination object and sends the information of the target page to the destination object through the dialog interface; this process may be shown to the user before returning to the target page.
Optionally, after it is found by query that the destination object exists in the intended application, and before the dialog interface corresponding to the destination object is opened, the method further includes:
displaying the information of the intended application and/or the destination object; and,
if a confirmation instruction sent by the user according to the displayed information of the intended application and/or the destination object is received, performing the operation of opening the dialog interface corresponding to the destination object.
For example, after the destination object is found in the intended application, the information of the destination object in the intended application, such as avatar information, may be displayed; the user sends a confirmation instruction by clicking on the displayed information, for example by clicking on the displayed avatar. In this way, the user can further confirm the destination object, to avoid sending by mistake.
Step 207: display query failure information indicating that the destination object does not exist in the intended application.
For example, if the target user corresponding to the destination object has different information, such as different names, in different applications, the user who input the share instruction may well have misremembered the target user's information in the intended application; in that case the query failure information can be displayed, to prompt the user that the destination object does not exist in the intended application.
Below, the method for the present embodiment is illustrated in a manner of concrete example.
User A opens the target page 210 shown in Fig. 2B, finds the article displayed on the target page very good and wants to recommend it to target user B, and therefore presses the "share" button 211 on the target page. After the button has been pressed for 0.1 seconds, the first prompt information 212 shown in Fig. 2C is displayed, its content being "please input the name of the intended application", and user A inputs the voice "WeChat". After the device recognizes that the user has not input voice for more than 1 second, the second prompt information 213 shown in Fig. 2D is displayed, its content being "please input the name of the destination object", and user A inputs the voice "Xiao Fang". Then, editing prompt information 214 may be displayed as in Fig. 2E, with the content "please input custom information", and user A inputs the voice "this thing is very good"; or a prompt box 215 is displayed as in Fig. 2F asking the user to input custom information, and user A inputs the text "this thing is very good" and then clicks the confirm button 216.
After the device recognizes that the user has clicked the confirm button 216, it acquires the information of the intended application input by the user and matches the name of the destination object in the intended application. If the match succeeds, the avatar 217 of the destination object in the intended application is displayed as shown in Fig. 2G, and some other related information may also be displayed. User A clicks on the avatar 217 to send a confirmation instruction; after receiving the confirmation instruction, the device sends the snapshot of the target page together with the custom information to the destination object.
User B opens the information sent by user A through WeChat, recognizes from the preview that it is a picture, and then clicks on the picture to enlarge it. While presenting the snapshot of the target page, the terminal of user B plays the custom voice information input by user A, or displays the snapshot of the target page together with user A's custom information.
According to this embodiment, when the user wants to share the information of the target page, the user inputs a share instruction and is prompted, by the first prompt information and the second prompt information, to input the information of the intended application and of the destination object. This not only spares the user complicated manual operations when sharing the information of the target page, bringing great convenience, but also enables the device to accurately recognize the information of the intended application and the destination object, thereby avoiding, as far as possible, sending the information of the target page to a wrong destination object.
Embodiment four
This embodiment further supplements the method for sharing information based on voice of embodiment two.
As shown in Fig. 3A, which is a schematic flowchart of the method for sharing information based on voice according to this embodiment, the method includes:
Step 301: display the target page, receive the share instruction sent by the user by pressing a preset button on the target page, and perform step 302.
The target page is the page currently displayed on the terminal display screen, such as a web page. The share instruction of this embodiment is used for sharing information, for example sharing information related to the target page.
Step 302: recognize whether the speech data generated by the user while continuously pressing the preset button contains two voice sub-segments with the time interval between them exceeding a second preset threshold; if the recognition result is yes, perform step 303.
Speech data is the sound uttered by the user. The intended application mentioned in the speech data is the application through which the information of the target page is to be shared with the target user, and the destination object is the account used in the intended application by the target user to whom the information of the target page is to be sent.
The intended application of this embodiment is one of the following applications: SMS, a social networking application. The social networking application may be WeChat, Weibo, DingTalk or the like, or may be another application such as Taobao, Alipay, Facebook, Twitter or Instagram; any application capable of exchanging information with other objects can serve as the intended application.
The second preset threshold of this embodiment can be set as needed, for example 0.5 seconds or 1 second; it can of course also be set to another time, which is not limited here.
If it is recognized that the speech data does not contain two voice sub-segments, information prompting that recognition failed is displayed to the user, to inform the user to re-input the speech data and to lengthen the time interval between the two voice sub-segments when inputting them.
Step 303: take the first voice sub-segment as the information of the intended application and the second voice sub-segment as the information of the destination object, and perform step 304.
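Under the assumption that the recording can be reduced to a list of voiced intervals `(start, end)` in seconds, the segmentation rule of steps 302–303 can be sketched as follows: a silence gap longer than the second preset threshold starts a new voice sub-segment. The interval representation is an illustrative simplification; a real device would run voice activity detection over raw audio.

```python
# Sketch of the sub-segment rule: group voiced intervals, starting a new
# sub-segment whenever the silence gap exceeds the second preset threshold.

def split_sub_segments(voiced_intervals, threshold=0.5):
    """Group (start, end) intervals into sub-segments separated by silence."""
    segments = []
    for start, end in voiced_intervals:
        if segments and start - segments[-1][-1][1] > threshold:
            segments.append([(start, end)])       # gap too long: new segment
        elif segments:
            segments[-1].append((start, end))     # same segment continues
        else:
            segments = [[(start, end)]]           # first voiced interval
    return segments

# "WeChat" spoken from 0.0-0.8 s, one second of silence, "Xiao Fang" 1.8-2.6 s.
segs = split_sub_segments([(0.0, 0.8), (1.8, 2.6)], threshold=0.5)
# Two sub-segments are found, so the first would be recognized as the
# intended application and the second as the destination object.
```

If fewer or more than two sub-segments come back, the device would fall into the re-input branch described above rather than guessing.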
Step 304: analyze the speech data to obtain the information of the intended application and the information of the destination object, and perform step 305.
For example, after the user releases the pressed preset button, the device may analyze the speech data acquired while the user was continuously pressing the preset button, to obtain the information of the intended application and the destination object. Specifically, the device may store the two voice sub-segments at different positions, so as to distinguish the information of the intended application from the information of the destination object.
The speech data may be analyzed by means of keyword extraction; the specific implementation belongs to the prior art and is not repeated here.
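Since the patent treats keyword extraction as prior art and gives no detail, the sketch below shows only one illustrative possibility: after speech recognition produces text, the first word matching a known application name is taken as the intended application and the remaining words as the destination object. The known-app set and the whitespace tokenization are assumptions.

```python
# Illustrative keyword extraction over recognized text; the set of known
# application names is an assumption, not part of the patent.

KNOWN_APPS = {"WeChat", "Weibo", "DingTalk", "SMS"}

def extract_keywords(recognized_text):
    """Split recognized text into (intended application, destination object)."""
    words = recognized_text.split()
    app = next((w for w in words if w in KNOWN_APPS), None)
    target = " ".join(w for w in words if w != app)
    return app, target

app, target = extract_keywords("WeChat Xiao Fang")
```

Matching against a closed set of installed application names keeps the extraction robust even when the recognizer returns the two names run together in one utterance.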
Step 305: if it is recognized that the terminal has installed the intended application, query whether the destination object exists in the intended application; if the query result is yes, perform step 306, otherwise perform step 307.
Before step 305 is performed, it may first be determined whether the terminal has installed the intended application; if so, step 305 is performed. Otherwise, the intended application may be automatically downloaded over the network and installed before step 305 is performed, or information prompting the user that the intended application is not installed may be displayed.
Step 306: send the information related to the target page to the destination object.
For example, the device opens the dialog interface corresponding to the destination object and sends the information related to the target page to the destination object through the dialog interface; this process may be shown to the user before returning to the target page.
Optionally, after it is found by query that the destination object exists in the intended application, and before the dialog interface corresponding to the destination object is opened, the method further includes:
displaying the information of the intended application and/or the destination object; and,
if a confirmation instruction sent by the user according to the displayed information of the intended application and/or the destination object is received, performing the operation of opening the dialog interface corresponding to the destination object.
For example, after the destination object is found in the intended application, the information of the destination object in the intended application, such as avatar information, may be displayed; the user sends a confirmation instruction by clicking on the displayed information, for example by clicking on the displayed avatar. In this way, the user can further confirm the destination object, to avoid sending by mistake.
Step 307: display query failure information indicating that the destination object does not exist in the intended application.
For example, if the target user corresponding to the destination object has different information, such as different names, in different applications, the user who input the share instruction may well have misremembered the target user's information in the intended application; in that case the query failure information can be displayed, to prompt the user that the destination object does not exist in the intended application.
Below, the method of this embodiment is illustrated with a concrete example. Assume the second preset threshold is 0.5 seconds.
User A opens the target page 310 shown in Fig. 3B, finds the article displayed on the target page very good and wants to recommend it to user B, and therefore continuously presses the "share" button 311 on the target page. While pressing, the user inputs two voice sub-segments, "WeChat" and "Xiao Fang" respectively, with a time interval of 1 second between them, and then stops pressing the "share" button 311.
After the device recognizes that the user has released the "share" button 311, it acquires the information of the intended application input by the user and matches the name of the destination object in the intended application. If the match succeeds, the avatar 312 of the destination object in the intended application is displayed as shown in Fig. 3C; user A clicks on the avatar 312 to send a confirmation instruction, and after receiving the confirmation instruction, the device sends the snapshot of the target page together with the custom information to the destination object.
User B opens the information sent by user A through WeChat, recognizes from the preview that it is a picture, and then clicks on the picture to enlarge it. While presenting the snapshot of the target page, the terminal of user B plays the custom voice information input by user A, or displays the snapshot of the target page together with user A's custom information.
According to this embodiment, when the user wants to share the information of the target page, the user triggers a share instruction, and the device recognizes whether the speech data generated while the user continuously presses the preset button contains two voice sub-segments. This not only spares the user complicated manual operations when sharing the information of the target page, bringing great convenience, but also enables the device to accurately recognize the information of the intended application and the destination object, thereby avoiding, as far as possible, sending the information of the target page to a wrong destination object.
Embodiment five
This embodiment provides a device for sharing information based on voice, for performing the foregoing method.
As shown in Fig. 4A, which is a schematic structural diagram of the device for sharing information based on voice according to this embodiment, the device of this embodiment includes a display module 401, a receiving module 402, an acquisition module 403, a determining module 404 and a sending module 405.
The display module 401 is used for displaying the target page; the receiving module 402 is used for receiving the share instruction input by the user; the acquisition module 403 is used for acquiring speech data according to the share instruction, the speech data comprising at least information related to the destination object to be shared with; the determining module 404 is used for determining, at least based on the speech data, the destination object to be shared with, and for determining the intended application to be used for sharing; and the sending module 405 is used for sending the information related to the target page to the destination object through the intended application.
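The cooperation of these modules can be sketched structurally as below. Each module is reduced to a callable and the recognizer, resolver and sender are stubs, since the patent describes responsibilities rather than implementations; every name in the sketch is an illustrative assumption.

```python
# Structural sketch of embodiment five: acquisition -> determining -> sending.
# Real modules would drive the screen, microphone, and messaging application.

class VoiceShareDevice:
    def __init__(self, recognize, resolve, send):
        self.recognize = recognize  # acquisition module: speech -> text
        self.resolve = resolve      # determining module: text -> (app, target)
        self.send = send            # sending module: deliver the page info

    def on_share_instruction(self, speech, page_info):
        """Receiving module's entry point: run the whole sharing flow."""
        text = self.recognize(speech)
        app, target = self.resolve(text)
        return self.send(app, target, page_info)

device = VoiceShareDevice(
    recognize=lambda speech: "WeChat Xiao Fang",      # stub recognizer
    resolve=lambda text: tuple(text.split(" ", 1)),   # stub resolver
    send=lambda app, target, info: (app, target, info),
)
outcome = device.on_share_instruction(b"...", "page link")
```

Keeping the three stages as injected callables mirrors the patent's division into interchangeable submodules: any of the determination variants described below can be swapped in without touching the rest of the flow.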
Optionally, as shown in Fig. 4B, the determining module 404 includes a target acquisition submodule 412, a first mark display submodule 413 and a first determination submodule 414, where the target acquisition submodule 412 is used for acquiring multiple applications related to the information of the destination object; the first mark display submodule 413 is used for displaying first marks of the multiple applications; and the first determination submodule 414 is used for determining, in response to the user's selection of a first mark, the intended application corresponding to the selected first mark.
Optionally, the first determination submodule 414 is specifically used for:
determining, in response to the first mark selected by the user via voice input or a clicking operation, the intended application corresponding to the selected first mark.
Optionally, the first determination submodule 414 is specifically used for:
determining, in response to the user's selection of multiple first marks, the multiple intended applications corresponding to the multiple first marks.
Optionally, as shown in Fig. 4B, the determining module 404 further includes a first matching submodule 415, a second mark display submodule 416 and a second determination submodule 417, where the first matching submodule 415 is used for matching, in the intended application, multiple objects related to the information of the destination object; the second mark display submodule 416 is used for displaying second marks of the multiple objects; and the second determination submodule 417 is used for determining, in response to the user's selection of a second mark, the destination object corresponding to the selected second mark.
Optionally, the second determination submodule 417 is specifically used for:
determining, in response to the mark selected by the user via voice input or a clicking operation, the destination object corresponding to the selected second mark; or
determining, in response to the user's selection of multiple second marks, the multiple destination objects corresponding to the multiple second marks.
Optionally, the speech data includes the information of the intended application. Optionally, as shown in Fig. 4B, the determining module 404 further includes a second matching submodule 420, a third mark display submodule 421 and a third determination submodule 422, where the second matching submodule 420 is used for acquiring multiple applications related to the information of the intended application; the third mark display submodule 421 is used for displaying third marks of the multiple applications; and the third determination submodule 422 is used for determining, in response to the user's selection of a third mark, the intended application corresponding to the selected third mark.
It should be pointed out that the target acquisition submodule 412, the first mark display submodule 413 and the first determination submodule 414 may exist together with, or separately from, the first matching submodule 415, the second mark display submodule 416 and the second determination submodule 417. Similarly, the target acquisition submodule 412, the first mark display submodule 413 and the first determination submodule 414 may exist together with, or separately from, the second matching submodule 420, the third mark display submodule 421 and the third determination submodule 422; and the first matching submodule 415, the second mark display submodule 416 and the second determination submodule 417 may exist together with, or separately from, the second matching submodule 420, the third mark display submodule 421 and the third determination submodule 422. Fig. 4B shows the case where all of the above submodules exist.
Correspondingly, the receiving module 402 is specifically used for: receiving the share instruction sent by the user by pressing the preset button on the target page.
Optionally, the acquisition module 403 is specifically used for: acquiring the speech data input by the user while the button is continuously pressed.
Optionally, the information of the destination object includes at least one of the following: the name and the avatar of the destination object in the intended application.
Optionally, the information of the target page includes at least one of the following: the link of the target page, a snapshot of the target page, and user-defined information of the target page.
Optionally, the intended application is one of the following applications: SMS, a social networking application.
Optionally, the speech data further includes the user's custom information.
Optionally, the information of the target page further includes the custom information.
According to this embodiment, when the user wants to share the information of the target page, speaking a voice that includes the intended application and the destination object is enough to send the information of the target page to the destination object through the intended application. In this way, the user shares the information of the target page with as little manual operation as possible, avoiding a complicated sharing flow and bringing great convenience to the user.
Embodiment six
This embodiment further supplements the device of embodiment five, mainly describing in more detail the specific operation modes of the acquisition module.
Mode one: as shown in Fig. 5A, which is a schematic structural diagram of the device according to this embodiment, the acquisition module 403 of the device includes a first acquisition submodule 4031 and a second acquisition submodule 4032. The first acquisition submodule 4031 is used for displaying the first prompt information if it is recognized that the time for which the user continuously presses the preset button exceeds the first preset threshold, and for acquiring the first speech data input by the user while the first prompt information is displayed; correspondingly, the determining module 404 takes the first speech data as the information of the intended application. The second acquisition submodule 4032 is used for displaying the second prompt information and acquiring the second speech data input by the user while the second prompt information is displayed; correspondingly, the determining module 404 takes the second speech data as the information of the destination object.
Mode two: the determining module 404 is specifically used for:
recognizing whether the speech data generated by the user while continuously pressing the preset button contains two voice sub-segments with the time interval between them exceeding the second preset threshold; and,
if the recognition result is yes, taking the first voice sub-segment as the information of the intended application and the second voice sub-segment as the information of the destination object.
Mode three: the determining module 404 is specifically used for:
obtaining the information of the intended application and of the destination object in the speech data by means of keyword extraction.
Optionally, the sending module 405 of this embodiment is specifically used for:
analyzing the speech data to obtain the information of the intended application and the information of the destination object;
if it is recognized that the terminal has installed the intended application, querying whether the destination object exists in the intended application; and,
if the query result is yes, opening the dialog interface corresponding to the destination object and sending the information of the target page to the destination object through the dialog interface.
Optionally, the sending module 405 is further used for:
displaying, if the query result is no, query failure information indicating that the destination object does not exist in the intended application.
Optionally, the sending module 405 is further used for:
triggering the display module 401 to display the information of the intended application and/or the destination object; and,
if a confirmation instruction sent by the user according to the displayed information of the intended application and/or the destination object is received, performing the operation of opening the dialog interface corresponding to the destination object.
Optionally, as shown in Fig. 5B, the device of this embodiment further includes an editor module 502, which is used for:
displaying editing prompt information, and acquiring the custom information input by the user while the editing prompt information is displayed, the custom information including custom voice information or custom text information; and
generating the information of the target page carrying the custom information.
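The editor module's output, as described, is the target-page information carrying the user's custom information. A minimal sketch with illustrative field names (the dictionary shape is an assumption, not the patent's data format):

```python
# Sketch of assembling the shareable payload produced by the editor module.

def build_share_payload(page_link, snapshot, custom_info=None):
    """Combine target-page information with optional custom information."""
    payload = {"link": page_link, "snapshot": snapshot}
    if custom_info is not None:
        payload["custom_info"] = custom_info  # voice or text from the user
    return payload

payload = build_share_payload("https://example.com/article",
                              "snapshot.png",
                              custom_info="this thing is very good")
```

Making the custom information optional matches the flow above: the page can still be shared even when the user skips the editing prompt.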
Optionally, the device further includes a cancellation module 503, which is used for receiving a cancellation instruction sent by the user and stopping, according to the cancellation instruction, the operations triggered by the share instruction. In Fig. 5B the cancellation module 503 is shown connected to the display module; the cancellation module may be connected to any module in Fig. 5B, to cancel the operation of the corresponding module.
The specific manner in which each module of the device in this embodiment performs its operations has been described in detail in the embodiments of the related method and will not be elaborated here.
According to this embodiment, when the user wants to share the information of the target page, the user triggers a share instruction, and the information of the intended application and the destination object is recognized in different ways. This not only spares the user complicated manual operations when sharing the information of the target page, bringing great convenience, but also enables the device to accurately recognize the information of the intended application and the destination object, thereby avoiding, as far as possible, sending the information of the target page to a wrong destination object.
Embodiment seven
This embodiment provides another device for sharing information based on voice, for performing the foregoing method.
The device of this embodiment includes an input device, a processor and a display screen.
The processor is used for controlling the display screen to display the target page; the input device is used for receiving the share instruction input by the user and acquiring speech data according to the share instruction, the speech data comprising at least information related to the destination object to be shared with; the processor is further used for determining, at least based on the speech data, the destination object to be shared with, determining the intended application to be used for sharing, and sending the information related to the target page to the destination object through the intended application.
Optionally, the processor is specifically used for:
acquiring multiple applications related to the information of the destination object;
controlling the display screen to display first marks of the multiple applications; and
determining, in response to the user's selection of a first mark, the intended application corresponding to the selected first mark.
Optionally, the processor is specifically used for:
matching, in the intended application, multiple objects related to the information of the destination object;
controlling the display screen to display second marks of the multiple objects; and
determining, in response to the user's selection of a second mark, the destination object corresponding to the selected second mark.
Optionally, the speech data further includes the information of the intended application.
Optionally, the processor is specifically used for:
acquiring multiple applications related to the information of the intended application;
displaying third marks of the multiple applications; and
determining, in response to the user's selection of a third mark, the intended application corresponding to the selected third mark.
Correspondingly, the processor is specifically used for:
controlling the display screen to display the first prompt information if it is recognized that the time for which the user continuously presses the preset button exceeds the first preset threshold, acquiring the first speech data input by the user while the first prompt information is displayed, and taking the first speech data as the information of the intended application; and
controlling the display screen to display the second prompt information, acquiring the second speech data input by the user while the second prompt information is displayed, and taking the second speech data as the information of the destination object.
Alternatively, the processor is specifically used for:
recognizing whether the speech data generated by the user while continuously pressing the preset button contains two voice sub-segments with the time interval between them exceeding the second preset threshold; and,
if the recognition result is yes, taking the first voice sub-segment as the information of the intended application and the second voice sub-segment as the information of the destination object.
Optionally, the processor is further used for controlling the display screen to display editing prompt information, acquiring the custom information input by the user while the editing prompt information is displayed, the custom information including custom voice information or custom text information, and generating the information of the target page carrying the custom information.
Optionally, the input device, processor and display screen of this embodiment are further used for performing the foregoing corresponding methods, which are not repeated here.
Fig. 6 is a schematic structural diagram of a device 600 for sharing information based on voice according to this embodiment.
For example, the device 600 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like.
Referring to Fig. 6, the device 600 may include one or more of the following components: a processing component 602, a memory 604, a power supply component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, and a communication component 616.
The processing component 602 typically controls the overall operation of the device 600, such as operations associated with display, telephone calls, data communication, camera operation and recording operation. The processing component 602 may include one or more processors 620 to execute instructions, so as to complete all or part of the steps of the above method. In addition, the processing component 602 may include one or more modules to facilitate interaction between the processing component 602 and other components; for example, the processing component 602 may include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support the operation of the device 600. Examples of such data include instructions for any application or method operated on the device 600, contact data, phonebook data, messages, pictures, videos and the like. The memory 604 may be realized by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disc.
The power component 606 supplies power to the various components of the apparatus 600. The power component 606 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 600.
The multimedia component 608 includes a display screen that provides an output interface between the apparatus 600 and the user. In some embodiments, the display screen may include a liquid crystal display (LCD) and a touch panel (TP). If the display screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe. In some embodiments, the multimedia component 608 includes a front camera and/or a rear camera. When the apparatus 600 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front or rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 610 is configured to output and/or input audio signals. For example, the audio component 610 includes a microphone (MIC), which is configured to receive external audio signals when the apparatus 600 is in an operating mode, such as a call mode, a recording mode, or a speech recognition mode. The received audio signal may be further stored in the memory 604 or transmitted via the communication component 616. In some embodiments, the audio component 610 also includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, such as a keyboard, a click wheel, or buttons. These buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.
The sensor component 614 includes one or more sensors that provide status assessments of various aspects of the apparatus 600. For example, the sensor component 614 may detect the open/closed state of the apparatus 600 and the relative positioning of components, such as the display and keypad of the apparatus 600; it may also detect a change in position of the apparatus 600 or of one of its components, the presence or absence of user contact with the apparatus 600, the orientation or acceleration/deceleration of the apparatus 600, and a change in temperature of the apparatus 600. The sensor component 614 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate wired or wireless communication between the apparatus 600 and other devices. The apparatus 600 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 616 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 also includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 600 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
As can be seen from the above, the communication component 616, the audio component 610, and the input/output interface 612 in the embodiment of Fig. 6 may serve as implementations of the input device.
According to this embodiment, when a user wants to share the information of a target page, simply uttering a voice command that includes the target application and the target object causes the information of the target page to be sent to the target object through the target application. The information is thus shared without manual operation, which avoids a complicated sharing procedure and brings great convenience to the user.
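As a rough illustration of the flow just described, the sketch below shows how a target application and a target object might be extracted from a recognized transcript by keyword matching. This is not the patented implementation; the function, the lookup lists, and the sample utterance are all hypothetical.

```python
def parse_share_command(transcript, known_apps, contacts):
    """Keyword-extraction sketch: find a known application name and a
    known contact name inside a recognized speech transcript.

    transcript: lower-cased text returned by a speech recognizer.
    known_apps, contacts: illustrative lookup lists; a real terminal
    would instead query its installed applications and the chosen
    application's contact list.
    """
    target_app = next((a for a in known_apps if a in transcript), None)
    target_obj = next((c for c in contacts if c in transcript), None)
    return target_app, target_obj

# Hypothetical utterance spoken after the share instruction:
app, obj = parse_share_command(
    "share this page with alice via chatapp",
    known_apps=["chatapp", "sms"],
    contacts=["alice", "bob"],
)
```

When either lookup fails, the embodiments instead display candidate identifiers (or a query failure message) so the user can disambiguate by a further voice input or a click operation.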
The present invention also provides a mobile terminal that includes the apparatus of any of the foregoing embodiments.
Finally, it should be noted that the above embodiments are merely intended to illustrate, rather than limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.
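One variant in the embodiments splits a single press-and-hold utterance into an application part and a target-object part when the pause between two voice sub-segments exceeds a second predetermined threshold. Below is a minimal sketch of that grouping step, assuming the recognizer already yields timestamped segments; the function name, threshold value, and timings are illustrative, not taken from the patent.

```python
def split_by_pause(segments, min_gap):
    """Group timestamped voice segments into sub-utterances.

    segments: sorted list of (start, end) times in seconds.
    A silence longer than min_gap (the "second predetermined
    threshold") starts a new sub-utterance.
    """
    groups = [[segments[0]]]
    for prev, cur in zip(segments, segments[1:]):
        if cur[0] - prev[1] > min_gap:
            groups.append([])   # pause exceeded the threshold
        groups[-1].append(cur)
    return groups

# Two spoken parts separated by a 1.2 s pause, threshold 0.8 s:
groups = split_by_pause([(0.0, 0.9), (2.1, 3.0)], min_gap=0.8)
# groups[0] would carry the target-application words,
# groups[1] the target-object words.
```

If only one group is found, the speech data would be treated as containing the target-object information alone, and the target application would be determined by the other means described above.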
Claims (53)
- 1. A method for sharing information based on voice, comprising:
displaying a target page, and receiving a sharing instruction input by a user;
obtaining speech data according to the sharing instruction, the speech data comprising at least information relevant to a target object to be shared with;
determining, based at least on the speech data, the target object to be shared with, and determining a target application for sharing; and
sending information related to the target page to the target object through the target application.
- 2. The method according to claim 1, wherein determining the target application for sharing comprises:
obtaining a plurality of applications related to the target object;
displaying first identifiers of the plurality of applications; and
in response to the user's selection of a first identifier, determining the target application corresponding to the selected first identifier.
- 3. The method according to claim 2, wherein determining the target application corresponding to the selected first identifier comprises: in response to the user selecting a first identifier via voice input or a click operation, determining the target application corresponding to the selected first identifier.
- 4. The method according to claim 2, wherein determining the target application corresponding to the selected first identifier comprises: in response to the user selecting a plurality of first identifiers, determining a plurality of target applications corresponding to the plurality of first identifiers.
- 5. The method according to claim 1, wherein determining, based at least on the speech data, the target object to be shared with comprises:
matching, in the target application, a plurality of objects related to the information of the target object;
displaying second identifiers of the plurality of objects; and
in response to the user's selection of a second identifier, determining the target object corresponding to the selected second identifier.
- 6. The method according to claim 5, wherein determining the target object corresponding to the selected second identifier comprises: in response to the user selecting a second identifier via voice input or a click operation, determining the target object corresponding to the selected second identifier.
- 7. The method according to claim 5, wherein determining the target object corresponding to the selected second identifier comprises: in response to the user selecting a plurality of second identifiers, determining a plurality of target objects corresponding to the plurality of second identifiers.
- 8. The method according to claim 1, wherein the speech data comprises information of the target application.
- 9. The method according to claim 8, wherein determining the target application for sharing comprises:
obtaining a plurality of applications related to the information of the target application;
displaying third identifiers of the plurality of applications; and
in response to the user's selection of a third identifier, determining the target application corresponding to the selected third identifier.
- 10. The method according to claim 8, wherein receiving the sharing instruction input by the user comprises: receiving the sharing instruction sent by the user by pressing a preset button on the target page; and obtaining speech data according to the sharing instruction comprises: obtaining the speech data input by the user while the user continues to press the preset button.
- 11. The method according to claim 10, wherein determining, based at least on the speech data, the target object to be shared with and determining the target application for sharing comprises:
if it is recognized that the user has continuously pressed the preset button for longer than a first predetermined threshold, displaying first prompt information, obtaining first speech data input by the user while the first prompt information is displayed, and taking the first speech data as the information of the target application; and
displaying second prompt information, obtaining second speech data input by the user while the second prompt information is displayed, and taking the second speech data as the information of the target object.
- 12. The method according to claim 10, wherein determining, based at least on the speech data, the target object to be shared with and determining the target application for sharing comprises:
recognizing whether the speech data input by the user while continuously pressing the preset button contains two voice sub-segments separated by a time interval exceeding a second predetermined threshold; and
if so, taking the first voice sub-segment as the information of the target application and the second voice sub-segment as the information of the target object.
- 13. The method according to claim 1, wherein determining, based at least on the speech data, the target object to be shared with comprises: obtaining the information of the target object from the speech data by keyword extraction.
- 14. The method according to claim 8, wherein sending the information related to the target page to the target object through the target application comprises:
analyzing the speech data to obtain the information of the target application and the information of the target object;
if it is recognized that the terminal has installed the target application, querying whether the target object exists in the target application; and
if the query result is positive, opening a dialog interface corresponding to the target object, and sending the information related to the target page to the target object through the dialog interface.
- 15. The method according to claim 14, further comprising: if the query result is negative, displaying query failure information indicating that the target object does not exist in the target application.
- 16. The method according to any one of claims 1-15, wherein the information of the target object comprises at least one of the following: a name of the target object in the target application, or an avatar of the target object.
- 17. The method according to any one of claims 1-15, wherein the information related to the target page comprises at least one of the following: a link to the target page, a snapshot of the target page, or user-defined information of the target page.
- 18. The method according to any one of claims 1-15, wherein the target application is one of the following: a short message application, or a social application.
- 19. The method according to any one of claims 1-15, wherein the speech data further comprises custom information of the user.
- 20. The method according to claim 19, wherein the information related to the target page further comprises the custom information.
- 21. The method according to claim 20, further comprising, after obtaining the speech data according to the sharing instruction and before sending the information related to the target page to the target object through the target application:
displaying editing prompt information, and obtaining the custom information input by the user while the editing prompt information is displayed, the custom information comprising custom voice information or custom text information; and
generating the information of the target page carrying the custom information.
- 22. The method according to any one of claims 1-15, further comprising, after receiving the sharing instruction input by the user: receiving a cancel instruction sent by the user, and stopping, according to the cancel instruction, the operation triggered by the sharing instruction.
- 23. An apparatus for sharing information based on voice, comprising:
a display module, configured to display a target page;
a receiving module, configured to receive a sharing instruction input by a user;
an obtaining module, configured to obtain speech data according to the sharing instruction, the speech data comprising at least information relevant to a target object to be shared with;
a determining module, configured to determine, based at least on the speech data, the target object to be shared with, and to determine a target application for sharing; and
a sending module, configured to send information related to the target page to the target object through the target application.
- 24. The apparatus according to claim 23, wherein the determining module comprises:
a target acquisition sub-module, configured to obtain a plurality of applications related to the information of the target object;
a first identifier display sub-module, configured to display first identifiers of the plurality of applications; and
a first determining sub-module, configured to determine, in response to the user's selection of a first identifier, the target application corresponding to the selected first identifier.
- 25. The apparatus according to claim 24, wherein the first determining sub-module is configured to: in response to the user selecting a first identifier via voice input or a click operation, determine the target application corresponding to the selected first identifier.
- 26. The apparatus according to claim 24, wherein the first determining sub-module is configured to: in response to the user selecting a plurality of first identifiers, determine a plurality of target applications corresponding to the plurality of first identifiers.
- 27. The apparatus according to claim 24, wherein the determining module comprises:
a first matching sub-module, configured to match, in the target application, a plurality of objects related to the information of the target object;
a second identifier display sub-module, configured to display second identifiers of the plurality of objects; and
a second determining sub-module, configured to determine, in response to the user's selection of a second identifier, the target object corresponding to the selected second identifier.
- 28. The apparatus according to claim 27, wherein the second determining sub-module is configured to: in response to the user selecting a second identifier via voice input or a click operation, determine the target object corresponding to the selected second identifier.
- 29. The apparatus according to claim 27, wherein the second determining sub-module is configured to: in response to the user selecting a plurality of second identifiers, determine a plurality of target objects corresponding to the plurality of second identifiers.
- 30. The apparatus according to claim 23, wherein the speech data comprises information of the target application.
- 31. The apparatus according to claim 30, wherein the determining module comprises:
a second matching sub-module, configured to obtain a plurality of applications related to the information of the target application;
a third identifier display sub-module, configured to display third identifiers of the plurality of applications; and
a third determining sub-module, configured to determine, in response to the user's selection of a third identifier, the target application corresponding to the selected third identifier.
- 32. The apparatus according to claim 30, wherein:
the receiving module is configured to receive the sharing instruction sent by the user by pressing a preset button on the target page; and
the obtaining module is configured to obtain the speech data input by the user while the user continues to press the preset button.
- 33. The apparatus according to claim 32, wherein the obtaining module comprises a first acquisition sub-module and a second acquisition sub-module, wherein:
the first acquisition sub-module is configured to, if it is recognized that the user has continuously pressed the preset button for longer than a first predetermined threshold, display first prompt information and obtain first speech data input by the user while the first prompt information is displayed, and correspondingly the determining module is configured to take the first speech data as the information of the target application; and
the second acquisition sub-module is configured to display second prompt information and obtain second speech data input by the user while the second prompt information is displayed, and correspondingly the determining module is configured to take the second speech data as the information of the target object.
- 34. The apparatus according to claim 32, wherein the determining module is configured to:
recognize whether the speech data input by the user while continuously pressing the preset button contains two voice sub-segments separated by a time interval exceeding a second predetermined threshold; and
if so, take the first voice sub-segment as the information of the target application and the second voice sub-segment as the information of the target object.
- 35. The apparatus according to claim 23, wherein the determining module is configured to: obtain the information of the target application and the information of the target object from the speech data by keyword extraction.
- 36. The apparatus according to claim 30, wherein the sending module is configured to:
analyze the speech data to obtain the information of the target application and the information of the target object;
if it is recognized that the terminal has installed the target application, query whether the target object exists in the target application; and
if the query result is positive, open a dialog interface corresponding to the target object and send the information of the target page to the target object through the dialog interface.
- 37. The apparatus according to claim 36, wherein the sending module is further configured to: if the query result is negative, display query failure information indicating that the target object does not exist in the target application.
- 38. The apparatus according to any one of claims 23-37, wherein the information of the target object comprises at least one of the following: a name of the target object in the target application, or an avatar of the target object.
- 39. The apparatus according to any one of claims 23-37, wherein the information related to the target page comprises at least one of the following: a link to the target page, a snapshot of the target page, or user-defined information of the target page.
- 40. The apparatus according to any one of claims 23-37, wherein the target application is one of the following: a short message application, or a social application.
- 41. The apparatus according to any one of claims 23-37, wherein the speech data further comprises custom information of the user.
- 42. The apparatus according to claim 41, wherein the information related to the target page further comprises the custom information.
- 43. The apparatus according to claim 42, further comprising an editing module, configured to:
display editing prompt information, and obtain the custom information input by the user while the editing prompt information is displayed, the custom information comprising custom voice information or custom text information; and
generate the information of the target page carrying the custom information.
- 44. The apparatus according to any one of claims 23-37, further comprising a cancel module, configured to receive a cancel instruction sent by the user and to stop, according to the cancel instruction, the operation triggered by the sharing instruction.
- 45. An apparatus for sharing information based on voice, comprising an input device, a processor, and a display screen, wherein:
the processor is configured to control the display screen to display a target page;
the input device is configured to receive a sharing instruction input by a user, and to obtain speech data according to the sharing instruction, the speech data comprising at least information relevant to a target object to be shared with; and
the processor is further configured to determine, based at least on the speech data, the target object to be shared with, to determine a target application for sharing, and to send information related to the target page to the target object through the target application.
- 46. The apparatus according to claim 45, wherein the processor is configured to:
obtain a plurality of applications related to the information of the target object;
control the display screen to display first identifiers of the plurality of applications; and
in response to the user's selection of a first identifier, determine the target application corresponding to the selected first identifier.
- 47. The apparatus according to claim 45, wherein the processor is configured to:
match, in the target application, a plurality of objects related to the information of the target object;
control the display screen to display second identifiers of the plurality of objects; and
in response to the user's selection of a second identifier, determine the target object corresponding to the selected second identifier.
- 48. The apparatus according to claim 45, wherein the speech data further comprises information of the target application.
- 49. The apparatus according to claim 48, wherein the processor is configured to:
obtain a plurality of applications related to the information of the target application;
display third identifiers of the plurality of applications; and
in response to the user's selection of a third identifier, determine the target application corresponding to the selected third identifier.
- 50. The apparatus according to claim 48, wherein the processor is configured to:
if it is recognized that the user has continuously pressed the preset button for longer than a first predetermined threshold, control the display screen to display first prompt information, obtain first speech data input by the user while the first prompt information is displayed, and take the first speech data as the information of the target application; and
control the display screen to display second prompt information, obtain second speech data input by the user while the second prompt information is displayed, and take the second speech data as the information of the target object.
- 51. The apparatus according to claim 48, wherein the input device is configured to:
recognize whether the speech data generated by the user while continuously pressing the preset button contains two voice sub-segments separated by a time interval exceeding a second predetermined threshold; and
if so, take the first voice sub-segment as the information of the target application and the second voice sub-segment as the information of the target object.
- 52. The apparatus according to any one of claims 45-51, wherein the processor is further configured to control the display screen to display editing prompt information, to obtain custom information input by the user while the editing prompt information is displayed, the custom information comprising custom voice information or custom text information, and to generate the information of the target page carrying the custom information.
- 53. A mobile terminal, comprising the apparatus according to any one of claims 23-52.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610710046.1A CN107767864B (en) | 2016-08-23 | 2016-08-23 | Method and device for sharing information based on voice and mobile terminal |
TW106119676A TW201807565A (en) | 2016-08-23 | 2017-06-13 | Voice-based information sharing method, device, and mobile terminal |
PCT/CN2017/097012 WO2018036392A1 (en) | 2016-08-23 | 2017-08-11 | Voice-based information sharing method, device, and mobile terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610710046.1A CN107767864B (en) | 2016-08-23 | 2016-08-23 | Method and device for sharing information based on voice and mobile terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107767864A true CN107767864A (en) | 2018-03-06 |
CN107767864B CN107767864B (en) | 2021-06-29 |
Family
ID=61246382
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610710046.1A Active CN107767864B (en) | 2016-08-23 | 2016-08-23 | Method and device for sharing information based on voice and mobile terminal |
Country Status (3)
Country | Link |
---|---|
CN (1) | CN107767864B (en) |
TW (1) | TW201807565A (en) |
WO (1) | WO2018036392A1 (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108920119A (en) * | 2018-06-29 | 2018-11-30 | 维沃移动通信有限公司 | A kind of sharing method and mobile terminal |
CN109065049A (en) * | 2018-09-13 | 2018-12-21 | 苏州思必驰信息科技有限公司 | Social sharing method and system, the intelligent terminal of intelligent terminal based on interactive voice |
CN110347303A (en) * | 2018-04-04 | 2019-10-18 | 腾讯科技(深圳)有限公司 | A kind of information processing method and relevant device |
CN110544473A (en) * | 2018-05-28 | 2019-12-06 | 百度在线网络技术(北京)有限公司 | Voice interaction method and device |
CN110728586A (en) * | 2019-09-25 | 2020-01-24 | 支付宝(杭州)信息技术有限公司 | Data sharing method and device and application popularization method and device |
CN111583929A (en) * | 2020-05-13 | 2020-08-25 | 军事科学院系统工程研究院后勤科学与技术研究所 | Control method and device using offline voice and readable equipment |
CN108470566B (en) * | 2018-03-08 | 2020-09-15 | 腾讯科技(深圳)有限公司 | Application operation method and device |
CN113037924A (en) * | 2021-01-27 | 2021-06-25 | 维沃移动通信有限公司 | Voice sending method and device and electronic equipment |
CN113113005A (en) * | 2021-03-19 | 2021-07-13 | 大众问问(北京)信息科技有限公司 | Voice data processing method and device, computer equipment and storage medium |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110333836B (en) * | 2019-07-05 | 2023-08-25 | 网易(杭州)网络有限公司 | Information screen projection method and device, storage medium and electronic device |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007026997A1 (en) * | 2005-08-30 | 2007-03-08 | Kt Corporation | System for service sharing and controling contents in voice session and thereof method |
CN104065718A (en) * | 2014-06-19 | 2014-09-24 | 深圳米唐科技有限公司 | Method and system for achieving social sharing through intelligent loudspeaker box |
CN104063155A (en) * | 2013-03-20 | 2014-09-24 | 腾讯科技(深圳)有限公司 | Content sharing method and device and electronic equipment |
US8977248B1 (en) * | 2007-03-26 | 2015-03-10 | Callwave Communications, Llc | Methods and systems for managing telecommunications and for translating voice messages to text messages |
CN105100449A (en) * | 2015-06-30 | 2015-11-25 | 广东欧珀移动通信有限公司 | Picture sharing method and mobile terminal |
CN105094801A (en) * | 2015-06-12 | 2015-11-25 | 阿里巴巴集团控股有限公司 | Application function activating method and application function activating device |
CN105656753A (en) * | 2015-12-16 | 2016-06-08 | 魅族科技(中国)有限公司 | Sending method and device |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6017219A (en) * | 1997-06-18 | 2000-01-25 | International Business Machines Corporation | System and method for interactive reading and language instruction |
CN103680497B (en) * | 2012-08-31 | 2017-03-15 | 百度在线网络技术(北京)有限公司 | Speech recognition system and method based on video |
CN104023040B (en) * | 2013-03-01 | 2018-06-01 | 联想(北京)有限公司 | A kind of method and device of information processing |
CN104580534B (en) * | 2015-02-06 | 2018-08-31 | 联想(北京)有限公司 | Information processing method, device and electronic equipment |
2016
- 2016-08-23 CN CN201610710046.1A patent/CN107767864B/en active Active
2017
- 2017-06-13 TW TW106119676A patent/TW201807565A/en unknown
- 2017-08-11 WO PCT/CN2017/097012 patent/WO2018036392A1/en active Application Filing
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108470566B (en) * | 2018-03-08 | 2020-09-15 | 腾讯科技(深圳)有限公司 | Application operation method and device |
CN110347303A (en) * | 2018-04-04 | 2019-10-18 | 腾讯科技(深圳)有限公司 | A kind of information processing method and relevant device |
CN110544473A (en) * | 2018-05-28 | 2019-12-06 | 百度在线网络技术(北京)有限公司 | Voice interaction method and device |
US11238858B2 (en) | 2018-05-28 | 2022-02-01 | Baidu Online Network Technology (Beijing) Co., Ltd. | Speech interactive method and device |
CN108920119A (en) * | 2018-06-29 | 2018-11-30 | 维沃移动通信有限公司 | A kind of sharing method and mobile terminal |
CN109065049A (en) * | 2018-09-13 | 2018-12-21 | 苏州思必驰信息科技有限公司 | Social sharing method and system, the intelligent terminal of intelligent terminal based on interactive voice |
CN110728586A (en) * | 2019-09-25 | 2020-01-24 | 支付宝(杭州)信息技术有限公司 | Data sharing method and device and application popularization method and device |
CN111583929A (en) * | 2020-05-13 | 2020-08-25 | 军事科学院系统工程研究院后勤科学与技术研究所 | Control method and device using offline voice and readable equipment |
CN113037924A (en) * | 2021-01-27 | 2021-06-25 | 维沃移动通信有限公司 | Voice sending method and device and electronic equipment |
CN113037924B (en) * | 2021-01-27 | 2022-11-25 | 维沃移动通信有限公司 | Voice transmission method, device, electronic equipment and readable storage medium |
CN113113005A (en) * | 2021-03-19 | 2021-07-13 | 大众问问(北京)信息科技有限公司 | Voice data processing method and device, computer equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
TW201807565A (en) | 2018-03-01 |
CN107767864B (en) | 2021-06-29 |
WO2018036392A1 (en) | 2018-03-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107767864A (en) | Method, apparatus and mobile terminal based on voice sharing information | |
CN104780155B (en) | Apparatus bound method and device | |
CN101557432B (en) | Mobile terminal and menu control method thereof | |
CN104980580B (en) | Short message inspection method and device | |
CN104391870B (en) | Logistics information acquisition methods and device | |
CN105072178B (en) | Cell-phone number binding information acquisition methods and device | |
CN104184870A (en) | Call log marking method and device and electronic equipment | |
CN104639972B (en) | The method, apparatus and equipment of a kind of sharing contents | |
CN105430146A (en) | Telephone number identification method and device | |
CN106209604A (en) | Add the method and device of good friend | |
CN107544802A (en) | device identification method and device | |
CN103997574B (en) | The method and apparatus for obtaining voice service | |
CN106990903A (en) | Display and the method and device of hide application program | |
CN107423386A (en) | Generate the method and device of electronic card | |
CN107301242A (en) | Method, device and the storage medium of page jump | |
CN107220059A (en) | The display methods and device of application interface | |
CN106921958A (en) | The method and apparatus for quitting the subscription of business | |
CN104536753B (en) | Backlog labeling method and device | |
CN106302116A (en) | Message method and device | |
CN107295099A (en) | PUSH message processing method, device and storage medium | |
CN104902055B (en) | Contact person's creation method and device | |
CN107257318A (en) | Control method, device and the computer-readable recording medium of electronic equipment | |
CN107070707A (en) | Router initializes the determination method and apparatus of pattern | |
CN107018502A (en) | Short message recognition methods and device | |
CN105512542A (en) | Information inputting method and system | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 1251709; Country of ref document: HK |
GR01 | Patent grant | ||