CN103869948A - Voice command processing method and electronic device - Google Patents


Info

Publication number
CN103869948A
Authority
CN
China
Prior art date
Legal status
Granted
Application number
CN201210546374.4A
Other languages
Chinese (zh)
Other versions
CN103869948B (en)
Inventor
郭诚
杨振奕
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201210546374.4A priority Critical patent/CN103869948B/en
Publication of CN103869948A publication Critical patent/CN103869948A/en
Application granted granted Critical
Publication of CN103869948B publication Critical patent/CN103869948B/en
Status: Active

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a voice command processing method and a corresponding electronic device. The voice command processing method is applied to the electronic device, and an interface including display objects having corresponding operations is displayed on the display screen of the electronic device. The voice command processing method comprises the steps of: receiving a first input performed by a user in a first manner; determining a range in the interface according to the first input, wherein the range relates to one or more display objects; prompting the range to the user; processing each display object related to by the range and thereby determining a keyword indicating each display object, wherein the keywords of the display objects differ from one another; prompting the keywords to the user; and using the keywords as matching words for voice matching, so that when the user inputs a keyword by voice, the operation corresponding to the display object indicated by that keyword is executed. The voice command processing method and the electronic device enable the user to input voice commands conveniently and rapidly.

Description

Voice command processing method and electronic device
Technical field
The present invention relates to a voice command processing method in an electronic device and to a corresponding electronic device.
Background art
At present, voice technology has developed rapidly and has been widely applied. Voice technology allows the user of an electronic device to issue commands to the device without using the hands, for example to open an application or place a phone call. However, limited by the accuracy of speech recognition and semantic understanding, current voice command processing techniques still have difficulty recognizing and understanding arbitrary voice input.
On the other hand, the screen of an electronic device often shows an interface containing many operable objects; a displayed web page, for example, usually contains a large number of clickable hyperlink texts or pictures. Normally the user performs the corresponding operation by clicking such an operable object, for example clicking a hyperlink text to open the page at its hyperlink address. For greater convenience, however, the user sometimes also wishes to perform such operations by voice command. Moreover, when the display objects on the screen are very dense, clicking them directly, especially by touch, may cause erroneous operation.
Summary of the invention
In view of the above problems and the defects of the prior art, the present invention realizes voice command processing by first selecting a range of operable objects on the screen and then inputting a voice command according to prompted keywords.
An embodiment of the present invention provides a voice command processing method applied to an electronic device, wherein an interface containing display objects having corresponding operations is shown on the display screen of the electronic device. The voice command processing method comprises: receiving a first input performed by a user in a first manner; determining a range in the interface according to the first input, wherein the range relates to one or more of the display objects; prompting the range to the user; processing each display object related to by the range so as to determine a keyword indicating each display object, wherein the keywords of the display objects differ from one another; prompting the keywords to the user; and using the keywords as matching words for voice matching, so that when the user inputs a keyword by voice, the corresponding operation of the display object corresponding to that keyword is executed.
Another embodiment of the present invention provides a voice command processing method applied to an electronic device, wherein an interface containing display objects having corresponding operations is shown on the display screen of the electronic device. The voice command processing method comprises: receiving a first input performed by a user in a first manner; determining a range in the interface according to the first input, wherein the range relates to one or more of the display objects; prompting the range to the user; setting, for each display object related to by the range, a keyword indicating that display object, wherein the keywords of the display objects differ from one another; prompting the keywords to the user; receiving a second input performed by the user in a second manner according to the prompted keywords, wherein the second manner is a voice input manner; performing speech recognition on the second input to obtain a recognition result; determining, among the keywords and according to the recognition result, the target keyword corresponding to the second input; and executing the corresponding operation of the display object indicated by the target keyword.
Another embodiment of the present invention provides an electronic device comprising: a display unit, which shows on the display screen an interface containing display objects having corresponding operations; a first receiving unit, which receives a first input performed by a user in a first manner; a range determining unit, which determines a range in the interface according to the first input, wherein the range relates to one or more of the display objects; a range prompting unit, which prompts the range to the user; a keyword setting unit, which processes each display object related to by the range so as to determine a keyword indicating each display object, wherein the keywords of the display objects differ from one another; a keyword prompting unit, which prompts the keywords to the user; and a keyword operating unit, which uses the keywords as matching words for voice matching, so that when the user inputs a keyword by voice, the corresponding operation of the display object corresponding to that keyword is executed.
Another embodiment of the present invention provides an electronic device comprising: a display unit, which shows on the display screen an interface containing display objects having corresponding operations; a first receiving unit, which receives a first input performed by a user in a first manner; a range determining unit, which determines a range in the interface according to the first input, wherein the range relates to one or more of the display objects; a range prompting unit, which prompts the range to the user; a keyword setting unit, which sets, for each display object related to by the range, a keyword indicating that display object, wherein the keywords of the display objects differ from one another; a keyword prompting unit, which prompts the keywords to the user; a second receiving unit, which receives a second input performed by the user in a second manner according to the prompted keywords, wherein the second manner is a voice input manner; a speech recognition unit, which performs speech recognition on the second input and obtains a recognition result; a target keyword determining unit, which determines, among the keywords and according to the recognition result, the target keyword corresponding to the second input; and a command executing unit, which executes the corresponding operation of the display object indicated by the target keyword.
The technical solutions provided by the above embodiments of the present invention allow the user to input voice commands quickly and conveniently, while reducing the difficulty for the electronic device of recognizing and understanding voice commands and improving the accuracy of voice command recognition.
Brief description of the drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings in the following description show only exemplary embodiments of the present invention.
Fig. 1 shows a flowchart of a voice command processing method 100 according to a first embodiment of the present invention.
Figs. 2a-2c schematically show an example interface 200 according to an embodiment of the present invention.
Fig. 3 shows a flowchart of a voice command processing method 300 according to a second embodiment of the present invention.
Fig. 4 shows a schematic block diagram of an electronic device 400 according to a third embodiment of the present invention.
Fig. 5 shows a schematic block diagram of an electronic device 500 according to a fourth embodiment of the present invention.
Detailed description of the embodiments
Hereinafter, preferred embodiments of the present invention are described in detail with reference to the drawings. Note that in this specification and the drawings, substantially the same steps and elements are denoted by the same or similar reference numerals, and repeated explanation of these steps and elements is omitted.
In the following embodiments of the present invention, an electronic device is any electronic device having a display screen; its concrete form includes but is not limited to a personal computer, smart television, tablet computer, mobile phone, digital camera, personal digital assistant, portable computer, game machine, and so on. The display screen of the electronic device may be a screen of any type, for example LCD, LED, TFT, IPS, or OLED, and may also be a low-power display screen such as a Memory LCD or E-Ink screen. The type of the display screen does not limit the present invention.
Fig. 1 shows a flowchart of a voice command processing method 100 according to a first embodiment of the present invention. The voice command processing method 100 is applied to an electronic device, which may be any electronic device as described above, and an interface containing display objects having corresponding operations is shown on the display screen of the device. The interface here is a general name for the content shown on the screen; it may be the interface of one or more applications or the desktop of an operating system. A display object included in the interface may be any displayed content on the interface, for example text, a picture, or an operation button. For instance, the interface may be a web page displayed in a web browser, and the display objects may be text or pictures in the web page. These display objects have corresponding operations: when the user performs a specific action on such a display object, the electronic device executes the corresponding operation, which may be any operation the device can execute, for example opening an application, showing a new page, or outputting voice. In the web page example, the text or pictures in the page usually carry hyperlinks, and when the user clicks hyperlinked text or a hyperlinked picture, for example with a mouse, the web browser opens the new page corresponding to that hyperlink. In this example, opening the new page corresponding to the hyperlink is the corresponding operation that the display object (text, picture, etc.) has. Preferably, the display object having a corresponding operation is hyperlink text in the web page, and the corresponding operation is opening the hyperlink page at the link address of that hyperlink text.
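To make the relationship between a display object and its corresponding operation concrete, the following minimal Python sketch models a hyperlinked display object whose operation opens a page. All names here (DisplayObject, open_hyperlink) and the example URL are illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DisplayObject:
    """A display object on the interface, e.g. hyperlink text in a web page."""
    text: str
    bounds: tuple            # (x, y, width, height) on the screen
    operation: Callable[[], str]  # the corresponding operation of the object

def open_hyperlink(url: str) -> Callable[[], str]:
    # Illustrative stand-in: a real browser would navigate to the URL.
    return lambda: f"opened {url}"

news = DisplayObject(
    text="Entertainment news item 1",
    bounds=(10, 20, 300, 24),
    operation=open_hyperlink("http://example.com/news/1"),
)
print(news.operation())  # -> opened http://example.com/news/1
```

Invoking the object's operation here plays the role of clicking the hyperlink text in the description's web page example.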
Fig. 2a schematically shows an example interface 200 according to an embodiment of the present invention. The example interface 200 is a simplified web page showing several hyperlinked news headlines 201-206, i.e. display objects 201-206. When the user clicks any of these headlines, the web browser opens a new page showing the corresponding news item; the display objects 201-206 thus have corresponding operations.
Continuing with reference to Fig. 1: in step S101 of the voice command processing method 100, a first input performed by the user in a first manner is received; in step S102, a range is determined in the interface according to the first input, wherein the range relates to one or more of the display objects; and in step S103, the range is prompted to the user. In steps S101 and S102 the user delimits, by an input, a range relating to one or more display objects; the purpose is to select the range of display objects that a voice command may concern. In step S103 the range is prompted to the user so that the user knows the range and the display objects it relates to.
The first input may use any input manner, as long as the range can be determined from it. Preferably, the first manner is a touch or mouse-movement input manner: the user forms a track on the interface by touch or by moving the mouse, for example drawing a circle, and step S102 comprises determining as the range the area delineated by the user in the interface according to the motion track of the first input. The motion track may be closed or unclosed. When determining the range from the track, a closed track may be taken directly as the range, for example taking a drawn circle directly as the range; alternatively, the shape formed by the track may be converted by a predetermined algorithm into a specific shape (for example a rectangle or a circle), which then serves as the range. For example, if the motion track is merely an unclosed curve, a circle can be determined by taking the line between the two farthest points on the curve as its diameter, and that circle then serves as the range delineating the display objects. Of course, those skilled in the art may set the concrete manner of determining the range according to the application scenario, as long as a range relating to one or more display objects can be determined from the user's input.
As another preferred implementation, step S102 may comprise: detecting the cursor position in the interface, and setting the range with the cursor position as its approximate center. In this implementation, only the position of the cursor on the interface needs to be detected, and a range of, for example, circular or rectangular shape is set with the cursor position as its approximate center (for example the center of a circle or the centroid of a rectangle or other shape). The cursor here may be a mark of any shape indicating a position on the interface, such as an arrow, a vertical line, or a blinking caret, and the user may change or set its position by mouse, keyboard, touch, proximity sensing, a joystick, and so on, as the first input.
An example of determining and prompting the range in the interface is described below with reference to Fig. 2b. For the example interface 200 shown in Fig. 2a, the user has determined a circular range 200S by the first input, where the first input may be the touch or mouse-movement input mentioned above, or a change or setting of the cursor. In Fig. 2b, the range 200S relates to display objects 201-203, and the range 200S is prompted to the user by surrounding it with an explicitly displayed border. Note that in this example and in other embodiments of the invention, "relates to" does not necessarily mean "fully covers"; it may also mean "at least partly covers". In this example the range 200S does not fully cover the display objects 201-203 but covers only parts of them, yet the range 200S is still considered to relate to display objects 201-203. In addition, the manner of prompting the range to the user is not limited to a surrounding border; it may be any manner that lets the user know the range, for example increasing the display brightness within the range, changing the display color within the range, increasing the display size within the range, or any combination of these.
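The scheme for an unclosed track (a circle whose diameter joins the two farthest points of the track) and the "at least partial coverage" test can be sketched as follows. The function names and the (x, y, width, height) rectangle representation of a display object's bounds are assumptions for illustration.

```python
import math
from itertools import combinations

def circle_from_track(track):
    """Fit a circle to an unclosed motion track: the two farthest points
    of the track form the diameter, one scheme the description mentions."""
    a, b = max(combinations(track, 2),
               key=lambda pair: math.dist(pair[0], pair[1]))
    center = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    radius = math.dist(a, b) / 2
    return center, radius

def relates_to(circle, rect):
    """'Relates to' means at least partial coverage: the circle need not
    fully cover the object's bounding rectangle (x, y, w, h)."""
    (cx, cy), r = circle
    x, y, w, h = rect
    # distance from the circle center to the closest point of the rectangle
    nx = min(max(cx, x), x + w)
    ny = min(max(cy, y), y + h)
    return math.dist((cx, cy), (nx, ny)) <= r
```

For a stroke [(0, 0), (10, 0), (5, 3)], the farthest pair is (0, 0) and (10, 0), giving a circle centered at (5, 0) with radius 5 that partially covers a rectangle at (4, 2, 2, 2) but not one at (20, 0, 2, 2).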
Then, in step S104, each display object related to by the range is processed so as to determine a keyword indicating each display object, wherein the keywords of the display objects differ from one another. In this step a distinct keyword is determined for each display object related to by the range, for use in the voice input and recognition of the subsequent steps. A keyword may be a part of the display object it indicates, for example any one or more words of a textual display object: in the example of Fig. 2b, the keyword of display object 201 may be any one or more of its words, such as "first" or "amusement". Further, the keyword may be taken from the part of the display object lying inside the range: in the example of Fig. 2b, the part of display object 201 inside the range 200S (i.e. inside the circle 200S) is "Article 1", so the keyword may be its part "first". A keyword may also not be part of the display object at all but any other one or more words, as long as the keywords of the display objects differ from one another; for example, keywords may be set to simple numerals such as "1" and "2", which are convenient for voice input and recognition. The concrete manner of determining the keywords can be set according to the application scenario and does not limit the present invention.
After the keywords have been determined, they are prompted to the user in step S105. The prompting manner may be set according to the application scenario and the user's preference, as long as it lets the user know the keywords; for example: highlighting the keyword, blinking the keyword, displaying the keyword in a color different from the surrounding display objects, displaying the keyword in a font different from the surrounding text, or any combination of these. As an example, Fig. 2c shows example keywords and their prompting manners for the example interface 200 of Figs. 2a and 2b. For display object 201 in Fig. 2c, the keyword is determined as "amusement", a part of display object 201, and is highlighted. For display object 202, the keyword is determined as "second", a part of the part "Article 2 politics" of display object 202 inside the range 200S, and is displayed in bold and in a different font (the surrounding text in a regular font, "second" in a bold one). For display object 203, the keyword is set to the numeral "3", which is not part of display object 203 and is displayed over it in a different font and size. As Fig. 2c shows, the user can clearly see the keyword corresponding to each display object. Therefore, through steps S104 and S105, the user learns the keyword associated with each display object and can input a voice command according to the prompted keywords in order to perform the corresponding operation.
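One minimal way to meet step S104's requirement that every display object in the range receive a distinct keyword is sketched below: try the first word of the object's text inside the range, and fall back to a simple numeral on collision, mirroring the "1", "2" option the description mentions. The function name and the collision rule are assumptions for illustration.

```python
def assign_keywords(texts_in_range):
    """Assign each display object (given as its text inside the range) a
    distinct keyword: the first word of the text, or a numeral when that
    word is already taken by an earlier object."""
    keywords, used = [], set()
    for i, text in enumerate(texts_in_range, start=1):
        words = text.split()
        candidate = words[0] if words else str(i)
        if candidate in used:
            candidate = str(i)  # numerals are easy to speak and recognize
        keywords.append(candidate)
        used.add(candidate)
    return keywords
```

For three headlines with distinct first words, each keyword is that word; when two headlines share a first word, the second one falls back to its position numeral, keeping all keywords different from one another.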
In step S106, the keywords are used as matching words for voice matching, so that when the user inputs a keyword by voice, the corresponding operation of the display object corresponding to that keyword is executed. In this step, if the user inputs one of the above keywords by voice, the corresponding operation of the display object corresponding to that keyword is executed, with the same effect as clicking the display object with a mouse in the prior art. For example, in the example of Fig. 2c, if the user speaks "amusement", the hyperlink page corresponding to display object 201 is opened, i.e. the news content page of "entertainment news item 1"; if the user speaks "second", the news content page of "political news item 2" is opened; and if the user speaks "3", the news content page of "social news item 3" is opened. Note that the corresponding operation is not limited to opening a hyperlinked web page; it may be any appropriate operation, for example opening an application or outputting voice.
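Step S106 amounts to building a small matching vocabulary from the prompted keywords and dispatching the corresponding operation when the recognizer reports a spoken keyword. The following sketch uses assumed names and stand-in operations for the Fig. 2c example.

```python
def handle_spoken_keyword(spoken, vocabulary):
    """vocabulary maps each prompted keyword to the operation of the
    display object it indicates; speaking a keyword triggers that
    operation, with the same effect as clicking the object."""
    operation = vocabulary.get(spoken)
    return operation() if operation is not None else None

# Matching words for the Fig. 2c example: keyword -> corresponding operation.
vocabulary = {
    "amusement": lambda: "open entertainment news page",
    "second":    lambda: "open political news page",
    "3":         lambda: "open social news page",
}
```

Words outside the vocabulary simply match nothing, which is one reason restricting matching to the prompted keywords reduces the recognition burden compared with free-form voice input.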
Fig. 3 shows a flowchart of a voice command processing method 300 according to a second embodiment of the present invention. The electronic device, interface, and display objects involved in the second embodiment are the same as in the first embodiment, and steps S301-S303 of the voice command processing method 300 are identical to steps S101-S103 of the voice command processing method 100 and are not repeated here. The various implementations and preferred examples of the first embodiment apply equally to the second embodiment.
In step S304, for each display object related to by the range, a keyword indicating that display object is set, wherein the keywords of the display objects differ from one another. Similarly to step S104, in this step a distinct keyword is set for each display object related to by the range, for the voice input and recognition of the subsequent steps. Then, identically to step S105, the keywords are prompted to the user in step S305. The explanations of steps S104 and S105 above also apply to steps S304 and S305.
In step S306, a second input performed by the user in a second manner according to the prompted keywords is received, wherein the second manner is a voice input manner: the user speaks according to the prompted keywords, and the electronic device receives this voice input through, for example, a microphone. In step S307, speech recognition is performed on the second input to obtain a recognition result; any speech recognition technique of the prior art may be used, and the recognition result may be, for example, the text corresponding to the input voice. Then, in step S308, the target keyword corresponding to the second input is determined among the keywords according to the recognition result; that is, it is determined which keyword the user's voice input corresponds to, and that keyword is called the target keyword. For example, in the example of Fig. 2c, if the user's voice input is recognized as "amusement", the keyword "amusement" of display object 201 is determined as the target keyword. Finally, in step S309, the corresponding operation of the display object indicated by the target keyword is executed; in the example of Fig. 2c, if "amusement" is determined as the target keyword, the hyperlink page corresponding to display object 201 is opened, i.e. the news content page of "entertainment news item 1". Of course, as described in the first embodiment, the corresponding operation is not limited to opening a hyperlinked web page and may be any appropriate operation.
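Steps S307-S308 reduce to mapping the recognizer's text output onto one of the prompted keywords. The exact-match-then-fuzzy fallback below is an assumption for illustration; the patent only requires that the corresponding target keyword be determined from the recognition result.

```python
import difflib

def find_target_keyword(recognition_result, keywords):
    """Determine the target keyword for a recognition result: exact
    match first, then the closest keyword above a similarity cutoff
    (tolerating small recognition errors)."""
    if recognition_result in keywords:
        return recognition_result
    close = difflib.get_close_matches(recognition_result, keywords,
                                      n=1, cutoff=0.6)
    return close[0] if close else None
```

Because the candidate set is restricted to the few prompted keywords, even a slightly garbled recognition result can usually be resolved to the intended target keyword, which is the accuracy benefit the summary claims.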
Electronic devices according to the third and fourth embodiments of the present invention are described below with reference to Figs. 4 and 5. The functions performed by the units of these electronic devices correspond to the voice command processing methods of the first and second embodiments above, and the various aspects described for those methods apply equally to the devices here, so they are not described again in detail.
Fig. 4 shows a schematic block diagram of an electronic device 400 according to a third embodiment of the present invention. As mentioned above, the electronic device 400 is any electronic device having a display screen; its concrete form includes but is not limited to a personal computer, smart television, tablet computer, mobile phone, digital camera, personal digital assistant, portable computer, game machine, and so on. The electronic device 400 comprises: a display unit 401, which shows on the display screen an interface containing display objects having corresponding operations; a first receiving unit 402, which receives a first input performed by the user in a first manner; a range determining unit 403, which determines a range in the interface according to the first input, wherein the range relates to one or more of the display objects; a range prompting unit 404, which prompts the range to the user; a keyword setting unit 405, which processes each display object related to by the range so as to determine a keyword indicating each display object, wherein the keywords of the display objects differ from one another; a keyword prompting unit 406, which prompts the keywords to the user; and a keyword operating unit 407, which uses the keywords as matching words for voice matching, so that when the user inputs a keyword by voice, the corresponding operation of the display object corresponding to that keyword is executed.
Fig. 5 shows a schematic block diagram of an electronic device 500 according to a fourth embodiment of the present invention. As mentioned above, the electronic device 500 is any electronic device having a display screen; its concrete form includes but is not limited to a personal computer, smart television, tablet computer, mobile phone, digital camera, personal digital assistant, portable computer, game machine, and so on. The electronic device 500 comprises: a display unit 501, which shows on the display screen an interface containing display objects having corresponding operations; a first receiving unit 502, which receives a first input performed by the user in a first manner; a range determining unit 503, which determines a range in the interface according to the first input, wherein the range relates to one or more of the display objects; a range prompting unit 504, which prompts the range to the user; a keyword setting unit 505, which sets, for each display object related to by the range, a keyword indicating that display object, wherein the keywords of the display objects differ from one another; a keyword prompting unit 506, which prompts the keywords to the user; a second receiving unit 507, which receives a second input performed by the user in a second manner according to the prompted keywords, wherein the second manner is a voice input manner; a speech recognition unit 508, which performs speech recognition on the second input and obtains a recognition result; a target keyword determining unit 509, which determines, among the keywords and according to the recognition result, the target keyword corresponding to the second input; and a command executing unit 510, which executes the corresponding operation of the display object indicated by the target keyword.
Note that the units of the electronic devices 400 and 500 of the third and fourth embodiments above may be independent units or combined units formed together with other units. Moreover, the units illustrated in Figs. 4 and 5 may be only some of the units of the electronic devices 400 and 500, which may also comprise other units, for example a central processing unit, a storage unit, and so on.
Preferably, in the third and fourth embodiments above, the first manner is a touch or mouse-movement input manner, and the range determining units 403 and 503 are configured to determine as the range the area delineated by the user in the interface according to the motion track of the first input.
Preferably, in the third and fourth embodiments above, the range determining units 403 and 503 are configured to detect the cursor position in the interface and to set the range with the cursor position as its approximate center.
Preferably, in the third and fourth embodiments above, the range prompting units 404 and 504 are configured to surround the range with an explicitly displayed border, increase the display brightness within the range, change the display color within the range, and/or increase the display size within the range.
Preferably, in the third and fourth embodiments above, the keyword prompting units 406 and 506 are configured to highlight the keyword, blink the keyword, display the keyword in a color different from the surrounding display objects, and/or display the keyword in a font different from the surrounding text.
The technical solutions provided by the present invention allow the user to input voice commands quickly and conveniently, while reducing the difficulty for the electronic device of recognizing and understanding voice commands and improving the accuracy of voice command recognition.
Those of ordinary skill in the art will recognize that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two, and that software modules may be placed in any form of computer storage medium. To illustrate the interchangeability of hardware and software clearly, the composition and steps of the examples have been described above generally in terms of function. Whether these functions are executed in hardware or in software depends on the specific application and the design constraints of the technical solution. Those skilled in the art may implement the described functions in different ways for each specific application, but such implementations should not be considered beyond the scope of the present invention.
It should be appreciated by those skilled in the art that can be dependent on design requirement and other factors carries out various amendments, combination, part combination and replace the present invention, as long as they are in claims and the scope that is equal to thereof.
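The core step the description relies on — deriving, for each display object within the selected scope, a keyword that is distinct from every other object's keyword and is taken from the object's own text (as claims 6 and 7 describe) — can be sketched as follows. This is an illustrative sketch only, not the patented implementation; the function name and the shortest-unique-prefix strategy are assumptions made for the example.

```python
# Illustrative sketch: give each display object (e.g., hyperlink text) a
# distinct keyword drawn from its own text, so the keywords can later be
# registered as voice-match words. Strategy (assumed): the shortest leading
# substring of the object's text not already claimed by another object.

def assign_keywords(display_texts):
    """Map each display-object text to a unique keyword taken from it."""
    keywords = {}
    used = set()
    for text in display_texts:
        for n in range(1, len(text) + 1):
            candidate = text[:n]
            if candidate not in used:
                keywords[text] = candidate
                used.add(candidate)
                break
        else:
            # Every prefix was taken; fall back to the full text.
            keywords[text] = text
    return keywords
```

For example, `assign_keywords(["news", "network", "video"])` yields `"n"`, `"ne"`, and `"v"` respectively: each keyword is part of its display object and no two keywords collide, which is all the claims require of the keyword set.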

Claims (15)

1. A voice command processing method, applied to an electronic device, wherein a display screen of the electronic device displays an interface comprising display objects having corresponding operations, the voice command processing method comprising:
receiving a first input performed by a user in a first manner;
determining a scope in the interface according to the first input, wherein the scope involves one or more of the display objects;
prompting the scope to the user;
processing each display object involved in the scope to determine a keyword indicating each display object, wherein the keywords of the display objects differ from one another;
prompting the keywords to the user; and
using the keywords as matching words for voice matching, so that when the user inputs a keyword by voice, the corresponding operation of the display object corresponding to that keyword is executed.
2. A voice command processing method, applied to an electronic device, wherein a display screen of the electronic device displays an interface comprising display objects having corresponding operations, the voice command processing method comprising:
receiving a first input performed by a user in a first manner;
determining a scope in the interface according to the first input, wherein the scope involves one or more of the display objects;
prompting the scope to the user;
setting, for each display object involved in the scope, a keyword indicating that display object, wherein the keywords of the display objects differ from one another;
prompting the keywords to the user;
receiving a second input performed by the user in a second manner according to the prompted keywords, wherein the second manner is a voice input manner;
performing speech recognition on the second input to obtain a recognition result;
determining, according to the recognition result, a target keyword corresponding to the second input among the keywords; and
executing the corresponding operation of the display object indicated by the target keyword.
3. The method according to claim 1 or 2, wherein
the first manner is a touch or mouse-movement input manner;
and determining a scope in the interface according to the first input comprises:
determining, as the scope, the area delineated by the user in the interface according to the motion track of the first input.
4. The method according to claim 1 or 2, wherein determining a scope in the interface according to the first input comprises:
detecting the position of a cursor in the interface; and
setting the scope approximately centered on the cursor position.
5. The method according to claim 1 or 2, wherein prompting the scope to the user comprises:
surrounding the scope with a visible border, increasing the display brightness within the scope, changing the display color within the scope, and/or enlarging the display size within the scope.
6. The method according to claim 1 or 2, wherein
the keyword is a part of the display object it indicates.
7. The method according to claim 6, wherein
the keyword is a part of the portion of its indicated display object that lies within the scope.
8. The method according to claim 1 or 2, wherein prompting the keywords to the user comprises:
displaying the keyword highlighted, displaying the keyword blinking, displaying the keyword in a color different from the surrounding display objects, and/or displaying the keyword in a font different from the surrounding text.
9. The method according to claim 1 or 2, wherein
the interface is a display interface of a web page; and
the display object having a corresponding operation is a hyperlink text in the web page, and the corresponding operation is opening the hyperlink page at the link address of the hyperlink text.
10. An electronic device, comprising:
a display unit, displaying on a display screen an interface comprising display objects having corresponding operations;
a first receiving unit, receiving a first input performed by a user in a first manner;
a scope determining unit, determining a scope in the interface according to the first input, wherein the scope involves one or more of the display objects;
a region prompting unit, prompting the scope to the user;
a keyword setting unit, processing each display object involved in the scope to determine a keyword indicating each display object, wherein the keywords of the display objects differ from one another;
a keyword prompting unit, prompting the keywords to the user; and
a keyword operating unit, using the keywords as matching words for voice matching, so that when the user inputs a keyword by voice, the corresponding operation of the display object corresponding to that keyword is executed.
11. An electronic device, comprising:
a display unit, displaying on a display screen an interface comprising display objects having corresponding operations;
a first receiving unit, receiving a first input performed by a user in a first manner;
a scope determining unit, determining a scope in the interface according to the first input, wherein the scope involves one or more of the display objects;
a region prompting unit, prompting the scope to the user;
a keyword setting unit, setting, for each display object involved in the scope, a keyword indicating that display object, wherein the keywords of the display objects differ from one another;
a keyword prompting unit, prompting the keywords to the user;
a second receiving unit, receiving a second input performed by the user in a second manner according to the prompted keywords, wherein the second manner is a voice input manner;
a voice recognition unit, performing speech recognition on the second input to obtain a recognition result;
a target keyword determining unit, determining, according to the recognition result, a target keyword corresponding to the second input among the keywords; and
a command executing unit, executing the corresponding operation of the display object indicated by the target keyword.
12. The electronic device according to claim 10 or 11, wherein
the first manner is a touch or mouse-movement input manner;
and the scope determining unit is configured to:
determine, as the scope, the area delineated by the user in the interface according to the motion track of the first input.
13. The electronic device according to claim 10 or 11, wherein the scope determining unit is configured to:
detect the position of a cursor in the interface; and
set the scope approximately centered on the cursor position.
14. The electronic device according to claim 10 or 11, wherein the region prompting unit is configured to:
surround the scope with a visible border, increase the display brightness within the scope, change the display color within the scope, and/or enlarge the display size within the scope.
15. The electronic device according to claim 10 or 11, wherein the keyword prompting unit is configured to:
display the keyword highlighted, display the keyword blinking, display the keyword in a color different from the surrounding display objects, and/or display the keyword in a font different from the surrounding text.
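The second-input flow of claims 2 and 11 — recognize the user's speech, find the prompted keyword it corresponds to, and execute the operation of the display object that keyword indicates — can be sketched as follows. This is a hedged illustration under assumed names, not the patented implementation; exact string matching stands in for whatever voice-matching the device actually performs.

```python
# Illustrative sketch: match a speech-recognition result against the
# prompted keywords and run the operation bound to the matched display
# object (e.g., opening the link address of a hyperlink text).

def execute_voice_command(recognized_text, keyword_to_operation):
    """Return the result of the operation whose keyword matches the
    recognized speech, or None when no keyword matches."""
    spoken = recognized_text.strip().lower()
    for keyword, operation in keyword_to_operation.items():
        if spoken == keyword.lower():
            return operation()
    return None

# Usage: two hyperlinks in the scope, each bound to an "open link" action.
ops = {
    "sports": lambda: "open sports page",
    "finance": lambda: "open finance page",
}
```

With this table, a recognized utterance of "sports" triggers the sports link's operation, while an utterance matching no keyword is ignored, which mirrors the target-keyword determination and command execution steps of claim 2.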
CN201210546374.4A 2012-12-14 2012-12-14 Voice command processing method and electronic equipment Active CN103869948B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210546374.4A CN103869948B (en) 2012-12-14 2012-12-14 Voice command processing method and electronic equipment


Publications (2)

Publication Number Publication Date
CN103869948A true CN103869948A (en) 2014-06-18
CN103869948B CN103869948B (en) 2019-01-15

Family

ID=50908575

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210546374.4A Active CN103869948B (en) 2012-12-14 2012-12-14 Voice command processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN103869948B (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030065515A1 (en) * 2001-10-03 2003-04-03 Toshikazu Yokota Information processing system and method operable with voice input command
CN1647023A (en) * 2002-02-15 2005-07-27 Sap股份公司 Voice-controlled data entry
CN101329673A (en) * 2007-06-22 2008-12-24 刘艳萍 Method and device for processing document information
CN101557651A (en) * 2008-04-08 2009-10-14 Lg电子株式会社 Mobile terminal and menu control method thereof
CN101769758A (en) * 2008-12-30 2010-07-07 英华达(上海)科技有限公司 Planning method for search range of interest point
CN101807398A (en) * 2009-02-16 2010-08-18 宏正自动科技股份有限公司 Voice identification device and operation method thereof
JP2010218238A (en) * 2009-03-17 2010-09-30 Fujitsu Ltd Text editing device
CN101853253A (en) * 2009-03-30 2010-10-06 三星电子株式会社 Equipment and method for managing multimedia contents in mobile terminal
CN102339193A (en) * 2010-07-21 2012-02-01 Tcl集团股份有限公司 Voice control conference speed method and system
CN102609208A (en) * 2012-02-13 2012-07-25 广州市动景计算机科技有限公司 Method and system for word capture on screen of touch screen equipment, and touch screen equipment


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104184890A (en) * 2014-08-11 2014-12-03 联想(北京)有限公司 Information processing method and electronic device
CN104318923A (en) * 2014-11-06 2015-01-28 广州三星通信技术研究有限公司 Speech processing method and device and terminal
CN104318923B (en) * 2014-11-06 2020-08-11 广州三星通信技术研究有限公司 Voice processing method and device and terminal
CN107408118A (en) * 2015-03-18 2017-11-28 三菱电机株式会社 Information providing system
CN106168895A (en) * 2016-07-07 2016-11-30 北京行云时空科技有限公司 Sound control method and intelligent terminal for intelligent terminal
WO2018040438A1 (en) * 2016-08-30 2018-03-08 宇龙计算机通信科技(深圳)有限公司 Page content processing method and device
CN106383847B (en) * 2016-08-30 2019-03-22 宇龙计算机通信科技(深圳)有限公司 A kind of page content processing method and device
CN106383847A (en) * 2016-08-30 2017-02-08 宇龙计算机通信科技(深圳)有限公司 Page content processing method and device
CN107277630A (en) * 2017-07-20 2017-10-20 海信集团有限公司 The display methods and device of information of voice prompt
CN108534187A (en) * 2018-03-08 2018-09-14 新智数字科技有限公司 A kind of control method and device of gas-cooker, a kind of gas-cooker
CN110544473A (en) * 2018-05-28 2019-12-06 百度在线网络技术(北京)有限公司 Voice interaction method and device
CN110544473B (en) * 2018-05-28 2022-11-08 百度在线网络技术(北京)有限公司 Voice interaction method and device
CN111061452A (en) * 2019-12-17 2020-04-24 北京小米智能科技有限公司 Voice control method and device of user interface

Also Published As

Publication number Publication date
CN103869948B (en) 2019-01-15

Similar Documents

Publication Publication Date Title
CN103869948A (en) Voice command processing method and electronic device
US9384503B2 (en) Terminal apparatus, advertisement display control apparatus, and advertisement display method
US9691381B2 (en) Voice command recognition method and related electronic device and computer-readable medium
US8547347B2 (en) Method for generating multiple windows frames, electronic device thereof, and computer program product using the method
US20140317547A1 (en) Dynamically-positioned character string suggestions for gesture typing
WO2022121790A1 (en) Split-screen display method and apparatus, electronic device, and readable storage medium
CN101419624A (en) Mobile information device and method for browsing web page
JP2013218676A (en) Input method, input device and terminal
EP2909702B1 (en) Contextually-specific automatic separators
CN105488051B (en) Webpage processing method and device
WO2015045676A1 (en) Information processing device and control program
CN105786976A (en) Mobile terminal and application search method thereof
CN103747308A (en) Method and system for controlling smart television with analog keys, and mobile terminal
CN102768583B (en) The candidate word filter method of intelligent and portable equipment and the input of whole sentence thereof and device
CN101526949A (en) Operating method of desktop selected column after selecting text string by dragging and dropping left key of mouse
CN111880668A (en) Input display method and device and electronic equipment
JP2013025441A (en) Information processing device, system, method, and program
US20160292140A1 (en) Associative input method and terminal
US10261979B2 (en) Method and apparatus for rendering a screen-representation of an electronic document
JP5791668B2 (en) Information processing apparatus, method, and computer program
EP3776161B1 (en) Method and electronic device for configuring touch screen keyboard
CN112764551A (en) Vocabulary display method and device and electronic equipment
CN107180039A (en) A kind of text information recognition methods and device based on picture
CN106919558B (en) Translation method and translation device based on natural conversation mode for mobile equipment
CN101539820A (en) Touch screen drawing input method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant