CN103869948B - Voice command processing method and electronic equipment - Google Patents

Voice command processing method and electronic equipment

Info

Publication number
CN103869948B
Authority
CN
China
Prior art keywords
keyword
range
display
input
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201210546374.4A
Other languages
Chinese (zh)
Other versions
CN103869948A (en)
Inventor
郭诚
杨振奕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201210546374.4A priority Critical patent/CN103869948B/en
Publication of CN103869948A publication Critical patent/CN103869948A/en
Application granted granted Critical
Publication of CN103869948B publication Critical patent/CN103869948B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The present invention provides a voice command processing method and a corresponding electronic device. The voice command processing method is applied to an electronic device whose display screen shows an interface containing display objects that have corresponding operations. The method includes: receiving a first input performed by a user in a first manner; determining a range in the interface according to the first input, where the range involves one or more display objects; prompting the user with the range; processing each display object involved in the range to determine a keyword indicating each display object, where the keywords of the display objects are different from one another; prompting the user with the keywords; and using the keywords as matching words for voice matching, so that when the user inputs a keyword by voice, the corresponding operation of the display object indicated by that keyword is executed. With the present invention, a user can input voice commands conveniently and efficiently.

Description

Voice command processing method and electronic equipment
Technical field
The present invention relates to a voice command processing method in an electronic device and to a corresponding electronic device.
Background art
Voice technology has developed rapidly and is now widely applied. It allows a user to issue commands to an electronic device without using the hands, for example to open an application or place a phone call. However, limited by the accuracy of speech recognition and semantic understanding, current voice command processing techniques still cannot recognize and understand arbitrary voice input.
On the other hand, the screen of an electronic device often displays an interface containing many operable objects; a web page, for example, usually contains a large number of clickable hyperlink texts or pictures. Normally the user clicks these operable objects to perform the corresponding operation, for example clicking a hyperlink text to open the page at the hyperlink address. For greater convenience, however, users sometimes also wish to perform such operations by voice command. In addition, when the display objects on the screen are very densely packed, clicking directly, especially by finger touch, can easily cause erroneous operations.
Summary of the invention
In view of the above problems and the defects of the prior art, the present invention first selects a range of operable objects on the screen and then processes voice commands by having the user input a voice command according to prompted keywords.
One embodiment of the present invention provides a voice command processing method applied to an electronic device, where an interface containing display objects with corresponding operations is shown on the display screen of the electronic device. The voice command processing method includes: receiving a first input performed by a user in a first manner; determining a range in the interface according to the first input, where the range involves one or more display objects; prompting the user with the range; processing each display object involved in the range to determine a keyword indicating each display object, where the keywords of the display objects are different from one another; prompting the user with the keywords; and using the keywords as matching words for voice matching, so that when the user inputs a keyword by voice, the corresponding operation of the display object indicated by that keyword is executed.
Another embodiment of the present invention provides a voice command processing method applied to an electronic device, where an interface containing display objects with corresponding operations is shown on the display screen of the electronic device. The voice command processing method includes: receiving a first input performed by a user in a first manner; determining a range in the interface according to the first input, where the range involves one or more display objects; prompting the user with the range; setting, for each display object involved in the range, a keyword indicating that display object, where the keywords of the display objects are different from one another; prompting the user with the keywords; receiving a second input performed by the user in a second manner according to the prompted keywords, where the second manner is voice input; performing speech recognition on the second input and obtaining a recognition result; determining, according to the recognition result, a target keyword corresponding to the second input from among the keywords; and executing the corresponding operation of the display object indicated by the target keyword.
Another embodiment of the present invention provides an electronic device, including: a display unit that shows on a display screen an interface containing display objects with corresponding operations; a first receiving unit that receives a first input performed by a user in a first manner; a range determination unit that determines a range in the interface according to the first input, where the range involves one or more display objects; a range prompt unit that prompts the user with the range; a keyword setting unit that processes each display object involved in the range to determine a keyword indicating each display object, where the keywords of the display objects are different from one another; a keyword prompt unit that prompts the user with the keywords; and a keyword operation unit that uses the keywords as matching words for voice matching, so that when the user inputs a keyword by voice, the corresponding operation of the display object indicated by that keyword is executed.
Another embodiment of the present invention provides an electronic device, including: a display unit that shows on a display screen an interface containing display objects with corresponding operations; a first receiving unit that receives a first input performed by a user in a first manner; a range determination unit that determines a range in the interface according to the first input, where the range involves one or more display objects; a range prompt unit that prompts the user with the range; a keyword setting unit that sets, for each display object involved in the range, a keyword indicating that display object, where the keywords of the display objects are different from one another; a keyword prompt unit that prompts the user with the keywords; a second receiving unit that receives a second input performed by the user in a second manner according to the prompted keywords, where the second manner is voice input; a speech recognition unit that performs speech recognition on the second input and obtains a recognition result; a target keyword determination unit that determines, according to the recognition result, a target keyword corresponding to the second input from among the keywords; and a command execution unit that executes the corresponding operation of the display object indicated by the target keyword.
The technical solutions provided by the above embodiments of the present invention enable users to input voice commands conveniently and efficiently, while also reducing the difficulty for the electronic device of recognizing and understanding voice commands and improving the accuracy of voice command recognition.
Brief description of the drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. The drawings in the following description show only exemplary embodiments of the present invention.
Fig. 1 shows a flowchart of a voice command processing method 100 according to the first embodiment of the present invention.
Fig. 2a-2c schematically show an example interface 200 according to an embodiment of the present invention.
Fig. 3 shows a flowchart of a voice command processing method 300 according to the second embodiment of the present invention.
Fig. 4 shows a schematic block diagram of an electronic device 400 according to the third embodiment of the present invention.
Fig. 5 shows a schematic block diagram of an electronic device 500 according to the fourth embodiment of the present invention.
Detailed description of the embodiments
Hereinafter, preferred embodiments of the present invention are described in detail with reference to the accompanying drawings. Note that in the specification and the drawings, substantially the same steps and elements are denoted by the same or similar reference numerals, and repeated explanation of these steps and elements is omitted.
In the following embodiments of the present invention, an electronic device refers to any electronic device with a display screen; specific forms include, but are not limited to, personal computers, smart televisions, tablet computers, mobile phones, digital cameras, personal digital assistants, portable computers, game machines and the like. The display screen of the electronic device can be any type of screen, for example LCD, LED, TFT, IPS or OLED, and may also be a low-power display such as a Memory LCD or E-Ink screen. The type of the display screen does not limit the present invention.
Fig. 1 shows a flowchart of the voice command processing method 100 according to the first embodiment of the present invention. The voice command processing method 100 is applied to an electronic device, which can be any electronic device as described above. An interface containing display objects with corresponding operations is shown on the display screen of the electronic device. The interface here is a general term for the content shown on the screen; it can be the interface of one or several applications, or the desktop of the operating system. The display objects contained in the interface can be any displayed content on the interface, for example text, pictures or operation buttons. For instance, the interface can be a web page in a web browser, i.e. the display interface of a web page, and the display objects can be text or pictures in the web page. These display objects have corresponding operations: when the user performs a specific action on a display object, the electronic device executes the corresponding operation, which can be any operation the electronic device is able to execute, such as opening an application, showing a new page, or outputting voice. In the web page example above, text or pictures in the web page usually carry hyperlinks; when the user clicks the hyperlinked text or picture with a mouse, the web browser opens the new page corresponding to the hyperlink. In this example, opening the new page corresponding to the hyperlink is the corresponding operation of the display object (the text or picture). Preferably, the display object with a corresponding operation is a hyperlink text in the web page, and the corresponding operation is opening the hyperlink page corresponding to the link address of the hyperlink text.
Fig. 2 a schematically shows example interface 200 according to an embodiment of the present invention.The example interface 200 is a letter The Webpage of change, which show a plurality of headline 201-206 with hyperlink, that is, show object 201-206.When with When any bar headline therein is for example clicked at family, web browser can open the new page for showing corresponding news, i.e. institute Stating display object 201-206 has corresponding operation.
Continuing with Fig. 1, in step S101 of the voice command processing method 100, a first input performed by a user in a first manner is received; in step S102, a range is determined in the interface according to the first input, where the range involves one or more display objects; and in step S103, the user is prompted with the range. In steps S101 and S102, the user determines, by means of the input, a range involving one or more display objects; the purpose is to delimit the range of display objects that the subsequent voice command may involve. In step S103, the range is prompted to the user so that the user knows the range and the display objects it involves. The first input can be any input manner as long as the range can be determined from it. Preferably, the first manner is a touch or mouse-movement input, i.e. the user forms a track on the interface by touch or by moving the mouse, for example drawing a circle on the interface; step S102 then includes determining, according to the motion track of the first input, the region circled by the user on the interface as the range, i.e. determining the range from the track the user forms on the interface by touch or mouse. Note that the motion track here can be a closed track or an unclosed track. When determining the range from the motion track, a closed track can be used directly as the range, for example the drawn circle can serve directly as the range; alternatively, the shape formed by the motion track can be converted, according to a predetermined algorithm, into a specific shape (such as a rectangle or a circle), and the converted shape is then used as the range. For example, if the motion track is only an unclosed curve, a circle whose diameter is the line connecting the two farthest points on the curve can be determined, and that circle is then used as the range delimiting the display objects. Of course, those skilled in the art can set the concrete manner of determining the range according to the practical application scenario, as long as a range involving one or more display objects can be determined from the user's input. As another preferred embodiment, step S102 may include: detecting the cursor position of a cursor in the interface; and setting the range with the cursor position as its approximate center. In this preferred embodiment, only the position of the cursor on the interface needs to be detected, and a range of a set shape, such as a circle or a rectangle, is set with the cursor position as its approximate center (for example the center of the circle or the centroid of the rectangle or other shape). The cursor here can be any mark of arbitrary shape that indicates a position on the interface, such as an arrow, a blinking vertical bar or a crosshair. The user can change or set the cursor position by mouse, keyboard, touch, proximity sensing, joystick and the like, thereby performing the first input. An example of determining and prompting the range in the interface is described with reference to Fig. 2b. As shown in Fig. 2b, for the interface 200 shown in Fig. 2a, the user has determined a roughly circular range 200S through the first input, where the first input can be the touch or mouse-movement input described above, or a change or setting of the cursor. In Fig. 2b, the range 200S involves display objects 201-203, and the range 200S is prompted to the user by surrounding it with an explicit border. Note that in this example and in other embodiments of the present invention, "involves" does not necessarily mean "completely covers"; it can also mean "at least partly covers". For example, in this example the range 200S does not fully cover the display objects 201-203 but covers only part of them, yet the range 200S is still considered to involve the display objects 201-203. In addition, the manner of prompting the user with the range is not limited to surrounding it with an explicit border; it can be any manner that lets the user know the range, for example increasing the display brightness within the range, changing the display color within the range, enlarging the display size within the range, or any combination of these.
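As a concrete illustration of step S102, the sketch below shows one hypothetical way of turning a touch or mouse track into a range: a closed track is kept as a polygonal region, while an unclosed track is converted into the circle whose diameter connects its two farthest points, as described above. The data structures and names (Point, determine_range, the closing tolerance) are illustrative assumptions, not part of the patent.

```python
import math
from dataclasses import dataclass
from typing import List

@dataclass
class Point:
    x: float
    y: float

def _distance(a: Point, b: Point) -> float:
    return math.hypot(a.x - b.x, a.y - b.y)

def determine_range(track: List[Point], close_tolerance: float = 20.0):
    """Sketch of step S102: derive a selection range from a motion track.

    If the track is (nearly) closed, return it as a polygon; otherwise
    return a circle whose diameter is the segment between the two
    farthest points of the track. Names and thresholds are assumptions.
    """
    if len(track) >= 3 and _distance(track[0], track[-1]) <= close_tolerance:
        # Closed track: use the drawn outline directly as the range.
        return ("polygon", track)

    # Unclosed track: find the two farthest points (O(n^2) is acceptable
    # for a hand-drawn track of a few hundred samples).
    best = (track[0], track[-1], 0.0)
    for i, a in enumerate(track):
        for b in track[i + 1:]:
            d = _distance(a, b)
            if d > best[2]:
                best = (a, b, d)
    a, b, diameter = best
    center = Point((a.x + b.x) / 2.0, (a.y + b.y) / 2.0)
    return ("circle", (center, diameter / 2.0))
```

Under this sketch, a display object would then count as "involved" in the range if its bounding box at least partly overlaps the returned region, matching the "at least partly covers" reading above.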
Next, in step S104, each display object involved in the range is processed to determine a keyword indicating each display object, where the keywords of the display objects are different from one another. In this step, a distinct keyword is determined for each display object involved in the range, to be used for voice input and recognition in the subsequent steps. The keyword can be a part of the display object it indicates, for example any one or more characters of a text display object: in the example shown in Fig. 2b, for display object 201 the keyword can be any one or more characters of display object 201, such as "first" or "entertainment". Further, the keyword can be a part of the portion of the display object that lies within the range: in the example shown in Fig. 2b, for display object 201, "first" belongs to the portion of display object 201 inside the range 200S, i.e. inside the circle 200S, so the keyword can be that part, "first". The keyword need not be a part of the display object at all; it can be any other one or more characters, as long as the keywords of the display objects differ from one another. For example, the keywords can be set to simple numbers such as "1" and "2", which makes voice input and recognition easier. The concrete manner of determining the keywords can be set according to the practical application scenario and does not limit the present invention. After the keywords have been determined, in step S105 the user is prompted with the keywords. The manner of prompting can be set according to the practical application scenario and the user's preference, as long as the user can learn the keywords; for example, the keyword can be highlighted, displayed blinking, displayed in a color different from the surrounding display objects, displayed in a font different from the surrounding text, or any combination of these. As an example, Fig. 2c shows example keywords and their prompting manner for the example interface 200 of Fig. 2a and Fig. 2b. In Fig. 2c, for display object 201 the keyword is determined as "entertainment", which is a part of display object 201, and "entertainment" is highlighted; for display object 202 the keyword is determined as "second", which is a part of the portion of display object 202 inside the range 200S, and "second" is shown in bold and in a font different from the surrounding text; for display object 203 the keyword is set to the number "3", which is not a part of display object 203, and it is shown over display object 203 in a different font and size. As can be seen from Fig. 2c, the user can clearly see the keyword corresponding to each display object. Therefore, through steps S104 and S105 the user learns the keyword associated with each display object and can then input a voice command according to the prompted keyword to have the corresponding operation executed.
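A minimal sketch of how steps S104/S304 might assign mutually distinct keywords is shown below. It prefers a short prefix of the text visible inside the range and falls back to a simple number when that prefix would collide with another object's keyword or when the object has no text (e.g. a picture). The DisplayObject structure, the prefix length and the function name are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class DisplayObject:
    object_id: int
    visible_text: str   # text of the object that lies inside the range

def assign_keywords(objects: List[DisplayObject], prefix_len: int = 2) -> Dict[int, str]:
    """Sketch of steps S104/S304: give every display object a distinct keyword."""
    keywords: Dict[int, str] = {}
    used = set()
    counter = 1
    for obj in objects:
        candidate = obj.visible_text[:prefix_len].strip()
        if not candidate or candidate in used:
            # Fall back to a running number, which is also easy to speak.
            while str(counter) in used:
                counter += 1
            candidate = str(counter)
        used.add(candidate)
        keywords[obj.object_id] = candidate
    return keywords
```

The uniqueness guarantee is what matters here; any other collision-avoidance scheme would serve equally well as long as each display object ends up with a keyword different from all the others.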
In step S106, the keywords are used as matching words for voice matching, so that when the user inputs a keyword by voice, the corresponding operation of the display object indicated by that keyword is executed. In this step, if the user inputs one of the keywords by voice, the corresponding operation of the display object indicated by that keyword is executed, with the same effect as clicking the display object with a mouse in the prior art. For example, in the example of Fig. 2c, if the user speaks "entertainment", the hyperlink page corresponding to display object 201, i.e. the news content page of "first entertainment news", is opened; if the user speaks "second", the news content page of "second political news" is opened; and if the user speaks "3", the news content page of "third social news" is opened. Note that the corresponding operation is not limited to opening a hyperlinked web page; it can be any appropriate operation, for example opening an application or outputting voice.
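One consequence of step S106 is that the recognizer only has to distinguish between the few prompted keywords rather than arbitrary speech. The sketch below illustrates that idea with a hypothetical register/dispatch pair; the recognizer interface and the open_page helper in the commented example are assumptions, since the patent does not prescribe any particular speech engine.

```python
from typing import Callable, Dict, List

class KeywordVoiceDispatcher:
    """Sketch of step S106: restrict voice matching to the prompted keywords
    and run the display object's operation when its keyword is spoken."""

    def __init__(self) -> None:
        self._actions: Dict[str, Callable[[], None]] = {}

    def register(self, keyword: str, operation: Callable[[], None]) -> None:
        # Each keyword is unique (guaranteed by step S104), so a plain dict suffices.
        self._actions[keyword] = operation

    def matching_words(self) -> List[str]:
        # This small vocabulary can be handed to the speech engine as its match set.
        return list(self._actions.keys())

    def on_recognized(self, spoken_text: str) -> bool:
        """Run the operation whose keyword matches the recognized text."""
        action = self._actions.get(spoken_text.strip())
        if action is None:
            return False
        action()
        return True

# Hypothetical wiring for the Fig. 2c scenario (open_page is a placeholder):
# dispatcher = KeywordVoiceDispatcher()
# dispatcher.register("entertainment", lambda: open_page("news/1"))
# dispatcher.register("second", lambda: open_page("news/2"))
# dispatcher.register("3", lambda: open_page("news/3"))
```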
Fig. 3 shows a flowchart of the voice command processing method 300 according to the second embodiment of the present invention. The electronic device, interface and display objects involved in the second embodiment are the same as in the first embodiment, and steps S301-S303 of the voice command processing method 300 are the same as steps S101-S103 of the voice command processing method 100, so they are not described again here. The various specific implementations and preferred examples of the first embodiment apply equally to the second embodiment.
In step S304, for each display object involved in the range, a keyword indicating that display object is set, where the keywords of the display objects are different from one another. Similarly to step S104, in this step a distinct keyword is set for each display object involved in the range, to be used for voice input and recognition in the subsequent steps. Then, as in step S105, the user is prompted with the keywords in step S305. The descriptions above regarding steps S104 and S105 also apply to steps S304 and S305.
In step S306, a second input performed by the user in a second manner according to the prompted keywords is received, where the second manner is voice input. In this step the user performs voice input according to the prompted keywords, and the electronic device receives the voice input, for example through a microphone. In step S307, speech recognition is performed on the second input and a recognition result is obtained. The speech recognition here can be any speech recognition technique in the prior art, and the recognition result can be, for example, the text corresponding to the input voice. Then, in step S308, a target keyword corresponding to the second input is determined from among the keywords according to the recognition result. In this step it is determined which keyword the user's voice input corresponds to, and that keyword is called the target keyword; for example, in the example of Fig. 2c, if the user's voice input is recognized as "entertainment", the keyword "entertainment" of display object 201 is determined as the target keyword. Then, in step S309, the corresponding operation of the display object indicated by the target keyword is executed; for example, in the example of Fig. 2c, if "entertainment" is the target keyword, the hyperlink page corresponding to display object 201, i.e. the news content page of "first entertainment news", is opened. Of course, as in the first embodiment, the corresponding operation is not limited to opening a hyperlinked web page and can be any appropriate operation.
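Steps S306-S309 can be read as a small pipeline: receive the voice input, recognize it, match the result against the keyword set, and execute. The sketch below outlines the matching and execution part under the assumption that the recognition result is already available as text (how it is obtained, e.g. via a microphone and any off-the-shelf recognizer, is outside the sketch). Exact matching is tried first, with a tolerant substring fallback for recognizers that return longer phrases; both the function names and the fallback rule are assumptions.

```python
from typing import Callable, Dict, Optional

def determine_target_keyword(recognition_result: str,
                             keywords: Dict[str, Callable[[], None]]) -> Optional[str]:
    """Sketch of step S308: pick the keyword that corresponds to the result."""
    text = recognition_result.strip()
    if text in keywords:          # exact match, the normal case
        return text
    for kw in keywords:           # tolerant fallback: keyword contained in a phrase
        if kw and kw in text:
            return kw
    return None

def handle_second_input(recognition_result: str,
                        keywords: Dict[str, Callable[[], None]]) -> bool:
    """Sketch of steps S307-S309: match the recognized text and execute."""
    target = determine_target_keyword(recognition_result, keywords)
    if target is None:
        return False              # no keyword matched; nothing is executed
    keywords[target]()            # corresponding operation of the display object
    return True
```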
Electronic devices according to the third and fourth embodiments of the present invention are described with reference to Fig. 4 and Fig. 5. The functions performed by the units of the electronic devices described below correspond to the voice command processing methods of the first and second embodiments above, and the aspects described for the voice command processing methods apply equally to the electronic devices here, so they are not described in detail again.
Fig. 4 shows a schematic block diagram of the electronic device 400 according to the third embodiment of the present invention. As described above, the electronic device 400 refers to any electronic device with a display screen; specific forms include, but are not limited to, personal computers, smart televisions, tablet computers, mobile phones, digital cameras, personal digital assistants, portable computers, game machines and the like. The electronic device 400 includes: a display unit 401 that shows on the display screen an interface containing display objects with corresponding operations; a first receiving unit 402 that receives a first input performed by a user in a first manner; a range determination unit 403 that determines a range in the interface according to the first input, where the range involves one or more display objects; a range prompt unit 404 that prompts the user with the range; a keyword setting unit 405 that processes each display object involved in the range to determine a keyword indicating each display object, where the keywords of the display objects are different from one another; a keyword prompt unit 406 that prompts the user with the keywords; and a keyword operation unit 407 that uses the keywords as matching words for voice matching, so that when the user inputs a keyword by voice, the corresponding operation of the display object indicated by that keyword is executed.
Fig. 5 shows a schematic block diagram of the electronic device 500 according to the fourth embodiment of the present invention. As described above, the electronic device 500 refers to any electronic device with a display screen; specific forms include, but are not limited to, personal computers, smart televisions, tablet computers, mobile phones, digital cameras, personal digital assistants, portable computers, game machines and the like. The electronic device 500 includes: a display unit 501 that shows on the display screen an interface containing display objects with corresponding operations; a first receiving unit 502 that receives a first input performed by a user in a first manner; a range determination unit 503 that determines a range in the interface according to the first input, where the range involves one or more display objects; a range prompt unit 504 that prompts the user with the range; a keyword setting unit 505 that sets, for each display object involved in the range, a keyword indicating that display object, where the keywords of the display objects are different from one another; a keyword prompt unit 506 that prompts the user with the keywords; a second receiving unit 507 that receives a second input performed by the user in a second manner according to the prompted keywords, where the second manner is voice input; a speech recognition unit 508 that performs speech recognition on the second input and obtains a recognition result; a target keyword determination unit 509 that determines, according to the recognition result, a target keyword corresponding to the second input from among the keywords; and a command execution unit 510 that executes the corresponding operation of the display object indicated by the target keyword.
Note that each unit of the electronic devices 400 and 500 of the third and fourth embodiments above can be an independent unit or a combined unit formed together with other units. Moreover, the units illustrated in Fig. 4 and Fig. 5 may be only some of the units of the electronic devices 400 and 500; the electronic devices 400 and 500 may also include other units, such as a central processing unit and a storage unit.
Preferably, in the third and fourth embodiments above, the first manner is a touch or mouse-movement input; and the range determination units 403 and 503 are configured to determine, according to the motion track of the first input, the region circled by the user on the interface as the range.
Preferably, in the third and fourth embodiments above, the range determination units 403 and 503 are configured to: detect the cursor position of a cursor in the interface; and set the range with the cursor position as its approximate center.
Preferably, in the third and fourth embodiments above, the range prompt units 404 and 504 are configured to: surround the range with an explicit border, increase the display brightness within the range, change the display color within the range, and/or enlarge the display size within the range.
Preferably, in the third and fourth embodiments above, the keyword prompt units 406 and 506 are configured to: highlight the keyword, display the keyword blinking, display the keyword in a color different from the surrounding display objects, and/or display the keyword in a font different from the surrounding text.
The technical solutions provided by the present invention enable users to input voice commands conveniently and efficiently, while also reducing the difficulty for the electronic device of recognizing and understanding voice commands and improving the accuracy of voice command recognition.
Those of ordinary skill in the art may realize that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two, and that software modules can be stored in any form of computer storage medium. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of their functions. Whether these functions are executed in hardware or software depends on the specific application and the design constraints of the technical solution. Those skilled in the art can use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present invention.
Those skilled in the art should understand that various modifications, combinations, partial combinations and replacements may be made to the present invention depending on design requirements and other factors, as long as they fall within the scope of the appended claims and their equivalents.

Claims (15)

1. A voice command processing method, applied to an electronic device, wherein an interface containing display objects with corresponding operations is shown on a display screen of the electronic device, the voice command processing method comprising:
receiving a first input performed by a user in a first manner;
determining a range in the interface according to the first input, wherein the range involves one or more display objects;
prompting the user with the range;
processing each display object involved in the range to determine a keyword indicating each display object, wherein the keywords of the display objects are different from one another;
prompting the user with the keywords; and
using the keywords as matching words for voice matching, so that when the user inputs a keyword by voice, the corresponding operation of the display object indicated by that keyword is executed.
2. A voice command processing method, applied to an electronic device, wherein an interface containing display objects with corresponding operations is shown on a display screen of the electronic device, the voice command processing method comprising:
receiving a first input performed by a user in a first manner;
determining a range in the interface according to the first input, wherein the range involves one or more display objects;
prompting the user with the range;
setting, for each display object involved in the range, a keyword indicating that display object, wherein the keywords of the display objects are different from one another;
prompting the user with the keywords;
receiving a second input performed by the user in a second manner according to the prompted keywords, wherein the second manner is voice input;
performing speech recognition on the second input and obtaining a recognition result;
determining, according to the recognition result, a target keyword corresponding to the second input from among the keywords; and
executing the corresponding operation of the display object indicated by the target keyword.
3. The method according to claim 1 or 2, wherein
the first manner is a touch or mouse-movement input;
and determining a range in the interface according to the first input comprises:
determining, according to the motion track of the first input, the region circled by the user on the interface as the range.
4. The method according to claim 1 or 2, wherein determining a range in the interface according to the first input comprises:
detecting the cursor position of a cursor in the interface; and
setting the range with the cursor position as its approximate center.
5. The method according to claim 1 or 2, wherein prompting the user with the range comprises:
surrounding the range with an explicit border, increasing the display brightness within the range, changing the display color within the range, and/or enlarging the display size within the range.
6. The method according to claim 1 or 2, wherein
the keyword is a part of the display object it indicates.
7. The method according to claim 6, wherein
the keyword is a part of the portion of the display object it indicates that lies within the range.
8. The method according to claim 1 or 2, wherein prompting the user with the keywords comprises:
highlighting the keyword, displaying the keyword blinking, displaying the keyword in a color different from the surrounding display objects, and/or displaying the keyword in a font different from the surrounding text.
9. The method according to claim 1 or 2, wherein
the interface is a display interface of a web page; and
the display object with a corresponding operation is a hyperlink text in the web page, and the corresponding operation is opening the hyperlink page corresponding to the link address of the hyperlink text.
10. An electronic device, comprising:
a display unit that shows on a display screen an interface containing display objects with corresponding operations;
a first receiving unit that receives a first input performed by a user in a first manner;
a range determination unit that determines a range in the interface according to the first input, wherein the range involves one or more display objects;
a range prompt unit that prompts the user with the range;
a keyword setting unit that processes each display object involved in the range to determine a keyword indicating each display object, wherein the keywords of the display objects are different from one another;
a keyword prompt unit that prompts the user with the keywords; and
a keyword operation unit that uses the keywords as matching words for voice matching, so that when the user inputs a keyword by voice, the corresponding operation of the display object indicated by that keyword is executed.
11. An electronic device, comprising:
a display unit that shows on a display screen an interface containing display objects with corresponding operations;
a first receiving unit that receives a first input performed by a user in a first manner;
a range determination unit that determines a range in the interface according to the first input, wherein the range involves one or more display objects;
a range prompt unit that prompts the user with the range;
a keyword setting unit that sets, for each display object involved in the range, a keyword indicating that display object, wherein the keywords of the display objects are different from one another;
a keyword prompt unit that prompts the user with the keywords;
a second receiving unit that receives a second input performed by the user in a second manner according to the prompted keywords, wherein the second manner is voice input;
a speech recognition unit that performs speech recognition on the second input and obtains a recognition result;
a target keyword determination unit that determines, according to the recognition result, a target keyword corresponding to the second input from among the keywords; and
a command execution unit that executes the corresponding operation of the display object indicated by the target keyword.
12. The electronic device according to claim 10 or 11, wherein
the first manner is a touch or mouse-movement input;
and the range determination unit is configured to:
determine, according to the motion track of the first input, the region circled by the user on the interface as the range.
13. The electronic device according to claim 10 or 11, wherein the range determination unit is configured to:
detect the cursor position of a cursor in the interface; and
set the range with the cursor position as its approximate center.
14. The electronic device according to claim 10 or 11, wherein the range prompt unit is configured to:
surround the range with an explicit border, increase the display brightness within the range, change the display color within the range, and/or enlarge the display size within the range.
15. The electronic device according to claim 10 or 11, wherein the keyword prompt unit is configured to:
highlight the keyword, display the keyword blinking, display the keyword in a color different from the surrounding display objects, and/or display the keyword in a font different from the surrounding text.
CN201210546374.4A 2012-12-14 2012-12-14 Voice command processing method and electronic equipment Active CN103869948B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210546374.4A CN103869948B (en) 2012-12-14 2012-12-14 Voice command processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210546374.4A CN103869948B (en) 2012-12-14 2012-12-14 Voice command processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN103869948A CN103869948A (en) 2014-06-18
CN103869948B true CN103869948B (en) 2019-01-15

Family

ID=50908575

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210546374.4A Active CN103869948B (en) 2012-12-14 2012-12-14 Voice command processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN103869948B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104184890A (en) * 2014-08-11 2014-12-03 联想(北京)有限公司 Information processing method and electronic device
CN104318923B (en) * 2014-11-06 2020-08-11 广州三星通信技术研究有限公司 Voice processing method and device and terminal
WO2016147342A1 (en) * 2015-03-18 2016-09-22 三菱電機株式会社 Information provision system
CN106168895A (en) * 2016-07-07 2016-11-30 北京行云时空科技有限公司 Sound control method and intelligent terminal for intelligent terminal
CN106383847B (en) * 2016-08-30 2019-03-22 宇龙计算机通信科技(深圳)有限公司 A kind of page content processing method and device
CN107277630B (en) * 2017-07-20 2019-07-09 海信集团有限公司 The display methods and device of speech prompt information
CN108534187A (en) * 2018-03-08 2018-09-14 新智数字科技有限公司 A kind of control method and device of gas-cooker, a kind of gas-cooker
CN110544473B (en) * 2018-05-28 2022-11-08 百度在线网络技术(北京)有限公司 Voice interaction method and device
CN111061452A (en) * 2019-12-17 2020-04-24 北京小米智能科技有限公司 Voice control method and device of user interface

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1647023A (en) * 2002-02-15 2005-07-27 Sap股份公司 Voice-controlled data entry
CN101329673A (en) * 2007-06-22 2008-12-24 刘艳萍 Method and device for processing document information
CN101557651A (en) * 2008-04-08 2009-10-14 Lg电子株式会社 Mobile terminal and menu control method thereof
CN101807398A (en) * 2009-02-16 2010-08-18 宏正自动科技股份有限公司 Voice identification device and operation method thereof
JP2010218238A (en) * 2009-03-17 2010-09-30 Fujitsu Ltd Text editing device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003114698A (en) * 2001-10-03 2003-04-18 Denso Corp Command acceptance device and program
CN101769758A (en) * 2008-12-30 2010-07-07 英华达(上海)科技有限公司 Planning method for search range of interest point
CN101853253A (en) * 2009-03-30 2010-10-06 三星电子株式会社 Equipment and method for managing multimedia contents in mobile terminal
CN102339193A (en) * 2010-07-21 2012-02-01 Tcl集团股份有限公司 Voice control conference speed method and system
CN102609208B (en) * 2012-02-13 2014-01-15 广州市动景计算机科技有限公司 Method and system for word capture on screen of touch screen equipment, and touch screen equipment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1647023A (en) * 2002-02-15 2005-07-27 Sap股份公司 Voice-controlled data entry
CN101329673A (en) * 2007-06-22 2008-12-24 刘艳萍 Method and device for processing document information
CN101557651A (en) * 2008-04-08 2009-10-14 Lg电子株式会社 Mobile terminal and menu control method thereof
CN101807398A (en) * 2009-02-16 2010-08-18 宏正自动科技股份有限公司 Voice identification device and operation method thereof
JP2010218238A (en) * 2009-03-17 2010-09-30 Fujitsu Ltd Text editing device

Also Published As

Publication number Publication date
CN103869948A (en) 2014-06-18

Similar Documents

Publication Publication Date Title
CN103869948B (en) Voice command processing method and electronic equipment
JP5706137B2 (en) Method and computer program for displaying a plurality of posts (groups of data) on a computer screen in real time along a plurality of axes
US20170344224A1 (en) Suggesting emojis to users for insertion into text-based messages
CN103038728B (en) Such as use the multi-mode text input system of touch-screen on a cellular telephone
JP5097198B2 (en) Apparatus and method for inserting image artifacts into a text document
JP2021515322A (en) Translation model training methods, phrase translation methods, equipment, storage media and computer programs
US9043300B2 (en) Input method editor integration
US20170300676A1 (en) Method and device for realizing verification code
JP2010524138A (en) Multiple mode input method editor
JP2014535110A (en) Gesture-based search
CN107885823B (en) Audio information playing method and device, storage medium and electronic equipment
CN104133815B (en) The method and system of input and search
CN104156161A (en) System and method for carrying out clicking, word capturing and searching on information equipment screen
CN113220848A (en) Automatic question answering method and device for man-machine interaction and intelligent equipment
CN115859220A (en) Data processing method, related device and storage medium
CN109656510A (en) The method and terminal of voice input in a kind of webpage
CN109165389A (en) A kind of data processing method, device and the device for data processing
CN111880668A (en) Input display method and device and electronic equipment
WO2024152669A1 (en) Content search method and apparatus, computer device, storage medium, and computer program product
CN112000766B (en) Data processing method, device and medium
US20140098031A1 (en) Device and method for extracting data on a touch screen
CN104424324B (en) The method and device of locating list item in list element
US20140181672A1 (en) Information processing method and electronic apparatus
CN117033587A (en) Man-machine interaction method and device, electronic equipment and medium
WO2015122742A1 (en) Electronic device and method for extracting and using sematic entity in text message of electronic device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant