CN103838487B - Information processing method and electronic device - Google Patents

Information processing method and electronic device


Publication number
CN103838487B
Authority
CN
China
Prior art keywords: speech engine, interaction interface, voice interaction, electronic equipment, viewing area
Prior art date
Legal status
Active
Application number
CN201410125517.3A
Other languages
Chinese (zh)
Other versions
CN103838487A (en)
Inventor
王鸷翔
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Application filed by Lenovo Beijing Ltd
Priority to CN201410125517.3A
Publication of CN103838487A
Application granted
Publication of CN103838487B


Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The present invention discloses an information processing method and an electronic device. The method includes: acquiring the sliding trajectory of an operating body through a sensing unit; triggering a voice startup operation when the sliding trajectory indicates that the operating body is moving away from a first edge of a display area; determining the display position of a voice interaction interface in the display area according to the end point of the sliding trajectory; displaying the voice interaction interface at the display position, so that the operating body can conveniently perform input operations on the voice interaction interface; and invoking the speech engine in response to the voice startup operation. The voice interaction interface occupies a part of the display area; the speech engine is used to process the voice input acquired by the electronic device; and the voice interaction interface is used to indicate the progress of the speech engine's processing of the voice input. With the method or electronic device of the present invention, a user holding the device in one hand can conveniently perform touch operations on the program corresponding to the voice interaction interface without changing the one-handed grip.

Description

Information processing method and electronic device
Technical field
The present invention relates to the field of data processing, and in particular to an information processing method and an electronic device.
Background technology
With the continuous development of touch technology, more and more electronic devices are equipped with touch screens and support touch operations. Users can operate the applications installed on an electronic device through touch operations.
In the prior art, an electronic device that supports touch operations receives a user's operation instructions for an application mainly by displaying the application's icon or control interface at a predetermined position; when the user performs a touch operation at that position, the electronic device responds to the user's instruction.
However, the current development trend of electronic devices is toward ever larger touch screens. When the size of the touch screen exceeds a certain degree, a user holding the device in one hand can no longer reach the entire screen with the gripping hand. If the predetermined position at which an application receives operation instructions lies in a region the gripping hand cannot reach, operating that application becomes very difficult.
Content of the invention
The object of the present invention is to provide an information processing method and an electronic device that can adjust the display position of an application's control interface according to the user's input instruction, making it convenient for the user to perform touch operations on the application.
To achieve the above object, the present invention provides the following solutions:
An information processing method, applied to an electronic device that includes a display unit, a sensing unit, and a speech engine, the display unit including a display area, the method including:
acquiring the sliding trajectory of an operating body through the sensing unit;
triggering a voice startup operation when the sliding trajectory indicates that the operating body is moving away from a first edge of the display area;
determining the display position of a voice interaction interface in the display area according to the end point of the sliding trajectory;
displaying the voice interaction interface at the display position, so that the operating body can conveniently perform input operations on the voice interaction interface;
invoking the speech engine in response to the voice startup operation;
wherein the voice interaction interface occupies a part of the display area; the speech engine is used to process the voice input acquired by the electronic device; and the voice interaction interface is used to indicate the progress of the speech engine's processing of the voice input.
Optionally, before the sliding trajectory of the operating body is acquired through the sensing unit, the method further includes:
displaying prompt information at the first edge of the display area, the prompt information indicating the input gesture that invokes the speech engine;
and/or,
determining whether the operating body is located in the first-edge region of the display area;
when the operating body is located in the first-edge region of the display area, displaying prompt information at the first edge of the display area, the prompt information indicating the input gesture that invokes the speech engine.
Optionally, after the speech engine is invoked, the method further includes:
acquiring a movement input directed at the voice interaction interface through the sensing unit;
controlling the movement of the voice interaction interface within the display area based on the movement input;
determining a control instruction based on a parameter of the movement input, the control instruction being used to control the processing performed by the speech engine.
Optionally, the speech engine has M processing stages, M ≥ 1; each of the M processing stages corresponds to a distinct prompt effect, and each of the M processing stages has corresponding output content;
after the speech engine is invoked, the method further includes:
while the speech engine processes the voice input acquired by the electronic device, displaying in real time, in the voice interaction interface, the prompt effect corresponding to the processing stage the speech engine is currently in, together with the output content corresponding to that stage.
Optionally, the speech engine has M processing stages, M ≥ 1; each of the M processing stages corresponds to N control instructions, and different processing stages correspond to different control instructions;
after the speech engine is invoked, the method further includes:
while the speech engine processes the voice input acquired by the electronic device, displaying in real time, around the voice interaction interface and according to a predetermined relationship, the instruction identifiers of the N control instructions corresponding to the current processing stage, so that the electronic device can determine the corresponding control instruction based on a parameter of the movement input.
An electronic device including a display unit, a sensing unit, and a speech engine, the display unit including a display area, the electronic device further including:
a sliding trajectory acquisition unit, configured to acquire the sliding trajectory of an operating body through the sensing unit;
a voice startup operation triggering unit, configured to trigger a voice startup operation when the sliding trajectory indicates that the operating body is moving away from a first edge of the display area;
a display position determining unit, configured to determine the display position of a voice interaction interface in the display area according to the end point of the sliding trajectory;
a voice interaction interface display unit, configured to display the voice interaction interface at the display position, so that the operating body can conveniently perform input operations on the voice interaction interface;
a speech engine invoking unit, configured to invoke the speech engine in response to the voice startup operation;
wherein the voice interaction interface occupies a part of the display area; the speech engine is used to process the voice input acquired by the electronic device; and the voice interaction interface is used to indicate the progress of the speech engine's processing of the voice input.
Optionally, the electronic device further includes:
a first prompt unit, configured to display prompt information at the first edge of the display area before the sliding trajectory of the operating body is acquired through the sensing unit, the prompt information indicating the input gesture that invokes the speech engine;
and/or,
a second prompt unit, configured to determine, before the sliding trajectory of the operating body is acquired through the sensing unit, whether the operating body is located in the first-edge region of the display area;
and, when the operating body is located in the first-edge region of the display area, to display prompt information at the first edge of the display area, the prompt information indicating the input gesture that invokes the speech engine.
Optionally, the electronic device further includes:
a movement input acquisition unit, configured to acquire, after the speech engine is invoked, a movement input directed at the voice interaction interface through the sensing unit;
a movement control unit, configured to control the movement of the voice interaction interface within the display area based on the movement input;
a control instruction determining unit, configured to determine a control instruction based on a parameter of the movement input, the control instruction being used to control the processing performed by the speech engine.
Optionally, the speech engine has M processing stages, M ≥ 1; each of the M processing stages corresponds to a distinct prompt effect, and each of the M processing stages has corresponding output content;
the electronic device further includes:
a processing stage prompt unit, configured to display in real time, in the voice interaction interface while the speech engine processes the voice input acquired by the electronic device, the prompt effect corresponding to the processing stage the speech engine is currently in, together with the output content corresponding to that stage.
Optionally, the speech engine has M processing stages, M ≥ 1; each of the M processing stages corresponds to N control instructions, and different processing stages correspond to different control instructions;
the electronic device further includes:
an instruction identifier display unit, configured to display in real time, around the voice interaction interface and according to a predetermined relationship while the speech engine processes the voice input acquired by the electronic device, the instruction identifiers of the N control instructions corresponding to the current processing stage, so that the electronic device can determine the corresponding control instruction based on a parameter of the movement input.
According to the specific embodiments provided by the present invention, the invention discloses the following technical effects:
The information processing method and electronic device of the present invention determine the display position of the voice interaction interface in the display area according to the end point of the sliding trajectory, and display the voice interaction interface at that position so that the operating body can conveniently perform input operations on it. When the user operates the electronic device one-handed, the display position of the voice interaction interface is determined by the end point of the sliding trajectory of the user's finger on the touch screen. The user can therefore conveniently operate the voice interaction interface, and easily perform touch operations on the program corresponding to it, while keeping the one-handed grip unchanged.
Description of the drawings
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings needed in the embodiments are briefly described below. Obviously, the drawings described below are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is a flow chart of embodiment 1 of the information processing method of the present invention;
Fig. 2 is a schematic diagram of one specific implementation of the information processing method of the present invention;
Fig. 3 is a schematic diagram of another specific implementation of the information processing method of the present invention;
Fig. 4 is a flow chart of embodiment 2 of the information processing method of the present invention;
Fig. 5 is a flow chart of embodiment 3 of the information processing method of the present invention;
Fig. 6 is a flow chart of embodiment 4 of the information processing method of the present invention;
Fig. 7 is a schematic diagram of one specific implementation of prompting the processing stage of the speech engine;
Fig. 8 is a schematic diagram of another specific implementation of prompting the processing stage of the speech engine;
Fig. 9 is a flow chart of embodiment 5 of the information processing method of the present invention;
Figure 10 is a schematic diagram of one specific implementation of displaying instruction identifiers in the present invention;
Figure 11 is a schematic diagram of another specific implementation of displaying instruction identifiers in the present invention;
Figure 12 is a structural diagram of an embodiment of the electronic device of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
To make the above objects, features, and advantages of the present invention more apparent and understandable, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
The information processing method of the present invention is applied to an electronic device. The electronic device may be a mobile phone, a tablet computer, or similar equipment.
The electronic device includes a display unit, a sensing unit, and a speech engine; the display unit includes a display area. The display unit may include various types of display screens, such as a touch screen. The sensing unit may be a unit with touch-sensing capability that can sense the touch operations of an operating body on the touch screen. The speech engine can receive the voice information input by the user, analyze and recognize it, and derive the instruction corresponding to the voice information. For example, when the user speaks a string of digits and the speech engine recognizes that the digits match the composition rules of a telephone number, it can execute a dial instruction and call the number the digits form; when the user asks about today's weather, the speech engine can query the current weather information by calling a weather-forecast application or directly through a network-side server, and prompt the user with the query result.
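As a rough illustration of the dispatch logic described above, recognized text could be routed to a dial or weather action as in the sketch below. The length thresholds and instruction names are assumptions made for the example, not taken from the patent:

```python
import re

def dispatch(utterance: str) -> str:
    """Route a recognized utterance to a device action (illustrative rules)."""
    stripped = re.sub(r"[\s-]", "", utterance)
    digits = re.sub(r"\D", "", utterance)
    # A pure digit string of plausible length matches the composition
    # rules of a telephone number, so a dial instruction is issued.
    if stripped.isdigit() and 7 <= len(digits) <= 11:
        return f"DIAL {digits}"
    # A weather question is answered via a weather app or network query.
    if "weather" in utterance.lower():
        return "QUERY_WEATHER"
    return "UNRECOGNIZED"
```

For instance, `dispatch("138 0013 8000")` would issue a dial instruction, while `dispatch("what is the weather today")` would trigger a weather query.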
Fig. 1 is a flow chart of embodiment 1 of the information processing method of the present invention. As shown in Fig. 1, the method may include:
Step 101: acquiring the sliding trajectory of the operating body through the sensing unit;
The operating body may be the user's finger, a stylus, or similar equipment. When the operating body slides on the touch screen of the electronic device, its sliding trajectory can be acquired through the sensing unit.
The sliding trajectory may be generated while the hand gripping the electronic device maintains its hold. For example, the user may hold the electronic device in one hand and perform the sliding operation on the touch screen with a finger of the gripping hand, thereby generating the sliding trajectory.
Step 102: triggering the voice startup operation when the sliding trajectory indicates that the operating body is moving away from the first edge of the display area;
Generally, the display area of the display unit is rectangular. A rectangular display area has four edges, and each of them can serve as the first edge. When the user faces the screen of the electronic device, the display area can be divided into a top edge, a bottom edge, and left and right edges.
Taking the bottom edge as the first edge: when the operating body slides on the touch screen from the bottom edge toward the top edge, the trajectory formed indicates that the operating body is moving away from the first edge of the display area. At this point, the voice startup operation can be triggered.
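A minimal sketch of this trigger check follows, assuming a coordinate system with y = 0 at the top of the screen and illustrative pixel thresholds (the patent does not specify any):

```python
def away_from_bottom(trace, screen_height, edge_band=40, min_travel=80):
    """trace: list of (x, y) touch points, y = 0 at the top of the screen.
    Returns True when the slide starts near the bottom edge and travels
    upward (away from that edge) by at least min_travel pixels."""
    if len(trace) < 2:
        return False
    (_, y_start), (_, y_end) = trace[0], trace[-1]
    starts_at_edge = y_start >= screen_height - edge_band
    travels_away = (y_start - y_end) >= min_travel
    return starts_at_edge and travels_away
```

A short upward flick that begins mid-screen, or a slide that barely leaves the edge band, would not trigger the voice startup operation under these assumed thresholds.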
The voice startup operation can be used to start the speech engine.
Step 103: determining the display position of the voice interaction interface in the display area according to the end point of the sliding trajectory;
While the operating body slides on the touch screen of the electronic device, the voice interaction interface corresponding to the speech engine can follow the contact position of the operating body on the touch screen. When the operating body leaves the touch screen, the display position of the voice interaction interface in the display area can be determined according to the end point of the sliding trajectory.
Specifically, the contact position at which the operating body leaves the touch screen can be determined from the end point of the sliding trajectory, and that contact position can be taken as the display position of the voice interaction interface in the display area.
For example, when the end point of the sliding trajectory lies at the bottom of the touch screen's display area, the bottom of the display area can be taken as the display position of the voice interaction interface; when the end point lies in the middle of the display area, the middle of the display area can be taken as the display position. Of course, the end point of the sliding trajectory may lie at any position in the display area; the cases are not enumerated here.
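One plausible way to turn the trajectory's end point into a panel position is sketched below. The centering and clamping behavior is an assumption for the example; the patent only states that the end point determines the position:

```python
def interface_position(trace, panel_height, screen_height):
    """Place the voice interaction panel at the horizontal band where the
    slide ended, clamped so the panel stays fully on screen.
    trace: list of (x, y) touch points; returns the panel's top y coordinate."""
    _, end_y = trace[-1]
    top = end_y - panel_height // 2          # center the panel on the end point
    return max(0, min(top, screen_height - panel_height))
```

With these assumptions, a slide ending near the bottom yields a panel pinned at the bottom of the display area, while a slide ending mid-screen centers the panel there.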
Step 104: displaying the voice interaction interface at the display position, so that the operating body can conveniently perform input operations on the voice interaction interface;
After the display position is determined, the voice interaction interface can be displayed there. Through the voice interaction interface, the user can conveniently perform input operations with the operating body.
Operable regions or buttons with specific functions can be displayed on or around the voice interaction interface. The user can conveniently input operations through these regions or buttons, and the input operations can generate control instructions for the voice interaction process.
Step 105: invoking the speech engine in response to the voice startup operation;
wherein the voice interaction interface occupies a part of the display area; the speech engine is used to process the voice input acquired by the electronic device; and the voice interaction interface is used to indicate the progress of the speech engine's processing of the voice input.
After the speech engine is invoked, the electronic device can acquire the voice information uttered by the user, generate the corresponding operation instruction from that voice information, and execute the instruction.
In summary, in this embodiment, the display position of the voice interaction interface in the display area is determined according to the end point of the sliding trajectory, and the voice interaction interface is displayed at that position so that the operating body can conveniently perform input operations on it. When the user operates the electronic device one-handed, the display position is determined by the end point of the sliding trajectory of the user's finger on the touch screen. The user can therefore operate the voice interaction interface, and easily perform touch operations on the program corresponding to it, while keeping the one-handed grip unchanged.
Fig. 2 is a schematic diagram of one specific implementation of the information processing method of the present invention. As shown in Fig. 2, the end point of the sliding trajectory 20 formed by the operating body on the display area of the electronic device 10 is point A. The display position of the voice interaction interface 30 can then lie in the horizontal band corresponding to point A.
Fig. 3 is a schematic diagram of another specific implementation of the information processing method of the present invention. As shown in Fig. 3, the end point of the sliding trajectory 20 formed by the operating body on the display area of the electronic device 10 is point B. The display position of the voice interaction interface 30 can then lie in the horizontal band corresponding to point B.
Comparing Fig. 2 and Fig. 3, the distance from point A to the first edge (the bottom edge of the screen) is greater than the distance from point B to it. In practice, the sliding trajectory in Fig. 2 might be formed by a user with a longer operating finger, and the trajectory in Fig. 3 by a user with a shorter operating finger. Because the display position of the voice interaction interface is determined by the end point of the sliding trajectory, in both implementations the interface ends up where the user's finger can reach it without changing the one-handed grip, so the user can perform touch operations on the voice interaction interface 30.
Fig. 4 is a flow chart of embodiment 2 of the information processing method of the present invention. As shown in Fig. 4, the method may include:
Step 401: displaying prompt information at the first edge of the display area, the prompt information indicating the input gesture that invokes the speech engine;
In this embodiment, when the user has not yet used the operating body to operate the electronic device, prompt information can be displayed at the first edge of the display area to remind the user of the input gesture that invokes the speech engine. For example, the first edge can be highlighted to indicate that sliding upward from it will invoke the speech engine.
In practice, when the electronic device is in the lock-screen state, prompt information can be displayed at the first edge of the display area in the above way. The locked device can then directly accept the user's input that invokes the speech engine; that is, the user can input the invoking operation directly on the lock screen without unlocking it. The invoking operation can be the sliding trajectory generated in this embodiment.
It should be noted that, in practice, it is also possible to first determine whether the operating body is located in the first-edge region of the display area; the first-edge region can include the first edge and the area near it.
When the operating body is located in the first-edge region of the display area, prompt information is displayed at the first edge of the display area, the prompt information indicating the input gesture that invokes the speech engine.
Displaying the prompt information only when the operating body is in the first-edge region avoids showing it continuously at the first edge, which could disturb the user. In particular, when the screen of the electronic device is unlocked and the user is browsing a web page or watching a video, continuously displaying prompt information at the first edge would be disruptive. Showing the prompt only when the user's finger touches the first-edge region prevents this interference.
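The two prompt conditions above can be sketched as a single predicate. The lock-screen rule (always show the hint) and the edge-band size are assumptions for the example:

```python
def should_show_prompt(touch, screen_height, locked, edge_band=40):
    """Decide whether to show the 'slide up to invoke voice' hint.
    touch: current (x, y) contact point, or None when nothing touches
    the screen. On the lock screen the hint is always visible; when
    unlocked it appears only while a finger rests in the edge band."""
    if locked:
        return True
    if touch is None:
        return False
    _, y = touch
    return y >= screen_height - edge_band
```

Gating the unlocked case on an active touch in the edge band is what keeps the hint from overlapping a web page or video the user is watching.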
Step 402: acquiring the sliding trajectory of the operating body through the sensing unit;
Step 403: triggering the voice startup operation when the sliding trajectory indicates that the operating body is moving away from the first edge of the display area;
Step 404: determining the display position of the voice interaction interface in the display area according to the end point of the sliding trajectory;
Step 405: displaying the voice interaction interface at the display position, so that the operating body can conveniently perform input operations on the voice interaction interface;
Step 406: invoking the speech engine in response to the voice startup operation;
wherein the voice interaction interface occupies a part of the display area; the speech engine is used to process the voice input acquired by the electronic device; and the voice interaction interface is used to indicate the progress of the speech engine's processing of the voice input.
Fig. 5 is a flow chart of embodiment 3 of the information processing method of the present invention. As shown in Fig. 5, the method may include:
Step 501: acquiring the sliding trajectory of the operating body through the sensing unit;
Step 502: triggering the voice startup operation when the sliding trajectory indicates that the operating body is moving away from the first edge of the display area;
Step 503: determining the display position of the voice interaction interface in the display area according to the end point of the sliding trajectory;
Step 504: displaying the voice interaction interface at the display position, so that the operating body can conveniently perform input operations on the voice interaction interface;
Step 505: invoking the speech engine in response to the voice startup operation;
wherein the voice interaction interface occupies a part of the display area; the speech engine is used to process the voice input acquired by the electronic device; and the voice interaction interface is used to indicate the progress of the speech engine's processing of the voice input.
Step 506: acquiring a movement input directed at the voice interaction interface through the sensing unit;
After the voice interaction interface is displayed and the speech engine is invoked, the movement input directed at the voice interaction interface can be acquired through the sensing unit. The movement input can be one or more of the convenient input operations.
Step 507: controlling the movement of the voice interaction interface within the display area based on the movement input;
The movement of the operating body can move the voice interaction interface with it. For example, when the user presses and holds the voice interaction interface and then moves toward the top of the screen, the voice interaction interface can be dragged upward; when the user presses and holds it and moves toward the bottom of the screen, it can be dragged downward.
Step 508: determining a control instruction based on a parameter of the movement input; the control instruction is used to control the processing performed by the speech engine.
The parameter can be one representing the direction of the movement input. When the parameter indicates that the direction of the movement input is a first direction (for example, upward), a first control instruction can be determined; when it indicates a second direction (for example, downward), a second control instruction can be determined.
For example, while the user is inputting a telephone number by voice, after the speech engine finishes recognizing the voice signal and the voice interaction interface displays the digits representing the telephone number: if an upward movement input is received from the user, the corresponding control instruction can be to dial the current number immediately; if a downward movement input is received, the corresponding control instruction can be to receive the voice input signal again and repeat the recognition.
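The direction-to-instruction mapping in this example can be sketched as below. The y axis grows downward, and the instruction names and drag threshold are assumptions made for the sketch, not defined by the patent:

```python
def control_instruction(start, end, threshold=60):
    """Map a drag gesture on the voice panel to a control instruction.
    start, end: (x, y) contact points of the drag, with y growing downward."""
    dy = end[1] - start[1]
    if dy <= -threshold:       # dragged upward: confirm and dial
        return "DIAL_NOW"
    if dy >= threshold:        # dragged downward: retry recognition
        return "LISTEN_AGAIN"
    return "NO_OP"
```

A small threshold keeps accidental jitters from being read as commands while still letting a deliberate one-finger flick confirm or retry.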
In this embodiment, the movement input directed at the voice interaction interface is acquired through the sensing unit; the movement of the voice interaction interface within the display area is controlled based on the movement input; and a control instruction is determined based on a parameter of the movement input, the control instruction being used to control the processing performed by the speech engine. The user can thus control the speech engine's processing through movement inputs directed at the voice interaction interface, which makes input operations on the interface convenient. When holding the electronic device in one hand, the user can input operations on the voice interaction interface without changing the one-handed grip.
Fig. 6 is a flow chart of embodiment 4 of the information processing method of the present invention. As shown in Fig. 6, the method may include:
Step 601: Obtain the sliding trace of an operating body through the sensing unit;
Step 602: When the sliding trace shows that the operating body is moving away from a first edge of the display area, trigger a voice activation operation;
Step 603: Determine the display position of the voice interaction interface in the display area according to the end point of the sliding trace;
Step 604: Display the voice interaction interface based on the display position, so that the operating body can conveniently perform input operations on the voice interaction interface;
Step 605: Respond to the voice activation operation and invoke the speech engine;
Wherein, the voice interaction interface occupies a part of the display area; the speech engine is used to process the voice input obtained by the electronic device; and the voice interaction interface is used to indicate the processing procedure in which the speech engine processes the voice input.
Step 606: While the speech engine is processing the voice input obtained by the electronic device, display in real time, in the voice interaction interface, the prompting effect corresponding to the processing stage the speech engine is in, and display the output content corresponding to that processing stage.
Wherein, the speech engine has M processing stages, M >= 1; each of the M processing stages corresponds to one prompting effect, each prompting effect is different, and each of the M processing stages has corresponding output content.
For example, the speech engine may have four processing stages: prompting, collection, recognition, and output. Different processing stages can be indicated by different background colors in the voice interaction interface. For example, the prompting stage may use a red background color, the collection stage an orange background color, the recognition stage a yellow background color, and the output stage a green background color.
The output content may be textual information. It may be a prompt about the current processing stage, or a prompt generated in response to the user's operation at a certain moment. For example, in the prompting stage, the voice interaction interface may display the text "You can say: call Zhang San"; in the collection stage, "Listening"; in the recognition stage, "Recognizing"; and in the output stage, "Calling...", and so on.
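The stage-to-prompt correspondence in this example can be expressed as a simple lookup. The stage names, colors, and texts follow the examples in the text; the data structure and function names are assumptions for illustration only.

```python
# Hypothetical sketch of the stage-to-prompting-effect mapping described
# above. Colors and texts follow the example; the structure is assumed.

STAGE_PROMPTS = {
    "prompting":   {"background": "red",    "text": "You can say: call Zhang San"},
    "collection":  {"background": "orange", "text": "Listening"},
    "recognition": {"background": "yellow", "text": "Recognizing"},
    "output":      {"background": "green",  "text": "Calling..."},
}

def render_stage(stage):
    """Return the prompting effect (background color) and the output
    content the voice interaction interface should show for a stage."""
    prompt = STAGE_PROMPTS[stage]
    return prompt["background"], prompt["text"]

print(render_stage("collection"))  # ('orange', 'Listening')
```

In a real device, the speech engine would report its current stage and the interface would re-render on each stage transition.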
It should also be noted that different processing stages can be indicated by changes in the appearance of the voice interaction interface. For example, a wave may be displayed in the voice interaction interface, and the top of the wave may gradually rise as the processing stages advance.
Fig. 7 is a schematic diagram of one specific implementation of prompting the processing stage of the speech engine. As shown in Fig. 7, when the speech engine reaches the collection stage, "Listening" and a wave 31 may be displayed in the voice interaction interface 30. In practice, the wave 31 may have a dynamic effect. The top of the wave 31 is at a first height.
Fig. 8 is a schematic diagram of another specific implementation of prompting the processing stage of the speech engine. As shown in Fig. 8, when the speech engine reaches the recognition stage, "Recognizing" and the wave 31 may be displayed in the voice interaction interface 30, with the top of the wave 31 at a second height. Comparing Figs. 7 and 8, the second height is greater than the first height. From the height of the top of the wave 31, the user can see at a glance which processing stage the speech engine is currently in.
Fig. 9 is a flow chart of embodiment 5 of the information processing method of the present invention. As shown in Fig. 9, the method may include:
Step 901: Obtain the sliding trace of an operating body through the sensing unit;
Step 902: When the sliding trace shows that the operating body is moving away from a first edge of the display area, trigger a voice activation operation;
Step 903: Determine the display position of the voice interaction interface in the display area according to the end point of the sliding trace;
Step 904: Display the voice interaction interface based on the display position, so that the operating body can conveniently perform input operations on the voice interaction interface;
Step 905: Respond to the voice activation operation and invoke the speech engine;
Wherein, the voice interaction interface occupies a part of the display area; the speech engine is used to process the voice input obtained by the electronic device; and the voice interaction interface is used to indicate the processing procedure in which the speech engine processes the voice input.
Step 906: While the speech engine is processing the voice input obtained by the electronic device, display in real time, around the voice interaction interface and according to a predetermined relationship, the command identifications of the N control instructions corresponding to the processing stage the speech engine is in, so that the electronic device can determine the corresponding control instruction based on a parameter of the movement input.
Wherein, the speech engine has M processing stages, M >= 1; each of the M processing stages corresponds to N control instructions; different processing stages correspond to different control instructions.
In different processing stages, different control instructions can make the speech engine perform different operations. For example, in the recognition stage, a retell instruction can make the speech engine terminate the current recognition process and return to the voice collection stage. In that case, the command identification corresponding to the retell instruction can be displayed above or below the voice interaction interface; it may simply be the word "Retell". When the user taps the voice interaction interface and drags it toward the "Retell" identification, the electronic device can determine that the control instruction corresponding to the movement input is the retell instruction.
When the processing stage the speech engine is in corresponds to multiple control instructions, the command identifications of the multiple control instructions can be displayed around the voice interaction interface according to the predetermined relationship. For example, when there are two control instructions, the two corresponding command identifications can be displayed above and below the voice interaction interface; when the voice interaction interface is a square block in the display area, the two command identifications can be displayed above and below it, or to its left and right. When there are three control instructions, the three command identifications can be displayed near three sides of the voice interaction interface; when there are four, the four command identifications can be displayed near its four sides.
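The placement rule above can be sketched as assigning each command identification to a distinct side of the interface. The particular ordering of sides is an assumption; the text only requires that the identifications occupy different sides according to the predetermined relationship.

```python
# Hypothetical sketch: laying out up to four command identifications
# around the voice interaction interface. The side ordering is assumed.

SIDES = ["below", "above", "left", "right"]

def layout_identifications(labels):
    """Assign each command identification to one side of the interface.

    Supports up to four control instructions, matching the examples in
    the text (two -> above/below, three -> three sides, four -> four).
    """
    if len(labels) > len(SIDES):
        raise ValueError("at most four command identifications supported")
    return dict(zip(SIDES, labels))

print(layout_identifications(["Retell", "Dial immediately"]))
# {'below': 'Retell', 'above': 'Dial immediately'}
```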
In practice, the predetermined relationship may be a directional relationship between the command identification and the voice interaction interface; that is, it can be predefined that a command identification lies in a certain direction relative to the voice interaction interface. The predetermined relationship may also be a specific display region for the command identification; that is, a specific display region around the voice interaction interface can be predefined for each command identification.
After the command identifications of the N control instructions corresponding to the processing stage the speech engine is in are displayed around the voice interaction interface according to the predetermined relationship, the corresponding control instruction can be determined based on the parameter of the movement input. For example, the corresponding control instruction can be determined from the moving direction of the touch track formed by the operating body on the touch screen, or from whether that touch track enters a specific region.
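The region-based variant can be sketched as a hit test on the end point of the touch track. The region coordinates, names, and the decision to test only the final point are all illustrative assumptions.

```python
# Hypothetical sketch: resolving a control instruction from whether the
# touch track ends inside the display region of a command identification.
# Coordinates and region names are illustrative assumptions.

def point_in_rect(point, rect):
    """rect = (left, top, right, bottom) in screen coordinates."""
    x, y = point
    left, top, right, bottom = rect
    return left <= x <= right and top <= y <= bottom

def resolve_instruction(track, regions):
    """Return the instruction whose region the track ends in, else None.

    `track` is a list of (x, y) touch points; `regions` maps instruction
    names to the rectangles where their identifications are displayed.
    """
    end = track[-1]
    for instruction, rect in regions.items():
        if point_in_rect(end, rect):
            return instruction
    return None

regions = {"retell": (0, 700, 480, 800), "dial": (0, 0, 480, 100)}
print(resolve_instruction([(240, 400), (240, 750)], regions))  # retell
```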
Fig. 10 is a schematic diagram of one specific implementation of displaying command identifications in the present invention. As shown in Fig. 10, the speech engine is in the recognition stage; a dotted arrow and the "Retell" identification are displayed below the voice interaction interface 30, prompting the user that a retell operation can be input by moving the voice interaction interface 30 toward the bottom of the screen.
Fig. 11 is a schematic diagram of another specific implementation of displaying command identifications in the present invention. As shown in Fig. 11, the speech engine is in the output stage; a downward dotted arrow and the "Retell" identification are displayed below the voice interaction interface 30, prompting the user that a retell operation can be input by moving the interface toward the bottom of the screen; an upward dotted arrow and the "Dial immediately" identification are displayed above the voice interaction interface 30, prompting the user that the telephone number currently recognized by the speech engine can be dialed immediately by moving the interface toward the top of the screen.
In this embodiment, while the speech engine is processing the voice input obtained by the electronic device, the command identifications of the N control instructions corresponding to the current processing stage are displayed in real time around the voice interaction interface according to the predetermined relationship, so that the electronic device can determine the corresponding control instruction based on the parameter of the movement input. The user can thus apply different control instructions to the speech engine in its different processing stages, improving the user's control over the information processing procedure.
The invention also discloses an electronic device. The electronic device may be a mobile phone, a tablet computer, or the like.
The electronic device includes a display unit, a sensing unit and a speech engine; the display unit includes a display area. The display unit may include various types of display screens, such as a touch screen. The sensing unit may be a unit with touch sensing capability, able to sense touch operations of an operating body on the touch screen. The speech engine can receive voice information input by the user, analyze and recognize the voice information, and obtain the instruction corresponding to it. For example, when the user says a string of digits and the speech engine recognizes that the digits match the composition rules of a telephone number, it can execute a dial instruction and call the telephone number composed of those digits; when the user asks about today's weather, the speech engine can query the current weather information by calling a weather-forecast application or directly through a network-side server, and present the query result to the user.
Fig. 12 is a structural diagram of an embodiment of the electronic device of the present invention. As shown in Fig. 12, the electronic device further includes:
A sliding trace acquiring unit 1201, configured to obtain the sliding trace of an operating body through the sensing unit;
The operating body may be a user's finger, a stylus, or similar. When the operating body slides on the touch screen of the electronic device, its sliding trace can be obtained through the sensing unit.
The sliding trace may be generated by the operating body while the grip on the electronic device is maintained. For example, the user may hold the electronic device in one hand and perform a slide operation on the touch screen with a finger of that same hand, thereby generating the sliding trace.
A voice activation operation trigger unit 1202, configured to trigger a voice activation operation when the sliding trace shows that the operating body is moving away from a first edge of the display area;
Generally, the display area of the display unit is rectangular. A rectangular display area has four sides, each of which can serve as an edge. When the user faces the screen of the electronic device, the display area can be divided into a top edge, a bottom edge, and left and right edges.
When the bottom edge serves as the first edge, and the operating body slides on the touch screen from the bottom edge toward the top edge, the sliding trace formed indicates that the operating body is moving away from the first edge of the display area. At this point, the voice activation operation can be triggered.
The voice activation operation can be used to start the speech engine.
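The trigger condition above can be sketched as a check on the sliding trace: it starts near the bottom edge and moves away from it. The width of the edge band and the minimum travel distance are assumptions; the patent does not specify thresholds.

```python
# Hypothetical sketch: deciding whether a sliding trace "moves away from"
# the bottom (first) edge of the display area and should trigger the
# voice activation operation. Thresholds are illustrative assumptions.

def should_trigger_activation(trace, screen_height, edge_band=40, min_travel=80):
    """`trace` is a list of (x, y) touch points; y grows downward.

    The trace must start inside the bottom edge band and travel upward,
    away from that edge, by at least `min_travel` pixels.
    """
    if len(trace) < 2:
        return False
    start_y = trace[0][1]
    end_y = trace[-1][1]
    starts_at_edge = start_y >= screen_height - edge_band
    moves_away = (start_y - end_y) >= min_travel
    return starts_at_edge and moves_away

print(should_trigger_activation([(240, 790), (240, 600)], screen_height=800))  # True
```

The same check works for any other side chosen as the first edge, with the coordinate comparisons rotated accordingly.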
A display position determining unit 1203, configured to determine the display position of the voice interaction interface in the display area according to the end point of the sliding trace;
While the operating body slides on the touch screen of the electronic device, the voice interaction interface corresponding to the speech engine can follow the contact position between the operating body and the touch screen. When the operating body leaves the touch screen, the display position of the voice interaction interface in the display area can be determined according to the end point of the sliding trace.
Specifically, the contact position at which the operating body leaves the touch screen can be determined from the end point of the sliding trace, and that contact position is taken as the display position of the voice interaction interface in the display area.
For example, when the end point of the sliding trace is at the bottom of the display area of the touch screen, the bottom of the display area can be taken as the display position of the voice interaction interface; when the end point is in the middle of the display area, the middle of the display area can be taken as the display position. Of course, the end point of the sliding trace may be located at any position in the display area; the possibilities will not be enumerated here.
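Placing the interface at the end point can be sketched as centering it on that point and clamping it inside the display area. The interface size and the centering choice are assumptions; the patent only requires that the display position follow the end point.

```python
# Hypothetical sketch: deriving the voice interaction interface's display
# position from the end point of the sliding trace. The interface size
# and the clamping behavior are illustrative assumptions.

def display_position(end_point, screen_w, screen_h, iface_w=200, iface_h=120):
    """Return the top-left corner of the interface, centered on the
    trace end point and clamped so it stays inside the display area."""
    x = end_point[0] - iface_w // 2
    y = end_point[1] - iface_h // 2
    x = max(0, min(x, screen_w - iface_w))
    y = max(0, min(y, screen_h - iface_h))
    return (x, y)

print(display_position((240, 780), 480, 800))  # (140, 680)
```

Clamping ensures that an end point near a screen edge still yields a fully visible interface, which is what lets the user reach it without changing the one-handed grip.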
A voice interaction interface display unit 1204, configured to display the voice interaction interface based on the display position, so that the operating body can conveniently perform input operations on the voice interaction interface;
After the display position is determined, the voice interaction interface can be displayed at that position. Through the voice interaction interface, the user can conveniently perform input operations with the operating body.
Functional operation regions or buttons can be displayed on or around the voice interaction interface. Through these operation regions or buttons, the user can conveniently perform input operations; the input operations can generate control instructions for the voice interaction process.
A speech engine invoking unit 1205, configured to respond to the voice activation operation and invoke the speech engine;
Wherein, the voice interaction interface occupies a part of the display area; the speech engine is used to process the voice input obtained by the electronic device; and the voice interaction interface is used to indicate the processing procedure in which the speech engine processes the voice input.
After the speech engine is invoked, the electronic device can obtain the voice information uttered by the user, generate the corresponding operation instruction according to the voice information, and execute it.
In summary, in this embodiment, the display position of the voice interaction interface in the display area is determined according to the end point of the sliding trace, and the voice interaction interface is displayed based on that position so that the operating body can conveniently perform input operations on it. When the user operates the electronic device with one hand, the display position of the voice interaction interface can be determined from the end point of the sliding trace of the user's finger on the touch screen. This ensures that, without changing the one-handed grip, the user can still conveniently input through the voice interaction interface and easily perform touch operations on the program corresponding to it.
In practice, the electronic device may further include:
A first prompting unit, configured to display prompt information at the first edge of the display area before the sliding trace of the operating body is obtained through the sensing unit; the prompt information is used to indicate the input mode for invoking the speech engine;
And/or,
A second prompting unit, configured to judge, before the sliding trace of the operating body is obtained through the sensing unit, whether the operating body is located in the first edge region of the display area;
When the operating body is located in the first edge region of the display area, prompt information is displayed at the first edge of the display area; the prompt information is used to indicate the input mode for invoking the speech engine.
In practice, the electronic device may further include:
A movement input acquiring unit, configured to obtain, after the speech engine is invoked, a movement input directed at the voice interaction interface through the sensing unit;
A movement control unit, configured to control the movement of the voice interaction interface in the display area based on the movement input;
A control instruction determining unit, configured to determine a control instruction based on a parameter of the movement input; the control instruction is used to control the processing procedure of the speech engine.
In practice, the speech engine has M processing stages, M >= 1; each of the M processing stages corresponds to one prompting effect, each prompting effect is different, and each of the M processing stages has corresponding output content;
The electronic device may further include:
A processing stage prompting unit, configured to display in real time, after the speech engine is invoked and while the speech engine is processing the voice input obtained by the electronic device, the prompting effect corresponding to the processing stage the speech engine is in within the voice interaction interface, and to display the output content corresponding to that processing stage.
In practice, the speech engine has M processing stages, M >= 1; each of the M processing stages corresponds to N control instructions; different processing stages correspond to different control instructions;
The electronic device may further include:
A command identification display unit, configured to display in real time, after the speech engine is invoked and while the speech engine is processing the voice input obtained by the electronic device, the command identifications of the N control instructions corresponding to the processing stage the speech engine is in around the voice interaction interface according to the predetermined relationship, so that the electronic device can determine the corresponding control instruction based on the parameter of the movement input.
Finally, it should also be noted that relational terms such as first and second are used herein only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. In the absence of further limitations, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes that element.
From the above description of the embodiments, those skilled in the art can clearly understand that the present invention can be implemented by software plus the necessary hardware platform, and of course can also be implemented entirely in hardware, but in many cases the former is the preferable implementation. Based on this understanding, the technical solution of the present invention, in whole or in the part that contributes over the background art, can be embodied in the form of a software product. The computer software product can be stored in a storage medium, such as ROM/RAM, a magnetic disk, an optical disc, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform the methods described in each embodiment of the present invention or in some parts of the embodiments.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the identical or similar parts between embodiments, reference can be made to one another. As for the electronic device disclosed in the embodiments, since it corresponds to the method disclosed in the embodiments, its description is relatively simple, and the relevant parts can refer to the description of the method.
Specific examples are used herein to explain the principles and implementations of the present invention; the above embodiments are only intended to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in specific implementations and application scope in accordance with the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. An information processing method, applied to an electronic device, the electronic device including a display unit, a sensing unit and a speech engine, the display unit including a display area, characterized in that the method includes:
obtaining the sliding trace of an operating body through the sensing unit;
when the sliding trace shows that the operating body is moving away from a first edge of the display area, triggering a voice activation operation;
determining the display position of a voice interaction interface in the display area according to the end point of the sliding trace;
displaying the voice interaction interface based on the display position, so that the operating body can conveniently perform input operations on the voice interaction interface;
responding to the voice activation operation and invoking the speech engine;
wherein the voice interaction interface occupies a part of the display area; the speech engine is used to process the voice input obtained by the electronic device; and the voice interaction interface is used to indicate the processing procedure in which the speech engine processes the voice input.
2. The method according to claim 1, characterized in that, before the sliding trace of the operating body is obtained through the sensing unit, the method further includes:
displaying prompt information at the first edge of the display area, the prompt information being used to indicate the input mode for invoking the speech engine;
and/or,
judging whether the operating body is located in the first edge region of the display area;
when the operating body is located in the first edge region of the display area, displaying prompt information at the first edge of the display area, the prompt information being used to indicate the input mode for invoking the speech engine.
3. The method according to claim 1, characterized in that, after the speech engine is invoked, the method further includes:
obtaining a movement input directed at the voice interaction interface through the sensing unit;
controlling the movement of the voice interaction interface in the display area based on the movement input;
determining a control instruction based on a parameter of the movement input; the control instruction is used to control the processing procedure of the speech engine.
4. The method according to claim 1, characterized in that the speech engine has M processing stages, M >= 1; each of the M processing stages corresponds to one prompting effect, each prompting effect is different, and each of the M processing stages has corresponding output content;
after the speech engine is invoked, the method further includes:
while the speech engine is processing the voice input obtained by the electronic device, displaying in real time, in the voice interaction interface, the prompting effect corresponding to the processing stage the speech engine is in, and displaying the output content corresponding to that processing stage.
5. The method according to claim 3, characterized in that the speech engine has M processing stages, M >= 1; each of the M processing stages corresponds to N control instructions; different processing stages correspond to different control instructions;
after the speech engine is invoked, the method further includes:
while the speech engine is processing the voice input obtained by the electronic device, displaying in real time, around the voice interaction interface and according to a predetermined relationship, the command identifications of the N control instructions corresponding to the processing stage the speech engine is in, so that the electronic device determines the corresponding control instruction based on the parameter of the movement input.
6. An electronic device, the electronic device including a display unit, a sensing unit and a speech engine, the display unit including a display area, characterized in that the electronic device further includes:
a sliding trace acquiring unit, configured to obtain the sliding trace of an operating body through the sensing unit;
a voice activation operation trigger unit, configured to trigger a voice activation operation when the sliding trace shows that the operating body is moving away from a first edge of the display area;
a display position determining unit, configured to determine the display position of a voice interaction interface in the display area according to the end point of the sliding trace;
a voice interaction interface display unit, configured to display the voice interaction interface based on the display position, so that the operating body can conveniently perform input operations on the voice interaction interface;
a speech engine invoking unit, configured to respond to the voice activation operation and invoke the speech engine;
wherein the voice interaction interface occupies a part of the display area; the speech engine is used to process the voice input obtained by the electronic device; and the voice interaction interface is used to indicate the processing procedure in which the speech engine processes the voice input.
7. The electronic device according to claim 6, characterized in that the electronic device further includes:
a first prompting unit, configured to display prompt information at the first edge of the display area before the sliding trace of the operating body is obtained through the sensing unit, the prompt information being used to indicate the input mode for invoking the speech engine;
and/or,
a second prompting unit, configured to judge, before the sliding trace of the operating body is obtained through the sensing unit, whether the operating body is located in the first edge region of the display area;
when the operating body is located in the first edge region of the display area, prompt information is displayed at the first edge of the display area, the prompt information being used to indicate the input mode for invoking the speech engine.
8. The electronic device according to claim 6, characterized in that the electronic device further includes:
a movement input acquiring unit, configured to obtain, after the speech engine is invoked, a movement input directed at the voice interaction interface through the sensing unit;
a movement control unit, configured to control the movement of the voice interaction interface in the display area based on the movement input;
a control instruction determining unit, configured to determine a control instruction based on a parameter of the movement input; the control instruction is used to control the processing procedure of the speech engine.
9. The electronic device according to claim 6, characterized in that the speech engine has M processing stages, M >= 1; each of the M processing stages corresponds to one prompting effect, each prompting effect is different, and each of the M processing stages has corresponding output content;
the electronic device further includes:
a processing stage prompting unit, configured to display in real time, after the speech engine is invoked and while the speech engine is processing the voice input obtained by the electronic device, the prompting effect corresponding to the processing stage the speech engine is in within the voice interaction interface, and to display the output content corresponding to that processing stage.
10. The electronic device according to claim 8, characterized in that the speech engine has M processing stages, M >= 1; each of the M processing stages corresponds to N control instructions; different processing stages correspond to different control instructions;
the electronic device further includes:
a command identification display unit, configured to display in real time, after the speech engine is invoked and while the speech engine is processing the voice input obtained by the electronic device, the command identifications of the N control instructions corresponding to the processing stage the speech engine is in around the voice interaction interface according to the predetermined relationship, so that the electronic device determines the corresponding control instruction based on the parameter of the movement input.
CN201410125517.3A 2014-03-28 2014-03-28 A kind of information processing method and electronic equipment Active CN103838487B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410125517.3A CN103838487B (en) 2014-03-28 2014-03-28 A kind of information processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410125517.3A CN103838487B (en) 2014-03-28 2014-03-28 A kind of information processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN103838487A CN103838487A (en) 2014-06-04
CN103838487B true CN103838487B (en) 2017-03-29

Family

ID=50802052

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410125517.3A Active CN103838487B (en) 2014-03-28 2014-03-28 A kind of information processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN103838487B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105094801B (en) 2015-06-12 2019-12-24 阿里巴巴集团控股有限公司 Application function activation method and device
CN110231863B (en) * 2018-03-06 2023-03-24 斑马智行网络(香港)有限公司 Voice interaction method and vehicle-mounted equipment
CN108491246B (en) * 2018-03-30 2021-06-15 联想(北京)有限公司 Voice processing method and electronic equipment
CN113495620A (en) * 2020-04-03 2021-10-12 百度在线网络技术(北京)有限公司 Interactive mode switching method and device, electronic equipment and storage medium
CN113495621A (en) * 2020-04-03 2021-10-12 百度在线网络技术(北京)有限公司 Interactive mode switching method and device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101477432A (en) * 2008-01-04 2009-07-08 宏达国际电子股份有限公司 Electronic device, its application program opening method and function image display method
CN101866257A (en) * 2009-04-20 2010-10-20 鸿富锦精密工业(深圳)有限公司 Touch hand-held equipment and option display method thereof
US7962647B2 (en) * 2008-11-24 2011-06-14 Vmware, Inc. Application delivery control module for virtual network switch
CN102646016A (en) * 2012-02-13 2012-08-22 北京百纳信息技术有限公司 User terminal for displaying gesture-speech interaction unified interface and display method thereof
CN102750087A (en) * 2012-05-31 2012-10-24 华为终端有限公司 Method, device and terminal device for controlling speech recognition function


Also Published As

Publication number Publication date
CN103838487A (en) 2014-06-04

Similar Documents

Publication Publication Date Title
JP6876749B2 (en) Continuity
KR102222143B1 (en) Handwriting keyboard for screens
CN103838487B (en) A kind of information processing method and electronic equipment
KR102187943B1 (en) Devices and methods for manipulating user interfaces with a stylus
US9367202B2 (en) Information processing method and electronic device
KR101624791B1 (en) Device, method, and graphical user interface for configuring restricted interaction with a user interface
CN102968206B (en) Input unit and method for the terminal device with touch modules
CN114127676A (en) Handwriting input on electronic devices
CN105117056B (en) A kind of method and apparatus of operation touch-screen
EP3862898A1 (en) Authenticated device used to unlock another device
CN104076916B (en) Information processing method and electronic device
CN103530047B (en) Touch screen equipment event triggering method and device
CN103631516B (en) The method of touch-sensitive device and the manipulation based on touch to content
CN104020948B (en) A kind of method and device that cursor position is determined in touch-screen
JP7076000B2 (en) Faster scrolling and selection
CN107967055A (en) A kind of man-machine interaction method, terminal and computer-readable medium
Pielot et al. PocketMenu: non-visual menus for touch screen devices
KR101901735B1 (en) Method and system for providing user interface, and non-transitory computer-readable recording medium
CN104461348B (en) Information choosing method and device
CN107908349A (en) Display interface amplification method, terminal and computer-readable recording medium
CN106959746A (en) The processing method and processing device of speech data
CN109144377A (en) Operating method, smartwatch and the computer readable storage medium of smartwatch
CN105867831B (en) The operating method and system of a kind of touch-screen
CN105867808A (en) Terminal unlocking method and terminal
CN107015735A (en) The control method and touch control device on a kind of browser operation column

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant