CN111679746A - Input method and device and electronic equipment - Google Patents

Input method and device and electronic equipment

Info

Publication number
CN111679746A
Authority
CN
China
Prior art keywords
input
target
candidate item
user
input method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010444416.8A
Other languages
Chinese (zh)
Inventor
裘雅婷
韩秦
韩连华
侯毅
郑威
邬宇茜
卓兴中
邵亚飞
吴芳昱
左士刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sogou Technology Development Co Ltd
Original Assignee
Beijing Sogou Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sogou Technology Development Co Ltd filed Critical Beijing Sogou Technology Development Co Ltd
Priority to CN202010444416.8A priority Critical patent/CN111679746A/en
Publication of CN111679746A publication Critical patent/CN111679746A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0236Character input methods using selection techniques to select from displayed items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the invention provide an input method, an input device, and an electronic device. The method includes: determining a currently selected target object in an input method panel, where the target object is processed according to a corresponding user-defined function operation when a monitored touch operation is a designated gesture operation; and playing voice information of the target object. Because the target object is processed according to the user-defined function operation corresponding to the touch operation, user input becomes more flexible and efficient while the input method is in use, which improves the input experience; in addition, the voice information corresponding to the target object prompts the user, further improving the input experience.

Description

Input method and device and electronic equipment
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to an input method, an input device, and an electronic device.
Background
With the development of computer technology, electronic devices such as mobile phones and tablet computers have become increasingly popular, bringing great convenience to people's daily life, study, and work. These devices typically have an input method application (input method for short) installed so that users can enter information with it.
As user demands grow, the functions of an input method need to be enriched to better provide the required input services for different users. At present, however, input is still not flexible or efficient enough during use of an input method, and the user's input experience suffers.
Disclosure of Invention
Embodiments of the invention provide an input method that makes user input more flexible and efficient while the input method is in use, thereby improving the user's input experience.
Correspondingly, embodiments of the invention also provide an input device and an electronic device to ensure the implementation and application of the method.
To solve the above problem, an embodiment of the present invention discloses an input method that includes: determining a currently selected target object in an input method panel, where the target object is processed according to a corresponding user-defined function operation when a monitored touch operation is a designated gesture operation; and playing the voice information of the target object.
Optionally, the determining a currently selected target object in the input method panel includes: starting an input method panel, and determining a currently selected target control in the input method panel; and taking the target control as the currently selected target object.
Optionally, after the target control is taken as the currently selected target object, the method further includes: when a touch operation on the target control is detected, popping up a setting popup window corresponding to the target control; when a touch operation on a setting item in the setting popup window is detected, taking the setting item as a target setting item and processing the sub-controls contained in the target setting item; and when a touch operation outside the setting popup window is detected, closing the setting popup window.
Optionally, the determining a currently selected target object in the input method panel includes: acquiring input information input by a user through an input method panel; acquiring a candidate item corresponding to the input information; and determining a target candidate item from the candidate items, and taking the target candidate item as a currently selected target object.
Optionally, the playing the voice information of the target object includes: determining a play mode of the target candidate item according to the number of the characters of the target candidate item; and playing the voice information of the target candidate item according to the playing mode.
Optionally, the determining a play mode of the target candidate item according to the number of characters of the target candidate item includes: when the number of characters of the target candidate item is less than or equal to a preset number, determining the play mode of the target candidate item to be a first play mode, in which the characters in the target candidate item and the interpretation information corresponding to each character are played one by one; and when the number of characters of the target candidate item is greater than the preset number, determining the play mode to be a second play mode, in which all the characters of the target candidate item are played first, and then the characters in the target candidate item and the interpretation information corresponding to each character are played one by one.
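For illustration only, a minimal Kotlin sketch of this play-mode selection is given below; the TtsEngine interface, the threshold value, and explanationOf are assumed names and values that this embodiment does not prescribe.

```kotlin
// Hypothetical TTS abstraction; the embodiment does not name a concrete engine.
interface TtsEngine { fun speak(text: String) }

// The "preset number" threshold is an assumed value; the embodiment leaves it open.
const val PRESET_CHAR_COUNT = 3

// Placeholder for per-character interpretation information (e.g. a word containing the character).
fun explanationOf(ch: Char): String = "explanation of $ch"

fun playCandidate(candidate: String, tts: TtsEngine) {
    if (candidate.length <= PRESET_CHAR_COUNT) {
        // First play mode: each character together with its interpretation information.
        candidate.forEach { tts.speak("$it, ${explanationOf(it)}") }
    } else {
        // Second play mode: the whole candidate first, then each character with its interpretation.
        tts.speak(candidate)
        candidate.forEach { tts.speak("$it, ${explanationOf(it)}") }
    }
}
```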
Optionally, the acquiring input information input by the user through the input method panel includes: monitoring the hold time of a touch operation on the space key of the input method panel; when the hold time reaches a preset response time, triggering a voice input mode; and, in the voice input mode, acquiring the user's voice information as the user's input information.
Optionally, the acquiring input information input by the user through the input method panel includes: monitoring touch operation on a key on an input method panel; and when the touch operation on the key is a preset touch operation, taking the character corresponding to the key as the input information of the user.
Optionally, the method further comprises: and when the key to which the touch operation is directed changes or the operation type of the touch operation changes, triggering prompt information corresponding to the key to which the touch operation is directed.
Optionally, the touch operation includes at least one of a press operation and a lift (finger-up) operation, and the prompt information includes at least one of a key sound effect and a key vibration.
Optionally, the determining a target candidate item from the candidate items includes: and taking the first candidate item in the candidate items as a target candidate item.
Optionally, after obtaining the candidate item corresponding to the input information, the method further includes: acquiring word segmentation symbols input by a user aiming at the input information through an input method panel; forming new input information by the input information and the word segmentation symbols, and acquiring update candidate items corresponding to the new input information;
the determining a target candidate item from the candidate items and using the target candidate item as a currently selected target object includes: and determining a target update candidate item from the update candidate items, and taking the target update candidate item as a currently selected target object.
Optionally, the method further comprises: after the input method panel is switched to a symbol keyboard, determining the currently selected symbol keyboard; and playing the voice information corresponding to the currently selected symbol keyboard.
Optionally, after the determining a target candidate from the candidates, the method comprises: acquiring touch operation of the user on the input method panel; and processing the candidate items according to the touch operation on the input method panel.
Optionally, the processing the candidate items according to the touch operation on the input method panel includes:
determining a first user-defined function operation corresponding to the touch operation on the input method panel, and processing the candidate items according to the first user-defined function operation, where the first user-defined function operation corresponding to the touch operation is the function operation on candidate items that corresponds to the designated gesture operation matching the touch operation; and the processing of the candidate items includes at least: acquiring the previous candidate item of the target candidate item as a new target candidate item; acquiring the next candidate item of the target candidate item as a new target candidate item; committing the target candidate item to the screen; after the input information is updated, obtaining new candidate items and taking the first new candidate item as a new target candidate item; clearing the input information; acquiring the previous page of candidate items corresponding to the target candidate item; and acquiring the next page of candidate items corresponding to the target candidate item.
Optionally, the method further comprises: when the candidate item corresponding to the input information is not obtained, obtaining the touch operation of the user on the input method panel; and processing the input information according to the touch operation on the input method panel.
Optionally, the processing the input information according to the touch operation on the input method panel includes: determining a second user-defined function operation corresponding to the touch operation on the input method panel, and processing the input information according to the second user-defined function operation, where the second user-defined function operation corresponding to the touch operation is the function operation on the input information that corresponds to the designated gesture operation matching the touch operation; and the processing of the input information includes at least: moving the cursor forward in the input information; moving the cursor backward in the input information; selecting the character before the cursor in the input information; and selecting the character after the cursor in the input information.
Optionally, the designated gesture operation includes at least one of a two-finger left slide, a two-finger right slide, a two-finger up slide, and a two-finger down slide.
Optionally, the method further comprises: determining the keyboard type of the input method panel; acquiring touch operation of the user on an appointed key on the input method panel; and processing the candidate items according to the keyboard type and the touch operation of the specified key.
Optionally, the processing the candidate items according to the keyboard type and the touch operation on the specified key includes: when the input method panel is a full-key panel, if the touch operation is on the comma key, acquiring the previous candidate item of the target candidate item as a new target candidate item; if the touch operation is on the period key, acquiring the next candidate item of the target candidate item as a new target candidate item; if the touch operation is on the number switch key, acquiring the previous page of candidate items corresponding to the target candidate item; and if the touch operation is on the Chinese/English switch key, acquiring the next page of candidate items corresponding to the target candidate item; and when the input method panel is a nine-key panel, if the touch operation is on the number switch key, acquiring the previous candidate item of the target candidate item as a new target candidate item; and if the touch operation is on the Chinese/English switch key, acquiring the next candidate item of the target candidate item as a new target candidate item.
The embodiment of the invention also discloses an input device, which specifically comprises: the determining module is used for determining a currently selected target object in the input method panel; the target object is used for processing according to corresponding user-defined function operation when the touch operation is monitored to be the designated gesture operation; and the playing module is used for playing the voice information of the target object.
Optionally, the determining module is configured to start an input method panel, and determine a currently selected target control in the input method panel; and taking the target control as the currently selected target object.
Optionally, the apparatus further comprises: a monitoring module, configured to pop up a setting popup window corresponding to the target control when a touch operation on the target control is detected; when a touch operation on a setting item in the setting popup window is detected, take the setting item as a target setting item and process the sub-controls contained in the target setting item; and when a touch operation outside the setting popup window is detected, close the setting popup window.
Optionally, the determining module is configured to obtain input information input by a user through an input method panel; acquiring a candidate item corresponding to the input information; and determining a target candidate item from the candidate items, and taking the target candidate item as a currently selected target object.
Optionally, the playing module is configured to determine a playing mode of the target candidate item according to the number of characters of the target candidate item; and playing the voice information of the target candidate item according to the playing mode.
Optionally, the playing module is configured to determine that the play mode of the target candidate item is a first play mode when the number of characters of the target candidate item is less than or equal to a preset number, in which first play mode the characters in the target candidate item and the interpretation information corresponding to each character are played one by one; and to determine that the play mode is a second play mode when the number of characters of the target candidate item is greater than the preset number, in which second play mode all the characters of the target candidate item are played first, and then the characters in the target candidate item and the interpretation information corresponding to each character are played one by one.
Optionally, the determining module is configured to monitor the hold time of a touch operation on the space key of the input method panel; when the hold time reaches a preset response time, trigger a voice input mode; and, in the voice input mode, acquire the user's voice information as the user's input information.
Optionally, the determining module is configured to monitor a touch operation on a key on the input method panel; and when the touch operation on the key is a preset touch operation, taking the character corresponding to the key as the input information of the user.
Optionally, the determining module is configured to trigger a prompt message corresponding to the key to which the touch operation is directed when the key to which the touch operation is directed changes or the operation type of the touch operation changes.
Optionally, the touch operation includes at least one of a press operation and a lift (finger-up) operation, and the prompt information includes at least one of a key sound effect and a key vibration.
Optionally, the determining module is configured to use the first candidate item in the candidate items as a target candidate item.
Optionally, the apparatus further comprises: the updating module is used for acquiring word segmentation symbols input by a user aiming at the input information through an input method panel; forming new input information by the input information and the word segmentation symbols, and acquiring update candidate items corresponding to the new input information; the determining module is used for determining a target update candidate item from the update candidate items and taking the target update candidate item as a currently selected target object.
Optionally, the apparatus further comprises: the symbol keyboard voice playing module is used for determining the currently selected symbol keyboard after switching to the symbol keyboard on the input method panel; and playing the voice information corresponding to the currently selected symbol keyboard.
Optionally, the apparatus comprises: the first candidate item processing module is used for acquiring touch operation of the user on the input method panel; and processing the candidate items according to the touch operation on the input method panel.
Optionally, the first candidate item processing module is configured to determine a first user-defined function operation corresponding to the touch operation on the input method panel and process the candidate items according to the first user-defined function operation, where the first user-defined function operation corresponding to the touch operation is the function operation on candidate items that corresponds to the designated gesture operation matching the touch operation; and the processing of the candidate items includes at least: acquiring the previous candidate item of the target candidate item as a new target candidate item; acquiring the next candidate item of the target candidate item as a new target candidate item; committing the target candidate item to the screen; after the input information is updated, obtaining new candidate items and taking the first new candidate item as a new target candidate item; clearing the input information; acquiring the previous page of candidate items corresponding to the target candidate item; and acquiring the next page of candidate items corresponding to the target candidate item.
Optionally, the apparatus further comprises: the input information processing module is used for acquiring the touch operation of the user on the input method panel when the candidate item corresponding to the input information is not acquired; and processing the input information according to the touch operation on the input method panel.
Optionally, the input information processing module is configured to determine a second user-defined function operation corresponding to the touch operation on the input method panel and process the input information according to the second user-defined function operation, where the second user-defined function operation corresponding to the touch operation is the function operation on the input information that corresponds to the designated gesture operation matching the touch operation; and the processing of the input information includes at least: moving the cursor forward in the input information; moving the cursor backward in the input information; selecting the character before the cursor in the input information; and selecting the character after the cursor in the input information.
Optionally, the designated gesture operation includes at least one of a two-finger left slide, a two-finger right slide, a two-finger up slide, and a two-finger down slide.
Optionally, the apparatus further comprises: the second candidate item processing module is used for determining the keyboard type of the input method panel; acquiring touch operation of the user on an appointed key on the input method panel; and processing the candidate items according to the keyboard type and the touch operation of the specified key.
Optionally, the second candidate item processing module is configured to: when the input method panel is a full-key panel, if the touch operation is on the comma key, acquire the previous candidate item of the target candidate item as a new target candidate item; if the touch operation is on the period key, acquire the next candidate item of the target candidate item as a new target candidate item; if the touch operation is on the number switch key, acquire the previous page of candidate items corresponding to the target candidate item; and if the touch operation is on the Chinese/English switch key, acquire the next page of candidate items corresponding to the target candidate item; and when the input method panel is a nine-key panel, if the touch operation is on the number switch key, acquire the previous candidate item of the target candidate item as a new target candidate item; and if the touch operation is on the Chinese/English switch key, acquire the next candidate item of the target candidate item as a new target candidate item.
The embodiment of the invention also discloses a readable storage medium, and when the instructions in the storage medium are executed by a processor of the electronic equipment, the electronic equipment can execute the input method according to any one of the embodiments of the invention.
An embodiment of the present invention also discloses an electronic device, including a memory, and one or more programs, where the one or more programs are stored in the memory, and configured to be executed by one or more processors, and the one or more programs include instructions for: determining a currently selected target object in the input method panel; and playing the voice information of the target object.
The embodiment of the invention has the following advantages:
in the embodiments of the invention, after the input method is started, the currently selected target object in the input method panel is determined and its voice information is played, where the target object is processed according to the corresponding user-defined function operation when the monitored touch operation is a designated gesture operation. Because the target object is processed according to the user-defined function operation corresponding to the touch operation, user input becomes more flexible and efficient while the input method is in use, improving the input experience; in addition, the voice information corresponding to the target object prompts the user, further improving the input experience.
Drawings
FIG. 1 is a flow chart of the steps of an input method embodiment of the present invention;
FIG. 2 is a flow chart of the steps of an alternative embodiment of an input method of the present invention;
FIG. 3 is a schematic view of a voice setting pop-up window of the present invention;
FIG. 4 is a schematic diagram of a speech translation panel of the present invention;
FIG. 5 is a flow chart of steps of yet another alternative embodiment of an input method of the present invention;
FIG. 6 is a schematic view of the barrier-free (accessibility) settings of the present invention;
FIG. 7 is a schematic illustration of a customization of a two-finger sliding operation of the present invention;
FIG. 8 is a block diagram of an input device according to an embodiment of the present invention;
FIG. 9 illustrates a block diagram of an electronic device for input, in accordance with an exemplary embodiment;
fig. 10 is a schematic structural diagram of an electronic device for input according to another exemplary embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Referring to fig. 1, a flowchart illustrating steps of an embodiment of an input method according to the present invention is shown, which may specifically include the following steps:
Step 102, determining a currently selected target object in an input method panel, where the target object is processed according to the corresponding user-defined function operation when the monitored touch operation is a designated gesture operation.
Step 104, playing the voice information of the target object.
When using an input method, the input method panel needs to be started first. Input method panels come in various types, such as a symbol panel, a number panel, a speech translation panel, a pinyin panel, an English panel, and so on. In the embodiments of the invention, a control displayed on the input method panel, a candidate item obtained after the user enters information through the input method panel, and the like can all serve as the currently selected target object. The target object can be located and selected automatically by the input method or selected manually by the user.
In the embodiments of the invention, a corresponding user-defined function operation is set for each designated gesture operation. When the monitored touch operation is a designated gesture operation, the target object is processed according to the user-defined function operation corresponding to that gesture, so that user input is more flexible and efficient while the input method is in use, improving the input experience. The system may also let the user configure the correspondence between designated gesture operations and custom function operations, and may allow the user to add or delete designated gesture operations and custom function operations, providing a better input experience.
After the currently selected target object is determined, its voice information is played so that the user receives voice feedback about it. For example, after a certain input method panel is started, a setting control on that panel is automatically located and selected as the target object, and the voice information corresponding to that setting control is played; as another example, after the user obtains candidate items by entering information through the input method panel, the first candidate item is automatically used as the target object and its corresponding voice information is played.
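For illustration only, the overall flow can be sketched as follows in Kotlin; TargetObject, DesignatedGesture, and InputPanelController are illustrative names, and the gesture-to-operation map stands in for the user-defined function operations, none of which are prescribed by this embodiment.

```kotlin
// Illustrative types only; the embodiment does not prescribe an API.
sealed interface TargetObject {
    data class Control(val name: String) : TargetObject
    data class Candidate(val text: String) : TargetObject
}

enum class DesignatedGesture { TWO_FINGER_LEFT, TWO_FINGER_RIGHT, TWO_FINGER_UP, TWO_FINGER_DOWN }

class InputPanelController(
    private val speak: (String) -> Unit,
    // User-configurable mapping from designated gesture to custom function operation.
    private val customOps: Map<DesignatedGesture, (TargetObject) -> Unit>
) {
    var target: TargetObject? = null
        set(value) {                          // selecting a target object plays its voice information
            field = value
            value?.let { speak(describe(it)) }
        }

    fun onTouch(gesture: DesignatedGesture) {
        val current = target ?: return
        customOps[gesture]?.invoke(current)   // process the target per the user-defined operation
    }

    private fun describe(t: TargetObject): String = when (t) {
        is TargetObject.Control -> t.name
        is TargetObject.Candidate -> t.text
    }
}
```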
In a preferred example, embodiments of the present invention may be applied to visually impaired users. Because a visually impaired user cannot obtain clear visual feedback while using an electronic device such as a mobile phone and can only distinguish the content of the current operation through hearing or touch, the embodiments of the invention determine, automatically or through manual selection, the target object currently selected by the user in the input method panel, and process it according to the corresponding user-defined function operation when the monitored touch operation is a designated gesture operation, so that user input is more flexible and efficient and the input experience is improved. In addition, the embodiments of the invention can play the voice information corresponding to the target object to prompt the visually impaired user, further improving the input experience.
Referring to fig. 2, a flowchart illustrating steps of an alternative embodiment of the input method of the present invention is shown, which may specifically include the following steps:
step 202, starting an input method panel, and determining a currently selected target control in the input method panel.
Step 204, taking the target control as the currently selected target object.
Step 206, playing the voice information of the target object.
The input method panel includes one or more controls, such as a speech setting control and a language selection control on a speech panel, a language selection control on a speech translation panel, and the like.
In the embodiments of the invention, one or more controls of an input method panel can be designated as target controls in advance, and the target control of a panel is automatically selected after the user starts that panel. For example, if the voice panel includes a voice setting control and a language setting control, and the voice setting control is the target control, the voice setting control is selected after the voice panel is started; similarly, if the speech translation panel includes a language selection control that is the target control, the language selection control is selected after the speech translation panel is started. After the target control in the input method panel is determined, it is taken as the currently selected target object and its corresponding voice information is played.
In an embodiment of the present invention, after the target control is taken as the currently selected target object, the method further includes: when a touch operation on the target control is detected, popping up a setting popup window corresponding to the target control; when a touch operation on a setting item in the setting popup window is detected, taking the setting item as a target setting item and processing the sub-controls contained in the target setting item; and when a touch operation outside the setting popup window is detected, closing the setting popup window.
The touch operation may be a click operation, a press operation, or a double-click operation. In the embodiment of the invention, the setting popup window of the target control is started or closed according to the touch operation of the user.
In one example of the invention, after the voice panel is started, the voice setting control of the voice panel is selected as the target control and the corresponding voice information "voice settings" is played; if a double-tap by the user is detected, the voice setting popup is opened. Each setting item in the voice setting popup is treated as a whole, that is, all sub-controls of a setting item are played, selected/cancelled, or jumped to together. If the language selection control of the voice panel is selected as the target control, the corresponding voice information "language selection, currently xx" is played, where xx is the currently displayed language; if a double-tap by the user is detected, the language selection popup is opened, and each setting item in it is likewise a whole, that is, all sub-controls of the setting item are played, selected/cancelled, or expanded/collapsed together. Referring to FIG. 3, which shows the voice setting popup, the boxed portion is a setting item, and each setting item is a whole, so playing, selection/cancellation, or jump operations act on all sub-controls of the boxed setting item; for example, when a play operation is performed on the "whisper recognition" setting item selected in FIG. 3, the entire content of that setting item is played, that is, the voice information to be played is "whisper recognition" together with its description "recognizable whispers".
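For illustration only, the popup interaction described above can be sketched as follows; SettingItem, SettingPopup, and the announce callback are assumed names.

```kotlin
// Illustrative sketch of the setting-popup behaviour under assumed types.
data class SettingItem(val title: String, val detail: String, val activate: () -> Unit)

class SettingPopup(
    private val items: List<SettingItem>,
    private val announce: (String) -> Unit
) {
    var isOpen = false
        private set

    // A double tap on the selected target control pops up its setting popup window.
    fun onDoubleTapTargetControl() { isOpen = true }

    // A setting item is handled as a whole: play its full content, then run its
    // select/cancel or jump behaviour via activate().
    fun onTapItem(index: Int) {
        val item = items[index]
        announce("${item.title}. ${item.detail}")
        item.activate()
    }

    // A touch outside the popup closes it.
    fun onTapOutside() { isOpen = false }
}
```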
In the embodiments of the invention, when the user enters the voice panel in barrier-free mode (an input method mode for visually impaired users), the voice input function is not started automatically; instead, the user is prompted to tap a button to start it, and a short vibration prompt is given when the user starts the voice input function, that is, enters the recording state.
In another example of the invention, referring to the speech translation panel shown in FIG. 4, after the speech translation panel is started, the language translation control of the panel is selected as the target control and the corresponding voice information is played, where the voice information reflects the currently selected languages; if the currently selected languages are switched, the voice information also needs to be played again. For example, suppose the voice information "x translate x" is played when the control is initially selected; when the languages are switched, the new "x translate x" is played, where x represents a language. If the user is detected tapping the drop-down arrow (language setting control) beside the language translation control, a language setting popup is opened; each setting item in the popup is a whole, and the corresponding playing and selection/cancellation operations are performed on it.
By applying the embodiments of the invention, the input method panel can be adapted, and voice information is provided for controls during the adaptation, so that the user can configure the input method according to the voice information, improving the efficiency of input method setup.
It should be noted that embodiments of the present invention may also provide settings such as the word bank, account management, shortcut phrases, the clipboard, text editing, and quick translation; in practice these settings may be selected with a single tap and activated with a double tap. Of course, settings other than those of the input method itself may also be opened, which the embodiment of the present invention does not limit.
Referring to FIG. 5, a flowchart illustrating the steps of yet another alternative embodiment of the input method of the present invention is shown, which may specifically include the following steps:
step 502, obtaining input information input by a user through an input method panel.
In the embodiments of the invention, while the user is using the input method, the input information entered by the user through the input method panel may be acquired. The input information may include information the user enters by invoking the input method within other applications, where the other applications may be applications other than the input method, such as chat applications and game applications, which the embodiment of the invention does not limit. The ways in which the user enters information through the input method panel may also vary, for example keyboard input, handwriting input, and voice input, which the embodiment of the invention does not limit either.
In an example of the present invention, step 502 of acquiring the input information entered by the user through the input method panel may include: monitoring the hold time of a touch operation on the space key of the input method panel; when the hold time reaches a preset response time, triggering a voice input mode; and, in the voice input mode, acquiring the user's voice information as the user's input information.
The space key in the input method panel can serve as the trigger key for voice input. In the embodiments of the invention, the preset response time for the space key is set to 1 s (second). When a touch operation (such as a press) by the user on the space key of the input method panel is detected, the hold time of that press is counted; if the hold time reaches the preset response time, voice input is triggered, at which point the user's voice can be captured and converted to text to be used as the user's input information.
It should be noted that, when counting the hold time of the press on the space key, the embodiments of the invention do not care whether the user's press lands directly on the space key or slides onto it from another key; that is, even if the user's finger is not on the space key when first pressed, voice input is triggered as long as the press is held for 1 s after sliding onto the space key. By not restricting where the user's finger lands, this trigger makes voice input more convenient, especially for visually impaired users.
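For illustration only, a minimal sketch of this trigger follows; only the 1 s response time comes from the description above, while the clock source and the startVoiceInput callback are assumptions.

```kotlin
// Sketch of the 1 s space-key hold that triggers voice input.
const val RESPONSE_TIME_MS = 1_000L

class SpaceKeyVoiceTrigger(private val startVoiceInput: () -> Unit) {
    private var overSpaceSinceMs: Long? = null
    private var triggered = false

    // Called while the finger is over the space key, whether it landed there
    // directly or slid onto it from another key.
    fun onOverSpaceKey(nowMs: Long) {
        val since = overSpaceSinceMs ?: nowMs.also { overSpaceSinceMs = it }
        if (!triggered && nowMs - since >= RESPONSE_TIME_MS) {
            triggered = true
            startVoiceInput()   // enter voice input mode; captured speech becomes the input information
        }
    }

    // Called when the finger slides off the space key or is lifted.
    fun onLeaveSpaceKey() {
        overSpaceSinceMs = null
        triggered = false
    }
}
```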
In an example of the present invention, the step 502 of acquiring the input information input by the user through the input method panel may include: monitoring touch operation on a key on an input method panel; and when the touch operation on the key is a preset touch operation, taking the character corresponding to the key as the input information of the user.
In the embodiments of the invention, touch operations on the keys of the input method panel are detected in real time. In an example of the present invention, the touch operation includes at least one of a press operation and a lift (finger-up) operation, and the prompt information includes at least one of a key sound effect and a key vibration.
Keyboard input is completed with the keys on the input method panel. If the preset touch operation is a press followed by a lift, then when a press and subsequent lift on a key of the input method panel are detected, the character corresponding to that key can be used as the user's input information. Of course, other touch operations, such as a double tap, may be set in addition to the preset touch operation described above, which the embodiment of the present invention does not limit.
In one example of the present invention, the method may further include: and when the key to which the touch operation is directed changes or the operation type of the touch operation changes, triggering prompt information corresponding to the key to which the touch operation is directed.
To facilitate user input, the embodiments of the invention trigger the key sound effect and key vibration when the key targeted by a touch operation changes, for example when the finger moves from the key "A" to the key "S"; the key sound effect and key vibration are also triggered when the type of the touch operation changes, for example from a press to a lift.
That is, the key sound effect and key vibration are triggered by presses, lifts, and changes of the key currently under the user's finger. For example, when the user first puts a finger down and presses a key, the corresponding key sound effect and vibration are triggered; then, if during slide selection the finger moves onto another key, the sound effect and vibration for the new key are triggered; finally, the sound effect and vibration are triggered again when the user lifts the finger.
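For illustration only, the feedback rule can be sketched as follows; the playKeySound and vibrate callbacks stand in for the platform's audio and haptic calls and are assumptions.

```kotlin
// Feedback fires when the key under the finger changes or when the touch type changes.
enum class TouchType { PRESS, LIFT }

class KeyFeedback(
    private val playKeySound: (String) -> Unit,
    private val vibrate: () -> Unit
) {
    private var lastKey: String? = null
    private var lastType: TouchType? = null

    fun onTouch(key: String, type: TouchType) {
        if (key != lastKey || type != lastType) {
            playKeySound(key)   // feedback for the key now under the finger
            vibrate()
        }
        lastKey = key
        lastType = type
    }
}
```

With this rule, putting a finger down on "A", sliding onto "S", and then lifting the finger each produce one sound effect and one vibration, matching the sequence described above.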
Step 504, obtaining a candidate item corresponding to the input information.
After the input information is acquired, candidate items corresponding to it can be looked up in a preset word bank. Optionally, the preset word bank may include a user word bank, a system word bank, a cell word bank, a cloud word bank, and the like; the embodiment of the present invention does not limit the specific word bank.
Step 506, determining a target candidate item from the candidate items and taking the target candidate item as the currently selected target object, where the target object is processed according to the corresponding user-defined function operation when the monitored touch operation is a designated gesture operation.
In the embodiment of the present invention, the target candidate item may be marked by using a focus, where the focus may be represented in the form of a cursor or a box, and may jump with the change of the target candidate item.
In an example of the present invention, the step 506 of determining a target candidate item from the candidate items may include: and taking the first candidate item in the candidate items as a target candidate item.
The first candidate item is usually the highest-quality candidate for the current input, and the probability that the user commits it to the screen is high.
In one example of the present invention, the method may further include: acquiring word segmentation symbols input by a user aiming at the input information through an input method panel; forming new input information by the input information and the word segmentation symbols, and acquiring update candidate items corresponding to the new input information; step 506, determining a target candidate item from the candidate items, and using the target candidate item as the currently selected target object, may include: and determining a target update candidate item from the update candidate items, and taking the target update candidate item as a currently selected target object.
When typing on the keyboard, the user may modify the input information already entered, such as adding a word segmentation symbol to it; for example, if the entered input information is "xian", it becomes "xi'an" after the segmentation symbol is added.
In the embodiments of the invention, after the user enters the segmentation symbol, the candidate items are refreshed to obtain updated candidate items, which may or may not have changed. To prompt the user that the segmentation symbol was entered normally, regardless of whether the updated candidate items changed, the first updated candidate item is taken as the target updated candidate item, used as the currently selected target object, and its corresponding voice information is played.
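For illustration only, the refresh-and-announce behaviour can be sketched as follows; lookupCandidates is an assumed lexicon query, and the apostrophe is used as the segmentation symbol following the "xi'an" example.

```kotlin
fun lookupCandidates(input: String): List<String> = emptyList()   // placeholder lexicon query

class CompositionState(private val announce: (String) -> Unit) {
    var input: String = ""
        private set
    var targetCandidate: String? = null
        private set

    fun appendSegmentationSymbol(symbol: Char = '\'') {
        input += symbol                          // form the new input information
        val updated = lookupCandidates(input)    // fetch the updated candidate items
        // Re-select and announce the first updated candidate whether or not the list changed,
        // so the user hears that the segmentation symbol was accepted.
        targetCandidate = updated.firstOrNull()
        targetCandidate?.let(announce)
    }
}
```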
In one example of the present invention, the method may further include: after the input method panel is switched to a symbol keyboard, determining the currently selected symbol keyboard; and playing the voice information corresponding to the currently selected symbol keyboard.
If the user switches to the symbol keyboard on the input method panel, the voice information corresponding to the currently selected symbol keyboard is played. On the first switch, the panel usually switches to a default symbol keyboard; at that moment the default symbol keyboard can be selected automatically and its corresponding voice information played.
In practice there may be multiple symbol keyboards; if the user switches between them, the newly switched-to symbol keyboard is selected and the corresponding voice message is played, for example "current xx symbol keyboard" (where xx is the keyboard name; usually only the non-numeric part of the name needs to be played).
For example, suppose the symbol keyboards include a "1. common" symbol keyboard, a "2. English" symbol keyboard, and a "3. Chinese" symbol keyboard. If the "1. common" symbol keyboard is shown on the initial switch, the voice message "current common symbol keyboard" is played; within the input method panel, if the user switches to the "2. English" symbol keyboard, for example by a two-finger slide (e.g., a two-finger left slide to switch to the previous symbol keyboard, a two-finger right slide to switch to the next one) or a double tap, the voice message "current English symbol keyboard" can be played.
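For illustration only, the announcement of a symbol-keyboard switch can be sketched as follows; stripping the numeric prefix follows the note above that only the non-numeric part of the keyboard name is played.

```kotlin
// announceSymbolKeyboard("2. English", ::println) would speak "current English symbol keyboard".
fun announceSymbolKeyboard(name: String, speak: (String) -> Unit) {
    val spokenName = name.substringAfter('.').trim()   // drop the "1." style prefix if present
    speak("current $spokenName symbol keyboard")
}
```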
Optionally, embodiments of the invention may further optimize the voice information corresponding to the symbols on the symbol keyboard so that the pronunciation is more accurate and easier for the user to perceive. In one example, the readings of some symbols before and after optimization may be: "@": previously read as a transliteration ("Atty"), optimized to the English pronunciation "at"; "&": previously not read at all, optimized to be read as the "and" sign; "^": optimized to be read as the caret symbol; "`": previously read as "single reverse quote", optimized to "accent symbol"; "\": previously read as "double backslash", optimized to "backslash". Of course, the optimization of these few symbols is only an example; in practical use, problems may be continuously found and optimized, which the embodiment of the present invention does not need to limit.
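For illustration only, such reading optimizations can be kept in a small override table; the exact wording of each entry is an assumption, not a fixed list from this embodiment.

```kotlin
// Illustrative override table for optimized symbol readings.
val symbolReadings: Map<Char, String> = mapOf(
    '@' to "at",
    '&' to "ampersand",
    '^' to "caret",
    '`' to "accent symbol",
    '\\' to "backslash",
)

// Fall back to the raw symbol when no optimized reading is registered.
fun readingFor(symbol: Char): String = symbolReadings[symbol] ?: symbol.toString()
```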
In one example of the present invention, the method may further include: acquiring touch operation of the user on the input method panel; and processing the candidate items according to the touch operation on the input method panel.
The designated gesture operation includes at least one of a two-finger left slide, a two-finger right slide, a two-finger up slide, and a two-finger down slide. When candidate items corresponding to the input information have been acquired, if a touch operation such as a two-finger left, right, up, or down slide is detected on the input method panel, it can be determined that a designated gesture operation has been recognized, and the candidate items are processed according to the user-defined function operation corresponding to that gesture.
In an example of the present invention, a first user-defined function operation corresponding to the touch operation on the input method panel is determined, and the candidate items are processed according to the first user-defined function operation; the first user-defined function operation corresponding to the touch operation is the function operation on candidate items that corresponds to the designated gesture operation matching the touch operation.
The processing of the candidate items includes at least: acquiring the previous candidate item of the target candidate item as a new target candidate item; acquiring the next candidate item of the target candidate item as a new target candidate item; committing the target candidate item to the screen; after the input information is updated, obtaining new candidate items and taking the first new candidate item as a new target candidate item; clearing the input information; acquiring the previous page of candidate items corresponding to the target candidate item; and acquiring the next page of candidate items corresponding to the target candidate item.
In the embodiments of the invention, corresponding user-defined function operations can be defined for the designated gesture operations in advance, and the corresponding user-defined function operation is executed when a touch operation on the input method panel is detected to be a designated gesture operation. In one example, the barrier-free settings panel of FIG. 6 may be entered to configure the gesture customization options. Specifically, four two-finger slide operations are supported on the input method panel: "two-finger up slide", "two-finger down slide", "two-finger left slide", and "two-finger right slide". Referring to the check-box popup shown in FIG. 7, each of the four two-finger slides can be bound to a chosen custom function operation, which may include: selecting the previous candidate item; selecting the next candidate item; committing the current (target) candidate item to the screen; deleting one pinyin character; clearing all pinyin (symbols); going to the previous page of candidate items; and going to the next page of candidate items. When a custom function operation has already been chosen for another two-finger slide, it is not displayed, which avoids the unnecessary trouble of binding the same custom function operation to more than one two-finger slide. After the check-box popup appears, the user can touch each function option (custom function operation) with a single finger, and double-tapping the target function option completes the setting and automatically closes the popup; in addition, after the popup appears, if the user does not want to change any setting, touching the "close" button closes the popup.
In the embodiments of the invention, for the user's convenience, default custom function operations are preset for the four two-finger slides "two-finger up slide", "two-finger down slide", "two-finger left slide", and "two-finger right slide": for example, "two-finger up slide" commits the current candidate item to the screen, "two-finger down slide" deletes one pinyin character, "two-finger left slide" selects the previous candidate item, and "two-finger right slide" selects the next candidate item.
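For illustration only, the default bindings and the rule that an operation already chosen for another gesture is hidden can be sketched as follows; all type and member names are illustrative.

```kotlin
enum class TwoFingerGesture { UP, DOWN, LEFT, RIGHT }

enum class CandidateOp {
    COMMIT_CURRENT,       // commit the current (target) candidate to the screen
    DELETE_ONE_PINYIN,    // delete one pinyin character
    CLEAR_ALL_PINYIN,
    PREVIOUS_CANDIDATE,
    NEXT_CANDIDATE,
    PREVIOUS_PAGE,
    NEXT_PAGE,
}

class GestureBindings {
    // Defaults described in the text; the user may rebind them in the gesture settings.
    private val bindings = mutableMapOf(
        TwoFingerGesture.UP to CandidateOp.COMMIT_CURRENT,
        TwoFingerGesture.DOWN to CandidateOp.DELETE_ONE_PINYIN,
        TwoFingerGesture.LEFT to CandidateOp.PREVIOUS_CANDIDATE,
        TwoFingerGesture.RIGHT to CandidateOp.NEXT_CANDIDATE,
    )

    fun opFor(gesture: TwoFingerGesture): CandidateOp? = bindings[gesture]

    // Options offered in the check-box popup for one gesture: hide operations
    // already taken by a different gesture.
    fun selectableOps(gesture: TwoFingerGesture): List<CandidateOp> =
        CandidateOp.values().filter { op ->
            bindings.entries.none { (g, bound) -> g != gesture && bound == op }
        }

    fun rebind(gesture: TwoFingerGesture, op: CandidateOp) {
        require(op in selectableOps(gesture)) { "operation already bound to another gesture" }
        bindings[gesture] = op
    }
}
```

Filtering already-bound operations out of selectableOps is what keeps two gestures from sharing one custom function operation, as described above.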
In one example of the present invention, the method may further include: when the candidate item corresponding to the input information is not obtained, obtaining the touch operation of the user on the input method panel; and processing the input information according to the touch operation on the input method panel.
It should be noted that, whereas in the foregoing case the user-defined function operation associated with the designated gesture operation acts on the candidate items obtained for the input information, when no candidate item corresponding to the input information is obtained, the input information itself is processed according to the touch operation.
In an example of the present invention, the processing the input information according to the touch operation on the input method panel includes: determining a second user-defined function operation corresponding to the touch operation on the input method panel, and processing the input information according to the second user-defined function operation; the second user-defined function operation corresponding to the touch operation is the function operation on the input information that corresponds to the designated gesture operation matching the touch operation.
The processing of the input information includes at least: moving the cursor forward in the input information; moving the cursor backward in the input information; selecting the character before the cursor in the input information; and selecting the character after the cursor in the input information.
For convenience of setting by the user, in the case that no candidate item corresponding to the input information can be acquired, the embodiment of the present invention may also set corresponding user-defined function operations for the four two-finger sliding operations "two-finger up-slide", "two-finger down-slide", "two-finger left-slide", and "two-finger right-slide". For example, "two-finger up-slide" corresponds to moving the cursor forward, "two-finger down-slide" corresponds to moving the cursor backward, "two-finger left-slide" corresponds to selecting the character before the cursor, and "two-finger right-slide" corresponds to selecting the character after the cursor.
The embodiment of the invention sets user-defined function operations for the designated gesture operations separately for the case in which the input information has candidate items and the case in which it does not, so that different input operations can be performed in the two cases and the input efficiency is improved.
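A minimal dispatch sketch, continuing the assumed names from the previous example, shows how the same gesture can be routed either to a candidate operation or to a cursor operation depending on whether candidate items are currently available; the cursor operations and the no-candidate bindings follow the description above, and all identifiers remain assumptions:

```kotlin
// Illustrative sketch; names are assumptions and reuse the earlier sketch's types.
enum class InputInfoOperation {
    CURSOR_FORWARD, CURSOR_BACKWARD, SELECT_CHAR_BEFORE_CURSOR, SELECT_CHAR_AFTER_CURSOR
}

class GestureDispatcher(private val candidateBindings: GestureBindings) {
    // Bindings used when no candidate item is available.
    private val noCandidateBindings = mapOf(
        TwoFingerGesture.SWIPE_UP to InputInfoOperation.CURSOR_FORWARD,
        TwoFingerGesture.SWIPE_DOWN to InputInfoOperation.CURSOR_BACKWARD,
        TwoFingerGesture.SWIPE_LEFT to InputInfoOperation.SELECT_CHAR_BEFORE_CURSOR,
        TwoFingerGesture.SWIPE_RIGHT to InputInfoOperation.SELECT_CHAR_AFTER_CURSOR,
    )

    fun onTwoFingerGesture(gesture: TwoFingerGesture, hasCandidates: Boolean) {
        if (hasCandidates) {
            candidateBindings.operationFor(gesture)?.let { applyToCandidates(it) }
        } else {
            noCandidateBindings[gesture]?.let { applyToInputInfo(it) }
        }
    }

    private fun applyToCandidates(op: CustomOperation) { /* move selection, commit, page, ... */ }
    private fun applyToInputInfo(op: InputInfoOperation) { /* move cursor or select characters */ }
}
```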
In one example of the present invention, the method may further include: determining the keyboard type of the input method panel; acquiring touch operation of the user on an appointed key on the input method panel; and processing the candidate items according to the keyboard type and the touch operation of the specified key.
In the embodiment of the invention, when candidate items corresponding to the input information are acquired, the candidate items can be operated on not only through the four two-finger sliding operations but also through the keys of the input method panel. Preferably, the user-defined function operations of the keys differ for input method panels of different keyboard types.
In an example of the present invention, the processing the candidate items according to the keyboard type and the touch operation on the specified key includes: when the input method panel is a full-key panel, if the touch operation is directed at the comma key, acquiring the previous candidate item of the target candidate item as a new target candidate item; if the touch operation is directed at the period key, acquiring the next candidate item of the target candidate item as a new target candidate item; if the touch operation is directed at the number switching key, acquiring the previous page of candidate items corresponding to the target candidate item; if the touch operation is directed at the Chinese-English switching key, acquiring the next page of candidate items corresponding to the target candidate item; and when the input method panel is a nine-key panel, if the touch operation is directed at the number switching key, acquiring the previous candidate item of the target candidate item as a new target candidate item; and if the touch operation is directed at the Chinese-English switching key, acquiring the next candidate item of the target candidate item as a new target candidate item.
The keyboard type may include a full-key panel (e.g., the pinyin 26-key and English 26-key layouts) and a nine-key panel (e.g., the pinyin 9-key and English 9-key layouts). On a full-key panel, the user-defined function operations of the "," comma key and the "." period key beside the space key are "previous candidate item" and "next candidate item", respectively, while the "123" number switching key and the Chinese-English switching key on the left side of the space key are defined as "previous page of candidate items" and "next page of candidate items", respectively. A nine-key panel has only the "123" number switching key and the Chinese-English switching key, which can be defined as "previous candidate item" and "next candidate item", respectively.
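Under the same assumed names as the earlier sketches, the keyboard-type-dependent key bindings can be expressed as a simple table lookup; the key identifiers below are illustrative assumptions, since the actual key codes of the input method panel are not specified in the text:

```kotlin
// Illustrative sketch; key identifiers and names are assumptions.
enum class KeyboardType { FULL_KEY, NINE_KEY }
enum class PanelKey { COMMA, PERIOD, NUMBER_SWITCH, CN_EN_SWITCH }

fun operationForKey(keyboard: KeyboardType, key: PanelKey): CustomOperation? = when (keyboard) {
    KeyboardType.FULL_KEY -> when (key) {
        PanelKey.COMMA -> CustomOperation.PREVIOUS_CANDIDATE
        PanelKey.PERIOD -> CustomOperation.NEXT_CANDIDATE
        PanelKey.NUMBER_SWITCH -> CustomOperation.PREVIOUS_CANDIDATE_PAGE
        PanelKey.CN_EN_SWITCH -> CustomOperation.NEXT_CANDIDATE_PAGE
    }
    KeyboardType.NINE_KEY -> when (key) {
        PanelKey.NUMBER_SWITCH -> CustomOperation.PREVIOUS_CANDIDATE
        PanelKey.CN_EN_SWITCH -> CustomOperation.NEXT_CANDIDATE
        else -> null // the comma/period keys are not repurposed on the nine-key panel
    }
}
```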
Step 508, determining the play mode of the target candidate item according to the number of characters of the target candidate item.
Step 510, playing the voice information of the target candidate item according to the playing mode.
After the candidate items corresponding to the input information are acquired, the candidate items can be presented at the corresponding positions in the input method panel for the user to select. In practical application, the user can determine a target candidate item from the candidate items by touch exploration; the target candidate item is taken as the currently selected target object and its corresponding voice information is played, so that the user can determine from the voice information whether the target candidate item is the content the user wants to input.
Preferably, when the voice information of the target candidate item is played, different play modes can be adopted according to the number of characters of the target candidate item, so as to improve the input efficiency of the user. Specifically, when the target candidate item has two characters or fewer, a first play mode of character-by-character interpretation is adopted; when it has more than two characters, a second play mode is adopted in which the complete candidate item is played first and then each character is interpreted one by one. For example, for a two-character target candidate item such as "experience", the first play mode is used: each character is read aloud together with a common word that contains it, to disambiguate homophones. For a longer target candidate item such as "Chinese nation", the second play mode is used: the complete candidate item is read first, and then each of its characters is read with such a disambiguating word.
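The play-mode decision can be sketched as follows; the two-character threshold comes from the description above, while the TTS interface and the source of the per-character disambiguating words are assumptions introduced only for illustration:

```kotlin
// Illustrative sketch; the TTS interface and dictionary lookup are assumptions.
interface Tts { fun speak(text: String) }

// Returns a short disambiguating phrase for a character, e.g. a common word containing it.
typealias InterpretationLookup = (Char) -> String

fun playCandidate(candidate: String, tts: Tts, interpret: InterpretationLookup, presetCount: Int = 2) {
    if (candidate.length > presetCount) {
        // Second play mode: read the complete candidate item first.
        tts.speak(candidate)
    }
    // Both modes then interpret the candidate character by character.
    for (ch in candidate) {
        tts.speak("$ch, ${interpret(ch)}")
    }
}
```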
By playing target candidate items with different numbers of characters in different play modes, the embodiment of the invention helps the user determine the actual meaning of a candidate item more quickly, thereby improving the input efficiency of the user.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 8, a block diagram of an embodiment of an input device according to the present invention is shown, which may specifically include the following modules:
a determining module 802, configured to determine a currently selected target object in the input method panel; and the target object is used for processing according to the corresponding user-defined function operation when the touch operation is monitored to be the designated gesture operation.
A playing module 804, configured to play the voice information of the target object.
Optionally, the determining module 802 is configured to start an input method panel, and determine a currently selected target control in the input method panel; and taking the target control as the currently selected target object.
Optionally, the apparatus further comprises: the monitoring module is used for popping up a setting popup window corresponding to the target control when the touch operation aiming at the target control is monitored; when touch control operation aiming at the setting items in the setting popup window is monitored, taking the setting items as target setting items, and processing sub-controls contained in the target setting items; and when the touch operation outside the setting popup window is monitored, closing the setting popup window.
Optionally, the determining module 802 is configured to obtain input information input by a user through an input method panel; acquiring a candidate item corresponding to the input information; and determining a target candidate item from the candidate items, and taking the target candidate item as a currently selected target object.
Optionally, the playing module 804 is configured to determine a playing mode of the target candidate item according to the number of characters of the target candidate item; and playing the voice information of the target candidate item according to the playing mode.
Optionally, the playing module 804 is configured to determine that the playing mode of the target candidate item is the first playing mode when the number of characters of the target candidate item is less than or equal to a preset number; the first playing mode is that characters in the target candidate items and interpretation information corresponding to the characters are played one by one; when the number of the characters of the target candidate item is larger than the preset number, determining the play mode of the target candidate item as a second play mode; and after all the characters of the target candidate item are played, the characters in the target candidate item and the interpretation information corresponding to the characters are played one by one in the second playing mode.
Optionally, the determining module 802 is configured to monitor the holding time of a touch operation on the space key of the input method panel; trigger a voice input mode when the holding time meets a preset response time; and acquire the voice information of the user as the input information of the user in the voice input mode.
Optionally, the determining module 802 is configured to monitor a touch operation on a key on an input method panel; and when the touch operation on the key is a preset touch operation, taking the character corresponding to the key as the input information of the user.
Optionally, the determining module 802 is configured to trigger a prompt message corresponding to the key to which the touch operation is directed when the key to which the touch operation is directed changes or the operation type of the touch operation changes.
Optionally, the touch operation at least includes one of a pressing operation and a raising operation, and the prompt information at least includes one of a key sound effect and a key vibration.
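A sketch of the key-event handling described by the modules above (announcing the key when the key under the finger changes, committing a character on a preset touch operation, and triggering voice input when the space key is held past the preset response time): the event types, the threshold value, and the feedback hooks are all assumptions used only to illustrate the flow.

```kotlin
// Illustrative sketch; event types, threshold, and feedback hooks are assumptions.
sealed class KeyTouchEvent {
    data class Press(val key: String, val timeMs: Long) : KeyTouchEvent()
    data class Lift(val key: String, val timeMs: Long) : KeyTouchEvent()
}

class KeyTouchHandler(
    private val responseTimeMs: Long = 1500,        // assumed preset response time for voice input
    private val announce: (String) -> Unit,         // key sound effect / spoken prompt
    private val vibrate: () -> Unit,                // key vibration
    private val commitCharacter: (String) -> Unit,  // take the key's character as input information
    private val startVoiceInput: () -> Unit,
) {
    private var lastKey: String? = null
    private var pressTimeMs: Long = 0

    fun onEvent(event: KeyTouchEvent) {
        when (event) {
            is KeyTouchEvent.Press -> {
                pressTimeMs = event.timeMs
                if (event.key != lastKey) {          // the key under the touch changed
                    announce(event.key)
                    vibrate()
                    lastKey = event.key
                }
            }
            is KeyTouchEvent.Lift -> {
                val held = event.timeMs - pressTimeMs
                if (event.key == "SPACE" && held >= responseTimeMs) {
                    startVoiceInput()                // long press on the space key triggers voice input
                } else {
                    commitCharacter(event.key)       // assumed preset touch operation: lift commits the character
                }
            }
        }
    }
}
```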
Optionally, the determining module 802 is configured to use the first candidate item in the candidate items as a target candidate item.
Optionally, the apparatus further comprises: the updating module is used for acquiring word segmentation symbols input by a user aiming at the input information through an input method panel; forming new input information by the input information and the word segmentation symbols, and acquiring update candidate items corresponding to the new input information; the determining module is used for determining a target update candidate item from the update candidate items and taking the target update candidate item as a currently selected target object.
Optionally, the apparatus further comprises: the symbol keyboard voice playing module is used for determining the currently selected symbol keyboard after switching to the symbol keyboard on the input method panel; and playing the voice information corresponding to the currently selected symbol keyboard.
Optionally, the apparatus comprises: the first candidate item processing module is used for acquiring touch operation of the user on the input method panel; and processing the candidate items according to the touch operation on the input method panel.
Optionally, the first candidate item processing module is configured to determine a first user-defined function operation corresponding to a touch operation of the input method panel, and process the candidate item according to the first user-defined function operation; the first user-defined function operation corresponding to the touch operation is a function operation on a candidate item corresponding to a designated gesture operation matched with the touch operation; wherein the processing the candidate items comprises at least: acquiring a previous candidate item of the target candidate items as a new target candidate item; acquiring a next candidate item of the target candidate items as a new target candidate item; the target candidate item is displayed on a screen; after the input information is updated, obtaining a new candidate item, and taking the first new candidate item of the new candidate item as a new target candidate item; emptying the input information; acquiring a previous page candidate item corresponding to the target candidate item; and acquiring next page candidate items corresponding to the target candidate items.
Optionally, the apparatus further comprises: the input information processing module is used for acquiring the touch operation of the user on the input method panel when the candidate item corresponding to the input information is not acquired; and processing the input information according to the touch operation on the input method panel.
Optionally, the input information processing module is configured to determine a second user-defined function operation corresponding to the touch operation of the input method panel, and process the input information according to the second user-defined function operation; the second user-defined function operation corresponding to the touch operation is a function operation on input information corresponding to the designated gesture operation matched with the touch operation;
wherein the processing the input information comprises at least: moving a cursor forward on the input information; moving a cursor backward on the input information; selecting characters in the input information before the cursor; and selecting the character behind the cursor in the input information.
Optionally, the designated gesture operation includes at least one of a left slide of the two fingers, a right slide of the two fingers, an up slide of the two fingers, and a down slide of the two fingers.
Optionally, the apparatus further comprises: the second candidate item processing module is used for determining the keyboard type of the input method panel; acquiring touch operation of the user on an appointed key on the input method panel; and processing the candidate items according to the keyboard type and the touch operation of the specified key.
Optionally, the second candidate item processing module is configured to, when the input method panel is a full-key panel, if the touch operation is directed at the comma key, obtain the previous candidate item of the target candidate item as a new target candidate item; if the touch operation is directed at the period key, obtain the next candidate item of the target candidate item as a new target candidate item; if the touch operation is directed at the number switching key, obtain the previous page of candidate items corresponding to the target candidate item; if the touch operation is directed at the Chinese-English switching key, obtain the next page of candidate items corresponding to the target candidate item; and when the input method panel is a nine-key panel, if the touch operation is directed at the number switching key, obtain the previous candidate item of the target candidate item as a new target candidate item; and if the touch operation is directed at the Chinese-English switching key, obtain the next candidate item of the target candidate item as a new target candidate item.
In the embodiment of the invention, after the input method is started, the currently selected target object in the input method panel is determined and its voice information is played. The target object is processed according to the corresponding user-defined function operation when the monitored touch operation is a designated gesture operation. Processing the target object based on the user-defined function operation corresponding to the touch operation makes the user's input more flexible and efficient during use of the input method, and providing the voice information corresponding to the target object as a prompt to the user further improves the input experience.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
FIG. 9 is a block diagram illustrating a structure of an electronic device 900 for input according to an example embodiment. For example, the electronic device 900 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 9, electronic device 900 may include one or more of the following components: a processing component 902, a memory 904, a power component 906, a multimedia component 908, an audio component 910, an input/output (I/O) interface 912, a sensor component 914, and a communication component 916.
The processing component 902 generally controls overall operation of the electronic device 900, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. Processing element 902 may include one or more processors 920 to execute instructions to perform all or a portion of the steps of the methods described above. Further, processing component 902 can include one or more modules that facilitate interaction between processing component 902 and other components. For example, the processing component 902 can include a multimedia module to facilitate interaction between the multimedia component 908 and the processing component 902.
The memory 904 is configured to store various types of data to support operation at the device 900. Examples of such data include instructions for any application or method operating on the electronic device 900, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 904 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power component 906 provides power to the various components of the electronic device 900. Power components 906 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for electronic device 900.
The multimedia components 908 include a screen that provides an output interface between the electronic device 900 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 908 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 900 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 910 is configured to output and/or input audio signals. For example, the audio component 910 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 900 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 904 or transmitted via the communication component 916. In some embodiments, audio component 910 also includes a speaker for outputting audio signals.
I/O interface 912 provides an interface between processing component 902 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 914 includes one or more sensors for providing status assessments of various aspects of the electronic device 900. For example, the sensor component 914 may detect the open/closed state of the device 900 and the relative positioning of components, such as the display and keypad of the electronic device 900; it may also detect a change in the position of the electronic device 900 or of a component of the electronic device 900, the presence or absence of user contact with the electronic device 900, the orientation or acceleration/deceleration of the electronic device 900, and a change in the temperature of the electronic device 900. The sensor component 914 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 916 is configured to facilitate wired or wireless communication between the electronic device 900 and other devices. The electronic device 900 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 916 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 916 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 900 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 904 comprising instructions, executable by the processor 920 of the electronic device 900 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium, wherein instructions of the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform any one of the input methods according to embodiments of the present invention.
Fig. 10 is a schematic structural diagram of an electronic device 1000 for input according to another exemplary embodiment of the present invention. The electronic device 1000 may be a server, whose configuration and performance may vary considerably; it may include one or more central processing units (CPUs) 1022 (e.g., one or more processors), a memory 1032, and one or more storage media 1030 (e.g., one or more mass storage devices) storing applications 1042 or data 1044. The memory 1032 and the storage medium 1030 may be transient or persistent storage. The program stored on the storage medium 1030 may include one or more modules (not shown), and each module may include a series of instruction operations for the server. Still further, the central processing unit 1022 may be configured to communicate with the storage medium 1030 and execute the series of instruction operations in the storage medium 1030 on the server.
The server may also include one or more power supplies 1026, one or more wired or wireless network interfaces 1050, one or more input-output interfaces 1058, one or more keyboards 1056, and/or one or more operating systems 1041, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
An electronic device comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs comprising operating instructions for performing the input method according to any of the embodiments of the present invention.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The input method, the input device and the electronic device provided by the invention are described in detail, and the principle and the implementation mode of the invention are explained by applying specific examples, and the description of the embodiments is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. An input method, comprising:
determining a currently selected target object in the input method panel; the target object is used for processing according to corresponding user-defined function operation when the touch operation is monitored to be the designated gesture operation;
and playing the voice information of the target object.
2. The method of claim 1, wherein determining the target object currently selected in the input method panel comprises:
starting an input method panel, and determining a currently selected target control in the input method panel;
and taking the target control as the currently selected target object.
3. The method of claim 2, wherein after the target control is treated as a currently selected target object, the method further comprises:
when touch control operation aiming at the target control is monitored, popping up a setting popup window corresponding to the target control;
when touch control operation aiming at a setting item in the setting popup window is monitored, taking the setting item as a target setting item, and processing a sub-control contained in the target setting item;
and when the touch operation outside the setting popup window is monitored, closing the setting popup window.
4. The method of claim 1, wherein determining the target object currently selected in the input method panel comprises:
acquiring input information input by a user through an input method panel;
acquiring a candidate item corresponding to the input information;
and determining a target candidate item from the candidate items, and taking the target candidate item as a currently selected target object.
5. The method of claim 4, wherein the playing the voice information of the target object comprises:
determining a play mode of the target candidate item according to the number of the characters of the target candidate item;
and playing the voice information of the target candidate item according to the playing mode.
6. The method of claim 5, wherein said determining a play mode of said target candidate according to said number of characters of said target candidate comprises:
when the number of characters of the target candidate item is less than or equal to a preset number, determining the play mode of the target candidate item as a first play mode; the first playing mode is that characters in the target candidate items and interpretation information corresponding to the characters are played one by one;
when the number of the characters of the target candidate item is larger than the preset number, determining the play mode of the target candidate item as a second play mode; and after all the characters of the target candidate item are played, the characters in the target candidate item and the interpretation information corresponding to the characters are played one by one in the second playing mode.
7. The method of claim 4, wherein the obtaining of the input information input by the user through the input method panel comprises:
monitoring the holding time of the touch operation on the space key of the input method panel;
when the holding time meets the preset response time, triggering a voice input mode;
and acquiring the voice information of the user as the input information of the user in the voice input mode.
8. An input device, comprising:
the determining module is used for determining a currently selected target object in the input method panel; the target object is used for processing according to corresponding user-defined function operation when the touch operation is monitored to be the designated gesture operation;
and the playing module is used for playing the voice information of the target object.
9. An electronic device comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs including instructions for:
determining a currently selected target object in the input method panel; the target object is used for processing according to corresponding user-defined function operation when the touch operation is monitored to be the designated gesture operation;
and playing the voice information of the target object.
10. A readable storage medium, characterized in that instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the input method according to any of method claims 1-7.
CN202010444416.8A 2020-05-22 2020-05-22 Input method and device and electronic equipment Pending CN111679746A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010444416.8A CN111679746A (en) 2020-05-22 2020-05-22 Input method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010444416.8A CN111679746A (en) 2020-05-22 2020-05-22 Input method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN111679746A true CN111679746A (en) 2020-09-18

Family

ID=72434253

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010444416.8A Pending CN111679746A (en) 2020-05-22 2020-05-22 Input method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111679746A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106055260A (en) * 2016-06-03 2016-10-26 深圳市联谛信息无障碍有限责任公司 Screen reading method and device of secure keyboard
CN108073291A (en) * 2016-11-09 2018-05-25 北京搜狗科技发展有限公司 A kind of input method and device, a kind of device for input

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114527920A (en) * 2020-10-30 2022-05-24 华为终端有限公司 Man-machine interaction method and electronic equipment
WO2022206477A1 (en) * 2021-03-29 2022-10-06 华为技术有限公司 Input method and terminal

Similar Documents

Publication Publication Date Title
EP3260967A1 (en) Method and apparatus for text selection
CN111831806B (en) Semantic integrity determination method, device, electronic equipment and storage medium
CN107291260B (en) Information input method and device for inputting information
CN108803892B (en) Method and device for calling third party application program in input method
CN111679746A (en) Input method and device and electronic equipment
CN108766427B (en) Voice control method and device
CN112068764B (en) Language switching method and device for language switching
CN112199032A (en) Expression recommendation method and device and electronic equipment
CN110908523A (en) Input method and device
CN111092971A (en) Display method and device for displaying
CN107340881B (en) Input method and electronic equipment
CN113946228A (en) Statement recommendation method and device, electronic equipment and readable storage medium
CN113035189A (en) Document demonstration control method, device and equipment
CN112148132A (en) Information setting method and device and electronic equipment
CN108227952B (en) Method and system for generating custom word and device for generating custom word
CN112306251A (en) Input method, input device and input device
CN112507162B (en) Information processing method, device, terminal and storage medium
CN113220208B (en) Data processing method and device and electronic equipment
CN110716653B (en) Method and device for determining association source
CN112199033B (en) Voice input method and device and electronic equipment
CN111722726B (en) Method and device for determining pigment and text
CN114527919B (en) Information display method and device and electronic equipment
CN111124142B (en) Input method, device and device for inputting
CN110580126B (en) Virtual keyboard and input method based on virtual keyboard
CN114201058A (en) Input method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination