CN111382402B - Character input method and device - Google Patents

Character input method and device

Info

Publication number
CN111382402B
CN111382402B (application CN201811625153.XA)
Authority
CN
China
Prior art keywords
character
input
characters
focusing frame
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811625153.XA
Other languages
Chinese (zh)
Other versions
CN111382402A (en)
Inventor
刘允庆
马海斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Artek Microelectronics Co Ltd
Original Assignee
Artek Microelectronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Artek Microelectronics Co Ltd filed Critical Artek Microelectronics Co Ltd
Priority to CN201811625153.XA priority Critical patent/CN111382402B/en
Publication of CN111382402A publication Critical patent/CN111382402A/en
Application granted granted Critical
Publication of CN111382402B publication Critical patent/CN111382402B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Input From Keyboards Or The Like (AREA)

Abstract

An embodiment of the invention provides a character input method comprising the following steps: displaying characters to be selected and a focusing frame on a display interface; recognizing a selection operation of the user on the characters to be selected, the selection operation causing a character to be selected to fall into the focusing frame and become the character to be input; and recognizing a confirmation operation of the user and inputting the character to be input. With this input method, more efficient character input can be achieved.

Description

Character input method and device
Technical Field
The application relates to the technical field of terminals, in particular to a character input method and device.
Background
Various existing login operations, network connections and the like require authentication: a password input box is popped up and, apart from physical keyboards, a virtual keyboard is almost always used. The input method brings up a corresponding on-screen keyboard, which is typically either a nine-grid keyboard or a full keyboard.
As shown in fig. 1, a conventional nine-grid keyboard of the prior art is disclosed in the Chinese patent CN1614541A, published on 2005.5.11, "Double-key character fast input method and application in remote controllers, mobile phones and telephones", which uses nine-grid input. On a nine-grid keyboard the keys are large, but multiple operations are needed: for example, to select "c" on the "abc" key, the key must be pressed three times in succession. A full keyboard is aimed mainly at letter input; switching to symbols requires pressing a mode button, and on a small screen neighbouring keys are easily triggered by mistake. Common login passwords and wifi passwords are combinations of letters, digits and special characters, so with a conventional keyboard the user must keep switching between letters, digits and symbols, as well as between upper and lower case. The input mode must be switched constantly, mis-touches are frequent, and input is inconvenient.
Fig. 2 shows a full-keyboard input mode commonly used in the prior art. Compared with the nine-grid, a full keyboard can display all letters at once, but frequent switching is still required when entering symbols and other special characters, and many users are not comfortable with the full-keyboard layout.
In summary, the authentication and login input methods of the prior art have significant drawbacks, and a new input method is needed.
Disclosure of Invention
In view of this, embodiments of the present invention mainly provide a character input method, which can implement efficient character input.
An embodiment of the invention provides a character input method comprising the following steps:
displaying characters to be selected and a focusing frame on a display interface;
identifying the selection operation of the user on the character to be selected, wherein the selection operation enables the character to be selected to fall into the focusing frame to become the character to be input;
and identifying the confirmation operation of the user and inputting the character to be input.
Optionally, the characters to be selected are presented on the display interface in a floating state, and the shape in which they are arranged is planar, arc-shaped, spherical, or cubic.
Optionally, the identifying of a selection operation of the user on the character to be selected includes:
identifying the moving operation of the user on the character to be selected; or,
identifying a movement operation of the user on the focusing frame; or,
identifying the moving operation of the user on the character to be selected and the moving operation on the focusing frame at the same time; or,
identifying a touch operation of the user on the focusing frame, and simultaneously identifying the inclination direction of the terminal device to move the character to be selected; or,
recognizing the inclination direction of the mobile terminal to move the focusing frame.
Optionally, when the character to be selected falls into the focus frame, the character to be selected is displayed in an enlarged manner.
Optionally, the recognizing of a confirmation operation of the user and inputting of the character to be input includes:
identifying a multi-point touch of the user, and inputting a plurality of identical characters to be input, wherein the number of the characters to be input is the same as the number of the user's touch points.
Optionally, the focus frame comprises a first focus frame and a second focus frame; the identifying of the selection operation of the user on the character to be selected comprises: identifying simultaneous sliding operations of at least two touch points of the user so that characters to be selected fall into the at least two corresponding focus frames respectively; and the recognizing of the confirmation operation of the user and inputting of the character to be input comprises: inputting first the character to be input corresponding to the focus frame whose touch is released first.
Optionally, the focus frame comprises a hold state;
when the focusing frame is in the holding state, sliding the focusing frame, and taking the sequence of the characters to be selected which are slid by the focusing frame as characters to be input; or,
when the focusing frame is in the holding state, the character to be input is kept fixed, and the focusing frame slides left and right or up and down to realize repeated input or deletion of the character to be input; or,
when the focusing frame is in the holding state, the characters to be input are kept fixed in the left-right direction, the up-down direction is not fixed, when the focusing frame slides in the up-down direction, the characters to be selected which slide through the focusing frame are sequentially used as characters to be input, and when the focusing frame slides in the left-right direction, repeated input or deletion of the characters to be input is realized; or,
when the focusing frame is in the holding state, the characters to be input are kept fixed in the vertical direction and the horizontal direction is not fixed, when the focusing frame slides in the horizontal direction, the characters to be selected which slide through the focusing frame are sequentially used as characters to be input, and when the focusing frame slides in the vertical direction, repeated input or deletion of the characters to be input is realized; or,
when the focusing frame is in the holding state, the focusing frame is moved to the position of any character to be selected through an operation on that character to be selected.
Optionally, when the focus frame is in the hold state, the characters to be selected are blurred except the characters falling into the focus frame.
Optionally, when the focus frame is in the hold state, the focus frame displays operation instructions in at least two directions, and performs operation prompts in each direction according to the function setting of the focus frame in the hold state.
According to another aspect of the embodiments of the present invention, there is also provided a character input device which can realize efficient character input. The character input device comprises:
a display device for displaying the characters to be selected and the focusing frame on a display interface;
a recognition device for recognizing the selection operation of the user on the characters to be selected, the selection operation causing a character to be selected to fall into the focusing frame and become the character to be input; and
a confirmation device for recognizing the confirmation operation of the user and inputting the character to be input.
According to the technical solution above, the embodiments of the invention have the following effects. The character input method provided by the invention allows fast input without frequently switching mode keys, case keys and the like, does not mistakenly trigger other characters because of an overly small screen, and is efficient and convenient. The keyboard layout and the method can be widely applied to login interfaces and other places where verification codes and passwords must be entered, without frequently switching the keyboard layout. Login and wifi-connection passwords can therefore be entered quickly and conveniently, and the method can also be applied to general character input, particularly English.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 shows an interface diagram of a nine-grid input mode in the prior art;
FIG. 2 illustrates an interface diagram of a full keyboard input mode in the prior art;
FIG. 3 is a flow chart illustrating an input method provided by an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating an input method provided by an embodiment of the application;
fig. 5 is a schematic diagram illustrating an input method according to another embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 3 shows a flowchart of a character input method provided in an embodiment of the present invention; the method includes the following steps:
s101, displaying characters to be selected and a focus frame on a display interface;
s102, identifying selection operation of a user on the character to be selected, wherein the selection operation enables the character to be selected to fall into the focusing frame to become a character to be input;
s103, identifying the confirmation operation of the user and inputting the character to be input.
The candidate characters are the 95 visible characters from 0x20 to 0x7e, in ASCII order, as listed in Table 1 below. The ASCII characters are used only for illustration: when inputting, any character or other inputtable content can be entered with the method of the embodiments of the present invention, and the characters contained in the primary index page and the secondary character pages of the table can be arranged arbitrarily.
Index page \ character page   Character 1  Character 2  Character 3  Character 4  Character 5  Character 6  Character 7  Character 8  Character 9
Index character 1             (space)      @            #            $            &            _            `
Index character 2             (            )            {            }            [            ]            <            >
Index character 3             0            1            2            3            4
Index character 4             5            6            7            8            9
Index character 5             +            -            *            /            |            \            ^
Index character 6             A            B            C            D            E            F            G            H            I
Index character 7             J            K            L            M            N            O            P            Q            R
Index character 8             S            T            U            V            W            X            Y            Z
Index character 9             a            b            c            d            e            f            g            h            i
Index character 10            j            k            l            m            n            o            p            q            r
Index character 11            s            t            u            v            w            x            y            z
Index character 12            :            ,            .            "            '
TABLE 1
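As a simple illustration of how the candidate set of Table 1 could be produced, the sketch below generates the 95 visible ASCII characters and splits them into pages of at most nine characters. The fixed page size and sequential grouping are assumptions; as stated above, the patent allows the characters to be arranged arbitrarily.

```python
# Illustrative only: build the 95 visible ASCII characters (0x20..0x7e) and group
# them into index pages of up to nine characters each, loosely mirroring Table 1.

def candidate_characters():
    return [chr(code) for code in range(0x20, 0x7F)]  # 95 characters, space included

def index_pages(chars, page_size=9):
    return [chars[i:i + page_size] for i in range(0, len(chars), page_size)]

chars = candidate_characters()
pages = index_pages(chars)
print(len(chars))   # 95
print(len(pages))   # 11 index pages
print(pages[0])     # [' ', '!', '"', '#', '$', '%', '&', "'", '(']
```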
Taking fig. 4 as a specific example: characters 1012 to be selected and a focusing frame 1013 are displayed on the display interface of the touch screen, the characters to be selected floating on the display interface in a specific shape, for example the arc shape of fig. 4. The selection operation of the user 1052 on the characters to be selected is then recognized, so that a character to be selected falls into the focusing frame and becomes the character to be input. Finally, the confirmation operation of the user is recognized and the character to be input corresponding to the focusing frame at that moment is entered, for example by recognizing the user's touch on the confirmation key 1031 of the display interface as confirmation of the input.
In the above embodiment, the page of characters to be selected is presented on the display interface in a floating state. Its shape may be a rectangular plane, a square plane, an arc surface, a sphere, a cuboid or a cube. When it is a plane or an arc surface, all characters are distributed, uniformly or non-uniformly, over that surface; when it is spherical, all characters are distributed, uniformly or non-uniformly, over the spherical surface; when it is a cube or a cuboid, the corresponding characters can be displayed on each face according to a predetermined setting.
In the above embodiment, recognizing the selection operation of the user on the characters to be selected, so that a character to be selected falls into the focusing frame and becomes the character to be input, may specifically use the following modes: 1. recognizing the movement operation of the user on the characters to be selected, where the selection operation drags the page of characters to be selected so that a character falls into the focusing frame; 2. recognizing the movement operation of the user on the focusing frame, where the character page is fixed and the focusing frame is dragged, the character then inside the focusing frame being set as the character to be input; 3. recognizing the movement operation of the user on the characters to be selected and the movement operation on the focusing frame at the same time, where one hand moves the character page and the other hand moves the focusing frame so that a character falls into the focusing frame; 4. recognizing a touch operation of the user on the focusing frame while recognizing the tilt direction of the terminal device to move the characters to be selected, a mode which uses the gyroscope of the mobile terminal and the recognized rotation angle to control the movement of the character page; 5. recognizing the tilt direction of the mobile terminal to move the focusing frame, a mode which needs no selection through touch-screen contact at all: the gyroscope of the mobile terminal is used, its rotation angle is recognized to control the movement of the focusing frame, and the focusing frame finally settles on the character to be input.
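Modes 4 and 5 both derive movement from the device tilt reported by a gyroscope. The sketch below is a rough illustration under assumed names (apply_tilt, sensitivity, the position tuples); it is not an actual sensor API, and the mapping from tilt angle to displacement is arbitrary.

```python
# Hypothetical sketch of selection modes 4 and 5: the tilt angle moves either the
# candidate-character page (mode 4, while the focus frame is touched) or the focus
# frame itself (mode 5, when nothing is touched). All names are illustrative.

def apply_tilt(tilt_x, tilt_y, page_pos, frame_pos, frame_touched, sensitivity=0.5):
    """Return updated (page_pos, frame_pos) after one gyroscope sample."""
    dx, dy = tilt_x * sensitivity, tilt_y * sensitivity
    if frame_touched:
        # Mode 4: the user holds the focus frame, so the tilt slides the character page.
        page_pos = (page_pos[0] + dx, page_pos[1] + dy)
    else:
        # Mode 5: nothing is touched, so the tilt moves the focus frame instead.
        frame_pos = (frame_pos[0] + dx, frame_pos[1] + dy)
    return page_pos, frame_pos

# Tilting 10 degrees to the right while holding the frame slides the page right.
page, frame = apply_tilt(10.0, 0.0, (0.0, 0.0), (100.0, 40.0), frame_touched=True)
print(page, frame)  # (5.0, 0.0) (100.0, 40.0)
```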
The invention provides a preferred embodiment in which, on the basis of the above embodiments, when the character to be selected falls into the focusing frame it is displayed enlarged inside the focusing frame, which is very suitable when the display screen is small or the user wants the character to appear larger. Of course, the enlarged character may also be displayed at another location.
In a preferred embodiment of the present invention, the confirmation operation of the user is a multi-point touch, and a plurality of identical characters to be input are entered, the number of entered characters being the same as the number of the user's touch points. Specifically, multi-point touch here refers to several simultaneous touch points recognized on the confirmation key, upon which several identical characters to be input are entered: for example, if the character to be input corresponding to the current focusing frame is "x" and the user is recognized as touching the confirmation key with 3 fingers, then "xxx" is entered continuously. The confirmation operation of the user may also be recognized, without a specially provided confirmation key, by the user releasing the touch screen after selecting a character.
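A one-function sketch of this multi-point confirmation follows; the representation of touch points is an assumption (any collection whose length is the finger count would do).

```python
# Illustrative only: a confirmation performed with N simultaneous touch points
# enters the focused character N times ("x" confirmed with three fingers -> "xxx").

def confirm(focused_char, touch_points, typed):
    typed.append(focused_char * len(touch_points))
    return typed

typed = confirm("x", touch_points=[(10, 20), (30, 22), (52, 19)], typed=[])
print("".join(typed))  # xxx
```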
The present invention also provides a preferred embodiment. As shown in fig. 5, there are two focusing frames, a first focusing frame 1013 and a second focusing frame 1014. Recognizing the selection operation of the user on the characters to be selected comprises recognizing simultaneous sliding operations of at least two touch points so that characters to be selected fall into the corresponding focusing frames respectively; recognizing the confirmation operation of the user and inputting the character to be input comprises inputting first the character corresponding to the focusing frame whose touch is released first. Specifically, the sliding operations of the user on the first focusing frame 1013 and the second focusing frame 1014 are recognized separately so that characters to be selected fall into the two focusing frames, and the user can prepare two characters at the same time. The character corresponding to the focusing frame whose touch is released first is entered first. For example, if the user is recognized as moving the first focusing frame to the character "9" and the second focusing frame to the character "b", and the touch corresponding to the second focusing frame is released first, then the character "b" is entered first and the character "9" afterwards. In this state the two focusing frames may of course also be fixed and the page of characters to be selected slid instead; since the positions of the two focusing frames are then fixed, the characters to be input follow a fixed pattern.
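The ordering rule of this two-frame embodiment (the frame whose touch is released first is entered first) can be sketched as follows. The event tuples and the idea of keying frames by touch id are assumptions for illustration, not the patented implementation.

```python
# Illustrative sketch of the two-focus-frame embodiment: each active touch drives one
# focus frame; releasing a touch confirms the character currently in that frame.

def dual_frame_input(events):
    focused = {}   # touch id -> character currently inside that touch's focus frame
    typed = []
    for kind, touch_id, char in events:
        if kind == "move":        # sliding a focus frame over a candidate character
            focused[touch_id] = char
        elif kind == "release":   # releasing the touch enters that frame's character
            typed.append(focused.pop(touch_id))
    return "".join(typed)

# First frame moved to "9", second frame moved to "b"; the second touch is released
# first, so "b" is entered before "9", as in the example of fig. 5.
events = [("move", 1, "9"), ("move", 2, "b"),
          ("release", 2, None), ("release", 1, None)]
print(dual_frame_input(events))  # b9
```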
The invention also provides a preferred embodiment in which the focusing frame has a hold state. The hold state can be entered through a dedicated hold key, or by double-clicking, triple-clicking or long-pressing the touch screen. The hold key may carry some or all of the functions below, which can be set as desired when designing a specific product.
When the focusing frame is in the hold state, the focusing frame is slid, and the sequence of characters to be selected over which it slides is taken as the characters to be input. For example, in fig. 4, when the focusing frame is in the hold state and is recognized as sliding continuously over the characters "abcdml", the string "abcdml" can be entered continuously without performing the confirmation operation several times.
When the focusing frame is in the hold state, the character to be input is kept fixed, and sliding the focusing frame left and right or up and down repeatedly inputs or deletes the character to be input. For example, if the focusing frame is currently on the character "b", several "b" characters can be entered in succession by sliding left and right, producing "bbbbbb"; sliding up and down instead deletes the entered characters one after another.
When the focusing frame is in the hold state, the character to be input is kept fixed in the left-right direction but not in the up-down direction: sliding the focusing frame up and down takes the characters it slides over, in sequence, as characters to be input, while sliding it left and right repeatedly inputs or deletes the character to be input. For example, when the character "X" falls into the focusing frame and a user operation is recognized, the focusing frame enters the hold state; the left-right direction being fixed means that dragging the focusing frame left or right cannot change the character in the frame but can only input "X" or delete entered characters (which direction inputs and which deletes can be configured), while dragging the frame up and down enters, in sequence, the characters that fall into it.
When the focusing frame is in the hold state, the character to be input is kept fixed in the up-down direction but not in the left-right direction: sliding the focusing frame left and right takes the characters it slides over, in sequence, as characters to be input, while sliding it up and down repeatedly inputs or deletes the character to be input. This embodiment differs from the previous one only in the fixed direction and is not described again.
When the focusing frame is in the hold state, an operation on any character to be selected moves the focusing frame to that character's position. For example, when the character "X" falls into the focusing frame and a user operation is recognized, the focusing frame enters the hold state; if, in the hold state, an operation on the character "Y" is recognized, such as a click, the focusing frame moves directly to "Y".
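The axis-locked variant described above (left-right fixed, up-down free) can be sketched as a single drag handler. The direction-to-action assignments below are one possible configuration chosen only for illustration; as noted, which direction inputs and which deletes can be set freely.

```python
# Illustrative sketch of the hold-state variant in which the character is locked in
# the left-right direction: vertical drags step through the candidates and enter them,
# horizontal drags repeat (right) or delete (left) the character. Configurable in practice.

def hold_state_drag(dx, dy, chars, index, typed):
    """Apply one drag step while the focus frame is in the hold state."""
    if abs(dy) > abs(dx):                  # vertical slide: move to the next/previous candidate
        index = (index + (1 if dy > 0 else -1)) % len(chars)
        typed.append(chars[index])         # the character slid over is entered in sequence
    elif dx > 0:                           # slide right: repeat the current character
        typed.append(chars[index])
    elif dx < 0 and typed:                 # slide left: delete the last entered character
        typed.pop()
    return index, typed

chars = list("abcdefghij")
index, typed = 0, ["a"]                    # "a" is already in the frame and entered
for dx, dy in [(0, 1), (0, 1), (1, 0), (-1, 0)]:   # down, down, right, left
    index, typed = hold_state_drag(dx, dy, chars, index, typed)
print("".join(typed))  # abc  (a, b, c entered; the repeated c was deleted again)
```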
In one embodiment, when the focus frame is in the hold state, the characters to be selected are blurred except the character falling into the focus frame. Note that, in general, the blurring is slight to indicate that the hold state is entered and to highlight the character falling into the focus frame.
In one embodiment, when the focusing frame is in the hold state, it displays operation indications in at least two directions, the prompt in each direction corresponding to the function assigned to the focusing frame in the hold state. For example, when entering the hold state, the focusing frame displays a cross-shaped indication of up, down, left and right: the left and right directions prompt sliding input, the up direction prompts input of the current character to be input, and the down direction prompts deletion of an entered character.
According to another aspect of the embodiments of the present invention, there is also provided a character input device which can realize efficient character input. The character input device comprises:
a display device for displaying the characters to be selected and the focusing frame on a display interface;
a recognition device for recognizing the selection operation of the user on the characters to be selected, the selection operation causing a character to be selected to fall into the focusing frame and become the character to be input; and
a confirmation device for recognizing the confirmation operation of the user and inputting the character to be input.
The device corresponds to the above-mentioned embodiment of the input method, and is not described herein again.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by a person skilled in the art that the scope of the invention according to the present application is not limited to the specific combination of the above-mentioned features, but also covers other embodiments where any combination of the above-mentioned features or their equivalents is made without departing from the inventive concept. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (9)

1. A character input method, comprising:
displaying characters to be selected and a focusing frame on a display interface;
identifying the selection operation of the user on the character to be selected, wherein the selection operation enables the character to be selected to fall into the focusing frame to become the character to be input;
identifying the confirmation operation of a user and inputting the character to be input;
the focus frame comprises a hold state;
when the focusing frame is in the holding state, sliding the focusing frame, and taking the sequence of the characters to be selected which are slid by the focusing frame as characters to be input; or,
when the focusing frame is in the holding state, the character to be input is kept fixed, and the focusing frame is slid left and right or up and down to realize repeated input or deletion of the character to be input; or,
when the focusing frame is in the holding state, the characters to be input are kept fixed in the left-right direction, the up-down direction is not fixed, when the focusing frame slides in the up-down direction, the characters to be selected which slide through the focusing frame are sequentially used as characters to be input, and when the focusing frame slides in the left-right direction, repeated input or deletion of the characters to be input is realized; or,
when the focusing frame is in the holding state, the characters to be input are kept fixed in the vertical direction, the left and right directions are not fixed, when the focusing frame slides in the left and right directions, the characters to be selected which slide through the focusing frame are sequentially used as the characters to be input, and when the focusing frame slides in the vertical direction, repeated input or deletion of the characters to be input is realized; or,
when the focusing frame is in the holding state, the focusing frame is moved to the position of any character to be selected through the operation on any character to be selected.
2. The character input method according to claim 1, wherein the characters to be selected are presented on the display interface in a floating state, and the shape of the characters to be selected is planar, arc-shaped, spherical, or cubic.
3. The character input method according to claim 1, wherein the recognizing of the selection operation of the user on the character to be selected comprises:
identifying the moving operation of the user on the character to be selected; or,
identifying a movement operation of a user on the focusing frame; or,
identifying the moving operation of the user on the character to be selected and the moving operation on the focusing frame at the same time; or,
identifying touch operation of a user on the focusing frame, and simultaneously identifying the inclination direction of the terminal equipment to move the character to be selected; or,
recognizing a tilt direction of the mobile terminal to move the focus frame.
4. The character input method according to claim 1, wherein when the character to be selected falls in the focus frame, the character to be selected is displayed enlarged.
5. The character input method according to claim 1, wherein the recognizing a confirmation operation of a user and inputting the character to be input includes:
and identifying the multi-point touch of the user, and inputting a plurality of identical characters to be input, wherein the number of the characters to be input is identical to the number of the touch points of the user.
6. The character input method according to claim 1, wherein the focus frame includes a first focus frame and a second focus frame; the identifying of the selection operation of the user on the character to be selected comprises: identifying simultaneous sliding operation of at least two touch points of a user so as to enable the characters to be selected to fall into at least two corresponding focusing frames respectively; and the recognizing of the confirmation operation of the user and inputting the character to be input includes: recognizing the character to be input corresponding to the focus frame whose touch is released first, and completing its input.
7. The character input method according to claim 1, wherein when the focus frame is in a hold state, the characters to be selected other than the characters falling into the focus frame are blurred.
8. The character input method according to claim 1, wherein when the focus frame is in a hold state, the focus frame displays operation instructions in at least two directions, and operation prompts are performed in each direction in accordance with a function setting of the focus frame in the hold state.
9. A character input device comprising:
the display device is used for displaying the characters to be selected and the focusing frame on the display interface;
the recognition device is used for recognizing the selection operation of the user on the character to be selected, and the selection operation enables the character to be selected to fall into the focusing frame to become a character to be input;
the confirming device is used for identifying the confirming operation of the user and inputting the character to be input;
the focus frame comprises a hold state;
when the focusing frame is in the holding state, sliding the focusing frame, and taking the sequence of the characters to be selected which are slid by the focusing frame as characters to be input; or,
when the focusing frame is in the holding state, the character to be input is kept fixed, and the focusing frame slides left and right or up and down to realize repeated input or deletion of the character to be input; or,
when the focusing frame is in the holding state, the characters to be input are kept fixed in the left-right direction, the up-down direction is not fixed, when the focusing frame slides in the up-down direction, the characters to be selected which slide through the focusing frame are sequentially used as characters to be input, and when the focusing frame slides in the left-right direction, repeated input or deletion of the characters to be input is realized; or,
when the focusing frame is in the holding state, the characters to be input are kept fixed in the vertical direction, the left and right directions are not fixed, when the focusing frame slides in the left and right directions, the characters to be selected which slide through the focusing frame are sequentially used as the characters to be input, and when the focusing frame slides in the vertical direction, repeated input or deletion of the characters to be input is realized; or,
when the focusing frame is in the holding state, the focusing frame is moved to the position of any character to be selected through the operation on any character to be selected.
CN201811625153.XA 2018-12-28 2018-12-28 Character input method and device Active CN111382402B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811625153.XA CN111382402B (en) 2018-12-28 2018-12-28 Character input method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811625153.XA CN111382402B (en) 2018-12-28 2018-12-28 Character input method and device

Publications (2)

Publication Number Publication Date
CN111382402A CN111382402A (en) 2020-07-07
CN111382402B (en) 2022-11-29

Family

ID=71220545

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811625153.XA Active CN111382402B (en) 2018-12-28 2018-12-28 Character input method and device

Country Status (1)

Country Link
CN (1) CN111382402B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114968003B (en) * 2022-04-20 2023-08-11 中电信数智科技有限公司 Verification code input method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010016065A1 (en) * 2008-08-08 2010-02-11 Moonsun Io Ltd. Method and device of stroke based user input
KR20110109133A (en) * 2010-03-30 2011-10-06 삼성전자주식회사 Method and apparatus for providing character inputting virtual keypad in a touch terminal

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4437548B2 (en) * 2005-12-09 2010-03-24 ソニー株式会社 Music content display device, music content display method, and music content display program
CN101908284A (en) * 2010-08-17 2010-12-08 王永民 Jumping learning machine of Wangma computer
CN102023806B (en) * 2010-12-17 2013-03-06 广东威创视讯科技股份有限公司 Input method for touch screen
CN102999288A (en) * 2011-09-08 2013-03-27 北京三星通信技术研究有限公司 Input method and keyboard of terminal
US10275152B2 (en) * 2014-10-28 2019-04-30 Idelan, Inc. Advanced methods and systems for text input error correction
JP7067124B2 (en) * 2018-03-05 2022-05-16 京セラドキュメントソリューションズ株式会社 Display input device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010016065A1 (en) * 2008-08-08 2010-02-11 Moonsun Io Ltd. Method and device of stroke based user input
KR20110109133A (en) * 2010-03-30 2011-10-06 삼성전자주식회사 Method and apparatus for providing character inputting virtual keypad in a touch terminal

Also Published As

Publication number Publication date
CN111382402A (en) 2020-07-07

Similar Documents

Publication Publication Date Title
US20100241985A1 (en) Providing Virtual Keyboard
CN103064629B (en) It is adapted dynamically mancarried electronic aid and the method for graphical control
KR101589994B1 (en) Input method, input apparatus and terminal device
KR102091235B1 (en) Apparatus and method for editing a message in a portable terminal
EP1873620A1 (en) Character recognizing method and character input method for touch panel
CN103543947B (en) The method and system of input content are corrected on the electronic equipment with touch-screen
US20150253870A1 (en) Portable terminal
EP2629180A2 (en) Mobile terminal having a multifaceted graphical object and method for performing a display switching operation
JPH1127368A (en) Mobile station with contact sensing input having automatic symbol magnification function
KR20120079812A (en) Information processing apparatus, information processing method, and computer program
CN107272881B (en) Information input method and device, input method keyboard and electronic equipment
US9274702B2 (en) Drawing device, drawing control method, and drawing control program for drawing graphics in accordance with input through input device that allows for input at multiple points
CN104679278A (en) Character input method and device
JP2014238755A (en) Input system, input method, and smartphone
CN107509098A (en) Multi-lingual characters input method and device based on dummy keyboard
CN111382402B (en) Character input method and device
KR20160019762A (en) Method for controlling touch screen with one hand
JP5916573B2 (en) Display device, control method, control program, and recording medium
KR101872879B1 (en) Keyboard for typing chinese character
KR20090131423A (en) Character input apparatus, and method thereof
JP2014081800A (en) Handwriting input device and function control program
WO2013047023A1 (en) Display apparatus, display method, and program
CN104714739B (en) Information processing method and electronic equipment
CN106774991A (en) The processing method of input data, device and keyboard
CN106648437A (en) Keyboard switching method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant