CN111382402A - Character input method and device - Google Patents

Character input method and device

Info

Publication number
CN111382402A
CN111382402A (application CN201811625153.XA; granted as CN111382402B)
Authority
CN
China
Prior art keywords
character
input
characters
frame
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811625153.XA
Other languages
Chinese (zh)
Other versions
CN111382402B (en)
Inventor
刘允庆
马海斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jucai Microelectronics Shenzhen Co ltd
Original Assignee
Jucai Microelectronics Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jucai Microelectronics Shenzhen Co ltd filed Critical Jucai Microelectronics Shenzhen Co ltd
Priority to CN201811625153.XA priority Critical patent/CN111382402B/en
Publication of CN111382402A publication Critical patent/CN111382402A/en
Application granted granted Critical
Publication of CN111382402B publication Critical patent/CN111382402B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus

Abstract

An embodiment of the invention provides a character input method comprising the following steps: displaying characters to be selected and a focus frame on a display interface; identifying the user's selection operation on a character to be selected, the selection operation causing that character to fall into the focus frame and become the character to be input; and identifying the user's confirmation operation and inputting the character to be input. With this input method, characters can be entered more efficiently.

Description

Character input method and device
Technical Field
The application relates to the technical field of terminals, in particular to a character input method and device.
Background
Many existing operations, such as logins and network connections, require authentication: a password input box pops up, and apart from physical keyboards a virtual keyboard is almost always used, typically laid out as a nine-grid (9-key) panel or a full keyboard.
Fig. 1 shows a conventional nine-grid keyboard of the prior art, as in Chinese patent CN1614541A, published 2005-05-11, "Double-key character fast input method and application in remote controllers, mobile phones and telephones", which inputs in nine-grid mode. The nine-grid keyboard has large keys but requires multiple operations: to select "c" on the key "abc", the key must be pressed three times in succession. A full keyboard is laid out mainly around letter keys, so other characters require a mode-switch button, and on a small screen the small keys are easily mis-triggered. Common login or Wi-Fi passwords mix letters, digits, and special symbols, so entering them on a conventional keyboard means repeatedly switching between letters, symbols, and upper or lower case. The constant mode switching, and the mis-triggering it invites, make input inconvenient.
Fig. 2 shows a full-keyboard input mode commonly used in the prior art. Compared with the nine-grid, a full keyboard can display all the letters, but symbols and other special characters still require frequent switching, and many users are not comfortable with it.
In summary, the prior-art input methods for authentication and login have significant problems, and a new input method is urgently needed.
Disclosure of Invention
In view of the above, embodiments of the present invention mainly aim to provide a character input method, which can realize efficient character input.
An embodiment of the invention provides a character input method comprising the following steps:
displaying characters to be selected and a focus frame on a display interface;
identifying the selection operation of the user on the character to be selected, wherein the selection operation enables the character to be selected to fall into the focusing frame to become the character to be input;
and identifying the confirmation operation of the user and inputting the character to be input.
Optionally, the characters to be selected are presented on the display interface in a floating state and may be arranged as a plane, an arc surface, a sphere, or a cube.
Optionally, the identifying of the selection operation of the user on the character to be selected comprises:
identifying a moving operation by the user on the character to be selected; or,
identifying a moving operation by the user on the focus frame; or,
identifying a moving operation on the character to be selected and a moving operation on the focus frame at the same time; or,
identifying a touch operation by the user on the focus frame while identifying the tilt direction of the terminal device to move the character to be selected; or,
identifying the tilt direction of the mobile terminal to move the focus frame.
Optionally, when the character to be selected falls into the focus frame, the character to be selected is displayed in an enlarged manner.
Optionally, the recognizing of the user's confirmation operation and inputting of the character to be input comprises:
identifying a multi-point touch by the user and inputting that many identical copies of the character to be input, the number of characters input being equal to the number of the user's touch points.
Optionally, the focus frame comprises a first focus frame and a second focus frame. Identifying the user's selection operation comprises: identifying simultaneous sliding operations at at least two touch points so that characters to be selected fall into the at least two corresponding focus frames respectively. Recognizing the confirmation operation and inputting comprises: the character to be input corresponding to the focus frame whose touch is released first is input first.
Optionally, the focus frame comprises a hold state;
when the focus frame is in the hold state, the focus frame is slid, and the characters to be selected that it slides across are taken, in order, as characters to be input; or,
when the focus frame is in the hold state, the character to be input is held fixed, and sliding the focus frame left-right or up-down repeatedly inputs or deletes the character to be input; or,
when the focus frame is in the hold state, the character to be input is held fixed in the left-right direction but not in the up-down direction; when the focus frame slides up or down, the characters to be selected that it slides across are taken in order as characters to be input, and when it slides left or right, the character to be input is repeatedly input or deleted; or,
when the focus frame is in the hold state, the character to be input is held fixed in the up-down direction but not in the left-right direction; when the focus frame slides left or right, the characters to be selected that it slides across are taken in order as characters to be input, and when it slides up or down, the character to be input is repeatedly input or deleted; or,
when the focus frame is in the hold state, an operation on any character to be selected moves the focus frame to that character's position.
Optionally, when the focus frame is in the hold state, all characters to be selected other than the character inside the focus frame are blurred.
Optionally, when the focus frame is in the hold state, the focus frame displays operation indicators in at least two directions, the prompt in each direction corresponding to the function assigned to that direction in the hold state.
According to another aspect of the embodiments of the present invention, there is also provided a character input device capable of efficient character input, comprising:
the display device is used for displaying the characters to be selected and the focusing frame on the display interface;
the recognition device is used for recognizing the selection operation of the user on the character to be selected, and the selection operation enables the character to be selected to fall into the focusing frame to become the character to be input;
and the confirmation device is used for identifying the confirmation operation of the user and inputting the character to be input.
According to the above technical scheme, the embodiments of the invention have the following effects: the character input method of the invention allows fast input without frequent switching of shift, case, or symbol keys, and avoids mis-triggering other characters on a screen that is too small; it is efficient and convenient. The keyboard layout and method can be widely applied wherever verification codes or passwords must be entered, such as login interfaces, without repeatedly switching keyboard layouts. Login and Wi-Fi passwords can therefore be entered quickly and conveniently, and the method can also be applied to general character input, particularly English.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 illustrates a prior-art interface diagram of a nine-grid input mode;
FIG. 2 illustrates an interface diagram of a full keyboard input mode in the prior art;
FIG. 3 is a flow chart illustrating an input method provided by an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating an input method provided by an embodiment of the present application;
fig. 5 is a schematic diagram illustrating an input method according to another embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
As shown in fig. 3, a flowchart of a character input method provided in an embodiment of the present invention includes the following steps:
s101, displaying characters to be selected and a focusing frame on a display interface;
s102, identifying the selection operation of the user on the character to be selected, wherein the selection operation enables the character to be selected to fall into the focusing frame to become the character to be input;
s103, identifying the confirmation operation of the user and inputting the character to be input.
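The three steps S101-S103 can be sketched as a small state machine. This is a hypothetical illustration only: the class and method names are assumptions, not from the patent.

```python
class CharacterInput:
    """Illustrative sketch of the S101-S103 flow (names are assumed)."""

    def __init__(self, candidates):
        self.candidates = candidates  # S101: characters shown on the interface
        self.focused = None           # character currently inside the focus frame
        self.buffer = []              # characters already input

    def select(self, ch):
        """S102: a selection gesture moves `ch` into the focus frame."""
        if ch in self.candidates:
            self.focused = ch

    def confirm(self):
        """S103: a confirmation gesture inputs the focused character."""
        if self.focused is not None:
            self.buffer.append(self.focused)

ci = CharacterInput(list("abc123"))
ci.select("b")
ci.confirm()
print("".join(ci.buffer))  # -> b
```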
The 95 visible characters 0x20-0x7E of the ASCII table, listed in order in Table 1 below, serve as the candidate characters. The ASCII characters are only an illustration: with the method of the embodiments of the present invention, any character or inputtable content may be entered, and the characters in the first-level index page and the second-level character pages below may be arranged arbitrarily.
Index page \ character page: Character 1 to Character 9
Index character 1: (space) @ # $ & _ `
Index character 2: ( ) { } [ ] < >
Index character 3: 0 1 2 3 4
Index character 4: 5 6 7 8 9
Index character 5: + - * / | \ ^
Index character 6: A B C D E F G H I
Index character 7: J K L M N O P Q R
Index character 8: S T U V W X Y Z
Index character 9: a b c d e f g h i
Index character 10: j k l m n o p q r
Index character 11: s t u v w x y z
Index character 12: : , . " '
TABLE 1
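The 95-character candidate set above can be generated directly from the ASCII range. A minimal sketch (the page size of 9 and the ordering are illustrative assumptions; the patent allows arbitrary arrangements):

```python
# All 95 visible ASCII characters, 0x20 (space) through 0x7E (~).
candidates = [chr(c) for c in range(0x20, 0x7F)]
assert len(candidates) == 95

# Split into second-level character pages of up to 9 characters each,
# matching the 9-column layout of Table 1 (an assumed page size).
pages = [candidates[i:i + 9] for i in range(0, len(candidates), 9)]
print(len(pages))  # -> 11 (ten full pages of 9, plus one page of 5)
```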
Taking fig. 4 as a specific example: characters to be selected 1012 and a focus frame 1013 are displayed on the touch-screen display interface, the characters floating in a particular shape, such as the arc shape of fig. 4. The selection operation of user 1052 on a character to be selected is recognized, so that the character falls into the focus frame and becomes the character to be input. Finally, the user's confirmation operation is recognized and the character corresponding to the focus frame is input, for example by recognizing the user's touch on the confirm key 1031 of the display interface.
In the above embodiment, the page of characters to be selected is presented on the display interface in a floating state and may be shaped as a rectangular or square plane, an arc surface, a sphere, a cuboid, or a cube. When planar or arc-shaped, all the characters are distributed, uniformly or non-uniformly, over the plane or arc surface; when spherical, over the spherical surface; when a cube or cuboid, the corresponding characters can be displayed on each face according to a preset arrangement.
In the above embodiment, the user's selection operation on the character to be selected is recognized so that the character falls into the focus frame and becomes the character to be input. This may be done in the following ways:
1. identifying the user's moving operation on the character to be selected: the selection operation drags the page of characters so that the chosen character falls into the focus frame;
2. identifying the user's moving operation on the focus frame: the character page stays fixed while the focus frame is dragged, and the character inside the frame becomes the character to be input;
3. identifying moving operations on both the character page and the focus frame at the same time: one hand moves the character page while the other moves the focus frame, until the chosen character falls into the frame;
4. identifying a touch operation on the focus frame while identifying the tilt direction of the terminal device to move the character page: this mode uses the mobile terminal's gyroscope, recognizing the rotation angle to control the movement of the character page;
5. identifying the tilt direction of the mobile terminal to move the focus frame: this mode needs no touch contact; the gyroscope recognizes the terminal's rotation angle to control the focus frame, which finally settles on the character to be input.
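Whichever element is dragged (the character page, the focus frame, or both), the selection ultimately reduces to a hit test: which character currently lies inside the focus frame. A minimal sketch, assuming axis-aligned rectangles `(x, y, width, height)` for both characters and frame (an assumption for illustration):

```python
def center(rect):
    """Centre point of a rectangle (x, y, w, h)."""
    x, y, w, h = rect
    return (x + w / 2, y + h / 2)

def in_focus(char_rect, frame_rect):
    """True if the character's centre lies inside the focus frame."""
    cx, cy = center(char_rect)
    fx, fy, fw, fh = frame_rect
    return fx <= cx <= fx + fw and fy <= cy <= fy + fh

frame = (100, 100, 40, 40)
print(in_focus((110, 110, 20, 20), frame))  # -> True  (centre 120,120 inside)
print(in_focus((10, 10, 20, 20), frame))    # -> False (centre 20,20 outside)
```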
The invention provides a preferred embodiment in which, on the basis of the above, the character to be selected is displayed enlarged inside the focus frame when it falls into the frame. This suits small display screens, or users who prefer larger characters. The enlargement may of course appear elsewhere instead.
In another preferred embodiment, the user's confirmation is given by multi-point touch, and several identical characters to be input are entered, the count equal to the number of touch points. Specifically, multi-point touch here means several simultaneous touches recognized on the confirm key: if the character currently in the focus frame is "x" and the user is recognized touching the confirm key with three fingers, then "xxx" is input. Alternatively, the confirmation operation may be recognized without a dedicated confirm key, by detecting that the user releases the touch screen after selecting the character.
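The multi-touch confirmation above amounts to repeating the focused character once per simultaneous touch point. A hedged sketch (function and parameter names are illustrative assumptions):

```python
def confirm_multi_touch(focused_char, touch_points):
    """Return the string entered when len(touch_points) fingers press confirm:
    one copy of the focused character per simultaneous touch point."""
    return focused_char * len(touch_points)

# Three fingers on the confirm key while "x" is in the focus frame -> "xxx"
print(confirm_multi_touch("x", [(10, 5), (30, 5), (50, 5)]))  # -> xxx
```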
The present invention also provides a preferred embodiment, shown in fig. 5, with two focus frames: a first focus frame 1013 and a second focus frame 1014. Identifying the user's selection operation comprises: identifying simultaneous sliding operations at at least two touch points so that characters to be selected fall into the at least two corresponding focus frames respectively. Recognizing the confirmation operation and inputting comprises: the character corresponding to the focus frame whose touch is released first is input first. Specifically, the user's sliding operations on the first focus frame 1013 and the second focus frame 1014 are recognized separately so that a character to be selected falls into each frame, allowing the user to prepare two characters at once. For example, if the user is recognized moving the first focus frame to the character "9" and the second focus frame to the character "b", and the touch corresponding to the second focus frame is released first, then "b" is input first and "9" is input afterwards. The two focus frames may instead be held fixed while the character page is slid; in that case, since the frame positions are fixed, the characters to be input likewise occupy fixed positions.
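The release-order rule in the two-frame scheme can be sketched as a sort over release timestamps. The event representation below (dicts keyed by frame id) is an assumption for illustration, not the patent's own data model:

```python
def input_order(frames, release_times):
    """frames: {frame_id: char held in that focus frame};
    release_times: {frame_id: time the touch on that frame was released}.
    Characters are input in release order (earliest released first)."""
    order = sorted(frames, key=lambda fid: release_times[fid])
    return "".join(frames[fid] for fid in order)

# First frame on "9", second frame on "b"; the second touch is released first.
print(input_order({1: "9", 2: "b"}, {1: 2.0, 2: 1.5}))  # -> b9
```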
The invention also provides a preferred embodiment in which the focus frame has a hold state, entered via a dedicated hold key or by double-clicking, triple-clicking, or long-pressing the touch screen. The hold key may carry some or all of the functions below, set as desired when designing a specific product.
When the focus frame is in the hold state, sliding it takes the characters to be selected that it slides across, in order, as characters to be input. For example, in fig. 4, if the held focus frame is recognized sliding continuously across the characters "abcdml", the string "abcdml" is input without performing a separate confirmation for each character.
When the focus frame is in the hold state with the character to be input held fixed, sliding the frame left-right or up-down repeatedly inputs or deletes the character. For example, with the focus frame on the character "b", sliding left and right can input "bbbbbb", while sliding up and down successively deletes input characters.
When the focus frame is in the hold state with the character to be input fixed in the left-right direction but not the up-down direction: suppose the character X falls into the focus frame and a user operation putting the frame into the hold state is recognized. Holding the left-right direction fixed means that dragging the frame left or right cannot change the character inside it; it can only input X or delete input characters (which direction inputs and which deletes can be configured), while dragging the frame up and down inputs, in sequence, the characters that fall into it.
When the focus frame is in the hold state with the character to be input fixed in the up-down direction but not the left-right direction: sliding the frame left or right inputs, in sequence, the characters it slides across, while sliding it up or down repeatedly inputs or deletes the character. This embodiment differs from the previous one only in the fixed direction and is not described again.
When the focus frame is in the hold state, operating on any character to be selected moves the frame to that character's position. For example, with the character X in the focus frame and the frame in the hold state, if the user is recognized operating on the character Y, say by clicking it, the focus frame moves directly to Y.
In one embodiment, when the focus frame is in the hold state, all characters to be selected other than the character inside the focus frame are blurred. The blurring is generally slight, serving to indicate that the hold state has been entered and to highlight the character in the focus frame.
In one embodiment, when the focus frame is in the hold state, it displays operation indicators in at least two directions, with prompts matching the functions assigned to each direction in the hold state. For example, on entering the hold state the frame may display a cross of up, down, left, and right indicators: left and right prompt slide-through input, up prompts inputting the current character, and down prompts deleting an input character.
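The hold-state slide-through behaviour described above, where every character the held frame crosses is input without a per-character confirmation, can be sketched as follows (function and parameter names are illustrative assumptions):

```python
def hold_slide(candidates, path_indices):
    """candidates: the row of characters to be selected;
    path_indices: positions the held focus frame slides over, in order.
    Each character crossed is input once, with no separate confirmation."""
    return "".join(candidates[i] for i in path_indices)

row = list("abcdefgh")
# Holding the frame and sliding across positions 0..3 inputs "abcd".
print(hold_slide(row, [0, 1, 2, 3]))  # -> abcd
```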
According to another aspect of the embodiments of the present invention, there is also provided a character input device capable of efficient character input, comprising:
the display device is used for displaying the characters to be selected and the focusing frame on the display interface;
the recognition device is used for recognizing the selection operation of the user on the character to be selected, and the selection operation enables the character to be selected to fall into the focusing frame to become the character to be input;
and the confirmation device is used for identifying the confirmation operation of the user and inputting the character to be input.
The present device corresponds to the above-mentioned embodiment of the input method, and is not described herein again.
The above description covers only preferred embodiments of the application and illustrates the principles of the technology employed. Those skilled in the art will appreciate that the scope of the invention referred to in this application is not limited to embodiments with the specific combinations of features above, but also covers other embodiments combining the above features, or their equivalents, without departing from the inventive concept, for example embodiments in which the above features are replaced by (but not limited to) features with similar functions disclosed in this application.

Claims (10)

1. A character input method comprising:
displaying characters to be selected and a focus frame on a display interface;
identifying the selection operation of the user on the character to be selected, wherein the selection operation enables the character to be selected to fall into the focusing frame to become the character to be input;
and identifying the confirmation operation of the user and inputting the character to be input.
2. The character input method according to claim 1, wherein the characters to be selected are presented on the display interface in a floating state and are arranged as one of: a plane, an arc surface, a sphere, or a cube.
3. The character input method according to claim 1, wherein the recognizing of the selection operation of the user on the character to be selected comprises:
identifying a moving operation by the user on the character to be selected; or,
identifying a moving operation by the user on the focus frame; or,
identifying a moving operation on the character to be selected and a moving operation on the focus frame at the same time; or,
identifying a touch operation by the user on the focus frame while identifying the tilt direction of the terminal device to move the character to be selected; or,
identifying the tilt direction of the mobile terminal to move the focus frame.
4. The character input method according to claim 1, wherein when the character to be selected falls in the focus frame, the character to be selected is displayed enlarged.
5. The character input method according to claim 1, wherein the recognizing of the user's confirmation operation and inputting of the character to be input comprises:
identifying a multi-point touch by the user and inputting that many identical copies of the character to be input, the number of characters input being equal to the number of the user's touch points.
6. The character input method according to claim 1, wherein the focus frame comprises a first focus frame and a second focus frame; the identifying of the user's selection operation on the character to be selected comprises: identifying simultaneous sliding operations at at least two touch points of the user so that characters to be selected fall into the at least two corresponding focus frames respectively; and the recognizing of the confirmation operation and inputting of the character to be input comprises: inputting first the character to be input corresponding to the focus frame whose touch is released first.
7. The character input method according to any one of claims 1 to 6, wherein the focus frame includes a hold state;
when the focus frame is in the hold state, the focus frame is slid, and the characters to be selected over which the focus frame slides are taken in sequence as characters to be input; or,
when the focus frame is in the hold state, the character to be input is kept fixed, and the focus frame is slid left-right or up-down to repeatedly input or delete the character to be input; or,
when the focus frame is in the hold state, the characters to be input are kept fixed in the left-right direction but not in the up-down direction; when the focus frame slides in the up-down direction, the characters to be selected over which it slides are taken in sequence as characters to be input, and when the focus frame slides in the left-right direction, the character to be input is repeatedly input or deleted; or,
when the focus frame is in the hold state, the characters to be input are kept fixed in the up-down direction but not in the left-right direction; when the focus frame slides in the left-right direction, the characters to be selected over which it slides are taken in sequence as characters to be input, and when the focus frame slides in the up-down direction, the character to be input is repeatedly input or deleted; or,
when the focus frame is in the hold state, the focus frame is moved to the position of any character to be selected by operating on that character.
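The hold-state alternatives in claim 7 amount to four gesture policies: slide-through input, fixed characters with repeat/delete, and two mixed modes where one axis inputs and the other repeats or deletes. A minimal sketch of that policy table, under the assumption that a positive slide delta means repeat and a negative one means delete (all names here are hypothetical):

```python
from enum import Enum, auto

class HoldMode(Enum):
    SLIDE_THROUGH = auto()     # characters the frame slides over are input in order
    FIXED = auto()             # characters fixed; any slide repeats or deletes
    FIXED_HORIZONTAL = auto()  # vertical slide inputs; horizontal slide repeats/deletes
    FIXED_VERTICAL = auto()    # horizontal slide inputs; vertical slide repeats/deletes

def handle_slide(mode: HoldMode, buffer: list[str], axis: str,
                 slid_chars: list[str], delta: int) -> list[str]:
    """Update the input buffer for one slide gesture while the frame is held.
    axis is 'h' or 'v'; delta > 0 repeats the last character delta times,
    delta < 0 deletes |delta| characters."""
    def repeat_or_delete(buf: list[str], d: int) -> list[str]:
        if d > 0 and buf:
            return buf + [buf[-1]] * d
        if d < 0:
            return buf[:len(buf) + d] if -d <= len(buf) else []
        return buf

    if mode is HoldMode.SLIDE_THROUGH:
        return buffer + slid_chars
    if mode is HoldMode.FIXED:
        return repeat_or_delete(buffer, delta)
    if mode is HoldMode.FIXED_HORIZONTAL:
        return buffer + slid_chars if axis == "v" else repeat_or_delete(buffer, delta)
    if mode is HoldMode.FIXED_VERTICAL:
        return buffer + slid_chars if axis == "h" else repeat_or_delete(buffer, delta)
    return buffer
```

The mixed modes let a single held gesture both pick up new characters along one axis and correct or duplicate them along the other.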
8. The character input method according to claim 7, wherein when the focus frame is in the hold state, the characters to be selected other than the character falling into the focus frame are displayed blurred.
9. The character input method according to claim 7, wherein when the focus frame is in the hold state, the focus frame displays operation indications in at least two directions, each direction giving an operation prompt according to the function assigned to the focus frame in the hold state.
10. A character input device, comprising:
a display device for displaying the characters to be selected and the focus frame on a display interface;
a recognition device for recognizing a selection operation of a user on a character to be selected, the selection operation causing the character to be selected to fall into the focus frame and become a character to be input; and
a confirmation device for recognizing a confirmation operation of the user and inputting the character to be input.
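The device of claim 10 is three cooperating units: display, recognition, and confirmation. The following is a minimal structural sketch of how those units could be wired together, not an implementation from the patent; the class and method names are hypothetical.

```python
class CharacterInputDevice:
    """Illustrative model of the claimed device: a display unit,
    a recognition unit, and a confirmation unit."""

    def __init__(self, candidates: list[str]):
        self.candidates = candidates        # characters to be selected
        self.in_frame: str | None = None    # character currently in the focus frame
        self.committed: list[str] = []      # characters already input

    def display(self) -> dict:
        # display unit: present the candidates and the focus frame contents
        return {"candidates": self.candidates, "focus_frame": self.in_frame}

    def recognize_selection(self, index: int) -> None:
        # recognition unit: a selection operation moves a candidate into
        # the focus frame, making it the character to be input
        self.in_frame = self.candidates[index]

    def confirm(self) -> None:
        # confirmation unit: input the character currently in the frame
        if self.in_frame is not None:
            self.committed.append(self.in_frame)
            self.in_frame = None
```

A typical cycle is select, then confirm: moving "b" into the frame and confirming appends "b" to the committed input and empties the frame for the next character.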
CN201811625153.XA 2018-12-28 2018-12-28 Character input method and device Active CN111382402B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811625153.XA CN111382402B (en) 2018-12-28 2018-12-28 Character input method and device


Publications (2)

Publication Number Publication Date
CN111382402A true CN111382402A (en) 2020-07-07
CN111382402B CN111382402B (en) 2022-11-29

Family

ID=71220545

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811625153.XA Active CN111382402B (en) 2018-12-28 2018-12-28 Character input method and device

Country Status (1)

Country Link
CN (1) CN111382402B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1979494A (en) * 2005-12-09 2007-06-13 索尼株式会社 Data display apparatus, data display method and data display program
WO2010016065A1 (en) * 2008-08-08 2010-02-11 Moonsun Io Ltd. Method and device of stroke based user input
CN101908284A (en) * 2010-08-17 2010-12-08 王永民 Jumping learning machine of Wangma computer
CN102023806A (en) * 2010-12-17 2011-04-20 广东威创视讯科技股份有限公司 Input method for touch screen
KR20110109133A (en) * 2010-03-30 2011-10-06 삼성전자주식회사 Method and apparatus for providing character inputting virtual keypad in a touch terminal
CN102999288A (en) * 2011-09-08 2013-03-27 北京三星通信技术研究有限公司 Input method and keyboard of terminal
US20160124926A1 (en) * 2014-10-28 2016-05-05 Idelan, Inc. Advanced methods and systems for text input error correction
US20190272092A1 (en) * 2018-03-05 2019-09-05 Kyocera Document Solutions Inc. Display input device and method for controlling display input device


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114968003A (en) * 2022-04-20 2022-08-30 中电信数智科技有限公司 Verification code input method and device
CN114968003B (en) * 2022-04-20 2023-08-11 中电信数智科技有限公司 Verification code input method and device

Also Published As

Publication number Publication date
CN111382402B (en) 2022-11-29

Similar Documents

Publication Publication Date Title
KR102091235B1 (en) Apparatus and method for editing a message in a portable terminal
KR101589994B1 (en) Input method, input apparatus and terminal device
US7487147B2 (en) Predictive user interface
CN103543947B (en) The method and system of input content are corrected on the electronic equipment with touch-screen
EP1873620A1 (en) Character recognizing method and character input method for touch panel
CN104102413B (en) Multi-lingual characters input method and device based on dummy keyboard
US20190227688A1 (en) Head mounted display device and content input method thereof
US20130263013A1 (en) Touch-Based Method and Apparatus for Sending Information
KR101846238B1 (en) Chinese character input apparatus and controlling method thereof
US20180081539A1 (en) Improved data entry systems
CN107272881B (en) Information input method and device, input method keyboard and electronic equipment
CN104679278A (en) Character input method and device
KR102253453B1 (en) Method and device for creating a group
JP2014238755A (en) Input system, input method, and smartphone
WO2011156162A2 (en) Character selection
JP5102894B1 (en) Character input device and portable terminal device
CN111382402B (en) Character input method and device
JP5980173B2 (en) Information processing apparatus and information processing method
JP5916573B2 (en) Display device, control method, control program, and recording medium
KR101872879B1 (en) Keyboard for typing chinese character
CN106774991A (en) The processing method of input data, device and keyboard
CN103731538A (en) Method for positioning contact in contact list and mobile terminal
CN106648437A (en) Keyboard switching method and device
TWI631484B (en) Direction-based text input method, system and computer-readable recording medium using the same
KR101594416B1 (en) Chinese character input method and apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant