CN111580737A - Password input method and terminal equipment - Google Patents

Password input method and terminal equipment

Info

Publication number: CN111580737A
Authority: CN (China)
Prior art keywords: gesture, preset gesture, virtual keyboard, touch, key
Legal status: Pending (an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202010338676.7A
Other languages: Chinese (zh)
Inventors: 李丽君, 黄小宇, 董德强, 王锦锋, 刘晓丽
Current assignee: PAX Computer Technology Shenzhen Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original assignee: PAX Computer Technology Shenzhen Co Ltd
Application filed by PAX Computer Technology Shenzhen Co Ltd

Events:
- Priority to CN202010338676.7A (CN111580737A/en)
- Publication of CN111580737A (CN111580737A/en)
- Priority to PCT/CN2021/080264 (WO2021218431A1/en)

Classifications

    • G — Physics; G06 — Computing, calculating or counting; G06F — Electric digital data processing
    • G06F3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/04886 — Interaction techniques based on GUIs using a touch-screen or digitiser, by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • G06F3/04883 — Interaction techniques based on GUIs using a touch-screen or digitiser, for inputting data by handwriting, e.g. gesture or text
    • G06F3/167 — Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)
  • Input From Keyboards Or The Like (AREA)

Abstract

The application belongs to the technical field of assisted password input, and provides a password input method and a terminal device. The terminal device is provided with an auxiliary device arranged to match a virtual keyboard, and the auxiliary device includes a first support and a second support carrying touch marks. The password input method includes: after a visually impaired person locates the column of a target key by touching the touch marks on the first support, they perform a sliding first preset gesture along that column; alternatively, after locating the row of the target key by touching the touch marks on the second support, they perform the sliding first preset gesture along that row. Voice information is emitted as the gesture passes over keys. Once a second preset gesture is detected, the character at the last contact position of the first preset gesture is input. In this way, the method assists visually impaired users in entering a password through a touch screen.

Description

Password input method and terminal equipment
Technical Field
The application belongs to the technical field of password auxiliary input, and particularly relates to a password input method and terminal equipment.
Background
With the development of touch screen technology and the miniaturization of devices, many devices have dropped physical keyboards in favor of virtual keyboards displayed on touch screens for character input. When a password is entered on a physical keyboard, visually impaired users can rely on braille on the physical keys, or on a raised mark on the "5" key, as guidance.
However, when a virtual keyboard is displayed on a touch screen for password entry, the smooth screen offers no tactile cues, so visually impaired users cannot locate the keys. A method for assisting visually impaired users in entering a password on a touch screen is therefore needed.
Disclosure of Invention
In view of this, the present application provides a password input method and a terminal device, which can assist a person with visual impairment to input a password through a touch screen.
A first aspect of an embodiment of the present application provides a password input method applied to a terminal device, where the terminal device is provided with a touch screen for displaying a virtual keyboard and an auxiliary device arranged to match the virtual keyboard. The auxiliary device includes: a first support, provided with touch marks indicating the position of each column of keys of the virtual keyboard; and a second support, connected with the first support and provided with touch marks indicating the position of each row of keys of the virtual keyboard.
the password input method comprises the following steps:
displaying a virtual keyboard on the touch screen and monitoring gesture information on the touch screen in the password-assisted input mode, wherein the virtual keyboard comprises four rows and three columns;
if the monitored gesture information is a first preset gesture, recording the contact position of the first preset gesture, wherein the first preset gesture comprises: a sliding gesture performed along the column where a target key is located after the visually impaired person finds that column by touching the touch marks on the first support, or a sliding gesture performed along the row where the target key is located after the visually impaired person finds that row by touching the touch marks on the second support;
when the contact position of the first preset gesture is on a character key, emitting first voice information;
after the end of the first preset gesture is detected, if the monitored gesture information is a second preset gesture and the key corresponding to the last contact position of the first preset gesture is a character key, inputting the character of that key.
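The steps above can be sketched as a small state machine. This is an illustrative sketch only, not the patent's implementation: the key names (`CANCEL`, `CONFIRM`), the `speak`/`beep` callbacks, and the row/column event interface are all assumptions made for the example.

```python
# 4-row x 3-column layout matching the standard telephone keypad (fig. 1).
KEYPAD = [
    ["1", "2", "3"],
    ["4", "5", "6"],
    ["7", "8", "9"],
    ["CANCEL", "0", "CONFIRM"],
]

class PasswordInput:
    def __init__(self, speak, beep):
        self.speak = speak      # second voice information (operation keys)
        self.beep = beep        # first voice information (character keys)
        self.last_key = None    # key under the last contact of the first gesture
        self.entered = []       # characters input so far

    def on_first_gesture_contact(self, row, col):
        """Called for each contact position of the first preset gesture."""
        key = KEYPAD[row][col]
        self.last_key = key
        if key.isdigit():
            self.beep()                # same sound for every character key
        else:
            self.speak(key.lower())    # announces "confirm" / "cancel"

    def on_second_gesture(self):
        """Double-tap anywhere commits the key reached by the first gesture."""
        if self.last_key is None:
            return
        if self.last_key.isdigit():
            self.entered.append(self.last_key)
        elif self.last_key == "CONFIRM":
            self.upload("".join(self.entered))
        elif self.last_key == "CANCEL":
            self.entered.clear()

    def upload(self, password):
        # Placeholder for sending the completed password to the backend.
        print(f"uploading {len(password)}-digit password")
```

Note that the second gesture is deliberately position-independent: only the last contact of the sliding gesture matters, so the user can double-tap anywhere on the screen.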
In a possible embodiment of the first aspect, if the monitored gesture information is a first preset gesture, the method further includes:
when the contact position of the first preset gesture is on an operation key, emitting second voice information;
correspondingly, if the monitored gesture information is a second preset gesture, the method further includes:
when the key corresponding to the last contact position of the first preset gesture is an operation key, performing the operation corresponding to that operation key.
In a possible embodiment of the first aspect, the first preset gesture further comprises: a single-click gesture;
when the first preset gesture is a sliding gesture, the contact position of the first preset gesture is the real-time position of the contact in the sliding gesture;
when the first preset gesture is a single-click gesture, the position of the contact of the first preset gesture is the position of the contact in the single-click gesture;
the second preset gesture is a double-click gesture.
In a possible embodiment of the first aspect, the first voice information is the same for every character key;
and the second voice information corresponding to each operation key announces the operation of that key.
In a possible embodiment of the first aspect, when the key corresponding to the last contact position of the first preset gesture is an operation key, performing the operation corresponding to the operation key includes:
when the key corresponding to the last contact position of the first preset gesture is the confirm key, uploading all characters input so far as the password;
and when the key corresponding to the last contact position of the first preset gesture is the cancel key, deleting all characters input so far.
In a possible embodiment of the first aspect, after inputting a character corresponding to a last contact point position of the first preset gesture, the method further includes:
and emitting third prompt information, wherein the third prompt information prompts the user with the number of characters already input.
In a possible embodiment of the first aspect, in the process of recording the contact point position of the first preset gesture, the method further includes:
and emitting fourth prompt information when the contact position of the first preset gesture falls in an area outside the virtual keyboard, wherein the fourth prompt information prompts the user that the contact position is not within the area of the virtual keyboard and/or indicates the positional relation between the contact position and the area where the virtual keyboard is located.
In a possible embodiment of the first aspect, in the password-assisted input mode, the method further includes:
and playing fifth prompt information, wherein the fifth prompt information is used for prompting the layout of the virtual keyboard and a method for inputting a password by using the auxiliary device.
In a possible embodiment of the first aspect, the number of the touch marks on the first support is 3, and the number of the touch marks on the second support is 4;
the touch marks are raised marks.
In a possible embodiment of the first aspect, among the touch marks, the mark indicating the second row and the mark indicating the second column are both double-bump marks;
and the marks indicating the first row, the third row, the fourth row, the first column and the third column are all single-bump marks.
A second aspect of an embodiment of the present application provides a terminal device, including:
a touch screen for displaying a virtual keyboard;
an auxiliary device arranged to match the virtual keyboard, the auxiliary device including: a first support provided with touch marks indicating the position of each column of keys of the virtual keyboard; and a second support, connected with the first support, provided with touch marks indicating the position of each row of keys of the virtual keyboard;
a memory, a processor, and a computer program stored in the memory and executable on the processor; wherein the processor, when executing the computer program, implements the steps of the method according to any of claims 1 to 10.
The password input method provided by the embodiments of the present application is used on a terminal device equipped with a touch screen, which displays a virtual keyboard during password entry. The terminal device is further provided with an auxiliary device arranged to match the virtual keyboard, and the auxiliary device includes: a first support provided with touch marks indicating the position of each column of keys of the virtual keyboard; and a second support, connected with the first support, provided with touch marks indicating the position of each row of keys of the virtual keyboard. In the password-assisted input mode, the virtual keyboard, comprising four rows and three columns, is displayed on the touch screen while gesture information on the touch screen is monitored. The visually impaired person can find the column of the target key by touching the touch marks on the first support and then slide along that column, or find the row of the target key by touching the touch marks on the second support and then slide along that row; first voice information is emitted whenever the contact position of the sliding gesture is on a character key. The visually impaired person determines which key the current contact point is on from the number of first voice messages heard while sliding. When the contact position reaches the target key, a second preset gesture can be input anywhere on the touch screen, and on detecting it, the terminal device inputs the character corresponding to the last contact position of the sliding gesture.
By this method, a visually impaired person can independently input a password on the touch screen, relying only on the voice information together with the first and second preset gestures, aided by the auxiliary device provided on the terminal device.
It is understood that the beneficial effects of the second aspect can be referred to the related description of the first aspect, and are not described herein again.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of a layout structure of a physical keyboard according to an embodiment of the present application;
Fig. 2 is a schematic structural diagram of an auxiliary device according to an embodiment of the present application;
Fig. 3 is a schematic diagram of the positioning marks and the guide marks in the auxiliary device provided in the embodiment of fig. 2;
Fig. 4 is a schematic structural diagram of another auxiliary device provided in an embodiment of the present application;
Fig. 5 is a schematic structural diagram of another auxiliary device provided in an embodiment of the present application;
Fig. 6 is a schematic structural diagram of another auxiliary device provided in an embodiment of the present application;
Fig. 7 is a schematic flowchart of an implementation of a password input method according to an embodiment of the present application;
Fig. 8 is a schematic block diagram of a terminal device provided in an embodiment of the present application;
Fig. 9 is a schematic diagram of a terminal device according to an embodiment of the present application;
Fig. 10 is a schematic diagram of a terminal device according to an embodiment of the present application;
Fig. 11 is a schematic diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Fig. 1 shows a relatively common layout of a physical keyboard. In the physical keyboard shown in fig. 1, the first row is, in order: "1", "2", "3"; the second row: "4", "5", "6"; the third row: "7", "8", "9"; and the fourth row: "cancel", "0", "confirm". The physical keyboard shown in fig. 1 is thus consistent with a standard telephone key layout.
The password input method provided by the embodiments of the present application is applied to a terminal device, which must be provided with a touch screen for displaying a virtual keyboard and an auxiliary device matched with the virtual keyboard. Before introducing the password input method, the matching auxiliary device is introduced first.
Fig. 2 shows an auxiliary device provided in an embodiment of the present application for assisting a visually impaired person in entering a password through a virtual keyboard displayed on a touch screen. The auxiliary device includes:
a first support 1, provided with touch marks indicating the position of each column of keys of the virtual keyboard; and a second support 2, connected with the first support 1, provided with touch marks indicating the position of each row of keys of the virtual keyboard.
In the embodiment of the present application, the touch mark provided on the first support may be referred to as a first mark 11, and the touch mark provided on the second support may be referred to as a second mark 21.
In an embodiment of the present application, visually impaired persons include blind people, people with low vision or color vision deficiency, and so on. The auxiliary device is used together with the virtual keyboard displayed on the touch screen. When the virtual keyboard is square or rectangular, the first support of the auxiliary device is arranged on the upper or lower side of the virtual keyboard and the second support on the left or right side; the first support and the second support may be fixedly or detachably connected.
The first support is provided with first marks, which are marks recognizable by touch, for example a raised dot, raised circle, raised ring or raised triangle. It should be noted that the first marks indicate the position of each column of virtual keys in the virtual keyboard and do not indicate the specific meaning of the keys. Similarly, the second support is provided with second marks, likewise recognizable by touch, which indicate the position of each row of virtual keys and not their specific meaning. The first marks and the second marks may be identical to or different from each other. There may be m first marks, which themselves may be identical or different, and n second marks, which likewise may be identical or different.
Since the auxiliary device is used together with the virtual keyboard on the touch screen, the layout of the virtual keyboard directly determines the arrangement of the first and second marks. By way of example, when the virtual keyboard has three rows and four columns, m is 4 and n is 3; when it has four rows and three columns, m is 3 and n is 4. In the embodiments of the present application, to follow users' existing habits, for example their familiarity with telephone key layouts, the virtual keyboard may use the same layout as the physical keyboard shown in fig. 1, in which case m is 3 and n is 4. A visually impaired person can then find the target key (the key they want) using their knowledge of standard telephone keys, which reduces the learning cost of the password input method provided by the embodiments of the present application and makes it easier to use.
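A minimal sketch of the partitioning this implies: mapping a touch point to a key of the 4-row by 3-column telephone layout of fig. 1 by dividing the keyboard area into n rows and m columns. The keyboard geometry values (origin and size) are illustrative assumptions, not taken from the patent.

```python
KEYPAD = [
    ["1", "2", "3"],
    ["4", "5", "6"],
    ["7", "8", "9"],
    ["CANCEL", "0", "CONFIRM"],
]
M_COLS, N_ROWS = 3, 4   # m = 3 first marks (columns), n = 4 second marks (rows)

def key_at(x, y, kb_x=0, kb_y=0, kb_w=300, kb_h=400):
    """Return the key at screen point (x, y), or None outside the keyboard."""
    if not (kb_x <= x < kb_x + kb_w and kb_y <= y < kb_y + kb_h):
        return None     # contact in an area outside the virtual keyboard
    col = int((x - kb_x) * M_COLS // kb_w)
    row = int((y - kb_y) * N_ROWS // kb_h)
    return KEYPAD[row][col]
```

For instance, with this assumed 300x400 keyboard, a contact at the center lands on the "5" key, matching the double-bump positioning marks that flag the second row and second column.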
Referring to fig. 3, which shows the positioning and guide marks of the auxiliary device of fig. 2: to guide the visually impaired more clearly, the first marks 11 may be divided into a first positioning mark 111 and first guide marks 112. As shown in the figure, the first positioning mark 111 is a double bump and each first guide mark 112 is a single bump.
In the embodiments of the present application, the first positioning mark may be a designated one of the first marks, identifying a specific position, with the remaining first marks serving as first guide marks.
For example, when the virtual keyboard is laid out in the same manner as the physical keyboard shown in fig. 1, the first positioning mark 111 indicates the column in which the character "5" is located, that is, the position of the second column of virtual keys in the virtual keyboard.
Similarly, the second positioning mark 211 is a double bump and each second guide mark 212 is a single bump. The second positioning mark may be a designated one of the second marks, identifying a specific position, with the remaining second marks serving as second guide marks.
For example, when the virtual keyboard is laid out in the same manner as the physical keyboard shown in fig. 1, the second positioning mark 211 indicates the row in which the character "5" is located, that is, the position of the second row of virtual keys in the virtual keyboard.
Fig. 4 shows another auxiliary device provided in an embodiment of the present application, where the auxiliary device includes:
the first support 1 is provided with m first marks 11 for touch recognition, wherein the first marks 11 are used for indicating the position of each column of virtual keys in the virtual keyboard;
the second support 2 is connected with the first support 1 and is provided with n second marks 21 for touch identification, wherein the second marks 21 are used for indicating the position of each row of virtual keys in the virtual keyboard;
a third support 3, arranged in parallel with the first support 1 and connected with the second support 2;
and
a fourth support 4, arranged in parallel with the second support 2 and connected with the first support 1.
In the embodiments of the present application, the third support may carry no marks and serve only to support and balance the auxiliary device; alternatively, it may be provided with m third marks for touch recognition (which may be identical in pattern and arrangement to the first marks), indicating the position of each column of virtual keys in the virtual keyboard, so that a visually impaired person can find the specific position of each key by touching marks from either the upper or the lower side.
Similarly, the fourth support may carry no marks and serve only to support and balance the auxiliary device, or it may be provided with n fourth marks for touch recognition (which may be identical in pattern and arrangement to the second marks), indicating the position of each row of virtual keys in the virtual keyboard, so that a visually impaired person can find the specific position of each key by touching marks from either the left or the right side.
When the auxiliary device includes the third support and the fourth support, the first, second, third and fourth supports form a frame; in use, the frame is arranged around the virtual keyboard displayed on the touch screen.
Fig. 5 is another auxiliary device provided in an embodiment of the present application, where the auxiliary device includes:
the first support 1 is provided with m first marks 11 for touch recognition, wherein the first marks 11 are used for indicating the position of each column of virtual keys in the virtual keyboard 0;
the second support 2 is connected with the first support 1 and is provided with n second marks 21 for touch identification, wherein the second marks 21 are used for indicating the position of each row of virtual keys in the virtual keyboard 0;
the fourth support 4 is arranged in parallel with the second support 2, is connected with the first support 1, and is provided with n fourth marks 41 for touch recognition, wherein the fourth marks 41 indicate the position of each row of virtual keys in the virtual keyboard 0.
In the embodiments of the present application, the auxiliary device may be a separate component as shown in fig. 5, or it may be part of the housing of the terminal device. Referring to fig. 6, which shows such an auxiliary device: the left frame of the housing surrounding the touch screen serves as the second support, the right frame as the fourth support, and the lower frame as the first support. Accordingly, the first marks are arranged on the lower frame of the housing, the second marks on the left frame, and the fourth marks on the right frame.
As another embodiment of the present application, the auxiliary device may instead include the first, second and third supports without the fourth support, which is not limited herein.
Fig. 7 is a schematic flowchart of a password input method provided by an embodiment of the present application. The method may be applied to a terminal device provided with any of the auxiliary devices described in the embodiments above, which are not described again here. As shown, the method includes:
step S701, in the password-assisted input mode, displaying a virtual keyboard on the touch screen and monitoring gesture information on the touch screen, the virtual keyboard being divided into four rows and three columns.
In the embodiment of the application, the password input method can have two input modes, one mode is a password auxiliary input mode, in the mode, a person with visual impairment is mainly assisted to input the password, and the other mode is a password non-auxiliary input mode, in the mode, a person with normal vision can input the password, and certainly, for some people with weak vision/color, the password can also be input in the password non-auxiliary input mode.
The embodiment of the application mainly describes the password input process in the password auxiliary input mode. After the password auxiliary input mode is entered, the virtual keyboard is displayed on the touch screen, and gesture information on the touch screen is monitored at the same time. The gesture information describes a gesture that the user performs on the touch screen by touch, such as a single-tap gesture, a sliding gesture or a double-tap gesture.
As another embodiment of the present application, after entering the password auxiliary input mode, a fifth prompt message may be played first, where the fifth prompt message is used to prompt the layout of the virtual keyboard and a method for inputting a password by using the auxiliary device.
In the embodiment of the present application, in order to enable the visually impaired to smoothly input the password, the layout of the virtual keyboard and the password input method may be played in advance.
Some people with low vision or color weakness can input each character by a single tap without needing the auxiliary device. However, for severely visually impaired or blind people who need the auxiliary device to input the password, the layout of the virtual keyboard and the method of inputting the password through the auxiliary device need to be played first.
For example, the fifth prompt information is: "Please listen to the following voice prompt for the payment operation. The layout of the password keyboard is consistent with that of a standard telephone keypad: keys 1, 2 and 3 are in the first row, and the cancel key, the number 0 key and the confirm key are in the last row. Raised touch points around the keyboard mark the positions of the rows and columns of the keyboard, and the row and column of the number key '5' are marked with double dots. The sound feedback for the number keys is a buzzer sound, while the confirm key and the cancel key prompt 'confirm' and 'cancel' by voice. Please slide a finger on the touch screen with reference to the positions of the touch point marks, and identify the number keys from the buzzer feedback. After finding the target number, lift the finger and then double-tap at any position on the screen to input the number. After all password digits are input, move to the lower right corner of the keyboard to find the confirm key; after hearing the voice 'confirm', lift the finger and then double-tap at any position on the screen to confirm the input. The cancel key is in the lower left corner of the keyboard; activating it will delete all the numbers that have been input and cancel the current transaction. Please insert the card at the top (or bottom) end of the device to start the payment."
Step S702, if the monitored gesture information is a first preset gesture, recording a contact position of the first preset gesture, wherein the first preset gesture comprises: a sliding gesture performed along the column where the target key is located after the visually impaired person finds that column by touching the touch marks on the first support, or a sliding gesture performed along the row where the target key is located after the visually impaired person finds that row by touching the touch marks on the second support.
In this embodiment of the application, as described in step S701, each character may also be input by a single tap, that is, the first preset gesture includes a single-tap gesture. When the visually impaired person needs the auxiliary device to input characters, the first preset gesture may also be a sliding gesture, for example a sliding gesture starting from an edge of the virtual keyboard. This is because the visually impaired person needs to find the corresponding row or column from the touch marks of the auxiliary device and then slide into the virtual keyboard area displayed on the touch screen, so as to find the character to be input or the operation to be performed.
The contact position of the first preset gesture is the real-time position of the contact point. When the first preset gesture is a sliding gesture, the real-time position of the contact point changes with the sliding and is the position of the end point of the real-time sliding track corresponding to the first preset gesture. Similarly, when the first preset gesture is a single-tap gesture, the contact position of the first preset gesture is the position of the contact point of that tap.
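The bookkeeping described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the class and method names are hypothetical, and in practice the updates would be driven by the platform's touch events.

```python
# Hypothetical sketch: recording the contact position of the first preset
# gesture. For a sliding gesture the recorded position tracks the moving
# touch point in real time (the end point of the sliding track so far);
# for a single tap it is simply the tap point.
class ContactRecorder:
    def __init__(self):
        self.position = None  # latest contact position (x, y), or None

    def on_touch_down(self, x, y):
        # a single tap, or the start of a sliding gesture
        self.position = (x, y)

    def on_touch_move(self, x, y):
        # during a slide, the contact position follows the finger
        self.position = (x, y)
```

When the gesture ends, whatever key lies under `self.position` is the candidate that a subsequent second preset gesture would confirm.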
It should be noted that, when the auxiliary device includes a third support and/or a fourth support, the first preset gesture may further include: a sliding gesture performed along the column where the target key is located after the visually impaired person finds that column by touching the touch marks on the third support; and/or a sliding gesture performed along the row where the target key is located after the visually impaired person finds that row by touching the touch marks on the fourth support.
Step S703, when the contact position of the first preset gesture is a character key, sending a first voice message.
In the embodiment of the present application, the character keys are the keys corresponding to the numbers 0 to 9. When the user (a visually impaired person) slides a finger within the area of the virtual keyboard, first voice information may be sent out if the contact position of the sliding gesture is a character key. For safety, the first voice information does not announce the specific character at the contact position; instead, an undifferentiated buzzer sound gives feedback that the current contact position corresponds to a character, and the buzzer sound is given again whenever the contact position switches to the position of another character. Therefore, the first voice information corresponding to each character key may be the same.
By way of example, when the visually impaired person needs to input the character "2", the user can find the uppermost touch mark on the second support on the left side of the auxiliary device, i.e. find the row of the target key "2", and then slide horizontally to the right along that row. When the finger reaches the position of the virtual key "1", a first voice prompt (such as a buzzer sound) is sent out, so the visually impaired person knows that the finger is currently on the virtual key "1". Knowing the layout of the virtual keyboard, the person also knows that the target key "2" is to the right of the virtual key "1", and therefore continues to slide to the right; on hearing the first voice prompt (buzzer sound) again, the person knows that the finger is now on the target key "2". Of course, in practical applications, the visually impaired person can also find the row of the target key "2" by touching the touch marks on the fourth support on the right side of the auxiliary device and then slide horizontally to the left to find the specific position of the target key "2". Similarly, the person can find the middle touch mark on the first support on the lower side of the auxiliary device (i.e. the column where the target key "2" is located), slide vertically upwards along that column, and, after hearing the fourth first voice prompt (passing 0-8-5-2), know that the finger is now at the position of the target key "2".
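The 4x3 layout walked through above can be modelled as a simple grid lookup. The sketch below maps a contact position to the key underneath it; the keyboard rectangle, coordinate convention (y grows downward from the top-left corner) and all names are illustrative assumptions, not details from the patent.

```python
# 4 rows x 3 columns, laid out like a standard telephone keypad
KEYPAD = [
    ["1", "2", "3"],
    ["4", "5", "6"],
    ["7", "8", "9"],
    ["cancel", "0", "confirm"],
]

def key_at(x, y, kb_x, kb_y, kb_w, kb_h):
    """Return the key under contact point (x, y), or None if the point
    lies outside the keyboard rectangle whose top-left corner is
    (kb_x, kb_y) and whose size is kb_w x kb_h."""
    if not (kb_x <= x < kb_x + kb_w and kb_y <= y < kb_y + kb_h):
        return None
    col = (x - kb_x) * 3 // kb_w   # three equal-width columns
    row = (y - kb_y) * 4 // kb_h   # four equal-height rows
    return KEYPAD[row][col]
```

Sliding right along the first row then yields "1", "2", "3" in turn, with each new key triggering the same undifferentiated buzzer sound; sliding up the middle column from the bottom passes 0, 8, 5, 2, matching the example above.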
As another embodiment of the present application, when the contact position of the first preset gesture is an operation key, second voice information is sent out.
In an embodiment of the present application, the operation keys include a "confirm" key and a "cancel" key. The second voice information does not involve private content such as the user's password; therefore, the second voice information announces the operation corresponding to the operation key. For example, when the contact position of the sliding gesture is the "confirm" key, the second voice information is "confirm"; when the contact position of the sliding gesture is the "cancel" key, the second voice information is "cancel".
As another embodiment of the present application, in the process of monitoring gesture information on a touch screen, the method further includes:
and sending fourth prompt information when the contact position of the first preset gesture is in an area outside the virtual keyboard, wherein the fourth prompt information is used to prompt the user that the contact position of the first preset gesture is not in the area corresponding to the virtual keyboard, and/or to indicate the positional relation between the contact position of the first preset gesture and the area where the virtual keyboard is located.
In this embodiment of the application, if the first preset gesture is a sliding gesture and its real-time contact position has slid out of the area of the virtual keyboard, the user needs to be prompted that the contact position of the first preset gesture is not in the area corresponding to the virtual keyboard. In addition, in order to guide the user back to the area where the virtual keyboard is located, the user may also be prompted about the positional relation between the contact position of the first preset gesture and the area corresponding to the virtual keyboard; for example, the fourth prompt information is: "the password keyboard is below".
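The positional hint can be derived by comparing the contact position with the keyboard rectangle. A sketch under the same assumed coordinate convention (y grows downward; the function name and phrasing of the prompts are illustrative):

```python
def fourth_prompt(x, y, kb_x, kb_y, kb_w, kb_h):
    """Return a spoken hint guiding the finger back to the keyboard
    rectangle, or None when the contact point is still inside it."""
    if kb_x <= x < kb_x + kb_w and kb_y <= y < kb_y + kb_h:
        return None  # inside the keyboard area: no prompt needed
    if y < kb_y:
        # finger is above the keyboard, so the keyboard is below it
        return "the password keyboard is below"
    if y >= kb_y + kb_h:
        return "the password keyboard is above"
    if x < kb_x:
        return "the password keyboard is to the right"
    return "the password keyboard is to the left"
```

Vertical deviation is checked first here; which axis takes priority when the finger is diagonal to the keyboard is a design choice the patent leaves open.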
Step S704, after it is monitored that the first preset gesture has finished, if the monitored gesture information is a second preset gesture, inputting the character corresponding to the last contact position of the first preset gesture when the key corresponding to that position is a character key.
Similarly, when the key corresponding to the last contact position of the first preset gesture is an operation key, the operation corresponding to that operation key is performed.
In this embodiment of the application, the first preset gesture is used to help a visually impaired person find the character to be input or the operation to be performed, and the second preset gesture is used to confirm the key found by the first preset gesture. Therefore, a character is input, or an operation is performed on the characters already input, only when a first preset gesture exists before the second preset gesture. The second preset gesture may be a double-tap gesture.
Certainly, in practical applications, after the visually impaired person finds the character key to be input or the operation key to be performed through the first preset gesture, the second preset gesture may be input at the contact position of the first preset gesture. In this case, the contact position of the first preset gesture coincides with the contact position of the second preset gesture: if the contact position is a character key, the character corresponding to that position is input (a password input box may generally be provided, into which the character is entered); if the contact position is an operation key, the corresponding operation is performed. Alternatively, after finding the character to be input or the target operation through the first preset gesture, the visually impaired person may input the second preset gesture at any position on the touch screen. If the last contact position of the last first preset gesture before the second preset gesture corresponds to a character key, the character represented by that key is input; if it corresponds to an operation key, the operation corresponding to that key is performed.
For example, after the visually impaired person finds the position of the character "5" through the first preset gesture, the position may be kept unchanged and the second preset gesture input; the last contact position of the last first preset gesture before the second preset gesture is then the position of the character "5", so the character "5" is entered into the password input box. Alternatively, after finding the position of the character "5" through the first preset gesture, the person may input the second preset gesture at any position on the touch screen; the last contact position of the last first preset gesture before the second preset gesture is still the position of the character "5", and the character "5" is entered into the password input box. To protect the privacy of the password information, the characters may be entered into the password input box in the form of hidden characters (e.g. "*").
It should be noted that if the visually impaired person inputs a first preset gesture whose contact position passes in turn over the characters "1", "2" and "5", then does not input the second preset gesture but instead continues with another first preset gesture over "1", "2" and "3", and only then inputs the second preset gesture, the character input is not "5" but "3", because what is input is the character corresponding to the last contact position of the last first preset gesture before the second preset gesture.
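The commit rule of step S704 can be sketched as a small state machine: only the key under the final contact of the most recent slide matters, and a double-tap anywhere commits it. Class and method names are hypothetical illustrations, not the patent's implementation.

```python
class PinEntry:
    def __init__(self):
        self.last_key = None  # key under the final contact of the last slide
        self.entered = []     # characters committed so far (hidden in a real UI)

    def on_slide(self, keys_passed):
        # keys_passed: the keys the contact position crossed, in order;
        # only the last one can be committed by the next double-tap
        if keys_passed:
            self.last_key = keys_passed[-1]

    def on_double_tap(self):
        # second preset gesture: commit the character found by the last slide
        if self.last_key is not None and self.last_key.isdigit():
            self.entered.append(self.last_key)
            # third prompt information: how many characters have been entered
            return f"{len(self.entered)} characters entered"
        return None
```

Sliding over 1-2-5, then over 1-2-3, and only then double-tapping enters "3", matching the example above.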
As another embodiment of the present application, after inputting the character corresponding to the last contact position of the first preset gesture, the method further includes:
and sending third prompt information, wherein the third prompt information is used to prompt the user with the number of characters already input.
Certainly, in practical applications, when the last contact position of the first preset gesture is a character key, a voice message (for example, the same buzzer sound as the first voice information) may first be sent to prompt the visually impaired person that the current character has been input successfully, and then the third prompt information is sent to inform the user of the number (i.e. the number of digits) of characters input so far, so that the user knows that the next password character should be input.
As another embodiment of the present application, when the key corresponding to the last contact position of the first preset gesture is an operation key, performing an operation corresponding to the operation key includes:
when the key corresponding to the last contact position of the first preset gesture is the confirm key, uploading all the input characters as the password;
and when the key corresponding to the last contact position of the first preset gesture is the cancel key, deleting all the input characters and cancelling the current transaction.
In the embodiment of the application, after the visually impaired person finds the position of the "confirm" operation key through the first preset gesture, the position may be kept unchanged and the second preset gesture input; the characters in the password input box are then uploaded as the entered password. Alternatively, after finding the position of the "confirm" operation key through the first preset gesture, the person may input the second preset gesture at any position on the touch screen, and the characters in the password input box are likewise uploaded as the entered password. If the operation key is "cancel", the operation comprises deleting all characters in the password input box and cancelling the current transaction. It should be noted that after the current transaction is cancelled, which is equivalent to the end of the password input process, the terminal device returns to the state of step S701: "display the virtual keyboard on the touch screen, and monitor the gesture information on the touch screen".
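The two operation keys can be sketched as follows. The upload is stubbed out here; on a real payment terminal it would go through the device's secure channel, and all names in this sketch are illustrative assumptions.

```python
class PinSession:
    def __init__(self):
        self.entered = []      # committed password characters
        self.uploaded = None   # password handed off on confirm (stub)
        self.cancelled = False

    def on_operation(self, key):
        """Perform the operation for an operation key; returns the
        second voice information to play ('confirm' or 'cancel')."""
        if key == "confirm":
            # upload all input characters as the password
            self.uploaded = "".join(self.entered)
            return "confirm"
        if key == "cancel":
            # delete all input characters and cancel the current transaction
            self.entered.clear()
            self.cancelled = True
            return "cancel"
        return None
```

After a cancel, the session would be discarded and the terminal would return to the state of step S701.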
In order to describe the embodiments of the present application more clearly, the process in which a visually impaired person inputs a password through the auxiliary device and the password input method provided by the embodiments of the present application is illustrated below.
Suppose the password to be input by the visually impaired person is "135678". After the password auxiliary input mode is entered, the fifth prompt information is played to describe the layout of the virtual keyboard and the method of inputting the password with the auxiliary device, and the virtual keyboard is displayed on the touch screen at the same time. The visually impaired person finds the uppermost touch mark on the second support by touching the second support on the left side of the virtual keyboard, then slides horizontally to the right; a buzzer sounds when the finger reaches the position of the first virtual key, telling the person that the finger is currently on the first key from the left (the first buzzer sound) in the first row (the uppermost touch mark on the second support). The person needs to find the character "1", and the first key from the left in the first row of the virtual keyboard is "1"; therefore the person can double-tap at any position on the touch screen. The key corresponding to the last contact position of the sliding gesture before the double-tap gesture is the character "1", so the character "1" is entered in the password input box, and the prompt information "one character is currently input" may be sent out.
To continue with the character "3", the visually impaired person finds the uppermost touch mark on the fourth support by touching the fourth support on the right side of the virtual keyboard, then slides horizontally to the left; a buzzer sounds when the finger reaches the position of the first virtual key, telling the person that the finger is currently on the first key from the right (the first buzzer sound) in the first row (the uppermost touch mark on the fourth support). The person needs to find the character "3", which is the first key from the right in the first row of the virtual keyboard; therefore the person can double-tap at any position on the touch screen. The key corresponding to the last contact position of the sliding gesture before the double-tap gesture is the character "3", so the character "3" is entered in the password input box, and the prompt information "two characters are currently input" may be sent out;
……;
after the visually impaired person inputs the last character "8" in the above manner, the confirm key can be found in the same way: the person slides upwards from the rightmost touch mark on the first support on the lower side of the virtual keyboard and, after hearing the first audio prompt, confirms by double-tapping the touch screen; or slides leftwards from the rightmost touch mark on the fourth support on the right side of the virtual keyboard and, after hearing the first audio prompt, confirms by double-tapping the touch screen. After the double-tap, the voice message "confirm" may be played, and the characters in the password input box are then uploaded as the password. After uploading, if the password is correct, "transaction successful" or "transaction complete" may be played. Of course, if the uploaded password is wrong, "password error, please retry" or "transaction failed" may be played.
If the visually impaired person finds the cancel key during character input and double-taps, all characters in the password input box are deleted and the current transaction is cancelled, for example by returning to the previous interface. At this time, "all deleted" and/or "transaction cancelled" should be played correspondingly.
After the transaction is completed or cancelled, the user may further be prompted with content such as "please take the card".
Because the voice prompt content in the embodiment of the application does not involve the user's privacy, the visually impaired person does not need to wear an earphone when inputting the password, and the voice content can be played aloud.
From the above examples it can be seen that, in the process of inputting the password, the visually impaired person can perform all password input operations (finding a target character, inputting the target character, deleting the input characters, uploading the input characters as the password, cancelling the transaction, etc.) based only on the voice prompt information in combination with the sliding gesture and the double-tap gesture.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 8 is a schematic block diagram of a terminal device according to an embodiment of the present application. As shown in fig. 8, the terminal device 8 of this embodiment includes: a touch screen for displaying a virtual keyboard, any of the auxiliary devices provided in embodiments of the present application, one or more processors 80, a memory 81, and a computer program 82 stored in the memory 81 and executable on the processor 80. The processor 80, when executing the computer program 82, implements the steps in the various method embodiments described above, such as the steps S701 to S704 shown in fig. 7.
Illustratively, the computer program 82 may be partitioned into one or more modules/units that are stored in the memory 81 and executed by the processor 80 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 82 in the terminal device 8.
The terminal device includes, but is not limited to, a processor 80 and a memory 81. Those skilled in the art will appreciate that fig. 8 is only one example of the terminal device 8 and does not constitute a limitation of it; the terminal device may include more or fewer components than those shown, combine some components, or have different components. For example, the terminal device 8 may further include an input device, an output device, a network access device, a bus, etc.
The Processor 80 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 81 may be an internal storage unit of the terminal device 8, such as a hard disk or a memory of the terminal device 8. The memory 81 may also be an external storage device of the terminal device 8, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash memory card (Flash Card) provided on the terminal device 8. Further, the memory 81 may include both an internal storage unit and an external storage device of the terminal device 8. The memory 81 is used for storing the computer programs and the other programs and data required by the terminal device 8, and may also be used to temporarily store data that has been output or is to be output.
The terminal device provided by the embodiment of the application may be a payment terminal, such as a POS machine. The arrangement of the auxiliary device on the terminal device is not shown in the embodiment of fig. 8; for a clearer understanding of a payment device provided with the auxiliary device, reference may be made to the schematic structural diagrams of the payment devices shown in figs. 9 to 11. The payment devices shown in figs. 9 to 11 have different structures, and accordingly there are some differences in the structure of the auxiliary devices and the manner in which they are arranged on the terminal device. It should be noted that, although figs. 9 to 11 illustrate payment devices, this does not mean the present application is limited to payment devices; it may be applied to any terminal device that needs to perform a password input operation.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed terminal device and method may be implemented in other ways. For example, the above-described terminal device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical function division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and can realize the steps of the method embodiments described above when the computer program is executed by one or more processors.
The present application also provides a computer program product; when the computer program product runs on a terminal device, the terminal device implements the steps in the above-mentioned method embodiments.
Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, etc. It should be noted that the content of the computer readable medium may be suitably increased or decreased as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, the computer readable medium does not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (11)

1. A password input method applied to a terminal device, characterized in that the terminal device is provided with a touch screen for displaying a virtual keyboard and an auxiliary device matched with the virtual keyboard; the auxiliary device comprises: a first support provided with touch marks for indicating the position of each column of keys of the virtual keyboard; and a second support connected with the first support and provided with touch marks for indicating the position of each row of keys of the virtual keyboard;
the password input method comprises the following steps:
displaying a virtual keyboard on the touch screen and monitoring gesture information on the touch screen in a password auxiliary input mode, wherein the virtual keyboard comprises four rows and three columns;
if the monitored gesture information is a first preset gesture, recording a contact position of the first preset gesture, wherein the first preset gesture comprises: a sliding gesture performed along the column where the target key is located after the visually impaired person finds that column by touching the touch marks on the first support, or a sliding gesture performed along the row where the target key is located after the visually impaired person finds that row by touching the touch marks on the second support;
when the contact position of the first preset gesture is a character key, sending out first voice information;
after it is monitored that the first preset gesture has finished, if the monitored gesture information is a second preset gesture, inputting the character corresponding to the last contact position of the first preset gesture when the key corresponding to that position is a character key.
2. The password input method of claim 1, wherein, if the monitored gesture information is a first preset gesture, the method further comprises:
when the contact position of the first preset gesture is on an operation key, playing second voice information;
and correspondingly, if the monitored gesture information is a second preset gesture, the method further comprises:
when the key corresponding to the last contact position of the first preset gesture is an operation key, performing the operation corresponding to that operation key.
3. The password input method of claim 2, wherein the first preset gesture further comprises a single-click gesture;
when the first preset gesture is a sliding gesture, the contact position of the first preset gesture is the real-time position of the contact during the slide;
when the first preset gesture is a single-click gesture, the contact position of the first preset gesture is the position of the contact in the single click;
and the second preset gesture is a double-click gesture.
4. The password input method of claim 2, wherein the first voice information is the same for every character key;
and the second voice information for each operation key announces the operation corresponding to that key.
5. The password input method of claim 2, wherein, when the key corresponding to the last contact position of the first preset gesture is an operation key, performing the operation corresponding to the operation key comprises:
when the key corresponding to the last contact position of the first preset gesture is a confirm key, uploading all input characters as the password;
and when the key corresponding to the last contact position of the first preset gesture is a cancel key, deleting all input characters and cancelling the transaction.
6. The password input method of claim 1, further comprising, after inputting the character corresponding to the last contact position of the first preset gesture:
playing third prompt information, wherein the third prompt information prompts the user with the number of characters input so far.
7. The password input method of claim 1, wherein, while recording the contact position of the first preset gesture, the method further comprises:
playing fourth prompt information when the contact position of the first preset gesture is outside the virtual keyboard, wherein the fourth prompt information prompts the user that the contact position is not within the area of the virtual keyboard and/or indicates the positional relation between the contact position and the area where the virtual keyboard is located.
8. The password input method of claim 1, wherein, in the password-assisted input mode, the method further comprises:
playing fifth prompt information, wherein the fifth prompt information describes the layout of the virtual keyboard and how to input a password using the auxiliary device.
9. The password input method of claim 1, wherein the number of touch marks on the first support is 3 and the number of touch marks on the second support is 4;
and the touch marks are raised marks.
10. The password input method of claim 9, wherein, among the touch marks, the mark indicating the second row and the mark indicating the second column are both double raised-dot marks;
and the marks indicating the first row, the third row, the fourth row, the first column and the third column are all single raised-dot marks.
11. A terminal device, comprising:
a touch screen for displaying a virtual keyboard;
an auxiliary device arranged to match the virtual keyboard, the auxiliary device comprising: a first support provided with touch marks for indicating the position of each column of keys of the virtual keyboard, and a second support, connected with the first support, provided with touch marks for indicating the position of each row of keys of the virtual keyboard;
and a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 10.
CN202010338676.7A 2020-04-26 2020-04-26 Password input method and terminal equipment Pending CN111580737A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010338676.7A CN111580737A (en) 2020-04-26 2020-04-26 Password input method and terminal equipment
PCT/CN2021/080264 WO2021218431A1 (en) 2020-04-26 2021-03-11 Password inputting method and terminal device

Publications (1)

Publication Number Publication Date
CN111580737A true CN111580737A (en) 2020-08-25

Family

ID=72115233

Country Status (2)

Country Link
CN (1) CN111580737A (en)
WO (1) WO2021218431A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101388922A (en) * 2008-10-30 2009-03-18 深圳华为通信技术有限公司 Method and terminal for prompting character number
CN202372956U (en) * 2011-12-09 2012-08-08 无锡知谷网络科技有限公司 Touch screen with a touch identification function and for a mobile phone for sight-impaired people
CN103106031A (en) * 2013-01-22 2013-05-15 北京小米科技有限责任公司 Operation method and operation device of mobile terminal
CN103744605A (en) * 2013-09-14 2014-04-23 中华电信股份有限公司 Sudoku-based blind braille input device and method
CN104461346A (en) * 2014-10-20 2015-03-25 天闻数媒科技(北京)有限公司 Method and device for visually impaired people to touch screen and intelligent touch screen mobile terminal
CN105892919A (en) * 2016-03-30 2016-08-24 中国联合网络通信集团有限公司 Methods for recognizing key positions and feeding back input values on keyboard of touch screen
CN209216219U (en) * 2018-10-31 2019-08-06 福建新大陆支付技术有限公司 Blind person based on POS touch screen assists input frame-type bracket

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102014224674A1 (en) * 2014-12-02 2016-06-02 Siemens Aktiengesellschaft User interface and method for operating a system
US10705723B2 (en) * 2015-11-23 2020-07-07 Verifone, Inc. Systems and methods for authentication code entry in touch-sensitive screen enabled devices
CN111580737A (en) * 2020-04-26 2020-08-25 百富计算机技术(深圳)有限公司 Password input method and terminal equipment

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021218431A1 (en) * 2020-04-26 2021-11-04 百富计算机技术(深圳)有限公司 Password inputting method and terminal device
US11341496B2 (en) 2020-06-03 2022-05-24 Fiserv, Inc. Hardware device for entering a PIN via tapping on a touch screen display
US11710126B2 (en) 2020-06-03 2023-07-25 Fiserv, Inc. Hardware device for entering a pin via tapping on a touch screen display
CN115145405A (en) * 2021-03-29 2022-10-04 华为技术有限公司 Input method and terminal
WO2022206477A1 (en) * 2021-03-29 2022-10-06 华为技术有限公司 Input method and terminal
CN114489374A (en) * 2021-12-28 2022-05-13 深圳市百富智能新技术有限公司 Multi-point touch data input method and device, sales terminal and storage medium
WO2023124418A1 (en) * 2021-12-28 2023-07-06 深圳市百富智能新技术有限公司 Multi-touch data input method and apparatus, sales terminal, and storage medium

Also Published As

Publication number Publication date
WO2021218431A1 (en) 2021-11-04

Similar Documents

Publication Publication Date Title
CN111580737A (en) Password input method and terminal equipment
CN108475169B (en) System and method for authentication code entry in a touch-sensitive screen-enabled device
US10782874B2 (en) User interface and method for operating a system
US5589855A (en) Visually impaired customer activated terminal method and system
US10732823B2 (en) User interface and method for the protected input of characters
JPH03141490A (en) System and device for automatic cash transaction
CN110383328A (en) The learning Content display methods and its application program of terminal
US20200402423A1 (en) Device and method for identifying a user
JPH07306897A (en) Remote operation terminal system
US10705723B2 (en) Systems and methods for authentication code entry in touch-sensitive screen enabled devices
JP2005250530A (en) Character input device
CN111127780A (en) PIN input detection method of full-touch POS terminal
EP3540573A1 (en) Systems and methods for authentication code entry in touch-sensitive screen enabled devices
KR102035087B1 (en) Keypad system for foreigner korean alphabet learner
JPH0721444A (en) Automatic teller machine
CN109545019B (en) Learning support device and learning support method
JP2013105197A (en) Character string confirmation device and character string confirmation program
JP2003256911A (en) Automatic service device and method for selecting service
Alnfiai Accessible Tools On Touchscreen Devices For Blind And Visually Impaired People
CN110850972A (en) Braille input method, device and computer readable storage medium
CN117519566A (en) Touch screen information input method of electronic equipment and electronic equipment
Samanta et al. VectorEntry: Text Entry Mechanism Using Handheld Touch-Enabled Mobile Devices for People with Visual Impairments
JPH0348548B2 (en)
JP4953920B2 (en) Key input support device, key input method and program thereof.
CN117930993A (en) Method and device for assisting in inputting vision-impaired object and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200825