CN113420279A - Password input method and device - Google Patents

Password input method and device

Info

Publication number
CN113420279A
Authority
CN
China
Prior art keywords
pupil
information
password
eyeball
eye
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110589823.2A
Other languages
Chinese (zh)
Inventor
王帆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202110589823.2A priority Critical patent/CN113420279A/en
Publication of CN113420279A publication Critical patent/CN113420279A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/02Banking, e.g. interest calculation or account maintenance

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Finance (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Accounting & Taxation (AREA)
  • Computer Security & Cryptography (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the application provides a password input method and a password input device, which can be used in the field of finance. The method comprises the following steps: acquiring an eyeball image set of a user during a password input stage, wherein the eyeball image set comprises a plurality of eyeball images; identifying the positional relation between the pupil and the eye in each eyeball image; and determining the password information input by the user according to the position information of the pupil within the eye in each eyeball image. By entering password information through eyeball movement, the password input process is more covert and leaves no input traces, which greatly improves security and reduces the risk of password theft.

Description

Password input method and device
Technical Field
The application relates to the technical field of finance, in particular to a password input method and device.
Background
An ATM, that is, an automated teller machine, is a small machine installed at various bank locations that records a customer's basic account information on the magnetic stripe of a credit-card-sized plastic card (usually a bank card), allowing the customer to perform bank counter services such as withdrawal, deposit and transfer through the machine; most customers simply call this self-service machine an ATM. ATM safety protection equipment mainly refers to peripheral ATM fittings such as ATM protection covers, ATM protection booths and ATM protection cabins. By installation location, ATMs are classified into outdoor ATMs, indoor ATMs and independent ATMs. By usage mode, indoor ATMs come in two types: lobby ATMs and through-wall ATMs. By safety performance requirements, outdoor ATMs have semi-closed and fully-closed protection pavilions; fully-closed protection pavilions are divided by external shape into square and round types, the square type generally being called an outdoor ATM protection pavilion and the round type an ATM protection cabin. The independent self-service kiosk bank, as a high-end ATM protection product, is gradually receiving more attention in the market. The independent operation of the kiosk bank also allows it to enter densely populated places such as residential districts, schools and squares, bringing great convenience to people's production and life.
However, existing ATMs take password input through keys. This input mode offers poor security: a bank-card password can easily be observed and memorized by criminals, leading to password theft and loss of property.
Disclosure of Invention
Aiming at the problems in the prior art, the application provides a password input method and a password input device. First, an eyeball image set of a user during the password input stage is obtained, the set comprising a plurality of eyeball images; the positional relation between the pupil and the eye in each eyeball image is identified; and the password information input by the user is determined according to the position information of the pupil within the eye in each eyeball image. The invention inputs password information through eyeball movement, so the password input process is more covert and leaves no input traces, greatly improving security and reducing the risk of password loss.
In one aspect of the present invention, a password input method is provided, including:
acquiring an eyeball image set of a user in a password input stage, wherein the eyeball image set comprises a plurality of eyeball images;
identifying the position relation of the pupil and the eye in each eyeball image;
and determining password information input by a user according to the position information of the pupil within the eye in each eyeball image.
In a preferred embodiment, the acquiring the eyeball image set of the user in the password input stage includes:
shooting a face video image of a user in a password input stage;
extracting a plurality of face images from the face video image;
and identifying eye regions in the face images, and intercepting images corresponding to the eye regions from each face image to serve as the eyeball images.
In a preferred embodiment, the identifying the position information of the pupil within the eye in each eyeball image includes:
determining a pupil area and an eye white area according to the pixel value range of the pupil and the pixel value range of the eye white;
and generating the position information of the pupil in the eye according to the size of the pupil area, the size of the eye white area and the position relation between the pupil area and the eye white area.
In a preferred embodiment, the determining the password information input by the user according to the position information of the pupil within the eye in each eyeball image includes:
generating pupil movement path information according to the position information of the pupil within the eye in each eyeball image;
and determining password information input by a user according to the pupil movement path information.
In a preferred embodiment, the generating of the pupil movement path information according to the position information of the pupil within the eye in each eyeball image includes:
drawing the pupil center position coordinates of all the image frames;
and connecting corresponding pupil center position coordinates according to the sequence of each frame of image in the image frame sequence to generate pupil motion path information.
In a preferred embodiment, the determining password information input by a user according to the pupil movement path information includes:
and determining input symbol information according to a preset mapping relation between standard pupil movement path information and input symbol information and the pupil movement path information.
In a preferred embodiment, the determining input symbol information according to a preset mapping relationship between standard pupil movement path information and input symbol information and the pupil movement path information includes:
calculating the similarity between the pupil movement path information and all standard pupil movement path information;
sorting the similarities in descending order to determine the standard pupil movement path corresponding to the highest similarity;
and looking up, in the preset mapping relation between standard pupil movement path information and input symbol information, the standard pupil movement path corresponding to the highest similarity, and determining the corresponding input symbol information.
In a preferred embodiment, further comprising: and establishing a mapping relation between the standard pupil movement path information and the input symbol information.
In a preferred embodiment, the format of the password information is numeric, and the determining the password information according to the symbol information includes:
determining the corresponding password information in numeric format according to the preset correspondence between symbols and numbers and the determined symbol information.
In another aspect of the present invention, a password input method is provided, including:
acquiring an eyeball image set of a user in a password input stage, wherein the eyeball image set comprises a plurality of eyeball images;
positioning a focus in each eyeball image, and determining focus information of the eyeballs;
acquiring motion path information of the pupil in the eyeball image set during the password input stage;
and comparing the motion path information of the pupil with the focus information of the eyeball to determine the password information input by the user.
In still another aspect of the present invention, there is provided a password input apparatus including:
the eyeball image set acquisition module is used for acquiring an eyeball image set of a user in a password input stage, wherein the eyeball image set comprises a plurality of eyeball images;
the pupil position identification module is used for identifying the position relation between the pupil and the eyes in each eyeball image;
and the input password determining module is used for determining password information input by a user according to the position information of the pupil within the eye in each eyeball image.
In still another aspect of the present invention, there is provided a password input apparatus including:
the eyeball image set acquisition module is used for acquiring an eyeball image set of a user in a password input stage, wherein the eyeball image set comprises a plurality of eyeball images;
the focus determining module is used for positioning the focus in each eyeball image and determining the focus information of the eyeballs;
the motion path acquisition module is used for acquiring motion path information of the pupil in the eyeball image set during the password input stage;
and the input password determining module is used for comparing the motion path information of the pupil with the focus information of the eyeball and determining password information input by a user.
In another aspect of the present invention, the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the password input method when executing the program.
In still another aspect of the present invention, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the password input method described above.
According to the technical scheme, the password input method comprises the following steps: acquiring an eyeball image set of a user during the password input stage, wherein the eyeball image set comprises a plurality of eyeball images; identifying the positional relation between the pupil and the eye in each eyeball image; and determining password information input by the user according to the position information of the pupil within the eye in each eyeball image. The invention inputs password information through eyeball movement, so the password input process is more covert and leaves no input traces, greatly improving security and reducing the risk of password loss.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description show some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flow chart of a password input method.
Fig. 2 is a schematic view of an eyeball image set acquisition process.
Fig. 3 is a schematic diagram of a pupil position identification process.
Fig. 4 is a schematic diagram of an input password determination process.
Fig. 5 is a schematic diagram of the pupil movement path generation process.
Fig. 6 is a mapping of a standard pupil movement path to an input symbol.
Fig. 7 is a schematic diagram of a motion path comparison process.
Fig. 8 is a first schematic structural diagram of the password input device.
Fig. 9 is a second schematic structural diagram of the password input device.
Fig. 10 is a schematic structural diagram of an electronic device in an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort fall within the protection scope of the present application.
It should be noted that the password input method and apparatus disclosed in the present application may be used in the field of financial technology, and may also be used in any field other than the field of financial technology.
Aiming at the problems in the prior art, the application provides a password input method and a password input device. First, an eyeball image set of a user during the password input stage is obtained, the set comprising a plurality of eyeball images; the positional relation between the pupil and the eye in each eyeball image is identified; and the password information input by the user is determined according to the position information of the pupil within the eye in each eyeball image. By entering password information through eyeball movement, the password input process is more covert and leaves no input traces, greatly improving security and reducing the risk of password loss.
The following describes the password input method and apparatus provided by the present invention in detail with reference to the accompanying drawings.
In a specific embodiment, the present application provides a password input method, as shown in fig. 1, specifically including:
s1, acquiring an eyeball image set of a user in a password input stage, wherein the eyeball image set comprises a plurality of eyeball images;
specifically, the image set is generally obtained by a camera on the loading device, and when the user needs to input the password, the user adopts a preset expression or action, so that the expression or action is captured by the camera and recognized. Once the preset expression or action is recognized, the instruction starts to enter a password input stage, and the camera acquires an eyeball image set of the user in the password input stage. In a specific embodiment, the camera collects the eyeball image set of the user in the password input stage, as shown in fig. 2, including:
s11, shooting a face video image of the user in the password input stage;
specifically, for the precision of subsequent image processing, the shooting device needs to acquire a front video image of the human face as much as possible. The user may have a slight rotation of the face during the password input stage, so the photographing device for photographing the video image of the face needs to rotate along with the rotation of the face. In a specific embodiment, in order to enable the shooting device to rotate along with the face, the face feature points in the video image may be detected first, the deflection angle of the face relative to the shooting device is estimated according to the difference between the face feature points and the standard front face feature points, and then the shooting device is deflected by a corresponding angle, so that the shooting device faces the face.
S12, extracting a plurality of face images from the face video image;
specifically, a plurality of face images are extracted from a captured face video image, and the extraction interval needs to be determined first, which may be average extraction, for example, if the frame rate of the video is 20, that is, 20 frames of images are generated in one second, 1,5,10,15,20 of the images may be extracted; non-uniform extraction, such as extraction 2,6,10,14,18, may also be performed because the eyeball features are apparent in these images.
And S13, identifying the eye region in the face image, and intercepting the image corresponding to the eye region from each face image as the eyeball image.
Specifically, after frame extraction is completed, the eye region in each face image is identified using an image-recognition algorithm. In a specific embodiment, the eye region can be identified from the pixel characteristics of the eye using conventional grayscale statistics, or detected using a machine learning algorithm.
S2, identifying the position relation of the pupil and the eye in each eyeball image;
specifically, the position of the pupil in the eye region determines the line of sight of the user, so that the position information of the eye where the exit pupil is located needs to be identified, as shown in fig. 3, the specific steps include:
s21, determining the pupil area and the white area according to the pupil pixel value range and the white pixel value range;
specifically, the pupil and the white of the eye are segmented by using the difference between the pixel gray values of the pupil and the white of the eye, in a specific embodiment, the gray statistics is performed on the image of the eye region, the gray threshold value for performing the binary segmentation is determined, and the pupil and the white of the eye region are segmented by using the gray threshold value. For example, when the binarized gradation threshold is determined to be 187 by counting the pixel gradation values of the eye region, the region having the pixel gradation value larger than 187 in the eye image is determined as the white of the eye, and the region smaller than 187 is determined as the pupil. In another specific embodiment, the gray scale values of the white eye and the pupil area of a plurality of historical eye images are counted to obtain a range of gray scale values of the pupil pixel and a range of gray scale values of the white eye, for example, the obtained range of the pupil pixel is 0-105, the range of gray scale values of the white eye is 200-255, and then the exit pupil area and the white eye area are respectively determined according to the determined range of gray scale values of the pupil pixel and the determined range of gray scale values of the white eye.
And S22, generating the position information of the pupil in the eye according to the size of the pupil area, the size of the white eye area and the position relation between the pupil area and the white eye area.
Specifically, the position information of the pupil within the eye includes: the pupil center position, the proportion of the pupil center along the horizontal direction of the eye, and the proportion of the pupil center along the vertical direction of the eye. The pupil center can be determined as follows: take any point A on the edge of the pupil area, compute the distances from A to the remaining edge points, and find the point B at the maximum distance; the midpoint of segment AB is the pupil center. The horizontal proportion can be determined as follows: subtract the minimum and maximum horizontal coordinates of the eye-white area from the horizontal coordinate of the pupil center to obtain the distances from the pupil to the left and right sides of the eye, then divide the left distance by the sum of the left and right distances; this quotient is the proportion of the pupil center in the horizontal direction, and the two eyes of a person usually give approximately equal values. Similarly, the vertical proportion can be determined by subtracting the minimum and maximum vertical coordinates of the eye-white area from the vertical coordinate of the pupil center to obtain the distances from the pupil to the upper and lower edges of the eye, and dividing the upper distance by the sum of the upper and lower distances.
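The center and ratio computations above can be sketched as follows (a simplified illustration with hypothetical names; the edge points and eye-white coordinates would come from the segmentation step):

```python
import numpy as np

def pupil_center(edge_points: np.ndarray) -> np.ndarray:
    """Take any edge point A, find the edge point B farthest from it,
    and return the midpoint of segment AB as the pupil center."""
    a = edge_points[0]
    dists = np.linalg.norm(edge_points - a, axis=1)
    b = edge_points[int(dists.argmax())]
    return (a + b) / 2.0

def horizontal_ratio(center_x: float, white_xs: np.ndarray) -> float:
    """Left distance divided by (left + right distance): the proportion
    of the pupil center along the eye's horizontal extent."""
    left = center_x - white_xs.min()
    right = white_xs.max() - center_x
    return left / (left + right)

# Four edge points of a circular pupil of radius 5 centered at (10, 10):
edges = np.array([[5, 10], [15, 10], [10, 5], [10, 15]], dtype=float)
center = pupil_center(edges)                             # [10., 10.]
ratio = horizontal_ratio(center[0], np.array([2, 18]))   # 0.5
```

The vertical ratio follows the same pattern with y-coordinates.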
And S3, determining password information input by the user according to the position information of the eye where the pupil is located in each eyeball image.
In a specific embodiment, the determining, according to the position information of the pupil within the eye in each eyeball image, the password information input by the user, as shown in fig. 4, specifically includes the following steps:
s31, generating pupil movement path information according to the position information of the eye where the pupil is located in each eyeball image;
in a specific embodiment, the user inputs the password by the movement of the pupil, so that the generation of the pupil movement path is crucial to the determination of the input password. As shown in fig. 5, the generating of the pupil movement path information according to the position information of the eye where the pupil is located in each eyeball image includes:
s311, drawing the pupil center position coordinates of all the image frames;
specifically, according to the coordinates of the pupil center positions in the acquired eyeball image, all the pupil center positions are drawn in the same coordinate system, and it can be understood that each pupil center corresponds to one point in the coordinate system.
And S312, connecting corresponding pupil center position coordinates according to the sequence of each frame of image in the image frame sequence to generate pupil movement path information.
Specifically, the video captured by the image-capturing device is a sequence of image frames, each with its place in the sequence. Connecting the corresponding pupil centers according to the order of the image frames forms the motion path of the pupil.
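Steps S311-S312 can be sketched as connecting the ordered per-frame centers into step vectors (a minimal illustration; the function name is hypothetical):

```python
def pupil_motion_path(centers):
    """Connect consecutive pupil-center coordinates, in frame order,
    into a list of step vectors (dx, dy) forming the motion path."""
    return [(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(centers, centers[1:])]

# Three frames: the pupil moves right-and-down, then right.
path = pupil_motion_path([(0, 0), (3, -4), (6, -4)])
print(path)  # [(3, -4), (3, 0)]
```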
And S32, determining password information input by the user according to the pupil movement path information.
In a specific embodiment, determining the password information input by the user according to the pupil movement path information means determining input symbol information from the pupil movement path information using a preset mapping relationship between standard pupil movement path information and input symbol information. This mapping is pre-established; for example, a mapping between standard pupil movement paths and input symbols is established as shown in fig. 6. Given the mapping, the input symbol information can be determined by comparing the current pupil movement path with the standard pupil movement paths. The specific comparison steps, as shown in fig. 7, include:
s321, calculating the similarity between the pupil movement path information and all standard pupil movement path information;
specifically, the angle between the motion vector of each step in the motion path and the positive direction of the x-axis is used as the characteristic value of the step motion, for example, the standard pupil motion path in fig. 6 may be converted into a characteristic vector. The corresponding current pupil motion path is also converted into a corresponding feature vector. The similarity calculation of the two vectors is obtained by a cosine method, if the two vectors are completely consistent, the similarity is 1, and if the two vectors are completely irrelevant, the similarity is 0. The similarity between the pupil movement path information and all the standard pupil movement path information is a value between 0 and 1.
S322, sequencing the similarity in a descending order, and determining a standard pupil movement path corresponding to the similarity at the head position;
specifically, all the calculated similarities are sorted from large to small, wherein the standard pupil motion path corresponding to the maximum similarity is the closest to the current pupil motion path, and the standard pupil motion path corresponding to the maximum similarity is used as the approximation of the current pupil motion path. For example, if the similarity of the current pupil movement path and all the standard pupil movement paths is calculated to be 0.1,0.3,0.95,0.34,0.72,0.55,0.78,0.26, and 0.86, respectively, the maximum similarity is 0.95, and the corresponding movement path is an approximation of the current pupil movement path.
S323, looking up the standard pupil movement path corresponding to the highest similarity in the preset mapping relation between standard pupil movement path information and input symbol information, and determining the corresponding input symbol information.
Specifically, once the standard pupil movement path corresponding to the maximum similarity is determined, the corresponding input symbol information can be found from the preset mapping relationship. For example, if in the mapping the pupil movement path with the maximum similarity represents the symbol 3, the input password symbol is determined to be 3.
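Putting S321-S323 together, a sketch under the assumption that each standard path is stored as a feature vector keyed by its symbol (the table below is invented for illustration):

```python
import math

def match_symbol(current, standard_paths):
    """Return the symbol whose standard feature vector has the highest
    cosine similarity to the current path's feature vector."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u))
                      * math.sqrt(sum(b * b for b in v)))
    return max(standard_paths, key=lambda s: cos(current, standard_paths[s]))

# Hypothetical mapping of symbols to standard path feature vectors.
standards = {"1": [1.0, 0.0], "2": [0.0, 1.0], "3": [0.7, 0.7]}
print(match_symbol([0.9, 0.1], standards))  # prints: 1
```

Taking the argmax directly is equivalent to the descending sort followed by picking the head element.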
In a specific embodiment, the format of the password information is numeric, and determining the password information according to the symbol information includes:
determining the corresponding password information in numeric format according to the preset correspondence between symbols and numbers and the determined symbol information.
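A minimal sketch of this final symbol-to-digit lookup (the correspondence table here is invented for illustration; the patent does not specify the actual symbols):

```python
# Hypothetical preset correspondence between symbols and digits.
SYMBOL_TO_DIGIT = {"circle": "0", "triangle": "3", "square": "7"}

def symbols_to_password(symbols):
    """Translate a sequence of recognized symbols into password
    information in numeric format via the preset correspondence."""
    return "".join(SYMBOL_TO_DIGIT[s] for s in symbols)

print(symbols_to_password(["triangle", "square"]))  # prints: 37
```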
The above method is further described with reference to a specific implementation scenario.
Suppose the user needs to input the password 1223456. Following the method provided by the application, the user first blinks three times as the signal to start password input. After the shooting equipment captures this signal, it begins collecting frontal eyeball images of the user to form the eyeball image set of the password input stage. Assume the mapping between pupil motion paths and input symbols is as shown in fig. 6: to input 1, the user moves the pupil from the upper left to the center and then to the lower right. During this movement, 40 frames of frontal eyeball images are captured; processing the pupil position information in these images yields the motion path information, represented as the vector (-40, -51). By similarity calculation this motion path is closest to the character 1, and the character 1 is displayed on the screen. The user then confirms the input, for example with a mouth-opening action; the first password character, 1, is thereby successfully entered. The subsequent password characters are input in the same way.
In a specific embodiment, the present application further provides a password input method, including:
acquiring an eyeball image set of a user in a password input stage, wherein the eyeball image set comprises a plurality of eyeball images;
positioning a focus in each eyeball image, and determining focus information of the eyeballs;
specifically, according to the information of the eyeball part where the pupil is located in the left eyeball and the right eyeball, the sight line directions of the left eyeball and the right eyeball can be respectively determined, the sight line directions of the two eyeballs are converged and finally intersected at one point, namely the focus of the eyeballs.
Acquiring the motion path information of the concentrated pupils of the eyeball images in the password input stage;
and comparing the motion path information of the pupil with the focus information of the eyeball to determine the password information input by the user.
As can be seen from the above description, the password input method provided by the present invention obtains an eyeball image set of a user in the password input stage, where the eyeball image set includes a plurality of eyeball images; identifies the positional relationship between the pupil and the eye in each eyeball image; and determines the password information input by the user according to the position of the pupil within the eye in each eyeball image. Because the password information is input through the eyeballs, the password input process is more covert and leaves no input trace, which greatly improves security and reduces the risk of password leakage.
In terms of software, the present application provides an embodiment of a password input device for executing all or part of the password input method. Referring to fig. 8, the password input device specifically includes the following modules:
the eyeball image set acquisition module is used for acquiring an eyeball image set of a user in a password input stage, wherein the eyeball image set comprises a plurality of eyeball images;
the pupil position identification module is used for identifying the position relation between the pupil and the eyes in each eyeball image;
and the input password determining module is used for determining password information input by a user according to the position information of the eye where the pupil is located in each eyeball image.
Referring to fig. 9, the present application provides an embodiment of a password input device for executing all or part of the password input method, where the password input device specifically includes the following modules:
the eyeball image set acquisition module is used for acquiring an eyeball image set of a user in a password input stage, wherein the eyeball image set comprises a plurality of eyeball images;
the focus determining module is used for positioning the focus in each eyeball image and determining the focus information of the eyeballs;
the motion path acquisition module is used for acquiring motion path information of the eyeballs in the eyeball image set in the password input stage;
and the input password determining module is used for comparing the motion path information of the pupil with the focus information of the eyeball and determining password information input by a user.
As can be seen from the above description, the password input device provided by the present invention first obtains an eyeball image set of a user in the password input stage, where the eyeball image set includes a plurality of eyeball images; identifies the positional relationship between the pupil and the eye in each eyeball image; and determines the password information input by the user according to the position of the pupil within the eye in each eyeball image. Because the present invention inputs the password information through the eyeballs, the password input process is more covert and leaves no input trace, which greatly improves security and reduces the risk of password leakage.
In a specific embodiment, the present application provides a password input device for performing the following steps:
s1, acquiring an eyeball image set of a user in a password input stage, wherein the eyeball image set comprises a plurality of eyeball images;
Specifically, the image set is generally acquired by a camera on the host device. When the user needs to input a password, the user performs a preset expression or action, which is captured and recognized by the camera. Once the preset expression or action is recognized, the system enters the password input stage, and the camera collects the eyeball image set of the user during that stage. In a specific embodiment, the eyeball image set acquisition module in the apparatus is configured to perform the following steps:
s11, shooting a face video image of the user in the password input stage;
Specifically, to ensure the precision of subsequent image processing, the shooting device needs to capture a frontal video image of the face as far as possible. The user's face may rotate slightly during the password input stage, so the shooting device capturing the face video needs to rotate along with the face. In a specific embodiment, to enable the shooting device to follow the face, the facial feature points in the video image may first be detected, the deflection angle of the face relative to the shooting device estimated from the difference between the detected feature points and standard frontal feature points, and the shooting device then deflected by the corresponding angle so that it directly faces the face.
S12, extracting a plurality of face images from the face video image;
Specifically, a plurality of face images are extracted from the captured face video, and the extraction interval needs to be determined first. The extraction may be uniform: for example, if the frame rate of the video is 20, that is, 20 frames are generated per second, frames 1, 5, 10, 15 and 20 may be extracted. Non-uniform extraction is also possible, such as extracting frames 2, 6, 10, 14 and 18, for instance because the eyeball features are clearer in those frames.
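The uniform extraction described above can be sketched as follows (the function name and the 1-based frame indexing are illustrative assumptions, not part of the application):

```python
def sample_frames(num_frames, step=5):
    """Uniformly sample 1-based frame indices from a video segment.

    With num_frames=20 and step=5 this keeps frames 5, 10, 15 and 20;
    frame 1 is added to match the example in the text.
    """
    indices = {1}
    indices.update(range(step, num_frames + 1, step))
    return sorted(indices)
```

For non-uniform extraction, the index list would instead be chosen wherever the eyeball features are clearest.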
And S13, identifying the eye region in the face image, and intercepting the image corresponding to the eye region from each face image as the eyeball image.
Specifically, after the image extraction is completed, the eye region in each face image is identified using an image recognition algorithm. In a specific embodiment, the eye region can be identified from the pixel characteristics of the eye using conventional grayscale statistics, or detected using a machine learning algorithm.
S2, identifying the position relation of the pupil and the eye in each eyeball image;
specifically, the position of the pupil in the eye region determines the line of sight of the user, so the pupil position identification module is configured to perform the following specific steps:
s21, determining the pupil area and the white area according to the pupil pixel value range and the white pixel value range;
Specifically, the pupil and the white of the eye are segmented using the difference between their grayscale pixel values. In one specific embodiment, grayscale statistics are computed over the eye-region image, a grayscale threshold for binary segmentation is determined, and the pupil and eye-white areas are segmented by this threshold. For example, if the binarization threshold determined by counting the pixel grayscale values of the eye region is 187, regions of the eye image with grayscale values greater than 187 are classified as eye white, and regions with values less than 187 as pupil. In another specific embodiment, the grayscale values of the eye-white and pupil areas in a number of historical eye images are counted to obtain a pupil grayscale range and an eye-white grayscale range, for example a pupil range of 0-105 and an eye-white range of 200-255; the pupil area and the eye-white area are then determined according to these ranges.
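A minimal NumPy sketch of the binary segmentation step, using the illustrative threshold of 187 from the text (the function name is hypothetical):

```python
import numpy as np

def segment_eye(gray, threshold=187):
    """Split a grayscale eye-region image into pupil and eye-white masks.

    Pixels brighter than the threshold are treated as eye white and the
    rest as pupil, following the example threshold in the text.
    """
    white_mask = gray > threshold
    pupil_mask = ~white_mask
    return pupil_mask, white_mask
```

In practice the threshold would be re-estimated per image from the grayscale statistics rather than hard-coded.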
And S22, generating the position information of the pupil in the eye according to the size of the pupil area, the size of the white eye area and the position relation between the pupil area and the white eye area.
Specifically, the position information of the pupil in the eye includes the pupil center position, the proportion of the pupil center in the horizontal direction of the eye, and the proportion of the pupil center in the vertical direction of the eye. The pupil center position may be determined as follows: take any point A on the edge of the pupil area, compute the distance from A to each of the remaining edge points, find the point B at the maximum distance from A, and take the midpoint of segment AB as the pupil center. The proportion of the pupil center in the horizontal direction may be determined as follows: subtract the minimum and maximum horizontal coordinates of the eye-white area from the horizontal coordinate of the pupil center to obtain the distances from the pupil to the left and right sides of the eye, respectively; the left distance divided by the sum of the left and right distances is the horizontal proportion. For a typical person, the proportion values of the two eyes are approximately equal. Similarly, the proportion of the pupil center in the vertical direction may be determined by subtracting the minimum and maximum vertical coordinates of the eye-white area from the vertical coordinate of the pupil center to obtain the distances from the pupil to the upper and lower edges of the eye, and dividing the upper distance by the sum of the upper and lower distances.
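The pupil-center and horizontal-proportion computations above can be sketched as follows (a simplified illustration; the function names and the point representation are assumptions):

```python
def pupil_center(edge_points):
    """Per the text: pick an edge point A, find the edge point B farthest
    from it, and return the midpoint of segment AB."""
    ax, ay = edge_points[0]
    bx, by = max(edge_points,
                 key=lambda p: (p[0] - ax) ** 2 + (p[1] - ay) ** 2)
    return ((ax + bx) / 2, (ay + by) / 2)


def horizontal_ratio(center_x, white_min_x, white_max_x):
    """Distance from the pupil center to the left edge of the eye white,
    divided by the eye's total horizontal extent."""
    left = center_x - white_min_x
    right = white_max_x - center_x
    return left / (left + right)
```

The vertical proportion is computed the same way using the vertical coordinates.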
And S3, determining password information input by the user according to the position information of the eye where the pupil is located in each eyeball image.
In a specific embodiment, the input password determination module is configured to perform the following specific steps:
s31, generating pupil movement path information according to the position information of the eye where the pupil is located in each eyeball image;
in a specific embodiment, the user inputs the password by the movement of the pupil, so that the generation of the pupil movement path is crucial to the determination of the input password. The generating of the pupil movement path information according to the position information of the eye where the pupil is located in each eyeball image comprises:
s311, drawing the pupil center position coordinates of all the image frames;
specifically, according to the coordinates of the pupil center positions in the acquired eyeball image, all the pupil center positions are drawn in the same coordinate system, and it can be understood that each pupil center corresponds to one point in the coordinate system.
And S312, connecting corresponding pupil center position coordinates according to the sequence of each frame of image in the image frame sequence to generate pupil movement path information.
Specifically, the image capturing device captures a video, i.e., a sequence of image frames, each of which has a position in the sequence. Connecting the corresponding pupil centers in the order of the image frames forms the movement path of the pupil.
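Steps S311 and S312 amount to connecting the ordered center coordinates into step vectors; a minimal sketch (names assumed):

```python
def movement_path(centers):
    """Connect successive pupil-center coordinates, taken in frame order,
    into a list of step vectors describing the pupil's movement path."""
    return [(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(centers, centers[1:])]
```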
And S32, determining password information input by the user according to the pupil movement path information.
In a specific embodiment, determining the password information input by the user according to the pupil movement path information means determining the input symbol information according to the pupil movement path information and a preset mapping relationship between standard pupil movement path information and input symbols. This mapping is established in advance; for example, a mapping between standard pupil movement paths and input symbols is shown in fig. 6. Given this mapping, the input symbol can be determined by comparing the current pupil movement path with the standard pupil movement paths. The comparison comprises the following steps:
s321, calculating the similarity between the pupil movement path information and all standard pupil movement path information;
Specifically, the angle between each step's motion vector and the positive direction of the x-axis is used as the characteristic value of that step, so that, for example, each standard pupil movement path in fig. 6 can be converted into a feature vector. The current pupil movement path is likewise converted into a feature vector. The similarity of the two vectors is computed by the cosine method: if the two vectors are identical the similarity is 1, and if they are completely unrelated the similarity is 0. The similarity between the pupil movement path information and each piece of standard pupil movement path information is therefore a value between 0 and 1.
S322, sequencing the similarity in a descending order, and determining a standard pupil movement path corresponding to the similarity at the head position;
Specifically, all the calculated similarities are sorted from largest to smallest. The standard pupil movement path corresponding to the maximum similarity is the closest to the current pupil movement path and is taken as its approximation. For example, if the similarities between the current pupil movement path and all the standard pupil movement paths are calculated to be 0.1, 0.3, 0.95, 0.34, 0.72, 0.55, 0.78, 0.26 and 0.86, the maximum similarity is 0.95, and the corresponding standard path is taken as the approximation of the current pupil movement path.
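Steps S321 and S322 can be sketched as a cosine-similarity comparison followed by taking the best match (a simplified illustration; the construction of feature vectors from step angles is abbreviated to plain vectors, and all names are assumptions):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = (math.sqrt(sum(a * a for a in u))
            * math.sqrt(sum(b * b for b in v)))
    return dot / norm if norm else 0.0


def best_match(path_vec, standard_paths):
    """Return the symbol whose standard path vector is most similar to the
    current path vector, i.e., the head of the descending similarity sort."""
    return max(standard_paths,
               key=lambda sym: cosine_similarity(path_vec, standard_paths[sym]))
```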
S323, searching the standard pupil movement path corresponding to the similarity at the head position in the mapping relation between the preset standard pupil movement path information and the input symbol information, and determining the corresponding input symbol information.
Specifically, a standard pupil movement path corresponding to the maximum similarity is determined, and corresponding input symbol information can be found out according to a preset mapping relationship. For example, in the mapping relationship, the pupil movement path corresponding to the maximum similarity represents symbol 3, and the input password symbol is determined to be 3.
In a specific embodiment, the format of the password information is a number, and determining the password information according to the symbol information includes:
The password information in numeric format is determined according to the determined symbol information and a preset correspondence between symbols and numbers.
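A minimal sketch of the symbol-to-digit lookup (the table contents here are hypothetical; in practice the correspondence is preset alongside the path mapping of fig. 6):

```python
# Hypothetical preset correspondence between recognized symbols and digits.
SYMBOL_TO_DIGIT = {'A': '1', 'B': '2', 'C': '3'}

def to_numeric_password(symbols):
    """Translate a sequence of recognized symbols into numeric password text."""
    return ''.join(SYMBOL_TO_DIGIT[s] for s in symbols)
```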
The above method is further described with reference to a specific implementation scenario.
Suppose the user needs to input the password 1223456. According to the method provided by the application, the user first clicks three times as the signal to start password input. After the shooting device captures this signal, it starts to collect frontal eyeball images of the user, forming the eyeball image set of the password input stage. Assume the mapping between pupil movement paths and input symbols is as shown in fig. 6. To input 1, the user moves the pupil from the upper left to the center and then to the lower right. During this movement, 40 frames of frontal eyeball images are captured, and the position information of the pupil in these images is processed to obtain the movement path information, represented as the vector (-40, -51). By similarity calculation, this movement path is closest to the character 1, so the character 1 is displayed on the screen. The user then confirms the input, for example by opening the mouth, and the password character 1 is successfully entered. The subsequent password characters are input in the same way.
In a specific embodiment, the present application further provides a password input method, including:
acquiring an eyeball image set of a user in a password input stage, wherein the eyeball image set comprises a plurality of eyeball images;
positioning a focus in each eyeball image, and determining focus information of the eyeballs;
Specifically, from the position of the pupil within each of the left and right eyeballs, the sight-line directions of the left and right eyeballs can be determined respectively. The two sight lines converge and finally intersect at one point, namely the focus of the eyeballs.
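The convergence of the two sight lines can be illustrated as a 2-D line intersection (a simplified sketch; real gaze estimation works in 3-D, and all names are assumptions):

```python
def gaze_focus(origin_l, dir_l, origin_r, dir_r):
    """Intersect the left and right sight lines to locate the eye focus.

    Each sight line is given by an origin point and a direction vector;
    solves origin_l + t*dir_l == origin_r + s*dir_r for t. Returns None
    when the sight lines are parallel and never converge.
    """
    (x1, y1), (dx1, dy1) = origin_l, dir_l
    (x2, y2), (dx2, dy2) = origin_r, dir_r
    denom = dx1 * dy2 - dy1 * dx2          # 2-D cross product of directions
    if denom == 0:
        return None
    t = ((x2 - x1) * dy2 - (y2 - y1) * dx2) / denom
    return (x1 + t * dx1, y1 + t * dy1)
```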
Acquiring movement path information of the pupils in the eyeball image set during the password input stage;
and comparing the motion path information of the pupil with the focus information of the eyeball to determine the password information input by the user.
As can be seen from the above description, the password input device provided by the present invention includes the eyeball image set acquisition module, which acquires the eyeball image set of the user in the password input stage; the pupil position identification module, which identifies the positional relationship between the pupil and the eye in each eyeball image; and the input password determination module, which determines the password information input by the user according to the position of the pupil within the eye in each eyeball image. Because the present invention inputs the password information through the eyeballs, the password input process is more covert and leaves no input trace, which greatly improves security and reduces the risk of password leakage.
In terms of hardware, the present application provides an embodiment of an electronic device for implementing all or part of contents in a password input method, where the electronic device specifically includes the following contents:
fig. 10 is a schematic block diagram of a system configuration of an electronic device 9600 according to an embodiment of the present application. As shown in fig. 10, the electronic device 9600 can include a central processor 9100 and a memory 9140; the memory 9140 is coupled to the central processor 9100. Notably, this fig. 10 is exemplary; other types of structures may also be used in addition to or in place of the structure to implement telecommunications or other functions.
In one embodiment, the password entry method functionality may be integrated into the central processor. Wherein the central processor may be configured to control:
s1, acquiring an eyeball image set of a user in a password input stage, wherein the eyeball image set comprises a plurality of eyeball images;
s2, identifying the position relation of the pupil and the eye in each eyeball image;
and S3, determining the password input by the user according to the position information of the eye where the pupil is located in each eyeball image.
As can be seen from the above description, the electronic device provided by the embodiment of the application inputs the password information through the eyeballs, so that the password input process is more covert and leaves no input trace, which greatly improves security and reduces the risk of password leakage.
In another embodiment, the password input device may be configured separately from the central processor 9100, for example, the password input device may be configured as a chip connected to the central processor 9100, and the function of the password input method is realized by the control of the central processor.
As shown in fig. 10, the electronic device 9600 may further include: a communication module 9110, an input unit 9120, an audio processor 9130, a display 9160, and a power supply 9170. It is noted that the electronic device 9600 also does not necessarily include all of the components shown in fig. 10; in addition, the electronic device 9600 may further include components not shown in fig. 10, which can be referred to in the prior art.
As shown in fig. 10, a central processor 9100, sometimes referred to as a controller or operational control, can include a microprocessor or other processor device and/or logic device, which central processor 9100 receives input and controls the operation of the various components of the electronic device 9600.
The memory 9140 may be, for example, one or more of a buffer, a flash memory, a hard drive, a removable medium, a volatile memory, a non-volatile memory, or other suitable device. It can store relevant information as well as the programs for processing that information, and the central processor 9100 can execute the programs stored in the memory 9140 to realize information storage, processing, and the like.
The input unit 9120 provides input to the central processor 9100. The input unit 9120 is, for example, a key or a touch input device. Power supply 9170 is used to provide power to electronic device 9600. The display 9160 is used for displaying display objects such as images and characters. The display may be, for example, an LCD display, but is not limited thereto.
The memory 9140 can be a solid-state memory, e.g., read-only memory (ROM), random access memory (RAM), a SIM card, or the like. It may also be a memory that retains information even when powered off, can be selectively erased, and can be supplied with additional data; such memory is sometimes referred to as EPROM or the like. The memory 9140 could also be some other type of device. The memory 9140 includes a buffer memory 9141 (sometimes referred to as a buffer) and may include an application/function storage portion 9142, which stores application programs and function programs or the flow for operating the electronic device 9600 executed by the central processor 9100.
The memory 9140 can also include a data store 9143, the data store 9143 being used to store data, such as contacts, digital data, pictures, sounds, and/or any other data used by an electronic device. The driver storage portion 9144 of the memory 9140 may include various drivers for the electronic device for communication functions and/or for performing other functions of the electronic device (e.g., messaging applications, contact book applications, etc.).
The communication module 9110 is a transmitter/receiver 9110 that transmits and receives signals via an antenna 9111. The communication module (transmitter/receiver) 9110 is coupled to the central processor 9100 to provide input signals and receive output signals, which may be the same as in the case of a conventional mobile communication terminal.
Based on different communication technologies, a plurality of communication modules 9110, such as a cellular network module, a bluetooth module, and/or a wireless local area network module, may be provided in the same electronic device. The communication module (transmitter/receiver) 9110 is also coupled to a speaker 9131 and a microphone 9132 via an audio processor 9130 to provide audio output via the speaker 9131 and receive audio input from the microphone 9132, thereby implementing ordinary telecommunications functions. The audio processor 9130 may include any suitable buffers, decoders, amplifiers and so forth. In addition, the audio processor 9130 is also coupled to the central processor 9100, thereby enabling recording locally through the microphone 9132 and enabling locally stored sounds to be played through the speaker 9131.
Embodiments of the present application further provide a computer-readable storage medium capable of implementing all steps of the password input method in the foregoing embodiments. The computer-readable storage medium stores a computer program that, when executed by a processor (on a server or a client), implements all steps of the password input method in the foregoing embodiments; for example, the processor implements the following steps:
s1, acquiring an eyeball image set of a user in a password input stage, wherein the eyeball image set comprises a plurality of eyeball images;
s2, identifying the position relation of the pupil and the eye in each eyeball image;
and S3, determining the password input by the user according to the position information of the eye where the pupil is located in each eyeball image.
As can be seen from the above description, the computer-readable storage medium provided in the embodiment of the present application inputs the password information through the eyeballs, making the password input process more covert and free of input traces, thereby greatly improving security and reducing the risk of password leakage.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The principle and the implementation mode of the invention are explained by applying specific embodiments in the invention, and the description of the embodiments is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (14)

1. A password input method, comprising:
acquiring an eyeball image set of a user in a password input stage, wherein the eyeball image set comprises a plurality of eyeball images;
identifying the position relation of the pupil and the eye in each eyeball image;
and determining password information input by a user according to the position information of the eye where the pupil is located in each eyeball image.
2. The method for inputting a password according to claim 1, wherein the obtaining of the eyeball image set of the user in the password input stage comprises:
shooting a face video image of a user in a password input stage;
extracting a plurality of face images from the face video image;
and identifying eye regions in the face images, and intercepting images corresponding to the eye regions from each face image to serve as the eyeball images.
3. The password input method according to claim 1, wherein the identifying of the position information of the eye where the pupil is located in each eye image comprises:
determining a pupil area and an eye white area according to the pixel value range of the pupil and the pixel value range of the eye white;
and generating the position information of the pupil in the eye according to the size of the pupil area, the size of the eye white area and the position relation between the pupil area and the eye white area.
4. The password input method according to claim 1, wherein the determining of the password information input by the user according to the position information of the eye where the pupil is located in each eye image comprises:
generating pupil movement path information according to the position information of the eye where the pupil is located in each eyeball image;
and determining password information input by a user according to the pupil movement path information.
5. The password input method according to claim 4, wherein the generating of the pupil movement path information according to the position information of the eye where the pupil is located in each eye image includes:
drawing the pupil center position coordinates of all the image frames;
and connecting corresponding pupil center position coordinates according to the sequence of each frame of image in the image frame sequence to generate pupil motion path information.
6. The password input method according to claim 4, wherein the determining password information input by a user according to the pupil movement path information includes:
and determining input symbol information according to a preset mapping relation between standard pupil movement path information and input symbol information and the pupil movement path information.
7. The password input method according to claim 6, wherein determining the input symbol information according to the preset mapping relationship between standard pupil movement path information and input symbol information, together with the pupil movement path information, comprises:
calculating the similarity between the pupil movement path information and each piece of standard pupil movement path information;
sorting the similarities in descending order, and determining the standard pupil movement path corresponding to the highest similarity;
and looking up the standard pupil movement path corresponding to the highest similarity in the preset mapping relationship between standard pupil movement path information and input symbol information, and determining the corresponding input symbol information.
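The matching step of claim 7 could look like the following sketch. The patent does not fix a similarity measure; the negative mean point-to-point distance used here is an assumed stand-in, and `standard_paths` (a dict from symbol to its standard path) is a hypothetical representation of the preset mapping relationship.

```python
import math

def match_symbol(path, standard_paths):
    """Score the observed pupil path against every standard path,
    sort the similarities in descending order, and return the input
    symbol whose standard path sits at the head position."""
    def similarity(a, b):
        n = min(len(a), len(b))
        if n == 0:
            return float("-inf")
        # Smaller mean distance between paired points -> higher similarity.
        return -sum(math.dist(a[i], b[i]) for i in range(n)) / n

    ranked = sorted(standard_paths.items(),
                    key=lambda kv: similarity(path, kv[1]),
                    reverse=True)
    return ranked[0][0]  # symbol of the best-scoring standard path
```

In practice the observed and standard paths would be resampled to a common length before comparison; the index-by-index pairing above assumes that has already been done.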
8. The password input method according to claim 7, further comprising: establishing the mapping relationship between the standard pupil movement path information and the input symbol information.
9. The password input method according to claim 1, wherein the password information is in numeric format, and determining the password information according to the symbol information comprises:
determining the corresponding password information in numeric format according to a preset correspondence between symbols and numbers, together with the determined symbol information.
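Claim 9's symbol-to-digit conversion, under a hypothetical pairing (the patent leaves the concrete symbol–number correspondence to the implementation):

```python
# Hypothetical preset correspondence between gaze-drawn symbols and digits.
SYMBOL_TO_DIGIT = {"L": "1", "V": "2", "Z": "3", "O": "0"}

def symbols_to_password(symbols):
    """Convert the sequence of recognized input symbols into a
    numeric-format password string."""
    return "".join(SYMBOL_TO_DIGIT[s] for s in symbols)
```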
10. A password input method, comprising:
acquiring an eyeball image set of a user in a password input stage, wherein the eyeball image set comprises a plurality of eyeball images;
locating the focus in each eyeball image, and determining focus information of the eyeballs;
acquiring movement path information of the pupils in the eyeball image set during the password input stage;
and comparing the movement path information of the pupils with the focus information of the eyeballs to determine the password information input by the user.
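Claim 10 cross-checks the pupil movement path against the eye's focus (gaze) information before accepting the input. The comparison rule is not specified in the claim; the sketch below assumes both are point sequences of the same sampling and accepts when their mean deviation stays within a tolerance.

```python
import math

def paths_agree(pupil_path, focus_points, tol=0.1):
    """Return True when the pupil movement path and the per-frame
    focus information are consistent, i.e. their mean point-to-point
    deviation does not exceed `tol` (an assumed acceptance rule)."""
    n = min(len(pupil_path), len(focus_points))
    if n == 0:
        return False
    mean_dev = sum(math.dist(pupil_path[i], focus_points[i])
                   for i in range(n)) / n
    return mean_dev <= tol
```

Requiring the two independently derived signals to agree makes spoofing with a static photo or a replayed pupil trace harder, which is presumably the point of the cross-check.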
11. A password input apparatus, comprising:
the eyeball image set acquisition module is used for acquiring an eyeball image set of a user in a password input stage, wherein the eyeball image set comprises a plurality of eyeball images;
the pupil position identification module is used for identifying the position information of the pupil within the eye in each eyeball image;
and the input password determining module is used for determining the password information input by the user according to the position information of the pupil within the eye in each eyeball image.
12. A password input apparatus, comprising:
the eyeball image set acquisition module is used for acquiring an eyeball image set of a user in a password input stage, wherein the eyeball image set comprises a plurality of eyeball images;
the focus determining module is used for positioning the focus in each eyeball image and determining the focus information of the eyeballs;
the motion path acquisition module is used for acquiring movement path information of the pupils in the eyeball image set during the password input stage;
and the input password determining module is used for comparing the movement path information of the pupils with the focus information of the eyeballs to determine the password information input by the user.
13. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the password input method of any one of claims 1 to 10 when executing the program.
14. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a password input method according to any one of claims 1 to 10.
CN202110589823.2A 2021-05-28 2021-05-28 Password input method and device Pending CN113420279A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110589823.2A CN113420279A (en) 2021-05-28 2021-05-28 Password input method and device

Publications (1)

Publication Number Publication Date
CN113420279A true CN113420279A (en) 2021-09-21

Family

ID=77713132

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110589823.2A Pending CN113420279A (en) 2021-05-28 2021-05-28 Password input method and device

Country Status (1)

Country Link
CN (1) CN113420279A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103902029A (en) * 2012-12-26 2014-07-02 腾讯数码(天津)有限公司 Mobile terminal and unlocking method thereof
US20150347733A1 (en) * 2014-05-30 2015-12-03 Utechzone Co., Ltd. Eye-controlled password input apparatus, method and computer-readable recording medium and product thereof
CN105320251A (en) * 2014-05-30 2016-02-10 由田新技股份有限公司 Eye-controlled password input device, method and computer readable recording medium thereof
CN104156643A (en) * 2014-07-25 2014-11-19 中山大学 Eye sight-based password inputting method and hardware device thereof
CN106453281A (en) * 2016-09-26 2017-02-22 宇龙计算机通信科技(深圳)有限公司 Password input device, authentication device, password input method and authentication method
WO2020103291A1 (en) * 2018-11-20 2020-05-28 平安科技(深圳)有限公司 Unlocking method, apparatus and device based on eye movement trajectory, and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116644459A (en) * 2023-07-27 2023-08-25 泰山学院 Encryption system and method based on computer software development
CN116644459B (en) * 2023-07-27 2023-10-20 泰山学院 Encryption system and method based on computer software development

Similar Documents

Publication Publication Date Title
CN108280418A (en) The deception recognition methods of face image and device
CN105518708A (en) Method and equipment for verifying living human face, and computer program product
CN106471440A (en) Eye tracking based on efficient forest sensing
EP4033458A2 (en) Method and apparatus of face anti-spoofing, device, storage medium, and computer program product
WO2021047069A1 (en) Face recognition method and electronic terminal device
CN112036331A (en) Training method, device and equipment of living body detection model and storage medium
CN106980840A (en) Shape of face matching process, device and storage medium
CN112580472A (en) Rapid and lightweight face recognition method and device, machine readable medium and equipment
US20230306792A1 (en) Spoof Detection Based on Challenge Response Analysis
CN112989299A (en) Interactive identity recognition method, system, device and medium
CN113420279A (en) Password input method and device
US20200275271A1 (en) Authentication of a user based on analyzing touch interactions with a device
CN105518715A (en) Living body detection method, equipment and computer program product
CN111124109B (en) Interactive mode selection method, intelligent terminal, equipment and storage medium
CN110187806B (en) Fingerprint template input method and related device
CN109803450A (en) Wireless device and computer connection method, electronic device and storage medium
CN108764033A (en) Auth method and device, electronic equipment, computer program and storage medium
CN110991211B (en) Portable individual face recognition device based on improved residual neural network
CN106503697A (en) Target identification method and device, face identification method and device
CN114742561A (en) Face recognition method, device, equipment and storage medium
CN111079662A (en) Figure identification method and device, machine readable medium and equipment
CN114140839A (en) Image sending method, device and equipment for face recognition and storage medium
CN110891049A (en) Video-based account login method, device, medium and electronic equipment
CN111209863A (en) Living body model training and human face living body detection method, device and electronic equipment
CN113066238B (en) Interactive counter business processing system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination