CA3135471C - App login verification method and device and computer readable storage medium

App login verification method and device and computer readable storage medium

Info

Publication number
CA3135471C
Authority
CA
Canada
Prior art keywords
face image
server
screen
user
time face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CA3135471A
Other languages
French (fr)
Other versions
CA3135471A1 (en)
Inventor
Jinfei DING
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
10353744 Canada Ltd
Original Assignee
10353744 Canada Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 10353744 Canada Ltd filed Critical 10353744 Canada Ltd
Publication of CA3135471A1 publication Critical patent/CA3135471A1/en
Application granted granted Critical
Publication of CA3135471C publication Critical patent/CA3135471C/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Security & Cryptography (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

Disclosed in the present invention is an App login verification method, device, and computer readable storage medium. The described method comprises: acquiring a primary real-time face image and generating display position information based on the described primary real-time face image; receiving a prompting message from a server and displaying the described prompting message based on the described display position information; acquiring a secondary real-time face image based on the described prompting message for live face detection; and, where the described live face detection passes, passing the login verification. The present invention identifies the position of the terminal user's eye focus from the real-time face images captured on the terminal, so that the prompting message for the live face detection can be displayed at the focus of the terminal user's eyes, avoiding time wasted on finding prompting messages and improving the speed of risk identification.

Description

APP LOGIN VERIFICATION METHOD AND DEVICE AND COMPUTER READABLE
STORAGE MEDIUM
Technical Field
[0001] The present invention relates to the field of mobile terminal security, and in particular to an App login verification method, device, and storage medium.
Background
[0002] With the development of the internet and the popularity of mobile phones, national regulations strictly monitor the security and personal information handling of intelligent terminal Apps, and identity verification is required when users log in on their mobile terminals. For example, account logins from a new smart phone or from a non-residence address generally use the popular face detection for identity verification. With commands for specified gestures such as eye blinking, head shaking, and mouth opening, users are required to change face gestures according to the commands for live face detection. The live face detection is used for user identity verification.
[0003] The forementioned method has the following drawbacks and limitations.
The live face detection checks user face gestures made according to commands such as eye blinking, head shaking, and mouth opening.
However, different Apps display prompting messages at different locations on the screen, higher or lower. The user first needs to find the message location and then complete the face gestures according to several commands in a complicated order. Consequently, the live face detection requires a long processing time or multiple detections, slowing the risk identification process.
Summary
[0004] The aim of the present invention is to provide an App login verification method, device, and computer readable storage medium, to improve the login risk identification speed.
[0005] The technical proposal of the present invention includes the following. From the first perspective, an App login verification method is provided, comprising:
acquiring a primary real-time face image and generating display position information based on the described primary real-time face image;
receiving prompting messages from a server and displaying the described prompting message based on the described display position information;
acquiring a secondary real-time face image based on the described prompting message information for live face detection; and where the described live face detection passes, passing the login verification.
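For illustration only, this claimed flow can be sketched in Python as below; every helper here (capture_face_image, compute_display_position, and so on) is a hypothetical stub standing in for camera, UI, and network code that the present disclosure does not specify:

```python
# Minimal sketch of the claimed flow; all helpers are hypothetical stubs.

def capture_face_image() -> str:
    return "frame-data"                     # stands in for a camera capture

def compute_display_position(image: str) -> tuple[int, int]:
    return (160, 420)                       # stands in for eye-focus analysis

def live_face_detection(image: str, prompt: str) -> bool:
    return True                             # stands in for the liveness check

def login_verification(prompt_from_server: str) -> bool:
    primary = capture_face_image()                   # primary face image
    position = compute_display_position(primary)     # eye-focus position
    print(f"display {prompt_from_server!r} at {position}")
    secondary = capture_face_image()                 # secondary face image
    return live_face_detection(secondary, prompt_from_server)

print(login_verification("Please blink twice"))      # -> True
```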

[0006] In some preferred embodiments, the described acquisition of a primary real-time face image and the described generation of display position information based on the described primary real-time face image particularly comprise: capturing a primary real-time face image;
based on image identification techniques, identifying the position range of the eye focus in the described primary real-time face image; and generating the display position information based on the described position range of the eye focus.
[0007] In some preferred embodiments, the described identification of the position range of the eye focus in the described primary real-time face image based on image identification techniques particularly includes:
obtaining the position of the front face in the primary real-time face image, the position of the pupils in the eyes, and the visual shape of the pupils based on the image identification technique;
obtaining the angle between the front face of the described primary real-time face image and the terminal; and calculating the eye focus position based on the position of the pupils in the eyes, the visual shape of the pupils, the primary real-time face image and the angle of the terminal.
[0008] In some preferred embodiments, before the described acquisition of a primary real-time face image and the described generation of display position information based on the described primary real-time face image, the described method further includes:
sending a login request to the server, wherein the described login request at least includes a terminal identification; and the described prompting message, sent by the server, is acquired by:
searching user history activity data by the server from the database according to the described terminal identification; and generating user history activity operation questions by the server based on the described user history activity data, wherein the described prompting message sent by the described server includes at least one of the user history activity operation questions.
[0009] In some preferred embodiments, the described secondary real-time face image includes the real-time face image captured when the user answers the described user history activity operation questions by voice;
and the described method further includes:
acquiring voice answer information;

where the described face detection is passed, sending the described secondary real-time face image to the server for the face recognition comparison in the server, and sending the described voice answer information to the server for determining whether the voice answer information matches the described user history activity data; and where the described face recognition passes and the voice answer information matches the described user history activity data, the login verification is passed.
[0010] In some preferred embodiments, the described determination by the server of whether the voice answer information matches the described user history activity data particularly includes:
converting the described voice answer information into text answer information; and performing fuzzy matching of the described text answer information with the described user history activity data by the server.
[0011] In some preferred embodiments, the described method further includes sending the App account number and password information to the server for account and password verification by the server, in particular including:
sending the App account number and password information to the server, so as to compare the described account password with the password information obtained from the database for the App account by the server; and where the described live face detection passes and the server determines that the account password matches the password information obtained from the database for the App account, the login verification is passed.
[0012] From the second perspective, an App login verification device is provided, at least comprising:
a receiving module, configured to receive prompting messages from a server; and
a processing module, configured to acquire a primary real-time face image and generate display position information based on the described primary real-time face image; then acquire a secondary face image based on the described prompting message information for live face detection.
[0013] In some preferred embodiments, the described processing module comprises:
a capturing unit, configured to acquire the primary real-time face image and the secondary face image;
a processing unit, configured to perform live face detection based on the described prompting message information, the primary real-time face image, and the described secondary face image; and a displaying unit, configured to display the described prompting information based on the described display position information.

[0014] From the third perspective, a readable computer storage medium is provided with computer programs stored thereon, wherein any of the procedures in the forementioned methods are performed when the described computer programs are executed on a processor.
[0015] Compared with the current technologies, the benefits of the present invention include that the terminal user eye focus location information is determined by real-time face image recognition on the terminal, to identify the range of the position on the terminal screen focused on by the user's eyes. The prompting messages of the live face detection are therefore displayed at the user eye focus, to save the time spent finding prompting messages and to help users respond in time. The risk identification speed is thus improved, preventing users from spending too much time on finding reminders and following multiple commands, with the consequent extended risk identification time.

Brief descriptions of the drawings
[0016] For better explanation of the technical proposal of embodiments in the present invention, the accompanying drawings are briefly introduced in the following. Obviously, the following drawings represent only a portion of embodiments of the present invention. Those skilled in the art are able to create other drawings according to the accompanying drawings without making creative efforts.
[0017] Fig. 1 is a flow diagram of an App login verification method provided in the embodiment 1 of the present invention;
Fig. 2 is a schematic diagram of the algorithm for face-screen angles in the embodiment 1 of the present invention;
Fig. 3 is a schematic diagram of the maximum upward offset angle of simulated human eyes in the embodiment 1 of the present invention;
Fig. 4 is a schematic diagram of the imaging formation of human eyes in the screen when human eyes are in the front of the screen in the embodiment 1 of the present invention;
Fig. 5 is a schematic diagram of the imaging formation of human eyes in the screen when human eyes are looking upwards while keeping the face static in the embodiment 1 of the present invention;
Fig. 6 is a schematic diagram of calculating the front face-terminal angle in the primary real-time face image in the embodiment 1 of the present invention;
Fig. 7 is a flow diagram of an App login verification method provided in the embodiment 2 of the present invention;
Fig. 8 is a flow diagram of an App login verification method provided in the embodiment 3 of the present invention;
Fig. 9 is a flow diagram of an App login verification method provided in the embodiment 4 of the present invention; and
Fig. 10 is a structure diagram of an App login verification device provided in the embodiment 5 of the present invention.
Detailed descriptions
[0018] With the drawings of embodiments in the present invention, the technical proposals are explained precisely and completely. Obviously, the embodiments described below are only a portion of the embodiments of the present invention and cannot represent all possible embodiments.
Based on the embodiments in the present invention, other embodiments obtained by those skilled in the art without any creative work fall within the scope of the present invention.
[0019] For App logins, especially for finance-related Apps, the user identities on the mobile terminals should be verified for risk identification. Currently, live face detection is generally adopted for recognizing the user identity. The current methods include sending face detection commands to the terminal, displaying command messages such as eye blinking, head shaking, and mouth opening, and then performing live face detection on the face gestures made by users according to the commands. With the described method, users need to find the command messages on their terminals. With different message display positions for different Apps, users need extra time to find the command. When a user does not find the command in time and cannot respond with the corresponding face gestures, the live face detection fails and is restarted, leading to extended identity verification time and slowed risk identification.
[0020] Embodiment 1, an App login verification method as shown in Fig. 1 is provided, comprising:
S1-1, acquiring a primary real-time face image and generating display position information based on the described primary real-time face image.
[0021] The terminal opens a built-in camera to capture images of the current user as the primary real-time face image, and analyzes the primary real-time face image to generate display position information.
[0022] In detail, the present step comprises the following sub-steps:
S1-1a, capturing a primary real-time face image; and S1-1b, based on image identification techniques, identifying the position range of the eye focus in the described primary real-time face image.
[0023] In detail, the step S1-1b comprises the following steps:
S1-1b1, obtaining the position of the front face in the primary real-time face image, the position of the pupils in the eyes, and the visual shape of the pupils based on the image identification technique;
S1-1b2, obtaining the angle between the front face of the described primary real-time face image and the terminal; and S1-1b3, calculating the eye focus position based on the position of the pupils in the eyes, the visual shape of the pupils, the primary real-time face image and the angle of the terminal.
[0024] S1-1c, generating the display position information based on the described position range of the eye focus.
[0025] In detail, if the prompting message is displayed as lined texts, the vertical position y of the prompting message on the terminal screen is calculated. For the horizontal position, by simulation training, when the pupil movement is less than a certain distance, the texts are centered, and when the pupil movement is greater than a certain distance to the right, the texts are aligned right.
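As a minimal sketch of this alignment rule (the pixel threshold is an assumed calibration value, not a figure given here, and the left-aligned case is an assumed mirror of the right-aligned one):

```python
PUPIL_SHIFT_THRESHOLD = 15.0  # pixels; assumed value from simulation training

def text_alignment(pupil_shift_x: float) -> str:
    """Center the prompt for small pupil movement; align right when the
    pupils have moved right beyond the threshold."""
    if abs(pupil_shift_x) < PUPIL_SHIFT_THRESHOLD:
        return "center"
    return "right" if pupil_shift_x > 0 else "left"

print(text_alignment(4.0))    # -> center
print(text_alignment(22.0))   # -> right
```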
[0026] As shown in Fig. 2, assuming the terminal user's eyes are symmetric and of equal size, when the face in the primary real-time face image sent by the terminal is turned relatively to the right of the screen, and the eyes are within the range of the screen with the left eyeball centered as a circle, the left eye in the primary real-time face image is concluded to be parallel to the phone screen.
The length of the left eye is noted as X1 and the length of the right eye is noted as z1. The angle A is calculated by cos A = z1/X1. With big data training, the maximum value of the angle A (maxA) while the eyes remain on the screen is simulated.
When A is smaller than maxA, the angle is identified as an effective angle. For facing to the left, the value notation is reversed and the same calculation applies.
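A minimal sketch of this angle check, assuming X1 and z1 are the measured eye lengths in pixels and maxA comes from prior training (the 40-degree figure below is an invented placeholder):

```python
import math

MAX_A = math.radians(40)  # maxA; placeholder for the trained maximum

def horizontal_face_angle(x1: float, z1: float) -> float:
    """Angle A in radians from cos A = z1 / X1 (ratio clamped for safety)."""
    return math.acos(max(-1.0, min(1.0, z1 / x1)))

a = horizontal_face_angle(x1=30.0, z1=24.0)
if a < MAX_A:
    print(f"effective angle: {math.degrees(a):.1f} degrees")
else:
    print("angle too large; eye focus likely off screen")
```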
[0027] With the model identifications, the human face can be determined as facing upwards or downwards. When the face in the primary real-time face image is tilted relatively upward to the screen, with the two eyes within the range of the screen and the eyeballs centered as circles, the eye focus of the described primary real-time face image is located at the top of the screen. When the eyes are relatively downward, the conclusion is reversed.
[0028] When the lengths of the left and right eyes are identical, or when the left or right angles are within detectable ranges, the position of the pupils on the eyeballs is calculated. By simulation training, the maximum angle (∠maxB) at which the focus leaves the screen is obtained, as shown in Fig. 3.
[0029] When human eyes are facing straight at the screen, the imaging of the eyes formed in the screen is shown in Fig. 4. When the face is static with the eyes looking upwards, the imaging of the eyes formed in the screen is shown in Fig. 5.
[0030] As shown in Fig. 6, based on the imaging changes of the pupils in the screen, the front face-screen angle in the primary real-time face image is calculated. According to the front face-screen angle in the primary real-time face image, the y position of the eyes in the screen is calculated. In the screen, the middle point of the eye imaging moves upwards by y1, wherein the shift y1 gives the y location for text display (as shown in Fig. 3). For an upward angle ∠B, with cos B = z1/X1 and the maximum upward angle ∠maxB:
y1 = maxY / ∠maxB * ∠B;
and based on this shift algorithm, the prompting message display position on the screen is calculated (as shown in Fig. 7).
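A sketch of this shift computation under the same notation, where maxY and ∠maxB are assumed calibration constants (placeholders, not values from this description):

```python
import math

MAX_Y = 800.0             # maxY: maximum vertical text shift in pixels (assumed)
MAX_B = math.radians(35)  # ∠maxB: angle at which the focus leaves the screen (assumed)

def vertical_text_offset(x1: float, z1: float) -> float:
    """Shift y1 = maxY / ∠maxB * ∠B, with cos B = z1 / X1."""
    b = math.acos(max(-1.0, min(1.0, z1 / x1)))  # upward angle ∠B
    return MAX_Y / MAX_B * b

print(f"y1 = {vertical_text_offset(30.0, 27.0):.0f} px upward")
```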

[0031] When the eye focus position on the screen in the primary real-time face image is detected to change, the display location of the user online activity question texts is re-calculated based on the shift algorithm.
[0032] S1-2, receiving prompting messages from a server and displaying the described prompting message based on the described display position information.
[0033] S1-3, acquiring a secondary face image based on the described prompting message information for live face detection.
[0034] In detail, the terminal user makes corresponding face gestures for the secondary real-time face image capture based on the displayed prompting information. Where the live face detection passes, the login verification is passed.
[0035] An App login verification method is provided in embodiments of the present invention, wherein the real-time face images are captured by a terminal to identify the eye focus location information of the terminal user, for determining the range of the user eye focus location on the terminal screen. The prompting messages of the live face detection are therefore displayed at the user eye focus, to save the time spent finding prompting messages and to help users respond in time.
The risk identification speed is thus improved, preventing users from spending too much time on finding reminders and following multiple commands, with the consequent extended risk identification time.
[0036] Embodiment 2, an App login verification method is provided in the present invention, as shown in Fig. 7, comprising:
S2-1, sending a login request to the server, wherein the described login request at least includes a terminal identification.
[0037] S2-2, acquiring a primary real-time face image and generating display position information based on the described primary real-time face image. In detail, the present step includes the following sub-steps:
S2-2a, capturing a primary real-time face image.
[0038] S2-2b, based on image identification techniques, identifying the position range of the eye focus in the described primary real-time face image.
[0039] In detail, the step S2-2b includes the following sub-steps:
S2-2b1, obtaining the position of the front face in the primary real-time face image, the position of the pupils in the eyes, and the visual shape of the pupils based on the image identification technique.
[0040] S2-2b2, obtaining the angle between the front face of the described primary real-time face image and the terminal.
[0041] S2-2b3, calculating the eye focus position based on the position of the pupils in the eyes, the visual shape of the pupils, the primary real-time face image and the angle of the terminal.
[0042] S2-2c, generating the display position information based on the described position range of the eye focus.

[0043] In detail, if the prompting message is displayed as lined texts, the vertical position y of the prompting message on the terminal screen is calculated. For the horizontal position, by simulation training, when the pupil movement is less than a certain distance, the texts are centered, and when the pupil movement is greater than a certain distance to the right, the texts are aligned right.
[0044] As shown in Fig. 2, assuming the terminal user's eyes are symmetric and of equal size, when the face in the primary real-time face image sent by the terminal is turned relatively to the right of the screen, and the eyes are within the range of the screen with the left eyeball centered as a circle, the left eye in the primary real-time face image is concluded to be parallel to the phone screen.
The length of the left eye is noted as X1 and the length of the right eye is noted as z1. The angle A is calculated by cos A = z1/X1. With big data training, the maximum value of the angle A (maxA) while the eyes remain on the screen is simulated.
When A is smaller than maxA, the angle is identified as an effective angle.
For facing to the left, the value notation is reversed and the same calculation applies.
[0045] With the model identifications, the human face can be determined as facing upwards or downwards. When the face in the primary real-time face image is tilted relatively upward to the screen, with the two eyes within the range of the screen and the eyeballs centered as circles, the eye focus of the described primary real-time face image is located at the top of the screen. When the eyes are relatively downward, the conclusion is reversed.
[0046] When the lengths of the left and right eyes are identical, or when the left or right angles are within detectable ranges, the position of the pupils on the eyeballs is calculated. By simulation training, the maximum angle (∠maxB) at which the focus leaves the screen is obtained, as shown in Fig. 3.
[0047] When human eyes are facing straight at the screen, the imaging of the eyes formed in the screen is shown in Fig. 4. When the face is static with the eyes looking upwards, the imaging of the eyes formed in the screen is shown in Fig. 5.
[0048] As shown in Fig. 6, based on the imaging changes of the pupils in the screen, the front face-screen angle in the primary real-time face image is calculated. According to the front face-screen angle in the primary real-time face image, the y position of the eyes in the screen is calculated. In the screen, the middle point of the eye imaging moves upwards by y1, wherein the shift y1 gives the y location for text display (as shown in Fig. 3). For an upward angle ∠B, with cos B = z1/X1 and the maximum upward angle ∠maxB:
y1 = maxY / ∠maxB * ∠B;
and based on this shift algorithm, the prompting message display position on the screen is calculated (as shown in Fig. 7).
[0049] S2-3, receiving prompting messages from a server and displaying the described prompting message based on the described display position information.

[0050] In detail, the prompting information is sent by the server via the following procedure:
[0051] S2-3a, searching user history activity data by the server from the database.
[0052] The terminal history activity data is stored in the database, wherein each terminal has a terminal identification. The server searches the user history activity data from the database according to the described terminal identification, such as the name of an item purchased in a recent online order, the name of a service requested, and the title keywords of articles or news messages.
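A hypothetical sketch of step S2-3a and the subsequent question generation; the data layout and the question templates below are invented for illustration and are not specified by this description:

```python
import random

# Assumed layout: terminal identification -> list of (activity kind, answer).
HISTORY_DB = {
    "terminal-001": [
        ("order_item", "wireless headphones"),
        ("service", "phone bill top-up"),
    ],
}

TEMPLATES = {  # invented question templates, one per activity kind
    "order_item": "What item did you purchase in your most recent order?",
    "service": "What service did you request most recently?",
}

def make_question(terminal_id: str) -> tuple[str, str]:
    """Pick one history record and return (question, expected answer)."""
    kind, answer = random.choice(HISTORY_DB[terminal_id])
    return TEMPLATES[kind], answer

question, expected = make_question("terminal-001")
print(question)
```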
[0053] The terminal displays the prompting messages, including at least one of the user history activity operation questions, at the location of the user eye focus, wherein the user does not need to take extra time looking for prompting messages, and consequently less time is required for login verification.
[0054] Further in detail, as a preferable application, the prompting message further includes a microphone icon, to remind the current terminal user to use voice for answering the user history activity operation questions.
[0055] S2-4, acquiring a secondary face image based on the described prompting message information for live face detection.
[0056] In detail, the secondary face image includes the user's voice answers to the user history activity operation questions. The present step further includes: acquiring voice answer information. In detail, the described voice answer information includes the voice answer information of the user's voice answers to the user history activity operation questions. With the camera turned on by the terminal, the image captured while the user answers the user history activity operation questions by voice is identified as the secondary face image.
During the live face detection after acquiring the secondary face image, the user's voice answers to the user history activity operation questions are checked and returned, to ensure the login verification performance while preventing the processing time from being extended.
[0057] Where the live face detection passes, proceeding to the next step S2-5.
[0058] S2-5, sending the described secondary real-time face image to the server for the face recognition comparison in the server, and sending the described voice answer information to the server for determining whether the voice answer information matches the described user history activity data.
[0059] As a preferred application, the secondary real-time face image is sent to the server for the face recognition comparison in the server by selecting, based on pre-set filtering conditions, the best frame in the secondary real-time face image and sending it to the server for the face recognition comparison. For example, the frame with the best quality and the eyes facing straight at the screen is selected from the secondary real-time face image and sent to the server for the face recognition comparison.
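A sketch of one possible pre-set filtering rule; the quality fields and the scoring below are assumptions, since the description names only "best quality" and "eyes facing straight at the screen" as examples:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    frame_id: int
    sharpness: float       # 0..1 image-quality proxy (assumed metric)
    gaze_deviation: float  # radians away from straight-at-screen

def select_best_frame(frames: list[Frame]) -> Frame:
    """Prefer sharp frames whose gaze points straight at the screen."""
    return max(frames, key=lambda f: f.sharpness - f.gaze_deviation)

frames = [Frame(1, 0.70, 0.30), Frame(2, 0.85, 0.05), Frame(3, 0.90, 0.40)]
print(select_best_frame(frames).frame_id)  # -> 2
```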
[0060] In detail, the server determines whether the voice answer information matches the described user history activity data by means of:
S2-5a, converting the described voice answer information into text answer information.

[0061] S2-5b, performing fuzzy matching of the described text answer information with the described user history activity data by the server.
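A minimal sketch of step S2-5b using Python's standard difflib; the matcher choice and the 0.8 threshold are assumptions, since the description does not specify the fuzzy-matching algorithm:

```python
import difflib

def fuzzy_match(text_answer: str, history_values: list[str],
                threshold: float = 0.8) -> bool:
    """True when the transcribed answer is close to any stored value."""
    for value in history_values:
        ratio = difflib.SequenceMatcher(
            None, text_answer.lower(), value.lower()).ratio()
        if ratio >= threshold:
            return True
    return False

history = ["wireless headphones"]                   # e.g. a recent order item
print(fuzzy_match("wireless headphone", history))   # -> True
```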
[0062] Where the described face recognition passes and the voice answer information matches the described user history activity data, the login verification is passed.
[0063] The present embodiment does not restrict the order of performing the face recognition and the determination of whether the voice answer information matches the described user history activity data.
[0064] An App login method is provided in the present invention. During the account login, the terminal built-in camera is always on. By image recognition, the front face, the pupil location in the user's eyes, the pupil shape and the front face-screen angle of the current terminal user are determined, to identify the location range of the eye focus on the screen. The user history activity operation questions are directly displayed at the current user eye focus. As a result, the user does not need extra time to look for the texts. The voice answer information of the user's answers about the history activity data is collected. The user's voice answer and face gesture are compared with the stored user history. Compared with the current login verification detection methods, the problems of finding the message location and long live face detection time with complicated command orders are solved. Meanwhile, the user history activity verification is added to the live face detection to improve account security and prevent the account or funds from being stolen.
[0065] Embodiment 3, an App login verification method is provided as shown in Fig. 8, comprising:
S3-1, acquiring a primary real-time face image and generating display position information based on the described primary real-time face image.
[0066] The terminal opens a built-in camera to capture images of the current user as the primary real-time face image, and analyzes the primary real-time face image to generate display position information.
[0067] In detail, the present step comprises the following sub-steps:
S3-1a, capturing a primary real-time face image; and S3-1b, based on image identification techniques, identifying the position range of the eye focus in the described primary real-time face image.
[0068] In detail, the step S3-1b comprises the following steps:
S3-1b1, obtaining the position of the front face in the primary real-time face image, the position of the pupils in the eyes, and the visual shape of the pupils based on the image identification technique.
[0069] S3-1b2, obtaining the angle between the front face of the described primary real-time face image and the terminal; and [0070] S3-1b3, calculating the eye focus position based on the position of the pupils in the eyes, the visual shape of the pupils, the primary real-time face image and the angle of the terminal.
[0071] S3-1c, generating the display position information based on the described position range of the eye focus.
[0072] In detail, if the prompting message is displayed as lined texts, the vertical position y of the prompting message on the terminal screen is calculated. For the horizontal position, by simulation training, when the pupil movement is less than a certain distance, the texts are centered, and when the pupil movement is greater than a certain distance to the right, the texts are aligned right.
[0073] As shown in Fig. 2, assuming the terminal user's eyes are symmetric and of equal size, when the face in the primary real-time face image sent by the terminal is turned relatively to the right of the screen, and the eyes are within the range of the screen with the left eyeball centered as a circle, the left eye in the primary real-time face image is concluded to be parallel to the phone screen. The length of the left eye is noted as X1 and the length of the right eye is noted as z1. The angle A is calculated by cos A = z1/X1. With big data training, the maximum value of the angle A (maxA) while the eyes remain on the screen is simulated.
When A is smaller than maxA, the angle is identified as an effective angle.
For facing to the left, the value notation is reversed and the same calculation applies.
[0074] With the model identifications, the human face can be determined as facing upwards or downwards. When the face in the primary real-time face image is tilted relatively upward to the screen, with the two eyes within the range of the screen and the eyeballs centered as circles, the eye focus of the described primary real-time face image is located at the top of the screen. When the eyes are relatively downward, the conclusion is reversed.
[0075] When the lengths of the left and right eyes are identical, or when the left or right angles are within detectable ranges, the position of the pupils on the eyeballs is calculated. By simulation training, the maximum angle (∠maxB) at which the focus leaves the screen is obtained, as shown in Fig. 3.
[0076] When human eyes are facing straight at the screen, the imaging of the eyes formed in the screen is shown in Fig. 4. When the face is static with the eyes looking upwards, the imaging of the eyes formed in the screen is shown in Fig. 5.
[0077] As shown in Fig. 6, based on the imaging changes of the pupils in the screen, the front face-screen angle in the primary real-time face image is calculated. According to the front face-screen angle in the primary real-time face image, the y position of the eyes in the screen is calculated. In the screen, the middle point of the eye imaging moves upwards by y1, wherein the shift y1 gives the y location for text display (as shown in Fig. 3). For an upward angle ∠B, with cos B = z1/X1 and the maximum upward angle ∠maxB:
y1 = maxY / ∠maxB * ∠B;
and based on this shift algorithm, the prompting message display position on the screen is calculated (as shown in Fig. 7).
[0078] When the eye focus position on the screen in the primary real-time face image is detected to change, the display location of the user online activity question texts is re-calculated based on the shift algorithm.

[0079] S3-2, receiving prompting messages from a server and displaying the described prompting message based on the described display position information.
[0080] S3-3, acquiring a secondary face image based on the described prompting message information for live face detection.
[0081] In detail, the user makes corresponding face gestures for the secondary real-time face image capture based on the displayed prompting information.
[0082] As a preferred application, where the live face detection passes, proceeding to the next step S3-4.
[0083] S3-4, sending the App account number and password information to the server for account and password verification by the server.
[0084] In particular, the step comprises:
S3-4a, receiving the account password verification command from the server based on the login request.
[0085] S3-4b, sending the App account number and password information to the server, so as to compare the described account password with the password information obtained from the database for the App account by the server.
[0086] Where the password information obtained from the database for the App account by the server matches the account password sent to the server from the terminal, the App account password verification passes.
[0087] Where both the live face detection and the App account password verification pass, the login verification is passed.
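One common way a server might implement the comparison in steps S3-4b and [0086] is with salted password hashes; this is an assumed implementation detail, as the description only says the server compares the submitted password with the stored password information:

```python
import hashlib
import hmac

# Assumed storage layout: App account -> (salt, PBKDF2 hash of the password).
PASSWORD_DB = {
    "user01": (b"salt123",
               hashlib.pbkdf2_hmac("sha256", b"s3cret", b"salt123", 100_000)),
}

def verify_password(account: str, password: str) -> bool:
    """Compare the submitted password with the stored information."""
    if account not in PASSWORD_DB:
        return False
    salt, stored = PASSWORD_DB[account]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)  # constant-time compare

print(verify_password("user01", "s3cret"))  # -> True
```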
[0088] An App login method is provided in the present invention. During the account login, the terminal built-in camera is always on. By image recognition, the front face, the pupil location in the user's eyes, the pupil shape and the front face-screen angle of the current terminal user are determined, to identify the location range of the eye focus on the screen. The user history activity operation questions are directly displayed at the current user eye focus. As a result, the user does not need extra time to look for the texts. The voice answer information of the user's answers about the history activity data is collected. The user's voice answer and face gesture are compared with the stored user history. Compared with the current login verification detection methods, the problems of finding the message location and long live face detection time with complicated command orders are solved. Meanwhile, the user history activity verification is added to the live face detection to improve account security and prevent the account or funds from being stolen.
[0089] Embodiment 4, an App login verification method is provided in the present embodiment as shown in Fig. 9, wherein the difference from embodiment 3 is that the App account password information is sent to the server for the account password verification before performing the live face detection. The present embodiment provides the same technical benefits as embodiment 3, and is not further explained in detail.
[0090] Embodiment 5, an App login verification device is provided in the present embodiment, as shown in Fig. 10, at least comprising:
a receiving module 51, configured to receive prompting messages from a server.
[0091] a processing module 52, configured to acquire a primary real-time face image and generate display position information based on the described primary real-time face image; then acquire a secondary face image based on the described prompting message information for live face detection.
[0092] In some preferred embodiments, the processing module 52 particularly comprises:
a capturing unit, configured to acquire the primary real-time face image and the secondary face image;
a processing unit, configured to perform live face detection based on the described prompting message information, the primary real-time face image, and the described secondary face image; and a displaying unit, configured to display the described prompting information based on the described display position information.
[0093] In some preferred embodiments, the processing module 52 further comprises:
an image identification unit, configured to identify the position range of the eye focus in the described primary real-time face image based on image identification techniques; in detail, to obtain the position of the front face in the primary real-time face image, the position of the pupils in the eyes, and the visual shape of the pupils based on the image identification technique; and to obtain the angle between the front face of the described primary real-time face image and the terminal.
[0094] a computation unit, configured to calculate the eye focus position based on the position of the pupils in the eyes, the visual shape of the pupils, the primary real-time face image and the angle of the terminal.
[0095] The processing unit is further configured to generate the display position information based on the described position range of the eye focus.
[0096] In some preferred embodiments, the described device further comprises:
a sending module, configured to send a login request to the server, send the described secondary real-time face image to the server for the face recognition comparison in the server, and send the described voice answer information to the server for determining whether the voice answer information matches the described user history activity data.
[0097] Meanwhile, the processing module 52 further includes a voice recording unit, configured to acquire voice answer information.
[0098] In some preferred embodiments, the sending module is further configured to send the App account number and password information to the server for account and password verification by the server.

[0099] To clarify, when the App login verification method is invoked in the App login verification device in the forementioned embodiments, the described functional module configurations are used for illustration only. In practical applications, the described functions can be assigned to different functional modules according to practical demands, wherein the internal structure of the device is divided into different functional modules to perform all or a portion of the described functions. Besides, the forementioned App login verification device in the embodiment adopts the same concepts as the described App login verification method embodiments. The described device is based on the implementation of the App login verification method, and the detailed procedures can be found in the method embodiments and are not explained in further detail.
[0100] Embodiment 6, a readable computer storage medium with computer programs stored thereon is provided in the present embodiment, wherein the App login verification methods in any of embodiments 1 to 4 are performed when the described computer programs are executed on a processor.
[0101] The readable computer storage medium provided in the present embodiment is used to perform the App login verification method in embodiments 1 to 4, with the same benefits provided by the App login verification method of embodiments 1 to 4, and is not further explained in detail.
[0102] Those skilled in the art can understand that all or a portion of the forementioned embodiments can be achieved by hardware, or by hardware driven by programs stored on a readable computer storage medium. The forementioned storage medium can be, but is not limited to, memory, diskettes, or discs.
[0103] The forementioned technical proposals can be achieved by any combinations of the embodiments in the present invention. In other words, the embodiments can be combined to meet the requirements of different application scenarios, wherein all possible combinations fall within the scope of the present invention and are not explained in further detail.
[0104] Obviously, the forementioned embodiments are referred to in order to represent the technical concept and features of the present invention, providing explanations to those skilled in the art for further applications, and shall not limit the protection scope of the present invention. Therefore, all alterations, modifications, equivalents, and improvements of the present invention fall within the scope of the present invention.

Claims (60)

Claims:
1. A device comprising:
a receiving module, configured to receive prompting messages from a server;
and a processing module configured to:
acquire a primary real-time face image;
generate display position information based on the primary real-time face image;
acquire a secondary face image based on the prompting message information for live face detection; and wherein the processing module comprises an image identification unit to identify the position range of eye focus in the primary real-time face image based on image identification techniques to obtain positions of front face of the primary real-time face images, position of pupils in eyes, and visual shape of the pupils based on the image identification technique.
2. The device of claim 1, wherein the processing module further comprises:
a capturing unit configured to acquire the primary real-time face image and the secondary face image;
a processing unit configured to:
perform live face detection based on the prompting message information by the primary real-time face image, and the secondary face image; and to generate the display position information based on the position range of the eye focus; and
a displaying unit configured to display the prompting information based on the displaying position information.
3. The device of any one of claims 1 to 2, wherein the processing module further comprises:
the image identification unit to identify the position range of the eye focus in the primary real-time face image based on image identification techniques configured to:
obtain the angle between the front face of the primary real-time face image and the terminal;
a computation unit configured to calculate the eye focus position based on the position of pupils in eyes, the visual shape of the pupils, the primary real-time face image and the angle of terminal; and a voice recording unit configured to acquire voice answer information.
4. The device of any one of claims 1 to 3, further comprising:
a sending module configured to:
send a login request to the server;
send the secondary real-time face image to the server so as for the face recognizing comparison in the server;
send the voice answer information to the server for determining whether the voice answer information matches with the user history activity data; and send the app account number and password information to the server for account and password verification by the server.

5. The device of any one of claims 1 to 4, wherein the prompting message is displayed as lined texts, wherein the vertical position y of the prompting message on the terminal screen is calculated, and wherein, for the horizontal position, by simulation trainings, the texts are centered when the pupil movement is less than a certain distance and the texts are aligned right when the pupil movement is greater than a certain distance to the right.
6. The device of any one of claims 1 to 5, wherein terminal user eyes are assumed symmetric and equal sized, when the face in the primary real-time face image sent by the terminal is relatively to the right of a screen, and the eyes are within range of the screen with the left eyeball centered as a circle, the left eye in the primary real-time face image is concluded to be parallel to the phone screen, wherein the length of the left eye is noted as X1 and the right eye length is noted as z1, wherein angle A is calculated by cos A = z1/X1, wherein the maximum value of the angle A (maxA) is simulated while the eyes remain on the screen, wherein when angle A is smaller than maxA, the angle is identified as an effective angle, and wherein when facing to the left, the value notation is reversed with the same calculation.
7. The device of any one of claims 1 to 6, wherein human faces can be determined as facing upwards or downwards with model identification, wherein when the face in the primary real-time face image is relatively upward to the screen, with two eyes within the range of the screen and eyeballs centered as a circle, the eye focus of the primary real-time face image is located at the top of the screen, and wherein when the eyes are relatively downward, the conclusion is reversed.
8. The device of any one of claims 1 to 7, wherein when the lengths of the left and right eyes are identical, or when the left or right angles are within detectable ranges, the position of the pupils on the eyeballs is calculated, by simulation trainings, with the maximum angle (∠maxB) at which the focus leaves the screen.

9. The device of any one of claims 1 to 8, wherein based on the imaging changes of pupils in the screen, the front face-screen angle in the primary real-time face image is calculated, wherein the y positions of the eyes in the screen are calculated according to the front face-screen angle in the primary real-time face image, wherein in the screen, the middle point of the eye imaging moves upwards by y1, wherein the shift y1 is the position of the y location for text display, wherein for an upward angle ∠B with maximum ∠maxB, cos B = z1/X1, wherein y1 = maxY / ∠maxB * ∠B, and wherein the prompting message display position on the screen is calculated based on the shift algorithm.
10. The device of any one of claims 1 to 9, wherein when the eye focus position on the screen in the primary real-time face image is detected to change, the display location of user online activity question texts is re-calculated based on the shift algorithm.
11. The device of any one of claims 1 to 10, wherein the user makes corresponding face gestures for the secondary real-time face images capturing based on the displayed prompting information.
12. The device of any one of claims 1 to 11, wherein terminal history activity data is stored in the database, wherein each terminal has a terminal identification, wherein the server searches the user history activity data from the database according to the terminal identification, including any one or more of the item purchase name from a recent online order, the name of a service requested, and title keywords of article and news messages.
13. The device of any one of claims 1 to 12, wherein the terminal displays the prompting messages including any one or more of user history activity operation questions at the location of the user eye focus, wherein the user would not need to take extra time looking for prompting messages.
14. The device of any one of claims 1 to 13, wherein the prompting message further includes a microphone icon to remind the user to use voice for answering the user history activity operation questions.

15. The device of any one of claims 1 to 14, wherein the secondary face image includes user voice answers to the user history activity operation questions, wherein acquiring the secondary face image based on the prompting message information for live face detection further includes acquiring voice answer information, wherein the voice answer information includes the voice answer information of the user voice answers to the user history activity operation questions, wherein, with a built-in camera turned on by the terminal, the image captured when the user voice answers the user history activity operation questions is identified as the secondary face image, and wherein during the live face detection after acquiring the secondary face images the user voice answers to user history activity operation questions are checked and returned, to ensure the login verification performance while preventing the processing time from being extended.
16. The device of any one of claims 1 to 14, wherein the secondary real-time face image is sent to the server for the face recognizing comparison in the server based on pre-set filtering conditions, wherein the best frame in the secondary real-time face image is sent to the server for the face recognizing comparison, wherein the frame with the best quality and eyes facing straight at the screen is selected from the secondary real-time face image and sent to the server for the face recognizing comparison.
17. The device of any one of claims 1 to 16, wherein the order of performing face recognition and the determination of whether the voice answer information matches with the described user history activity data is not restricted.
18. The device of any one of claims 1 to 17, wherein the terminal opens the built-in camera to capture images of the user as the primary real-time face image and analyzes the primary real-time face image to generate display position information.
19. A method comprising:

acquiring a primary real-time face image and generating display position information based on the primary real-time face image including identification of position range of eye focus in the primary real-time face image based on image identification techniques by obtaining positions of a front face of the primary real-time face images, position of pupils in eyes, and visual shape of pupils based on the image identification technique;
receiving prompting messages from a server and displaying the prompting messages based on the display position information;
acquiring a secondary face image based on prompting message information for live face detection; and wherein the live face detection passes, passing the login verification.
20. The method of claim 19, wherein the acquisition of the primary real-time face image and the generation of display position information based on the primary real-time face image comprises:
capturing the primary real-time face image;
based on image identification techniques, identifying position range of eye focus in the primary real-time face image; and generating the display position information based on the position range of the eye focus.
21. The method of claim 20, wherein identification of the position range of the eye focus in the primary real-time face image based on the image identification techniques comprises:
obtaining an angle between the front face of the primary real-time face image and the terminal; and calculating eye focus position based on the position of pupils in eyes, the visual shape of the pupils, the primary real-time face image and the angle of the terminal.
22. The method of claim 19, wherein before the acquisition of a primary real-time face image and the generation of display position information based on the primary real-time face image, the method further comprises:
sending a login request to the server, wherein the login request includes a terminal identification; and the prompting message, sent by the server, acquired by:
searching user history activity data by the server from a database according to the terminal identification; and generating user history activity operation questions by the server based on the user history activity data, wherein the prompting message sent by the server includes the user history activity operation questions.
23. The method of claim 22, wherein the secondary real-time face image includes the real-time face image captured when a user answers the user history activity operation questions by voice, and the method further comprises:
acquiring voice answer information;
sending the secondary real-time face image to the server for face recognizing comparison in the server, wherein the face detection is passed;
sending the voice answer information to the server for determining that the voice answer information matches with the user history activity data, wherein the face detection is passed; and wherein the face recognition passes and the voice answer information matches with the user history activity data, the login verification is passed.
24. The method of claim 23, wherein the determination by the server of whether the voice answer information matches with the user history activity data comprises:

converting the voice answer information into text answer information; and performing fuzzy matching of the text answer information with the user history activity data by the server.
25. The method of claim 19, further comprising:
sending an app account number and password information to the server for account and password verification by the server, comprising:
receiving an account password verification command from the server based on the login request;
sending the app account number and the password information to the server, wherein the server compares the submitted password information with the password information obtained from a database for the app account; when the password information obtained from the database for the app account matches the password information sent to the server from the terminal, the app account password verification passes; and when both the live face detection and the app account password verification pass, the login verification is passed.
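Claim 25 only requires the server to compare the submitted credentials with the database record; the salted-hash comparison below is an assumption of common practice, not something the claim specifies.

import hashlib
import hmac

def verify_password(submitted_password, stored_hash, salt):
    # Derive a key from the submitted password and compare it to the
    # stored record in constant time.
    digest = hashlib.pbkdf2_hmac("sha256", submitted_password.encode("utf-8"),
                                 salt, 100_000)
    return hmac.compare_digest(digest, stored_hash)

def login_verified(live_face_ok, password_ok):
    # Claim 25: the login passes only when both checks pass.
    return live_face_ok and password_ok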
26. The method of any one of claims 19 to 25, wherein the prompting message is displayed as lines of text, wherein the vertical position y of the prompting message on the terminal screen is calculated, and wherein, for the horizontal position, the texts are centered when the pupil movement is less than a certain distance determined by simulation training, and are aligned right when the pupil movement to the right is greater than that distance.
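One way to read the alignment rule in claim 26, with the simulation-trained distance threshold treated as a given parameter (the threshold value and function name are illustrative):

def choose_alignment(pupil_shift_x, shift_threshold):
    # Centre the text for small pupil movement; right-align it when the
    # rightward movement exceeds the simulation-trained threshold.
    if pupil_shift_x > shift_threshold:
        return "right"
    return "center"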

27. The method of any one of claims 19 to 26, wherein the terminal user's eyes are assumed to be symmetric and of equal size, wherein when the face in the primary real-time face image sent by the terminal is turned relatively to the right of a screen, and the eyes are within the range of the screen with the left eyeball centered as a circle, the left eye in the primary real-time face image is concluded to be parallel to the phone screen, wherein the length of the left eye is noted as x1 and the length of the right eye is noted as z1, wherein angle A is calculated by cos A = z1/x1, wherein the maximum value of angle A (maxA) is obtained by simulation while the eyes remain on the screen, wherein when angle A is smaller than maxA, the angle is identified as an effective angle, and wherein when the face is turned to the left, the value notation is reversed with the same calculation.
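In code, the angle test of claim 27 reduces to a few lines; x1 and z1 are the apparent eye lengths defined in the claim, and max_a is the simulation-derived bound. This is a sketch, not a normative implementation.

import math

def face_screen_angle(x1, z1):
    # cos A = z1 / x1, with x1 the eye parallel to the screen and z1 the
    # foreshortened eye; z1 <= x1 is assumed so acos stays in its domain.
    return math.acos(z1 / x1)

def is_effective_angle(x1, z1, max_a):
    # Angles below the simulated maximum keep the gaze on the screen.
    return face_screen_angle(x1, z1) < max_a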
28. The method of any one of claims 19 to 27, wherein human faces can be determined as facing upwards or downwards through model identification, wherein when the face in the primary real-time face image is turned relatively upward to the screen, with the two eyes within the range of the screen and the eyeballs centered as circles, the eye focus of the primary real-time face image is located at the top of the screen, and wherein when the eyes are turned relatively downward, the conclusion is reversed.
29. The method of any one of claims 19 to 28, wherein when the lengths of the left and right eyes are identical, or when the left or right angles are within detectable ranges, the positions of the pupils on the eyeballs are calculated, wherein a maximum angle ∠maxB is obtained by simulation training, at which the focus leaves the screen.
30. The method of any one of claims 19 to 29, wherein based on the imaging changes of the pupils on the screen, the front face-screen angle in the primary real-time face image is calculated, wherein the y positions of the eyes on the screen are calculated according to the front face-screen angle in the primary real-time face image, wherein on the screen the middle point of the eye imaging moves upwards by y1, wherein the shift y1 gives the y position for text display, wherein for the maximum upward angle ∠maxB, cos B = z1/x1 and y1 = maxY / ∠maxB * ∠B, and wherein the prompting message display position on the screen is calculated based on this shift algorithm.
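Under the same notation, the shift algorithm of claim 30 is a linear interpolation. This sketch assumes angles in radians and a simulation-trained pair (max_b, max_y).

import math

def vertical_text_shift(x1, z1, max_b, max_y):
    # cos B = z1 / x1 gives the upward face-screen angle B; the text's y
    # position then shifts by y1 = maxY / maxB * B, reaching maxY at the
    # largest angle for which the eyes still stay on the screen.
    b = math.acos(z1 / x1)
    return max_y / max_b * b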

31. The method of any one of claims 19 to 30, wherein when the eye focus position on the screen in the primary real-time face image is detected to change, the display location of the user online activity question texts is re-calculated based on the shift algorithm.
32. The method of any one of claims 19 to 31, wherein the user makes corresponding face gestures for capturing the secondary real-time face images based on the displayed prompting information.
33. The method of any one of claims 19 to 32, wherein terminal history activity data is stored in the database, wherein each terminal has a terminal identification, and wherein the server searches the user history activity data from the database according to the terminal identification, the user history activity data including any one or more of an item purchase name from a recent online order, a name of a service requested, and title key words of articles and news messages.
34. The method of any one of claims 19 to 33, wherein the terminal displays the prompting messages, including any one or more of the user history activity operation questions, at the location of the user eye focus, so that the user does not need to take extra time looking for the prompting messages.
35. The method of any one of claims 19 to 34, wherein the prompting message further includes a microphone icon to remind the user to use voice for answering the user history activity operation questions.

36. The method of any one of claims 19 to 35, wherein the secondary face image includes the user voice answers to the user history activity operation questions, wherein acquiring the secondary face image based on the prompting message information for live face detection further includes acquiring voice answer information, wherein the voice answer information includes the user voice answers to the user history activity operation questions, wherein a built-in camera is turned on by the terminal and the image captured while the user voice-answers the user history activity operation questions is identified as the secondary face image, and wherein during the live face detection, after the secondary face images are acquired, the user voice answers to the user history activity operation questions are checked and returned, to ensure the login verification performance while preventing the processing time from being extended.
37. The method of any one of claims 19 to 36, wherein the secondary real-time face image is sent to the server for the face recognition comparison in the server based on pre-set filtering conditions, wherein the best frame in the secondary real-time face image is sent to the server for the face recognition comparison, and wherein the frame with the best quality and the eyes facing straight to the screen is selected from the secondary real-time face image and sent to the server for the face recognition comparison.
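A sketch of the frame filtering in claim 37. The "quality" and "gaze" scores are assumed to come from the capture pipeline, since the claim only states the filtering conditions, not how the scores are produced.

def select_best_frame(frames, gaze_threshold=0.9):
    # Prefer frames where the eyes face straight to the screen, then take
    # the highest-quality one; fall back to all frames if none qualify.
    straight = [f for f in frames if f["gaze"] >= gaze_threshold]
    pool = straight or frames
    return max(pool, key=lambda f: f["quality"])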
38. The method of any one of claims 19 to 37, wherein the order of performing the face recognition and determining whether the voice answer information matches the user history activity data is not restricted.
39. The method of any one of claims 19 to 38, wherein the terminal opens the built-in camera to capture images of the user as the primary real-time face image and analyzes the primary real-time face image to generate the display position information.
40. A computer readable physical memory having stored thereon a computer program which, when executed by a computer, configures the computer to:
acquire a primary real-time face image and generate display position information based on the primary real-time face image, including identifying a position range of eye focus in the primary real-time face image based on image identification techniques by obtaining a position of a front face in the primary real-time face image, positions of pupils in eyes, and visual shapes of the pupils based on the image identification techniques;
receive prompting messages from a server and display the prompting messages based on the display position information;
acquire a secondary face image based on the prompting message information for live face detection; and when the live face detection passes, pass the login verification.
41. The memory of claim 40, wherein the acquisition of the primary real-time face image and the generation of display position information based on the primary real-time face image comprises:
capturing the primary real-time face image;
based on image identification techniques, identifying the position range of the eye focus in the primary real-time face image; and generating the display position information based on the position range of the eye focus.
42. The memory of claim 41, wherein identification of the position range of the eye focus in the primary real-time face image based on the image identification techniques comprises:
obtaining an angle between the front face in the primary real-time face image and the terminal; and calculating an eye focus position based on the positions of the pupils in the eyes, the visual shapes of the pupils, the primary real-time face image, and the angle of the terminal.

43. The memory of claim 40, wherein before the acquisition of a primary real-time face image and the generation of display position information based on the primary real-time face image, the computer is further configured to perform:
sending a login request to the server, wherein the login request includes a terminal identification, and wherein the prompting message sent by the server is acquired by:
searching user history activity data by the server from a database according to the terminal identification; and generating user history activity operation questions by the server based on the user history activity data, wherein the prompting message sent by the server includes the user history activity operation questions.
44. The memory of claim 43, wherein the secondary real-time face image includes the real-time face image captured when a user answers the user history activity operation questions by voice, and wherein acquiring the secondary real-time face image comprises:
acquiring voice answer information;
sending the secondary real-time face image to the server for face recognition comparison in the server when the live face detection is passed;
sending the voice answer information to the server for determining whether the voice answer information matches the user history activity data when the live face detection is passed; and when the face recognition passes and the voice answer information matches the user history activity data, passing the login verification.
45. The memory of claim 44, wherein the determination by the server that the voice answer information matches the user history activity data comprises:

converting the voice answer information into text answer information; and performing fuzzy matching of the text answer information with the user history activity data by the server.
46. The memory of claim 40, wherein the computer is further configured to:
send an app account number and password information to the server for account and password verification by the server, comprising:
receiving an account password verification command from the server based on the login request;
sending the app account number and the password information to the server, wherein the server compares the submitted password information with the password information obtained from a database for the app account; when the password information obtained from the database for the app account matches the password information sent to the server from the terminal, the app account password verification passes; and when both the live face detection and the app account password verification pass, the login verification is passed.
47. The memory of any one of claims 40 to 46, wherein the prompting message is displayed as lines of text, wherein the vertical position y of the prompting message on the terminal screen is calculated, and wherein, for the horizontal position, the texts are centered when the pupil movement is less than a certain distance determined by simulation training, and are aligned right when the pupil movement to the right is greater than that distance.

48. The memory of any one of claims 40 to 47, wherein the terminal user's eyes are assumed to be symmetric and of equal size, wherein when the face in the primary real-time face image sent by the terminal is turned relatively to the right of a screen, and the eyes are within the range of the screen with the left eyeball centered as a circle, the left eye in the primary real-time face image is concluded to be parallel to the phone screen, wherein the length of the left eye is noted as x1 and the length of the right eye is noted as z1, wherein angle A is calculated by cos A = z1/x1, wherein the maximum value of angle A (maxA) is obtained by simulation while the eyes remain on the screen, wherein when angle A is smaller than maxA, the angle is identified as an effective angle, and wherein when the face is turned to the left, the value notation is reversed with the same calculation.
49. The memory of any one of claims 40 to 48, wherein human faces can be determined as facing upwards or downwards through model identification, wherein when the face in the primary real-time face image is turned relatively upward to the screen, with the two eyes within the range of the screen and the eyeballs centered as circles, the eye focus of the primary real-time face image is located at the top of the screen, and wherein when the eyes are turned relatively downward, the conclusion is reversed.
50. The memory of any one of claims 40 to 49, wherein when the lengths of the left and right eyes are identical, or when the left or right angles are within detectable ranges, the positions of the pupils on the eyeballs are calculated, wherein a maximum angle ∠maxB is obtained by simulation training, at which the focus leaves the screen.
51. The memory of any one of claims 40 to 50, wherein based on the imaging changes of the pupils on the screen, the front face-screen angle in the primary real-time face image is calculated, wherein the y positions of the eyes on the screen are calculated according to the front face-screen angle in the primary real-time face image, wherein on the screen the middle point of the eye imaging moves upwards by y1, wherein the shift y1 gives the y position for text display, wherein for the maximum upward angle ∠maxB, cos B = z1/x1 and y1 = maxY / ∠maxB * ∠B, and wherein the prompting message display position on the screen is calculated based on this shift algorithm.

52. The memory of any one of claims 40 to 51, wherein when the eye focus position on the screen in the primary real-time face image is detected to change, the display location of the user online activity question texts is re-calculated based on the shift algorithm.
53. The memory of any one of claims 40 to 52, wherein the user makes corresponding face gestures for capturing the secondary real-time face images based on the displayed prompting information.
54. The memory of any one of claims 40 to 53, wherein terminal history activity data is stored in the database, wherein each terminal has a terminal identification, and wherein the server searches the user history activity data from the database according to the terminal identification, the user history activity data including any one or more of an item purchase name from a recent online order, a name of a service requested, and title key words of articles and news messages.
55. The memory of any one of claims 40 to 54, wherein the terminal displays the prompting messages, including any one or more of the user history activity operation questions, at the location of the user eye focus, so that the user does not need to take extra time looking for the prompting messages.
56. The memory of any one of claims 40 to 55, wherein the prompting message further includes a microphone icon to remind the user to use voice for answering the user history activity operation questions.
57. The memory of any one of claims 40 to 56, wherein the secondary face image includes the user voice answers to the user history activity operation questions, wherein acquiring the secondary face image based on the prompting message information for live face detection further includes acquiring voice answer information, wherein the voice answer information includes the user voice answers to the user history activity operation questions, wherein a built-in camera is turned on by the terminal and the image captured while the user voice-answers the user history activity operation questions is identified as the secondary face image, and wherein during the live face detection, after the secondary face images are acquired, the user voice answers to the user history activity operation questions are checked and returned, to ensure the login verification performance while preventing the processing time from being extended.
58. The memory of any one of claims 40 to 57, wherein the secondary real-time face image is sent to the server for the face recognition comparison in the server based on pre-set filtering conditions, wherein the best frame in the secondary real-time face image is sent to the server for the face recognition comparison, and wherein the frame with the best quality and the eyes facing straight to the screen is selected from the secondary real-time face image and sent to the server for the face recognition comparison.
59. The memory of any one of claims 40 to 58, wherein the order of performing the face recognition and determining whether the voice answer information matches the user history activity data is not restricted.
60. The memory of any one of claims 40 to 59, wherein the terminal opens the built-in camera to capture images of the user as the primary real-time face image and analyzes the primary real-time face image to generate the display position information.

CA3135471A 2020-09-30 2021-09-30 App login verification method and device and computer readable storage medium Active CA3135471C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011056334.2A CN111931742A (en) 2020-09-30 2020-09-30 APP login verification method and device and computer readable storage medium
CN202011056334.2 2020-09-30

Publications (2)

Publication Number Publication Date
CA3135471A1 CA3135471A1 (en) 2022-03-30
CA3135471C true CA3135471C (en) 2024-05-14

Family

ID=73333683

Family Applications (1)

Application Number Title Priority Date Filing Date
CA3135471A Active CA3135471C (en) 2020-09-30 2021-09-30 App login verification method and device and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN111931742A (en)
CA (1) CA3135471C (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113553978A (en) * 2021-07-30 2021-10-26 陕西科技大学 Face recognition device and recognition method for user-defined strategy

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103885592B (en) * 2014-03-13 2017-05-17 宇龙计算机通信科技(深圳)有限公司 Method and device for displaying information on screen
US10063535B2 (en) * 2014-12-30 2018-08-28 Onespan North America Inc. User authentication based on personal access history
CN105183151B (en) * 2015-08-25 2018-07-06 广州视源电子科技股份有限公司 Method and device for adjusting display content position
CN105843383B (en) * 2016-03-21 2019-03-12 努比亚技术有限公司 Using starter and method
CN107516066B (en) * 2017-07-20 2021-03-09 Oppo广东移动通信有限公司 Detection method and related product
CN109584285B (en) * 2017-09-29 2024-03-29 中兴通讯股份有限公司 Control method and device for display content and computer readable medium
CN111274559A (en) * 2018-12-05 2020-06-12 深圳市茁壮网络股份有限公司 Identity verification method and device
CN109670456A (en) * 2018-12-21 2019-04-23 北京七鑫易维信息技术有限公司 A kind of content delivery method, device, terminal and storage medium
CN110532744A (en) * 2019-07-22 2019-12-03 平安科技(深圳)有限公司 Face login method, device, computer equipment and storage medium

Also Published As

Publication number Publication date
CA3135471A1 (en) 2022-03-30
CN111931742A (en) 2020-11-13

Similar Documents

Publication Publication Date Title
US11100208B2 (en) Electronic device and method for controlling the same
US10108961B2 (en) Image analysis for user authentication
US11310223B2 (en) Identity authentication method and apparatus
CN112699828A (en) Implementation of biometric authentication
CN109359548A (en) Plurality of human faces identifies monitoring method and device, electronic equipment and storage medium
KR102092931B1 (en) Method for eye-tracking and user terminal for executing the same
CN109902630A (en) A kind of attention judgment method, device, system, equipment and storage medium
US20200026939A1 (en) Electronic device and method for controlling the same
CN105740688B (en) Unlocking method and device
CN109218269A (en) Identity authentication method, device, equipment and data processing method
CN110472130A (en) Reduce the demand to manual beginning/end point and triggering phrase
CN110109541A (en) A kind of method of multi-modal interaction
EP4092549B1 (en) Captcha method and apparatus, device, and storage medium
US20190347390A1 (en) Electronic device and method for controlling the same
CA3135471C (en) App login verification method and device and computer readable storage medium
KR102082418B1 (en) Electronic device and method for controlling the same
US11599612B2 (en) Method, apparatus and system for authenticating a user based on eye data and/or facial data
CN112883851A (en) Learning state detection method and device, electronic equipment and storage medium
CN112908325A (en) Voice interaction method and device, electronic equipment and storage medium
CN109947239A (en) A kind of air imaging system and its implementation
CN113077262A (en) Catering settlement method, device, system, machine readable medium and equipment
CN111275874B (en) Information display method, device and equipment based on face detection and storage medium
CN113093907B (en) Man-machine interaction method, system, equipment and storage medium
CN107992825B (en) Face recognition method and system based on augmented reality
US11250242B2 (en) Eye tracking method and user terminal performing same