CN110891049A - Video-based account login method, device, medium and electronic equipment - Google Patents

Video-based account login method, device, medium and electronic equipment

Publication number
CN110891049A
CN110891049A (application CN201910969479.2A)
Authority
CN
China
Prior art keywords
user
video
initial
value
account
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910969479.2A
Other languages
Chinese (zh)
Inventor
陈恳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An International Smart City Technology Co Ltd
Original Assignee
Ping An International Smart City Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An International Smart City Technology Co Ltd filed Critical Ping An International Smart City Technology Co Ltd
Priority to CN201910969479.2A
Publication of CN110891049A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00: Network architectures or network communication protocols for network security
    • H04L63/08: Network architectures or network communication protocols for network security for authentication of entities
    • H04L63/0815: Network architectures or network communication protocols for network security for authentication of entities providing single-sign-on or federations
    • H04L63/0861: Network architectures or network communication protocols for network security for authentication of entities using biometrical features, e.g. fingerprint, retina-scan
    • H04L63/10: Network architectures or network communication protocols for network security for controlling access to devices or network resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The disclosure relates to the technical field of the internet, and discloses a video-based account login method, apparatus, medium, and electronic device. The method comprises the following steps: in response to a login request instruction triggered by a user, acquiring a verification video for logging in to the user's account; reading the verification video to extract the user's biometric value from it; detecting, based on the user's biometric value, whether a matching initial biometric value exists in a pre-constructed user information database; and, if such a matching initial biometric value exists, authorizing the account corresponding to it as the user's login account. Under this method, the user's video data serves as the verification basis for the login account, which simplifies the account login process and improves the user experience while ensuring the security of account login.

Description

Video-based account login method, device, medium and electronic equipment
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to a method, an apparatus, a medium, and an electronic device for account login based on video.
Background
Today, account login is an important part of a user's participation in internet services and internet life; for example, a user treats the login page of application software (an APP) on an electronic device such as a mobile phone as the account login entry. Currently, a user generally logs in to an account with an account name and password, or through an external link (such as a WeChat or microblog login link).
However, when different users log in to accounts on the same device, or the same user logs in to an account on different devices, the corresponding account name and password must be entered manually each time. In addition, once the account password is leaked, others can easily log in to the user's account.
Therefore, the above manners of account login suffer from the technical problems of a tedious login process, low login efficiency, poor user experience, and low security.
Disclosure of Invention
The present disclosure aims to provide a video-based account login method and apparatus, a computer-readable storage medium, and an electronic device, so as to at least simplify the account login process, improve account login efficiency, and ensure account login security.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to an aspect of the embodiments of the present disclosure, there is provided a video-based account login method, including: in response to a login request instruction triggered by a user, acquiring a verification video for logging in to the user's account; reading the verification video to extract the user's biometric value from it; detecting, based on the user's biometric value, whether a matching initial biometric value exists in a pre-constructed user information database; and, if such a matching initial biometric value exists, authorizing the account corresponding to it as the user's login account.
According to an aspect of the embodiments of the present disclosure, there is provided a video-based account login apparatus, including:
an acquisition unit, configured to acquire, in response to a login request instruction triggered by a user, a verification video of the user for logging in to an account; a reading unit, configured to read the verification video to extract the user's biometric value from it; a detection unit, configured to detect, based on the user's biometric value, whether a matching initial biometric value exists in a pre-constructed user information database; and an authorization unit, configured to authorize the account corresponding to the matching initial biometric value as the user's login account.
According to an aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program comprising executable instructions that, when executed by a processor, implement a video-based account login method as described in the above embodiments.
According to an aspect of an embodiment of the present disclosure, there is provided an electronic device including: one or more processors; a memory for storing executable instructions of the processor, which when executed by the one or more processors, cause the one or more processors to implement a video-based account login method as described in the embodiments above.
According to the technical solution above, using the user's video data as the verification basis for account login makes full use of two advantages of video data: it is convenient to obtain when the user logs in, and it cannot be substituted when verifying the user's identity. For example, a non-genuine user cannot obtain the genuine user's video data and therefore cannot log in to the account. The technical solution of the embodiments of the present disclosure thus simplifies the account login process and improves the user experience while ensuring the security of account login, and can thereby solve the prior-art problems of a tedious login process, low login efficiency, poor user experience, and low security.
Drawings
The above and other features and advantages of the present disclosure will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings.
Fig. 1 is an application scenario diagram illustrating a video-based account login method according to an embodiment of the present disclosure;
FIG. 2 is a flow diagram illustrating a video-based account login in accordance with an embodiment of the present disclosure;
FIG. 3 is a flow diagram illustrating enabling video-based account login functionality according to an embodiment of the present disclosure;
FIG. 4 is a detailed flowchart of step 250 shown in FIG. 2, according to an embodiment of the present disclosure;
FIG. 5 is a detailed flowchart of step 250 shown in FIG. 2, according to an embodiment of the present disclosure;
FIG. 6 is a detailed flowchart of step 260 shown in FIG. 2, according to an embodiment of the present disclosure;
FIG. 7 is a detailed flowchart of step 263 shown in FIG. 6, according to an embodiment of the present disclosure;
FIG. 8 is a block diagram illustrating a video-based account login apparatus in accordance with an embodiment of the present disclosure;
FIG. 9 illustrates a computer-readable storage medium implementing the above method, according to an embodiment of the present disclosure;
fig. 10 is a block diagram illustrating an example of an electronic device implementing the above method according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments will now be described more fully with reference to the accompanying drawings. The exemplary embodiments, however, may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. In the drawings, the size of some of the elements may be exaggerated or distorted for clarity. The same reference numerals denote the same or similar structures in the drawings, and thus detailed descriptions thereof will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, etc. In other instances, well-known structures, methods, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
First, the present disclosure illustrates an application scenario of a video-based account login method.
Fig. 1 is a schematic diagram illustrating an application scenario of the video-based account login method.
As shown in the figure, the video-based account login is implemented on a mobile phone, but the mobile phone may also be replaced by a portable mobile electronic device such as a tablet computer or a notebook computer, or by a stationary electronic device such as a desktop computer or a field terminal. In the illustrated scenario, when a user needs to log in to an account registered in an APP, the user first opens the APP, and the phone displays the page shown as 110 in fig. 1, in which the user can select a login mode: account login or video login. If the user taps the words "video login", the phone immediately jumps to the page shown as 120, which displays a virtual touch button for "video recording". When the user taps the video recording button, the APP immediately invokes the phone's camera to start recording a video. It should be noted that the recorded video must include the face and the voice of the user logging in, where the voice may be anything the user chooses to say, whether a single word or a passage, and the duration of the recording is not limited and may be, for example, 3 seconds or 5 seconds.
After the recording is finished, the phone immediately jumps to the page shown as 130, which displays a virtual touch button labeled "login". When the user taps this button, the APP verifies from the recorded video whether the logging-in user is legitimate. If so, the user is authorized to log in to the personal account and the phone displays the "login success" message shown on page 140; if not, the phone displays the "login failure" message shown on page 150, together with the "account login" and "video login" options so that the user can reselect a login mode.
As described above, it should be understood that the application scenario of the video-based account login method is not limited to the one shown above. For example, video-based login may also be triggered when the user opens the APP, rather than requiring the user to tap the "video login" words.
According to a first aspect of the present disclosure, a video-based account login method is provided.
Referring to fig. 2, a flowchart of a video-based account login method is shown. It should be noted that the entity executing the method may be a client, or a backend system corresponding to the client. The video-based account login method may include the following steps:
step 240, in response to a login request instruction triggered by a user, acquiring a verification video for logging in an account of the user.
Step 250, reading the verification video to read the biometric value of the user from the verification video.
Step 260, detecting, based on the biometric value of the user, whether an initial biometric value matching it exists in a pre-constructed user information database.
Step 270, authorizing the account corresponding to the initial biometric value matching the biometric value of the user as the login account of the user.
The above steps are explained in detail below:
step 240, in response to a login request instruction triggered by a user, acquiring a verification video for logging in an account of the user.
Specifically, obtaining the verification video of the user for logging in to the account means obtaining a self-recorded video of the logging-in user, captured after the user triggers the login request instruction by the camera-equipped electronic device on which the account login client runs. The video must include the face and the voice of the user logging in, where the voice may be anything the user chooses to say, whether a single word or a passage, and the duration of the recording is not limited and may be, for example, 3 seconds or 5 seconds.
Further, the user may be asked to speak a specific word or passage, such as "log in to my account".
In summary, when a user needs to log in to an account by video login, after triggering the login request instruction the user may, for example, say "log in to my account" while facing the camera of an electronic device (e.g., a mobile phone); the electronic device records the video containing the user's face and the spoken words, thereby obtaining the verification video for logging in to the account.
In addition, before step 240, a method as shown in fig. 3 may also be included.
Referring to fig. 3, which shows a flowchart of enabling the video-based account login function: this method may be executed when the user enables the function, and its main purpose is to store the user's initial basic data in advance so that the legitimacy of the user's login identity can be verified later.
Specifically, the method for the user to enable the video-based account login function may include the steps of:
step 210, in response to a video login function enabling instruction triggered by a user, acquiring an initial video of the user for enabling a video login function.
Specifically, the process of acquiring the initial video for enabling the video login function in response to the user-triggered enabling instruction may be the same as the process of acquiring the verification video for logging in to the account in response to the user-triggered login request instruction; the description is therefore not repeated here.
Step 220, reading the initial video to read the initial biometric value of the user from the initial video.
Since the initial video contains information about the user's face and voice, the initial biometric value of the user can be read from it. In particular:
In one embodiment, reading the initial biometric value of the user from the initial video may be reading the user's initial facial feature value.
In one embodiment, it may be reading the user's initial sound feature value.
In one embodiment, it may be reading both the user's initial facial feature value and initial sound feature value.
Step 230, storing the read initial biometric value in the storage field corresponding to the user in the pre-constructed user information database.
The user information database may be constructed in advance and stores the user's personal basic information data and related business data. The personal basic information data may include different types of personal information such as the user's name, birthday, telephone number, registration time, account name, and account password, each stored as a field. After the video login function is enabled, the user's initial facial feature value and initial sound feature value read from the acquired video can also be stored as fields in the personal basic information data. Each type of information is stored at a designated field position, so when a user has not enabled the video login function, the corresponding field positions for the initial facial feature value and the sound feature value hold no value and are null. Generally speaking, the number of users who have enabled the video login function equals the number of initial biometric values stored in the user information database.
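As a minimal sketch of the storage scheme just described (the table and column names here are hypothetical, not taken from the patent), the biometric fields can sit beside the other personal-information fields and remain NULL until the user enables video login:

```python
import sqlite3

# Hypothetical schema: biometric fields are NULL until video login is enabled.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE user_info (
        account_name   TEXT PRIMARY KEY,
        phone_number   TEXT,
        face_features  TEXT,   -- serialized initial facial feature values (NULL if unset)
        voice_features TEXT    -- serialized initial sound feature values (NULL if unset)
    )
""")
# Registration without video login: biometric fields stay NULL.
conn.execute("INSERT INTO user_info (account_name, phone_number) VALUES (?, ?)",
             ("alice", "555-0100"))

def enable_video_login(account, face_vec, voice_vec):
    """Store the initial biometric values read from the user's enrollment video."""
    conn.execute(
        "UPDATE user_info SET face_features = ?, voice_features = ? WHERE account_name = ?",
        (",".join(map(str, face_vec)), ",".join(map(str, voice_vec)), account))

enable_video_login("alice", [0.12, 0.53, 0.91], [0.33, 0.08])
row = conn.execute(
    "SELECT face_features FROM user_info WHERE account_name = 'alice'").fetchone()
print(row[0])  # 0.12,0.53,0.91
```

Serialization as comma-joined text is purely illustrative; a production system would likely store fixed-length feature vectors in a binary column or a dedicated feature store.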
Step 250, reading the verification video to read the biometric value of the user from the verification video.
In one embodiment, the reading of the biometric value of the user from the verification video may be a reading of a face feature value of the user.
In one embodiment, the reading of the biometric value of the user from the verification video may also be the reading of the voice feature value of the user.
In one embodiment, the reading of the biometric value of the user from the verification video may also be a reading of a face feature value of the user and a reading of a voice feature value of the user.
As described above, it is understood that the reading of the biometric value of the user from the verification video may be arbitrary and is not limited to those shown above.
In order to make those skilled in the art better understand how the face feature value and the voice feature value of the user are read from the verification video, the following description will be made:
in a specific implementation of an embodiment, reading the face feature value of the user from the verification video may be performed according to a method as shown in fig. 4.
Referring to fig. 4, which is a detailed flowchart illustrating step 250 shown in fig. 2 according to an embodiment of the present disclosure, reading the facial feature value of the user from the verification video may include the following steps:
Step 251, identifying the face image in the verification video to acquire a target picture with a clear face image.
According to the principle of persistence of vision, when successive pictures change at more than 24 frames per second, the human eye cannot distinguish the individual static pictures and perceives a smooth, continuous visual effect; such a sequence of pictures is called a video. The nature of a video can therefore be understood as a collection of frames. Based on this, before a target picture with a clear face image can be acquired from the verification video, the face images in the video must first be recognized. Specifically, recognizing the face images in the video means detecting whether each frame contains a face: if a frame contains a face image, it is retained; otherwise, it is filtered out.
Further, to ensure picture quality, the pictures can be screened again to obtain clear target pictures with face images. To this end, the recognizability of each retained frame is determined, and it is judged whether this recognizability is greater than a preset threshold. If it is greater than the threshold, the picture is taken as a target picture; otherwise, the picture is filtered out.
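The two-stage filtering just described (keep a frame only if it contains a face and its recognizability exceeds the preset threshold) can be sketched as follows; the face detector and the recognizability scorer are stand-in callables here, since the text leaves their concrete implementations open:

```python
def select_target_frames(frames, contains_face, recognizability, threshold):
    """Keep frames that contain a face AND whose recognizability exceeds the threshold."""
    kept = []
    for frame in frames:
        if not contains_face(frame):
            continue                          # filter: no face image in this frame
        if recognizability(frame) <= threshold:
            continue                          # filter: face present but too blurry
        kept.append(frame)
    return kept

# Toy stand-ins: a frame is a (has_face, sharpness) pair.
frames = [(True, 0.9), (False, 0.95), (True, 0.4), (True, 0.8)]
targets = select_target_frames(frames,
                               contains_face=lambda f: f[0],
                               recognizability=lambda f: f[1],
                               threshold=0.5)
print(targets)  # [(True, 0.9), (True, 0.8)]
```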
In a specific implementation, determining the recognizability of each frame with a face image can be accomplished as follows: first convert the picture to a grayscale image, then feed the grayscale image data into a picture sharpness evaluation algorithm, which outputs a result reflecting the picture's sharpness; this result is used as the picture's recognizability. It should be noted that in the present disclosure, the higher a picture's recognizability, the clearer the picture.
Further, in the implementation described above, the conversion of the picture to a grayscale image may use the conversion formula Gray = (R + G + B) / 3, where R, G, and B represent the intensities of the red, green, and blue color channels, respectively.
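The average-channel conversion above is a one-liner in practice; a minimal sketch:

```python
import numpy as np

def to_grayscale(rgb):
    """Average-channel grayscale conversion: Gray = (R + G + B) / 3."""
    rgb = np.asarray(rgb, dtype=np.float64)
    return (rgb[..., 0] + rgb[..., 1] + rgb[..., 2]) / 3.0

pixel = np.array([[[90, 120, 150]]])   # a single RGB pixel
print(to_grayscale(pixel))             # [[120.]]
```

Note that many libraries instead use a luminance-weighted formula (e.g. 0.299 R + 0.587 G + 0.114 B); the unweighted average shown here is what this document specifies.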
Further, in the implementation described above, the picture sharpness evaluation algorithm may be designed based on the Brenner gradient function, which computes the squared gray-level difference between pixels two positions apart. The function is defined as follows:
D(f) = Σ_y Σ_x |f(x+2, y) - f(x, y)|²
where f(x, y) is the gray value of the pixel at (x, y) in the picture, and D(f) is the computed picture sharpness.
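A small sketch of the Brenner function on a grayscale array; a uniform image scores zero, and a high-contrast image scores higher:

```python
import numpy as np

def brenner(gray):
    """Brenner gradient: sum of squared gray-level differences between pixels
    two positions apart horizontally, D(f) = sum |f(x+2, y) - f(x, y)|^2."""
    gray = np.asarray(gray, dtype=np.float64)
    diff = gray[:, 2:] - gray[:, :-2]
    return float(np.sum(diff ** 2))

flat  = np.full((4, 6), 128.0)                                # uniform (blurry) image
edges = np.tile([0.0, 0.0, 255.0, 255.0, 0.0, 0.0], (4, 1))   # high-contrast image
print(brenner(flat))                   # 0.0
print(brenner(edges) > brenner(flat))  # True
```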
Further, in the implementation described above, the picture sharpness evaluation algorithm may also be designed based on the Tenengrad gradient function, which uses the Sobel operator to extract the image's gradients in the horizontal and vertical directions. The function is defined as follows:
D(f) = Σ_y Σ_x |G(x, y)|, for G(x, y) > T
the form of G (x, y) is as follows:
Figure BDA0002231598300000081
where T is a given edge-detection threshold, and Gx and Gy are the convolutions of the image with the Sobel horizontal and vertical edge-detection operators at pixel (x, y).
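The Tenengrad computation above can be sketched with an explicit (unoptimized) 3x3 convolution; a real implementation would use a vectorized convolution routine instead of the nested loops:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def conv3(img, kernel):
    """Valid-mode 3x3 correlation (no padding), sufficient for a sketch."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = np.sum(img[y:y + 3, x:x + 3] * kernel)
    return out

def tenengrad(gray, T=0.0):
    """Tenengrad sharpness: sum of Sobel gradient magnitudes G(x, y) above T."""
    gray = np.asarray(gray, dtype=np.float64)
    gx, gy = conv3(gray, SOBEL_X), conv3(gray, SOBEL_Y)
    g = np.sqrt(gx ** 2 + gy ** 2)
    return float(np.sum(g[g > T]))

flat = np.full((5, 5), 100.0)                                  # no edges at all
step = np.hstack([np.zeros((5, 3)), np.full((5, 3), 255.0)])   # one vertical edge
print(tenengrad(flat))        # 0.0
print(tenengrad(step) > 0.0)  # True
```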
In addition, the image sharpness evaluation algorithm may also be designed based on a Laplacian gradient function, an SMD (grayscale variance) function, an SMD2 (grayscale variance product) function, etc., and the specific principle thereof is not described herein again. It is noted that the design of the picture sharpness evaluation algorithm may be arbitrary and is not limited to those shown above.
In summary, the target pictures with clear face images are obtained by filtering and screening the frames of the video; the advantage is that clear face images make the facial feature values read in subsequent steps more accurate and faithful.
Step 252, performing feature extraction on the face image in the target picture to obtain a face feature value of the face image in the target picture.
The feature extraction of the face image in the target picture needs to perform preprocessing on the face image, wherein the preprocessing process mainly includes light compensation, gray level transformation, histogram equalization, normalization, geometric correction, filtering, sharpening and the like on the face image.
The features of the face image can be divided into visual features, pixel statistical features, face image transformation coefficient features, face image algebraic features and the like. The face feature extraction is carried out aiming at certain features of the face, and the face feature extraction is a process of carrying out feature modeling on the face.
Since a human face is composed of parts such as eyes, nose, mouth, and chin, geometric description of the parts and their structural relationship can be used as important features for recognizing the human face, and these features are called geometric features. Therefore, in a specific implementation, the feature extraction of the face image in the target picture to obtain the face feature value of the face image in the target picture may be implemented by a geometric feature-based method.
Specifically, the geometric feature-based method mainly obtains feature data that is helpful for face classification according to shape descriptions of face organs and distance characteristics between the face organs, and feature components of the geometric feature-based method generally include euclidean distances, curvatures, angles and the like between feature points.
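A toy sketch of the geometric-feature idea described above, using pairwise Euclidean distances between landmarks; the landmark coordinates are invented for illustration, and a real system would obtain them from a facial-landmark detector:

```python
import math

# Hypothetical 2-D landmark positions in pixel coordinates.
landmarks = {
    "left_eye":  (30.0, 40.0),
    "right_eye": (70.0, 40.0),
    "nose_tip":  (50.0, 60.0),
    "mouth":     (50.0, 80.0),
}

def euclid(a, b):
    """Euclidean distance between two 2-D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def geometric_feature_vector(points):
    """Pairwise Euclidean distances between facial landmarks, one simple kind
    of geometric feature component mentioned in the text."""
    names = sorted(points)
    return [euclid(points[n1], points[n2])
            for i, n1 in enumerate(names) for n2 in names[i + 1:]]

vec = geometric_feature_vector(landmarks)
print(len(vec))  # 6 pairwise distances for 4 landmarks
```

Curvatures and angles between feature points, also mentioned above, would extend this vector in the same fashion.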
In addition, feature extraction is performed on the face image in the target picture to obtain the face feature value of the face image in the target picture, and the feature value can be obtained based on a template matching method, an algebraic feature or a statistical learning characterization method.
It should be noted that the facial feature values in the present disclosure may refer to various parameters used to describe facial features.
in one embodiment, the reading of the sound feature value of the user from the verification video may be performed according to a method as shown in fig. 5.
Referring to fig. 5, which is a detailed flowchart illustrating step 250 shown in fig. 2 according to an embodiment of the present disclosure, reading the sound feature value of the user from the verification video may include the following steps:
Step 253, performing noise reduction on the sound in the verification video to enhance the user's sound signal.
In the verification video, it is difficult to guarantee that all sound signals come from the user. When recording, some noise inevitably enters the video: the user may record in a noisy crowd, by the roadside, or in a factory. Therefore, in order to highlight the user's voice and enhance its signal, the sound in the verification video must undergo noise reduction.
In a specific implementation, the noise reduction of the sound in the verification video may be based on an LMS adaptive filter. The LMS adaptive filter automatically adjusts the current filter parameters using the filter parameters obtained at the previous moment, so as to adapt to the unknown or randomly varying statistics of the signal and noise, thereby achieving optimal filtering.
In addition, the noise reduction of the sound in the verification video may also be implemented with an LMS-based adaptive notch filter, basic spectral subtraction, or Wiener filtering.
Step 254, performing feature extraction on the user's sound signal to obtain the user's sound feature value.
In the present disclosure, the sound features extracted from the user's sound signal may include sound intensity and intensity level, loudness, pitch period and pitch frequency, frequency perturbation, amplitude perturbation, normalized noise energy, Mel-frequency cepstral coefficients, short-time energy, short-time average amplitude, short-time average zero-crossing rate, formants, glottal wave, and other sound features.
It should be noted that when extracting the sound features listed above, one of the sound features may be extracted, all the sound features may be extracted, or any combination of the sound features listed above may be extracted.
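Two of the listed features, short-time energy and the short-time zero-crossing rate, are simple enough to sketch directly on one analysis frame:

```python
import numpy as np

def short_time_energy(frame):
    """Short-time energy of one analysis frame: sum of squared samples."""
    return float(np.sum(np.asarray(frame, dtype=np.float64) ** 2))

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    frame = np.asarray(frame, dtype=np.float64)
    signs = np.sign(frame)
    return float(np.mean(signs[1:] != signs[:-1]))

frame = np.array([0.5, -0.5, 0.5, -0.5, 0.5])   # rapidly alternating toy frame
print(short_time_energy(frame))    # 1.25
print(zero_crossing_rate(frame))   # 1.0
```

In practice these are computed over overlapping windowed frames of the sound signal and concatenated into the feature vector.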
It should also be noted that the sound feature values described in the present disclosure may refer to various parameters used to describe sound features.
Step 260, detecting, based on the biometric value of the user, whether an initial biometric value matching it exists in a pre-constructed user information database.
Specifically, in step 260, detecting whether a matching initial biometric value exists in the pre-constructed user information database based on the user's biometric value may be implemented in various ways, for example:
In one embodiment, detecting whether an initial biometric value matching the biometric value of the user exists in a user information database constructed in advance, based on the biometric value of the user, may be accomplished by: calculating the face goodness of fit between the face feature value of the user and the initial face feature values in the user information database constructed in advance; and detecting, according to the face goodness of fit, whether an initial biometric value matching the biometric value of the user exists in the user information database constructed in advance.
In one embodiment, detecting whether an initial biometric value matching the biometric value of the user exists in a user information database constructed in advance, based on the biometric value of the user, may be accomplished by: calculating the sound goodness of fit between the sound feature value of the user and the initial sound feature values in the user information database constructed in advance; and detecting, according to the sound goodness of fit, whether an initial biometric value matching the biometric value of the user exists in the user information database constructed in advance.
In one embodiment, detecting whether there is an initial biometric value matching the biometric value of the user in a previously constructed user information database based on the biometric value of the user may be accomplished in a manner as shown in fig. 6:
referring to FIG. 6, a detailed flow chart of step 260 shown in FIG. 2 is shown. The method specifically comprises the following steps:
and 261, calculating the face coincidence degree between the face characteristic value of the user and the initial face characteristic value in the user information database constructed in advance.
And 262, calculating the sound matching degree between the sound characteristic value of the user and the initial sound characteristic value in the user information database constructed in advance.
And 263, detecting whether an initial biological characteristic value which is consistent with the biological characteristic value of the user exists in a user information database which is constructed in advance according to the human face goodness of fit and the sound goodness of fit.
In a specific implementation of an embodiment, the detecting whether there is an initial biometric value matching the biometric value of the user in a user information database constructed in advance according to the human face matching degree and the voice matching degree may be performed in a manner as shown in fig. 7:
referring to FIG. 7, a detailed flowchart of step 263 shown in FIG. 7 is shown. The method specifically comprises the following steps:
step 2631, calculating a comprehensive goodness of fit between the biometric value of the user and the initial biometric value in the user information database constructed in advance according to the face goodness of fit and the voice goodness of fit and a preset rule.
The comprehensive goodness of fit may be calculated as a proportional weighting of the face goodness of fit and the sound goodness of fit. For example, with weights of 0.8 for the face goodness of fit and 0.2 for the sound goodness of fit, if the face goodness of fit is 80 and the sound goodness of fit is 90, the comprehensive goodness of fit is:
0.8×80+0.2×90=82
step 2632, sorting the comprehensive goodness of fit to determine whether the maximum comprehensive goodness of fit exceeds a preset threshold;
step 2633, if yes, determining the initial biometric value corresponding to the maximum integrated goodness of fit as the initial biometric value matching the biometric value of the user.
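Steps 2631 to 2633 above can be sketched as follows. The 0.8/0.2 weights follow the worked example, while the threshold value of 75 and the dictionary-based interface are assumptions made purely for illustration.

```python
def best_match(face_fits, sound_fits, w_face=0.8, w_sound=0.2, threshold=75):
    """Pick the account whose comprehensive goodness of fit is highest.

    face_fits / sound_fits map each account in the user information
    database to its face / sound goodness of fit.  Returns the matching
    account, or None if even the best score stays at or below the
    preset threshold (75 here is an illustrative value).
    """
    combined = {acc: w_face * face_fits[acc] + w_sound * sound_fits[acc]
                for acc in face_fits}                       # step 2631
    acc, score = max(combined.items(), key=lambda kv: kv[1])  # step 2632
    return acc if score > threshold else None               # step 2633
```

With the document's numbers, `best_match({'a': 80}, {'a': 90})` combines to 0.8 × 80 + 0.2 × 90 = 82, which exceeds 75, so account `'a'` matches.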
In a specific implementation of an embodiment, detecting, according to the face goodness of fit and the sound goodness of fit, whether an initial biometric value matching the biometric value of the user exists in a user information database constructed in advance may be performed by: separately detecting whether the face goodness of fit and the sound goodness of fit exceed a preset threshold; performing intersection processing on the accounts whose face goodness of fit exceeds the preset threshold and the accounts whose sound goodness of fit exceeds the preset threshold, to obtain the initial biometric values whose face goodness of fit and sound goodness of fit both exceed the preset threshold; and determining such an initial biometric value as the initial biometric value matching the biometric value of the user.
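The intersection processing just described maps naturally onto set operations, as in this sketch; the threshold of 75 and the per-account dictionaries are illustrative assumptions, not values from the disclosure.

```python
def match_by_intersection(face_fits, sound_fits, threshold=75):
    """Return the accounts whose face AND sound goodness of fit
    both exceed the preset threshold (illustrative value)."""
    face_ok = {acc for acc, fit in face_fits.items() if fit > threshold}
    sound_ok = {acc for acc, fit in sound_fits.items() if fit > threshold}
    return face_ok & sound_ok  # set intersection
```

An empty result corresponds to the case where no matching initial biometric value exists and the login request is rejected.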
As described above, it is understood that the manner of detecting, according to the face goodness of fit and the sound goodness of fit, whether an initial biometric value matching the biometric value of the user exists in the user information database constructed in advance may vary, and is not limited to the manners shown above.
To help those skilled in the art better understand how the face goodness of fit between the face feature value of the user and the initial face feature values in the user information database constructed in advance, and the sound goodness of fit between the sound feature value of the user and the initial sound feature values in that database, are calculated, a specific description follows:
For the same user, biometric values (e.g., face feature values and sound feature values) obtained from videos recorded at different times are substantially the same, or extremely similar. Therefore, the face goodness of fit (or sound goodness of fit) between the user's feature value and an initial feature value in the user information database constructed in advance may be calculated by comparing, one by one, each parameter describing a face feature (or sound feature) in the user's feature value with the corresponding parameter in the initial feature value stored for each user. Alternatively, the parameters of the user's feature value and the parameters of each stored initial feature value may be input into a machine learning model trained in advance, and the outputs of the model may then be compared to calculate the goodness of fit.
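One plausible way to realize such a parameter-by-parameter comparison is to treat the parameters as a vector and map a cosine similarity onto a 0-100 goodness-of-fit scale. This concrete formula is an assumption for illustration, not the disclosure's method.

```python
import numpy as np

def goodness_of_fit(feat, initial_feat):
    """Map the cosine similarity of two feature-parameter vectors
    onto a 0-100 goodness-of-fit scale (one plausible realization)."""
    cos = np.dot(feat, initial_feat) / (
        np.linalg.norm(feat) * np.linalg.norm(initial_feat))
    return 50.0 * (cos + 1.0)  # cosine in [-1, 1] -> score in [0, 100]
```

Identical feature vectors score 100; orthogonal vectors score 50; opposite vectors score 0, which composes directly with the thresholding and weighting steps above.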
In one embodiment, after detecting whether there is an initial biometric value matching the biometric value of the user in a previously constructed user information database based on the biometric value of the user, the method may further include:
if no initial biometric value matching the biometric value of the user exists in the user information database constructed in advance, rejecting the account login request of the user; and displaying an account-password login page so that the user can log in to the account by entering an account and a password.
Step 270, authorizing the account corresponding to the initial biometric value matching the biometric value of the user to be the login account of the user.
It is noted that step 270 is performed on the premise that an initial biometric value matching the biometric value of the user exists in the user information database constructed in advance.
To sum up, according to the technical solution of the embodiments of the present disclosure, the video data of the user is used as the verification basis for logging in to the account, which makes full use of the fact that video data is convenient to obtain at login time and irreplaceable when verifying the user's identity. For example, a non-genuine user cannot acquire the video data of the genuine user and therefore cannot log in to the account. The technical solution of the embodiments of the present disclosure thus simplifies the account login process, improves the user experience, and at the same time ensures the security of account login, thereby solving the technical problems in the prior art that the account login process is cumbersome, login efficiency is low, user experience is poor, and security is weak.
The following describes embodiments of the apparatus of the present disclosure, which may be used to execute the video-based account login method in the above embodiments of the present disclosure. For details that are not disclosed in the embodiments of the apparatus of the present disclosure, please refer to the embodiments of the video-based account login method of the present disclosure.
Fig. 8 shows a block diagram of a video-based account login apparatus according to one embodiment of the present disclosure.
Referring to fig. 8, a video-based account login apparatus 800 according to an embodiment of the present disclosure includes: an obtaining unit 810, a reading unit 820, a detecting unit 830, and an authorizing unit 840.
The obtaining unit 810 is configured to obtain, in response to a login request instruction triggered by a user, a verification video of the user for logging in to an account. The reading unit 820 is configured to read the verification video to read the biometric value of the user from the verification video. The detecting unit 830 is configured to detect, based on the biometric value of the user, whether an initial biometric value matching the biometric value of the user exists in a user information database constructed in advance. The authorizing unit 840 is configured to authorize the account corresponding to the initial biometric value matching the biometric value of the user as the login account of the user.
In some embodiments of the present disclosure, based on the foregoing scheme, the obtaining unit 810 is further configured to: responding to a video login function starting instruction triggered by a user, and acquiring an initial video of the user for starting a video login function; the reading unit 820 is further configured to: reading the initial video to read an initial biometric value of the user from the initial video; the video-based account login device further comprises a storage unit, wherein the storage unit is used for storing the read initial biological characteristic value in a storage field corresponding to the user in a user information database constructed in advance.
In some embodiments of the present disclosure, based on the foregoing, the reading unit 820 is further configured to: and reading the face characteristic value of the user and the sound characteristic value of the user from the verification video.
In some embodiments of the present disclosure, based on the foregoing, the reading unit 820 includes: an identification unit, configured to identify the face image in the verification video to obtain a target picture with a clear face image; a noise reduction unit, configured to perform noise reduction processing on the sound in the verification video to enhance the sound signal of the user; and an extraction unit, configured to perform feature extraction on the face image in the target picture to obtain the face feature value of the face image in the target picture, and to perform feature extraction on the sound signal of the user to obtain the sound feature value of the user.
In some embodiments of the present disclosure, based on the foregoing scheme, the detecting unit 830 is configured to: calculating the face goodness of fit between the face characteristic value of the user and the initial face characteristic value in a user information database constructed in advance; calculating the sound matching degree between the sound characteristic value of the user and the initial sound characteristic value in a user information database constructed in advance; and detecting whether an initial biological characteristic value which is consistent with the biological characteristic value of the user exists in a user information database which is constructed in advance according to the human face goodness of fit and the sound goodness of fit.
In some embodiments of the present disclosure, based on the foregoing scheme, the detecting unit 830 is configured to: calculating the comprehensive goodness of fit between the biological characteristic value of the user and the initial biological characteristic value in a user information database constructed in advance according to the human face goodness of fit and the sound goodness of fit and a preset rule; sequencing the comprehensive goodness of fit to judge whether the maximum comprehensive goodness of fit exceeds a preset threshold value; and if so, determining the initial biological characteristic value corresponding to the maximum comprehensive goodness of fit as the initial biological characteristic value matched with the biological characteristic value of the user.
In some embodiments of the present disclosure, based on the foregoing scheme, the detecting unit 830 is configured to: if the initial biological characteristic value which is matched with the biological characteristic value of the user does not exist in a user information database which is constructed in advance, rejecting an account login request of the user; and displaying an account password login page so that a user can login an account by inputting an account and a password.
It should be noted that although several units of the video-based account login method and the video-based account login apparatus are mentioned in the above detailed description, such division is not mandatory. Indeed, according to embodiments of the present disclosure, the features and functions of two or more of the units described above may be embodied in one unit; conversely, the features and functions of one unit described above may be further divided and embodied by a plurality of units. The components displayed as units may or may not be physical units, and may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the disclosed solution, which one of ordinary skill in the art can understand and implement without inventive effort.
As another aspect, the present disclosure also provides a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, various aspects of the disclosure may also be implemented in the form of a program product comprising program code for causing a terminal device to perform the steps according to various exemplary embodiments of the disclosure described in the "exemplary methods" section above of this specification, when the program product is run on the terminal device.
Referring to fig. 9, a program product 900 for implementing the above method according to an embodiment of the present disclosure is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
As another aspect, the present disclosure also provides an electronic device capable of implementing the above method.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," module "or" system.
An electronic device 1000 according to this embodiment of the disclosure is described below with reference to fig. 10. The electronic device 1000 shown in fig. 10 is only an example and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 10, the electronic device 1000 is embodied in the form of a general purpose computing device. The components of the electronic device 1000 may include, but are not limited to: the at least one processing unit 1010, the at least one memory unit 1020, and a bus 1030 that couples various system components including the memory unit 1020 and the processing unit 1010.
Wherein the storage unit stores program code that can be executed by the processing unit 1010 to cause the processing unit 1010 to perform the steps according to various exemplary embodiments of the present disclosure described in the section "example methods" above in this specification.
The memory unit 1020 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 1021 and/or a cache memory unit 1022, and may further include a read-only memory unit (ROM) 1023.
Storage unit 1020 may also include a program/utility 1024 having a set (at least one) of program modules 1025, such program modules 1025 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 1030 may be any one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, and a local bus using any of a variety of bus architectures.
The electronic device 1000 may also communicate with one or more external devices 1200 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 1000, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 1000 to communicate with one or more other computing devices. Such communication may occur through input/output (I/O) interfaces 1050. Also, the electronic device 1000 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the internet) via the network adapter 1060. As shown, the network adapter 1060 communicates with the other modules of the electronic device 1000 over the bus 1030. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 1000, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Furthermore, the above-described figures are merely schematic illustrations of processes included in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
It is to be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A video-based account login method is characterized by comprising the following steps:
responding to a login request instruction triggered by a user, and acquiring a verification video for logging in an account of the user;
reading the verification video to read the biometric value of the user from the verification video;
detecting whether an initial biometric value matching the biometric value of the user exists in a user information database constructed in advance based on the biometric value of the user;
authorizing the account corresponding to the existing initial biometric value matching the biometric value of the user as the login account of the user.
2. The method of claim 1, wherein prior to obtaining the user's verification video for logging into the account in response to a user-triggered login request instruction, the method further comprises:
responding to a video login function starting instruction triggered by a user, and acquiring an initial video of the user for starting a video login function;
reading the initial video to read an initial biometric value of the user from the initial video;
and storing the read initial biological characteristic value in a storage field corresponding to the user in a user information database constructed in advance.
3. The method according to claim 1, wherein the verification video comprises a face of a user and a voice of the user, and reading the biometric value of the user from the verification video comprises:
and reading the face characteristic value of the user and the sound characteristic value of the user from the verification video.
4. The method of claim 3, wherein reading the face feature value of the user from the verification video comprises:
identifying a face image in the verification video to obtain a target picture with a clear face image;
extracting the characteristics of the face image in the target picture to obtain the face characteristic value of the face image in the target picture;
reading the sound characteristic value of the user from the verification video, comprising:
performing noise reduction processing on the sound in the verification video to enhance the sound signal of the user;
and carrying out feature extraction on the sound signal of the user to obtain a sound feature value of the user.
5. The method of claim 1, wherein the biometric values of the user comprise face feature values of the user and voice feature values of the user, wherein the initial biometric values comprise initial face feature values and initial voice feature values,
the detecting whether an initial biometric value matching the biometric value of the user exists in a user information database constructed in advance based on the biometric value of the user includes:
calculating the face goodness of fit between the face characteristic value of the user and the initial face characteristic value in a user information database constructed in advance;
calculating the sound matching degree between the sound characteristic value of the user and the initial sound characteristic value in a user information database constructed in advance;
and detecting whether an initial biological characteristic value which is consistent with the biological characteristic value of the user exists in a user information database which is constructed in advance according to the human face goodness of fit and the sound goodness of fit.
6. The method according to claim 5, wherein the detecting whether an initial biometric value matching the biometric value of the user exists in a user information database constructed in advance according to the human face matching degree and the voice matching degree comprises:
calculating the comprehensive goodness of fit between the biological characteristic value of the user and the initial biological characteristic value in a user information database constructed in advance according to the human face goodness of fit and the sound goodness of fit and a preset rule;
sequencing the comprehensive goodness of fit to judge whether the maximum comprehensive goodness of fit exceeds a preset threshold value;
and if so, determining the initial biological characteristic value corresponding to the maximum comprehensive goodness of fit as the initial biological characteristic value matched with the biological characteristic value of the user.
7. The method according to claim 1, wherein after detecting whether there is an initial biometric value matching the biometric value of the user in a previously constructed user information database based on the biometric value of the user, the method further comprises:
if the initial biological characteristic value which is matched with the biological characteristic value of the user does not exist in a user information database which is constructed in advance, rejecting an account login request of the user;
and displaying an account password login page so that a user can login an account by inputting an account and a password.
8. A video-based account login apparatus, comprising:
the acquisition unit is used for responding to a login request instruction triggered by a user and acquiring a verification video of the user for logging in an account;
a reading unit, configured to read the verification video to read a biometric value of the user from the verification video;
a detection unit configured to detect whether an initial biometric value matching the biometric value of the user exists in a user information database constructed in advance based on the biometric value of the user;
and the authorization unit is used for authorizing the account corresponding to the initial biological characteristic value which is matched with the biological characteristic value of the user to be the login account of the user.
9. A computer-readable program medium, characterized in that it stores computer program instructions which, when executed by a computer, cause the computer to perform the method according to any one of claims 1 to 7.
10. A video-based account login electronic device, the electronic device comprising:
a processor;
a memory having stored thereon computer readable instructions which, when executed by the processor, implement the method of any of claims 1 to 7.
CN201910969479.2A 2019-10-12 2019-10-12 Video-based account login method, device, medium and electronic equipment Pending CN110891049A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910969479.2A CN110891049A (en) 2019-10-12 2019-10-12 Video-based account login method, device, medium and electronic equipment


Publications (1)

Publication Number Publication Date
CN110891049A true CN110891049A (en) 2020-03-17

Family

ID=69746110

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910969479.2A Pending CN110891049A (en) 2019-10-12 2019-10-12 Video-based account login method, device, medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN110891049A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113706767A (en) * 2021-10-27 2021-11-26 深圳市恒裕惠丰贸易有限公司 Register storage system capable of automatically identifying face value of coin

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101697514A (en) * 2009-10-22 2010-04-21 中兴通讯股份有限公司 Method and system for identity authentication
CN104598796A (en) * 2015-01-30 2015-05-06 科大讯飞股份有限公司 Method and system for identifying identity
EP3229177A2 (en) * 2016-04-04 2017-10-11 Daon Holdings Limited Methods and systems for authenticating users
CN107368817A (en) * 2017-07-26 2017-11-21 湖南云迪生物识别科技有限公司 Face identification method and device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TAN LING ET AL.: "Application of Interactive Multi-Biometric Recognition Technology in E-Commerce", Telecommunications Science *
KUANG XICHAO: "Embedded Video Surveillance System Based on Face Recognition", China Master's Theses Full-text Database, Information Science and Technology *


Similar Documents

Publication Publication Date Title
US10699103B2 (en) Living body detecting method and apparatus, device and storage medium
US10095927B2 (en) Quality metrics for biometric authentication
US9922238B2 (en) Apparatuses, systems, and methods for confirming identity
CN109409204B (en) Anti-counterfeiting detection method and device, electronic equipment and storage medium
US20170262472A1 (en) Systems and methods for recognition of faces e.g. from mobile-device-generated images of faces
US8744141B2 (en) Texture features for biometric authentication
EP2704052A1 (en) Transaction verification system
CN110728234A (en) Driver face recognition method, system, device and medium
CN108280418A (en) The deception recognition methods of face image and device
EP2863339A2 (en) Methods and systems for determing user liveness
WO2016084072A1 (en) Anti-spoofing system and methods useful in conjunction therewith
US10824890B2 (en) Living body detecting method and apparatus, device and storage medium
US11328043B2 (en) Spoof detection by comparing images captured using visible-range and infrared (IR) illuminations
CN112868028A (en) Spoofing detection using iris images
CN111241873A (en) Image reproduction detection method, training method of model thereof, payment method and payment device
JP4708835B2 (en) Face detection device, face detection method, and face detection program
CN110891049A (en) Video-based account login method, device, medium and electronic equipment
CN111985400A (en) Face living body identification method, device, equipment and storage medium
CN112201254A (en) Non-sensitive voice authentication method, device, equipment and storage medium
CN112949363A (en) Face living body identification method and device
CN111368644B (en) Image processing method, device, electronic equipment and storage medium
US11842573B1 (en) Methods and systems for enhancing liveness detection of image data
RU2798179C1 (en) Method, terminal and system for biometric identification
KR102579610B1 (en) Apparatus for Detecting ATM Abnormal Behavior and Driving Method Thereof
Dixit et al. SIFRS: Spoof Invariant Facial Recognition System (A Helping Hand for Visual Impaired People)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200317