CN117558058A - User login method, system, computer equipment and storage medium
- Publication number: CN117558058A (application CN202410047289.6A)
- Authority: CN (China)
- Prior art keywords: features, user, face, light intensity, login
- Legal status: Pending (the status listed is an assumption, not a legal conclusion)
Classifications
- G06V40/166 Human faces: detection, localisation or normalisation using acquisition arrangements
- G06V40/169 Feature extraction; face representation: holistic features based on the facial image taken as a whole
- G06V40/171 Feature extraction; face representation: local features and components; facial parts; geometrical relationships
- G06V10/454 Local feature extraction: filters integrated into a hierarchical structure, e.g. convolutional neural networks [CNN]
- G06V10/469 Descriptors: contour-based spatial representations, e.g. vector-coding
- G06V10/56 Extraction of image or video features relating to colour
- G06V10/751 Pattern matching: comparing pixel values or logical combinations thereof, e.g. template matching
- G06V10/761 Pattern matching: proximity, similarity or dissimilarity measures
- G06V10/7715 Feature spaces: feature extraction by transforming the feature space, e.g. subspace methods
- G06V10/806 Fusion of extracted features at the feature extraction level
- G06V10/82 Image or video recognition or understanding using neural networks
- G06N3/0464 Neural network architectures: convolutional networks [CNN, ConvNet]
- H04L63/0861 Network security: authentication of entities using biometrical features, e.g. fingerprint or retina scan
- H04L9/40 Network security protocols
Abstract
The invention discloses a user login method, system, computer equipment and storage medium. The method comprises the following steps: when a face login request from a user is received, acquiring different types of images of the user's face to obtain a facial light intensity image and a facial depth image; extracting features from the facial light intensity image and the facial depth image respectively to obtain facial light intensity features and facial depth features; performing adaptive feature enhancement processing on the facial light intensity features based on the facial depth features to obtain facial fusion features; and performing user identification based on the facial fusion features and a pre-stored facial image template, and generating feedback data for the face login request based on the user identification data. By extracting image features from both the facial light intensity image and the facial depth image of the user, the invention improves the extraction of the user's facial features and obtains more accurate facial features, thereby improving the accuracy of face recognition and, in turn, the security of user login.
Description
Technical Field
The present invention relates to the field of image data processing technologies, and in particular, to a user login method, a system, a computer device, and a storage medium.
Background
Traditional internet platforms usually rely on user name and password login; once the user name and password are leaked, others can log in to the corresponding account, endangering personal account information and even property. Existing solutions therefore collect real-time biometric features of the user at login to verify identity and judge the user's legitimacy, protecting login security; one example is login based on face recognition technology. Face recognition refers to a series of related techniques that capture images or video containing faces with a camera, perform face tracking and detection, and then recognise the detected faces.
However, the inventor has found that when a software system uses face recognition for login, the account may be logged in to by a person with a similar appearance, or cracked by others using a still photo, a video or similar means. The accuracy of the face recognition login mode is therefore insufficient, and user login security is low.
Disclosure of Invention
The invention provides a user login method, system, computer equipment and storage medium to solve the problem of low user login security caused by the insufficient accuracy of existing face recognition login modes.
Provided is a user login method, comprising:
when receiving a face login request sent by a user through terminal equipment, carrying out different types of image acquisition on the face of the user to obtain a face light intensity image and a face depth image of the user;
respectively extracting features of the face light intensity image and the face depth image to obtain face light intensity features and face depth features;
performing adaptive feature enhancement processing on the facial light intensity features based on the facial depth features to obtain facial fusion features;
and carrying out user identification based on the facial fusion characteristics and a pre-stored facial image template, and generating feedback data of the facial login request based on user identification data.
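To make these four steps concrete, here is a minimal, self-contained Python sketch of the overall flow. Every name in it (extract_features, adaptive_enhance, the similarity threshold) is a hypothetical stand-in for the modules described above, not the patent's actual implementation; the extraction and enhancement steps are elaborated in later sketches.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(img):
    # Stand-in for the convolution feature model described later:
    # here we just flatten and truncate the image.
    return img.reshape(-1)[:256]

def adaptive_enhance(f_int, f_dep):
    # Toy depth-guided enhancement: light-intensity features that agree
    # with the depth features receive larger weights.
    weights = 1.0 / (1.0 + np.abs(f_int - f_dep))
    return weights * f_int

def handle_face_login(intensity_img, depth_img, template, threshold=0.99):
    fused = adaptive_enhance(extract_features(intensity_img),
                             extract_features(depth_img))
    sim = fused @ template / (np.linalg.norm(fused) * np.linalg.norm(template) + 1e-9)
    return {"login": "success" if sim > threshold else "failure",
            "similarity": round(float(sim), 4)}

intensity, depth = rng.random((64, 64)), rng.random((64, 64))
# Enrollment: the stored template is built the same way as the login probe.
template = adaptive_enhance(extract_features(intensity), extract_features(depth))
print(handle_face_login(intensity, depth, template))  # same face -> success
```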
Optionally, the facial light intensity feature comprises a plurality of light intensity sub-features, and the facial depth feature comprises a plurality of depth sub-features; performing adaptive feature enhancement processing on the facial light intensity features based on the facial depth features to obtain facial fusion features comprises:
based on the light intensity sub-features and the depth sub-features, performing distance calculation on the facial light intensity features and the facial depth features to obtain feature distance data, the feature distance data comprising a distance matrix for each light intensity sub-feature and its corresponding depth sub-feature;
performing weight conversion on each distance matrix in the feature distance data to obtain feature weight data, the feature weight data comprising a weight value for each light intensity sub-feature;
and enhancing the light intensity sub-features based on the feature weight data, and fusing the enhanced features to obtain the facial fusion features.
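A minimal sketch of this enhancement-and-fusion step, assuming the distance matrices have already been computed (the next optional step sketches that part). The reciprocal weight conversion and concatenation fusion are illustrative choices, not formulas prescribed by the patent:

```python
import numpy as np

def enhance_and_fuse(intensity_subs, distance_mats):
    """Convert each distance matrix into weights, enhance the matching
    light-intensity sub-feature, then fuse the enhanced features."""
    enhanced = []
    for feat, dist in zip(intensity_subs, distance_mats):
        weights = 1.0 / (1.0 + dist)      # toy weight conversion: small distance -> large weight
        enhanced.append(weights * feat)   # element-wise enhancement
    return np.concatenate([e.ravel() for e in enhanced])  # fusion by concatenation

subs = [np.random.rand(8, 8), np.random.rand(4, 4)]   # light intensity sub-features
dists = [np.random.rand(8, 8), np.random.rand(4, 4)]  # one distance matrix per sub-feature
print(enhance_and_fuse(subs, dists).shape)            # (80,) = 64 + 16
```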
Optionally, based on the plurality of light intensity sub-features and the plurality of depth sub-features, performing distance calculation on the face light intensity features and the face depth features to obtain feature distance data, including:
determining light intensity sub-features and depth sub-features with the same data size according to the data sizes of the light intensity sub-features and the depth sub-features, marking the light intensity sub-features and the depth sub-features as feature groups, and traversing all the features to obtain a plurality of feature groups;
performing feature compression processing on the light intensity sub-features and the depth sub-features in each feature group to obtain compressed feature groups;
and performing distance calculation on the light intensity sub-features and the depth sub-features in each compressed feature group to obtain a distance matrix of each feature group, and summarizing the distance matrix into feature distance data.
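The following sketch illustrates, under stated assumptions, the grouping, compression and distance calculation just described: sub-features of identical data size are paired into feature groups, each is compressed (here a simple channel-averaging stand-in for the feature compression processing), and a per-position distance matrix is produced for each group:

```python
import numpy as np

def feature_groups(intensity_subs, depth_subs):
    # Pair sub-features of identical data size into feature groups.
    return [(i, d) for i in intensity_subs for d in depth_subs if i.shape == d.shape]

def compress(feat, k=8):
    # Toy compression: average groups of channels down to k channel maps.
    c = feat.shape[0]
    return feat[: c - c % k].reshape(k, -1, *feat.shape[1:]).mean(axis=1)

def distance_matrix(a, b):
    # Per-position Euclidean distance between two compressed sub-features.
    return np.sqrt(((a - b) ** 2).sum(axis=0))

intensity = [np.random.rand(16, 32, 32), np.random.rand(32, 16, 16)]
depth = [np.random.rand(16, 32, 32), np.random.rand(32, 16, 16)]
feature_distance_data = [distance_matrix(compress(i), compress(d))
                         for i, d in feature_groups(intensity, depth)]
print([m.shape for m in feature_distance_data])  # [(32, 32), (16, 16)]
```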
Optionally, the facial light intensity feature comprises a plurality of light intensity sub-features, and the facial depth feature comprises a plurality of depth sub-features; extracting features from the facial light intensity image and the facial depth image respectively to obtain the facial light intensity features and the facial depth features comprises:
acquiring a convolution feature model comprising a plurality of feature extraction layers, wherein the feature extraction layers have different convolution kernels;
inputting the facial light intensity image into the convolution feature model for layered feature extraction, and obtaining the light intensity feature map output by each feature extraction layer, to obtain a plurality of light intensity sub-features of different data sizes;
and inputting the facial depth image into the convolution feature model for layered feature extraction, and obtaining the depth feature map output by each feature extraction layer, to obtain a plurality of depth sub-features of different data sizes.
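A minimal sketch of such a layered extractor, assuming PyTorch is available; the three layers, their widths and their differing kernel sizes (3, 5 and 7) are arbitrary illustrations of "feature extraction layers with different convolution kernels", not the patent's actual network:

```python
import torch
import torch.nn as nn

class HierarchicalExtractor(nn.Module):
    """Returns the feature map of every layer, not just the last,
    yielding multi-scale sub-features of different data sizes."""
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU()),
            nn.Sequential(nn.Conv2d(32, 64, kernel_size=7, stride=2, padding=3), nn.ReLU()),
        ])

    def forward(self, x):
        outs = []
        for layer in self.layers:
            x = layer(x)
            outs.append(x)   # keep each layer's output as one sub-feature
        return outs

model = HierarchicalExtractor()
intensity_img = torch.rand(1, 1, 128, 128)   # e.g. a grayscale light intensity image
sub_features = model(intensity_img)          # the depth image would be run the same way
print([tuple(s.shape) for s in sub_features])
# [(1, 16, 64, 64), (1, 32, 32, 32), (1, 64, 16, 16)]
```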
Optionally, performing different types of image acquisition on the face of the user to obtain a face light intensity image and a face depth image of the user, including:
determining whether a light intensity image acquisition device and a depth image acquisition device are installed on the terminal equipment;
if the terminal equipment is provided with the light intensity image acquisition device and the depth image acquisition device, the light intensity image acquisition device and the depth image acquisition device are respectively called to acquire images of the face of the user, and a face light intensity image and a face depth image are obtained.
Optionally, after determining whether the light intensity image acquisition device and the depth image acquisition device are installed on the terminal device, the method further includes:
If the terminal equipment is not provided with the light intensity image acquisition device and/or the depth image acquisition device, providing an installation package of light intensity image acquisition software and/or depth image acquisition software so that a user installs the light intensity image acquisition software and/or the depth image acquisition software on the terminal equipment;
prompting the user to call light intensity image acquisition software and/or depth image acquisition software, and acquiring images of the face of the user to obtain a light intensity image and a depth image of the face.
Optionally, before receiving the face login request sent by the user through the terminal device, the method further includes:
when a user enters a user login interface through terminal equipment, the terminal equipment prompts the user to input a login account and provides various login modes, so that the terminal equipment generates a user login request according to the login account input by the user and the selected login mode; the login mode comprises a face recognition login mode;
when a user login request sent by a terminal device is received, determining whether a login mode in the user login request is a face recognition login mode or not;
if the login mode in the user login request is the face recognition login mode, the face login request sent by the user through the terminal equipment is determined to be received.
Optionally, after determining whether the login mode in the user login request is the face recognition login mode, the method further comprises:
if the login mode in the user login request is a password login mode, performing account authentication on a login account and login key information in the user login request to obtain account authentication information, wherein the login key information comprises a login password and a dynamic key;
if the account authentication information indicates that the login account is registered but the login key information is wrong, sending a login key error prompt to the user through the terminal device, and prompting the user to confirm whether to execute the face recognition login mode;
and when a face recognition login mode confirmation instruction sent by the terminal device is received, determining that the face login request is received.
A user login system is provided, comprising a server and a terminal device, wherein the server comprises:
the acquisition module is used for acquiring images of different types of the face of the user when receiving a face login request sent by the user through the terminal equipment, so as to obtain a face light intensity image and a face depth image of the user;
the extraction module is used for extracting the characteristics of the face light intensity image and the face depth image respectively to obtain face light intensity characteristics and face depth characteristics;
the fusion module is used for performing adaptive feature enhancement processing on the facial light intensity features based on the facial depth features to obtain facial fusion features;
and the recognition module is used for carrying out user recognition based on the facial fusion characteristics and a pre-stored facial image template, and generating feedback data of the facial login request based on the user recognition data.
There is provided a computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the user login method as described above when executing the computer program.
There is provided a computer readable storage medium storing a computer program which when executed by a processor performs the steps of the user login method as described above.
In the technical scheme provided by the user login method, system, computer equipment and storage medium, when a face login request sent by a user through a terminal device is received, different types of images of the user's face are acquired to obtain a facial light intensity image and a facial depth image; features are extracted from the two images respectively to obtain facial light intensity features and facial depth features; adaptive feature enhancement processing is performed on the facial light intensity features based on the facial depth features to obtain facial fusion features; and user identification is performed based on the facial fusion features and a pre-stored facial image template, with feedback data for the face login request generated from the user identification data. By collecting different types of facial images and extracting image features from both the facial light intensity image and the facial depth image, the embodiment of the invention improves the extraction of the user's facial features; the adaptive enhancement and fusion of the light intensity and depth features then effectively fuses the two different feature types, highlights the texture features of the user's face, and yields more accurate facial features, thereby improving face recognition accuracy and ensuring user login security.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic view of an application environment of a user login method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a user login method according to an embodiment of the invention;
FIG. 3 is a flowchart illustrating an implementation of step S40 in FIG. 2;
FIG. 4 is a flowchart illustrating an implementation of step S10 in FIG. 2;
FIG. 5 is a schematic flow chart of another implementation of step S30 in FIG. 2;
FIG. 6 is a schematic diagram of a server according to an embodiment of the invention;
FIG. 7 is a schematic diagram of a computer device according to an embodiment of the invention.
Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings. The described embodiments are evidently some, but not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
The user login method provided by the embodiment of the invention can be applied to the user login system shown in FIG. 1, which comprises a terminal device and a server, the terminal device communicating with the server over a network. The terminal device is the device through which a user logs in to and uses the related software and platform, and there may be a plurality of terminal devices.
When a user needs to log in to an account, the user enters the user login interface of the front end corresponding to the server through the terminal device and sends a face login request based on face recognition to the server. When the face login request sent by the user through the terminal device is received, different types of images of the user's face are acquired to obtain a facial light intensity image and a facial depth image; features are extracted from the two images respectively to obtain facial light intensity features and facial depth features; adaptive feature enhancement processing is performed on the facial light intensity features based on the facial depth features to obtain facial fusion features; and user identification is performed based on the facial fusion features and a pre-stored facial image template, with feedback data for the face login request generated from the user identification data. As summarised above, this improves the accuracy of face recognition and thus the security of user login.
The terminal device can be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer or similar device; the server can be implemented as a stand-alone server or as a server cluster composed of a plurality of servers.
In one embodiment, as shown in fig. 2, a user login method is provided, and the user login system in fig. 1 is taken as an example to illustrate the method, which includes the following steps:
s10: when receiving a face login request sent by a user through terminal equipment, different types of image acquisition are carried out on the face of the user, and a face light intensity image and a face depth image are obtained.
When the user needs to log in the account, the user enters a user login interface of the front end corresponding to the server through the terminal equipment, and performs relevant information input or button clicking operation on the user login interface, so that the terminal equipment generates a face login request according to the operation of the user, and the terminal equipment sends the generated face login request to the server. When receiving a face login request sent by a user through a terminal device, the server starts to acquire images of different types on the face of the user in response to the face login request, and a face light intensity image and a face depth image of the user are obtained. The face light intensity image may be a color image including light intensity information or a gray scale image including light intensity information.
It should be understood that light intensity refers to the energy of visible light received per unit area and indicates how strongly the surface of an object is illuminated; depth is the three-dimensional coordinate information from each point of the measured object to the image acquisition device, and can be expressed by a depth image. In this embodiment, the facial light intensity image expresses the illumination intensity and energy information at different positions of the user's face and can be acquired with a conventional camera or digital camera; the facial depth image expresses the three-dimensional coordinate information of different positions of the user's face and can be acquired with a depth camera.
S20: respectively extracting features from the facial light intensity image and the facial depth image to obtain facial light intensity features and facial depth features.
After obtaining the facial light intensity image and the facial depth image of the user, the server extracts features from each image to obtain the facial light intensity features and the facial depth features. For example, the server can call a conventional image feature extraction model to extract the facial light intensity features from the facial light intensity image, and call a pre-trained depth feature extraction model to extract the facial depth features from the facial depth image, which is simple and convenient.
S30: performing adaptive feature enhancement processing on the facial light intensity features based on the facial depth features to obtain facial fusion features.
After obtaining the facial light intensity features and the facial depth features, the server performs adaptive feature enhancement processing on the facial light intensity features based on the facial depth features to obtain the facial fusion features.
For example, the facial depth feature and the facial light intensity feature both comprise features for different pixels: the facial depth feature can be a depth feature matrix formed by the depth features of all pixels, and the facial light intensity feature can be a light intensity feature matrix formed by the light intensity features of all pixels. Covariance calculation can be performed directly on the two matrices to obtain a similarity matrix of the facial depth features and the facial light intensity features; the similarity matrix is then converted (by linear or nonlinear conversion) so that the feature similarities in it become weight values, and each weight value is multiplied by the corresponding facial light intensity feature, giving the feature-enhanced facial fusion feature used for face recognition login. In other embodiments, the facial depth features and facial light intensity features can simply be concatenated into the facial fusion feature, so that it contains the two different facial feature types, increasing feature diversity and the accuracy of the facial fusion feature, which is simple and convenient.
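A minimal numeric sketch of this covariance-based enhancement; the per-pixel feature dimensions, the sigmoid used as the nonlinear conversion, and the final concatenation alternative are all illustrative assumptions:

```python
import numpy as np

def covariance_weights(intensity, depth):
    """Cross-covariance of the two per-pixel feature matrices, converted
    to weights (here nonlinearly, with a sigmoid)."""
    n = intensity.shape[0]
    ci, cd = intensity - intensity.mean(0), depth - depth.mean(0)
    similarity = ci.T @ cd / (n - 1)      # similarity (cross-covariance) matrix
    return 1.0 / (1.0 + np.exp(-similarity))

intensity = np.random.rand(100, 8)        # 100 pixels x 8-dim light intensity features
depth = np.random.rand(100, 8)            # 100 pixels x 8-dim depth features
w = covariance_weights(intensity, depth)  # (8, 8) weight matrix
fused = intensity @ w                     # weighted (enhanced) light intensity features
alt = np.concatenate([intensity, depth], axis=1)  # simpler alternative: concatenation
print(fused.shape, alt.shape)             # (100, 8) (100, 16)
```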
In this embodiment, the facial depth image collected on site captures the three-dimensional coordinate information of the measured object, which prevents others from passing face recognition with photos, videos or similar data. Enhancing the facial light intensity features with the facial depth features to obtain the facial fusion features effectively strengthens the fine features at each position of the user's face and highlights its detailed textures, making the facial fusion features more accurate, reducing the possibility of falsely recognising people with a similar appearance, and improving the accuracy of face recognition login, thereby protecting the security of the user's account and property.
S40: performing user identification based on the facial fusion features and a pre-stored facial image template, and generating feedback data for the face login request based on the user identification data.
After obtaining the facial fusion features, the server needs to perform user recognition based on the facial fusion features and a pre-stored facial image template, and generates feedback data of a facial login request based on user recognition data.
For example, the facial image templates of all users (each comprising at least a standard facial light intensity map) can be pulled from a user database, features extracted from each template, and the extracted facial template features matched against the facial fusion features (for instance by computing a similarity: the match succeeds if the similarity is greater than a threshold, and fails otherwise). If a facial template feature is matched, user identification succeeds, the user corresponding to the matched template feature is taken as the user identification data, and feedback data indicating a successful login is generated; if no facial template feature is matched, user identification fails, the user identification data records the failure, and feedback data indicating a failed login is generated. In this embodiment, the facial image template may consist only of a standard facial light intensity map, to reduce the data processing required for template feature extraction.
Alternatively, the facial template features of each facial image template can be stored in the user database in advance, bound to the facial image template and the user account. After the facial fusion features are obtained, the facial template features of all users are fetched from the user database and matched against the facial fusion features, user identification data is generated from the matching result, and feedback data for the face login request is generated, which reduces the data processing load and improves login efficiency. Here the facial template features are fusion features extracted in advance from a standard facial light intensity image and a standard facial depth image, following the same procedure as for the facial fusion features: features are extracted from the standard facial light intensity image and the standard facial depth image to obtain standard light intensity features and standard depth features, and adaptive feature enhancement processing is performed on them to obtain the facial template features. This guarantees the accuracy of the facial template features, improves the accuracy of matching them against the facial fusion features, and thus improves user identification accuracy and login security.
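A minimal sketch of this template matching step, assuming pre-stored template feature vectors and cosine similarity with a fixed threshold (both assumptions; the patent does not fix a particular measure):

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def identify(fused, template_db, threshold=0.8):
    """Match the facial fusion feature against each account's stored
    facial template feature; succeed if the best match clears the threshold."""
    account, template = max(template_db.items(), key=lambda kv: cosine(fused, kv[1]))
    sim = cosine(fused, template)
    if sim > threshold:
        return {"feedback": "login success", "account": account, "similarity": sim}
    return {"feedback": "login failure", "similarity": sim}

db = {"alice": np.random.rand(128), "bob": np.random.rand(128)}
probe = db["alice"] + 0.01 * np.random.rand(128)  # near-duplicate of alice's template
print(identify(probe, db))
```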
In this embodiment, when a face login request sent by the user through the terminal device is received, different types of images of the user's face are acquired to obtain a facial light intensity image and a facial depth image; features are extracted from the two images respectively; adaptive feature enhancement processing is performed on the facial light intensity features based on the facial depth features to obtain facial fusion features; and user identification is performed based on the facial fusion features and a pre-stored facial image template, with feedback data generated from the user identification data. Collecting different types of facial images and fusing their features highlights the texture of the user's face and yields more accurate facial features, improving face recognition accuracy and user login security.
In one embodiment, the face login request includes a login account entered by the user. As shown in fig. 3, in step S40, user recognition is performed based on the facial fusion feature and a pre-stored facial image template, and feedback data of a facial login request is generated based on user recognition data, which specifically includes the following steps:
s41: and determining a login account number input by a user in the face login request, and determining a face image template corresponding to the login account number in a user database.
In this embodiment, the face login request includes the login account input by the user. After obtaining the facial fusion features, the server determines the login account in the face login request and looks up the facial image template corresponding to that account in the user database, where each user's account is bound to its facial image template.
S42: and determining facial template characteristics corresponding to the facial image template, and carrying out similarity calculation on the facial template characteristics and facial fusion characteristics to obtain characteristic similarity.
After the facial image template corresponding to the login account is determined, the facial template features corresponding to that template are determined. The facial template features can be obtained by determining the standard facial light intensity image and standard facial depth image in the facial image template, extracting features from each to obtain standard light intensity features and standard depth features, and performing adaptive feature enhancement processing based on them. In other embodiments, feature extraction may be performed in advance on each user's facial image template, and the account, facial image template and facial template features bound together and stored in the user database for later use; after the server locates the facial image template for the login account, it directly pulls the corresponding facial template features from the database, which is simple and convenient. Since the facial template features are fusion features extracted in advance from a standard facial light intensity image and a standard facial depth image, their accuracy is assured.
After the facial template features are obtained, a similarity calculation is performed between the facial template features and the facial fusion features to obtain the feature similarity. Specifically, covariance calculation is performed on the facial template features and the facial fusion features to obtain their covariance (similarity) matrix, and the covariance matrix is then linearly converted (for example, multiplied by a linear function) to obtain the feature similarity; linear conversion reduces the amount of data conversion, increases processing speed, and thus improves login efficiency. In other embodiments, the covariance matrix may be nonlinearly converted (for example, multiplied by a nonlinear function) to obtain the feature similarity; nonlinear conversion improves the accuracy of the data conversion and hence of the feature similarity.
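The following sketch shows one way the covariance matrix could be collapsed into a single feature similarity via a linear or nonlinear conversion; the mean reduction and the particular linear and nonlinear functions are assumptions for illustration:

```python
import numpy as np

def feature_similarity(template, fused, linear=True):
    """Covariance (similarity) matrix of two feature sets, reduced to one scalar."""
    t, f = template - template.mean(0), fused - fused.mean(0)
    cov = t.T @ f / (len(t) - 1)            # covariance matrix
    s = cov.mean()                          # collapse to a scalar
    return 2.0 * s if linear else float(np.tanh(s))  # linear vs nonlinear conversion

template = np.random.rand(64, 4)            # 64 positions x 4-dim template features
fused = template + 0.05 * np.random.rand(64, 4)
print(feature_similarity(template, fused), feature_similarity(template, fused, linear=False))
```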
S43: and determining user identification data according to the feature similarity, and generating feedback data of the face login request according to the user identification data.
After the feature similarity is calculated, the server obtains a preset similarity and determines whether the feature similarity is greater than it. If so, the current user and the bound user of the facial image template are determined to be the same person, the user identification data records a successful identification, the current login is legal, the user logs in to the account successfully, and the feedback data generated from the user identification data indicates a successful login, so that the page jump for a successful login can be executed. If the feature similarity is not greater than the preset similarity, the current user and the bound user of the facial image template are determined not to be the same person, the user identification data records a failed identification, the current login is illegal, the user cannot log in to the account, and the feedback data generated from the user identification data indicates a failed login, prompting the current user that the login has failed.
In this embodiment, the login account input by the user is determined from the face login request, the facial image template corresponding to the account is looked up in the user database, the facial template features for that template are determined, a similarity calculation between the facial template features and the facial fusion features yields the feature similarity, and the user identification data and the feedback data for the face login request are generated from it. Because the login account is used to pull only the bound user's facial image template for feature extraction and matching, the templates of all users need not be compared, which greatly reduces the data processing load while preserving identification accuracy, and further improves login efficiency.
In an embodiment, before step S10, that is, before receiving a face login request sent by a user through a terminal device, the method further specifically includes the following steps:
s01: when a user enters a user login interface through the terminal equipment, the terminal equipment prompts the user to input a login account and provides various login modes, so that the terminal equipment generates a user login request according to the login account input by the user and the selected login mode.
When a user needs to log in to an account and enters the user login interface corresponding to the server through the terminal device, the server sends a login account prompt and a login mode selection prompt to the terminal device, prompting the user to input the account to be logged in on the user login interface and offering a plurality of login modes from which to choose. The login modes include a face recognition login mode, an account password login mode and a dynamic key login mode (for example a mobile phone verification code, or a character-string key sent by the server to the bound device).
When the user has input the login account on the user login interface and selected a login mode, the terminal device generates a corresponding user login request from the selected login mode and the input information (including the login account) and sends it to the server; on receiving the request, the server executes different operations according to its contents.
For example, when the user selects the face recognition login mode and inputs the login account, the terminal device generates a face login request comprising the face recognition login mode and the login account and sends it to the server, which executes steps S10-S40 on receiving it. Before executing steps S10-S40, the login account may first be verified: if it is a registered account, steps S10-S40 are executed; if it is unregistered, the user is told that the account is unregistered and prompted to register. When the user selects the dynamic key login mode and inputs the login account, the terminal device generates a dynamic key login request comprising the dynamic key login mode and the login account and sends it to the server; after verifying that the login account is registered, the server sends a dynamic key to the mobile phone number or device bound to the account and prompts the user to input it, verifies the dynamic key information input by the user, and sends a login success prompt once verification passes.
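A hypothetical shape for the user login request and the server-side dispatch on the selected login mode, purely to illustrate the routing just described (field names and statuses are invented):

```python
def handle_login_request(request, registered_accounts):
    if request["account"] not in registered_accounts:
        return {"status": "unregistered", "hint": "please register first"}
    if request["mode"] == "face":
        return {"status": "face_login"}      # proceed with steps S10-S40
    if request["mode"] == "dynamic_key":
        return {"status": "key_sent"}        # send a one-time key to the bound device
    return {"status": "password_check"}      # account password login mode

request = {"account": "alice", "mode": "face"}
print(handle_login_request(request, {"alice", "bob"}))  # {'status': 'face_login'}
```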
S02: when a user login request sent by the terminal equipment is received, determining whether a login mode in the user login request is a face recognition login mode or not.
After prompting a user to input a login account through a terminal device and providing a plurality of login modes, the server determines whether a user login request sent by the terminal device is received, and determines whether the login mode in the user login request is a face recognition login mode when the user login request sent by the terminal device is received.
S03: if the login mode in the user login request is the face recognition login mode, the face login request sent by the user through the terminal equipment is determined to be received.
After determining whether the login mode in the user login request is a face recognition login mode, if the login mode in the user login request is the face recognition login mode, determining that the face login request sent by the user through the terminal equipment is received; if the login mode in the user login request is not the face recognition login mode, determining that the face login request is not received.
In this embodiment, when a user enters the user login interface through the terminal device, the terminal device prompts the user to input a login account and provides a plurality of login modes (including a face recognition login mode), so that it can generate a user login request from the input account and the selected mode. When a user login request sent by the terminal device is received, it is determined whether its login mode is the face recognition login mode; if so, the face login request sent by the user through the terminal device is deemed received. This offers the user different account login modes, clarifies how a face login request is generated, and requires the user to input login account information when the face login request is generated, so that the login account and the face can both be verified later, providing an accurate data basis for subsequent face recognition login and ensuring login security.
In an embodiment, after step S02, that is, after determining whether the login mode in the user login request is the face recognition login mode, the method further specifically includes the following steps:
s04: if the login mode in the user login request is the password login mode, performing account authentication on the login account and the login key information in the user login request to obtain account authentication information.
After determining whether the login mode in the user login request is the face recognition login mode, if the login mode in the user login request is the password login mode, performing account authentication on the login account and the login key information in the user login request to obtain account authentication information. Wherein the login key information includes one or both of a login password and a dynamic key.
Specifically, performing account authentication on the login account and login key information in the user login request to obtain account authentication information includes: first verifying the login account in the user login request by determining whether it matches an account in the user database. If it matches, the login account is a registered account, and it is further verified whether the login password in the login key information is the password bound to the account; if it is, account verification passes and the account authentication information is generated as: login account registered, login key information correct. If the login password is not the bound password, account verification fails and the account authentication information is: login account registered, login key information wrong. If the account matches no account in the user database, the login account is unregistered, account verification fails, and the account authentication information is: login account unregistered, prompting the user to confirm the account information or to register.
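A minimal sketch of this account authentication branch; the plain-text password comparison is for illustration only (a real system would store salted hashes), and the returned strings simply mirror the three outcomes above:

```python
def authenticate(account, login_key, user_db):
    record = user_db.get(account)
    if record is None:
        return "login account unregistered"        # prompt to confirm or register
    if login_key == record["password"]:
        return "login account registered, login key correct"
    return "login account registered, login key wrong"  # triggers the face-login prompt

user_db = {"alice": {"password": "s3cret"}}
print(authenticate("alice", "wrong", user_db))
print(authenticate("carol", "whatever", user_db))
```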
S05: if the account authentication information is that the login account is registered and the login key information is wrong, sending a login key wrong prompt to a user through the terminal equipment, and prompting whether the user confirms to execute the face recognition login mode.
After the account authentication information is obtained, its content is examined. If it indicates that the login account is registered but the login key information is wrong, a login key error prompt is sent to the user through the terminal device, asking whether the user confirms executing the face recognition login mode, so that the user learns in time that the login key was wrong and the login failed, and is reminded that the face recognition login mode is available.
S06: and when a face identification login mode confirmation instruction sent by the terminal equipment is received, determining that a face login request is received.
After the login key error prompt has been sent and the user has been asked to confirm the face recognition login mode, if the user confirms, the terminal device sends a face recognition login mode confirmation instruction; when this instruction is received, the face login request is deemed received. The face login request comprises the face recognition login mode and the login account the user input in the preceding steps.
In this embodiment, after determining whether the login mode in the user login request is the face recognition login mode: if it is the password login mode, account authentication is performed on the login account and login key information in the request to obtain account authentication information; if the account authentication information indicates that the login account is registered but the login key information is wrong, a login key error prompt is sent to the user through the terminal device, asking whether the user confirms executing the face recognition login mode; and when a face recognition login mode confirmation instruction sent by the terminal device is received, the face login request is deemed received. Offering face and account authentication as a fallback when the password is wrong reduces the chance of a failed login and improves the user experience.
In an embodiment, after account authentication is performed on the login account and the login key information in the user login request to obtain account authentication information, if it is determined that the account authentication information is that the login account is registered and the login key information is correct, the last login information of the login account is obtained, where the login information includes the login time and the login device; it is then determined whether the current terminal device is the login device in the login information, and the time interval between the current time and the login time in the login information is determined. If the current terminal device is not the login device in the login information, or the time interval between the current time and the login time in the login information is smaller than a preset interval, login re-confirmation information is generated and the user is prompted; the login re-confirmation information indicates that the current login device is inconsistent with the last login device, or that the login interval is too short, and that the face recognition login mode must be executed. After the user is prompted with the login re-confirmation information, steps S10-S40 are executed directly to perform face recognition login, so that account security is ensured. This process enables the user to learn of abnormal login activity in time, and prevents a stolen account password from being used to log in to the account successfully.
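For illustration only, the following is a minimal sketch of this re-confirmation check in Python. The record fields (device_id, login_time) and the concrete preset interval value are assumptions introduced for the example; the embodiment does not fix them.

```python
from datetime import datetime, timedelta

# Preset interval below which a repeated login is treated as suspicious.
# The concrete value is an illustrative assumption.
PRESET_INTERVAL = timedelta(minutes=5)

def needs_face_login(last_login: dict, current_device_id: str, now: datetime) -> bool:
    """Return True when login re-confirmation information must be generated
    and the face recognition login mode must be executed."""
    device_changed = last_login["device_id"] != current_device_id
    interval_too_short = (now - last_login["login_time"]) < PRESET_INTERVAL
    return device_changed or interval_too_short

last = {"device_id": "dev-1", "login_time": datetime(2024, 1, 12, 9, 0)}
print(needs_face_login(last, "dev-2", datetime(2024, 1, 12, 9, 2)))  # True
```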
In an embodiment, as shown in fig. 4, in step S10, different types of image acquisition are performed on the face of the user to obtain a face light intensity image and a face depth image of the user, which specifically includes the following steps:
s11: and determining whether the terminal equipment is provided with a light intensity image acquisition device and a depth image acquisition device.
After receiving the face login request, the server needs to determine whether a light intensity image acquisition device and a depth image acquisition device are installed on the terminal equipment.
Specifically, the device model of the terminal device is obtained, and it is determined whether devices of that model are equipped with a conventional camera and a depth camera. If devices of that model are equipped with a conventional camera and a depth camera, it is determined that the terminal device is provided with a light intensity image acquisition device and a depth image acquisition device. If devices of that model are not equipped with a conventional camera or a depth camera, the installed-software information on the terminal device is acquired, and it is determined from this information whether light intensity image acquisition software or depth image acquisition software is installed on the terminal device; if the terminal device has a conventional camera and depth image acquisition software, or has a depth camera and light intensity image acquisition software, it is determined that the terminal device is provided with a light intensity image acquisition device and a depth image acquisition device.
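As a minimal sketch of this capability check, the lookup table and the software identifiers below are assumptions introduced for the example; in practice the hardware configuration would come from a device model database.

```python
# Hypothetical device-model table; the entries and identifiers are assumptions.
HARDWARE_BY_MODEL = {
    "model-A": {"conventional_camera", "depth_camera"},
    "model-B": {"conventional_camera"},
}

def acquisition_available(model: str, installed_software: set[str]) -> bool:
    """Check whether light intensity and depth acquisition are both possible,
    via either a built-in camera or corresponding acquisition software."""
    hardware = HARDWARE_BY_MODEL.get(model, set())
    has_light = "conventional_camera" in hardware or "light_intensity_software" in installed_software
    has_depth = "depth_camera" in hardware or "depth_software" in installed_software
    return has_light and has_depth

print(acquisition_available("model-B", {"depth_software"}))  # True
```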
S12: if the terminal equipment is provided with the light intensity image acquisition device and the depth image acquisition device, the light intensity image acquisition device and the depth image acquisition device are respectively called to acquire images of the face of the user, and a face light intensity image and a face depth image are obtained.
After determining whether the light intensity image acquisition device and the depth image acquisition device are installed on the terminal equipment, if the light intensity image acquisition device and the depth image acquisition device are installed on the terminal equipment, respectively calling the light intensity image acquisition device and the depth image acquisition device to acquire images of the face of the user to obtain a light intensity image and a depth image of the face, namely calling the light intensity image acquisition device to acquire images of the face of the user to obtain a light intensity image of the face, and calling the depth image acquisition device to acquire images of the face of the user to obtain a depth image of the face.
S13: if the terminal equipment is not provided with the light intensity image acquisition device and/or the depth image acquisition device, providing an installation package of light intensity image acquisition software and/or depth image acquisition software so that a user installs the light intensity image acquisition software and/or the depth image acquisition software on the terminal equipment.
After determining whether the light intensity image acquisition device and the depth image acquisition device are installed on the terminal equipment, if the light intensity image acquisition device and/or the depth image acquisition device are not installed on the terminal equipment, namely, the light intensity image acquisition device is not installed on the terminal equipment, or the depth image acquisition device is not installed on the terminal equipment, or the light intensity image acquisition device and the depth image acquisition device are not installed at the same time, providing an installation package of light intensity image acquisition software and/or depth image acquisition software for a user so that the user installs the light intensity image acquisition software and/or the depth image acquisition software on the terminal equipment.
When the installation package of the light intensity image acquisition software and/or the depth image acquisition software is provided for the user, the installation package suitable for the installation of the terminal equipment needs to be determined according to the equipment model of the terminal equipment, so that the user can directly install the installation package. For example, if the terminal device is determined to be a mobile phone device using the operating system a according to the device model of the terminal device, an installation package of the light intensity image acquisition software and/or the depth image acquisition software, which is suitable for being installed by the mobile phone device of the operating system a, can be directly called in the user database, and then sent to the terminal device to prompt the user to download and install until the user confirms that the light intensity image acquisition software and/or the depth image acquisition software is installed.
S14: prompting the user to call light intensity image acquisition software and/or depth image acquisition software, and acquiring images of the face of the user to obtain a light intensity image and a depth image of the face.
After the light intensity image acquisition software and/or the depth image acquisition software are/is installed on the terminal equipment by a user, the user is prompted to call the light intensity image acquisition software and/or the depth image acquisition software, and then the face of the user is subjected to image acquisition to obtain a face light intensity image and a face depth image.
In this embodiment, it is determined whether a light intensity image acquisition device and a depth image acquisition device are installed on the terminal device; if both are installed, they are respectively called to acquire images of the user's face, so that the face light intensity image and the face depth image are obtained, and directly calling the corresponding acquisition devices ensures that both images can be acquired. In addition, if the light intensity image acquisition device and/or the depth image acquisition device is not installed on the terminal device, an installation package of light intensity image acquisition software and/or depth image acquisition software is provided so that the user can install the corresponding software on the terminal device, and the user is then prompted to call the software to acquire images of the face, obtaining the face light intensity image and the face depth image. When the terminal device lacks a corresponding image acquisition function, this technique provides a software installation package so that the user can install the corresponding software as needed; this ensures that the user can acquire the corresponding images for face recognition login, avoids login failure caused by an inability to acquire those images, and also saves the user the time of searching elsewhere for and downloading the acquisition software, improving the user's login experience and efficiency.
In one embodiment, in step S20, feature extraction is performed on the face light intensity image and the face depth image to obtain a face light intensity feature and a face depth feature, which specifically includes the following steps:
s21: a convolution feature model is obtained that includes a plurality of feature extraction layers having different convolution kernels.
After obtaining the facial light intensity image and the facial depth image of the user, the server needs to obtain a convolution feature model comprising a plurality of feature extraction layers. The convolution feature model is a feature extraction model based on a convolutional neural network; it comprises a plurality of feature extraction layers whose convolution kernel sizes differ, i.e., whose receptive field sizes differ. The receptive field refers to the size of the area of the input image that a pixel point on the feature map output by each layer maps back to; since the receptive fields of the feature extraction layers have different sizes, the input-image areas corresponding to the output features differ, and thus the data sizes of the output features differ.
The receptive fields of the plurality of feature extraction layers in the convolution feature model increase in sequence, and the data sizes of the features output by the plurality of feature extraction layers decrease in sequence. In the recursive computation, the receptive field of the last feature extraction layer equals its convolution kernel size, and the receptive field of each other feature extraction layer is determined from its convolution kernel size together with the receptive field of the next feature extraction layer; the receptive field of the last feature extraction layer can therefore be set in advance according to requirements, and the receptive fields of the other feature extraction layers determined in sequence.
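The recursion just described can be written down directly. The following sketch assumes each feature extraction layer is a plain convolution characterized by a kernel size and a stride (illustrative values), and computes, from the last layer backwards, the receptive field of the final output in each layer's input space.

```python
# Backward receptive-field recursion: the last layer's receptive field equals
# its kernel size, and each earlier layer's receptive field is derived from
# its own kernel size/stride and the receptive field of the next layer.
def receptive_fields(layers: list[tuple[int, int]]) -> list[int]:
    """layers: [(kernel_size, stride), ...] ordered from first to last layer."""
    r = 1
    fields = []
    for k, s in reversed(layers):
        r = s * r + (k - s)
        fields.append(r)
    return fields[::-1]  # ordered from first to last layer

# Illustrative kernels/strides: the last entry equals the last layer's kernel
# size, and the first entry is the receptive field w.r.t. the original image.
print(receptive_fields([(3, 1), (5, 2), (7, 2)]))  # [19, 17, 7]
```

The first entry can thus be fixed in advance according to requirements and the others derived in sequence, as described above.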
S22: and inputting the face light intensity image into a convolution feature model to perform layered feature extraction, and obtaining a light intensity feature image output by each feature extraction layer to obtain a plurality of light intensity sub-features with different data sizes.
After the convolution feature model is obtained, the server inputs the face light intensity image into the convolution feature model to conduct layered feature extraction, light intensity feature images output by the feature extraction layers are obtained, the light intensity feature images output by the feature extraction layers are used as a light intensity sub-feature, and therefore a plurality of light intensity sub-features with different data sizes are obtained.
S23: and inputting the facial depth features into a convolution feature model to perform hierarchical feature extraction, and obtaining a depth feature map output by each feature extraction layer to obtain a plurality of depth sub-features with different data sizes.
In addition, the server inputs the facial depth image into the convolution feature model to perform hierarchical feature extraction, obtains the depth feature map output by each feature extraction layer, and uses the depth feature map output by each feature extraction layer as one depth sub-feature, so that a plurality of depth sub-features with different data sizes are obtained.
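A minimal PyTorch sketch of such a convolution feature model is given below; the channel counts, kernel sizes, strides, and single-channel inputs are illustrative assumptions, not parameters fixed by the embodiment.

```python
import torch
import torch.nn as nn

# Convolution feature model: several feature extraction layers with different
# convolution kernels; each layer's feature map is kept as one sub-feature.
class ConvFeatureModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, 16, kernel_size=3, stride=1, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU()),
            nn.Sequential(nn.Conv2d(32, 64, kernel_size=7, stride=2, padding=3), nn.ReLU()),
        ])

    def forward(self, x: torch.Tensor) -> list[torch.Tensor]:
        sub_features = []
        for layer in self.layers:
            x = layer(x)
            sub_features.append(x)  # one sub-feature per feature extraction layer
        return sub_features

model = ConvFeatureModel()
light_subs = model(torch.randn(1, 1, 128, 128))  # face light intensity image
depth_subs = model(torch.randn(1, 1, 128, 128))  # face depth image
print([tuple(f.shape) for f in light_subs])      # data sizes shrink layer by layer
```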
In this embodiment, in the process of hierarchical feature extraction, the receptive fields of the plurality of feature extraction layers in the convolution feature model increase in sequence, so the data sizes of the features output by the plurality of feature extraction layers decrease in sequence; since the data sizes of the features output by different feature extraction layers differ, the information those features express also differs. For example, in the first feature extraction layer or layers, where the receptive field is smaller, the data size of the output features is larger, i.e., each pixel point on the output feature map maps back to a smaller area of the input image, which allows feature data of fine granularity to be extracted: the output features can express the more subtle facial characteristics of the user, such as local skin states (skin pigment, pores, scars, pits, and the like). In the last feature extraction layer or layers, where the receptive field is larger, the data size of the output features is smaller, and high-semantic feature data can be extracted: the output features can express the more comprehensive, complete facial characteristics of the user, such as the overall facial contour and the facial organs.
In this embodiment, a convolution feature model comprising a plurality of feature extraction layers with different convolution kernels is obtained; the face light intensity image is input into the convolution feature model for hierarchical feature extraction, and the light intensity feature map output by each feature extraction layer is obtained, yielding a plurality of light intensity sub-features with different data sizes; the face depth image is likewise input into the convolution feature model for hierarchical feature extraction, and the depth feature map output by each feature extraction layer is obtained, yielding a plurality of depth sub-features with different data sizes. This clarifies the extraction process of the facial light intensity features and the facial depth features: a plurality of light intensity sub-features and depth sub-features of different data sizes can be extracted, which improves the information expression capability of the facial light intensity features and the facial depth features. Feature fusion is then performed on the facial light intensity features and the facial depth features based on the plurality of light intensity sub-features and the plurality of depth sub-features to obtain the facial fusion features, so that the facial fusion features can express the user's facial information at different granularities; this improves the accuracy of the facial fusion features, hence the accuracy of the subsequent face recognition of the user, and further the login security of the user.
In other embodiments, the facial light intensity features include a plurality of light intensity sub-features, but the facial depth features may include only one depth sub-feature. That is, after the plurality of light intensity sub-features with different data sizes are obtained, the facial depth image is input into the convolution feature model for feature extraction, and the depth feature map output by the first feature extraction layer, i.e., the depth feature map with the most complete information, is recorded as the facial depth feature. Feature fusion of the facial light intensity features and the facial depth features is then performed based on this single depth feature map and the plurality of light intensity sub-features to obtain the facial fusion features. This ensures the accuracy of the fusion features while reducing the calculation amount of the subsequent feature fusion, reducing the load on the server and improving the user login efficiency.
In one embodiment, the facial light intensity feature comprises a plurality of light intensity sub-features and the facial depth feature comprises a plurality of depth sub-features. As shown in fig. 5, in step S30, that is, performing distance calculation on the facial light intensity feature and the facial depth feature to obtain feature distance data, the method specifically includes the following steps:
s31: and calculating the distance between the facial light intensity characteristic and the facial depth characteristic based on the light intensity sub-characteristics and the depth sub-characteristics to obtain characteristic distance data.
In this embodiment, the facial light intensity features include a plurality of light intensity sub-features having different data sizes, and the facial depth features include a plurality of depth sub-features having different data sizes. After the facial light intensity features and the facial depth features are obtained, the server needs to calculate the distance between them based on the light intensity sub-features and the depth sub-features to obtain the feature distance data. The feature distance data comprises the distance matrix of each light intensity sub-feature and its corresponding depth sub-feature.
Specifically, according to the data sizes of the light intensity sub-features and the depth sub-features, the light intensity sub-feature and the depth sub-feature with the same data size are determined and recorded as one feature group, and all the features are traversed to obtain a plurality of feature groups. The distance between the light intensity sub-feature and the depth sub-feature in each feature group is then calculated, i.e., the distance between each feature point (pixel point) in the light intensity sub-feature and each feature point in the depth sub-feature, so as to obtain the distance matrix of each feature group, i.e., the distance matrix of each light intensity sub-feature and its corresponding depth sub-feature; the distance matrices are summarized as the feature distance data.
S32: and performing weight conversion on each distance matrix in the characteristic distance data to obtain characteristic weight data.
After the distance calculation is performed on the facial light intensity features and the facial depth features to obtain the feature distance data, the server also needs to perform weight conversion on each distance matrix in the feature distance data to obtain the feature weight data.
An activation function for weight conversion (such as a Mish function, a linear rectification function, or the like) may be obtained first, and each distance matrix in the feature distance data is then passed through the activation function, so that each distance matrix in the feature distance data is converted into corresponding weight values, obtaining the feature weight data. The feature weight data comprises a plurality of weight values, i.e., the weight values used for feature enhancement of the respective light intensity sub-features.
S33: and enhancing the light intensity sub-features based on the feature weight data, and fusing the enhanced features to obtain the facial fusion features.
After the feature weight data is obtained, the server needs to perform enhancement processing on the plurality of light intensity sub-features based on the feature weight data, and perform fusion processing on the enhanced features to obtain facial fusion features.
For example, the plurality of weight values in the feature weight data are multiplied by the plurality of light intensity sub-features in the facial light intensity features to obtain a plurality of enhancement sub-features, and convolution processing is then performed on the plurality of enhancement sub-features to obtain a facial fusion feature with accurate information and a small data volume. Alternatively, after the plurality of enhancement sub-features are obtained, they can be spliced to obtain facial fusion features containing different data sizes, so that the facial fusion features are more accurate.
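The following is a hedged sketch of this weight conversion and enhancement in PyTorch, using the linear rectification option for the activation; the shapes, channel counts, and the 1x1 fusion convolution are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Weight conversion (S32) and enhancement + fusion (S33): each distance matrix
# is converted into a weight map by an activation function, the weights enhance
# the matching light intensity sub-feature, and the enhanced sub-features are
# fused by a convolution into the facial fusion feature.
def fuse(light_subs: list, dist_maps: list, fusion_conv: nn.Conv2d) -> torch.Tensor:
    enhanced = [feat * F.relu(dist)   # weight conversion, then enhancement
                for feat, dist in zip(light_subs, dist_maps)]
    target = enhanced[-1].shape[-2:]  # resample to a common data size
    stacked = torch.cat([F.interpolate(e, size=target) for e in enhanced], dim=1)
    return fusion_conv(stacked)       # facial fusion feature

light_subs = [torch.randn(1, 16, 64, 64), torch.randn(1, 32, 32, 32)]
dist_maps = [torch.randn(1, 1, 64, 64), torch.randn(1, 1, 32, 32)]  # broadcast per channel
fusion_conv = nn.Conv2d(16 + 32, 64, kernel_size=1)
print(fuse(light_subs, dist_maps, fusion_conv).shape)  # torch.Size([1, 64, 32, 32])
```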
In this embodiment, distance calculation is performed on the facial light intensity features and the facial depth features based on the plurality of light intensity sub-features and the plurality of depth sub-features to obtain the feature distance data; weight conversion is then performed on each distance matrix in the feature distance data to obtain the feature weight data; and finally the plurality of light intensity sub-features are enhanced based on the feature weight data, and the enhanced features are fused to obtain the facial fusion feature. This specifies the steps of performing adaptive feature enhancement processing on the facial light intensity features based on the facial depth features to obtain the facial fusion features: the feature distance data of the facial light intensity features and the facial depth features is calculated based on the light intensity sub-features and the depth sub-features and converted into weights, thereby achieving global feature enhancement of the facial light intensity features, so that the fused facial fusion features can express facial information at different granularities. This improves the accuracy of the facial fusion features, hence the accuracy of the face recognition of the user, and further the login security of the user.
In other embodiments, the facial light intensity features comprise a plurality of light intensity sub-features of different data sizes, the facial depth features comprise only one depth sub-feature, and the data size of that depth sub-feature is consistent with the data size of the light intensity sub-feature having the largest data size. In step S30, the distance between the depth sub-feature and each light intensity sub-feature is calculated to obtain a distance matrix of the depth sub-feature and each light intensity sub-feature, and the distance matrices are summarized into the feature distance data; since the feature distance data is calculated from the depth sub-feature with the largest data size (i.e., the smallest information granularity) and the plurality of light intensity sub-features, the calculation amount of the feature distance data can be reduced while still meeting the accuracy requirement. Weight conversion is then performed on each distance matrix in the feature distance data using a linear rectification function to obtain the feature weight data, reducing the calculation amount of the weight conversion. Finally, the plurality of light intensity sub-features are weighted based on the feature weight data to obtain the facial fusion features. Because the feature distance data is calculated from the single largest depth sub-feature and the light intensity sub-features and is then converted into weight values that weight the light intensity sub-features, global feature enhancement of the facial light intensity features can still be achieved, ensuring the accuracy of the facial fusion features while reducing the calculation amount of the feature fusion and further improving the user login efficiency.
In an embodiment, in step S31, the distance calculation is performed on the facial light intensity feature and the facial depth feature based on the light intensity sub-features and the depth sub-features to obtain feature distance data, which specifically includes the following steps:
s311: and determining the light intensity sub-features and the depth sub-features with the same data size according to the data sizes of the light intensity sub-features and the depth sub-features, marking the light intensity sub-features and the depth sub-features as feature groups, and traversing all the features to obtain a plurality of feature groups.
After the facial light intensity features and the facial depth features are obtained, the light intensity sub-feature and the depth sub-feature with the same data size are determined according to the data sizes of the light intensity sub-features and the depth sub-features and recorded as one feature group, and all the features are traversed to obtain a plurality of feature groups.
For example, the facial light intensity feature includes a first light intensity sub-feature, a second light intensity sub-feature and a third light intensity sub-feature with sequentially reduced data sizes, and the facial depth feature correspondingly includes a first depth sub-feature, a second depth sub-feature and a third depth sub-feature with sequentially reduced data sizes, so that the first light intensity sub-feature and the first depth sub-feature with the same data size are marked as a feature group, the second light intensity sub-feature and the second depth sub-feature with the same data size are marked as a feature group, and the third light intensity sub-feature and the third depth sub-feature with the same data size are also marked as a feature group, thereby obtaining three feature groups.
S312: and carrying out feature compression processing on the light intensity sub-features and the depth sub-features in each feature group to obtain a compressed feature group.
After obtaining a plurality of feature groups, in order to reduce the data processing amount, feature compression processing is required to be performed on the light intensity sub-features and the depth sub-features in each feature group, so as to obtain a compressed feature group. In this embodiment, the feature compression process may include an input channel compression process and/or a dimension compression process.
Specifically, the server may acquire a channel compression parameter (generally 1/n of the number of input channels of the original feature) and compress the input channels according to the channel compression parameter to obtain channel-compressed light intensity sub-features and depth sub-features, thereby completing the feature compression processing; compressing the input channels of the light intensity sub-features and the depth sub-features in each feature group with the channel compression parameter reduces the amount of feature data while preserving feature diversity, for the subsequent distance calculation. In other embodiments, a preset dimension may be acquired, and a global average pooling operation to the preset dimension is then performed on the light intensity sub-features and the depth sub-features in each feature group, achieving dimension compression and obtaining dimension-reduced light intensity sub-features and depth sub-features, thereby completing the feature compression processing; the global average pooling to the preset dimension likewise reduces the amount of feature data while preserving feature diversity. In addition, both the channel compression parameter and the preset dimension may be acquired: the input channels of the light intensity sub-features and the depth sub-features in each feature group are first compressed according to the channel compression parameter, and a global average pooling operation to the preset dimension is then performed on the channel-compressed sub-features to obtain dimension-reduced light intensity sub-features and depth sub-features, thereby completing the feature compression processing. This double compression of the light intensity sub-features and the depth sub-features in each feature group further reduces the amount of feature data, further speeds up the subsequent distance calculation, and further improves the efficiency of subsequent operations.
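A minimal sketch of this double compression in PyTorch follows; the compression factor n, the preset dimension, and the 1x1 convolution used for channel compression are illustrative assumptions (in practice the compression convolution's weights would be learned).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Double compression: a 1x1 convolution compresses the input channels to 1/n,
# then global average pooling compresses the spatial dimensions to a preset size.
def compress(feat: torch.Tensor, n: int = 4, preset_dim: int = 16) -> torch.Tensor:
    channels = feat.shape[1]
    channel_squeeze = nn.Conv2d(channels, channels // n, kernel_size=1)
    squeezed = channel_squeeze(feat)                    # channel compression
    return F.adaptive_avg_pool2d(squeezed, preset_dim)  # dimension compression

light_sub = torch.randn(1, 32, 64, 64)
print(compress(light_sub).shape)  # torch.Size([1, 8, 16, 16])
```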
S313: and performing distance calculation on the light intensity sub-features and the depth sub-features in each compressed feature group to obtain a distance matrix of each feature group, and summarizing the distance matrix into feature distance data.
After the feature compression processing is performed on the light intensity sub-features and the depth sub-features in each feature group to obtain the compressed feature groups, the server also needs to perform distance calculation on the compressed light intensity sub-feature and the compressed depth sub-feature in each feature group to obtain the distance matrix of each feature group, and summarize the distance matrices of the feature groups into the feature distance data. The distance may be calculated as, for example, a Manhattan distance, a Hamming distance, or a Mahalanobis distance.
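As an illustration, the sketch below computes a Manhattan (p=1) distance matrix between a compressed light intensity sub-feature and its compressed depth sub-feature by treating each spatial position as a feature point; the tensor shapes are assumptions.

```python
import torch

# Distance matrix between the feature points (pixel positions) of a compressed
# light intensity sub-feature and those of the corresponding depth sub-feature.
def distance_matrix(light_sub: torch.Tensor, depth_sub: torch.Tensor) -> torch.Tensor:
    l = light_sub.flatten(2).transpose(1, 2)  # (batch, points, channels)
    d = depth_sub.flatten(2).transpose(1, 2)
    return torch.cdist(l, d, p=1)             # Manhattan distance, (batch, points, points)

light = torch.randn(1, 8, 16, 16)
depth = torch.randn(1, 8, 16, 16)
print(distance_matrix(light, depth).shape)  # torch.Size([1, 256, 256])
```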
In this embodiment, the light intensity sub-feature and the depth sub-feature with the same data size are determined according to the data sizes of the light intensity sub-features and the depth sub-features and recorded as one feature group, and all the features are traversed to obtain a plurality of feature groups; feature compression processing is performed on the light intensity sub-features and the depth sub-features in each feature group to obtain the compressed feature groups; distance calculation is performed on the light intensity sub-feature and the depth sub-feature in each compressed feature group to obtain the distance matrix of each feature group; and the distance matrices are summarized into the feature distance data. This specifies the steps of performing distance calculation on the facial light intensity features and the facial depth features based on the plurality of light intensity sub-features and the plurality of depth sub-features to obtain the feature distance data. In this process, the feature distances are calculated after the light intensity sub-features and the depth sub-features are compressed, which reduces the amount of data processing while ensuring the accuracy of the calculated data, and can effectively improve the user login efficiency.
In other embodiments, the facial light intensity feature comprises a plurality of light intensity sub-features of different data sizes, the facial depth feature comprises only one depth sub-feature, and the data size of the depth sub-feature is consistent with the data size of the light intensity sub-feature having the largest data size. In step S31, the depth sub-feature is compressed to obtain a compressed depth sub-feature, and each light intensity sub-feature is compressed to obtain a plurality of compressed light intensity sub-features, and then the distance between the compressed depth sub-feature and each compressed light intensity sub-feature is calculated to obtain a distance matrix of the compressed depth sub-feature and each compressed light intensity sub-feature, so that feature distance data is summarized, the data processing amount can be further reduced, and the user login efficiency can be further improved.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and the sequence numbers should not constitute any limitation on the implementation process of the embodiments of the present invention.
In one embodiment, a server is provided, where the server corresponds one-to-one to the user login method in the above embodiments. As shown in fig. 6, the server includes an acquisition module 601, an extraction module 602, a fusion module 603, and an identification module 604. The functional modules are described in detail as follows:
The acquisition module 601 is configured to acquire different types of images of a face of a user when receiving a face login request sent by the user through a terminal device, so as to obtain a face light intensity image and a face depth image of the user;
the extraction module 602 is configured to perform feature extraction on the face light intensity image and the face depth image respectively, so as to obtain a face light intensity feature and a face depth feature;
the fusion module 603 is configured to perform adaptive feature enhancement processing on the facial light intensity feature based on the facial depth feature, to obtain a facial fusion feature;
the recognition module 604 is configured to perform user recognition based on the facial fusion feature and a pre-stored facial image template, and generate feedback data of the facial login request based on the user recognition data.
Optionally, the facial light intensity feature comprises a plurality of light intensity sub-features and the facial depth feature comprises a plurality of depth sub-features; the fusion module 603 is specifically configured to:
based on the light intensity sub-features and the depth sub-features, performing distance calculation on the face light intensity features and the face depth features to obtain feature distance data; the characteristic distance data comprises a distance matrix of each light intensity sub-characteristic and a corresponding depth sub-characteristic;
performing weight conversion on each distance matrix in the characteristic distance data to obtain characteristic weight data, wherein the characteristic weight data comprises weight values of each light intensity sub-characteristic;
And enhancing the light intensity sub-features based on the feature weight data, and fusing the enhanced features to obtain the facial fusion features.
Optionally, the fusion module 603 is specifically further configured to:
determining light intensity sub-features and depth sub-features with the same data size according to the data sizes of the light intensity sub-features and the depth sub-features, marking the light intensity sub-features and the depth sub-features as feature groups, and traversing all the features to obtain a plurality of feature groups;
performing feature compression processing on the light intensity sub-features and the depth sub-features in each feature group to obtain compressed feature groups;
and performing distance calculation on the light intensity sub-features and the depth sub-features in each compressed feature group to obtain a distance matrix of each feature group, and summarizing the distance matrix into feature distance data.
Optionally, the facial light intensity feature comprises a plurality of light intensity sub-features and the facial depth feature comprises a plurality of depth sub-features; the extraction module 602 is specifically configured to:
acquiring a convolution feature model comprising a plurality of feature extraction layers, wherein the plurality of feature extraction layers have different convolution kernels;
inputting the face light intensity image into a convolution feature model for carrying out layered feature extraction, and obtaining a light intensity feature image output by each feature extraction layer to obtain a plurality of light intensity sub-features with different data sizes;
And inputting the facial depth image into the convolution feature model to perform hierarchical feature extraction, and obtaining the depth feature map output by each feature extraction layer to obtain a plurality of depth sub-features with different data sizes.
Optionally, the acquisition module 601 is specifically configured to:
determining whether a light intensity image acquisition device and a depth image acquisition device are installed on the terminal equipment;
if the terminal equipment is provided with the light intensity image acquisition device and the depth image acquisition device, the light intensity image acquisition device and the depth image acquisition device are respectively called to acquire images of the face of the user, and a face light intensity image and a face depth image are obtained.
Optionally, after determining whether the light intensity image acquisition device and the depth image acquisition device are installed on the terminal device, the acquisition module 601 is further configured to:
if the terminal equipment is not provided with the light intensity image acquisition device and/or the depth image acquisition device, providing an installation package of light intensity image acquisition software and/or depth image acquisition software so that a user installs the light intensity image acquisition software and/or the depth image acquisition software on the terminal equipment;
prompting the user to call light intensity image acquisition software and/or depth image acquisition software, and acquiring images of the face of the user to obtain a light intensity image and a depth image of the face.
Optionally, the server further includes a determining module 605, and before receiving a face login request sent by the user through the terminal device, the determining module 605 is configured to:
when a user enters a user login interface through terminal equipment, the terminal equipment prompts the user to input a login account and provides various login modes, so that the terminal equipment generates a user login request according to the login account input by the user and the selected login mode; the login mode comprises a face recognition login mode;
when a user login request sent by a terminal device is received, determining whether a login mode in the user login request is a face recognition login mode or not;
if the login mode in the user login request is the face recognition login mode, the face login request sent by the user through the terminal equipment is determined to be received.
Optionally, after determining whether the login mode in the user login request is a face recognition login mode, the determining module 605 is further configured to:
if the login mode in the user login request is a password login mode, performing account authentication on a login account and login key information in the user login request to obtain account authentication information, wherein the login key information comprises a login password and a dynamic key;
If the account authentication information is that the login account is registered and the login key information is wrong, sending a login key error prompt to the user through the terminal equipment, and prompting whether the user confirms to execute the face recognition login mode;
and when a face recognition login mode confirmation instruction sent by the terminal equipment is received, determining that a face login request is received.
For specific limitations on the server, reference may be made to the above limitations on the user login method, which are not repeated here. Each of the modules in the above server may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware form in, or independent of, a processor in the computer device, or may be stored in software form in a memory in the computer device, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, as shown in FIG. 7, a computer device, which may be a server, is provided that includes a processor, memory, network interface, and database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used for storing data used and generated by the user login method, such as a face light intensity image and a face depth image, face fusion characteristics, a face image template, feedback data and the like. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a user login method.
In one embodiment, a computer device is provided, which may be a server or a terminal device, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor executing the computer program to perform the steps of:
when receiving a face login request sent by a user through terminal equipment, carrying out different types of image acquisition on the face of the user to obtain a face light intensity image and a face depth image of the user;
respectively extracting features of the face light intensity image and the face depth image to obtain face light intensity features and face depth features;
performing self-adaptive feature enhancement processing on the facial light intensity features based on the facial depth features to obtain facial fusion features;
and carrying out user identification based on the facial fusion characteristics and a pre-stored facial image template, and generating feedback data of the facial login request based on user identification data.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
when receiving a face login request sent by a user through terminal equipment, carrying out different types of image acquisition on the face of the user to obtain a face light intensity image and a face depth image of the user;
Respectively extracting features of the face light intensity image and the face depth image to obtain face light intensity features and face depth features;
performing self-adaptive feature enhancement processing on the facial light intensity features based on the facial depth features to obtain facial fusion features;
and carrying out user identification based on the facial fusion characteristics and a pre-stored facial image template, and generating feedback data of the facial login request based on user identification data.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein may include non-volatile and/or volatile memory. The nonvolatile memory can include Read Only Memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.
Claims (10)
1. A user login method, comprising:
when receiving a face login request sent by a user through terminal equipment, carrying out different types of image acquisition on the face of the user to obtain a face light intensity image and a face depth image of the user;
Respectively extracting the features of the face light intensity image and the face depth image to obtain face light intensity features and face depth features;
performing self-adaptive feature enhancement processing on the facial light intensity features based on the facial depth features to obtain facial fusion features;
and carrying out user identification based on the facial fusion characteristics and a pre-stored facial image template, and generating feedback data of the face login request based on user identification data.
2. The user login method of claim 1, wherein the facial light intensity feature comprises a plurality of light intensity sub-features and the facial depth feature comprises a plurality of depth sub-features; the self-adaptive feature enhancement processing is performed on the facial light intensity features based on the facial depth features to obtain facial fusion features, including:
based on the light intensity sub-features and the depth sub-features, performing distance calculation on the face light intensity features and the face depth features to obtain feature distance data; the characteristic distance data comprises a distance matrix of each light intensity sub-characteristic and the corresponding depth sub-characteristic;
performing weight conversion on each distance matrix in the characteristic distance data to obtain characteristic weight data;
And enhancing the light intensity sub-features based on the feature weight data, and fusing the enhanced features to obtain the facial fusion features.
3. The user login method according to claim 2, wherein the calculating the distance between the facial light intensity feature and the facial depth feature based on the plurality of light intensity sub-features and the plurality of depth sub-features to obtain feature distance data includes:
determining the light intensity sub-features and the depth sub-features with the same data size according to the data size of each light intensity sub-feature and each depth sub-feature, marking the light intensity sub-features and the depth sub-features as feature groups, and traversing all the features to obtain a plurality of feature groups;
performing feature compression processing on the light intensity sub-features and the depth sub-features in each feature group to obtain each compressed feature group;
and performing distance calculation on the light intensity sub-features and the depth sub-features in the compressed feature groups to obtain a distance matrix of each feature group, and summarizing the distance matrix into the feature distance data.
4. The user login method according to claim 1, wherein the feature extraction is performed on the face light intensity image and the face depth image to obtain a face light intensity feature and a face depth feature, respectively, and the method comprises:
Acquiring a convolution feature model comprising a plurality of feature extraction layers, wherein the plurality of feature extraction layers have different convolution kernels;
inputting the face light intensity image into the convolution feature model for layered feature extraction, and obtaining a light intensity feature image output by each feature extraction layer to obtain a plurality of light intensity sub-features with different data sizes;
and inputting the facial depth image into the convolution feature model to perform hierarchical feature extraction, and obtaining a depth feature map output by each feature extraction layer to obtain a plurality of depth sub-features with different data sizes.
5. The user login method according to claim 1, wherein the performing different types of image acquisition on the face of the user to obtain the face light intensity image and the face depth image of the user comprises:
determining whether a light intensity image acquisition device and a depth image acquisition device are installed on the terminal equipment;
and if the light intensity image acquisition device and the depth image acquisition device are arranged on the terminal equipment, respectively calling the light intensity image acquisition device and the depth image acquisition device to acquire images of the face of the user, so as to obtain the light intensity image and the depth image of the face.
6. The user login method according to any one of claims 1 to 5, wherein before receiving a face login request sent by a user through a terminal device, the method further comprises:
when the user enters a user login interface through the terminal equipment, prompting the user to input a login account through the terminal equipment and providing a plurality of login modes, so that the terminal equipment generates a user login request according to the login account input by the user and the selected login mode; the login mode comprises a face recognition login mode;
when the user login request sent by the terminal equipment is received, determining whether the login mode in the user login request is the face recognition login mode or not;
and if the login mode in the user login request is the face recognition login mode, determining that the face login request sent by the user through the terminal equipment is received.
7. The user login method according to claim 6, wherein after said determining whether said login pattern in said user login request is said face recognition login pattern, said method further comprises:
If the login mode in the user login request is a password login mode, performing account authentication on the login account and login key information in the user login request to obtain account authentication information, wherein the login key information comprises a login password and a dynamic key;
if the account authentication information is that the login account is registered and the login key information is wrong, sending a login key wrong prompt to the user through the terminal equipment, and prompting whether the user confirms to execute the face recognition login mode;
and when the face recognition login mode confirmation instruction sent by the terminal equipment is received, determining that the face login request is received.
8. A user login system comprising a server and a terminal device, wherein the server comprises:
the acquisition module is used for acquiring images of different types of the face of the user when receiving a face login request sent by the user through the terminal equipment, so as to obtain a face light intensity image and a face depth image of the user;
the extraction module is used for extracting the characteristics of the face light intensity image and the face depth image respectively to obtain face light intensity characteristics and face depth characteristics;
The fusion module is used for carrying out self-adaptive feature enhancement processing on the facial light intensity features based on the facial depth features to obtain facial fusion features;
and the recognition module is used for carrying out user recognition based on the facial fusion characteristics and a pre-stored facial image template, and generating feedback data of the face login request based on user recognition data.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the user login method according to any one of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the user login method according to any one of claims 1 to 7.