CN115330392A - User identity information verification method and system, storage medium and electronic equipment - Google Patents

User identity information verification method and system, storage medium and electronic equipment

Info

Publication number
CN115330392A
CN115330392A (application CN202210963398.3A)
Authority
CN
China
Prior art keywords
data
user
face
verification
verified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210963398.3A
Other languages
Chinese (zh)
Inventor
江文乐
杨洁琼
张楚熠
罗亚明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd (ICBC)
Priority application: CN202210963398.3A
Publication: CN115330392A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00 — Payment architectures, schemes or protocols
    • G06Q20/38 — Payment protocols; details thereof
    • G06Q20/40 — Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401 — Transaction verification
    • G06Q20/4014 — Identity check for transactions
    • G06Q20/40145 — Biometric identity checks
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 — Classification, e.g. identification
    • G06V40/40 — Spoof detection, e.g. liveness detection
    • G06V40/70 — Multimodal biometrics, e.g. combining information from different biometric modalities
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 — Speaker identification or verification
    • G10L17/02 — Preprocessing operations, e.g. segment selection; pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; feature selection or extraction
    • G10L17/06 — Decision making techniques; pattern matching strategies
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 — Network architectures or network communication protocols for network security
    • H04L63/04 — Network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L63/0428 — Confidential data exchange wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • H04L63/0435 — Confidential data exchange wherein the sending and receiving network entities apply symmetric encryption, i.e. the same key is used for encryption and decryption
    • H04L67/00 — Network arrangements or protocols for supporting network services or applications
    • H04L67/01 — Protocols
    • H04L67/10 — Protocols in which an application is distributed across nodes in the network
    • H04L67/1097 — Distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Accounting & Taxation (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Game Theory and Decision Science (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)

Abstract

The invention discloses a user identity information verification method and system, a storage medium, and an electronic device, relating to the field of biometrics. The method comprises: acquiring the identity identifier, biometric fragment data, and video frame data of a user to be verified in the current business process; once the identity identifier passes verification, recognizing the body actions in the video frame data with an action detection model and comparing them against the action-request instructions of the current business process; comparing the biometric fragment data against a pre-enrolled biometric set; and, once both the body-action and biometric comparisons pass, confirming that the identity information of the user to be verified passes verification and generating a user verification report. The invention addresses the technical problem in the related art of user information leakage caused by immature online business systems at financial institutions and reliance on a single biometric modality.

Description

User identity information verification method and system, storage medium and electronic equipment
Technical Field
The invention relates to the field of biometrics, and in particular to a user identity information verification method and system, a storage medium, and an electronic device.
Background
At present, network-based services are easily affected by epidemics, communication conditions, and weather, making it difficult for traditional offline services, such as those at financial institution branches, to operate normally. Many financial institutions therefore increasingly serve users through online channels such as mobile applications, online banking, and applets; based on audio and video technology, users can handle services that previously required an in-person branch visit, such as credit services and password resets, without leaving home.
In the related art, online business at financial institutions verifies user identity by having the user upload an ID-card photo, combined with system face recognition and a video confirmation step in which business staff compare the user against the document. These steps are exposed to malicious attacks by criminals and suffer from insufficient recognition accuracy, weak privacy protection, and other security risks.
For example, in the face image acquisition step, an attacker may hijack the camera of the terminal device and tamper with the captured face image at the system driver layer, replacing the live snapshot with a photo of the victim before it is sent to the server for face comparison. In the liveness detection step before image acquisition, an attacker may convert a user photo into an animated image using image processing and three-dimensional modeling software and record a video containing all the required liveness actions. In the staff video confirmation step, an attacker may use deepfake technology to render and synthesize a "face-swapped" image and generate a forged user video in real time to deceive business staff.
In summary, online business handling at financial institutions has the following problems: first, identity verification accuracy is low; second, customer privacy is at risk of leakage; third, only a single biometric modality is used, so the security defense mechanism is weak.
In view of the above problems, no effective solution has yet been proposed.
Disclosure of Invention
Embodiments of the present invention provide a user identity information verification method and system, a storage medium, and an electronic device, to at least solve the technical problem in the related art of user information leakage caused by immature online business systems at financial institutions and reliance on a single biometric modality.
According to one aspect of the embodiments of the present invention, a user identity information verification method applied to a server comprises: acquiring the identity identifier, biometric fragment data, and video frame data of a user to be verified in the current business process, wherein the biometric fragment data is obtained by fragmenting the collected biometric features of the user, and the video frame data is collected during the user's video face review; once the identity identifier passes verification, recognizing the body actions in the video frame data with an action detection model and comparing them against the action-request instructions of the current business process; comparing the biometric fragment data against a pre-enrolled biometric set using a biometric recognition model; and, once both the body-action and biometric comparisons pass, confirming that the identity information of the user passes verification and generating a user verification report.
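The patent gives no code; purely as an illustrative sketch (every function and parameter name below is our own invention), the server-side decision order described above — identity check first, with the body-action and biometric comparisons gating the final report — can be expressed as:

```python
def verify_user(identity_ok, detect_action, required_action,
                compare_biometrics, fragments, enrolled_set):
    """Orchestrate the server-side checks in the order the method
    describes: the action and biometric comparisons only run once the
    identity identifier passes, and the report marks the user verified
    only when both comparisons pass."""
    if not identity_ok:
        return {"verified": False, "reason": "identity check failed"}
    # Action detection model output compared against the requested action.
    action_ok = detect_action() == required_action
    # Biometric fragment data compared against the pre-enrolled set.
    biometric_ok = compare_biometrics(fragments, enrolled_set)
    return {"verified": action_ok and biometric_ok,
            "action_ok": action_ok,
            "biometric_ok": biometric_ok}
```

Here `detect_action` and `compare_biometrics` stand in for the action detection and biometric recognition models, which the patent treats as black boxes.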
Optionally, acquiring the identity identifier, biometric fragment data, and video frame data of the user to be verified in the current business process comprises: capturing the identity document of the user through a verification terminal (the terminal used by the user to be verified) and recognizing the identity identifier on the document; capturing the user's face image and voice data through the verification terminal, extracting the face feature vector from the face image, and extracting voiceprint data from the voice data; fragmenting the face feature vector and voiceprint data at the verification terminal to generate the biometric fragment data; and, during the video face review, capturing the review video of the user through the verification terminal and extracting the video frame data from it.
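The patent leaves the fragmentation scheme unspecified. One common construction with the property implied later in the disclosure (no single fragment reveals the biometric; only all fragments together reconstruct it) is additive secret sharing. The sketch below is our illustration under that assumption, not the patent's own algorithm:

```python
import numpy as np

def shard_feature_vector(vec, n_shards=2, seed=None):
    """Split a biometric feature vector into additive random fragments.

    Each of the first n_shards-1 fragments is pure random noise; the
    last fragment is chosen so that all fragments sum back to the
    original vector. Any proper subset is statistically useless alone.
    """
    rng = np.random.default_rng(seed)
    vec = np.asarray(vec, dtype=np.float64)
    shards = [rng.standard_normal(vec.shape) for _ in range(n_shards - 1)]
    shards.append(vec - sum(shards))  # last fragment completes the sum
    return shards

def reconstruct(shards):
    """Recover the original vector by summing all fragments."""
    return sum(shards)
```

With this scheme, the terminal can ship each fragment over a separate channel or to a separate compute node, matching the multi-party-computation framing in the disclosure.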
Optionally, before capturing the face image and voice data of the user to be verified, the method further comprises: randomly generating an action instruction and a colored-light signal, wherein the action instruction directs the user to face the display screen and complete a specified detection action under the reflected light, and the colored-light signal is generated based on a three-dimensional imaging principle; sending the action instruction to the verification terminal; receiving sample detection data transmitted by the verification terminal, used for liveness detection of the user; and, when the sample detection data indicates that the user passes liveness detection, starting to capture the user's face image and voice data through the verification terminal.
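As a hedged illustration of the randomly generated action instruction and colored-light signal (the action and color vocabularies here are invented for the example; the patent does not enumerate them), a server might build the challenge with a cryptographically strong random source so it cannot be pre-recorded:

```python
import secrets

# Illustrative vocabularies; a real system would define its own.
ACTIONS = ["blink", "turn_head_left", "turn_head_right", "open_mouth", "nod"]
COLORS = ["red", "green", "blue", "yellow", "cyan", "magenta"]

def generate_liveness_challenge(n_actions=3, n_colors=4):
    """Build an unpredictable liveness challenge: the user must perform
    the actions in order while the screen flashes the color sequence,
    whose reflections off the face support 3D liveness checks.
    `secrets` (not `random`) is used so the challenge is not guessable."""
    return {
        "actions": [secrets.choice(ACTIONS) for _ in range(n_actions)],
        "colors": [secrets.choice(COLORS) for _ in range(n_colors)],
    }
```

Because the sequence is fresh per session, a replayed video recorded against an earlier challenge fails the comparison.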
Optionally, extracting the face feature vector from the face image and the voiceprint data from the voice data comprises: at the verification terminal, extracting the face bounding box in each face image using a multi-task cascaded convolutional neural network (MTCNN); extracting the face information within the bounding box with a face recognition model and mapping it to a multi-dimensional vector space to obtain the face feature vector; preprocessing the voice data; and applying fast Fourier transform, convolution (mel filterbank), and discrete cosine transform to the preprocessed voice data to obtain mel-frequency cepstral coefficients, which are used as the voiceprint data.
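The FFT → mel filterbank ("convolution") → DCT pipeline for mel-frequency cepstral coefficients can be sketched in plain NumPy as follows; the frame size, filter count, and coefficient count are illustrative defaults, not values from the patent:

```python
import numpy as np

def _dct2(x):
    """Type-II DCT, sufficient for extracting cepstral coefficients."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi * k * (2 * n + 1) / (2 * N)))
                     for k in range(N)])

def mfcc_frame(frame, sample_rate=16000, n_filters=26, n_coeffs=13):
    """One audio frame -> mel-frequency cepstral coefficients:
    window -> FFT power spectrum -> mel filterbank -> log -> DCT."""
    frame = np.asarray(frame, dtype=np.float64) * np.hamming(len(frame))
    spectrum = np.abs(np.fft.rfft(frame)) ** 2          # power spectrum
    # Triangular mel filterbank: filter centers equally spaced in mel.
    high_mel = 2595.0 * np.log10(1.0 + (sample_rate / 2.0) / 700.0)
    mel_pts = np.linspace(0.0, high_mel, n_filters + 2)
    hz_pts = 700.0 * (10.0 ** (mel_pts / 2595.0) - 1.0)
    bins = np.floor((len(frame) + 1) * hz_pts / sample_rate).astype(int)
    fbank = np.zeros((n_filters, len(spectrum)))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    log_energies = np.log(fbank @ spectrum + 1e-10)     # log mel energies
    return _dct2(log_energies)[:n_coeffs]               # cepstral coeffs
```

In practice the terminal would run this per overlapping frame (e.g. 25 ms windows with 10 ms hop) and stack the coefficient vectors as the voiceprint representation.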
Optionally, after the face feature vector and voiceprint data are fragmented to generate the biometric fragment data, the method further comprises: encrypting the biometric fragment data at the verification terminal with a preset symmetric encryption algorithm to obtain an encrypted ciphertext; and transmitting the encrypted ciphertext to the server.
Optionally, after acquiring the identity identifier, biometric fragment data, and video frame data of the user to be verified in the current business process, the method comprises: receiving the encrypted ciphertext of the biometric fragment data, wherein the ciphertext is divided into first ciphertext data (from the face feature vector) and second ciphertext data (from the voiceprint data); dispatching the first ciphertext data to a first cache space and the second ciphertext data to a second cache space, where the first cache space corresponds to a first decryption compute node and the second cache space to a second decryption compute node; restoring the first ciphertext data to face feature fragments with the first decryption node, and the second ciphertext data to voiceprint fragments with the second decryption node; and decrypting the face feature fragments and voiceprint fragments with the preset symmetric encryption algorithm to obtain the biometric fragment data.
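The dispatch-to-cache-spaces and per-node restoration described above can be sketched as follows; the packet layout (`modality`, `index`, `ciphertext` keys) is our assumption for the example, since the patent does not specify a wire format:

```python
def route_and_reassemble(packets, decrypt):
    """Dispatch encrypted fragments into per-modality cache spaces, then
    decrypt and reassemble each modality independently.

    packets: list of dicts with 'modality', 'index', 'ciphertext'.
    decrypt: callable standing in for the modality's own compute node,
             so face and voiceprint material never meet on one node.
    """
    caches = {}
    for p in packets:                        # step 1: route to cache spaces
        caches.setdefault(p["modality"], []).append(p)
    restored = {}
    for modality, frags in caches.items():   # step 2: per-node decryption
        frags.sort(key=lambda f: f["index"])  # restore original order
        restored[modality] = b"".join(decrypt(f["ciphertext"])
                                      for f in frags)
    return restored
```

Keeping the two modalities in separate cache spaces mirrors the patent's goal that a compromise of one decryption node leaks at most one modality's fragments.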
According to another aspect of the embodiments of the present invention, a user identity information verification method is provided that is applied to an audit terminal of a financial institution, the audit terminal being connected to a verification terminal (the terminal used by the user to be verified) and to a server. The method comprises: receiving the identity identifier, biometric fragment data, and video frame data transmitted by the verification terminal, wherein the biometric fragment data is obtained by fragmenting the collected biometric features of the user, and the video frame data is collected during the user's video face review; forwarding the identity identifier, biometric fragment data, and video frame data to the server; receiving a user verification report from the server, the report at least comprising a body-action comparison result and a biometric comparison result, where the body-action comparison result is obtained, once the identity identifier passes verification, by recognizing the body actions in the video frame data with an action detection model and comparing them against the action-request instructions of the current business process, and the biometric comparison result is obtained by comparing the biometric fragment data against a pre-enrolled biometric set with a biometric recognition model; and outputting a process audit result for the user in the current business process based on the user verification report.
According to another aspect of the embodiments of the present invention, a user identity information verification method applied to a cloud server comprises: receiving an identity verification instruction carrying the identity identifier, biometric fragment data, and video frame data of a user to be verified, wherein the biometric fragment data is obtained by fragmenting the collected biometric features of the user, and the video frame data is collected during the user's video face review; in response to the instruction, verifying the identity identifier and, once it passes, recognizing the body actions in the video frame data with an action detection model, comparing them against the action-request instructions of the current business process, and comparing the biometric fragment data against a pre-enrolled biometric set with a biometric recognition model; and, once both the body-action and biometric comparisons pass, generating a user verification report.
According to another aspect of the embodiments of the present invention, a user identity information verification system comprises: a verification terminal for acquiring the identity identifier, biometric fragment data, and video frame data of a user to be verified, wherein the biometric fragment data is obtained by fragmenting the collected biometric features of the user, and the video frame data is collected during the user's video face review; an audit terminal of a financial institution, connected to the verification terminal, for forwarding the identity identifier, biometric fragment data, and video frame data to a server; and the server, connected to the audit terminal, which verifies the identity identifier, recognizes (once the identifier passes) the body actions in the video frame data with an action detection model, compares them against the action-request instructions of the current business process, compares the biometric fragment data against a pre-enrolled biometric set with a biometric recognition model, and, once both comparisons pass, generates a user verification report.
Optionally, the verification terminal comprises: an image acquisition unit that controls the terminal camera to capture the face image of the user to be verified; a voice acquisition unit that controls the terminal microphone to capture the user's voice data; a feature extraction unit that extracts the face feature vector from the face image and the voiceprint data from the voice data; a feature fragmentation unit that fragments the face feature vector and voiceprint data to generate the biometric fragment data; and a video acquisition unit that captures the user's face review video.
Optionally, the server comprises: a data access unit that receives the encrypted ciphertext of the biometric fragment data, the ciphertext being divided into first ciphertext data (from the face feature vector) and second ciphertext data (from the voiceprint data); a fragment decryption unit that dispatches the first ciphertext data to a first cache space and the second to a second cache space (each cache space corresponding to its own decryption compute node), restores the first ciphertext data to face feature fragments on the first node and the second to voiceprint fragments on the second node, and decrypts both fragment sets with a preset symmetric encryption algorithm to recover the biometric fragment data; and a comparison computation unit that compares the biometric fragment data against a pre-enrolled biometric set.
Optionally, the server further comprises: a real-time video stream receiving unit that maintains a long-lived connection to the audit terminal and receives the face review video collected during the video face review; a video frame extraction unit that extracts the video frame data from the review video; a face recognition unit that stores a face recognition model and uses it to compare the face feature fragments against pre-stored face features, producing a face comparison result; a voiceprint recognition unit that stores a voiceprint recognition model and uses it to compare the voiceprint fragments against pre-stored voiceprint features, producing a voiceprint comparison result; an action detection unit that stores an action detection model, recognizes the body actions in the video frame data, and compares them against the action-request instructions of the current business process, producing a body-action comparison result; and a recognition result calculation unit that computes a weighted combination of the face, voiceprint, and body-action comparison results to determine a trust probability value for the user, which is recorded in the user verification report.
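The weighted calculation of the trust probability in the recognition result calculation unit might look like the sketch below; the weights are illustrative placeholders, not values given in the patent, and a deployment would tune them on validation data:

```python
def fuse_scores(face_score, voice_score, action_score,
                weights=(0.5, 0.3, 0.2)):
    """Weighted fusion of the three comparison results (each in [0, 1])
    into a single trust probability. The weights must sum to 1 so the
    output stays a valid probability."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    scores = (face_score, voice_score, action_score)
    return sum(w * s for w, s in zip(weights, scores))
```

The fused value would then be compared against a business-specific threshold before the user verification report is marked as passed.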
According to another aspect of the embodiments of the present invention, a computer-readable storage medium is provided that stores a computer program; when the program runs, the device on which the storage medium resides is controlled to perform any of the above user identity information verification methods.
According to another aspect of the embodiments of the present invention, an electronic device is provided comprising one or more processors and a memory storing one or more programs; when the one or more programs are executed by the one or more processors, the processors implement any of the above user identity information verification methods.
In the present application, the method comprises: acquiring the identity identifier, biometric fragment data, and video frame data of a user to be verified in the current business process, wherein the biometric fragment data is obtained by fragmenting the collected biometric features of the user and the video frame data is collected during the user's video face review; once the identity identifier passes verification, recognizing the body actions in the video frame data with an action detection model and comparing them against the action-request instructions of the current business process; comparing the biometric fragment data against a pre-enrolled biometric set with a biometric recognition model; and, once both the body-action and biometric comparisons pass, confirming that the identity information of the user passes verification and generating a user verification report.
The application adopts multimodal biometric fusion recognition. Unlike earlier single face-image recognition, multiple biometric modalities are used together so that their strengths and weaknesses compensate for each other, which reduces recognition error, improves accuracy, and lowers the risk inherent in a single-modality biometric system. At the same time, through secure multi-party computation techniques, the user's biometric information is converted into random feature fragments or ciphertext during enrollment, transmission, and recognition, and these are scattered across different cloud nodes for caching and cooperative computation. This prevents leakage of biometric data such as the user's face and voiceprint during transmission and protects the user's legal rights and personal privacy. The application thereby solves the technical problem in the related art of user information leakage caused by immature online business systems at financial institutions and reliance on a single biometric modality.
The application also strengthens the technical security of remote biometric recognition at financial institutions through adversarial-sample attack-and-defense techniques. In the liveness detection step before face image acquisition and in the video face review step, forgery attacks by criminals can be detected in time through adversarial-sample detection, colored-light liveness detection, adversarial-sample perturbation, and related techniques, achieving active defense.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a schematic diagram of an alternative method for verifying user identity information according to an embodiment of the present invention;
fig. 2 is a first schematic diagram of another alternative method for checking user identity information according to an embodiment of the present invention;
fig. 3 is a second schematic diagram of another alternative method for checking user identity information according to an embodiment of the present invention;
fig. 4 is an architecture diagram of an alternative verification system for user identity information according to an embodiment of the present invention;
fig. 5 is a schematic diagram illustrating an optional system for verifying user identity information according to an embodiment of the present invention;
fig. 6 is a flowchart of an alternative method for verifying user identity information according to an embodiment of the present invention;
fig. 7 is a schematic diagram of an alternative system for verifying user identity information according to an embodiment of the present invention;
fig. 8 is a block diagram of the hardware structure of an electronic device (or mobile device) for implementing the user identity information verification method according to an embodiment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solutions of the present invention, the technical solutions in the embodiments will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art without creative effort based on these embodiments shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
To facilitate understanding of the invention by those skilled in the art, the following explanation is made for some terms or nouns referred to in the embodiments of the invention:
A Software Development Kit (SDK for short) refers to a set of development tools used by engineers to build application software for a specific software package, software framework, hardware platform, operating system, and the like.
The MTCNN network is a multi-task neural network model for face detection tasks. The model mainly adopts three cascaded networks and applies classifiers to candidate boxes to perform fast and efficient face detection.
YUV is a color encoding method used in various video processing components; when encoding a picture or video, it takes human perception into account, allowing the bandwidth of the chrominance components to be reduced.
The MP4 format, Moving Picture Experts Group 4, is a set of compression coding standards for audio and video information.
MPEG, Moving Picture Experts Group, refers to a family of video compression coding techniques.
The AVI format, Audio Video Interleave, is a container format in which audio and video data are interleaved.
It should be noted that the method and system for verifying user identity information in the present disclosure may be used in the field of biometric identification, and may also be used in any field other than the biometric identification field for verifying user identity information in the case of verifying user identity information.
It should be noted that the relevant information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for presentation, analyzed data, etc.) referred to in the present disclosure are information and data authorized by the user or sufficiently authorized by each party. For example, an interface is provided between the system and the relevant user or organization, before obtaining the relevant information, an obtaining request needs to be sent to the user or organization through the interface, and after receiving the consent information fed back by the user or organization, the relevant information is obtained.
Since many offline services of current financial institutions are inconvenient to handle, a number of online service systems have been derived. Because online handling of financial institution services involves customer privacy and property safety, strict planning is needed to guarantee the security of user information and property.
The invention can be applied to various user identity information verification systems/software/products and provides a safe and reliable method and system for verifying user identity information. It adopts a multi-modal biometric fusion recognition technology: unlike traditional single face image recognition, multiple biological features are used at the same time so that they compensate for one another's weaknesses, effectively reducing recognition errors; this not only improves recognition accuracy but also reduces the risk of relying on a single biometric system. Meanwhile, through a multi-party secure computation technology, the user's biological feature information is converted into random feature fragment data or ciphertext data during registration, transmission, and recognition, and is dispersed to different cloud nodes for caching and collaborative computation, preventing leakage of biological feature information data such as the user's face and voiceprint during transmission and ensuring that the user's legal interests and personal privacy are not infringed. The present application will be described in detail with reference to various embodiments.
Example one
In accordance with an embodiment of the present invention, there is provided an embodiment of a method for verifying user identity information, it should be noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowchart, in some cases, the steps illustrated or described may be performed in an order different than that described herein.
The verification method for the user identity information provided by the embodiment of the invention is explained by taking a server as an execution main body.
The present invention is described below with reference to preferred implementation steps, and fig. 1 is a schematic diagram of an optional method for checking user identity information according to an embodiment of the present invention, as shown in fig. 1, the method includes the following steps:
step S101, acquiring an identity of a user to be verified in a current business process, biological feature fragment data and video frame data, wherein the biological feature fragment data is obtained by performing fragment processing on the biological features of the acquired user to be verified, and the video frame data is acquired when the user to be verified performs video surface examination;
step S102, under the condition that the identity identification passes the verification, a motion detection model is adopted to identify the human motion in the video frame data, and the human motion is compared with the motion requirement instruction in the current business process;
step S103, comparing the biological characteristic fragment data with a pre-recorded biological characteristic set by adopting a biological characteristic identification model;
and step S104, in a case where both the human body action and the biological features of the user to be verified pass the comparison, confirming that the identity information of the user to be verified passes the verification, and generating a user verification report.
Through the above steps, the identity, biological feature fragment data, and video frame data of the user to be verified in the current business process are obtained, wherein the biological feature fragment data is obtained by fragmenting the collected biological features of the user to be verified, and the video frame data is collected when the user to be verified undergoes the video face review; in a case where the identity passes the verification, an action detection model is adopted to recognize the human body action in the video frame data, and the human body action is compared with the action requirement instruction in the current business process; a biometric recognition model is adopted to compare the biological feature fragment data with a pre-recorded biological feature set; and in a case where both the human body action and the biological features of the user to be verified pass the comparison, it is confirmed that the identity information of the user to be verified passes the verification, and a user verification report is generated.
In this embodiment, a multi-modal biological feature fusion recognition technology is adopted. Unlike the single face image recognition of the past, multiple biological features are used at the same time so that they compensate for one another's weaknesses, effectively reducing recognition errors; this improves recognition accuracy and reduces the risk of a single biometric system. In addition, the user's biological feature information is converted into random feature fragment data or ciphertext data during registration, transmission, and recognition, and is dispersed across different nodes, preventing biological feature information data such as the user's face and voiceprint from being leaked during transmission, guaranteeing that the user's legal rights and personal privacy are not infringed, and solving the technical problem in the related art of user information leakage caused by immature online service systems of financial institutions and a single biometric identification means.
The following will explain each implementation procedure in detail.
In this embodiment, the cloud server may be connected to a verification terminal of the user side, or the cloud server may be connected to a verification terminal of the user side through an audit terminal of a financial institution, and the verification terminal of the user side may collect multi-modal biometric data of the client and credential screenshot information, where the multi-modal biometric data includes but is not limited to: face image, voiceprint, handwriting, signature action and the like. The checking terminal is used for realizing human face living body detection and video face examination links to collect original plaintext images and video data such as human faces and actions, then packaging the images to collect SDK, driving the terminal camera equipment, and opening the camera to obtain real-time video data. The method comprises the steps of collecting original sound data of a client in a video surface examination link, packaging audio collection SDK, driving a terminal microphone device, and opening a microphone to obtain real-time sound data/audio stream.
Step S101, acquiring an identity of a user to be verified in a current business process, biological feature fragment data and video frame data, wherein the biological feature fragment data is obtained by performing fragment processing on the biological features of the acquired user to be verified, and the video frame data is acquired when the user to be verified performs video surface examination;
the user identity includes but is not limited to: the method comprises the steps of carrying out fragment processing on collected user biological characteristic information to obtain fragment data, converting the user biological characteristic information into fragment data or ciphertext data in the processes of registration, transmission and identification, dispersing the fragment data or the ciphertext data at different nodes in the cloud end for caching and collaborative calculation, and accordingly preventing the biological characteristic information data such as the face, the fingerprint and the voiceprint of a user from being leaked in the transmission process, and protecting the personal privacy and the personal property safety of the user.
User video information is collected and subjected to frame extraction processing to obtain video frame data. Similar to the fragmentation processing, frame extraction takes a number of frames from a complete video (e.g., in MP4, MPEG, or AVI format) at certain intervals; each extracted frame is picture-like data in YUV format.
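The interval-based frame extraction described above can be sketched in a few lines. This is a minimal illustration assuming a fixed sampling interval in seconds; the function name and parameters are hypothetical, not from the patent:

```python
def frame_indices(total_frames: int, fps: float, interval_s: float) -> list[int]:
    """Return the indices of frames sampled from a video at a fixed
    time interval, e.g. one frame every `interval_s` seconds."""
    step = max(1, round(fps * interval_s))  # frames between two samples
    return list(range(0, total_frames, step))

# Example: a 10-second clip at 25 fps, sampled every 2 seconds.
print(frame_indices(total_frames=250, fps=25.0, interval_s=2.0))
# → [0, 50, 100, 150, 200]
```

In practice the selected indices would be read with a video decoder and each decoded frame kept in its YUV representation for downstream processing.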
In the embodiment of the invention, the steps of acquiring the identity, the biological characteristic fragment data and the video frame data of the user to be verified in the current service process comprise: acquiring an identity document of a user to be verified through a verification terminal, and identifying an identity mark on the identity document, wherein the verification terminal is a terminal used by the user to be verified; acquiring a face image and voice data of a user to be verified through a verification terminal, extracting a face characteristic vector in the face image and extracting voiceprint data in the voice data; carrying out fragmentation processing on the face feature vector and the voiceprint data through a verification terminal to generate biological feature fragmentation data; and during video face examination, acquiring a face examination video of a user to be verified through the verification terminal, and extracting video frame data in the face examination video.
The verification terminal may refer to a terminal device used by a user, and the types of the verification terminal include, but are not limited to: mobile terminals (e.g., cell phone, tablet, IPAD, notebook), PC. The user uploads the identity card and other identity information of the user to be verified through the verification terminal, and meanwhile, the verification terminal identifies the user to obtain a user identity for subsequent verification.
In the embodiment of the present invention, before acquiring the face image and the sound data of the user to be verified, the method further includes: randomly generating an action instruction and a colorful light signal, wherein the action instruction is used for indicating a user to be verified to face a display screen and finishing a specified detection action under reflected light, and the colorful light signal is generated based on a three-dimensional imaging principle; sending an action instruction to a verification terminal; receiving sample detection data transmitted by a verification terminal, wherein the sample detection data is used for performing living body detection on a user to be verified; and under the condition that the sample detection data indicate that the user to be verified passes the living body detection, starting to acquire the face image and the sound data of the user to be verified through the verification terminal.
In this embodiment, living body detection can be performed before the identity audit, biological sample collection, and video face review are carried out. Since an attacker may use deep forgery (DeepFake) technology to render and synthesize a "face-changing" image and generate a forged user video in real time to deceive the staff, it is necessary to confirm through living body detection whether the person in the current identity audit and video face review is the real user. Living body detection is applied at the verification terminal to defend against attack means such as copied paper photos, 3D synthesis, video replay, and screen replay. Action instructions and colored light signals (based on a reflected-light three-dimensional imaging principle) are randomly generated; the user terminal obtains the action instructions and colored light signals, and the user performs the corresponding actions under the reflected light while facing the screen. During living body detection, the verification terminal starts to collect face images and sound data of the user.
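The random challenge generation described above can be sketched as follows. The specific action set and light colors are illustrative assumptions (the patent does not enumerate them); the point is that the challenge is unpredictable, so a pre-recorded or synthesized video cannot match it:

```python
import secrets

ACTIONS = ["nod", "blink", "turn head left", "open mouth"]   # hypothetical action set
COLORS = ["red", "green", "blue", "yellow"]                  # hypothetical screen colors

def make_challenge(n_colors: int = 3) -> dict:
    """Randomly pick one detection action and a sequence of screen-light
    colors; the terminal displays the colors while the user performs the
    action, and the light reflected from the face is checked for consistency."""
    return {
        "action": secrets.choice(ACTIONS),
        "light_sequence": [secrets.choice(COLORS) for _ in range(n_colors)],
    }
```

Using `secrets` rather than `random` reflects that the challenge must not be guessable by an attacker preparing a forged video in advance.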
The embodiment of the invention can realize multi-party safe calculation of data, complete the extraction of biological characteristics such as human faces, voiceprints and the like in the data through a human face and voiceprint characteristic extraction algorithm, divide the biological characteristic data into random fragment data through a fragment algorithm, encrypt the data fragments through an encryption algorithm, and finally send ciphertext fragment data to a cloud for analysis. Wherein, extracting the face feature vector in the face image and extracting the voiceprint data in the voice data, including: extracting a face frame in each face image by adopting a multitask convolutional neural network through a check terminal; extracting face information in a face frame through a face recognition model through a verification terminal, and mapping the face information to a multi-dimensional space vector to obtain a face feature vector; preprocessing the sound data through a verification terminal; and performing fast Fourier transform processing, convolution operation processing and discrete cosine transform on the preprocessed sound data through a verification terminal to obtain a Mel cepstrum coefficient, and taking the Mel cepstrum coefficient as the voiceprint data in the sound data.
For face data, a face frame in a current image can be acquired through a multitask convolutional neural network (MTCNN), then face information in the current image is extracted through a face recognition model (for example, a faceNet model), and then the face information is mapped to a multidimensional space vector to obtain a face feature vector; for the voiceprint data, the voice signal can be preprocessed, redundant information irrelevant to identification is removed, and then fast Fourier transform, convolution operation and discrete cosine transform are carried out to obtain Mel cepstrum coefficient (MFCC) as the voiceprint characteristic parameters. Where Mel cepstral coefficients are cepstral parameters that can be extracted in the Mel-scale frequency domain, and the pre-processing may include pre-emphasizing the acquired speech signal through a high pass filter so that the spectrum of the signal is flattened and remains in the entire band from low frequencies to high frequencies.
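The pre-emphasis step mentioned above (a first-order high-pass filter applied before the FFT/MFCC pipeline) can be sketched in a few lines; the coefficient 0.97 is a commonly used value in speech processing, not one specified by the patent:

```python
def pre_emphasis(signal: list[float], alpha: float = 0.97) -> list[float]:
    """First-order high-pass filter y[n] = x[n] - alpha * x[n-1],
    used to flatten the spectrum of a speech signal before MFCC extraction."""
    if not signal:
        return []
    return [signal[0]] + [signal[i] - alpha * signal[i - 1]
                          for i in range(1, len(signal))]
```

A constant (DC) component is strongly attenuated while fast changes pass through, which is exactly the flattening effect the high-pass filter is meant to achieve.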
After the face feature vector and the voiceprint data are obtained, the extracted face feature vector and the extracted voiceprint feature data can be subjected to fragmentation processing, and face and voiceprint feature random fragmentation data are generated. Taking face feature vector fragmentation as an example, a feature vector element random difference method can be adopted to transform the original face feature vector and generate two new different face feature random sequences.
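The "feature vector element random difference" fragmentation described above behaves like additive secret sharing: each element is split into two random-looking shares that individually reveal nothing, but sum back to the original value. The exact scheme in the patent is not specified, so the following is an illustrative sketch under that assumption:

```python
import random

def split_features(vec: list[float]) -> tuple[list[float], list[float]]:
    """Split a feature vector into two random fragments; neither fragment
    alone reveals the original vector, but element-wise addition restores it."""
    share1 = [random.uniform(-1.0, 1.0) for _ in vec]   # random mask sequence
    share2 = [v - s for v, s in zip(vec, share1)]       # element-wise difference
    return share1, share2

def merge_features(share1: list[float], share2: list[float]) -> list[float]:
    """Recombine two fragments into the original feature vector."""
    return [a + b for a, b in zip(share1, share2)]
```

In the scheme of this embodiment the two fragments would be encrypted and cached at different cloud nodes, so that no single node ever holds the complete biometric vector.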
In the embodiment of the present invention, after the face feature vector and the voiceprint data are subjected to fragment processing to generate biometric fragment data, the method further includes: encrypting the biological characteristic fragment data by adopting a preset symmetric encryption algorithm through the verification terminal to obtain an encrypted ciphertext; and transmitting the encrypted ciphertext to the server through the verification terminal.
After fragmentation is completed, the output random fragment data of biological features such as the face and voiceprint is encrypted, realizing confidentiality-protected transmission of the biological feature fragment data. A symmetric encryption algorithm is adopted to encrypt the fragment data into ciphertext; the symmetric encryption algorithm adopts the ZUC algorithm, which meets the Chinese national cryptographic standard.
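The encryption step can be illustrated with a generic symmetric stream cipher. ZUC itself is not available in the Python standard library, so the sketch below substitutes a keyed SHA-256 counter keystream purely for illustration; it is NOT the ZUC algorithm and not production cryptography:

```python
import hashlib

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy symmetric stream cipher: XOR the data with a keystream derived
    from the key. Encryption and decryption are the same operation."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

fragment = b"face-feature-fragment"   # a serialized fragment (illustrative)
key = b"session-key"                  # shared symmetric key (illustrative)
ciphertext = xor_stream(key, fragment)
assert xor_stream(key, ciphertext) == fragment  # decryption restores the fragment
```

The property demonstrated (the same keyed operation both encrypts and decrypts) is what the embodiment relies on: the verification terminal encrypts each fragment, and the corresponding cloud decryption node restores it with the same key.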
It should be noted that, after acquiring the identity, the biometric feature fragmentation data, and the video frame data of the user to be verified in the current business process, the method includes: receiving an encrypted ciphertext related to the biological feature fragment data, wherein the encrypted ciphertext is classified into first ciphertext data and second ciphertext data according to the human face feature vector and the voice print data; distributing the first ciphertext data to a first cache space, and distributing the second ciphertext data to a second cache space, wherein the first cache space corresponds to a first decryption computing node, and the second cache space corresponds to a second decryption computing node; restoring the first ciphertext data into face feature fragment data by adopting a first decryption computing node, and restoring the second ciphertext data into voiceprint fragment data by adopting a second decryption computing node; and decrypting the face characteristic fragment data and the voiceprint fragment data by adopting a preset symmetric encryption algorithm to obtain the biological characteristic fragment data.
In the embodiment of the present invention, after the identity, the biological feature fragment data, and the video frame data of the user to be verified in the current business process are acquired, the method includes: generating preset adversarial perturbation information in the biological feature fragment data and the video frame data, wherein the adversarial perturbation information is used to defend against adversarial samples from external devices.
And S102, under the condition that the identity identification passes the verification, identifying the human body action in the video frame data by adopting an action detection model, and comparing the human body action with an action requirement instruction in the current business process.
The human body action in the received video frame data is computed and recognized, and compared with the action instructions (such as presenting a credential to the camera, signing, nodding, and the like) required by the service personnel of the financial institution, wherein the action instructions are generated by the living body detection server and the video face review server in the current business process.
In this embodiment, the human body actions in the received video frame data are recognized through an action detection algorithm model stored in the server, and compared with the action instructions (such as presenting a credential to the camera, signing, nodding, and the like) randomly generated by the living body detection server and the video face review server and required by the customer service manager in the current business process.
And S103, comparing the biological characteristic fragment data with a pre-recorded biological characteristic set by adopting a biological characteristic identification model.
In this embodiment, the server may cache not only biometric fragment data and video frame data but also artificial intelligent algorithm models such as biometric identification (face feature identification, voice feature identification), motion detection and the like for checking user identity consistency information for various biometric fragment data (these fragment data are stored in different storage spaces in blocks to prevent attack and theft of external devices), and through the biometric identification models, the face feature fragment data registered by the user and the face feature random fragment data received by the current business process are used as data input parameters of the algorithm models to complete comparison calculation of the face feature fragment data, and meanwhile, the voiceprint feature fragment data registered by the user and the voiceprint feature random fragment data received by the current business process are used as data input parameters of the algorithm models to complete comparison calculation of the voiceprint feature fragment data.
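The comparison calculation can be sketched as a cosine similarity between the registered feature vector and the one received in the current business process, computed after the cloud nodes recombine the fragments. The threshold value below is an illustrative assumption, not one fixed by the patent:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two feature vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def features_match(registered: list[float], received: list[float],
                   threshold: float = 0.8) -> bool:
    """Decide whether a recombined feature vector matches the registered one."""
    return cosine_similarity(registered, received) >= threshold
```

The same comparison would be run separately for the face feature fragments and the voiceprint feature fragments, each against its registered counterpart.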
And step S104, in a case where both the human body action and the biological features of the user to be verified pass the comparison, confirming that the identity information of the user to be verified passes the verification, and generating a user verification report.
In this embodiment, the comparison calculation result may be subjected to fusion weighted calculation to obtain an overall credible probability of identity consistency of the user, and a user verification report may be generated, where the user verification report may be used as a basis for business process control decision of an audit terminal of a financial institution.
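The fusion weighted calculation mentioned above can be sketched as a weighted average of the individual comparison scores. The modality names and weights below are illustrative assumptions, not values from the patent:

```python
def fused_confidence(scores: dict[str, float],
                     weights: dict[str, float]) -> float:
    """Fuse per-modality comparison scores (each in [0, 1]) into an
    overall credibility probability of identity consistency."""
    total_w = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total_w

# Example: face, voiceprint and action scores with assumed weights.
scores = {"face": 0.92, "voiceprint": 0.85, "action": 1.0}
weights = {"face": 0.5, "voiceprint": 0.3, "action": 0.2}
print(round(fused_confidence(scores, weights), 3))
# → 0.915
```

Combining modalities this way is what lets a weak score in one channel be offset by strong scores in the others, which is the "compensating for each other's weaknesses" property claimed for multi-modal recognition.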
Optionally, the server in this embodiment further includes a cache timed-cleaning unit, which deletes stored unregistered face and voiceprint biometric plaintext, together with the associated token data and video frame data, once a certain retention period is exceeded.
Through the embodiment, the privacy protection of the user biological characteristic information can be technically ensured on the premise of no trusted third party management and no dependence on specific hardware equipment. By means of the multi-party security computing technology, the biological feature information of the user is converted into feature random fragment data or ciphertext data in the processes of registration, transmission and identification, the feature random fragment data or the ciphertext data are dispersed in different nodes of the cloud for caching and collaborative computing, leakage of biological feature information data such as the face and voiceprints of the user in the transmission process is prevented, and the legal rights and the individual privacy of the user are protected from being infringed.
Meanwhile, a multi-modal biological feature fusion recognition technology is adopted. Different from traditional single face image recognition, technologies such as voiceprint recognition and action detection are added, and fusion weighted calculation is performed on the recognition results. Multi-modal biometric recognition uses multiple biological features at the same time so that they compensate for each other's weaknesses, effectively reducing recognition errors, improving recognition accuracy, and reducing the risks of a single biometric system.
Meanwhile, this embodiment enhances the technical security of the remote financial institution's biometric recognition through adversarial-sample attack and defense technology. In the living body detection link before face images are collected and in the video face review link, through technologies such as adversarial sample detection, colored-light living body detection, and adversarial sample perturbation, the auditing terminal of the financial institution can detect the counterfeiting attack means of lawless persons in time, achieving active defense.
The invention is described below in connection with an alternative embodiment.
Example two
According to an embodiment of the present invention, an embodiment of a method for verifying user identity information is provided, which is applied to an auditing terminal of a financial institution (or an agent terminal side of the financial institution), the auditing terminal of the financial institution is connected with a server, and the server executes various implementation steps and all implementation schemes illustrated in the above embodiment, it should be noted that the steps illustrated in the flowchart of the drawings may be executed in a computer system such as a set of computer executable instructions, and although a logical sequence is illustrated in the flowchart, in some cases, the steps illustrated or described may be executed in a sequence different from that here.
Fig. 2 is a first schematic diagram of another optional verification method for user identity information according to an embodiment of the present invention, as shown in fig. 2, the verification method includes:
step S201, receiving an identity identifier, biological characteristic fragment data and video frame data transmitted by a verification terminal;
the verification terminal may refer to a terminal device used by a user, and the types of the verification terminal include, but are not limited to: mobile terminals (e.g., cell phone, tablet, IPAD, notebook), PC. The user uploads the identity card and other identity information of the user to be verified through the verification terminal, and meanwhile, the verification terminal identifies the user to obtain a user identity for subsequent verification.
User identities include, but are not limited to, an identity card number. The collected user biological feature information is fragmented to obtain fragment data; that is, the user biological feature information is converted into fragment data or ciphertext data during registration, transmission, and recognition, and dispersed to different cloud nodes for caching and collaborative computation, thereby preventing biological feature information data such as the user's face, fingerprint, and voiceprint from being leaked during transmission and protecting the user's personal privacy and property safety.
Step S202, transmitting the identity identification, the biological characteristic fragment data and the video frame data to a server;
step S203, receiving a user verification report transmitted by the server;
and step S204, outputting a process auditing result of the user to be verified in the current business process based on the user verification report.
The steps executed by the auditing terminal of the financial institution comprise: receiving an identity identifier, biological characteristic fragment data and video frame data transmitted by a verification terminal; transmitting the identity identification, the biological feature fragment data and the video frame data to a server; receiving a user verification report transmitted by a server; and outputting a process auditing result of the user to be verified in the current business process based on the user verification report.
The auditing terminal of the financial institution can be connected with the verifying terminal of the user side, and the verifying terminal of the user side can collect multi-mode biological characteristic data and certificate screenshot information of the user, wherein the multi-mode biological characteristic data comprises but is not limited to: face image, voiceprint, handwriting, signature action and the like. The checking terminal is used for realizing human face living body detection and video face examination links to collect original plaintext images and video data such as human faces and actions, then packaging the images to collect SDK, driving the terminal camera equipment, and opening the camera to obtain real-time video data. The method comprises the steps of collecting original sound data of a user in a video surface examination link, packaging audio collection SDK, driving a terminal microphone device, and opening a microphone to obtain real-time sound data/audio stream.
In this embodiment, living body detection can be performed before the identity audit, biological sample collection, and video face review are carried out. Since an attacker may use deep forgery (DeepFake) technology to render and synthesize a "face-changing" image and generate a forged user video in real time to deceive the staff, it is necessary to confirm through living body detection whether the person in the current identity audit and video face review is the real user. Living body detection is applied at the verification terminal to defend against attack means such as copied paper photos, 3D synthesis, video replay, and screen replay. Action instructions and colored light signals (based on a reflected-light three-dimensional imaging principle) are randomly generated and obtained by the user terminal; the user performs the corresponding actions under the reflected light while facing the screen, and during living body detection the verification terminal collects face images and sound data of the user.
The embodiment of the invention can realize multi-party safe calculation of data, complete the extraction of biological characteristics such as human faces, voiceprints and the like in the data through a human face and voiceprint characteristic extraction algorithm, divide the biological characteristic data into random fragment data through a fragment algorithm, encrypt the data fragments through an encryption algorithm, and finally send ciphertext fragment data to a cloud for analysis. Wherein, extracting the face feature vector in the face image and extracting the voiceprint data in the voice data, including: extracting a face frame in each face image by adopting a multitask convolution neural network through a verification terminal; extracting face information in a face frame through a face recognition model through a verification terminal, and mapping the face information to a multi-dimensional space vector to obtain a face feature vector; preprocessing the sound data through a verification terminal; and performing fast Fourier transform processing, convolution operation processing and discrete cosine transform on the preprocessed sound data through a verification terminal to obtain a Mel cepstrum coefficient, and taking the Mel cepstrum coefficient as the voiceprint data in the sound data.
For face data, a face frame in a current image can be acquired through a multitask convolutional neural network (MTCNN), then face information in the current image is extracted through a face recognition model (for example, a faceNet model), and then the face information is mapped to a multidimensional space vector to obtain a face feature vector; for the voiceprint data, the voice signal can be preprocessed, redundant information irrelevant to identification is removed, and then fast Fourier transform, convolution operation and discrete cosine transform are carried out to obtain Mel cepstrum coefficient (MFCC) as the voiceprint characteristic parameters. Wherein the Mel cepstral coefficients are cepstral parameters that can be extracted in the Mel-scale frequency domain, and the pre-processing can include pre-emphasizing the collected speech signal through a high-pass filter so that the frequency spectrum of the signal becomes flat and remains in the whole frequency band from low frequency to high frequency.
After the face feature vector and the voiceprint data are obtained, the extracted face feature vector and voiceprint feature data can be fragmented to generate face and voiceprint feature random fragment data. Taking face feature vector fragmentation as an example, a feature-vector-element random difference method can be adopted to transform the original face feature vector into two new, different random face feature sequences.
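The "feature-vector-element random difference method" is not specified in detail; one plausible reading is element-wise additive sharing, in which a random mask sequence and its difference from the original vector form the two new random sequences. A minimal sketch under that assumption:

```python
import numpy as np

def split_feature(vec, rng=None):
    """Split a feature vector into two random shares whose element-wise
    sum reconstructs the original vector. Neither share alone reveals it."""
    if rng is None:
        rng = np.random.default_rng()
    share_a = rng.standard_normal(vec.shape)  # first random sequence (a mask)
    share_b = vec - share_a                   # second sequence: the difference
    return share_a, share_b

def reconstruct(share_a, share_b):
    """Recombine the two random shares into the original feature vector."""
    return share_a + share_b
```

Each share, viewed alone, is statistically random, which matches the stated goal of sending only meaningless fragments over the network.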
In this embodiment, the server may cache not only the biometric fragment data and video frame data but also the artificial-intelligence algorithm models used to check user identity consistency, such as biometric recognition (face feature recognition, voiceprint feature recognition) and motion detection models. The various biometric fragment data are stored in separate storage spaces to guard against attack and theft by external devices. Through the biometric recognition models, the face feature fragment data registered by the user and the face feature random fragment data received in the current business process serve as input parameters of the algorithm models to complete the comparison calculation of the face feature fragment data; likewise, the voiceprint feature fragment data registered by the user and the voiceprint feature random fragment data received in the current business process serve as input parameters to complete the comparison calculation of the voiceprint feature fragment data.
This embodiment adopts a multi-modal biometric fusion recognition technology. Unlike conventional single face image recognition, multiple biometric features are used simultaneously so that their strengths and weaknesses complement one another, which effectively reduces recognition errors, improves recognition accuracy and lowers the risk of a single biometric system. At the same time, through multi-party secure computation, the user's biometric information is converted into random feature fragment data or ciphertext data during registration, transmission and recognition, and is dispersed across different cloud nodes for caching and collaborative computation; this prevents leakage of biometric data such as the user's face and voiceprint during transmission and ensures that the user's legal rights and personal privacy are not violated. Further, the technical problem in the related art of user information leakage caused by immature online business systems of financial institutions and a single biometric means is solved.
The invention is described below in connection with an alternative embodiment.
EXAMPLE III
According to an embodiment of the present invention, an embodiment of a method for verifying user identity information applied to a cloud server is provided; the implementation of the cloud server is the same as that of the server in the first embodiment. It should be noted that the steps shown in the flowchart of the figure may be performed in a computer system such as a set of computer-executable instructions, and although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from that described herein.
Fig. 3 is a second schematic diagram of another optional method for verifying user identity information according to an embodiment of the present invention. As shown in fig. 3, the verification method includes:
Step S301: receiving an identity verification instruction, wherein the identity verification instruction carries the identity identifier, biometric fragment data and video frame data of the user to be verified;
Step S302: in response to the identity verification instruction, verifying the identity identifier; if the identity identifier passes verification, using an action detection model to recognize the human body actions in the video frame data and comparing them with the action requirement instructions in the current business process, and using a biometric recognition model to compare the biometric fragment data with a pre-recorded biometric set;
Step S303: after the human body actions and biometric features of the user to be verified have been compared, generating a user verification report.
The steps executed by the cloud server include: receiving an identity verification instruction, wherein the identity verification instruction carries an identity of a user to be verified, biological characteristic fragment data and video frame data; responding to the identity verification instruction, verifying the identity, adopting an action detection model to identify the human body action in the video frame data under the condition that the identity passes the verification, comparing the human body action with an action requirement instruction in the current business process, and adopting a biological characteristic identification model to compare the biological characteristic fragment data with a pre-recorded biological characteristic set; and after the human body action and the biological characteristics of the user to be verified are compared, generating a user verification report.
This embodiment adopts a multi-modal biometric fusion recognition technology. Unlike conventional single face image recognition, multiple biometric features are used simultaneously so that their strengths and weaknesses complement one another, which effectively reduces recognition errors, improves recognition accuracy and lowers the risk of a single biometric system. At the same time, through multi-party secure computation, the user's biometric information is converted into random feature fragment data or ciphertext data during registration, transmission and recognition, and is dispersed across different cloud nodes for caching and collaborative computation; this prevents leakage of biometric data such as the user's face and voiceprint during transmission and ensures that the user's legal rights and personal privacy are not violated. Further, the technical problem in the related art of user information leakage caused by immature online business systems of financial institutions and a single biometric means is solved.
The invention is described below in connection with an alternative embodiment.
Example four
In accordance with an embodiment of the present invention, there is provided an embodiment of a method for verifying user identity information. It should be noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer-executable instructions, and although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different from that described herein.
The present invention is described below with reference to preferred implementation steps, and fig. 4 is an architecture diagram of an optional verification system for user identity information according to an embodiment of the present invention, and as shown in fig. 4, the system includes three main bodies, namely, a collection end (corresponding to the verification terminal), a server, and a verification end/seat end of a financial institution.
As shown in fig. 4, when the user transacts online services such as debit card password resetting and loan application through the online business system of a financial institution, in addition to uploading the identification photo, multi-modal biometric features need to be collected, such as voiceprint data, face data and motion data (acquired during interactive liveness detection and video face review), as well as identity document information. After the terminal collects the user's biometric data, the verification terminal SDK encrypts and transmits the user's private data using feature data encryption transformation and multi-party secure computation (privacy computing). After receiving the user's identity document information, the agent terminal can perform a preliminary check of the user identity information through the personal credit investigation link and the identity information system. The data center server of the financial institution contains an adversarial sample detection algorithm model, a voiceprint recognition model, a motion detection model and the like; it performs anti-counterfeiting checks and recognition on the collected encrypted face data, voiceprint data and audio/video data, performs a networked check of the user identity and a person-document consistency check, and immediately feeds the analysis result back to the verification end/seat end of the financial institution.
Fig. 5 is a schematic diagram illustrating an optional system for verifying user identity information according to an embodiment of the present invention, and as shown in fig. 5, the system for verifying an online identity of a remote financial institution based on multimodal biometric identification, privacy calculation and anti-sample attack and defense technology includes: a user information acquisition unit; a multi-party secure computing unit; a task scheduling unit; a data decryption unit; a biometric recognition unit; an anti-counterfeiting detection unit.
The user information acquisition unit is used for acquiring the multi-modal biometric data and certificate screenshot information of the user. The multi-modal biometric data includes, but is not limited to, face images, voiceprints, handwriting, signature actions and the like.
The image acquisition unit is used for acquiring original plaintext images and video data of faces, actions and the like during the face liveness detection and video face review links. It packages the image acquisition SDK, drives the terminal camera device and opens the camera to acquire a real-time video stream.
The sound acquisition unit is used for acquiring the user's original sound data during the video face review link; it packages the audio acquisition SDK, drives the terminal microphone device and opens the microphone to acquire a real-time audio stream.
The multi-party safety computing unit extracts biological characteristics such as human faces, voiceprints and the like in the data through a human face and voiceprint characteristic extraction algorithm, then divides the biological characteristic data into random fragment data through a fragment algorithm, encrypts the data fragments through an encryption algorithm, and finally sends the ciphertext fragment data to the cloud end for analysis.
The feature data extraction unit is used for extracting the user's face and voiceprint feature data from the collected original image and audio data through the face and voiceprint feature extraction algorithm. Specifically: first, for the face data, a face frame in the current image is obtained through an MTCNN (multi-task cascaded convolutional network), and the face in the current image is then extracted and mapped to a multi-dimensional space vector through a FaceNet model; second, for the voiceprint data, the voice signal is preprocessed to remove redundant information irrelevant to recognition, and fast Fourier transform, convolution and discrete cosine transform are then performed to obtain Mel-frequency cepstral coefficients (MFCCs) as the voiceprint feature parameters.
The feature fragmentation unit is used for fragmenting the face and voiceprint feature data extracted by the feature data extraction unit to generate face and voiceprint feature random fragment data. Taking face feature vector fragmentation as an example, this unit transforms the original face feature vector using a feature-vector-element random difference method to generate two new, different random face feature sequences.
The fragment encryption unit is used for encrypting the random fragment data of biometric features such as the face and voiceprint output by the feature fragmentation unit, so as to realize confidential transmission of the biometric fragment data. The fragment data are encrypted into ciphertext using a symmetric encryption algorithm; the symmetric algorithm adopted is the ZUC algorithm, which conforms to the national cryptographic standard.
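The ZUC stream cipher itself is not reproduced here; the sketch below only illustrates the encrypt-fragments-before-upload step, using a stand-in counter-mode keystream derived from SHA-256 in place of ZUC (an assumption purely for illustration — a production system would substitute a compliant ZUC implementation).

```python
import hashlib
import struct
import numpy as np

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Counter-mode keystream from SHA-256 -- a stand-in for the ZUC
    stream cipher named in the text, NOT ZUC itself."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + struct.pack(">Q", ctr)).digest()
        ctr += 1
    return out[:n]

def encrypt_fragment(frag: np.ndarray, key: bytes, nonce: bytes) -> bytes:
    """XOR a serialized feature fragment with the keystream to produce
    the ciphertext fragment sent to the cloud."""
    plain = frag.astype("<f8").tobytes()
    ks = keystream(key, nonce, len(plain))
    return bytes(a ^ b for a, b in zip(plain, ks))

def decrypt_fragment(ct: bytes, key: bytes, nonce: bytes) -> np.ndarray:
    """Symmetric decryption: the same keystream restores the fragment."""
    ks = keystream(key, nonce, len(ct))
    return np.frombuffer(bytes(a ^ b for a, b in zip(ct, ks)), "<f8")
```

Because the cipher is symmetric, the cloud-side fragment decryption unit simply reapplies the same key and nonce, mirroring the "corresponding symmetric encryption algorithm" wording later in the text.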
The task scheduling unit is deployed at the seat side and is used for user information data transfer, business process control and intelligent auditing management. The seat of the remote financial institution can access the user's online business handling process and, through the task scheduling unit, initiate a remote video face review and schedule the cloud intelligent analysis functions, so as to verify the user's identity consistency and ensure the compliance and safety of the online business process.
The data transfer unit is used for receiving and sending the picture data uploaded by the user, the encrypted biometric data generated by the fragment encryption unit, and the audio/video data generated by the remote face review.
The business process control unit is used for controlling the online business process. After the user passes preliminary links such as service application, data submission and face liveness detection, the seat end can also access the remote video face review through this unit module to perform video confirmation with, or assistance of, the user.
The intelligent auditing unit is used for scheduling the cloud biometric recognition models and anti-counterfeiting detection models to perform anti-counterfeiting judgment and identity consistency judgment on the user's biometric and audio/video data; it also provides an information base link so that the seat can conveniently check the identity document information over the network. The intelligent auditing result is finally fed back to the business process control unit as a decision basis for controlling the online business process.
The data decryption unit is deployed at the cloud and is used for restoring the encrypted biometric fragment data, generated by the fragment encryption unit and transferred by the data transfer unit, into plaintext random fragment data, and for performing consistency comparison and precision verification.
The data access unit is used for receiving the encrypted biometric fragment data generated by the fragment encryption unit and transferred by the data transfer unit, and for distributing the different types of encrypted biometric data, such as face and voiceprint data, to the corresponding data cache spaces and decryption computing nodes.
The fragment decryption unit is used for restoring the face and voiceprint feature ciphertext fragments, after access and route distribution by the data access unit, into plaintext random fragment data. The ciphertext fragments are decrypted using the symmetric encryption algorithm corresponding to the fragment encryption unit.
The comparison calculation unit is used for performing comparison operations on the face and voiceprint feature random fragment data decrypted by the fragment decryption unit, ensuring that the calculation result on the fragment data is consistent with the calculation result on the complete feature data, or that the loss of precision is within an acceptable range.
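One way such consistency can hold exactly is that linear operations commute with additive sharing: the dot product of a query vector against the full feature vector equals the sum of the partial dot products computed on each share separately. The sketch below assumes the additive-sharing reading of the fragmentation step; recombining the shares for the norm is purely for illustration, since a real multi-party protocol would compute it obliviously.

```python
import numpy as np

def partial_dot(query, share):
    """Each cloud node computes a dot product against only its own share."""
    return float(query @ share)

def similarity_from_shares(query, share_a, share_b):
    """Because the sharing is additive, the partial dot products sum to
    the dot product with the full feature vector -- no precision is lost
    on the linear part of the comparison."""
    dot = partial_dot(query, share_a) + partial_dot(query, share_b)
    full = share_a + share_b  # illustration only; see lead-in note
    return dot / (np.linalg.norm(query) * np.linalg.norm(full))
```

This shows why the fragment-level result can match the complete-feature result to within floating-point error, as the comparison calculation unit requires.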
The biometric recognition unit is deployed at the cloud and is used for intelligent analysis, recognition and detection of the uploaded biometric data, such as face and voiceprint data, and of the video stream data. This unit caches the biometric fragment data and video frame data, and also caches the artificial-intelligence algorithm models, such as face recognition, voiceprint recognition and action detection models, used to check user identity consistency.
The biometric fragment data receiving unit is used for receiving the face and voiceprint plaintext random fragment data generated by the fragment decryption unit.
The real-time video stream receiving unit maintains a long connection with the seat end and is used for receiving the real-time video stream data collected by the image acquisition unit during the video face review link.
The video frame extraction unit is used for extracting frames from the collected video stream based on an embedded FFmpeg component.
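The frame extraction step might be driven by an FFmpeg invocation such as the one assembled below; the sampling rate and output file pattern are assumptions for illustration, not values from the embodiment.

```python
def ffmpeg_frame_cmd(stream_url: str, out_pattern: str = "frame_%04d.png",
                     fps: int = 1) -> list:
    """Assemble (without running) an FFmpeg command that samples `fps`
    frames per second from a video stream into numbered image files."""
    return ["ffmpeg", "-i", stream_url, "-vf", f"fps={fps}",
            "-vsync", "vfr", out_pattern]

# The command could then be executed with subprocess.run(cmd, check=True)
# on a host where the ffmpeg binary is installed.
```

Passing the command as an argument list (rather than a shell string) avoids shell-quoting issues with stream URLs.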
The data caching unit is used for storing the face and voiceprint plaintext random fragment data and the video frame data received by the biometric fragment data receiving unit and the real-time video stream receiving unit, as well as the biometric data initially registered by the user.
The cache timed-cleaning unit is used for deleting, from the storage of the data caching unit, unregistered face and voiceprint plaintext random fragment data and video frame data that have been kept in memory beyond a certain time range.
The face recognition unit is used for storing the face recognition algorithm model. The face feature fragment data registered by the user in the data caching unit and the face feature random fragment data received in the current business process serve as input parameters of the algorithm model to complete the comparison calculation of the face feature fragment data.
The voiceprint recognition unit is used for storing the voiceprint recognition algorithm model. The voiceprint feature fragment data registered by the user in the data caching unit and the voiceprint feature random fragment data received in the current business process serve as input parameters of the algorithm model to complete the comparison calculation of the voiceprint feature fragment data.
The action detection and recognition unit is used for storing the action detection algorithm model. It calculates and recognizes human body actions based on the video frame data received in the data caching unit and compares them with the action instructions required by the customer service manager (such as showing evidence to the camera, signing, nodding and the like), where the action instructions are randomly generated by the service terminal during the video face review link and the liveness detection in the current business process.
The recognition result calculation unit is used for performing fusion-weighted calculation on the comparison results of the face recognition unit, the voiceprint recognition unit and the action detection and recognition unit to obtain the overall credible probability of the user's identity consistency, which serves as a decision basis for business process control at the seat end.
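The fusion-weighted calculation can be sketched as a convex combination of the three per-modality comparison scores; the weight values below are illustrative assumptions, as the embodiment does not specify them.

```python
def fuse_scores(face: float, voiceprint: float, action: float,
                weights=(0.5, 0.3, 0.2)) -> float:
    """Fusion-weighted combination of the three per-modality comparison
    scores (each assumed to lie in [0, 1]) into an overall credible
    probability of identity consistency."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must form a convex combination"
    return sum(w * s for w, s in zip(weights, (face, voiceprint, action)))
```

Because the weights sum to one, the fused score stays in [0, 1] and can be thresholded directly by the seat-end business process control.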
The anti-counterfeiting detection unit is also deployed at the cloud and is used for actively preventing attacks against the biometric recognition models, including but not limited to video replay, 3D synthesis, adversarial sample attack and deep forgery attack. This unit effectively ensures that the biometric data and audio/video data to be analyzed are real samples rather than false data forged through concealed means.
The adversarial sample interference unit is used for defending against the currently popular deep forgery (DeepFake) attack technology. Deep forgery technology is based on generative adversarial networks and can use deep learning to identify and replace the original portrait in a picture or video. The algorithm model stored in this unit generates specific adversarial perturbations and adds them to the video and image data in the business process; the perturbations disrupt the deep forgery generation so that its output is visibly abnormal and the seat end can immediately detect the anomaly.
The adversarial sample detection unit is used for defending against adversarial sample attacks aimed at the face recognition link. This unit judges whether the image data collected by the image acquisition unit is an adversarial sample based on methods such as a face feature compression algorithm, face feature activation values and face attribute interpretability, ensuring that the face image data is a normal sample.
The liveness detection unit is used for defending against attack means such as paper photo replay, 3D synthesis, video replay and screen replay. This unit can randomly generate action instructions and a colored-light signal (based on the principle of reflected-light three-dimensional imaging). Before the user's face data is collected, the user terminal SDK receives the action instructions and colored-light signal transmitted by this unit; the user faces the screen as required and performs the specified actions under the reflected light, and after the image acquisition unit collects the sample data, this unit performs the liveness discrimination calculation.
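The random generation of action instructions and a colored-light sequence might look like the following sketch; the action and color vocabularies are hypothetical placeholders, since the embodiment does not enumerate them.

```python
import random

# Illustrative vocabularies -- not enumerated in the embodiment.
ACTIONS = ["nod", "shake_head", "blink", "open_mouth", "sign"]
LIGHT_COLORS = ["red", "green", "blue", "white"]

def make_challenge(n_actions: int = 2, n_flashes: int = 4, seed=None) -> dict:
    """Randomly generate the action instructions and colored-light
    sequence that the liveness detection unit pushes to the terminal SDK."""
    rng = random.Random(seed)
    return {
        "actions": rng.sample(ACTIONS, n_actions),   # distinct actions
        "lights": [rng.choice(LIGHT_COLORS) for _ in range(n_flashes)],
    }
```

Because the challenge is freshly randomized per session, a pre-recorded replay video cannot anticipate the required actions or the reflected-light sequence.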
Fig. 6 is a flowchart of an optional method for verifying user identity information according to an embodiment of the present invention, where the entire authentication process includes:
The first step: start;
The second step: interactive liveness detection. The liveness detection unit randomly generates action instructions and a colored-light signal (based on the principle of reflected-light three-dimensional imaging); the user terminal obtains the action instructions and colored-light signal, and the user faces the screen and performs the corresponding actions under the reflected light. If the liveness detection passes, the verification terminal starts to collect the user's face image and sound data.
The third step: judging whether the living body is a living body, if so, executing the fourth step, and if not, executing the fifteenth step;
the fourth step: capturing a face image and acquiring user voice;
the fifth step: encrypting transmission;
After the user terminal collects the user's biometric data, the user-side SDK encrypts and transmits the user's private data using feature data encryption transformation and multi-party secure computation: the captured face image and collected user voice are divided into random fragment data through the fragmentation algorithm, the data fragments are encrypted through the encryption algorithm, and the ciphertext fragment data are finally sent to the cloud for analysis.
And a sixth step: receiving and decrypting by the cloud;
the seventh step: uploading the screenshot of the identity document;
the eighth step: checking the seat side connection;
The ninth step: checking whether the identity document information is real and valid and whether the person matches the document; if so, execute the tenth step, and if not, execute the fifteenth step;
The tenth step: conducting the video face review and pushing the video stream;
the eleventh step: judging whether the video is forged or not, if not, executing the twelfth step, and if so, executing the fifteenth step;
the twelfth step: identifying biological characteristics;
the thirteenth step: judging whether the biological characteristic comparison calculation is passed, if so, executing a fourteenth step, and if not, executing a fifteenth step;
The fourteenth step: the identity consistency check is passed, and the next business process link is entered;
The fifteenth step: the business flow is ended;
The sixteenth step: end.
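The gating logic of the flow above, where any failed check ends the business flow, can be sketched as:

```python
# Gates in the order of the flowchart: liveness (step 3), identity document
# check (step 9), video forgery judgment (step 11), biometric comparison
# (step 13). Gate names are illustrative labels, not terms from the text.
GATES = ("liveness", "id_document", "video_not_forged", "biometrics")

def verify_flow(checks: dict) -> str:
    """Walk the verification gates in order; the first failed gate ends
    the business flow (the fifteenth step), while passing all gates
    corresponds to entering the next business link (the fourteenth step)."""
    for gate in GATES:
        if not checks.get(gate, False):
            return "flow_ended_at:" + gate
    return "identity_verified"
```

A missing check result is treated as a failure, which matches the fail-closed behavior the flowchart implies.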
According to the above user identity information verification system, on the premise of having no trusted third-party administrator and no dependence on specific hardware, privacy protection of the user's biometric information is technically ensured: through multi-party secure computation, the user's biometric information is converted into random feature fragment data or ciphertext data during registration, transmission and recognition, and is dispersed across different cloud nodes for caching and collaborative computation, preventing leakage of biometric data such as the user's face and voiceprint during transmission and ensuring that the user's legal interests and personal privacy are not violated. At the same time, a multi-modal biometric fusion recognition technology is adopted; unlike conventional single face image recognition, multiple biometric features are used so that their strengths complement one another, which effectively reduces recognition errors, improves recognition accuracy and lowers the risk of a single biometric system, while adversarial sample defense technology strengthens the technical security of remote financial institutions in biometric recognition. In the liveness detection link before the face image is collected and in the video face review link, through technologies such as adversarial sample detection, colored-light liveness detection and adversarial sample interference, the seat end can discover the forgery attack means of lawbreakers in time, achieving active defense. Further, the technical problem in the related art of user information leakage caused by immature online business systems of financial institutions and a single biometric means is solved.
The invention is described below in connection with an alternative embodiment.
EXAMPLE five
The embodiment of the present application further provides a system for verifying user identity information, and it should be noted that the system for verifying user identity information of the embodiment of the present application can be used to execute the method for verifying user identity information provided in the first, second, and third embodiments of the present application. The following describes a system for verifying user identity information provided in an embodiment of the present application.
Fig. 7 is a schematic diagram of an alternative apparatus for verifying user identity information according to an embodiment of the present invention. As shown in fig. 7, the apparatus includes: a verification terminal 70, a financial institution audit terminal 72, and a server 74, which will be described in detail below.
The verification terminal 70 is configured to collect the identity identifier, biometric fragment data and video frame data of the user to be verified, where the biometric fragment data is obtained by fragmenting the collected biometric features of the user to be verified, and the video frame data is collected when a video face review is performed on the user to be verified;
The financial institution auditing terminal 72 is connected with the verification terminal and is used for transmitting the identity identifier, the biometric fragment data and the video frame data to the server;
The server 74 is connected with the auditing terminal; it verifies the identity identifier and, if the identity identifier passes verification, uses the motion detection model to recognize human motions in the video frame data, compares those motions with the motion requirement instructions in the current business process, and uses the biometric recognition model to compare the biometric fragment data with a pre-recorded biometric set; after the human motions and biometric features of the user to be verified have been compared, a user verification report is generated.
Optionally, the verification terminal includes: the image acquisition unit is used for controlling the terminal camera to acquire a face image of the user to be verified; the voice acquisition unit is used for controlling the terminal microphone equipment to acquire voice data of a user to be verified; the characteristic data extraction unit is used for extracting a face characteristic vector in the face image and extracting voiceprint data in the voice data; the characteristic fragmentation unit is used for carrying out fragmentation processing on the face characteristic vectors and the voiceprint data to generate biological characteristic fragmentation data; and the video acquisition unit is used for acquiring a face examination video of the user to be verified.
Optionally, the server comprises: the data access unit is used for receiving an encrypted ciphertext related to the biological feature fragment data, wherein the encrypted ciphertext is classified into first ciphertext data and second ciphertext data according to the human face feature vector and the voiceprint data; the fragment decryption unit is used for distributing the first ciphertext data to a first cache space and distributing the second ciphertext data to a second cache space, wherein the first cache space corresponds to a first decryption calculation node, and the second cache space corresponds to a second decryption calculation node; restoring the first ciphertext data into face feature fragment data by adopting a first decryption computing node, and restoring the second ciphertext data into voiceprint fragment data by adopting a second decryption computing node; decrypting the face feature fragment data and the voiceprint fragment data by adopting a preset symmetric encryption algorithm to obtain biological feature fragment data; and the comparison calculation unit is used for comparing the biological characteristic fragment data with a pre-recorded biological characteristic set.
Optionally, the server further comprises: the real-time video stream receiving unit is in long connection with the auditing terminal and is used for receiving the video for face auditing collected in the video face auditing link; the video frame extracting unit is used for extracting video frame data in the face examination video; the face recognition unit is used for storing a face recognition model, and comparing the face feature fragment data with face features stored in advance by adopting the face recognition model to obtain a face feature comparison result; the voiceprint recognition unit is used for storing a voiceprint recognition model, comparing the voiceprint fragment data with the prestored voiceprint characteristic features by adopting the voiceprint recognition model and obtaining a voiceprint characteristic comparison result; the action detection and identification unit is used for storing the action detection model, identifying the human body action in the video frame data by adopting the action detection model, and comparing the human body action with the action requirement instruction in the current business process to obtain a human body action comparison result; and the recognition result calculating unit is used for performing weighted calculation on the face feature comparison result, the voiceprint feature comparison result and the human body action comparison result and determining a credible probability value of the user to be checked, wherein the credible probability value is recorded in the user check report.
The verification device for the user identity information may further include a processor and a memory. The verification terminal 70, the financial institution audit terminal 72, the server 74 and the like are all stored in the memory as program units, and the processor executes the program units stored in the memory to implement the corresponding functions.
The processor comprises a kernel, and the kernel calls the corresponding program unit from the memory. One or more kernels may be provided. The user identity information is verified through the action detection model and the biological feature recognition model; in a case that the comparison of the human body action and the biological features of the user to be verified is passed, it is confirmed that the identity information of the user to be verified passes verification, and a user verification report is generated.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). The memory includes at least one memory chip.
According to another aspect of the embodiments of the present invention, a computer-readable storage medium is further provided, where the computer-readable storage medium includes a stored computer program, and when the computer program runs, a device in which the computer-readable storage medium is located is controlled to perform any one of the above methods for checking user identity information.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including: a processor; and a memory for storing executable instructions for the processor; wherein the processor is configured to execute any one of the above methods for verifying the user identity information via executing the executable instructions.
Fig. 8 is a block diagram of a hardware structure of an electronic device (or a mobile device) according to an embodiment of the present invention. As shown in fig. 8, the electronic device may include one or more processors 802 (shown here as 802a, 802b, …, 802n; the processor 802 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)) and a memory 808 for storing data. In addition, the electronic device may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a keyboard, a power supply, and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 8 is only an illustration and is not intended to limit the structure of the electronic device. For example, the electronic device may also include more or fewer components than shown in fig. 8, or have a different configuration than shown in fig. 8.
The sequence numbers of the above embodiments of the present invention are for description only and do not indicate the relative merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described apparatus embodiments are merely illustrative, and for example, a division of a unit may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or may not be executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the part of the technical solution of the present invention that in essence contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that it is obvious to those skilled in the art that various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also be considered as the protection scope of the present invention.

Claims (15)

1. A method for verifying user identity information is applied to a server and comprises the following steps:
acquiring an identity identifier of a user to be verified in a current business process, biological feature fragment data and video frame data, wherein the biological feature fragment data is obtained by performing fragmentation processing on the collected biological features of the user to be verified, and the video frame data is collected when video face review is performed on the user to be verified;
in a case that the identity identifier passes verification, recognizing the human body action in the video frame data by using an action detection model, and comparing the human body action with an action requirement instruction in the current business process;
comparing the biological feature fragment data with a pre-recorded biological feature set by adopting a biological feature recognition model;
and in a case that the comparison of the human body action and the biological features of the user to be verified is passed, confirming that the identity information of the user to be verified passes verification, and generating a user verification report.
2. The verification method according to claim 1, wherein the step of obtaining the identity, the biometric feature fragment data and the video frame data of the user to be verified in the current business process comprises:
acquiring the identity document of the user to be verified through a verification terminal, and identifying the identity mark on the identity document, wherein the verification terminal is a terminal used by the user to be verified;
collecting a face image and sound data of the user to be verified through the verification terminal, extracting a face characteristic vector in the face image and extracting voiceprint data in the sound data;
performing fragmentation processing on the face feature vector and the voiceprint data through the verification terminal to generate the biological feature fragment data;
and during the video face review, acquiring a face review video of the user to be verified through the verification terminal, and extracting the video frame data from the face review video.
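The fragmentation step of claim 2 can be sketched minimally as follows. The fragment size and the byte representation of the feature vector are assumptions for illustration; the patent does not prescribe a fragment format.

```python
# Hedged sketch of the fragmentation processing in claim 2: split serialized
# biometric data into fixed-size fragments so that no single fragment carries
# the whole face feature vector or voiceprint. The 16-byte fragment size is
# a hypothetical choice.

def fragment(data: bytes, size: int = 16) -> list:
    """Split raw feature bytes into consecutive fragments of `size` bytes."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def reassemble(fragments: list) -> bytes:
    """Inverse operation used on the server side after decryption."""
    return b"".join(fragments)

face_vec = bytes(range(40))      # stand-in for a serialized face feature vector
shards = fragment(face_vec, 16)  # three fragments of 16, 16 and 8 bytes
```

In practice each fragment would then be encrypted and transmitted separately, as claims 5 and 6 describe.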
3. The verification method according to claim 2, before collecting the facial image and the sound data of the user to be verified, further comprising:
randomly generating an action instruction and a colored light signal, wherein the action instruction is used for instructing the user to be verified to face the display screen and to complete a specified detection action under the reflected light, and the colored light signal is generated based on a three-dimensional imaging principle;
sending the action instruction to the verification terminal;
receiving sample detection data transmitted by the verification terminal, wherein the sample detection data is used for carrying out living body detection on the user to be verified;
and under the condition that the sample detection data indicate that the user to be verified passes the living body detection, starting to acquire the face image and the sound data of the user to be verified through the verification terminal.
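The liveness-challenge generation in claim 3 can be sketched as below. The specific action names and colors are hypothetical; the claim only requires that both the action instruction and the colored light signal be randomly generated.

```python
# Minimal sketch of the liveness challenge of claim 3: the server randomly
# picks one detection action and a colored light sequence to display on the
# user's screen. ACTIONS and COLORS are assumed example values.
import random

ACTIONS = ["blink", "turn_head_left", "turn_head_right", "open_mouth", "nod"]
COLORS = ["red", "green", "blue", "yellow"]

def generate_challenge(n_colors=4, seed=None):
    """Return a random action instruction plus a colored light signal."""
    rng = random.Random(seed)
    return {
        "action_instruction": rng.choice(ACTIONS),
        "color_light_signal": [rng.choice(COLORS) for _ in range(n_colors)],
    }

challenge = generate_challenge(seed=42)
```

A fixed `seed` is shown only to make the sketch reproducible; a deployment would use an unpredictable source so the challenge cannot be replayed.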
4. The verification method according to claim 2, wherein the steps of extracting the face feature vector in the face image and extracting the voiceprint data in the voice data comprise:
extracting a face frame in each face image by adopting a multitask convolution neural network through the verification terminal;
extracting, through the verification terminal, face information in the face frame by using a face recognition model, and mapping the face information to a multi-dimensional space vector to obtain the face feature vector;
preprocessing the sound data through the verification terminal;
and performing fast Fourier transform processing, convolution operation processing and discrete cosine transform on the preprocessed sound data through the verification terminal to obtain Mel-frequency cepstral coefficients, and taking the Mel-frequency cepstral coefficients as the voiceprint data in the sound data.
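The Fourier transform, mel filtering and discrete cosine transform pipeline of claim 4 can be sketched in a dependency-free form as follows. This is a rough illustrative sketch: it uses a naive DFT in place of a real FFT, and the filter count, coefficient count, frame length and sample rate are assumed values, not parameters from the patent.

```python
# Rough sketch of the voiceprint feature pipeline of claim 4: window the
# signal, take a power spectrum (naive DFT standing in for the FFT), apply a
# small triangular mel filterbank, and run a DCT-II over the log energies to
# obtain Mel-frequency cepstral coefficients.
import cmath
import math

def power_spectrum(frame):
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(n // 2 + 1)]

def mel_filterbank(n_filters, n_bins, sample_rate):
    hz_to_mel = lambda f: 2595 * math.log10(1 + f / 700)
    mel_to_hz = lambda m: 700 * (10 ** (m / 2595) - 1)
    mel_pts = [i * hz_to_mel(sample_rate / 2) / (n_filters + 1)
               for i in range(n_filters + 2)]
    bins = [int(round(mel_to_hz(m) / (sample_rate / 2) * (n_bins - 1)))
            for m in mel_pts]
    banks = []
    for i in range(1, n_filters + 1):
        left, centre, right = bins[i - 1], bins[i], bins[i + 1]
        bank = [0.0] * n_bins
        for b in range(left, right + 1):   # triangular filter shape
            if b < centre and centre > left:
                bank[b] = (b - left) / (centre - left)
            elif b >= centre and right > centre:
                bank[b] = (right - b) / (right - centre)
        banks.append(bank)
    return banks

def mfcc(signal, sample_rate, n_filters=8, n_coeffs=5):
    n = len(signal)
    windowed = [s * (0.54 - 0.46 * math.cos(2 * math.pi * t / (n - 1)))
                for t, s in enumerate(signal)]          # Hamming window
    spec = power_spectrum(windowed)
    energies = [math.log(sum(w * p for w, p in zip(bank, spec)) + 1e-10)
                for bank in mel_filterbank(n_filters, len(spec), sample_rate)]
    # DCT-II over the log filterbank energies yields the cepstral coefficients
    return [sum(e * math.cos(math.pi * k * (m + 0.5) / n_filters)
                for m, e in enumerate(energies)) for k in range(n_coeffs)]

tone = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(64)]
coeffs = mfcc(tone, 8000)
```

A production verification terminal would use an optimized FFT and a larger filterbank; the structure of the computation, however, matches the transform sequence recited in the claim.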
5. The verification method according to claim 2, wherein after the face feature vector and the voiceprint data are subjected to a fragmentation processing to generate the biometric fragmentation data, the verification method further comprises:
encrypting the biological characteristic fragment data by adopting a preset symmetric encryption algorithm through the verification terminal to obtain an encrypted ciphertext;
and transmitting the encrypted ciphertext to the server through the verification terminal.
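The encrypt-on-terminal / decrypt-on-server round trip of claim 5 can be illustrated with a symmetric round trip. Important caveat: the SHA-256 counter keystream below is a toy stand-in, not a secure cipher; a real deployment of the "preset symmetric encryption algorithm" would use a vetted cipher such as AES-GCM or SM4. The key and fragment contents are hypothetical.

```python
# Toy stand-in for the preset symmetric encryption algorithm of claim 5,
# shown only to demonstrate that the same shared key both encrypts on the
# verification terminal and decrypts on the server. NOT a secure cipher.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random byte stream from the key in counter mode."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """Symmetric: the same call both encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"shared-terminal-server-key"        # hypothetical pre-shared key
fragment_data = b"face-feature-fragment-01"
ciphertext = xor_crypt(key, fragment_data)  # terminal side
restored = xor_crypt(key, ciphertext)       # server side
```

The symmetry (one function for both directions) is what lets the server of claim 6 recover the fragments with the same preset algorithm the terminal used.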
6. The verification method according to claim 5, after obtaining the identity, the biometric fragmentation data and the video frame data of the user to be verified in the current business process, comprising:
receiving an encrypted ciphertext related to the biological feature fragment data, wherein the encrypted ciphertext is classified into first ciphertext data and second ciphertext data according to the face feature vector and the voiceprint data;
distributing the first ciphertext data to a first cache space, and distributing the second ciphertext data to a second cache space, wherein the first cache space corresponds to a first decryption computation node, and the second cache space corresponds to a second decryption computation node;
restoring the first ciphertext data into face feature fragment data by using the first decryption computing node, and restoring the second ciphertext data into voiceprint fragment data by using the second decryption computing node;
and decrypting the face feature fragment data and the voiceprint fragment data by adopting the preset symmetric encryption algorithm to obtain the biological feature fragment data.
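The two-lane flow of claim 6 can be sketched as a routing-and-drain pattern. In this sketch, queues stand in for the two cache spaces and the byte-reversal "decrypt" is a hypothetical placeholder for the preset symmetric algorithm; the tags and payloads are invented for illustration.

```python
# Sketch of claim 6: ciphertexts classified as face or voiceprint data are
# distributed to separate cache spaces, each drained by its own decryption
# computing node. deque objects stand in for the cache spaces.
from collections import deque

face_cache, voice_cache = deque(), deque()  # first / second cache space

def route(ciphertext: bytes, kind: str) -> None:
    """Data access step: classify a ciphertext and distribute it to a cache."""
    (face_cache if kind == "face" else voice_cache).append(ciphertext)

def drain(cache, decrypt):
    """One decryption computing node restores every fragment in its cache."""
    restored = []
    while cache:
        restored.append(decrypt(cache.popleft()))
    return restored

decrypt = lambda c: c[::-1]  # placeholder, not a real cipher

route(b"1tf", "face")
route(b"2tf", "face")
route(b"1pv", "voice")
face_fragments = drain(face_cache, decrypt)    # first decryption node
voice_fragments = drain(voice_cache, decrypt)  # second decryption node
```

Separating the two caches lets the face and voiceprint lanes be decrypted in parallel by independent nodes, which is the point of the first/second cache-space split in the claim.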
7. The verification method according to claim 1, after obtaining the identity, the biometric fragmentation data and the video frame data of the user to be verified in the current business process, comprising:
generating preset countermeasure disturbance information in the biological feature fragment data and the video frame data, wherein the countermeasure disturbance information is used for defending countermeasure samples of external equipment.
8. A user identity information verification method is characterized in that the method is applied to an audit terminal of a financial institution, the audit terminal is respectively connected with a verification terminal and a server, the verification terminal is a terminal used by a user to be verified, and the method comprises the following steps:
receiving the identity identifier, the biological feature fragment data and the video frame data transmitted by the verification terminal, wherein the biological feature fragment data is obtained by performing fragmentation processing on the collected biological features of the user to be verified, and the video frame data is collected when video face review is performed on the user to be verified;
transmitting the identity, the biological characteristic fragment data and the video frame data to the server;
receiving a user verification report transmitted by the server, wherein the user verification report at least comprises: the human body action comparison result refers to a result obtained by adopting an action detection model to identify the human body action in the video frame data and comparing the human body action with an action requirement instruction in the current business process under the condition that the identity passes the verification, and the biological characteristic comparison result refers to a result obtained by adopting a biological characteristic identification model to compare the biological characteristic fragment data with a pre-recorded biological characteristic set;
and outputting a process auditing result of the user to be verified in the current business process based on the user verification report.
9. A method for verifying user identity information is applied to a cloud server and comprises the following steps:
receiving an identity verification instruction, wherein the identity verification instruction carries an identity identifier of a user to be verified, biological feature fragment data and video frame data, the biological feature fragment data is obtained by performing fragmentation processing on the collected biological features of the user to be verified, and the video frame data is collected when video face review is performed on the user to be verified;
responding to the identity verification instruction, verifying the identity identification, adopting an action detection model to identify human body actions in the video frame data under the condition that the identity identification passes the verification, comparing the human body actions with action requirement instructions in the current business process, and adopting a biological characteristic identification model to compare the biological characteristic fragment data with a biological characteristic set which is input in advance;
and after the comparison of the human body action and the biological features of the user to be verified is passed, generating a user verification report.
10. A system for verifying user identity information, comprising:
the verification terminal is used for acquiring an identity identifier of a user to be verified, biological feature fragment data and video frame data, wherein the biological feature fragment data is obtained by performing fragmentation processing on the collected biological features of the user to be verified, and the video frame data is collected when video face review is performed on the user to be verified;
the auditing terminal of the financial institution is connected with the checking terminal and is used for transmitting the identity identifier, the biological characteristic fragment data and the video frame data to a server;
the server is connected with the auditing terminal, verifies the identity identifier, adopts an action detection model to identify the human body action in the video frame data under the condition that the identity identifier passes the verification, compares the human body action with an action requirement instruction in the current business process, and adopts a biological characteristic identification model to compare the biological characteristic fragment data with a pre-recorded biological characteristic set; and after the comparison of the human body action and the biological characteristics of the user to be verified is completed, generating a user verification report.
11. The verification system of claim 10, wherein the verification terminal comprises:
the image acquisition unit is used for controlling the terminal camera to acquire the face image of the user to be verified;
the voice acquisition unit is used for controlling the terminal microphone equipment to acquire voice data of the user to be verified;
the feature data extraction unit is used for extracting a face feature vector in the face image and extracting voiceprint data in the voice data;
the feature fragmentation unit is used for carrying out fragmentation processing on the face feature vector and the voiceprint data to generate the biological feature fragmentation data;
and the video acquisition unit is used for acquiring the face examination video of the user to be verified.
12. The verification system of claim 10, wherein the server comprises:
the data access unit is used for receiving an encrypted ciphertext related to the biological feature fragment data, wherein the encrypted ciphertext is classified into first ciphertext data and second ciphertext data according to the face feature vector and the voiceprint data;
the fragment decryption unit is used for distributing the first ciphertext data to a first cache space and distributing the second ciphertext data to a second cache space, wherein the first cache space corresponds to a first decryption computing node, and the second cache space corresponds to a second decryption computing node; restoring the first ciphertext data into face feature fragment data by using the first decryption computing node, and restoring the second ciphertext data into voiceprint fragment data by using the second decryption computing node; decrypting the face feature fragment data and the voiceprint fragment data by adopting a preset symmetric encryption algorithm to obtain the biological feature fragment data;
and the comparison calculation unit is used for comparing the biological characteristic fragment data with a pre-input biological characteristic set.
13. The verification system of claim 12, wherein the server further comprises:
the real-time video stream receiving unit maintains a persistent connection to the audit terminal and is used for receiving the face review video collected during the video face review stage;
the video frame extracting unit is used for extracting video frame data in the face examination video;
the face recognition unit is used for storing a face recognition model, and comparing the face feature fragment data with face features stored in advance by adopting the face recognition model to obtain a face feature comparison result;
the voiceprint recognition unit is used for storing a voiceprint recognition model, and comparing the voiceprint fragment data with pre-stored voiceprint features by using the voiceprint recognition model to obtain a voiceprint feature comparison result;
the action detection and identification unit is used for storing an action detection model, identifying the human body action in the video frame data by adopting the action detection model, and comparing the human body action with the action requirement instruction in the current business process to obtain a human body action comparison result;
and the recognition result calculating unit is used for performing weighted calculation on the face feature comparison result, the voiceprint feature comparison result and the human body action comparison result and determining a credible probability value of the user to be verified, wherein the credible probability value is recorded in the user verification report.
14. A computer-readable storage medium, comprising a stored computer program, wherein when the computer program runs, the computer-readable storage medium controls an apparatus to execute the method for checking user identity information according to any one of claims 1 to 7.
15. An electronic device comprising one or more processors and memory storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of verifying user identity information of any one of claims 1 to 7.
CN202210963398.3A 2022-08-11 2022-08-11 User identity information verification method and system, storage medium and electronic equipment Pending CN115330392A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210963398.3A CN115330392A (en) 2022-08-11 2022-08-11 User identity information verification method and system, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210963398.3A CN115330392A (en) 2022-08-11 2022-08-11 User identity information verification method and system, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN115330392A true CN115330392A (en) 2022-11-11

Family

ID=83924799

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210963398.3A Pending CN115330392A (en) 2022-08-11 2022-08-11 User identity information verification method and system, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN115330392A (en)

Similar Documents

Publication Publication Date Title
US11336643B2 (en) Anonymizing biometric data for use in a security system
US20090138405A1 (en) System and method for performing secure online transactions
US11599669B2 (en) Image distribution using composite re-encrypted images
Adler Biometric system security
Nematollahi et al. Multi-factor authentication model based on multipurpose speech watermarking and online speaker recognition
JP7236042B2 (en) Face Recognition Application Using Homomorphic Encryption
Osho et al. AbsoluteSecure: a tri-layered data security system
WO2023142453A1 (en) Biometric identification method, server, and client
CN115330392A (en) User identity information verification method and system, storage medium and electronic equipment
Kannavara et al. Topics in biometric human-machine interaction security
Snijder Biometrics, surveillance and privacy
Olaniyi et al. Enhanced stegano-cryptographic model for secure electronic voting
Nanda et al. Towards Higher Levels of Assurance in Remote Identity Proofing
Trung et al. Secure eeg-based user authentication system integrated with robust watermarking
CN117576763A (en) Identity recognition method and system based on voiceprint information and face information in cloud environment
CN109299945B (en) Identity verification method and device based on biological recognition algorithm
Madan et al. The Effect of Vulnerability in Facial Biometric Authentication
Toli et al. Biometric solutions as privacy enhancing technologies
Blythe Biometric authentication system for secure digital cameras
CN117609965A (en) Upgrade data packet acquisition method of intelligent device, intelligent device and storage medium
CN115086045A (en) Data security protection method and device based on voiceprint forgery detection
CN116468214A (en) Evidence electronization method and electronic equipment based on fault event processing process
Tilton White Paper: Biometric Industry Standards
Maiorana Protection of biometric templates for signature-based authentication systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination