CN110750774A - Identity recognition method and device - Google Patents

Identity recognition method and device

Info

Publication number
CN110750774A
CN110750774A (application CN201911002444.8A)
Authority
CN
China
Prior art keywords
user
information
verification
voiceprint
verification result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911002444.8A
Other languages
Chinese (zh)
Other versions
CN110750774B (en)
Inventor
姜瑾
刘臣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jianlian Technology Guangdong Co ltd
Original Assignee
Shenzhen Zhongyi Weirong Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Zhongyi Weirong Technology Co Ltd filed Critical Shenzhen Zhongyi Weirong Technology Co Ltd
Priority to CN201911002444.8A priority Critical patent/CN110750774B/en
Publication of CN110750774A publication Critical patent/CN110750774A/en
Application granted granted Critical
Publication of CN110750774B publication Critical patent/CN110750774B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/38Payment protocols; Details thereof
    • G06Q20/40Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401Transaction verification
    • G06Q20/4016Transaction verification involving fraud or risk level assessment in transaction processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/38Payment protocols; Details thereof
    • G06Q20/40Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401Transaction verification
    • G06Q20/4014Identity check for transactions
    • G06Q20/40145Biometric identity checks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Accounting & Taxation (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Finance (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Telephonic Communication Services (AREA)

Abstract

Embodiments of the present disclosure provide an identity recognition method and device, wherein the method comprises the following steps: acquiring first voiceprint information of a user and second voiceprint information of a contact associated with the user; calculating a similarity value between the first voiceprint information and the second voiceprint information; if the similarity value is smaller than a preset first threshold, generating at least one verification policy to verify the user and the associated contact, and generating a verification result; and using the similarity value and the verification result as training parameters or correction parameters to train or correct an identification model.

Description

Identity recognition method and device
Technical Field
The present disclosure relates to the field of identity recognition, and in particular, to a method and an apparatus for identity recognition, an electronic device, and a storage medium.
Background
In the prior art, identity recognition is generally performed using human biometric signals, for example fingerprint or facial recognition. A common biometric identity recognition method extracts representative biometric features from the user to be recognized, compares the extracted features against a large pre-established library of biometric templates to obtain a similarity, and determines from that similarity whether the user is legitimate. These methods have several disadvantages. First, each recognition requires comparing the features to be recognized against every template in the library, which is computationally expensive and time-consuming. Second, biometric features such as faces and fingerprints are not strictly unique: close-looking siblings, twins, or other look-alikes threaten the security of authentication based on such features. Third, acquiring biometric signals is often too complex, requiring special equipment or dedicated collection sessions, which is costly and prevents remote recognition. In the financial field in particular, these defects make fraud easy: a fraudster may use several different identities to commit loan fraud, which harms the healthy development of the digital finance industry and adversely affects innovation in traditional finance.
Disclosure of Invention
In view of the above technical problems in the prior art, embodiments of the present disclosure provide an identity recognition method, an identity recognition device, an electronic device, and a computer-readable storage medium, to address the low security, heavy computation, and inconvenient biometric acquisition of prior-art identity recognition.
A first aspect of the embodiments of the present disclosure provides an identity recognition method, including:
acquiring first voiceprint information of a user and second voiceprint information of a contact associated with the user;
calculating a similarity value between the first voiceprint information and the second voiceprint information;
if the similarity value is smaller than a preset first threshold, generating at least one verification policy to verify the user and the associated contact, and generating a verification result;
and using the similarity value and the verification result as training parameters or correction parameters to train or correct an identification model.
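The four claimed steps can be sketched as follows. This is a minimal illustrative sketch: all function bodies, names (`identify`, `get_voiceprint`, `similarity`, `verify`), and data are hypothetical placeholders, not part of the patent.

```python
from typing import Optional, Tuple

# Toy voiceprint store; a real system would extract feature vectors from audio.
VOICEPRINTS = {"user": [0.2, 0.9], "contact": [0.1, 0.8]}

def get_voiceprint(entity_id: str):
    return VOICEPRINTS[entity_id]

def similarity(a, b) -> float:
    # Placeholder metric: inverse Euclidean distance mapped into (0, 1].
    dist = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return 1.0 / (1.0 + dist)

def verify(user_id: str, contact_id: str) -> str:
    # Placeholder for executing a generated verification policy
    # (e.g. a simultaneous voice call to user and contact).
    return "verified"

def identify(user_id: str, contact_id: str,
             threshold: float = 0.9) -> Tuple[float, Optional[str]]:
    first = get_voiceprint(user_id)       # step 1: user voiceprint
    second = get_voiceprint(contact_id)   # step 1: associated contact voiceprint
    score = similarity(first, second)     # step 2: similarity value
    # step 3: below the first threshold, run a verification policy
    result = verify(user_id, contact_id) if score < threshold else None
    # step 4 would feed (score, result) back as training/correction parameters
    return score, result
```

In this sketch the model-update step is only noted in a comment, since the patent leaves the training mechanism open.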
In some embodiments, the method further comprises: and acquiring the first voiceprint information through an intelligent voice assistant, and acquiring the second voiceprint information according to the first voiceprint information.
In some embodiments, the method further comprises: and acquiring real-time voiceprint information of a user through an intelligent voice assistant, and acquiring the first voiceprint information and the second voiceprint information according to the real-time voiceprint information.
Specifically, the method further comprises: the intelligent voice assistant sends the real-time voiceprint information to a voiceprint intelligent control module;
and when the voiceprint intelligent control module determines from its analysis that the user is in an active state, acquiring the user's prestored first voiceprint information and the second voiceprint information of the associated contacts from a graph database.
In some embodiments, each of the first voiceprint information and/or the second voiceprint information comprises at least one voiceprint feature value;
the calculating the similarity value between the first voiceprint information and the second voiceprint information specifically includes: and calculating the similarity value according to the first voiceprint characteristic value of the first voiceprint information and the second voiceprint characteristic value of the second voiceprint information.
In some embodiments, the similarity value is calculated using at least one of: the Euclidean distance, the Manhattan distance, the standardized Euclidean distance, and the cosine of the included angle.
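The four measures named above can be written in plain Python as follows; these are standard textbook formulas shown for illustration, operating on generic feature vectors rather than real voiceprint features.

```python
import math

def euclidean(a, b):
    """Straight-line distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    """Sum of absolute per-dimension differences."""
    return sum(abs(x - y) for x, y in zip(a, b))

def standardized_euclidean(a, b, std):
    """Euclidean distance with each dimension scaled by its standard deviation."""
    return math.sqrt(sum(((x - y) / s) ** 2 for x, y, s in zip(a, b, std)))

def cosine_similarity(a, b):
    """Cosine of the included angle between the two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)
```

Note that the first three are distances (smaller means more similar), while the cosine measure is a similarity (larger means more similar), so a threshold comparison must be oriented accordingly.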
In some embodiments, the method further comprises: generating at least one of the verification policies by detecting the state of the user and of the user's associated contact.
In some embodiments, the verification policy is specifically: initiating voice calls to the user and the user's associated contact simultaneously through the intelligent voice assistant.
Optionally, the verification policy at least includes control session parameters;
wherein the control session parameters include at least one of the time at which a voice dialog is initiated, the dialog target, and the dialog channel.
Optionally, at least a control dialog flow parameter is included in the verification policy;
wherein the control dialog flow parameters include at least one of dialog content and question-and-answer logic.
Further, the verification policy is also used for controlling the intelligent voice assistant to output voice-related information parameters; the voice-related information parameters include at least one of a voice input parameter and text information recognized from a voice signal.
In some embodiments, the method further comprises: and receiving verification feedback information, and generating a verification result according to the verification feedback information.
Specifically, the verification feedback information is a voice-related information parameter; wherein the voice-related information parameter includes at least one of a voice input parameter and text information recognized from a voice signal.
In some embodiments, the verification policy is specifically: acquiring the spatial data of the user and of the user's associated contact from a graph database for verification.
Specifically, the spatial data includes at least one of city information, longitude and latitude information, and network type information.
Further, the method further comprises: calculating a spatial correlation between the user and the associated contact; and if the number of times the spatial correlation falls below a preset second threshold exceeds a preset value, combining this with the similarity value of the first voiceprint information and the second voiceprint information and outputting a verification result.
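The spatial-correlation check described above can be sketched as a simple counting rule. The function name `spatial_verify` and the pass/fail combination rule are illustrative assumptions; the patent does not specify how the voiceprint similarity is combined with the correlation count.

```python
def spatial_verify(correlations, second_threshold: float,
                   max_low_count: int, voiceprint_similarity: float,
                   first_threshold: float) -> str:
    """Count how often the spatial correlation falls below the second
    threshold; past the allowed count, combine with the voiceprint
    similarity to output a verification result."""
    low_count = sum(1 for c in correlations if c < second_threshold)
    if low_count > max_low_count:
        # Hypothetical combination rule: fail only if the voiceprint
        # similarity is also below the first threshold.
        return "fail" if voiceprint_similarity < first_threshold else "pass"
    return "pass"
```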
In some embodiments, the authentication policy includes at least an authentication parameter; wherein the verification parameters include at least one of an initiation verification time, a verification target, and verification content.
In some embodiments, the method further comprises: generating verification feedback information according to the acquired feedback information of the user and the user's associated contact; and generating the verification result according to the verification feedback information.
Optionally, the verification feedback information at least comprises user feedback related information and biometric information;
the user feedback related information is specifically user feedback behavior information and/or a user feedback information body.
In some embodiments, the method further comprises: calculating the spatiotemporal correlation of feedback behavior information and the biometric information similarity according to the acquired feedback information of the user and the user's associated contact; and generating the verification result according to the spatiotemporal correlation and the biometric information similarity.
Preferably, the method further comprises: generating an information verification score and/or a cross-verification score according to the obtained feedback information of the user and the user's associated contact.
Preferably, the method further comprises: generating the verification result according to the verification feedback information, the information verification score and/or the cross-verification score; alternatively,
generating the verification result according to the spatiotemporal correlation, the biometric information similarity, the information verification score and/or the cross-verification score.
In some embodiments, using the similarity value and the verification result as training parameters or correction parameters specifically includes: using the similarity value and the verification result either directly or indirectly as training parameters or correction parameters.
Specifically, indirect use as training or correction parameters means: processing the similarity value and the verification result to obtain new variable data, and using the new variable data as training parameters or correction parameters.
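One way the "new variable data" could be derived is shown below. The specific combination (a weighted confidence feature plus an agreement flag) is a hypothetical example, not a formula from the patent.

```python
def derive_training_parameters(similarity: float,
                               verification_passed: bool) -> dict:
    """Process a raw similarity value and verification result into
    derived variables usable as training/correction parameters."""
    # Hypothetical weighted blend of the two signals.
    confidence = 0.7 * similarity + 0.3 * (1.0 if verification_passed else 0.0)
    # Flag recording whether the two signals agreed with each other.
    agreement = (similarity >= 0.5) == verification_passed
    return {"confidence": confidence, "agreement": agreement}
```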
A second aspect of the embodiments of the present disclosure provides an identity recognition apparatus, including:
a voiceprint acquisition unit, configured to acquire first voiceprint information of a user and second voiceprint information of a contact associated with the user;
a calculation unit, configured to calculate a similarity value between the first voiceprint information and the second voiceprint information;
a verification unit, configured to, if the similarity value is smaller than a preset first threshold, generate at least one verification policy to verify the user and the associated contact, and generate a verification result;
and a precision-improving unit, configured to use the similarity value and the verification result as training parameters or correction parameters to train or correct an identification model.
In some embodiments, the voiceprint acquisition unit is specifically configured to acquire the first voiceprint information and then acquire the second voiceprint information according to the first voiceprint information.
In some embodiments, the voiceprint acquisition unit is specifically configured to acquire real-time voiceprint information of a user by using an intelligent voice assistant, and acquire the first voiceprint information and the second voiceprint information according to the real-time voiceprint information.
Specifically, the voiceprint acquisition unit sends the real-time voiceprint information to a voiceprint intelligent control module through the intelligent voice assistant; and when the voiceprint intelligent control module determines from its analysis that the user is in an active state, it acquires the user's prestored first voiceprint information and the second voiceprint information of the associated contacts from a graph database.
In some embodiments, the first voiceprint information and/or the second voiceprint information acquired by the voiceprint acquisition unit each include at least one voiceprint characteristic value;
the calculating unit is specifically configured to calculate the similarity value according to a first voiceprint feature value of the first voiceprint information and a second voiceprint feature value of the second voiceprint information.
In some embodiments, the method of calculating the similarity value by the calculation unit comprises: at least one of a Euclidean distance method, a Manhattan distance method, a standardized Euclidean distance method, and an included angle cosine method.
In some embodiments, the verification unit is configured to generate at least one of the verification policies by detecting the state of the user and of the user's associated contact.
In some embodiments, the verification policy generated by the verification unit is specifically: initiating voice calls to the user and the user's associated contact simultaneously through the intelligent voice assistant.
Optionally, the verification policy generated by the verification unit at least includes control session parameters; wherein the control session parameters include at least one of the time at which a voice dialog is initiated, the dialog target, and the dialog channel.
Optionally, the verification policy generated by the verification unit at least includes control dialog flow parameters; wherein the control dialog flow parameters include at least one of dialog content and question-and-answer logic.
Further, the verification policy generated by the verification unit is also used for controlling the intelligent voice assistant to output voice-related information parameters; the voice-related information parameters include at least one of a voice input parameter and text information recognized from a voice signal.
In some embodiments, the verification unit is further configured to receive verification feedback information, and generate a verification result according to the verification feedback information.
In some embodiments, the verification feedback information received by the verification unit is specifically a voice-related information parameter; wherein the voice-related information parameter includes at least one of a voice input parameter and text information recognized from a voice signal.
Further, the verification policy generated by the verification unit specifically includes: acquiring the spatial data of the user and of the user's associated contact from a graph database for verification.
In some embodiments, the spatial data includes at least one of city information, longitude and latitude information, and network type information.
In some embodiments, the verification unit comprises:
a spatial correlation calculation subunit, configured to calculate the spatial correlation between the user and the associated contact;
and a verification subunit, configured to, if the number of times the spatial correlation falls below the preset second threshold exceeds a preset value, combine this with the similarity value of the first voiceprint information and the second voiceprint information and output a verification result.
Optionally, the verification policy generated by the verification unit at least contains a verification parameter; wherein the verification parameters include at least one of an initiation verification time, a verification target, and verification content.
In some embodiments, the verification unit is specifically configured to: generate verification feedback information according to the acquired feedback information of the user and the user's associated contact; and generate the verification result according to the verification feedback information.
Preferably, the verification feedback information at least comprises user feedback related information and biometric information; the user feedback related information is specifically user feedback behavior information and/or a user feedback information body.
Preferably, the verification unit is specifically configured to calculate the spatiotemporal correlation of feedback behavior information and the biometric information similarity according to the obtained feedback information of the user and the user's associated contact, and to generate the verification result according to the spatiotemporal correlation and the biometric information similarity.
The verification unit is further configured to generate an information verification score and/or a cross-verification score according to the obtained feedback information of the user and the user's associated contact.
Preferably, the verification unit is further configured to generate the verification result according to the verification feedback information, the information verification score and/or the cross-verification score;
or to generate the verification result according to the spatiotemporal correlation, the biometric information similarity, the information verification score and/or the cross-verification score.
In some embodiments, the precision-improving unit is specifically configured to use the similarity value and the verification result directly or indirectly as a training parameter or a correction parameter.
Specifically, the precision-improving unit is specifically configured to process the similarity value and the verification result to obtain new variable data, and use the new variable data as a training parameter or a correction parameter.
A third aspect of the embodiments of the present disclosure provides an electronic device, including:
a memory and one or more processors;
wherein the memory is communicatively coupled to the one or more processors, and the memory stores instructions executable by the one or more processors, and when the instructions are executed by the one or more processors, the electronic device is configured to implement the method according to the foregoing embodiments.
A fourth aspect of the embodiments of the present disclosure provides a computer-readable storage medium having stored thereon computer-executable instructions, which, when executed by a computing device, may be used to implement the method according to the foregoing embodiments.
A fifth aspect of embodiments of the present disclosure provides a computer program product comprising a computer program stored on a computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, are operable to implement a method as in the preceding embodiments.
According to the disclosed method and device, the voiceprint information of the user and of the user's associated contact is compared, at least one verification policy is generated, and the identities of the user and the associated contact are recognized and verified; meanwhile, the resulting voiceprint similarity value and verification result are used as training or correction parameters for the identification model. This achieves convenient and fast identity recognition, greatly improves its accuracy and security, and provides a good user experience.
Drawings
The features and advantages of the present disclosure will be more clearly understood by reference to the accompanying drawings, which are illustrative and not to be construed as limiting the disclosure in any way, and in which:
FIG. 1 is a schematic diagram of an existing knowledge-graph and artificial intelligence based risk control system, according to some embodiments of the present disclosure;
FIG. 2 is a schematic diagram of an existing financial APP loan page, according to some embodiments of the disclosure;
FIG. 3 is a schematic diagram of an intelligent voice assistant architecture, according to some embodiments of the present disclosure;
FIG. 4 is a diagram of an existing graph database structure, shown in accordance with some embodiments of the present disclosure;
FIG. 5 is a schematic diagram of a graph database structure incorporating a voiceprint entry, according to some embodiments of the present disclosure;
FIG. 6 is a schematic illustration of a knowledge-graph and artificial intelligence based risk control system architecture, according to some embodiments of the present disclosure;
FIG. 7 is a schematic flow chart diagram illustrating a method of identity recognition in accordance with some embodiments of the present disclosure;
FIG. 8 is a schematic diagram illustrating the operation of a policy module and a verification module according to some embodiments of the present disclosure;
FIG. 9 is a schematic diagram of another graph database structure shown in accordance with some embodiments of the present disclosure;
FIG. 10 is a schematic diagram of a policy module operation according to some embodiments of the present disclosure;
FIG. 11 is a schematic diagram illustrating a verification module verification step according to some embodiments of the present disclosure;
FIG. 12 is a block diagram of an apparatus for identification in accordance with some embodiments of the present disclosure;
FIG. 13 is a schematic structural diagram of an electronic device in accordance with some embodiments of the present disclosure.
Detailed Description
In the following detailed description, numerous specific details of the disclosure are set forth by way of examples in order to provide a thorough understanding of the relevant disclosure. However, it will be apparent to one of ordinary skill in the art that the present disclosure may be practiced without these specific details. It should be understood that the use of the terms "system," "apparatus," "unit" and/or "module" in this disclosure is a method for distinguishing between different components, elements, portions or assemblies at different levels of sequence. However, these terms may be replaced by other expressions if they can achieve the same purpose.
It will be understood that when a device, unit or module is referred to as being "on" … … "," connected to "or" coupled to "another device, unit or module, it can be directly on, connected or coupled to or in communication with the other device, unit or module, or intervening devices, units or modules may be present, unless the context clearly dictates otherwise. For example, as used in this disclosure, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present disclosure. As used in the specification and claims of this disclosure, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. In general, the terms "comprises" and "comprising" indicate the inclusion of the explicitly identified features, integers, steps, operations, elements, and/or components, without excluding other features, integers, steps, operations, elements, and/or components.
These and other features and characteristics of the present disclosure, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will be better understood by reference to the following description and drawings, which form a part of this specification. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the disclosure. It will be understood that the figures are not drawn to scale.
Various block diagrams are used in this disclosure to illustrate various variations of embodiments according to the disclosure. It should be understood that the foregoing and following structures are not intended to limit the present disclosure. The protection scope of the present disclosure is subject to the claims.
As shown in fig. 1, a schematic diagram of an existing risk control system based on a knowledge graph and artificial intelligence, such as a company's intelligent risk control system, is provided. The user submits a financial application through an internet front-end system, such as an SDK (software development kit), an H5 page, or an internet financial APP. The financial APP may also provide an intelligent voice assistant function, such as the intelligent voice assistant represented by the lower-right icon in the APP module shown in fig. 1. This function greatly reduces the burden of entering personal data through a mobile phone soft keyboard: the user can complete data entry by voice, which is very useful when the information is not convenient to type. The application is then routed over a wired or wireless communication network to a task matching server, which automatically matches it to a different financial service provider; typically, the matching server is owned by a third-party financial institution. Further, application data entering the financial service system is preprocessed and then stored in a graph database; the graph database, which may be a NEO4J graph database, stores a large amount of knowledge-graph data about financial business. Further, the application may generate a risk control analysis task, which obtains the relationship data associated with the application from the graph database by way of graph queries. The relationship data is input to a variable calculation module to obtain the corresponding evaluation variables.
Further, the evaluation variables are input into an anti-fraud evaluation model to complete anti-fraud identification; the anti-fraud evaluation model may be a machine-learned evaluation model, for example a decision-tree-based GBDT model or a neural-network-based deep model. Further, the anti-fraud recognition results and the variable data are input to an anti-fraud and risk-control system module, which completes the evaluation of the application based on the corresponding decision flow and optional manual intervention.
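The variable-calculation-then-model-scoring flow described above can be sketched in miniature. Everything here is a hypothetical placeholder: the feature names, the linear stand-in for the GBDT/deep model, and the decision threshold are illustrative only.

```python
def risk_pipeline(relation_data: dict) -> str:
    """Toy version of the flow: graph relation data -> evaluation
    variables -> anti-fraud score -> decision-flow outcome."""
    # Variable calculation: turn raw graph relations into model features.
    features = {
        "contact_count": len(relation_data.get("contacts", [])),
        "shared_device": int(bool(relation_data.get("shared_device", False))),
    }
    # Stand-in for the anti-fraud model (a trained GBDT or deep model
    # in practice); here a fixed linear score for illustration.
    score = 0.4 * features["shared_device"] + 0.01 * features["contact_count"]
    # Decision flow: route risky applications to manual review.
    return "manual_review" if score >= 0.4 else "auto_pass"
```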
As shown in fig. 2, a schematic diagram of a prior-art financial APP loan page is provided. When a loan applicant applies for an installment loan, a large amount of personal information, such as e-mail, city of residence, and detailed address, needs to be filled in. Before the intelligent voice assistant function was available, the loan applicant entered personal data manually through the mobile phone soft keyboard; with the intelligent voice assistant added, the loan applicant can enter personal data by interacting with the assistant.
As shown in FIG. 3, a schematic diagram of an intelligent voice assistant is provided. The assistant receives the loan applicant's voice input through a microphone; a voice recognition module recognizes the received voice and extracts text formatted into particular types such as domain, intent, and slot, then compares the extracted text to a list of possible user commands stored in an agent definition structure (classifier) to determine the command that most likely matches the user's intent. If a command matching the user's intent is found, the operation requested in the voice is performed. If not, a confidence score is determined by matching against one or more of the classifiers, a classifier is selected based on the confidence score, and the command most likely matching the user's intent is determined from the commands associated with that classifier; the matching may be based on one or more of statistics, probabilistic methods, decision trees, other rules, or other suitable matching criteria. If the confidence score is greater than or equal to a set threshold, the operation requested in the voice is executed; if the confidence score is below the threshold, the classifier is considered to contain nothing related to the user's voice, the user enters the request with a labeling tool, and after the system obtains the user's text input, the classifier model is updated and the user's request is executed.
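The confidence-threshold dispatch described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the similarity measure (`difflib.SequenceMatcher`) and the threshold value are assumptions standing in for whatever statistical or decision-tree matcher a real assistant would use.

```python
from difflib import SequenceMatcher

CONFIDENCE_THRESHOLD = 0.75  # assumed value; the patent leaves the threshold unspecified

def best_command(text, commands):
    """Score every known command against the recognized text; return (score, command)."""
    scored = [(SequenceMatcher(None, text, c).ratio(), c) for c in commands]
    return max(scored)

def dispatch(text, commands):
    """Execute the best-matching command if its confidence clears the threshold,
    otherwise fall back (None) to manual labeling and a classifier-model update."""
    confidence, command = best_command(text, commands)
    if confidence >= CONFIDENCE_THRESHOLD:
        return command
    return None
```

An exact or near-exact utterance returns the matched command; unrelated input falls through to the labeling path.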
As shown in fig. 4, a structural diagram of a prior-art graph database is given. It stores financial data for anti-fraud or risk control in a manner different from a conventional relational database: a graph database stores data primarily in terms of real-world entities and relationships. Different entities correspond to different nodes, such as the circular and polygonal nodes shown in FIG. 4; connections between entities are expressed through relationships, such as the connections between the nodes shown in fig. 4. The nodes and relationships further carry different attributes defining the entity types and relationship types. In the financial big-data system of the present disclosure, finance-related personal data is stored in the graph database; as shown in fig. 4, "Zhang Ming" and "Li Qiang" are two personal entities, each connected to other entities such as "mobile phone number" or "company" through relationships such as "works at" or "owns phone". One point that distinguishes this knowledge graph from knowledge graphs in other fields is that the financial graph data stores a large number of "application" entities, each recording a financial service application of one person, such as a credit loan. In a practical financial graph database, the number of nodes and edges can reach hundreds of millions.
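The entity-relationship storage of fig. 4 can be sketched in memory as typed nodes plus labeled edges. This is an illustrative model only; the node names, relationship labels, and phone number below are invented and do not reflect an actual Neo4j schema.

```python
# Nodes keyed by id, each carrying a type attribute (Person, Company, Phone, ...).
nodes = {
    "zhang_ming": {"type": "Person"},
    "acme_corp": {"type": "Company"},
    "13800000000": {"type": "Phone"},
}

# Edges as (source, relationship-type, target) triples.
edges = [
    ("zhang_ming", "WORKS_AT", "acme_corp"),
    ("zhang_ming", "OWNS_PHONE", "13800000000"),
]

def neighbors(node, rel=None):
    """Entities connected to `node`, optionally filtered by relationship type --
    a toy stand-in for the graph queries the risk analysis task performs."""
    return [t for s, r, t in edges if s == node and (rel is None or r == rel)]
```

A production system would issue equivalent queries (e.g. Cypher) against the graph database rather than scan an edge list.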
Similar to human fingerprints and DNA, a voiceprint is a unique personal biometric feature, and the features characterizing a person's voiceprint information can be many-sided: acoustic features related to the pronunciation mechanism (e.g., spectrum, cepstrum, formants, pitch, reflection coefficients), nasal sounds, deep breath sounds, mutters, laughter, etc.; semantics, phrasing, pronunciation, and language habits influenced by social and economic status, education level, and place of birth; or personal traits such as rhythm, speed, intonation, and volume influenced by one's parents. From the standpoint of mathematical modeling, the features usable by a voiceprint recognition model include: (1) acoustic features (cepstrum); (2) lexical features (speaker-dependent word N-grams, phoneme N-grams); (3) prosodic features (pitch and energy "poses" described by N-grams); (4) language, dialect, and accent information, etc. Which type of voiceprint feature a specific system uses, whether a combination of multiple types is used, or whether other types of features represent the voiceprint information may be decided according to the specific situation, and is not specifically limited herein.
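The cepstral acoustic feature in item (1) above can be illustrated numerically. The sketch below computes a real cepstrum (inverse transform of the log magnitude spectrum) with a naive pure-Python DFT, chosen for self-containment rather than speed; real systems would use an FFT library and mel-warped filter banks (MFCCs).

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (O(N^2)), adequate for short illustrative signals."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def real_cepstrum(signal):
    """Real cepstrum: inverse DFT of the log magnitude spectrum.
    Cepstral coefficients of this kind underlie acoustic voiceprint features."""
    spectrum = dft(signal)
    log_mag = [math.log(abs(s) + 1e-12) for s in spectrum]  # epsilon avoids log(0)
    N = len(log_mag)
    # Inverse DFT, keeping the real part (log_mag is real and symmetric).
    return [sum(log_mag[k] * cmath.exp(2j * math.pi * k * n / N)
                for k in range(N)).real / N
            for n in range(N)]
```

The output has one cepstral coefficient per input sample; a voiceprint system would keep only the first few coefficients per frame as the feature vector.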
Voiceprint recognition, one of the biometric identification techniques, is also referred to as speaker recognition, and includes speaker identification and speaker verification. Voiceprint recognition converts the acoustic signal into an electrical signal, which is then recognized by a computer. Different tasks and applications may use different voiceprint recognition techniques: identification techniques may be required to narrow a criminal investigation, while verification techniques may be required for banking transactions. Unlike other biometric technologies (such as face recognition, fingerprint recognition, palm-print recognition, and iris recognition), a voiceprint cannot be lost or changed and requires nothing to be memorized; recognition requires only voice input through a microphone or telephone, with no special equipment or specific text content. Data acquisition is extremely convenient and inexpensive, making voiceprints an economical, reliable, simple, safe, and effective means of identification. These advantages form the basis for the widespread use of voiceprint recognition.
As shown in fig. 5, which illustrates a graph database structure with added voiceprint entries according to some embodiments of the present disclosure: when a loan applicant uses the intelligent voice assistant to enter information, the system on the one hand provides the intelligent voice assistant service, for example guidance in filling out the loan data, while at the same time extracting the applicant's voiceprint features from the received voice input and storing them in the corresponding graph database node. Each individual may have multiple voiceprint entries, because each person's voice varies slightly across environments (content, manner, physical condition, time, age, etc.). As shown in fig. 5, Zhang Ming contains two voiceprint entries, "voiceprint 1" and "voiceprint 2"; Zhang Cheng contains one voiceprint entry, "voiceprint 3"; Li Qiang contains one voiceprint entry, "voiceprint 4"; and Wang Dong has no voiceprint information (probably because Wang Dong has not yet used the intelligent voice assistant, so his voiceprint has not yet been extracted).
As shown in fig. 6, a structural diagram of a risk control system based on a knowledge graph and artificial intelligence in some embodiments of the present disclosure is given; it further optimizes the existing risk control system by adding a voiceprint intelligent control module. The voiceprint intelligent control module analyzes the existing voiceprint information of loan applicants; when two voiceprints with high similarity are found, it initiates voice calls through the intelligent voice assistant to the two applicants corresponding to those voiceprints, judges whether the two are the same person according to the synchronization of the users' replies and the users' associated contacts, and provides suggestions to the identification module or the anti-fraud and risk control system module, further reducing the risk of fraud. The implementation steps of the method are analyzed in detail below.
Fig. 7 is a schematic diagram illustrating an identity recognition method according to some embodiments of the present disclosure. In some embodiments, the method of identification is performed by an identification system. As shown in fig. 7, the method for identifying identity includes the following steps:
S202, obtain first voiceprint information of the user and second voiceprint information of the user's associated contacts.
Specifically, the intelligent voice assistant receives real-time voiceprint information of a user and sends it to the voiceprint intelligent control module. When the voiceprint intelligent control module receives the real-time voiceprint information and determines that the user is currently in an active state, it obtains the user's pre-stored first voiceprint information and the second voiceprint information of the user's associated contacts from the graph database. The active state may be any of a series of preset behaviors, such as the user having recently made a loan application/record or opened the financial APP.
In some embodiments, the voiceprint information pre-stored by the user and obtained from the graph database is used as the first voiceprint information, and the voiceprint information of the user's associated contacts is used as the second voiceprint information. Since there may be one or more associated contacts, the second voiceprint information may correspondingly also include one or more entries.
In some embodiments, the real-time voiceprint information of the user received by the intelligent voice assistant can be used as the first voiceprint information, which is sent to the voiceprint intelligent control module; when the module receives the first voiceprint information and determines that the user is currently active, it need only obtain the second voiceprint information of the user's associated contacts from the graph database.
In some embodiments, taking the graph database structure shown in fig. 5 as an example, if it is detected that Zhang Ming has recently been active, the voiceprint intelligent control module obtains Zhang Ming's voiceprint 1 and voiceprint 2 data, the voiceprint 3 data of his first-degree contact Zhang Cheng, and the voiceprint 4 data of Li Qiang from the graph database. The operation steps are described taking first-degree contacts as an example; Zhang Cheng and Li Qiang are Zhang Ming's first-degree contacts.
S204, calculate the similarity value of the first voiceprint information and the second voiceprint information.
Specifically, the first voiceprint information and/or the second voiceprint information comprise at least one voiceprint feature value. S204 specifically comprises: calculating the similarity value from the first voiceprint feature values of the first voiceprint information and the second voiceprint feature values of the second voiceprint information. Assuming the system characterizes voiceprint information by three feature values, the feature values obtained from the graph database in S202 are: Zhang Ming's voiceprint 1 feature values (ZM1, ZM2, ZM3), Zhang Cheng's voiceprint 3 feature values (ZC1, ZC2, ZC3), and Li Qiang's voiceprint 4 feature values (LQ1, LQ2, LQ3); the similarity values between Zhang Ming's features and those of Zhang Cheng and Li Qiang are computed as Szm-zc and Szm-lq respectively. The method for calculating the similarity value comprises at least one of the Euclidean distance method, the Manhattan distance method, the standardized Euclidean distance method, and the cosine method; the algorithm is not limited and can be selected according to the actual situation. Taking the Euclidean distance as an example, the formula for calculating the voiceprint feature similarity value is:
Szm-zc = sqrt((ZM1 - ZC1)^2 + (ZM2 - ZC2)^2 + (ZM3 - ZC3)^2)
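A minimal sketch of this Euclidean similarity calculation in S204; the feature triples below are invented placeholder values standing in for the (ZM1, ZM2, ZM3)-style vectors stored in the graph database.

```python
import math

def euclidean_distance(v1, v2):
    """Euclidean distance between two voiceprint feature vectors; smaller = more similar."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))

# Illustrative feature values (not from the patent).
zm = (0.31, 0.72, 0.55)   # Zhang Ming, voiceprint 1
zc = (0.30, 0.70, 0.56)   # Zhang Cheng, voiceprint 3
lq = (0.90, 0.10, 0.20)   # Li Qiang, voiceprint 4

S_zm_zc = euclidean_distance(zm, zc)  # small: suspiciously similar voiceprints
S_zm_lq = euclidean_distance(zm, lq)  # large: clearly different speakers
```

In S206, each distance would then be compared against the preset first threshold M.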
S206, if the similarity value is smaller than a preset first threshold, generate at least one verification strategy to verify the user and the user's associated contacts, and generate a verification result.
Specifically, the similarity values between the user's voiceprint and the first-degree contacts' voiceprints obtained in S204 are each compared with a preset first threshold M, which may be set according to historical statistical data or empirical values.
In some embodiments, taking the Euclidean distance of S204 as the similarity algorithm: if the similarity value of Zhang Ming's and Zhang Cheng's voiceprint data satisfies Szm-zc < M, the similarity between the two voiceprints is considered high and Zhang Ming and Zhang Cheng may be the same person, so at least one verification strategy is generated for further verification; if Szm-zc ≥ M, they cannot be judged to be the same person, identity recognition ends, and step S208 is executed.
More specifically, as described above, the similarity of Zhang Ming's and Zhang Cheng's voiceprint data is high, and it is possible that they are the same person using a false identity for a fraudulent loan. To verify whether Zhang Ming and Zhang Cheng are the same person, the system performs verification through the cooperation of the policy module and the verification module.
In some embodiments, the method further comprises: the policy module generates at least one verification strategy by detecting the state of the user and of the user's associated contacts. The verification strategy may specifically be: initiating voice calls simultaneously to the user and the user's associated contacts through the intelligent voice assistant.
In some embodiments, the policy module generates the verification strategy by detecting the activities of Zhang Ming and Zhang Cheng; for example, when Zhang Ming and Zhang Cheng are logged into the financial APP at the same time, the policy module initiates a first strategy. Specifically, the first strategy is to have the intelligent voice assistant initiate voice calls to Zhang Ming and Zhang Cheng simultaneously, asking each a set of personal, private questions defined in the first strategy (e.g., who your referrer is, etc.). The verification module then counts, across attempts, whether voice replies from Zhang Ming and Zhang Cheng are received simultaneously. In a simple consistency detection method, if voice replies from both are received at the same time, the two are considered different persons, because one person generally cannot answer two voice calls at once; if the two replies are never synchronized across attempts, the two accounts may belong to the same person and a fraud risk exists, as shown in Table 1. The synchronicity in Table 1 is only a simple consistency verification taking the time the intelligent voice assistant initiated the calls as input.
Attempt     Replies synchronized?
1st         No
2nd         No
3rd         No
4th         No
5th         No
……          No

Table 1: Statistics of reply synchronization
In some embodiments, as shown in FIG. 8, a schematic diagram of the operation of the policy module and the verification module is presented. The policy module actively initiates a dialog with the user through the intelligent voice assistant and inputs further parameters into the verification module; here, the time the assistant initiated the dialog and whether the call was answered are just one kind of variable input into the consistency recognition model in the verification module. The consistency recognition model can be trained on a large amount of training data using machine learning to produce a consistency result. The policy module can generate at least one verification strategy; a verification strategy comprises at least a set of control-dialog parameters, including at least one of the time to initiate a voice dialog, the dialog target, and the dialog channel. Further, a verification strategy also comprises at least a set of control-dialog-flow parameters, including at least one of dialog content and question-and-answer logic, for example the content flow of the initiated dialog, whether to receive voice after the dialog, whether to end the dialog, the user guidance flow, and so on. Further, a strategy may also include other auxiliary information, such as collecting user information through an image presented by the APP, or control of the APP.
Preferably, the verification strategy may further control the intelligent voice assistant to output verification feedback information, which may specifically be a voice-related information parameter. Further, the intelligent voice assistant receives user input and generates at least one of a set of parameters and a set of information: the parameters are voice input parameters, such as user input time, the user's input channel, and the user's geographical position; the information is the text recognized from the voice signal. Still further, the intelligent voice assistant's input may also include other enhancement information, such as information entered through a user interface.
Further, the intelligent voice assistant inputs the parameters and information into the verification module, which outputs several output variables indicating whether the authenticity of the relevant user's identity is confirmed after interaction with the intelligent voice assistant, i.e., the verification result.
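The parameter groups a verification strategy carries can be sketched as a plain data structure. All field names below are illustrative assumptions, not terms from the patent; they simply group the control-dialog, control-dialog-flow, and auxiliary parameters enumerated above.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VerificationPolicy:
    # Control-dialog parameters: when, to whom, and over which channel.
    start_time: str                 # time to initiate the voice dialog
    targets: List[str]              # dialog targets: user + associated contacts
    channel: str = "voice"          # dialog channel
    # Control-dialog-flow parameters: what is asked and in what order.
    dialog_content: Optional[str] = None            # scripted question content
    qa_logic: List[str] = field(default_factory=list)  # question-and-answer flow
    # Optional auxiliary information, e.g. APP image-based collection.
    auxiliary: dict = field(default_factory=dict)
```

A policy instance would be handed to the intelligent voice assistant, which executes the dialog and forwards the resulting parameters and recognized text to the verification module.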
S208, use the similarity value and the verification result as training parameters or correction parameters to train or correct a recognition model.
In some embodiments, the voiceprint similarity values obtained in S204 and S206, the intelligent voice assistant dialog synchronization data, and the user feedback information obtained through the assistant dialogs are used directly or indirectly as training or correction parameters to train or correct a recognition model. Indirect use means: processing the voiceprint similarity values, dialog synchronization data, and user feedback information to obtain a new set of variable data, which is then used as the training or correction parameters.
In this embodiment, the identity recognition method combining the intelligent voice assistant with the knowledge graph further verifies the user's identity; to a great extent it can identify whether a user poses a fraud risk, thereby reducing the probability of fraud.
In some optional embodiments, the voiceprint similarity values, the intelligent voice assistant dialog synchronization data, and the user feedback information obtained through the assistant dialogs are input into the risk-control and anti-fraud recognition model to correct its calculation results and improve its accuracy. In one embodiment, the verification results of S202-S206 may be used directly to generate an anti-fraud output; that is, when a user cannot pass the verification module, the verification result directly outputs that the current user is a fraudster.
In some alternative embodiments, as shown in FIG. 9, a schematic diagram of another graph database structure is presented; the node information for each person in the graph database may differ from the structure shown in fig. 5. For example, Zhang Ming's node information includes: voiceprint 1, voiceprint 2, phone 1, location 1, biometric posture 1, etc.; Li Qiang's node information includes: voiceprint 4, location 2, biometric posture 2, etc.; Zhang Cheng's node information includes: voiceprint 3, phone 3, location 3, etc.; Wang Dong's node information includes: voiceprint 5, phone 4, location 4, biometric posture 4, etc. The location information refers to information about a network visitor accurately positioned using a multi-dimensional geographic location information base comprising IP address, base station, Wi-Fi, identity card, mobile phone number, bank card, etc., and includes city information, longitude/latitude information, network type information, and the like; in one system it is represented by longitude and latitude. The biometric posture information refers to multiple indicators of the user during use, collected through the client, such as touch pressure, device elevation angle, finger contact area, linear acceleration, and contact-point spacing. Because different devices differ in capability (a low-end device may collect only touch pressure and device elevation angle but not linear acceleration, while a high-end device can collect touch pressure, device elevation angle, finger contact area, linear acceleration, and contact-point spacing), the biometric posture is determined according to the actual capability of the device in use. The biometric posture information may be represented by a single indicator, such as the device elevation angle, or characterized by multiple indicators, such as device elevation angle and contact spacing; this is not limited herein.
Correspondingly, generating at least one verification strategy in S206 to verify the user and the user's associated contacts may specifically include: the policy module outputs a verification strategy that obtains the spatial data of the user and of the user's associated contacts from the graph database for verification; the spatial data includes at least one of city information, longitude/latitude information, and network type information.
In some embodiments, the method of obtaining the spatial data of the user and the user's associated contacts from the graph database for verification comprises: calculating the spatial correlation between the user and the associated contacts; and if the number of times the spatial correlation is smaller than a preset second threshold exceeds a preset value, outputting a verification result in combination with the similarity value of the first and second voiceprint information.
Specifically, the location information of the user and of the user's associated contacts is obtained from the graph database. In some embodiments, the policy module obtains from the graph database Zhang Ming's location 1 longitude/latitude data (LonZM, LatZM), Li Qiang's location 2 longitude/latitude data (LonLQ, LatLQ), and Wang Dong's location longitude/latitude data (LonWD, LatWD), and sends them to the verification module.
Further, the verification module calculates the spatial correlations of Zhang Ming with Li Qiang and with Wang Dong, Szm-lq and Szm-wd. There are many ways to calculate the spatial correlation between two points, which are not limited herein; in one embodiment, the spherical law of cosines may be used, with longitude and latitude in degrees:

Szm-lq = R * arccos(C) * Pi / 180

where C = sin(LatZM) * sin(LatLQ) + cos(LatZM) * cos(LatLQ) * cos(LonZM - LonLQ), and R is the earth's mean radius, 6371.004 km.

In the same way,

Szm-wd = R * arccos(C) * Pi / 180

where C = sin(LatZM) * sin(LatWD) + cos(LatZM) * cos(LatWD) * cos(LonZM - LonWD), and R is the earth's mean radius, 6371.004 km.
The verification module then compares the spatial correlations Szm-lq and Szm-wd with a preset second threshold M', which may be set according to historical statistical data or empirical values. If Szm-lq < M', the spatial correlation between Zhang Ming and Li Qiang is considered high.
Further, the number of consecutive times the spatial correlation falls below the threshold within a period of time is counted: since the spatial correlation of two accounts at a single moment has some contingency (for example, Zhang Ming and Li Qiang may simply be attending the same gathering), it may be necessary to count the consecutive times, over a period, that the spatial correlation of the two accounts is smaller than the threshold; when this count exceeds a preset value N, the spatiotemporal correlation of the two accounts is considered high. The preset value N may be set according to historical statistical data or empirical values.
In one embodiment, assume the spatial correlation Szm-lq of Zhang Ming and Li Qiang is calculated once a day; if Szm-lq is smaller than M' for N consecutive calculations, the spatiotemporal correlation between Zhang Ming and Li Qiang is considered high, and they may be the same person.
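The spatial-correlation calculation and the N-consecutive-days rule above can be sketched as follows. The distance function uses the spherical law of cosines in its radian form, which is numerically equivalent to the degree-based formula for Szm-lq; the clamp on C guards against floating-point values slightly outside [-1, 1].

```python
import math

EARTH_RADIUS_KM = 6371.004  # mean earth radius, as in the formulas above

def spherical_distance(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points given in degrees."""
    la1, lo1, la2, lo2 = map(math.radians, (lat1, lon1, lat2, lon2))
    c = (math.sin(la1) * math.sin(la2)
         + math.cos(la1) * math.cos(la2) * math.cos(lo1 - lo2))
    return EARTH_RADIUS_KM * math.acos(max(-1.0, min(1.0, c)))

def high_spatiotemporal_correlation(daily_distances, threshold_km, n_required):
    """True if the distance stays below the threshold for at least
    n_required consecutive days (the N-consecutive-checks rule)."""
    run = best = 0
    for d in daily_distances:
        run = run + 1 if d < threshold_km else 0
        best = max(best, run)
    return best >= n_required
```

A single co-located day (e.g., one shared gathering) breaks no alarm; only a sustained run of below-threshold distances flags the account pair.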
Furthermore, the verification module can verify the user and the user's associated contacts by combining the spatial correlation calculated from the graph database with the voiceprint similarity calculated by the voiceprint intelligent control module, and output a verification result.
In some embodiments, as shown in FIG. 10, a working diagram of the policy module is presented. Policy-module verification methods include, but are not limited to: sending verification messages to the user and the user's associated contacts simultaneously; sending messages to the user and the associated contacts in sequence according to a predefined time order; or sending messages to them at intervals within a period of time, etc. Of course, the policy module may also set other strategies, which are not specifically limited herein. The policy module generates at least one verification strategy, and a verification strategy comprises at least a set of verification parameters, such as the time to initiate verification, the verification target, and the verification content. Specifically, the feedback requirement information instructs the user to input feedback in a preset manner; for example, it may require the user to select targets of multiple preset types in the APP interface (for example, the cars in a picture), or it may be a verification code entered through a short-message application, and so on. Variables such as the input device, transmission time, and transmission target constitute the content generated by the strategy.
In some embodiments, the verification module may further generate verification feedback information from the obtained feedback of the user and of the user's associated contacts, and generate the verification result from this verification feedback information. The verification feedback information comprises at least user-feedback-related information and biometric information; the user-feedback-related information is specifically user feedback behavior information and/or the user feedback information body.
Further, as shown in fig. 11, a schematic diagram of the verification step of the verification module is provided. The client receives the input of the user and of the user's associated contacts and generates at least two sets of parameters as verification feedback information: one set is the user's feedback-related information; the other set is biometric information, such as input time, input interval, and device elevation angle. The user-feedback-related information comprises user feedback behavior information and/or the user feedback information body; the present disclosure is illustrated with the generation of three sets of parameters as an example.
Correspondingly, the verification module verifies based on the received feedback information, which comprises three parts: the first part is the users' feedback behavior information, such as the feedback duration of different users; the second part is each user's device-derived biometric information, such as touch pressure, device elevation angle, finger contact area, linear acceleration, and contact-point spacing; the third part is the user feedback information body, i.e., the data entered by the user. The user's input data may be structured, such as personal information entered in a form format, or unstructured, such as free text entered by the user.
Further, the verification module verifies whether the accounts belong to the same person based on the user-feedback-related information and the biometric information and outputs a verification result; specifically, it judges the consistency of the user-feedback-related information and the biometric information sequentially or simultaneously.
In some embodiments, the spatiotemporal correlation of the feedback behavior information and the biometric information similarity are calculated from the obtained feedback of the user and of the user's associated contacts, and the verification result is generated from the spatiotemporal correlation and the biometric similarity.
In one embodiment, the verification module may verify as follows: if, for verification messages sent simultaneously, the required feedback cannot be completed simultaneously, or the users' operation intervals are consistent and the biometric information is highly consistent, the output verification result is that the accounts belong to the same user. In another case, if messages are sent according to a predefined time order, and the reply order is completely consistent with the sending order, closely spaced, and the biometric information is highly consistent, the output verification result is the same user. In one embodiment, messages are sent at intervals over a period of time, and if the reply times fall at the same time point, the output verification result is the same user. Of course, how the verification module's rules are set is not limited herein; only a few possible approaches are given for reference.
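One of the rules above, combining consistent operation intervals with highly consistent biometric information, can be sketched as follows. The tolerance and similarity thresholds are illustrative assumptions; a real system would learn or tune them.

```python
def verify_same_user(reply_gaps_s, biometric_sims,
                     gap_tolerance_s=2.0, sim_threshold=0.9):
    """Flag two (or more) accounts as one user when, for simultaneously sent
    verification messages, the per-account reply gaps are nearly identical
    AND every pairwise biometric-posture similarity is high.

    reply_gaps_s:   reply delay (seconds) of each account after the message.
    biometric_sims: pairwise biometric similarity scores in [0, 1].
    """
    gaps_consistent = max(reply_gaps_s) - min(reply_gaps_s) <= gap_tolerance_s
    biometrics_consistent = min(biometric_sims) >= sim_threshold
    return gaps_consistent and biometrics_consistent
```

Independent real users produce scattered reply delays and dissimilar device postures, so either condition failing clears the pair.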
In a genuine verification process, the multiple accounts are controlled by multiple real users; the policy module sends a verification policy in a stronger mode requiring the users to input corresponding feedback, and because the operations of the multiple applicants are random, the feedback information presents a randomized result. In the case of fraud, however, the multiple accounts are controlled by a single person and their feedback is highly consistent, so the detection module can identify potential fraud from the highly consistent pattern presented across these multiple dimensions.
In one embodiment, the verification module may also verify as follows: calculate, from the user feedback information, the spatiotemporal correlation of the feedback behavior of the user and of the user's associated contacts, further calculate the biometric information similarity, and obtain the verification result, namely the user identification result, from these calculations. Here the computed spatiotemporal correlation depends on the complexity of the selected sending strategy: under a given sending strategy, the feedback of normal accounts comes from multiple independent users, so the spatiotemporal correlation is randomly distributed. For example, under one strategy the system sends the verification information at intervals; with feedback from multiple real users, the time and place of the feedback information present a fully randomized result. The spatiotemporal correlation calculation may cover several variables, such as the correlation between the receiving order and the sending order, the receiving-time correlation among the feedback of multiple accounts, the spatial correlation among the feedback of multiple accounts, and the like.
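One of the variables mentioned — the correlation between the receiving order and the sending order — can be sketched with a Spearman rank correlation over the two orderings. The patent does not name a specific correlation statistic, so this choice is an illustrative assumption:

```python
def order_correlation(send_order, reply_order):
    """Spearman rank correlation between the order in which verification
    messages were sent to accounts and the order in which replies came
    back.  A value near 1 (replies mirror the send sequence exactly) is
    evidence of a single operator working through the accounts in turn;
    independent users produce values scattered around 0."""
    n = len(send_order)
    rank = {acct: i for i, acct in enumerate(reply_order)}
    d2 = sum((i - rank[acct]) ** 2 for i, acct in enumerate(send_order))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

For example, replies arriving in exactly the send order give a correlation of 1.0, while a fully reversed order gives -1.0.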
In another embodiment, the client receives the input of different users and generates at least two groups of parameters as feedback information, which are input to the information verification module and the information cross-validation module respectively to obtain a user-information authenticity score; this score can be used, together with the aforementioned user feedback related information and biometric information, as input to the verification module, which generates the final verification result for output.
Further, the information verification module can perform secondary confirmation of the user information by accessing the data and/or graph database of a third party. Since the policy module sends multiple verification policies, the information verification module can obtain the verification information of multiple accounts at one time and compute multiple verification results to obtain an information verification score.
Further, the information cross-validation module cross-validates the relationships of multiple users. For example, if a common recommender A exists among a large number of users, the verification policy sent by the policy module includes a request to multiple users for personal information about recommender A, where the personal information may be personal data that does not exist in the graph database. The information cross-validation module then generates a cross-validation score from the consistency of the information input by the multiple users. In a specific implementation, a validity period can be set for user feedback, and data arriving after the deadline is discarded, ensuring that the feedback is the user's immediate response rather than subsequently fabricated data.
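The cross-validation step above can be sketched as follows: several users are asked the same question about their common recommender, late replies are discarded per the validity period, and the score is the share of surviving answers that agree with the majority. The scoring rule itself is an illustrative assumption — the patent only requires a consistency-based score:

```python
def cross_validation_score(answers, deadline):
    """`answers` is a list of (timestamp, normalized_answer) pairs from
    different users answering the same question about recommender A.
    Replies after `deadline` are dropped so the answer must be immediate
    rather than researched or fabricated; the score is the fraction of
    surviving answers matching the most common answer."""
    valid = [a for ts, a in answers if ts <= deadline]
    if not valid:
        return 0.0
    top = max(set(valid), key=valid.count)   # majority answer
    return valid.count(top) / len(valid)
```

A high score means the users' independent descriptions of recommender A agree; under single-person fraud the answers would be trivially identical, which is why this score is only one input to the verification module rather than a verdict on its own.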
Further, the information verification score and the cross-validation score are input to the verification module, which combines them with the user feedback behavior data and the biometric data to complete the identification of the user.
Accordingly, in some embodiments, the system uses the obtained spatiotemporal correlation, user feedback behavior, biometric information and verification result, directly or indirectly, as training or correction parameters to train or correct the recognition model. Indirect use means processing the spatiotemporal correlation, user feedback behavior, biometric information and verification result into a group of new variable data, which is then used as the training or correction parameters.
In some optional embodiments, the spatiotemporal correlation, user feedback behavior, biometric information and verification result are input into the risk-control and anti-fraud recognition model to correct its calculation result and improve the accuracy of risk-control and anti-fraud recognition. In one embodiment, the verification results of S202-S206 may directly generate an anti-fraud output: when the user cannot pass the verification module, the verification result directly outputs that the current user is a fraudster.
The embodiment of the disclosure also adopts an identity recognition method combining biometric features, personal information and a knowledge graph; the method can effectively identify whether a user presents a fraud risk, thereby reducing the probability of fraud.
By the above method, further verification within identity recognition is realized conveniently and quickly, recognition accuracy and security are greatly improved, and user experience is good; moreover, the whole financial big-data system continues to run automatically without intervention by risk-control personnel.
The above is a specific implementation manner of the method for identifying an identity provided by the present disclosure.
Fig. 12 is a schematic diagram of an apparatus for identity recognition according to some embodiments of the present disclosure. As shown in fig. 12, the apparatus 300 for identity recognition includes a voiceprint acquisition unit 302, a calculation unit 304, a verification unit 306, and a precision-improving unit 308; wherein:
a voiceprint acquisition unit 302, configured to acquire first voiceprint information of a user and second voiceprint information of an associated contact of the user;
a calculating unit 304, configured to calculate a similarity value between the first voiceprint information and the second voiceprint information;
a verification unit 306, configured to generate, if the similarity value is smaller than a preset first threshold, at least one verification policy to verify the user and the user's associated contact, and to generate a verification result;
and the precision improving unit 308 is configured to train or modify the recognition model by using the similarity value and the verification result as training parameters or modification parameters.
In some embodiments, the voiceprint acquisition unit 302 is specifically configured to acquire the first voiceprint information and to acquire the second voiceprint information according to the first voiceprint information.
In some embodiments, the voiceprint acquisition unit 302 is specifically configured to acquire real-time voiceprint information of a user by using an intelligent voice assistant, and acquire the first voiceprint information and the second voiceprint information according to the real-time voiceprint information.
Specifically, the voiceprint acquisition unit 302 sends the real-time voiceprint information to a voiceprint intelligent control module through the intelligent voice assistant; when the voiceprint intelligent control module determines from its analysis that the user is in an active state, it acquires from a graph database the prestored first voiceprint information of the user and the second voiceprint information of the user's associated contacts.
In some embodiments, each of the first voiceprint information and/or the second voiceprint information acquired by the voiceprint acquisition unit 302 includes at least one voiceprint feature value;
the calculating unit 304 is specifically configured to calculate the similarity value according to a first voiceprint feature value of the first voiceprint information and a second voiceprint feature value of the second voiceprint information.
In some embodiments, the method used by the calculating unit 304 to calculate the similarity value includes: at least one of the Euclidean distance method, the Manhattan distance method, the standardized Euclidean distance method, and the cosine similarity method.
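The four measures listed can be sketched over two voiceprint feature vectors as follows. Mapping the three distances to a (0, 1] similarity via 1/(1+d), and passing per-dimension standard deviations via `sigmas` for the standardized variant, are illustrative choices not specified by the patent:

```python
import math

def voiceprint_similarity(v1, v2, method="cosine", sigmas=None):
    """Similarity of two voiceprint feature vectors using one of the
    four methods the text lists.  Cosine is already a similarity; the
    three distance methods are folded into (0, 1] via 1/(1+d)."""
    if method == "cosine":
        dot = sum(a * b for a, b in zip(v1, v2))
        return dot / (math.hypot(*v1) * math.hypot(*v2))
    if method == "euclidean":
        d = math.dist(v1, v2)
    elif method == "manhattan":
        d = sum(abs(a - b) for a, b in zip(v1, v2))
    elif method == "std_euclidean":
        # each dimension is scaled by its standard deviation before the
        # Euclidean distance is taken
        d = math.sqrt(sum(((a - b) / s) ** 2 for a, b, s in zip(v1, v2, sigmas)))
    else:
        raise ValueError(f"unknown method: {method}")
    return 1.0 / (1.0 + d)
```

The resulting value would then be compared against the preset first threshold to decide whether a verification policy must be generated.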
In some embodiments, the verification unit 306 is configured to generate at least one of the verification policies by detecting the state of the user and the user's associated contacts.
In some embodiments, the verification policy generated by the verification unit 306 specifically includes: simultaneously initiating a voice call to the user and the user's associated contact through the intelligent voice assistant.
Optionally, the verification policy generated by the verification unit 306 at least includes a control dialog flow parameter; wherein the control dialog flow parameters include at least one of dialog content and question-and-answer logic.
Further, the verification policy generated by the verification unit 306 is also used to control the intelligent voice assistant to output voice-related information parameters; the voice-related information parameter includes at least one of a voice input parameter and text information recognized by a voice signal.
In some embodiments, the verification unit 306 is further configured to receive verification feedback information, and generate a verification result according to the verification feedback information.
In some embodiments, the verification feedback information received by the verification unit 306 is specifically a voice-related information parameter; wherein the voice-related information parameter comprises at least one of a voice input parameter and text information recognized by a voice signal.
Further, the verification policy generated by the verification unit 306 specifically includes: acquiring the spatial data of the user and of the user's associated contacts from a graph database for verification.
In some embodiments, the spatial data includes at least one of city information, longitude and latitude information, and network type information.
In some embodiments, the verification unit 306 comprises:
a spatial correlation calculation subunit, configured to calculate the spatial correlation between the user and the associated contacts;
and a verification subunit, configured to output a verification result, in combination with the similarity value of the first voiceprint information and the second voiceprint information, if the number of times the spatial correlation falls below the preset second threshold exceeds a preset value.
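One possible reading of the verification subunit's rule can be sketched as follows. The patent leaves the final decision rule open; treating a high voiceprint similarity between user and contact as suspicious (consistent with the first-threshold trigger earlier in the text), the threshold values, and the string labels are all illustrative assumptions:

```python
def verification_result(spatial_corrs, similarity,
                        corr_threshold, count_threshold, sim_threshold):
    """Count how often the spatial correlation between the user and an
    associated contact falls below the second threshold; only when that
    count exceeds the preset value is the voiceprint similarity value
    consulted to produce the output verification result."""
    low_count = sum(1 for c in spatial_corrs if c < corr_threshold)
    if low_count <= count_threshold:
        return "pass"            # spatial behaviour looks independent
    # gate triggered: combine with the voiceprint similarity value
    return "pass" if similarity < sim_threshold else "suspect"
```

All three thresholds stand in for the patent's unspecified preset values and would be tuned on real data.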
Optionally, the verification policy generated by the verification unit 306 at least contains a verification parameter; wherein the verification parameters include at least one of an initiation verification time, a verification target, and verification content.
In some embodiments, the verification unit 306 is specifically configured to: generate verification feedback information according to the feedback information acquired from the user and the user's associated contacts; and generate the verification result according to the verification feedback information.
Preferably, the verification feedback information at least comprises user feedback related information and biometric information; the user feedback related information is specifically user feedback behavior information and/or a user feedback information body.
Preferably, the verification unit 306 is specifically configured to calculate the spatiotemporal correlation of feedback behavior information and the biometric information similarity according to the feedback information acquired from the user and the user's associated contacts, and to generate the verification result according to the spatiotemporal correlation and the biometric information similarity.
The verification unit 306 is further configured to generate an information verification score and/or a cross-validation score according to the feedback information acquired from the user and the user's associated contacts.
Preferably, the verification unit 306 is further configured to generate the verification result according to the verification feedback information, the information verification score and/or the cross-validation score;
or to generate the verification result according to the spatiotemporal correlation, the biometric information similarity, the information verification score and/or the cross-validation score.
In some embodiments, the precision-improving unit 308 is specifically configured to use the similarity value and the verification result directly or indirectly as a training parameter or a correction parameter.
Specifically, the precision-improving unit 308 is specifically configured to process the similarity value and the verification result to obtain new variable data, and use the new variable data as a training parameter or a correction parameter.
Referring to fig. 13, a schematic diagram of an electronic device according to an embodiment of the present application is provided. As shown in fig. 13, the electronic device 500 includes:
memory 530 and one or more processors 510;
wherein the memory 530 is communicatively coupled to the one or more processors 510; instructions 532 executable by the one or more processors are stored in the memory 530 and, when executed by the one or more processors 510, cause the one or more processors 510 to perform the methods of the foregoing embodiments of the present application.
Specifically, the processor 510 and the memory 530 may be connected by a bus or other means, such as bus 540 in fig. 13. The processor 510 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or a combination thereof.
The memory 530, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as those corresponding to the methods in the embodiments of the present application. The processor 510 executes the non-transitory software programs, instructions, and modules 532 stored in the memory 530, thereby performing various functional applications and data processing.
The memory 530 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by the processor 510, and the like. Further, memory 530 may include high-speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, memory 530 may optionally include memory located remotely from processor 510, which may be connected to processor 510 via a network, such as through communication interface 520. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
An embodiment of the present application further provides a computer-readable storage medium, in which computer-executable instructions are stored, and the computer-executable instructions are executed to perform the method in the foregoing embodiment of the present application.
The foregoing computer-readable storage media include physical volatile and nonvolatile, removable and non-removable media implemented in any manner or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. The computer-readable storage medium specifically includes, but is not limited to, a USB flash drive, a removable hard drive, read-only memory (ROM), random access memory (RAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other solid-state memory technology, CD-ROM, digital versatile disk (DVD), HD-DVD, Blu-ray or other optical storage, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer.
While the subject matter described herein is presented in the general context of execution in conjunction with an operating system and application programs on a computer system, those skilled in the art will recognize that other implementations may also be performed in combination with other types of program modules. Generally, program modules include routines, programs, components, data structures, and other types of structures that perform particular tasks or implement particular abstract data types. Those skilled in the art will appreciate that the subject matter described herein may be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like, as well as distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
Those of ordinary skill in the art will appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the method according to the embodiments of the present application.
In summary, the present disclosure provides an identity recognition method and apparatus, an electronic device, and a computer-readable storage medium. By comparing the voiceprint information of a user and of the user's associated contacts, at least one verification policy is generated to verify the identities of the user and the associated contacts, and the resulting voiceprint similarity value and verification result serve as training or correction parameters for the recognition model; identity recognition is thus realized conveniently and quickly, recognition accuracy and security are greatly improved, and user experience is good.
In addition, the embodiment of the disclosure also adopts an identity recognition method combining the intelligent voice assistant and the knowledge graph, which can effectively identify whether a user presents a fraud risk, thereby reducing the probability of fraud.
It is to be understood that the above-described specific embodiments of the present disclosure are merely illustrative of the principles of the present disclosure and are not to be construed as limiting it. Accordingly, any modification, equivalent replacement or improvement made without departing from the spirit and scope of the present disclosure shall fall within its protection scope. Further, the appended claims are intended to cover all such variations and modifications as fall within their scope and bounds, or equivalents thereof.

Claims (10)

1. A method of identity recognition, comprising:
acquiring first voiceprint information of a user and second voiceprint information of an associated contact of the user;
calculating a similarity value of the first voiceprint information and the second voiceprint information;
if the similarity value is smaller than a preset first threshold, generating at least one verification policy to verify the user and the associated contact, and generating a verification result;
and taking the similarity value and the verification result as training parameters or correction parameters to train an identification model or correct the identification model.
2. The method of claim 1, further comprising:
calculating the spatial correlation between the user and the associated contact;
and if the number of times the spatial correlation is smaller than the preset second threshold exceeds a preset value, outputting a verification result in combination with the similarity value of the first voiceprint information and the second voiceprint information.
3. The method of identification according to claim 1, wherein the method further comprises:
generating verification feedback information according to the feedback information acquired from the user and the user's associated contacts;
and generating the verification result according to the verification feedback information.
4. The method of identification according to claim 1, wherein the method further comprises:
calculating the spatiotemporal correlation of feedback behavior information and the biometric information similarity according to the feedback information acquired from the user and the user's associated contacts;
and generating the verification result according to the spatiotemporal correlation and the biometric information similarity.
5. The method according to claim 1, wherein the using the similarity value and the verification result as a training parameter or a modification parameter specifically comprises:
directly using the similarity value and the verification result as training parameters or correction parameters;
or processing the similarity value and the verification result to obtain new variable data, and taking the new variable data as a training parameter or a correction parameter.
6. An apparatus for identification, comprising:
a voiceprint acquisition unit, configured to acquire first voiceprint information of a user and second voiceprint information of an associated contact of the user;
a calculating unit, configured to calculate a similarity value between the first voiceprint information and the second voiceprint information;
a verification unit, configured to generate at least one verification policy to verify the user and the associated contact if the similarity value is smaller than a preset first threshold, and to generate a verification result;
and the precision improving unit is used for taking the similarity value and the verification result as training parameters or correction parameters to train an identification model or correct the identification model.
7. The apparatus of claim 6, wherein the authentication unit comprises:
a spatial correlation subunit, configured to calculate the spatial correlation between the user and the associated contact;
and a verification subunit, configured to output a verification result in combination with the similarity value of the first voiceprint information and the second voiceprint information when the number of times the spatial correlation is smaller than a preset second threshold exceeds a preset value.
8. The apparatus of claim 6, wherein the verification unit is further configured to:
generating verification feedback information according to the feedback information acquired from the user and the user's associated contacts;
and generating the verification result according to the verification feedback information.
9. The apparatus of claim 6, wherein the verification unit is further configured to:
calculating the spatiotemporal correlation of feedback behavior information and the biometric information similarity according to the feedback information acquired from the user and the user's associated contacts;
and generating the verification result according to the spatiotemporal correlation and the biometric information similarity.
10. The apparatus of claim 6, wherein the precision-improving unit is further configured to:
directly using the similarity value and the verification result as training parameters or correction parameters;
or processing the similarity value and the verification result to obtain new variable data, and taking the new variable data as a training parameter or a correction parameter.
CN201911002444.8A 2019-10-21 2019-10-21 Identity recognition method and device Active CN110750774B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911002444.8A CN110750774B (en) 2019-10-21 2019-10-21 Identity recognition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911002444.8A CN110750774B (en) 2019-10-21 2019-10-21 Identity recognition method and device

Publications (2)

Publication Number Publication Date
CN110750774A true CN110750774A (en) 2020-02-04
CN110750774B CN110750774B (en) 2021-12-03

Family

ID=69279193

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911002444.8A Active CN110750774B (en) 2019-10-21 2019-10-21 Identity recognition method and device

Country Status (1)

Country Link
CN (1) CN110750774B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105391594A (en) * 2014-09-03 2016-03-09 阿里巴巴集团控股有限公司 Method and device for recognizing characteristic account number
US9837079B2 (en) * 2012-11-09 2017-12-05 Mattersight Corporation Methods and apparatus for identifying fraudulent callers
CN109035010A (en) * 2018-05-25 2018-12-18 中国地质大学(武汉) For the suspicious transaction of block chain password currency and suspicious account analysis method and system
CN109753778A (en) * 2018-12-30 2019-05-14 北京城市网邻信息技术有限公司 Checking method, device, equipment and the storage medium of user
CN110110093A (en) * 2019-04-08 2019-08-09 深圳众赢维融科技有限公司 A kind of recognition methods, device, electronic equipment and the storage medium of knowledge based map

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111324878A (en) * 2020-02-05 2020-06-23 重庆特斯联智慧科技股份有限公司 Identity verification method and device based on face recognition, storage medium and terminal
CN111325267A (en) * 2020-02-18 2020-06-23 京东城市(北京)数字科技有限公司 Data fusion method, device and computer readable storage medium
CN111325267B (en) * 2020-02-18 2024-02-13 京东城市(北京)数字科技有限公司 Data fusion method, device and computer readable storage medium
TWI752474B (en) * 2020-04-22 2022-01-11 莊連豪 An accessible and intelligent voice recognition system and the control method
CN113689291A (en) * 2021-09-22 2021-11-23 杭银消费金融股份有限公司 Anti-fraud identification method and system based on abnormal movement
CN113689291B (en) * 2021-09-22 2022-11-01 杭银消费金融股份有限公司 Anti-fraud identification method and system based on abnormal movement
CN116993371A (en) * 2023-09-25 2023-11-03 中邮消费金融有限公司 Abnormality detection method and system based on biological characteristics
CN117493618A (en) * 2023-12-29 2024-02-02 深圳市加推科技有限公司 Customer relationship management method and device based on human vein map and related medium
CN117493618B (en) * 2023-12-29 2024-04-09 深圳市加推科技有限公司 Customer relationship management method and device based on human vein map and related medium

Also Published As

Publication number Publication date
CN110750774B (en) 2021-12-03

Similar Documents

Publication Publication Date Title
CN110750774B (en) Identity recognition method and device
US10992763B2 (en) Dynamic interaction optimization and cross channel profile determination through online machine learning
AU2013203139B2 (en) Voice authentication and speech recognition system and method
CN111883140B (en) Authentication method, device, equipment and medium based on knowledge graph and voiceprint recognition
CN106790054A (en) Interactive authentication system and method based on recognition of face and Application on Voiceprint Recognition
US20110320202A1 (en) Location verification system using sound templates
AU2013203139A1 (en) Voice authentication and speech recognition system and method
US20170294192A1 (en) Classifying Signals Using Mutual Information
CN104835497A (en) Voiceprint card swiping system and method based on dynamic password
US10909991B2 (en) System for text-dependent speaker recognition and method thereof
CN113823293B (en) Speaker recognition method and system based on voice enhancement
CN112309372B (en) Intent recognition method, device, equipment and storage medium based on intonation
Chakraborty et al. Knowledge-based framework for intelligent emotion recognition in spontaneous speech
KR100779242B1 (en) Speaker recognition methods of a speech recognition and speaker recognition integrated system
US20050232470A1 (en) Method and apparatus for determining the identity of a user by narrowing down from user groups
JP2021064110A (en) Voice authentication device, voice authentication system and voice authentication method
Vivaracho-Pascual et al. Client threshold prediction in biometric signature recognition by means of Multiple Linear Regression and its use for score normalization
JP2003302999A (en) Individual authentication system by voice
Samal et al. On the use of MFCC feature vector clustering for efficient text dependent speaker recognition
CN113593580B (en) Voiceprint recognition method and device
Gupta et al. Text dependent voice based biometric authentication system using spectrum analysis and image acquisition
Pinheiro et al. Type-2 fuzzy GMM-UBM for text-independent speaker verification
CN112652314A (en) Method, device, equipment and medium for verifying disabled object based on voiceprint shading
CN204496911U (en) A kind of vocal print punch-card device based on dynamic password
Tsang et al. Speaker verification using type-2 fuzzy gaussian mixture models

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220613

Address after: 510000 floor 7, building S6, poly Yuzhu port, No. 848, Huangpu Avenue East, Huangpu District, Guangzhou, Guangdong

Patentee after: Jianlian Technology (Guangdong) Co.,Ltd.

Address before: 510623 Room 201, building a, No. 1, Qianwan 1st Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong

Patentee before: SHENZHEN ZHONGYING WEIRONG TECHNOLOGY Co.,Ltd.