CN113946579A - Model-based data generation method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN113946579A
CN113946579A
Authority
CN
China
Prior art keywords
expression recognition
palm
recognition model
preset
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111187628.3A
Other languages
Chinese (zh)
Inventor
吴先祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Puhui Enterprise Management Co Ltd
Original Assignee
Ping An Puhui Enterprise Management Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Puhui Enterprise Management Co Ltd filed Critical Ping An Puhui Enterprise Management Co Ltd
Priority to CN202111187628.3A priority Critical patent/CN113946579A/en
Publication of CN113946579A publication Critical patent/CN113946579A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22 - Indexing; Data structures therefor; Storage structures
    • G06F16/2282 - Tablespace storage structures; Management thereof
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22 - Indexing; Data structures therefor; Storage structures
    • G06F16/221 - Column-oriented storage; Management thereof
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 - Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 - User authentication
    • G06F21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00 - Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/03 - Credit; Loans; Processing thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Computer Security & Cryptography (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Technology Law (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Development Economics (AREA)
  • Computer Hardware Design (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The application relates to the technical field of artificial intelligence, and provides a model-based data generation method, device, computer equipment and storage medium, wherein the method comprises the following steps: receiving an operation index data acquisition request; acquiring user information of a user and the user's first palm print image, first palm print vein image and face image, and performing identity authentication on the user based on an expression recognition model and a palm recognition model; if the identity authentication passes, acquiring a first data table corresponding to the operation index identifier; calculating the data of the first data table based on the calculation rule of the operation index to obtain result data; and synchronizing the result data into a second data table in a columnar storage database to obtain a target data table. A target data table containing the result data of the operation index can thus be generated automatically and quickly. The method and device can also be applied to the field of blockchain, and the result data can be stored on a blockchain.

Description

Model-based data generation method and device, computer equipment and storage medium
Technical Field
The application relates to the technical field of artificial intelligence, in particular to a data generation method and device based on a model, computer equipment and a storage medium.
Background
At present, more and more customers are taking out loans from the various credit companies on the market, and overdue situations are becoming more and more frequent. The indexes of post-loan collection operations are therefore receiving more and more attention. How to obtain real-time data on the indexes related to post-loan collection operations at the earliest moment, so as to adjust a company's collection work in time, has become a problem that urgently needs to be solved. However, in the conventional way of generating data for operation-related indexes, a dedicated analyst usually performs statistical analysis on the acquired call-record data of post-loan overdue cases to generate the result data of the operation indexes. This data generation method based on manual processing occupies a large amount of human resources, involves a heavy manual workload, is inefficient in generating the result data of the operation indexes, and yields result data of low accuracy.
Disclosure of Invention
The main purpose of the present application is to provide a model-based data generation method and device, computer equipment and a storage medium, aiming to solve the technical problems that the existing data generation method based on manual processing occupies a large amount of human resources, involves a heavy manual workload, is inefficient in generating the result data of operation indexes, and yields result data of low accuracy.
The application provides a data generation method based on a model, which comprises the following steps:
receiving an operation index data acquisition request; the operation index data acquisition request comprises an operation index identifier;
acquiring user information of a user, a first palm print image, a first palm print vein image and a face image of the user, performing identity authentication on the user based on a preset expression recognition model and a preset palm recognition model, and judging whether the identity authentication passes;
if the identity authentication is passed, acquiring a first data table corresponding to the operation index identification from a preset database;
calculating the data in the first data table based on the calculation rule of the operation index corresponding to the operation index identification to obtain result data corresponding to the operation index;
and synchronizing the result data into a second data table in a preset columnar storage database to obtain a target data table.
Optionally, the step of authenticating the user based on a preset expression recognition model and a preset palm recognition model and judging whether the authentication passes includes:
obtaining a preset number of expression recognition models, and determining a target expression recognition model from all the expression recognition models according to a preset rule; wherein the preset number is greater than 1, and each expression recognition model is generated by training on a different training sample set;
generating a target expression recognition result corresponding to the face image through the target expression recognition model;
judging whether the target expression recognition result belongs to a preset expression type or not;
if the expression type belongs to the preset expression type, judging that the identity authentication is not passed, and limiting the response to the operation index data acquisition request;
if the expression type does not belong to the preset expression type, judging whether designated user information that is the same as the user information is stored in a preset palm feature database;
if the designated user information is stored, performing feature extraction on the first palm print image and the first palm print vein image through the palm recognition model to obtain a first palm feature vector of the user;
acquiring designated palm feature information corresponding to the designated user information from the palm feature database, comparing the first palm feature vector with the designated palm feature information, and judging whether the two are the same feature information;
if they are the same feature information, judging that the identity authentication passes;
and if they are not the same feature information, judging that the identity authentication does not pass.
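The verification flow in the steps above can be sketched as a short Python function (a minimal illustration; the model objects, the blocked expression types and the equality-based feature comparison are all hypothetical stand-ins, not details fixed by this application):

```python
BLOCKED_EXPRESSIONS = frozenset({"angry", "panic"})  # assumed preset expression types

def authenticate(user_info, face_image, palm_print, palm_vein,
                 expression_model, palm_model, palm_db):
    """Follow the claimed flow: expression check first, then palm-feature match."""
    # Step 1: recognize the expression on the face image; a blocked
    # (preset) expression type fails verification immediately.
    if expression_model.predict(face_image) in BLOCKED_EXPRESSIONS:
        return False
    # Step 2: check whether designated user information identical to the
    # user's information is stored in the palm feature database.
    stored_features = palm_db.get(user_info)
    if stored_features is None:
        return False
    # Step 3: extract the first palm feature vector and compare it with
    # the stored designated palm feature information.
    return palm_model.extract(palm_print, palm_vein) == stored_features
```

In practice the comparison would more likely be a similarity measure against a threshold (for example cosine distance between feature vectors) rather than exact equality.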
Optionally, the step of obtaining a preset number of expression recognition models and determining a target expression recognition model from all the expression recognition models according to a preset rule includes:
acquiring a preset number of the expression recognition models, and respectively acquiring a first test accuracy of each expression recognition model based on a preset first verification sample set;
comparing the sizes of all the first test accuracy rates, and screening out a second test accuracy rate with the maximum value from all the first test accuracy rates;
acquiring a first expression recognition model corresponding to the second test accuracy from all the expression recognition models;
taking the first expression recognition model as the target expression recognition model;
the step of generating a target expression recognition result corresponding to the face image through the target expression recognition model includes:
inputting the facial image into the first expression recognition model, and outputting a first expression recognition result corresponding to the facial image through the first expression recognition model;
and taking the first expression recognition result as the target expression recognition result.
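The selection rule above (keep the model with the highest first test accuracy) can be sketched as follows; `evaluate` is a hypothetical callback that returns a model's accuracy on the first verification sample set:

```python
def select_best_model(models, validation_set, evaluate):
    """Return the expression recognition model with the highest test
    accuracy on the validation sample set, together with that accuracy."""
    accuracies = [evaluate(model, validation_set) for model in models]
    best = max(range(len(models)), key=accuracies.__getitem__)
    return models[best], accuracies[best]
```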
Optionally, the step of obtaining a preset number of expression recognition models and determining a target expression recognition model from all the expression recognition models according to a preset rule includes:
acquiring a preset number of the expression recognition models, and respectively acquiring a third test accuracy rate of each expression recognition model based on a preset second verification sample set;
screening out a second expression recognition model with a third test accuracy rate larger than a preset accuracy rate threshold value from all the expression recognition models;
generating a first recognition processing time of each second expression recognition model based on the second verification sample set;
screening out second identification processing time smaller than a preset processing time threshold from all the first identification processing time;
acquiring a third expression recognition model corresponding to the second recognition processing time from all the second expression recognition models; wherein there may be a plurality of third expression recognition models;
taking the third expression recognition model as the target expression recognition model;
the step of generating a target expression recognition result corresponding to the face image through the target expression recognition model includes:
inputting the face image into each third expression recognition model respectively, and acquiring the second expression recognition results output by each third expression recognition model respectively;
screening out a third expression recognition result with the largest number of times from all the second expression recognition results;
and taking the third expression recognition result as the target expression recognition result.
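The majority vote over the outputs of the selected models can be sketched with the standard library (`collections.Counter` is an implementation choice, not something mandated by the application):

```python
from collections import Counter

def vote_expression(recognition_results):
    """Screen out the expression recognition result that appears the
    largest number of times among all second expression recognition results."""
    return Counter(recognition_results).most_common(1)[0][0]
```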
Optionally, the step of generating a first recognition processing time of each second expression recognition model based on the second verification sample set includes:
obtaining the second set of verification samples; wherein the second set of validation samples comprises a plurality of validation sample data;
when a fourth expression recognition model receives each input verification sample data, respectively counting the processing time of the fourth expression recognition model for outputting the expression recognition result corresponding to each verification sample data; the fourth expression recognition model is any one of the second expression recognition models;
calculating the sum of all the processing times;
acquiring the number of all the processing times;
calculating the quotient of the sum and the number of all the processing times;
and taking the quotient value as the first recognition processing time of the fourth expression recognition model.
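The averaging in these steps (the sum of the per-sample processing times divided by their number) is a plain arithmetic mean; a minimal sketch:

```python
def first_recognition_processing_time(processing_times):
    """Quotient of the sum of all processing times and their number,
    i.e. the mean per-sample processing time of one model."""
    return sum(processing_times) / len(processing_times)
```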
Optionally, the step of performing feature extraction on the first palm print image and the first palm print vein image through the palm recognition model to obtain a first palm feature vector of the user includes:
calling the palm recognition model;
inputting the first palm print image and the first palm print vein image into the palm recognition model, and performing convolution operation and feature extraction on the first palm print image and the first palm print vein image through the palm recognition model to obtain surface texture features of the first palm print image and palm vein blood vessel distribution features of the first palm print vein image;
performing feature fusion on the surface texture features and the palm vein blood vessel distribution features to obtain fused feature vectors;
and taking the fused feature vector as the first palm feature vector.
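The application does not fix the fusion operator; simple concatenation of the two feature vectors is one common choice and is assumed in this sketch:

```python
def fuse_palm_features(texture_features, vein_features):
    """Fuse the surface texture features and the palm vein blood vessel
    distribution features into a single first palm feature vector by
    concatenation (an assumed, illustrative fusion strategy)."""
    return list(texture_features) + list(vein_features)
```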
Optionally, before the step of determining whether the preset palm feature database stores the designated user information that is the same as the user information, the method includes:
collecting a second palm print image and a second palm print vein image of a registered user;
performing feature extraction on the second palm print image and the second palm print vein image through the palm recognition model to obtain a second palm feature vector of the registered user;
establishing a one-to-one mapping relation between the user information of the registered user and the second palm feature vector;
storing the user information of the registered user and the second palm feature vector of the registered user in a preset original database based on the mapping relation to obtain a stored original database;
and taking the stored original database as the palm characteristic database.
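Building the palm feature database amounts to a one-to-one mapping from each registered user's information to their extracted second palm feature vector; a dictionary-based sketch (the `extract_features` callback stands in for the palm recognition model):

```python
def build_palm_database(registered_users, extract_features):
    """Map each registered user's information to the feature vector
    extracted from their second palm print and palm print vein images."""
    return {user_info: extract_features(palm_print, palm_vein)
            for user_info, palm_print, palm_vein in registered_users}
```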
The present application further provides a data generation apparatus based on a model, comprising:
the receiving module is used for receiving an operation index data acquisition request; the operation index data acquisition request comprises an operation index identifier;
the verification module is used for acquiring user information of a user, a first palm print image, a first palm print vein image and a face image of the user, verifying the identity of the user based on a preset expression recognition model and a preset palm recognition model, and judging whether the identity verification passes;
the acquisition module is used for acquiring a first data table corresponding to the operation index identification from a preset database if the identity authentication passes;
the calculation module is used for calculating and processing the data in the first data table based on the calculation rule of the operation index corresponding to the operation index identification to obtain result data corresponding to the operation index;
and the synchronization module is used for synchronizing the result data into a second data table in a preset columnar storage database to obtain a target data table.
The present application further provides a computer device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the above method when executing the computer program.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the above-mentioned method.
The model-based data generation method, the model-based data generation device, the computer equipment and the storage medium have the following beneficial effects:
the model-based data generation method, the model-based data generation device, the computer equipment and the storage medium are different from the existing mode of generating result data of operation indexes through manual statistics, after receiving an operation index data acquisition request triggered by a user and judging that the user passes identity verification, the method firstly acquires a first data table corresponding to the operation indexes from a preset database, then carries out calculation processing on data in the first data table based on calculation rules of the operation indexes corresponding to operation index identifications to obtain result data corresponding to the operation indexes, and finally synchronizes the result data in a second data table in a preset column storage database to obtain a target data table, so that the generation and display of the result data of the operation indexes are completed in an automatic mode. By the method and the device, the target data table containing the result data of the operation index can be automatically, quickly and accurately generated according to the first data table corresponding to the operation index, so that a large amount of human resources do not need to be occupied, the manual workload is greatly reduced, the accuracy of the obtained result data of the operation index is effectively guaranteed, and the processing efficiency of generating the target data table is improved.
Drawings
FIG. 1 is a schematic flow chart diagram of a model-based data generation method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a model-based data generation apparatus according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a computer device according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The embodiments of the present application can acquire and process the relevant data based on artificial intelligence technology. Artificial Intelligence (AI) is the theory, method, technology and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain optimal results.
The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
Referring to fig. 1, a model-based data generation method according to an embodiment of the present application includes:
S10: receiving an operation index data acquisition request; the operation index data acquisition request comprises an operation index identifier;
S20: acquiring user information of a user, a first palm print image, a first palm print vein image and a face image of the user, performing identity authentication on the user based on a preset expression recognition model and a preset palm recognition model, and judging whether the identity authentication passes;
S30: if the identity authentication is passed, acquiring a first data table corresponding to the operation index identification from a preset database;
S40: calculating the data in the first data table based on the calculation rule of the operation index corresponding to the operation index identification to obtain result data corresponding to the operation index;
S50: and synchronizing the result data into a second data table in a preset columnar storage database to obtain a target data table.
As described in steps S10 to S50 above, the execution subject of this method embodiment is a model-based data generation apparatus. In practical applications, the model-based data generation apparatus may be implemented by a virtual apparatus, such as software code, or by a physical apparatus in which the relevant execution code is written or integrated, and it may interact with the user through a keyboard, mouse, remote controller, touch panel or voice control device. The model-based data generation apparatus in this embodiment can automatically, quickly and accurately generate a target data table containing the result data of an operation index according to the data table corresponding to the operation index, so that a large amount of human resources need not be occupied, the manual workload is greatly reduced, the accuracy of the obtained result data of the operation index is effectively ensured, and the processing efficiency of generating the target data table is improved. Specifically, an operation index data acquisition request is first received. The operation index data acquisition request includes an operation index identifier, and refers to a request triggered by a user to generate the result data of a corresponding operation index. The user may be the holder of the device. The operation index may be an index of post-loan collection operations, and may include, for example, the total account number, the effective coverage rate, the user availability rate, the average call duration per account, the average number of calls per account, and so on. The operation index identifier may refer to the name of the operation index. In addition, the operation index data acquisition request may further include user information, which may be the name or ID information of the user.
After the operation index data acquisition request is received, the user information of the user and the user's first palm print image, first palm print vein image and face image are acquired, identity authentication is performed on the user based on a preset expression recognition model and a preset palm recognition model, and whether the identity authentication passes is judged. The palm print image may be a palm print infrared image, and the palm print vein image may be a palm vein infrared image. A 3D camera and an ordinary camera are arranged in the device in advance; the user's palm can be photographed by the 3D camera to collect the palm print infrared image and the palm vein infrared image of the user's palm. 3D cameras include the structured light, TOF (Time of Flight) and binocular types. The TOF camera measures distance actively, directly from the time of flight of light. The binocular camera measures distance passively, calculating it indirectly by triangulation from matched image feature points. The structured light camera measures distance actively, based on the active projection of a known pattern. The user is instructed to place the palm within the shooting range of the 3D camera, and the 3D camera photographs the palm, so that the first palm print image and the first palm print vein image of the user can be collected. In addition, the face image of the user can be acquired through the ordinary camera. The specific implementation of performing identity authentication on the user based on the preset expression recognition model and the preset palm recognition model, and of judging whether the identity authentication passes, is further described in the subsequent specific embodiments and is not repeated here.
If the identity authentication passes, the first data table corresponding to the operation index identifier is acquired from the preset database. The first data table corresponding to the operation index identifier can be obtained from the preset database through message middleware, and the message middleware may be Kafka. Kafka is a distributed message queue with high performance, persistence, multi-copy backup and horizontal scalability; producers write messages into the queue, and consumers take messages from the queue for business logic processing. Kafka plays the roles of decoupling, peak shaving and asynchronous processing in the architecture design. In addition, after the first data table is acquired, it can be sent to the Flink calculation engine, so that the calculation processing of the data in the first data table can subsequently be executed by the Flink calculation engine to obtain the corresponding result data. Flink is an open-source stream processing framework developed by the Apache Software Foundation; its core is a distributed streaming dataflow engine written in Java and Scala. Flink executes arbitrary dataflow programs in a parallel and pipelined manner, and its pipelined runtime system can execute both batch and stream processing programs. The operation index identifier can refer to an index identifier of post-loan collection operations. The preset database is a database in which the data table corresponding to the operation index is stored, and may be an Oracle database, for example. The first data table may specifically be a post-loan overdue case call record table (postloan_overlap_acc_talk_record); data related to the call records of post-loan overdue cases are stored in this table, and the result data of the corresponding operation index can be calculated by the Flink calculation engine from these data according to a preset calculation rule.
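The decoupling role that the message middleware plays here can be illustrated with an in-process queue (a stand-in only; it does not reproduce Kafka's persistence, replication or partitioning):

```python
from queue import Queue

def produce_tables(message_queue, data_tables):
    """Producer side: write each fetched data table into the queue."""
    for table in data_tables:
        message_queue.put(table)

def consume_tables(message_queue):
    """Consumer side: drain the queue for downstream calculation
    (the role the Flink engine plays in the described architecture)."""
    drained = []
    while not message_queue.empty():
        drained.append(message_queue.get())
    return drained
```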
Then, the data in the first data table are calculated based on the calculation rule of the operation index corresponding to the operation index identifier, to obtain the result data corresponding to the operation index. The calculation rule may be a calculation logic rule, written in advance according to actual business requirements, for generating the result data of the operation index. The generation of the result data may specifically be accomplished with the Flink calculation engine. Specifically, the operation indexes may include the total account number, the effective coverage rate, the user availability rate, the average call duration per account and the average number of calls per account. The calculation rules of the operation indexes may include:
(1) Total account number: the total number of accounts whose case allocation time in the corresponding organization's staff queue falls on the current day (only accounts that are overdue in real time are counted).
(2) Effective coverage rate: the proportion of accounts covered by effective calls (numerator/denominator). Denominator: the total number of accounts whose case allocation time in the corresponding organization's staff queue falls on the current day. Numerator: the number of accounts with effective calls within the denominator's case range. Effective call definition: connected, with a call duration of at least 10 seconds.
(3) User availability rate: the proportion of accounts for which the account owner could be effectively reached (numerator/denominator). Denominator: the total number of accounts whose case allocation time in the corresponding organization's staff queue falls on the current day. Numerator: the number of accounts within the denominator's case range for which an effective call reached the account owner. Effective call definition: connected, with a call duration of at least 10 seconds.
(4) Average call duration per account: the duration of calls per account (numerator/denominator). Denominator: the total number of accounts whose case allocation time in the corresponding organization's staff queue falls on the current day. Numerator: the total duration of effective calls within the denominator's case range. Effective call definition: connected, with a call duration of at least 10 seconds.
(5) Average number of calls per account: the number of calls dialed per account (numerator/denominator). Denominator: the total number of accounts whose case allocation time in the corresponding organization's staff queue falls on the current day. Numerator: the total number of effective calls within the denominator's case range. Effective call definition: connected, with a call duration of at least 10 seconds.
In addition, ClickHouse SQL code corresponding to the calculation rules can be written in advance; running this code makes the Flink calculation engine calculate and count the data in the first data table according to the calculation rules, thereby obtaining the result data corresponding to the operation indexes.
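A few of these rules can be sketched directly; the record layout (`account`, `connected` and `duration` fields) is illustrative rather than taken from the application, and only the indexes that depend solely on call records are computed:

```python
EFFECTIVE_CALL_SECONDS = 10  # effective call: connected, duration >= 10 s

def collection_metrics(call_records, total_accounts):
    """Compute effective coverage rate, average call duration per account
    and average number of calls per account from call-record dicts."""
    effective = [r for r in call_records
                 if r["connected"] and r["duration"] >= EFFECTIVE_CALL_SECONDS]
    # Accounts covered by at least one effective call.
    covered = {r["account"] for r in effective}
    return {
        "effective_coverage": len(covered) / total_accounts,
        "avg_duration_per_account": sum(r["duration"] for r in effective)
                                    / total_accounts,
        "avg_calls_per_account": len(effective) / total_accounts,
    }
```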
Finally, the result data are synchronized into a second data table in a preset columnar storage database to obtain the target data table. The columnar storage database is a ClickHouse database. ClickHouse is used because it is an open-source column-oriented database with a very high compression ratio; it can be deployed independently without relying on any other component, supports cluster deployment, and offers extremely high query and batch-write efficiency. In addition, fields corresponding to the various operation indexes are preset in the second data table; after the Flink calculation engine has calculated the result data corresponding to the operation indexes, a ClickHouse sink is created in Flink, so that the result data corresponding to the operation indexes can be stored in batches, in one-to-one correspondence, into the second data table in ClickHouse to obtain the target data table. After the target data table is generated, the corresponding result data in the ClickHouse target data table can be read through an interface written in Java, and a preset operation bulletin-board page can access the backend interface through Vue to display the result data of each operation index dimension on the page; query and download functions can also be provided. Further, after the target data table is generated, it can be displayed; the manner of displaying the target data table is not limited. For example, a preset operation bulletin board can be used to display the result data of each operation index contained in the target data table. For a case collection business scenario, the details of the operation-related indexes can be displayed visually through the bulletin-board page.
This enables the management layer to obtain real and accurate operation index data in real time as support for analysis and decision-making, thereby reducing losses caused by overdue cases, improving case collection efficiency, and ensuring cases are followed up in time.
Different from the existing approach of generating result data for an operation index through manual statistics, in this embodiment, after an operation index data acquisition request triggered by a user is received and the user is judged to have passed identity verification, a first data table corresponding to the operation index is acquired from a preset database; calculation processing is then performed on the data in the first data table based on the calculation rule of the operation index corresponding to the operation index identifier, obtaining the result data corresponding to the operation index; finally, the result data is synchronized into a second data table in a preset columnar storage database to obtain a target data table, so that generation and display of the result data of the operation index are completed in an automated manner. According to this embodiment, the target data table containing the result data of the operation index can be generated automatically, quickly, and accurately from the first data table corresponding to the operation index, so that a large amount of human resources need not be occupied, the manual workload is greatly reduced, the accuracy of the obtained result data of the operation index is effectively guaranteed, and the processing efficiency of generating the target data table is improved. In addition, displaying the target data table provides a visual display of the details of the relevant operation indexes, so that the management layer can obtain real and accurate data in real time as support for analysis and decision-making, which provides convenience for subsequent business processing and can improve the success rate and processing efficiency of business processing.
Further, in an embodiment of the present application, the step S20 includes:
s200: obtaining a preset number of expression recognition models, and determining a target expression recognition model from all the expression recognition models according to a preset rule; wherein the preset number is greater than 1, and each expression recognition model is generated by training on a different training sample set;
s201: generating a target expression recognition result corresponding to the face image through the target expression recognition model;
s202: judging whether the target expression recognition result belongs to a preset expression type or not;
s203: if the expression type belongs to the preset expression type, judging that the identity authentication is not passed, and limiting the response to the operation index data acquisition request;
s204: if the expression type does not belong to the preset expression type, judging whether specified user information which is the same as the user information is stored in a preset palm feature database or not;
s205: if the appointed user information is stored, performing feature extraction on the first palm print image and the first palm print vein image through the palm recognition model to obtain a first palm feature vector of the user;
s206: acquiring appointed palm feature information corresponding to the appointed user information from the palm feature database, comparing the first palm feature vector with the appointed palm feature information, and judging whether the first palm feature vector and the appointed palm feature information are the same feature information or not;
s207: if they are the same feature information, determining that identity verification passes;
s208: if they are not the same feature information, determining that identity verification does not pass.
As described in the foregoing steps S200 to S208, the step of performing identity verification on the user based on the preset expression recognition model and the preset palm recognition model, and determining whether the verification passes, may specifically include: firstly, a preset number of expression recognition models are obtained, and a target expression recognition model is determined from all the expression recognition models according to a preset rule. The preset number is greater than 1, and each expression recognition model is generated by training on a different training sample set. Specifically, an expression recognition model is a model obtained by training an initial recognition model, such as a neural network, on a preset training sample set using a machine learning method. The training process of the expression recognition model can refer to the prior art and is not described in detail here. The preset number may be any preset number greater than 1, for example 3, 4, or 5. Because the different expression recognition models are trained on different training sample sets, for the same facial expression presented in a given face image, the expression recognition results output by the different expression recognition models are not completely the same. In addition, the number of target expression recognition models may be one or more; the specific implementation of determining the target expression recognition model from all the expression recognition models according to the preset rule is further described in subsequent embodiments and is not repeated here. A target expression recognition result corresponding to the face image is then generated through the target expression recognition model.
Whether the target expression recognition result belongs to a preset expression type is then judged. The preset expression type refers to expressions that a user with deceptive intent tends to reveal during identity verification, such as fear, panic, or tension. For example, if there are multiple target expression recognition models, and the designated expression recognition result that occurs most frequently among all the results obtained from those models belongs to the preset expression type, doubt is cast on whether the user is the legitimate user. If the result belongs to the preset expression type, it is determined that identity verification does not pass, and the response to the operation index data acquisition request is restricted. That is, when the target expression recognition result is detected to belong to the preset expression type, for example any one of fearful, panicked, or tense expressions, the current user is regarded as an illegitimate user and the response to the operation index data acquisition request is restricted, so that adverse consequences caused by responding to an operation index data acquisition request input by an illegitimate user can be avoided, and data security in the generation process of the operation index data is effectively guaranteed.
In addition, when the target expression recognition result is detected to belong to the preset expression type, restricting the response to the operation index data acquisition request means that feature extraction and comparison of the user's first palm print image and first palm print vein image are no longer needed, which can effectively reduce the energy consumption of the device and improve its processing intelligence. If the result does not belong to the preset expression type, it is judged whether the preset palm feature database stores designated user information identical to the user information. If the designated user information is stored, feature extraction is performed on the first palm print image and the first palm print vein image through the palm recognition model to obtain the first palm feature vector of the user. The designated palm feature information corresponding to the designated user information is subsequently acquired from the palm feature database, the first palm feature vector is compared with the designated palm feature information, and it is judged whether the two are the same feature information. Specifically, the similarity between the first palm feature vector and the designated palm feature information can be calculated using an existing similarity calculation method; if the calculated similarity is greater than a preset similarity threshold, the two are determined to be the same feature information, and otherwise they are determined to be different feature information. The value of the similarity threshold is not specifically limited and can be set according to actual requirements.
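The comparison step above can be sketched with cosine similarity, one common choice; the patent only says "an existing similarity calculation method", so both the metric and the threshold value here are assumptions of this illustration:

```python
# Hypothetical sketch of the palm-feature comparison: compute a similarity
# between the extracted first palm feature vector and the stored designated
# palm feature information, then compare against a preset threshold.
import math

SIMILARITY_THRESHOLD = 0.9  # illustrative value; the patent leaves it configurable

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def is_same_feature(first_vec, stored_vec, threshold=SIMILARITY_THRESHOLD):
    """Same feature information iff similarity exceeds the preset threshold."""
    return cosine_similarity(first_vec, stored_vec) > threshold

print(is_same_feature([0.1, 0.9, 0.4], [0.1, 0.9, 0.4]))   # identical vectors pass
print(is_same_feature([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))   # orthogonal vectors fail
```

Any other similarity measure (e.g. Euclidean distance with an inverted threshold) would fit the same step.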
If they are the same feature information, it is determined that identity verification passes; if not, it is determined that identity verification does not pass. This embodiment realizes identity verification of the user through multiple verification modes such as expression recognition and palm feature comparison, effectively improving the accuracy and reliability of identity verification, avoiding adverse consequences caused by responding to an operation index data acquisition request input by an illegitimate user, and effectively guaranteeing data security in the generation process of the operation index data. Moreover, the received operation index data acquisition request is subsequently responded to, and the index data of the corresponding operation index generated, only when the user passes identity verification, which effectively improves the security of request processing.
Further, in an embodiment of the present application, the step S200 includes:
s2000: acquiring a preset number of the expression recognition models, and respectively acquiring a first test accuracy of each expression recognition model based on a preset first verification sample set;
s2001: comparing all the first test accuracies, and screening out the second test accuracy with the maximum value from among them;
s2002: acquiring a first expression recognition model corresponding to the second test accuracy from all the expression recognition models;
s2003: taking the first expression recognition model as the target expression recognition model;
the step S201 includes:
s2010: inputting the facial image into the first expression recognition model, and outputting a first expression recognition result corresponding to the facial image through the first expression recognition model;
s2011: and taking the first expression recognition result as the target expression recognition result.
As described in steps S2000 to S2011, the step of obtaining a preset number of expression recognition models and determining a target expression recognition model from all the expression recognition models according to a preset rule may specifically include: firstly, a preset number of expression recognition models are obtained, and the first test accuracy of each expression recognition model is respectively obtained based on a preset first verification sample set. The process of obtaining the first test accuracy of each expression recognition model may include: obtaining the first verification sample set, which comprises a plurality of test data and the expression information corresponding to each test datum; respectively inputting the test data into a designated expression recognition model, which is any one of the expression recognition models, and acquiring the third expression recognition results output by the designated expression recognition model for the respective test data; based on the expression information corresponding to each test datum, identifying the designated expression recognition results that are correctly recognized among all the third expression recognition results; acquiring a first number of the designated expression recognition results and a second number of the third expression recognition results; calculating the ratio of the first number to the second number; and taking the ratio as the first test accuracy of the designated expression recognition model. All the first test accuracies are then compared, and the second test accuracy with the maximum value is screened out from among them.
The first expression recognition model corresponding to the second test accuracy is then acquired from all the expression recognition models, and finally the first expression recognition model is taken as the target expression recognition model. Further, the step of generating a target expression recognition result corresponding to the face image through the target expression recognition model includes: firstly, inputting the face image into the first expression recognition model and outputting, through the first expression recognition model, a first expression recognition result corresponding to the face image; and then taking the first expression recognition result as the target expression recognition result. In this embodiment, after the preset number of pre-trained expression recognition models are obtained, the test accuracy of each model is considered, and the first expression recognition model with the highest test accuracy is used as the final target expression recognition model. Because the target expression recognition model so determined has the highest recognition accuracy, taking the first expression recognition result it produces for the user's face image as the target expression recognition result effectively guarantees the accuracy of the generated result. This facilitates accurate subsequent identity verification of the user based on the target expression recognition result and the palm recognition model, so that the response to the operation index data acquisition request can be executed accurately according to the obtained verification result, effectively improving the security of request processing.
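The accuracy-based selection in steps S2000 to S2003 reduces to "accuracy = correct / total, then take the arg-max model". A minimal sketch, with the models stubbed as simple callables since the patent does not specify their internals:

```python
# Sketch of steps S2000-S2003: compute each model's first test accuracy as
# (correctly recognized results / total results) on the first verification
# sample set, then keep the model with the maximum accuracy.

def compute_accuracy(model, samples):
    """Ratio of the first number (correct results) to the second number (all results)."""
    correct = sum(1 for image, label in samples if model(image) == label)
    return correct / len(samples)

def select_target_model(models, samples):
    """Return the expression recognition model with the highest first test accuracy."""
    return max(models, key=lambda m: compute_accuracy(m, samples))

# Stub models and a toy verification set of (image, expected expression) pairs.
samples = [("img1", "happy"), ("img2", "sad"), ("img3", "neutral")]
model_a = lambda image: "happy"                                   # right on 1 of 3
model_b = {"img1": "happy", "img2": "sad", "img3": "angry"}.get   # right on 2 of 3
target = select_target_model([model_a, model_b], samples)
print(target is model_b)  # model_b has the higher accuracy
```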
Further, in an embodiment of the present application, the step S200 includes:
s2004: acquiring a preset number of the expression recognition models, and respectively acquiring a third test accuracy rate of each expression recognition model based on a preset second verification sample set;
s2005: screening out a second expression recognition model with a third test accuracy rate larger than a preset accuracy rate threshold value from all the expression recognition models;
s2006: generating a first recognition processing time of each second expression recognition model based on the second verification sample set;
s2007: screening out second identification processing time smaller than a preset processing time threshold from all the first identification processing time;
s2008: acquiring the third expression recognition models corresponding to the second recognition processing times from all the second expression recognition models; wherein the number of the third expression recognition models is plural;
s2009: taking the third expression recognition model as the target expression recognition model;
the step S201 includes:
s2012: respectively inputting the face image into each third expression recognition model, and acquiring the second expression recognition results respectively output by each third expression recognition model;
s2013: screening out a third expression recognition result with the largest number of times from all the second expression recognition results;
s2014: and taking the third expression recognition result as the target expression recognition result.
As described in the foregoing steps S2004 to S2014, the step of obtaining a preset number of expression recognition models and determining a target expression recognition model from all the expression recognition models according to a preset rule may specifically include: firstly, a preset number of expression recognition models are obtained, and the third test accuracy of each expression recognition model is respectively obtained based on a preset second verification sample set. The specific process of calculating the third test accuracy of each expression recognition model can refer to the calculation process of the first test accuracy and is not repeated here. The second expression recognition models whose third test accuracy is greater than a preset accuracy threshold are then screened out from all the expression recognition models. The value of the preset accuracy threshold is not specifically limited and can be set according to actual requirements. The first recognition processing time of each second expression recognition model is then generated based on the second verification sample set; the specific implementation of this step is further described in subsequent embodiments and is not repeated here. The second recognition processing times smaller than a preset processing time threshold are subsequently screened out from all the first recognition processing times. The value of the preset processing time threshold is likewise not specifically limited and can be set according to actual requirements.
Finally, the third expression recognition models corresponding to the second recognition processing times are acquired from all the second expression recognition models and taken as the target expression recognition models; the number of third expression recognition models is plural. Further, the step of generating a target expression recognition result corresponding to the face image through the target expression recognition models includes: firstly, respectively inputting the face image into each third expression recognition model and acquiring the second expression recognition results respectively output by each of them; then screening out, from all the second expression recognition results, the third expression recognition result that occurs the greatest number of times; and taking that third expression recognition result as the target expression recognition result. In this embodiment, both the recognition processing time and the test accuracy of the preset number of pre-trained expression recognition models are comprehensively considered, so that the models whose test accuracy is greater than the preset accuracy threshold and whose recognition processing time is less than the preset processing time threshold are screened out from all the expression recognition models as the final target expression recognition models.
The target expression recognition models so determined have both high recognition accuracy and high processing efficiency. Performing expression recognition on the user's face image with the plurality of target expression recognition models, and screening out from all the obtained second expression recognition results the third expression recognition result with the greatest number of occurrences as the final target expression recognition result, ensures the comprehensiveness and accuracy of the generated result and avoids the inaccuracy that can arise when only a single expression recognition model is used to recognize the face image. This effectively improves the accuracy of expression recognition on the face image and, in turn, the accuracy of identity verification of the user based on the expression recognition models.
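The vote in steps S2012 to S2014 is a plain majority vote over the model outputs, which can be sketched directly (the model outputs below are stubbed stand-ins):

```python
# Sketch of steps S2012-S2014: run the face image through each selected model
# and take the recognition result that occurs the greatest number of times.
from collections import Counter

def majority_vote(results):
    """Return the expression recognition result with the highest occurrence count."""
    return Counter(results).most_common(1)[0][0]

# Stub outputs from three hypothetical third expression recognition models.
second_results = ["neutral", "happy", "neutral"]
print(majority_vote(second_results))  # "neutral" appears twice
```

Note that `Counter.most_common` breaks exact ties by insertion order; the patent does not specify a tie-breaking rule.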
Further, in an embodiment of the present application, the step S2006 includes:
s20060: obtaining the second set of verification samples; wherein the second set of validation samples comprises a plurality of validation sample data;
s20061: when a fourth expression recognition model receives each input verification sample data, respectively counting the processing time of the fourth expression recognition model for outputting the expression recognition result corresponding to each verification sample data; the fourth expression recognition model is any one of the second expression recognition models;
s20062: calculating a sum value between all the processing times;
s20063: acquiring the number of all the processing time;
s20064: calculating a quotient between the sum and the amount of all of the processing times;
s20065: and taking the quotient value as the first recognition processing time of the fourth expression recognition model.
As described in the above steps S20060 to S20065, the step of generating the first recognition processing time of each second expression recognition model based on the second verification sample set may specifically include: firstly, the second verification sample set, which comprises a plurality of verification sample data, is obtained. The second verification sample set may be generated from the training sample set; for example, data in a preset numerical proportion may be randomly taken from any one training sample set as the second verification sample set, where the proportion can be set according to actual requirements, for example 30%. When the fourth expression recognition model, which is any one of the second expression recognition models, receives each input verification sample datum, the processing time for the fourth expression recognition model to output the corresponding expression recognition result is respectively counted. The processing time is the time taken by the fourth expression recognition model, after receiving any verification sample datum, to output the corresponding expression recognition result. For example, if the time at which the fourth expression recognition model receives verification sample datum m is T1, and the time at which it successfully outputs the expression recognition result for m is T2, then the processing time of the fourth expression recognition model for m is T2 - T1. The sum of all the processing times is then calculated, the number of all the processing times is obtained, and the quotient of the sum and the number is subsequently calculated.
Finally, the quotient is taken as the first recognition processing time of the fourth expression recognition model. In this embodiment, the recognition processing time of each second expression recognition model can be quickly calculated based on the second verification sample set, which facilitates subsequently screening all the second expression recognition models based on both their recognition processing time and their processing accuracy to produce the final target expression recognition models, and in turn accurately generating the target expression recognition result corresponding to the face image so that identity verification of the user can proceed based on the target expression recognition result and the palm recognition model. Because the target expression recognition models screened out from the second expression recognition models have both higher processing accuracy and higher processing efficiency, the processing efficiency of generating the target expression recognition result for the user can be effectively improved while the accuracy of the result output by the target expression recognition models is ensured.
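The computation in steps S20060 to S20065 is simply the arithmetic mean of the per-sample processing times. A minimal sketch, with the T2 - T1 measurements stubbed as fixed numbers rather than measured around real inference:

```python
# Sketch of steps S20060-S20065: the first recognition processing time of a
# model is the quotient of the sum of all its per-sample processing times and
# the number of those times. The times below are hypothetical stand-ins.

def first_recognition_time(processing_times):
    """Mean processing time: sum of all processing times / their number."""
    return sum(processing_times) / len(processing_times)

# Hypothetical T2 - T1 measurements (seconds) for one second expression model.
times = [0.12, 0.10, 0.14]
print(round(first_recognition_time(times), 2))  # 0.12
```

In practice each entry would be obtained by timestamping the model before input (T1) and after output (T2), as the text describes.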
Further, in an embodiment of the application, the step S205 includes:
s2050: calling the palm recognition model;
s2051: inputting the first palm print image and the first palm print vein image into the palm recognition model, and performing convolution operation and feature extraction on the first palm print image and the first palm print vein image through the palm recognition model to obtain surface texture features of the first palm print image and palm vein blood vessel distribution features of the first palm print vein image;
s2052: performing feature fusion on the surface texture features and the palm vein blood vessel distribution features to obtain fused feature vectors;
s2053: and taking the fused feature vector as the first palm feature vector.
As described in the foregoing steps S2050 to S2053, the step of performing feature extraction on the first palm print image and the first palm print vein image using the palm recognition model to obtain the first palm feature vector of the user may specifically include: firstly, the palm recognition model is called. The palm recognition model may be obtained by acquiring a plurality of palm images and training a convolutional neural network model on them. A convolutional neural network is a feedforward neural network whose basic structure comprises two kinds of layers: the first is the feature extraction layer, in which the input of each neuron is connected to a local receptive field of the previous layer so as to extract local features; the second is the feature mapping layer, in which each computation layer of the network is composed of a plurality of feature maps, each feature map is a plane, and the weights of all neurons on the plane are equal. The specific training process of the palm recognition model can refer to the training process of an existing convolutional neural network model and is not elaborated here. The first palm print image and the first palm print vein image are then input into the palm recognition model, and convolution operations and feature extraction are performed on them through the palm recognition model to obtain the surface texture features of the first palm print image and the palm vein blood vessel distribution features of the first palm print vein image. Feature fusion is then performed on the surface texture features and the palm vein blood vessel distribution features to obtain a fused feature vector, and finally the fused feature vector is taken as the first palm feature vector.
In this embodiment, convolution operations and feature extraction are performed on the first palm print image and the first palm print vein image through the palm recognition model to obtain the corresponding surface texture features and palm vein blood vessel distribution features, which are then fused into the first palm feature vector of the user, so that the user's identity information can be recognized more accurately. In addition, palm prints and vein blood vessel distribution do not change greatly with factors such as age, so the method has good robustness; comparing the first palm feature vector, obtained by combining multiple feature vectors, with the designated palm feature information corresponding to the designated user information stored in the palm feature database yields the user's identity verification result effectively and accurately. The response to the operation index data acquisition request can then be executed accurately according to the obtained verification result, effectively improving the standardization and security of request processing.
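The fusion step in S2052 to S2053 can be sketched as vector concatenation, one common feature-fusion choice; the patent does not fix the fusion method, so this choice and the stand-in feature values are assumptions:

```python
# Hypothetical sketch of steps S2052-S2053: fuse the surface texture features
# and palm vein blood vessel distribution features into one palm feature
# vector. Concatenation is used here; other fusion schemes (e.g. weighted
# sums) would also fit the description.

def fuse_features(texture_features, vein_features):
    """Concatenate the two feature vectors into the fused first palm feature vector."""
    return list(texture_features) + list(vein_features)

texture = [0.31, 0.07, 0.88]   # stand-in surface texture features
veins = [0.15, 0.64]           # stand-in palm vein distribution features
fused = fuse_features(texture, veins)
print(len(fused))  # 5-dimensional fused vector
```

In a real pipeline both inputs would be the outputs of the CNN's feature extraction layers rather than hand-written lists.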
Further, in an embodiment of the present application, before the step S204, the method includes:
s2040: collecting a second palm print image and a second palm print vein image of a registered user;
s2041: performing feature extraction on the second palm print image and the second palm print vein image through the palm recognition model to obtain a second palm feature vector of the registered user;
s2042: establishing a one-to-one mapping relation between the user information of the registered user and the second palm feature vector;
s2043: storing the user information of the registered user and the second palm feature vector of the registered user in a preset original database based on the mapping relation to obtain a stored original database;
s2044: and taking the stored original database as the palm characteristic database.
As described in steps S2040 to S2044 above, before the step of judging whether the preset palm feature database stores designated user information identical to the user information, a process of creating the palm feature database may be included. Specifically, a second palm print image and a second palm print vein image of a registered user are collected first. A registered user may be a target user with a legitimate identity who has the authority to instruct the device to generate the result data of the operation index. Feature extraction is then performed on the second palm print image and the second palm print vein image through the palm recognition model to obtain the second palm feature vector of the registered user, and a one-to-one mapping relation between the user information of the registered user and the second palm feature vector is established. The user information and the second palm feature vector of the registered user are subsequently stored, based on the mapping relation, in a preset original database, which may be a blank database storing no data, and the stored original database is finally taken as the palm feature database. In this embodiment, the palm feature database storing the user information and second palm feature vectors of registered users is generated in advance from the second palm print images and second palm vein images of the registered users, which facilitates subsequent identity verification of users based on the palm feature database, so that the response to the operation index data acquisition request can be executed accurately according to the obtained verification result, effectively improving the security of request processing.
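The registration flow in steps S2040 to S2044 amounts to a one-to-one mapping from user information to palm feature vector. In this sketch a plain dict stands in for the preset original database, which is an assumption of the illustration:

```python
# Sketch of steps S2040-S2044: store each registered user's information with
# the second palm feature vector in a one-to-one mapping. A dict stands in
# for the preset original database used in the patent.

def register_user(palm_db, user_info, second_palm_vector):
    """Persist the user-info -> palm-feature mapping into the database."""
    palm_db[user_info] = list(second_palm_vector)
    return palm_db

def lookup_designated_features(palm_db, user_info):
    """Return the designated palm feature information, or None if unregistered."""
    return palm_db.get(user_info)

palm_db = {}  # blank original database with no data stored
register_user(palm_db, "user-001", [0.2, 0.7, 0.4])
print(lookup_designated_features(palm_db, "user-001"))  # [0.2, 0.7, 0.4]
print(lookup_designated_features(palm_db, "user-999"))  # None
```

The `None` case corresponds to step S204's negative branch: no designated user information identical to the requesting user's information is stored.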
The model-based data generation method in the embodiment of the present application may also be applied to the field of blockchain; for example, data such as the result data may be stored on a blockchain. Storing and managing the result data with a blockchain effectively ensures the security and tamper-resistance of the result data.
A blockchain is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks linked by cryptographic methods, where each data block contains information about a batch of network transactions and is used to verify the validity (tamper-resistance) of that information and to generate the next block. A blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
The blockchain underlying platform can comprise processing modules such as user management, basic services, smart contracts and operation monitoring. The user management module is responsible for identity information management of all blockchain participants, including maintaining the generation of public and private keys (account management), key management, and maintaining the correspondence between a user's real identity and blockchain address (permission management); under authorization, it can supervise and audit the transactions of certain real identities and provide rule configuration for risk control (risk-control audit). The basic service module is deployed on all blockchain node devices to verify the validity of service requests and, after consensus on a valid request is reached, record it to storage; for a new service request, the basic service first performs interface adaptation parsing and authentication (interface adaptation), then encrypts the service information via a consensus algorithm (consensus management), transmits it completely and consistently to the shared ledger (network communication), and records and stores it. The smart contract module is responsible for contract registration and issuance, contract triggering and contract execution; developers can define contract logic in a programming language, publish it to the blockchain (contract registration), and trigger execution by a called key or other event according to the logic of the contract terms to complete the contract logic; the module also provides functions for upgrading and canceling contracts. The operation monitoring module is mainly responsible for deployment during product release, configuration modification, contract settings and cloud adaptation, and for visual output of real-time status in product operation, such as alarms, monitoring network conditions, and monitoring node device health.
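The hash-linking that makes stored result data tamper-evident can be illustrated with a minimal sketch. This is a generic demonstration of the blockchain principle described above, not the platform's actual implementation; block structure and hashing choices here are assumptions.

```python
import hashlib
import json

def make_block(transactions, prev_hash):
    # Each block records a batch of transactions plus the previous block's hash.
    body = {"transactions": transactions, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify_chain(chain):
    # The chain is valid only if every block points at its predecessor's hash.
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev_hash"] != prev["hash"]:
            return False
    return True

genesis = make_block(["result-data-tx0"], "0" * 64)
chain = [genesis, make_block(["result-data-tx1"], genesis["hash"])]
```

Altering any stored transaction changes that block's hash, so every later block's `prev_hash` link breaks and `verify_chain` fails.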
Referring to fig. 2, an embodiment of the present application further provides a model-based data generation apparatus, including:
the system comprises a receiving module 1, a processing module and a display module, wherein the receiving module is used for receiving an operation index data acquisition request; the operation index data acquisition request comprises an operation index identifier;
the verification module 2 is used for acquiring user information of a user, a first palm print image, a first palm vein image and a face image of the user, verifying the identity of the user based on a preset expression recognition model and a preset palm recognition model, and judging whether the identity verification passes;
the obtaining module 3 is used for obtaining a first data table corresponding to the operation index identifier from a preset database if the identity authentication passes;
the calculation module 4 is configured to perform calculation processing on the data in the first data table based on a calculation rule of the operation index corresponding to the operation index identifier to obtain result data corresponding to the operation index;
and the synchronization module 5 is used for synchronizing the result data in a second data table in a preset column type storage database to obtain a target data table.
In this embodiment, the operations that the modules or units are respectively configured to execute correspond to the steps of the model-based data generation method in the foregoing embodiment one to one, and are not described herein again.
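The cooperation of the five modules can be sketched as a single request-handling flow. Every name below (`verify_identity`, `rules`, `column_store`, ...) is illustrative only; the patent does not prescribe these interfaces.

```python
def handle_request(request, verify_identity, source_db, rules, column_store):
    metric_id = request["metric_id"]                       # receiving module 1
    if not verify_identity(request["user"]):               # verification module 2
        return None                                        # response is restricted
    first_table = source_db[metric_id]                     # obtaining module 3
    result = rules[metric_id](first_table)                 # calculation module 4
    column_store.setdefault(metric_id, []).append(result)  # synchronization module 5
    return column_store[metric_id]                         # the target data table
```

For instance, with `rules = {"m1": sum}` and a first data table `[1, 2, 3]`, an authenticated request yields a target data table `[6]`, while a failed verification returns `None`.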
Further, in an embodiment of the present application, the verification module 2 includes:
the determining submodule is used for acquiring a preset number of expression recognition models and determining a target expression recognition model from all the expression recognition models according to a preset rule; the preset number is more than 1, and each expression recognition model is generated based on different training sample sets in a training mode;
the first generation submodule is used for generating a target expression recognition result corresponding to the face image through the target expression recognition model;
the first judgment submodule is used for judging whether the target expression recognition result belongs to a preset expression type;
the response submodule is used for, if the target expression recognition result belongs to the preset expression type, judging that the identity authentication is not passed and limiting the response to the operation index data acquisition request;
the second judgment submodule is used for, if the target expression recognition result does not belong to the preset expression type, judging whether the preset palm feature database stores designated user information identical to the user information;
the second generation submodule is used for extracting the characteristics of the first palm print image and the first palm print vein image through the palm recognition model if the designated user information is stored, so as to obtain a first palm characteristic vector of the user;
a third judging sub-module, configured to obtain specified palm feature information corresponding to the specified user information from the palm feature database, compare the first palm feature vector with the specified palm feature information, and judge whether the first palm feature vector and the specified palm feature information are the same feature information;
the first determination submodule is used for judging that the identity authentication is passed if the first palm feature vector and the designated palm feature information are the same feature information;
and the second determination submodule is used for judging that the identity authentication is not passed if they are not the same feature information.
In this embodiment, the operations that the modules or units are respectively configured to execute correspond to the steps of the model-based data generation method in the foregoing embodiment one to one, and are not described herein again.
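The two-stage verification performed by these submodules (expression check, then palm feature lookup and comparison) can be sketched as follows. The function and its arguments are hypothetical names introduced for illustration.

```python
def authenticate(target_expression, user_info, first_palm_vector,
                 preset_expression_types, palm_feature_db):
    # Stage 1: reject when the recognized expression belongs to the preset types.
    if target_expression in preset_expression_types:
        return False
    # Stage 2: look up the designated user information in the palm feature database.
    stored_vector = palm_feature_db.get(user_info)
    if stored_vector is None:
        return False  # no designated user information stored
    # Pass only when the extracted vector matches the stored feature information.
    return stored_vector == first_palm_vector
```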
Further, in an embodiment of the present application, the determining sub-module includes:
the first obtaining unit is used for obtaining a preset number of the expression recognition models and respectively obtaining a first test accuracy rate of each expression recognition model based on a preset first verification sample set;
the first screening unit is used for comparing the sizes of all the first test accuracy rates and screening out a second test accuracy rate with the largest value from all the first test accuracy rates;
a second obtaining unit, configured to obtain, from all the expression recognition models, a first expression recognition model corresponding to the second test accuracy;
a first determining unit, configured to use the first expression recognition model as the target expression recognition model;
the first generation submodule includes:
the first input unit is used for inputting the face image into the first expression recognition model and outputting a first expression recognition result corresponding to the face image through the first expression recognition model;
and the second determining unit is used for taking the first expression recognition result as the target expression recognition result.
In this embodiment, the operations that the modules or units are respectively configured to execute correspond to the steps of the model-based data generation method in the foregoing embodiment one to one, and are not described herein again.
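Selecting the target expression recognition model by highest first test accuracy reduces to an argmax over the per-model accuracies. A minimal sketch, assuming accuracies have already been measured on the first verification sample set:

```python
def pick_target_model(models, first_accuracies):
    """models: name -> model object; first_accuracies: name -> first test accuracy."""
    # The "second test accuracy" is simply the maximum of the first test accuracies.
    best_name = max(first_accuracies, key=first_accuracies.get)
    return best_name, models[best_name]
```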
Further, in an embodiment of the present application, the determining sub-module includes:
the third obtaining unit is used for obtaining a preset number of the expression recognition models and respectively obtaining a third test accuracy rate of each expression recognition model based on a preset second verification sample set;
the second screening unit is used for screening out a second expression recognition model with a third testing accuracy rate larger than a preset accuracy rate threshold value from all the expression recognition models;
a first generation unit configured to generate a first recognition processing time of each of the second expression recognition models based on the second verification sample set;
the third screening unit is used for screening out second identification processing time smaller than a preset processing time threshold from all the first identification processing time;
a fourth obtaining unit, configured to obtain a third expression recognition model corresponding to the second recognition processing time from all the second expression recognition models; wherein there are a plurality of the third expression recognition models;
a third determining unit, configured to use the third expression recognition model as the target expression recognition model;
the first generation submodule includes:
the second input unit is used for inputting the face image into each third expression recognition model respectively, and acquiring the second expression recognition results respectively output by the third expression recognition models;
the fourth screening unit is used for screening out, from all the second expression recognition results, the third expression recognition result that occurs most frequently;
and the fourth determining unit is used for taking the third expression recognition result as the target expression recognition result.
In this embodiment, the operations that the modules or units are respectively configured to execute correspond to the steps of the model-based data generation method in the foregoing embodiment one to one, and are not described herein again.
Further, in an embodiment of the present application, the first generating unit includes:
a first obtaining subunit, configured to obtain the second verification sample set; wherein the second set of validation samples comprises a plurality of validation sample data;
the statistical subunit is configured to, when the fourth expression recognition model receives each input verification sample data, separately count processing time for the fourth expression recognition model to output an expression recognition result corresponding to each verification sample data; the fourth expression recognition model is any one of the second expression recognition models;
a first calculating subunit configured to calculate a sum value between all the processing times;
a second obtaining subunit, configured to obtain the number of all the processing times;
a second calculating subunit configured to calculate a quotient value between the sum value and the number of all the processing times;
and the determining subunit is configured to use the quotient as the first recognition processing time of the fourth expression recognition model.
In this embodiment, the operations that the modules or units are respectively configured to execute correspond to the steps of the model-based data generation method in the foregoing embodiment one to one, and are not described herein again.
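The first recognition processing time computed by these subunits is just the mean per-sample inference time: the sum of the processing times divided by their number. A minimal sketch, with `model_fn` standing in for the fourth expression recognition model:

```python
import time

def first_recognition_processing_time(model_fn, validation_samples):
    processing_times = []
    for sample in validation_samples:
        start = time.perf_counter()
        model_fn(sample)                           # output the recognition result
        processing_times.append(time.perf_counter() - start)
    total = sum(processing_times)                  # the sum value
    return total / len(processing_times)           # quotient: sum / number of times
```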
Further, in an embodiment of the application, the second generation submodule includes:
the calling unit is used for calling the palm recognition model;
the second generation unit is used for inputting the first palm print image and the first palm print vein image into the palm recognition model, and performing convolution operation and feature extraction on the first palm print image and the first palm print vein image through the palm recognition model to obtain surface texture features of the first palm print image and palm vein blood vessel distribution features of the first palm print vein image;
the fusion unit is used for performing feature fusion on the surface texture features and the palm vein blood vessel distribution features to obtain fused feature vectors;
a fifth determining unit, configured to use the fused feature vector as the first palm feature vector.
In this embodiment, the operations that the modules or units are respectively configured to execute correspond to the steps of the model-based data generation method in the foregoing embodiment one to one, and are not described herein again.
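The feature fusion step can be illustrated with the simplest plausible strategy, concatenating the two feature vectors. The patent does not fix a fusion operation, so this is only an assumption; real systems might instead use weighted sums or a learned fusion layer.

```python
def fuse_palm_features(surface_texture, vein_distribution):
    # Concatenation: the fused vector carries both the palm print's surface
    # texture features and the palm vein blood vessel distribution features.
    return list(surface_texture) + list(vein_distribution)
```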
Further, in an embodiment of the present application, the verification module 2 includes:
the acquisition sub-module is used for acquiring a second palm print image and a second palm print vein image of the registered user;
the extraction submodule is used for performing feature extraction on the second palm print image and the second palm print vein image through the palm recognition model to obtain a second palm feature vector of the registered user;
the creating submodule is used for creating a one-to-one mapping relation between the user information of the registered user and the second palm feature vector;
the storage submodule is used for storing the user information of the registered user and the second palm feature vector of the registered user in a preset original database based on the mapping relation to obtain a stored original database;
and the determining submodule is used for taking the stored original database as the palm characteristic database.
In this embodiment, the operations that the modules or units are respectively configured to execute correspond to the steps of the model-based data generation method in the foregoing embodiment one to one, and are not described herein again.
Referring to fig. 3, an embodiment of the present application also provides a computer device, which may be a server and whose internal structure may be as shown in fig. 3. The computer device comprises a processor, a memory, a network interface, a display screen, an input device and a database connected through a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device comprises a storage medium and an internal memory; the storage medium stores an operating system, a computer program and a database, and the internal memory provides an environment in which the operating system and the computer program in the storage medium run. The database of the computer device is used for storing the operation index identifier, the user information, the first palm print image, the first palm print vein image, the face image, the first data table, the result data and the target data table. The network interface of the computer device is used for communicating with an external terminal through a network connection. The display screen of the computer device converts digital signals into optical signals so that characters and graphics are displayed on the screen; the input device is the main means of information exchange between the computer and the user or other equipment, and is used for transmitting data, instructions and marking information to the computer. The computer program is executed by the processor to implement the model-based data generation method.
When executing the computer program, the processor implements the steps of the model-based data generation method:
receiving an operation index data acquisition request; the operation index data acquisition request comprises an operation index identifier;
acquiring user information of a user, a first palm print image, a first palm print vein image and a face image of the user, performing identity authentication on the user based on a preset expression recognition model and a preset palm recognition model, and judging whether the identity authentication passes;
if the identity authentication is passed, acquiring a first data table corresponding to the operation index identification from a preset database;
calculating the data in the first data table based on the calculation rule of the operation index corresponding to the operation index identification to obtain result data corresponding to the operation index;
and synchronizing the result data in a second data table in a preset column type storage database to obtain a target data table.
Those skilled in the art will appreciate that the structure shown in fig. 3 is only a block diagram of a part of the structure related to the present application, and does not constitute a limitation to the apparatus and the computer device to which the present application is applied.
An embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the method for generating data based on a model is implemented, specifically:
receiving an operation index data acquisition request; the operation index data acquisition request comprises an operation index identifier;
acquiring user information of a user, a first palm print image, a first palm print vein image and a face image of the user, performing identity authentication on the user based on a preset expression recognition model and a preset palm recognition model, and judging whether the identity authentication passes;
if the identity authentication is passed, acquiring a first data table corresponding to the operation index identification from a preset database;
calculating the data in the first data table based on the calculation rule of the operation index corresponding to the operation index identification to obtain result data corresponding to the operation index;
and synchronizing the result data in a second data table in a preset column type storage database to obtain a target data table.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and which, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium provided herein and used in the embodiments may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), SyncLink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, apparatus, article or method that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, apparatus, article or method. Without further limitation, an element preceded by the phrase "comprising a(n)" does not exclude the presence of other like elements in a process, apparatus, article or method that includes the element.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. A method for model-based data generation, comprising:
receiving an operation index data acquisition request; the operation index data acquisition request comprises an operation index identifier;
acquiring user information of a user, a first palm print image, a first palm print vein image and a face image of the user, performing identity authentication on the user based on a preset expression recognition model and a preset palm recognition model, and judging whether the identity authentication passes;
if the identity authentication is passed, acquiring a first data table corresponding to the operation index identification from a preset database;
calculating the data in the first data table based on the calculation rule of the operation index corresponding to the operation index identification to obtain result data corresponding to the operation index;
and synchronizing the result data in a second data table in a preset column type storage database to obtain a target data table.
2. The model-based data generation method of claim 1, wherein the step of authenticating the user based on the preset expression recognition model and the palm recognition model and determining whether the authentication passes comprises:
obtaining a preset number of expression recognition models, and determining a target expression recognition model from all the expression recognition models according to a preset rule; the preset number is more than 1, and each expression recognition model is generated based on different training sample sets in a training mode;
generating a target expression recognition result corresponding to the face image through the target expression recognition model;
judging whether the target expression recognition result belongs to a preset expression type or not;
if the target expression recognition result belongs to the preset expression type, judging that the identity authentication is not passed, and limiting the response to the operation index data acquisition request;
if the target expression recognition result does not belong to the preset expression type, judging whether designated user information identical to the user information is stored in a preset palm feature database;
if the appointed user information is stored, performing feature extraction on the first palm print image and the first palm print vein image through the palm recognition model to obtain a first palm feature vector of the user;
acquiring appointed palm feature information corresponding to the appointed user information from the palm feature database, comparing the first palm feature vector with the appointed palm feature information, and judging whether the first palm feature vector and the appointed palm feature information are the same feature information or not;
if the identity information is the same characteristic information, the identity authentication is judged to be passed;
and if the identity authentication is not the same characteristic information, judging that the identity authentication is not passed.
3. The model-based data generation method of claim 2, wherein the step of obtaining a preset number of expression recognition models and determining a target expression recognition model from all the expression recognition models according to a preset rule comprises:
acquiring a preset number of the expression recognition models, and respectively acquiring a first test accuracy of each expression recognition model based on a preset first verification sample set;
comparing the sizes of all the first test accuracy rates, and screening out a second test accuracy rate with the maximum value from all the first test accuracy rates;
acquiring a first expression recognition model corresponding to the second test accuracy from all the expression recognition models;
taking the first expression recognition model as the target expression recognition model;
the step of generating a target expression recognition result corresponding to the face image through the target expression recognition model includes:
inputting the facial image into the first expression recognition model, and outputting a first expression recognition result corresponding to the facial image through the first expression recognition model;
and taking the first expression recognition result as the target expression recognition result.
4. The model-based data generation method of claim 2, wherein the step of obtaining a preset number of expression recognition models and determining a target expression recognition model from all the expression recognition models according to a preset rule comprises:
acquiring a preset number of the expression recognition models, and respectively acquiring a third test accuracy rate of each expression recognition model based on a preset second verification sample set;
screening out a second expression recognition model with a third test accuracy rate larger than a preset accuracy rate threshold value from all the expression recognition models;
generating a first recognition processing time of each second expression recognition model based on the second verification sample set;
screening out second identification processing time smaller than a preset processing time threshold from all the first identification processing time;
acquiring a third expression recognition model corresponding to the second recognition processing time from all the second expression recognition models; wherein there are a plurality of the third expression recognition models;
taking the third expression recognition model as the target expression recognition model;
the step of generating a target expression recognition result corresponding to the face image through the target expression recognition model includes:
inputting the face image into each third expression recognition model respectively, and acquiring second expression recognition results respectively output by each third expression recognition model;
screening out, from all the second expression recognition results, a third expression recognition result that occurs most frequently;
and taking the third expression recognition result as the target expression recognition result.
5. The model-based data generation method of claim 4, wherein the step of generating a first recognition processing time for each of the second expression recognition models based on the second verification sample set includes:
obtaining the second set of verification samples; wherein the second set of validation samples comprises a plurality of validation sample data;
when a fourth expression recognition model receives each input verification sample data, respectively counting the processing time of the fourth expression recognition model for outputting the expression recognition result corresponding to each verification sample data; the fourth expression recognition model is any one of the second expression recognition models;
calculating a sum value between all the processing times;
acquiring the number of all the processing time;
calculating a quotient between the sum value and the number of all the processing times;
and taking the quotient value as the first recognition processing time of the fourth expression recognition model.
6. The model-based data generation method according to claim 2, wherein the step of extracting features of the first palm print image and the first palm print vein image by the palm recognition model to obtain the first palm feature vector of the user comprises:
calling the palm recognition model;
inputting the first palm print image and the first palm print vein image into the palm recognition model, and performing convolution operation and feature extraction on the first palm print image and the first palm print vein image through the palm recognition model to obtain surface texture features of the first palm print image and palm vein blood vessel distribution features of the first palm print vein image;
performing feature fusion on the surface texture features and the palm vein blood vessel distribution features to obtain fused feature vectors;
and taking the fused feature vector as the first palm feature vector.
7. The model-based data generation method according to claim 2, wherein the step of determining whether the predetermined palm feature database stores the same designated user information as the user information is preceded by:
collecting a second palm print image and a second palm print vein image of a registered user;
performing feature extraction on the second palm print image and the second palm print vein image through the palm recognition model to obtain a second palm feature vector of the registered user;
establishing a one-to-one mapping relation between the user information of the registered user and the second palm feature vector;
storing the user information of the registered user and the second palm feature vector of the registered user in a preset original database based on the mapping relation to obtain a stored original database;
and taking the stored original database as the palm characteristic database.
8. A model-based data generation apparatus, comprising:
the receiving module is used for receiving an operation index data acquisition request; the operation index data acquisition request comprises an operation index identifier;
the verification module is used for acquiring user information of a user, a first palm print image, a first palm vein image and a face image of the user, verifying the identity of the user based on a preset expression recognition model and a preset palm recognition model, and judging whether the identity verification passes;
the acquisition module is used for acquiring a first data table corresponding to the operation index identification from a preset database if the identity authentication passes;
the calculation module is used for calculating and processing the data in the first data table based on the calculation rule of the operation index corresponding to the operation index identification to obtain result data corresponding to the operation index;
and the synchronization module is used for synchronizing the result data in a second data table in a preset column type storage database to obtain a target data table.
9. A computer device comprising a memory and a processor, the memory having stored therein a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202111187628.3A 2021-10-12 2021-10-12 Model-based data generation method and device, computer equipment and storage medium Pending CN113946579A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111187628.3A CN113946579A (en) 2021-10-12 2021-10-12 Model-based data generation method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111187628.3A CN113946579A (en) 2021-10-12 2021-10-12 Model-based data generation method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113946579A true CN113946579A (en) 2022-01-18

Family

ID=79330216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111187628.3A Pending CN113946579A (en) 2021-10-12 2021-10-12 Model-based data generation method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113946579A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117806913A (en) * 2024-02-28 2024-04-02 成都瑞虎电子科技有限公司 Intelligent manufacturing system safety assessment method
CN117806913B (en) * 2024-02-28 2024-05-03 成都瑞虎电子科技有限公司 Intelligent manufacturing system safety assessment method

Similar Documents

Publication Publication Date Title
CN111506722B (en) Knowledge graph question-answering method, device and equipment based on deep learning technology
CN112528259B (en) Identity verification method, device, computer equipment and storage medium
CN112464117A (en) Request processing method and device, computer equipment and storage medium
CN112347310A (en) Event processing information query method and device, computer equipment and storage medium
CN111368926B (en) Image screening method, device and computer readable storage medium
CN114676853A (en) Data processing method, device, equipment and medium
CN114283932B (en) Medical resource management method, device, electronic equipment and storage medium
CN113918526A (en) Log processing method and device, computer equipment and storage medium
CN112329629A (en) Evaluation method and device for online training, computer equipment and storage medium
CN114840387A (en) Micro-service monitoring method and device, computer equipment and storage medium
CN109766772A (en) Risk control method, device, computer equipment and storage medium
CN113889262A (en) Model-based data prediction method and device, computer equipment and storage medium
CN112036749A (en) Method and device for identifying risk user based on medical data and computer equipment
CN113672654B (en) Data query method, device, computer equipment and storage medium
CN114817055A (en) Regression testing method and device based on interface, computer equipment and storage medium
CN113946579A (en) Model-based data generation method and device, computer equipment and storage medium
CN114004639A (en) Preferential information recommendation method and device, computer equipment and storage medium
CN113873088A (en) Voice call interaction method and device, computer equipment and storage medium
CN113505805A (en) Sample data closed loop generation method, device, equipment and storage medium
CN112364136A (en) Keyword generation method, device, equipment and storage medium
CN114245204B (en) Video surface signing method and device based on artificial intelligence, electronic equipment and medium
CN111275059A (en) Image processing method and device and computer readable storage medium
CN114511200A (en) Job data generation method and device, computer equipment and storage medium
CN113627551A (en) Multi-model-based certificate classification method, device, equipment and storage medium
CN114399361A (en) Service request processing method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination