CN117541379A - Information self-certification method and device, electronic equipment and medium - Google Patents

Information self-certification method and device, electronic equipment and medium

Info

Publication number
CN117541379A
CN117541379A (Application No. CN202311724045.9A)
Authority
CN
China
Prior art keywords
information
user
fraud risk
model
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311724045.9A
Other languages
Chinese (zh)
Inventor
卢俊义
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202311724045.9A priority Critical patent/CN117541379A/en
Publication of CN117541379A publication Critical patent/CN117541379A/en
Pending legal-status Critical Current

Classifications

    • G06Q40/03 Credit; Loans; Processing thereof
    • G06N3/0442 Recurrent networks, e.g. Hopfield networks, characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/08 Learning methods
    • G06Q50/01 Social networking
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V30/10 Character recognition

Abstract

The embodiments of this specification disclose an information self-certification method, system, electronic device and medium. The information self-certification method comprises: acquiring user information; performing fraud risk detection on the user information to obtain a fraud risk degree corresponding to the user; extracting, based on the fraud risk degree, text information, image characteristics and user behavior auxiliary information corresponding to the user information; and fusing, based on a multi-mode large model, the text information, the image characteristics and the user behavior auxiliary information to generate a self-certification result corresponding to the user. The embodiments of this specification improve the accuracy and effectiveness of the self-certification system.

Description

Information self-certification method and device, electronic equipment and medium
Technical Field
One or more embodiments of the present disclosure relate to the field of information technology, and in particular, to an information self-certification method, system, electronic device, and medium.
Background
In the field of information technology, a self-certification method generally refers to a method in which a user proves the validity of submitted material by providing specific types of files, records or information. By self-certifying the user information, an auditing organization can carry out data auditing and risk assessment more effectively and provide more convenient and safer services. However, because user information can easily be tampered with, there is a risk that the self-certification system is bypassed by fraudulent means, which reduces the accuracy and effectiveness of the self-certification system.
Disclosure of Invention
The embodiment of the specification provides an information self-certification method, an information self-certification system, electronic equipment and a medium, wherein the technical scheme is as follows:
in a first aspect, embodiments of the present disclosure provide an information self-certification method, including: acquiring user information; performing fraud risk detection on the user information to obtain fraud risk degrees corresponding to the users; based on the fraud risk degree, extracting text information, image characteristics and user behavior auxiliary information corresponding to the user information; based on the multi-mode large model, fusion processing is carried out on the text information, the image characteristics and the user behavior auxiliary information, and a self-certification result corresponding to the user is generated.
In a second aspect, embodiments of the present disclosure provide an information self-certification device, including: the information acquisition module is used for acquiring user information; the risk detection module is used for carrying out fraud risk detection on the user information to obtain fraud risk degrees corresponding to the users; the information extraction module is used for extracting text information, image characteristics and user behavior auxiliary information corresponding to the user information based on the fraud risk degree; and the fusion module is used for carrying out fusion processing on the text information, the image characteristics and the user behavior auxiliary information based on the multi-mode large model to generate a self-certification result corresponding to the user.
In a third aspect, embodiments of the present disclosure provide an electronic device including a processor and a memory; the processor is connected with the memory; the memory is used for storing executable program code; the processor, by reading the executable program code stored in the memory, runs a program corresponding to the executable program code so as to perform the steps of the information self-certification method of the first aspect of the above embodiments.
In a fourth aspect, embodiments of the present disclosure provide a computer storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps of the information self-certification method of the first aspect of the embodiments described above.
The technical scheme provided by some embodiments of the present specification has the following beneficial effects:
user information can be acquired first; then carrying out fraud risk detection on the user information to obtain fraud risk degrees corresponding to the users; based on the fraud risk degree, extracting text information, image characteristics and user behavior auxiliary information corresponding to the user information; and then, based on the multi-mode large model, carrying out fusion processing on the text information, the image characteristics and the user behavior auxiliary information to generate a self-certification result corresponding to the user. The embodiment of the specification fully utilizes the acquired user information, combines the multi-mode large model to process the information, and realizes the completeness and the practical feasibility of the self-certification function of the information; in addition, the embodiment of the specification carries out fraud risk detection on the user information to obtain the fraud risk degree corresponding to the user, so that the possibility of resource loss is further reduced; the embodiment of the specification can fully automatically evaluate the credibility of the user information, improve the user experience, improve the accuracy and the effectiveness of the self-certification system based on fraud risk detection and the multi-mode large model, and provide basic information guarantee for constructing the self-certification system.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present description, the drawings that are required in the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present description, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic view of an application scenario of an information self-certification system provided in the present specification.
Fig. 2 is a schematic flow chart of an information self-certification method provided in the present specification.
Fig. 3 is a flow chart of another information self-certification method provided in the present specification.
Fig. 4 is a flow chart of another information self-certification method provided in the present specification.
Fig. 5 is a schematic diagram of a multi-modal large model data processing flow provided in the present specification.
Fig. 6 is a flow chart of another information self-certification method provided in the present specification.
Fig. 7 is a timing flow chart of the information self-certification system provided in the present specification.
Fig. 8 is a schematic structural diagram of an information self-certification device provided in the present specification.
Fig. 9 is a schematic structural diagram of an electronic device provided in the present specification.
Detailed Description
The technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification.
The terms first, second and the like in the description and in the claims and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the term "include" and any variations thereof are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements, but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
The execution subject of the information self-certification method provided by the embodiments of the present disclosure may be an information self-certification device provided by the embodiments of the present disclosure, or a server integrated with the information self-certification device, where the information self-certification device may be implemented in a hardware or software manner.
Before describing the technical scheme of the invention, related technical terms are briefly explained:
Multimode: multimodal refers to the ability to process a variety of different types of data in the fields of artificial intelligence and machine learning. These data types may include text, images, audio, video, and the like. Multimodal methods aim to integrate and process these heterogeneous data in order to obtain more comprehensive information and better performance in a variety of complex tasks. In the field of deep learning, multimodal methods generally involve training a model using multiple input modalities (e.g., text and images) so that the model can understand and process multiple data types simultaneously. This approach may be applied to tasks such as emotion analysis, visual question-and-answer, video description generation, etc., to improve the model's understanding of real world complex data.
Large model: in machine learning and deep learning, models with very large numbers of parameters and complexity. These models typically consist of millions to billions of parameters, including multiple levels of neural network structures or other complex learning structures.
Self-certification system: the user needs to upload a relevant certificate for verifying the identity of the individual before using the identity-related rights. The self-certification system is used for verifying whether the user uploading data item accords with the real information of the user.
Multi-modal large model: the deep learning model consists of a plurality of sub-models, each sub-model is specially used for processing one data type, and then information of different modes is integrated through joint training so as to improve the comprehensive performance of the model. The multimodal big model is capable of processing many different types of data, such as text, images, audio, etc. The multi-mode large model combines information of various input data to perform joint training and reasoning so as to solve complex cross-mode tasks. By integrating information of different modalities, the multimodal big model can exhibit powerful capabilities on tasks such as visual question-answering, video description generation, multimedia retrieval, and the like.
The present specification, prior to describing in detail the information self-certification method in connection with one or more embodiments, describes a scenario in which the information self-certification method is applied.
Referring to fig. 1, fig. 1 is a schematic diagram of a scenario of an information self-certification system 100 according to an embodiment of the present invention, where the information self-certification system 100 may include an information self-certification device 110, a user terminal 120, a storage terminal 130, and the like. The user terminal 120 may be an android system-based terminal or an IOS system-based terminal, or may be a PC based on a Windows system or a MAC system, or the like. The user terminal 120 and the information self-certification device 110 may be connected through a communication network, where the communication network includes a wireless network and a wired network, and the wireless network includes one or more of a wireless wide area network, a wireless local area network, a wireless metropolitan area network, and a wireless personal area network. The network includes network entities such as routers, gateways, etc., which are not shown. The user terminal 120 may interact with the information self-certification device 110 through a communication network, for example, a user may upload user information to the information self-certification device 110 through the user terminal 120.
The storage terminal 130 may be respectively in communication connection with the information self-certification device 110 and the user terminal 120. The user may store material information into the storage terminal 130 through the user terminal 120, and the information self-certification device 110 may store related information such as user information, self-certification results, mining information, and image type recognition results into the storage terminal 130.
The information self-certification device can be integrated in electronic equipment, and the electronic equipment can be a terminal, a server and other equipment. The terminal can be a mobile phone, a tablet computer, an intelligent Bluetooth device, a notebook computer, a personal computer (Personal Computer, PC) or the like; the server may be a single server or a server cluster composed of a plurality of servers. In some embodiments, the information self-certification device may also be integrated in a plurality of electronic devices, for example, the information self-certification device may be integrated in a plurality of servers, and the information self-certification method of the application is implemented by the plurality of servers.
In the embodiment of the present disclosure, the information self-certification device 110 may be used to obtain user information; performing fraud risk detection on the user information to obtain fraud risk degrees corresponding to the users; based on the fraud risk degree, extracting text information, image characteristics and user behavior auxiliary information corresponding to the user information; based on the multi-mode large model, fusion processing is carried out on the text information, the image characteristics and the user behavior auxiliary information, and self-certification results and the like corresponding to the user are generated.
It should be noted that the schematic view of the scenario of the information self-certification system shown in fig. 1 is merely an example; the information self-certification system and the scenario described in the embodiment of the present invention are intended to more clearly describe the technical solution of the embodiment of the present invention and do not constitute a limitation on the technical solution provided by the embodiment of the present invention. Those skilled in the art will know that, with the evolution of the information self-certification system and the appearance of new scenarios, the technical solution provided by the embodiment of the present invention is equally applicable to similar technical problems.
Referring to fig. 2, fig. 2 is a flowchart of an information self-certification method according to an embodiment of the invention, and the information self-certification method can be performed by the information self-certification device 110 shown in fig. 1. The information self-certification method at least comprises the following steps:
200. user information is acquired.
In this embodiment, the user information may include information of data uploaded by the user when the user performs the information self-certification, and may further include statistical information related to the user acquired by the information self-certification device. The user information may be text information, picture information, user behavior information, and the like. The user behavior information may be preference and interest information, behavior data information, geographic location information, social information, and the like.
For example, the text information may be basic information, contact information, identity information, account information, and the like. The basic information may include information such as name, date of birth, etc. The contact information may include email address, phone number, mailing address, etc. The identity information may include information such as an identity ID for authentication. The account information may include information for login and account management such as a user name, password, security question answer, and the like.
For example, the picture information may be information such as a user image, an identification card photo, a social security photo, and the like.
For example, the user behavior information may be preference and interest information, behavior data information, geographic location information, social information, and the like. The preference and interest information may include user preferences, subscription information, browsing history, etc., and may be used by the information self-certification device to personalize recommendation and customize experience for the user. The behavior data information may include click records, purchase history, browsing behavior, etc., and may be used by the information self-certification device to analyze user behavior and trends. The geographic location information may include information of the geographic location where the user is located, and may be used by the information self-authenticating device to provide location services and geographically-related personalized content. Social information may include social media accounts, social relationship networks, etc., which may be used to make social interactions and personalized recommendations with a user.
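The kinds of user information listed above can be grouped in a simple data structure. The following is a minimal Python sketch of one possible grouping; the class and field names, and the example values, are illustrative assumptions and are not defined by this specification.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class UserInfo:
    """Illustrative container for the three kinds of user information described above."""
    # Text information: basic, contact, identity and account information
    text: Dict[str, str] = field(default_factory=dict)
    # Picture information: user image, identification card photo, social security photo, ...
    pictures: List[bytes] = field(default_factory=list)
    # User behavior information: preferences, behavior data, geographic location, social information
    behavior: Dict[str, object] = field(default_factory=dict)

# Hypothetical example
info = UserInfo(
    text={"name": "Zhang San", "identity_id": "xxxx"},
    pictures=[b"<jpeg bytes>"],
    behavior={"upload_count_5min": 3, "geo": "Hangzhou"},
)
```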
210. And carrying out fraud risk detection on the user information to obtain fraud risk degrees corresponding to the users.
In this embodiment, the fraud risk level may be an evaluation value for the fraud risk level that may exist in the user information, and so on.
In this embodiment, since the self-certification is initiated by the user, the source and the authenticity of the information uploaded by the user cannot be determined, and thus the embodiment performs fraud risk detection on the user information, thereby performing prejudgment on the uploading risk of the information, so as to obtain the fraud risk degree corresponding to the user, and further increase the accuracy and the effectiveness of the self-certification.
In some embodiments, performing fraud risk detection on user information to obtain a fraud risk degree corresponding to a user includes: based on the user information, acquiring picture fraud risk degree, behavior fraud risk degree and blacklist fraud risk degree corresponding to the user; and determining the fraud risk degree corresponding to the user according to the picture fraud risk degree, the behavior fraud risk degree and the blacklist fraud risk degree.
In this embodiment, the picture fraud risk degree may be a fraud risk degree obtained based on picture information in the user information. The behavior fraud risk degree may be a fraud risk degree derived based on user behavior information in the user information. The blacklist fraud risk degree may be a fraud risk degree obtained by comparing the user information with a blacklist database.
The embodiment can acquire the picture fraud risk degree, the behavior fraud risk degree and the blacklist fraud risk degree corresponding to the user based on the user information, and then determine the fraud risk degree corresponding to the user according to the picture fraud risk degree, the behavior fraud risk degree and the blacklist fraud risk degree. In the embodiment, the user is subjected to fraud risk detection from a plurality of different angles in the user information, so that the accuracy of fraud risk detection is improved.
For example, the embodiment may compare the picture fraud risk degree, the behavior fraud risk degree and the blacklist fraud risk degree, and select the risk degree with the largest value as the fraud risk degree corresponding to the user. The embodiment can also carry out weighted summation processing on the picture fraud risk degree, the behavior fraud risk degree and the blacklist fraud risk degree, so as to obtain the fraud risk degree corresponding to the user.
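A minimal sketch of the two combination strategies mentioned in this paragraph (taking the largest value, or a weighted summation); the weight values are illustrative assumptions, not values prescribed by this specification.

```python
def combine_fraud_risks(picture_risk: float,
                        behavior_risk: float,
                        blacklist_risk: float,
                        weights=(0.4, 0.3, 0.3),
                        use_max: bool = False) -> float:
    """Combine the picture, behavior and blacklist fraud risk degrees into one value."""
    risks = (picture_risk, behavior_risk, blacklist_risk)
    if use_max:
        # Strategy 1: select the risk degree with the largest value
        return max(risks)
    # Strategy 2: weighted summation of the three risk degrees
    return sum(w * r for w, r in zip(weights, risks))

# e.g. combine_fraud_risks(0.10, 0.05, 0.0) -> 0.055 with the default (assumed) weights
```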
In some embodiments, based on the user information, obtaining the picture fraud risk level, the behavior fraud risk level, and the blacklist fraud risk level corresponding to the user includes: acquiring picture information and user behavior information in the user information; image detection is carried out on the user based on the image information, so that the image fraud risk corresponding to the user is obtained; performing behavior detection on the user based on preset behavior conditions and user behavior information to obtain a corresponding behavior fraud risk degree of the user; and carrying out blacklist detection on the user based on the blacklist database to obtain blacklist fraud risk corresponding to the user.
In this embodiment, the preset behavior condition may be whether the user uploads the data information simultaneously in a short time from different places, whether the user uploads the data information in a concentrated amount, and the like. The user behavior information can be the address of the user when uploading the information, the number of times of uploading the information in a preset time period, and the like.
In this embodiment, image detection may be carried out on the user based on the picture information to obtain the picture fraud risk degree corresponding to the user. For example, the embodiment can detect whether the self-certification material picture uploaded by the user has been tampered with or edited; if the picture has been edited, the user is considered to have modified the original picture, so the fraud risk is larger and a picture fraud risk degree with a larger value is obtained.
The embodiment can also detect the behavior of the user based on the preset behavior conditions and the user behavior information to obtain the behavior fraud risk degree corresponding to the user. For example, if a user submits information more than 100 times within five minutes from three places (Shanghai, Beijing and Guangzhou), it can be judged that the user uploads information from different places within a short time and uploads information in a concentrated way, so the fraud risk is high and a behavior fraud risk degree with a relatively high value is obtained.
The embodiment can also carry out blacklist detection on the user based on the blacklist database to obtain blacklist fraud risk corresponding to the user. For example, the embodiment is provided with a history database including a whitelist database and a blacklist database, the blacklist database having a list of fraud risk users. According to the embodiment, the user can be compared with the fraud risk user list in the blacklist database through the user information, and when the user is matched with some users in the fraud risk user list, the user fraud risk is indicated to be high, so that the blacklist fraud risk degree with a high value is obtained.
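The three detections described above can be sketched as follows. The tamper detector, the concrete thresholds for the preset behavior conditions, and the returned risk values are illustrative placeholders for whatever models and parameters an implementation actually uses.

```python
from typing import Callable, Dict, List, Set

def picture_fraud_risk(picture: bytes, is_tampered: Callable[[bytes], bool]) -> float:
    """Higher risk when the uploaded self-certification picture appears tampered/edited."""
    return 0.9 if is_tampered(picture) else 0.1

def behavior_fraud_risk(upload_events: List[Dict],
                        max_uploads: int = 100,
                        max_cities: int = 1) -> float:
    """Preset behavior conditions: uploading from different places within a short time,
    or uploading a concentrated (very large) number of submissions."""
    cities = {e["city"] for e in upload_events}
    if len(upload_events) > max_uploads or len(cities) > max_cities:
        return 0.9
    return 0.1

def blacklist_fraud_risk(user_id: str, blacklist: Set[str]) -> float:
    """Compare the user against the fraud-risk user list in the blacklist database."""
    return 0.9 if user_id in blacklist else 0.0
```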
220. And extracting text information, image characteristics and user behavior auxiliary information corresponding to the user information based on the fraud risk degree.
In this embodiment, when the fraud risk degree is not lower than the preset risk degree threshold, it may be determined that the fraud risk degree of the user is relatively high, and a prompt message that fraud risk detection is not passed may be sent to the user side. When the fraud risk degree is lower than a preset risk degree threshold, the user fraud risk degree can be judged to be lower, and text information, image characteristics and user behavior auxiliary information corresponding to the user information are further extracted to conduct self-certification processing on the user information.
In some embodiments, based on the fraud risk level, extracting text information, image features, and user behavior auxiliary information corresponding to the user information includes: and when the fraud risk degree is lower than a preset risk degree threshold, extracting text information, image characteristics and user behavior auxiliary information corresponding to the user information through the characteristic extraction model.
In this embodiment, the feature extraction model may be a model that extracts features such as related text and images from user information. The feature extraction model may include a text recognition model, an image processing model, a feature stitching model, and the like. The user behavior auxiliary information may be user behavior information for assisting in verifying whether the user self-certification is valid.
For example, the preset risk threshold may be set to thirty percent, the acquired fraud risk corresponding to the user is ten percent, it is known that the fraud risk corresponding to the user is lower than the preset risk threshold, it is determined that the fraud risk is lower, text information, image features and user behavior auxiliary information corresponding to the user information may be extracted through the feature extraction model, and meanwhile prompt information that fraud risk detection passes may be sent to the user.
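A short sketch of the threshold comparison described above; the thirty-percent threshold is only the example value used in this paragraph, not a fixed requirement.

```python
PRESET_RISK_THRESHOLD = 0.30  # example value from this paragraph

def fraud_gate(fraud_risk: float, threshold: float = PRESET_RISK_THRESHOLD) -> bool:
    """Return True when feature extraction may proceed (risk below the preset threshold)."""
    return fraud_risk < threshold

# fraud_gate(0.10) -> True: extraction proceeds and a "detection passed" prompt may be sent
```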
In some embodiments, extracting text information, image features and user behavior auxiliary information corresponding to the user information through the feature extraction model includes: based on a text recognition model in the feature extraction model, text recognition is carried out on the user information, and text information corresponding to the user information is obtained; extracting image features corresponding to user information based on an image processing model in the feature extraction model; and performing splicing processing on the user behavior information in the user information through a feature splicing model in the feature extraction model to obtain spliced user behavior auxiliary information.
In this embodiment, the text recognition model may be a machine learning model or a deep learning model for recognizing, understanding, and processing text data in user information. The text recognition model may include an optical character recognition model, a natural language processing model, a named entity recognition model, a text classification model, and the like.
For example, the present embodiment can extract printed text and handwritten text from an image or scanned document in the user information through an optical character recognition model. The embodiment can also process and understand the natural language text in the user information through a natural language processing model, where the natural language processing model includes a recurrent neural network (RNN), a long short-term memory network (LSTM), a Transformer, and the like. The named entity recognition model can be used for recognizing and extracting named entities, such as person names, place names and organization names, from the text of the user information. The embodiment can also classify the text of the user information through a text classification model.
In this embodiment, the image processing model may be a machine learning model or a deep learning model based on a convolutional neural network (CNN) or an attention-based Transformer. The embodiment can extract the image characteristics corresponding to the user information through the image processing model, which are used for distinguishing the type of the picture uploaded by the user and which further form image-text feature pairs with the text information; the image-text feature pairs can be used for mining information related to the user through image-text question answering and other modes in the multi-mode large model.
In this embodiment, the feature stitching model may be a model that maps user behavior information in the user information to vector representation and performs stitching processing. In the embodiment, the user behavior information in the user information is mapped into vector representations through the feature stitching model, and then the user behavior information mapped into the vector representations is stitched to obtain the user behavior auxiliary information after the stitching process.
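Putting the three sub-models together, the feature-extraction stage might look like the sketch below. `ocr_model`, `image_model` and the toy behavior embedding are assumed stand-ins for the text recognition model, the image processing model and the feature stitching model (none of which are tied to a specific library by this specification); the sketch also assumes the `UserInfo` structure introduced earlier.

```python
import numpy as np

def embed_behavior(key: str, value, dim: int = 8) -> np.ndarray:
    """Toy mapping of one piece of user behavior information to a fixed-size vector."""
    rng = np.random.default_rng(abs(hash((key, str(value)))) % (2 ** 32))
    return rng.standard_normal(dim)

def extract_features(user_info, ocr_model, image_model):
    """Return (text information, image characteristics, user behavior auxiliary information)."""
    # 1. Text recognition over uploaded pictures plus the user's text fields
    text_info = " ".join(ocr_model(p) for p in user_info.pictures)
    text_info += " " + " ".join(str(v) for v in user_info.text.values())

    # 2. Image characteristics from the image processing model (averaged over pictures)
    image_features = np.mean([image_model(p) for p in user_info.pictures], axis=0)

    # 3. Map each behavior item to a vector and stitch (concatenate) them together
    behavior_aux = np.concatenate(
        [embed_behavior(k, v) for k, v in user_info.behavior.items()]
    )
    return text_info, image_features, behavior_aux
```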
230. Based on the multi-mode large model, fusion processing is carried out on the text information, the image characteristics and the user behavior auxiliary information, and a self-certification result corresponding to the user is generated.
In this embodiment, the text information, the image features and the user behavior auxiliary information are obtained by extracting features of the user information. And then, fusion processing can be carried out on the text information, the image characteristics and the user behavior auxiliary information through the multi-mode large model, so as to generate a self-certification result corresponding to the user. The self-certification result corresponding to the user can be valid for the user self-certification, namely the user passes the self-certification; or the user self-evidence is invalid, namely the user self-evidence does not pass.
In some embodiments, based on the multi-mode big model, fusion processing is performed on text information, image features and user behavior auxiliary information, and a self-certification result corresponding to a user is generated, which includes: acquiring fusion characteristic items based on text information, image characteristics and user behavior auxiliary information; and acquiring a first model response of the multi-mode large model to the fusion characteristic item and the user behavior auxiliary information so as to generate a self-certification result corresponding to the user.
In this embodiment, the fused feature item may be a feature item obtained by fusing a preset feature fusion flag vector and a triplet feature item subjected to position encoding and feature embedding processing based on the preset feature fusion flag vector. The triplet feature item may be a feature item composed of text information, image features, and user behavior auxiliary information.
In this embodiment, the first model response may be a model response of the multi-modal large model to the fusion feature item and the user behavior auxiliary information.
In some embodiments, obtaining the fusion feature item based on the text information, the image features, and the user behavior assistance information includes: splicing the text information, the image characteristics and the user behavior auxiliary information into a triplet characteristic item; performing feature embedding processing on the triplet feature items to obtain triplet embedded vectors corresponding to the triplet feature items; performing position coding treatment on the triplet embedded vector to obtain a triplet embedded vector after position coding; and based on the preset feature fusion mark vector, carrying out fusion processing on the preset feature fusion mark vector and the triplet embedded vector after position coding to obtain a fusion feature item after fusion processing.
In this embodiment, the preset feature fusion flag vector may be a vector for fusing all feature items (corresponding to the triplet embedded vector after position encoding).
In the embodiment, the triplet embedded vectors after position coding are fused into an integral representation through the preset feature fusion mark vector, so that the multi-mode large model is comprehensively considered and processed. The fusion mode can help the multi-mode large model to better understand and utilize different types of characteristic information, so that the performance of the multi-mode large model is improved. The preset feature fusion marker vector is set in the embodiment to be used in the training process of the model, and the better representation of the model is learned through a back propagation algorithm. The representation of the preset feature fusion marker vector is dynamically adjusted according to the optimization target of the model and the features of the input data, so that the performance of the model is improved to the greatest extent.
The feature item embedding process of this embodiment is used to perform an embedding operation on all feature items in the triplet feature item, so as to convert all feature items in the triplet feature item into a low-dimensional representation. For example, text information is converted into word embedding vectors, image features are converted into image embedding vectors, user behavior assistance information is converted into behavior information embedding vectors, and so on. The present embodiment may also perform position encoding on the triplet embedded vector to preserve the position information of the feature item in the sequence.
In this embodiment, the fusion operation is performed on the preset feature fusion tag vector and all other feature items, so as to obtain the fusion feature item after the fusion processing, which may be implemented by weighting summation, splicing or other fusion modes. Then, the embodiment can input the fused feature item after the fusion processing and other feature items (such as user behavior auxiliary information) into the multi-mode large model for processing, thereby obtaining a self-certification result corresponding to the user.
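The embedding, position coding and fusion steps described in the preceding paragraphs can be sketched with PyTorch-style tensor operations. The dimensions, the learned marker vector and the use of attention as the fusion operation are assumptions made for illustration; the specification itself only requires some fusion mode such as weighted summation or splicing.

```python
import torch
import torch.nn as nn

class FusionFeatureBuilder(nn.Module):
    """Builds the fused feature item from (text, image, behavior) embeddings."""
    def __init__(self, dim: int = 256, max_len: int = 3):
        super().__init__()
        # Preset feature fusion marker vector (learned during training)
        self.fuse_token = nn.Parameter(torch.randn(1, 1, dim))
        # Positional encoding for the marker plus the three feature items
        self.pos_embed = nn.Parameter(torch.randn(1, max_len + 1, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, text_emb, image_emb, behavior_emb):
        # 1. Stitch the three modality embeddings into a triplet feature item
        triplet = torch.stack([text_emb, image_emb, behavior_emb], dim=1)   # (B, 3, dim)
        # 2.+3. Prepend the fusion marker, then add positional encoding
        tokens = torch.cat([self.fuse_token.expand(triplet.size(0), -1, -1), triplet], dim=1)
        tokens = tokens + self.pos_embed[:, : tokens.size(1)]
        # 4. Fuse: let the marker vector attend over all feature items
        fused, _ = self.attn(tokens, tokens, tokens)
        return fused[:, 0]   # the fused feature item (the marker position)

# Usage with random embeddings of batch size 1
builder = FusionFeatureBuilder()
fused = builder(torch.randn(1, 256), torch.randn(1, 256), torch.randn(1, 256))
```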
Referring to fig. 3, fig. 3 is a flowchart illustrating a method for self-verifying information according to another embodiment of the present disclosure, which may be performed by the self-verifying information device 110 shown in fig. 1.
As shown in fig. 3, the information self-certification method at least includes the following steps:
300. acquiring user information;
310. performing fraud risk detection on the user information to obtain fraud risk degrees corresponding to the users;
320. based on the fraud risk degree, extracting text information, image characteristics and user behavior auxiliary information corresponding to the user information;
330. based on the multi-mode large model, fusion processing is carried out on the text information, the image characteristics and the user behavior auxiliary information, and a self-certification result corresponding to the user is generated;
340. Based on the text information, a second model response of the multi-mode large model to the text information is obtained.
In this embodiment, the second model response is mining information corresponding to the text information.
According to the embodiment, the text information can be input as the multi-mode large model, and the mining information corresponding to the text information is output through the multi-mode large model. The second model response is a model response of the multimodal big model to the text information.
According to the embodiment, the text information, the image characteristics and the user behavior auxiliary information can be fused through the multi-mode large model, and the self-certification result corresponding to the user is generated. In this embodiment, information mining may be performed on the text information corresponding to the user by extracting features of the user and using the multi-mode large model (such as the natural language large model GPT, the image-text matching model CLIP, the image-text understanding large model VisualGLM, etc.), so as to mine effective information related to the user. According to the embodiment of the specification, the mining information corresponding to the text information can be used for training the multi-mode large model, so that the accuracy of the multi-mode large model is improved.
For example, taking checking teacher identity information through social security information uploaded by a user as an example, when user information is mined through a multi-mode large model, related inquiry information can be set to obtain information such as payment base number, payment proportion and payment amount in social security uploaded by the user, user wage conditions can be further reversely deduced, and then the user wage conditions can be used as mining information corresponding to text information.
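For the social security example above, the mining step can be sketched as a query to the large model; the prompt wording and the `query_model` callable are hypothetical placeholders and not an interface defined by this specification.

```python
def mine_social_security_info(text_info: str, query_model) -> str:
    """Ask the multi-mode large model to mine payment base, payment proportion and
    payment amount from the recognized social security text and infer the wage level."""
    prompt = (
        "From the following social security record, extract the payment base, "
        "payment proportion and payment amount, and estimate the user's wage level:\n"
        + text_info
    )
    return query_model(prompt)  # the returned answer is the mining information
```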
Referring to fig. 4, fig. 4 is a flowchart illustrating a method for self-verifying information according to another embodiment of the present disclosure, which may be performed by the self-verifying information device 110 shown in fig. 1.
As shown in fig. 4, the information self-certification method may at least include the following steps:
400. acquiring user information;
410. performing fraud risk detection on the user information to obtain fraud risk degrees corresponding to the users;
420. based on the fraud risk degree, extracting text information, image characteristics and user behavior auxiliary information corresponding to the user information;
430. based on the multi-mode large model, fusion processing is carried out on the text information, the image characteristics and the user behavior auxiliary information, and a self-certification result corresponding to the user is generated;
440. based on the text information, a second model response of the multi-mode large model to the text information is obtained.
450. Based on the image features, a third model response of the multi-modal large model to the image features is obtained.
In this embodiment, the third model response is an image type recognition result corresponding to the image feature.
As shown in fig. 5, the present embodiment may take the text information f_text as an input to the multi-mode large model; after processing by the multi-mode large model, the mining information corresponding to the text information is output through a first multi-layer perceptron (MLP). The second model response is the model response of the multi-mode large model to the text information. The present embodiment can also take the image characteristics f_image as an input to the multi-mode large model; after processing by the multi-mode large model, the image type recognition result corresponding to the image characteristics is output through a second multi-layer perceptron (MLP). In this embodiment, fusion processing is performed, based on the preset feature fusion mark vector (token), on the preset feature fusion mark vector and the position-encoded triplet embedded vector to obtain a fused feature item; the fused feature item and other feature items (such as the user behavior auxiliary information f_action) are then input into the multi-mode large model together for processing, and the multi-mode large model outputs the self-certification result corresponding to the user through a third multi-layer perceptron (MLP).
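A sketch of the three output heads shown in fig. 5: three multi-layer perceptrons over the representations produced by the multi-mode large model for f_text, f_image and the fused feature item combined with f_action. The hidden size, number of image types and head depths are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class SelfCertificationHeads(nn.Module):
    """Three MLP heads over the multi-mode large model outputs, following fig. 5."""
    def __init__(self, hidden: int = 256, n_image_types: int = 10):
        super().__init__()
        # First MLP: mining information corresponding to the text information (second model response)
        self.mlp_text = nn.Sequential(nn.Linear(hidden, hidden), nn.GELU(), nn.Linear(hidden, hidden))
        # Second MLP: image type recognition result (third model response)
        self.mlp_image = nn.Sequential(nn.Linear(hidden, hidden), nn.GELU(), nn.Linear(hidden, n_image_types))
        # Third MLP: self-certification result (first model response, pass / fail)
        self.mlp_cert = nn.Sequential(nn.Linear(hidden, hidden), nn.GELU(), nn.Linear(hidden, 2))

    def forward(self, h_text, h_image, h_fused_with_action):
        return (self.mlp_text(h_text),
                self.mlp_image(h_image),
                self.mlp_cert(h_fused_with_action))
```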
According to the embodiment, the text information, the image characteristics and the user behavior auxiliary information can be fused through the multi-mode large model, and the self-certification result corresponding to the user is generated. The embodiment can also carry out information mining on the text information corresponding to the user, so as to mine out effective information related to the user. In this embodiment, the multi-mode large model (such as the natural language large model GPT, the image-text matching model CLIP, the image-text understanding large model VisualGLM, etc.) may be used to determine the type of the image uploaded by the user, so as to extract information related to the user in the image and obtain the image type recognition result corresponding to the image characteristics. According to the embodiment of the specification, the image type recognition result corresponding to the image characteristics can be used for training the multi-mode large model, so that the accuracy of the multi-mode large model is improved.
For example, taking checking teacher identity information by uploading social security information as an example, after a user uploads a social security screenshot, image features corresponding to the social security screenshot are processed by a multi-mode large model to further judge whether the social security screenshot belongs to the social security screenshot information, and whether an identity ID and a name are matched with the personal information can be judged for identity checking.
The embodiment can send the related self-certification prompt information to the user side according to the self-certification result corresponding to the user, for example, send the prompt information that the related self-certification passes or fails to pass to the user side. The embodiment can store the related information such as the user information, the self-certification result, the mining information, the image type identification result and the like into the database for profiling. According to the embodiment of the specification, the related information such as the user information, the self-certification result, the mining information, the image type identification result and the like can be used for training the multi-mode large model, so that the accuracy of the multi-mode large model is improved.
The embodiment of the specification fully utilizes the acquired user information, combines the multi-mode large model to process the information, and realizes the completeness and the practical feasibility of the self-certification function of the information; in addition, the embodiment of the specification carries out fraud risk detection on the user information to obtain the fraud risk degree corresponding to the user, so that the possibility of resource loss is further reduced; the embodiment of the specification can fully automatically evaluate the credibility of the user information, improve the user experience, improve the accuracy and the effectiveness of the self-certification system based on fraud risk detection and the multi-mode large model, and provide basic information guarantee for constructing the self-certification system.
Referring to fig. 6, fig. 6 is a flow chart illustrating a method for self-verifying information according to another embodiment of the present disclosure. In this embodiment, a scenario in which the information self-certification device is specifically integrated in a server and is applied to self-certification when a user applies for loan will be described as an example.
The method flow may include:
600. based on the user loan application request, user information is acquired, wherein the user information comprises self-certification application information and statistical information related to the user.
610. Performing fraud risk detection on the user information to obtain fraud risk degrees corresponding to the users;
620. based on the fraud risk degree, extracting text information, image characteristics and user behavior auxiliary information corresponding to the user information;
630. based on the multi-mode large model, fusion processing is carried out on the text information, the image characteristics and the user behavior auxiliary information, and a loan application self-certification result corresponding to the user is generated;
640. based on the text information, acquiring mining information corresponding to the text information output by the multi-mode large model on the text information;
650. based on the image characteristics, an image type recognition result output by the multi-mode large model on the image characteristics is obtained.
In this embodiment, the output of the multi-mode large model may be refined into an output for each attention factor (corresponding to the various features extracted from the user information based on the fraud risk degree), or the self-certification result and the evidence items may be output directly to form an end-to-end self-certification verification scheme.
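The end-to-end flow of steps 600-650 can be orchestrated roughly as below; every argument is a placeholder callable standing in for the components sketched earlier, and the threshold value is illustrative.

```python
def self_certify_loan_application(user_info,
                                  detect_fraud_risk,    # step 610
                                  extract_features,     # step 620
                                  run_large_model,      # steps 630-650
                                  threshold: float = 0.30):
    """Sketch of the loan-application self-certification flow (steps 600-650)."""
    risk = detect_fraud_risk(user_info)                      # 610: fraud risk degree
    if risk >= threshold:
        return {"self_certified": False, "reason": "fraud risk detection not passed"}
    text_info, image_features, behavior_aux = extract_features(user_info)   # 620
    # 630-650: fusion plus the three model responses
    cert_result, mining_info, image_type = run_large_model(text_info, image_features, behavior_aux)
    return {"self_certified": bool(cert_result),
            "mining_information": mining_info,
            "image_type": image_type}
```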
Referring to fig. 7, fig. 7 is a timing chart of the information self-certification system according to the present embodiment. As shown in fig. 7, the user uploads self-certification information to the information self-certification device through the client; the information self-certification device performs fraud detection on the user information containing this material through an anti-fraud model, and when a fraud risk is detected, a message that self-certification verification has not passed is sent to the client page. When no fraud risk is detected, the text information, image characteristics and user behavior auxiliary information corresponding to the user information are extracted, fusion processing is carried out on the text information, the image characteristics and the user behavior auxiliary information based on the multi-mode large model, and an application self-certification result corresponding to the user is generated; the image type can be verified, real-name matching of the image and mining of the user information can be carried out, and whether self-certification passes or not is judged. When self-certification fails, a message that self-certification has failed is sent to the client page; when self-certification passes, a message that self-certification has passed is sent to the client page.
In this embodiment, in order to obtain real user data and prevent resources from being wasted through fraudulent use, the material information uploaded by the user needs to be verified. The embodiment not only prevents the user from bypassing self-certification by fraudulent means, but also analyzes the material items uploaded by the user, checks the identity of the user through the multi-mode large model, and mines the effective information of the material items. Building the information self-certification system helps to improve the performance of the online self-certification system, mine more user asset information, reduce the user complaint rate, and reduce the risk of resource loss caused by self-certification products. The embodiment judges the validity of the material items uploaded by the user, can process the material items through the multi-mode large model, realizes an automatic mechanism for the whole flow, and can make the large model output the evidence items on which its judgment depends, so that the output result can be attributed and traced back. Fraud risk points that may exist for the user are intercepted by using an anti-fraud model (namely, fraud risk detection is carried out on the user information), so that the effectiveness of the whole system is ensured. The embodiment can also carry out verification and value extraction on the material items uploaded by the user by combining the anti-fraud model and the multi-mode large model, and can provide effective evidence factors for user asset identification, identity identification, credit evaluation and lending capability assessment.
The embodiment of the specification fully utilizes the acquired user information, combines the multi-mode large model to process the information, and realizes the completeness and the practical feasibility of the self-certification function of the information; in addition, the embodiment of the specification carries out fraud risk detection on the user information to obtain the fraud risk degree corresponding to the user, so that the possibility of resource loss is further reduced; the embodiment of the specification can fully automatically evaluate the credibility of the user information, improve the user experience, improve the accuracy and the effectiveness of the self-certification system based on fraud risk detection and the multi-mode large model, and provide basic information guarantee for constructing the self-certification system.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an information self-certification device according to an embodiment of the present disclosure.
As shown in fig. 8, the information self-certification device may at least include an information acquisition module 800, a risk detection module 810, an information extraction module 820, and a fusion module 830, where:
an information acquisition module 800, configured to acquire user information;
the risk detection module 810 is configured to perform fraud risk detection on the user information to obtain a fraud risk degree corresponding to the user;
the information extraction module 820 is configured to extract text information, image features and user behavior auxiliary information corresponding to the user information based on the fraud risk level;
and the fusion module 830 is configured to perform fusion processing on the text information, the image feature and the user behavior auxiliary information based on the multi-mode large model, and generate a self-certification result corresponding to the user.
In some embodiments, the risk detection module comprises:
the risk degree submodule is used for acquiring picture fraud risk degree, behavior fraud risk degree and blacklist fraud risk degree corresponding to the user based on the user information;
and the risk degree determining module is used for determining the fraud risk degree corresponding to the user according to the picture fraud risk degree, the behavior fraud risk degree and the blacklist fraud risk degree.
In some embodiments, the risk degree submodule includes:
the information acquisition sub-module is used for acquiring picture information and user behavior information in the user information;
the image detection module is used for carrying out image detection on the user based on the picture information to obtain picture fraud risk corresponding to the user;
the behavior detection module is used for detecting the behavior of the user based on preset behavior conditions and user behavior information to obtain a corresponding behavior fraud risk degree of the user;
and the blacklist detection module is used for carrying out blacklist detection on the user based on the blacklist database to obtain blacklist fraud risk corresponding to the user.
In some embodiments, the information extraction module comprises:
and the risk degree judging module is used for extracting text information, image characteristics and user behavior auxiliary information corresponding to the user information through the characteristic extraction model when the fraud risk degree is lower than a preset risk degree threshold.
In some embodiments, the risk determination module includes a feature extraction module, the feature extraction module including:
the text recognition module is used for recognizing the text of the user information based on the text recognition model in the feature extraction model to obtain text information corresponding to the user information;
The image feature module is used for extracting image features corresponding to the user information based on an image processing model in the feature extraction model;
and the behavior splicing module is used for splicing the user behavior information in the user information through the feature splicing model in the feature extraction model to obtain spliced user behavior auxiliary information.
In some embodiments, the fusion module comprises:
the fusion characteristic item module is used for acquiring fusion characteristic items based on text information, image characteristics and user behavior auxiliary information;
the first response module is used for acquiring a first model response of the multi-mode large model to the fusion characteristic item and the user behavior auxiliary information so as to generate a self-certification result corresponding to the user.
In some embodiments, the fused feature item module comprises:
the information splicing module is used for splicing the text information, the image characteristics and the user behavior auxiliary information into a triplet characteristic item;
the feature embedding module is used for carrying out feature embedding processing on the triplet feature items to obtain triplet embedding vectors corresponding to the triplet feature items;
the position coding module is used for carrying out position coding processing on the triplet embedded vector to obtain a triplet embedded vector after position coding;
and the fusion processing module is used for carrying out fusion processing on a preset feature fusion marking vector and the triplet embedded vector after position coding to obtain a fusion feature item after fusion processing.
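The fusion step may be pictured with the following hedged sketch: the triplet feature item is embedded, position-coded and fused with a preset feature fusion marking vector, treated here like a prepended marker token. The embedding width, the toy hash-based embedding, the sinusoidal position coding and the mean-pooling step are illustrative assumptions, not details fixed by the embodiment:

```python
# Illustrative only: toy embedding, sinusoidal position coding and a prepended
# feature-fusion marker vector stand in for the real components.

import numpy as np

DIM = 16  # assumed embedding width


def embed(tokens):
    # Toy embedding: hash each token to a DIM-dimensional vector.
    rows = [np.random.default_rng(abs(hash(t)) % (2 ** 32)).standard_normal(DIM)
            for t in tokens]
    return np.stack(rows)


def positional_encoding(length, dim=DIM):
    pos = np.arange(length)[:, None]
    i = np.arange(dim)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / dim)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))


def fuse(text, image_features, behavior):
    # 1. Splice text, image and behavior into one triplet feature item.
    tokens = text.split() + ["<image>"] + behavior.split()
    # 2. Feature embedding for the triplet (the image vector fills the <image> slot).
    emb = embed(tokens)
    emb[tokens.index("<image>")] = np.asarray(image_features)[:DIM]
    # 3. Position coding over the triplet embedding.
    emb = emb + positional_encoding(len(tokens))
    # 4. Fusion with a preset feature fusion marking vector, then pooling.
    fusion_marker = np.ones(DIM)  # assumed preset marking vector
    return np.vstack([fusion_marker, emb]).mean(axis=0)


if __name__ == "__main__":
    fused = fuse("income proof 2023", np.zeros(DIM), "login | upload_doc")
    print(fused.shape)  # -> (16,)
```

In practice the marking vector would play a role similar to a classification token, and the fused sequence would be handed to the multi-mode large model rather than simply mean-pooled; the pooling here only keeps the sketch self-contained.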
In some embodiments, the information self-authenticating device further comprises:
the second response module is used for acquiring a second model response of the multi-mode large model to the text information based on the text information, wherein the second model response is mining information corresponding to the text information.
In some embodiments, the information self-authenticating device further comprises:
the third response module is used for acquiring a third model response of the multi-mode large model to the image features based on the image features, wherein the third model response is an image type recognition result corresponding to the image features.
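A brief illustrative sketch of the second and third model responses follows; query_multimodal_model is a hypothetical placeholder for a call to the deployed multi-mode large model, not a real API, and the prompts are assumptions:

```python
# Illustrative only: the model call and prompts below are placeholders.

from typing import Any


def query_multimodal_model(prompt: str, payload: Any) -> str:
    # Placeholder for the actual multi-mode large model call.
    return f"model response for: {prompt}"


def second_model_response(text_info: str) -> str:
    # Mining information corresponding to the recognized text information.
    return query_multimodal_model("Extract key facts from this text.", text_info)


def third_model_response(image_features: Any) -> str:
    # Image type recognition result corresponding to the image features.
    return query_multimodal_model("Classify the document image type.", image_features)
```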
Based on the information self-certification system described in the embodiments of the present specification, the embodiments make full use of the acquired user information and combine it with the multi-mode large model for information processing, so that the information self-certification function is complete and practically feasible. In addition, the embodiments perform fraud risk detection on the user information to obtain the fraud risk degree corresponding to the user, which further reduces the possibility of resource loss. The embodiments can fully automatically evaluate the credibility of the user information and improve the user experience, and, based on fraud risk detection and the multi-mode large model, improve the accuracy and effectiveness of the self-certification system and provide a basic information guarantee for constructing the self-certification system.
In this specification, the embodiments are described in a progressive manner; identical or similar parts of the embodiments refer to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the information self-certification system embodiment is substantially similar to the information self-certification method embodiment, its description is relatively brief, and the relevant points can be found in the description of the method embodiment.
Please refer to fig. 9, which is a schematic diagram of an electronic device according to an embodiment of the present disclosure.
As shown in fig. 9, the electronic device 900 may include: at least one processor 901, at least one network interface 904, a user interface 903, memory 905, and at least one communication bus 902.
Wherein the communication bus 902 may be used to implement connection and communication among the above components.
The user interface 903 may include keys; optionally, the user interface may also include a standard wired interface and a wireless interface.
The network interface 904 may include, but is not limited to, a bluetooth module, an NFC module, a Wi-Fi module, and the like.
Processor 901 may include one or more processing cores. The processor 901 connects various parts of the electronic device 900 through various interfaces and lines, and performs the various functions of the electronic device 900 and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory 905 and by invoking the data stored in the memory 905. Alternatively, the processor 901 may be implemented in at least one hardware form of a DSP, an FPGA or a PLA. The processor 901 may integrate one or a combination of a CPU, a GPU, a modem and the like. The CPU mainly handles the operating system, the user interface, application programs and the like; the GPU renders and draws the content to be displayed on the display screen; and the modem handles wireless communication. It will be appreciated that the modem may alternatively not be integrated into the processor 901 and instead be implemented by a separate chip.
The memory 905 may include RAM or ROM. Optionally, the memory 905 includes a non-transitory computer-readable medium. The memory 905 may be used to store instructions, programs, code, code sets or instruction sets. The memory 905 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the above method embodiments, and the like; the data storage area may store the data referred to in the above method embodiments. The memory 905 may optionally also be at least one storage device located remotely from the foregoing processor 901. As a computer storage medium, the memory 905 may include an operating system, a network communication module, a user interface module and an information self-certification application. The processor 901 may be used to invoke the information self-certification application stored in the memory 905 and perform the information self-certification steps mentioned in the foregoing embodiments.
Embodiments of the present disclosure also provide a computer-readable storage medium having instructions stored therein, which when executed on a computer or processor, cause the computer or processor to perform the steps of one or more of the embodiments shown in fig. 2-6 described above. The above-described constituent modules of the electronic apparatus may be stored in the computer-readable storage medium if implemented in the form of software functional units and sold or used as independent products.
In the above embodiments, the implementation may be wholly or partly realized by software, hardware, firmware or any combination thereof. When implemented in software, it may be wholly or partly realized in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present specification are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (Digital Subscriber Line, DSL)) or wirelessly (e.g., infrared, radio, microwave, etc.). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a digital versatile disc (Digital Versatile Disc, DVD)) or a semiconductor medium (e.g., a solid state disk (Solid State Disk, SSD)), etc.
Those skilled in the art will appreciate that all or part of the methods of the above embodiments may be implemented by a computer program instructing relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the flows of the above method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as a ROM, a RAM, a magnetic disk or an optical disk. The technical features in the above examples and embodiments may be combined arbitrarily as long as no conflict arises.
The above-described embodiments are merely preferred embodiments of the present disclosure and do not limit the scope of the disclosure. Various modifications and improvements made by those skilled in the art to the technical solutions of the disclosure without departing from its design spirit shall fall within the protection scope defined by the claims of the disclosure.

Claims (20)

1. An information self-certification method, comprising:
acquiring user information;
performing fraud risk detection on the user information to obtain a fraud risk degree corresponding to the user;
based on the fraud risk degree, extracting text information, image characteristics and user behavior auxiliary information corresponding to the user information;
And based on the multi-mode large model, carrying out fusion processing on the text information, the image characteristics and the user behavior auxiliary information to generate a self-certification result corresponding to the user.
2. The method of claim 1, wherein the performing fraud risk detection on the user information to obtain a fraud risk degree corresponding to the user includes:
based on the user information, acquiring picture fraud risk degree, behavior fraud risk degree and blacklist fraud risk degree corresponding to the user;
and determining the fraud risk degree corresponding to the user according to the picture fraud risk degree, the behavior fraud risk degree and the blacklist fraud risk degree.
3. The method of claim 2, wherein the obtaining, based on the user information, the picture fraud risk level, the behavioral fraud risk level, and the blacklist fraud risk level corresponding to the user includes:
acquiring picture information and user behavior information in the user information;
performing image detection on the user based on the picture information to obtain a picture fraud risk degree corresponding to the user;
performing behavior detection on the user based on preset behavior conditions and the user behavior information to obtain a behavior fraud risk degree corresponding to the user;
And carrying out blacklist detection on the user based on a blacklist database to obtain a blacklist fraud risk degree corresponding to the user.
4. The method of claim 1, wherein the extracting text information, image features, and user behavior auxiliary information corresponding to the user information based on the fraud risk level includes:
and when the fraud risk degree is lower than a preset risk degree threshold, extracting text information, image characteristics and user behavior auxiliary information corresponding to the user information through a characteristic extraction model.
5. The method according to claim 4, wherein the extracting text information, image features and user behavior auxiliary information corresponding to the user information by the feature extraction model includes:
based on a text recognition model in the feature extraction model, text recognition is carried out on the user information, and text information corresponding to the user information is obtained;
extracting image features corresponding to the user information based on an image processing model in the feature extraction model;
and performing splicing processing on the user behavior information in the user information through a feature splicing model in the feature extraction model to obtain spliced user behavior auxiliary information.
6. The method of claim 1, wherein the performing fusion processing on the text information, the image features and the user behavior auxiliary information based on the multi-mode large model to generate the self-certification result corresponding to the user comprises:
acquiring fusion feature items based on the text information, the image features and the user behavior auxiliary information;
and acquiring a first model response of the multi-mode large model to the fusion characteristic item and the user behavior auxiliary information to generate a self-certification result corresponding to the user.
7. The method of claim 6, the obtaining a fusion feature item based on the text information, the image feature, and the user behavior assistance information, comprising:
splicing the text information, the image characteristics and the user behavior auxiliary information into a triplet characteristic item;
performing feature embedding processing on the triplet feature items to obtain triplet embedded vectors corresponding to the triplet feature items;
performing position coding treatment on the triplet embedded vector to obtain a triplet embedded vector after position coding;
and based on a preset feature fusion mark vector, carrying out fusion processing on the preset feature fusion mark vector and the triplet embedded vector after position coding to obtain a fusion feature item after fusion processing.
8. The method of claim 1, the method further comprising:
based on the text information, acquiring a second model response of the multi-mode large model to the text information, wherein the second model response is mining information corresponding to the text information.
9. The method of claim 1, the method further comprising:
and based on the image features, acquiring a third model response of the multi-mode large model to the image features, wherein the third model response is an image type recognition result corresponding to the image features.
10. An information self-authenticating device, comprising:
the information acquisition module is used for acquiring user information;
the risk detection module is used for carrying out fraud risk detection on the user information to obtain a fraud risk degree corresponding to the user;
the information extraction module is used for extracting text information, image characteristics and user behavior auxiliary information corresponding to the user information based on the fraud risk degree;
and the fusion module is used for carrying out fusion processing on the text information, the image characteristics and the user behavior auxiliary information based on the multi-mode large model to generate a self-certification result corresponding to the user.
11. The apparatus of claim 10, the risk detection module comprising:
the risk degree sub-module is used for acquiring picture fraud risk degree, behavior fraud risk degree and blacklist fraud risk degree corresponding to the user based on the user information;
and the risk degree determining module is used for determining the fraud risk degree corresponding to the user according to the picture fraud risk degree, the behavior fraud risk degree and the blacklist fraud risk degree.
12. The apparatus of claim 11, the risk score submodule comprising:
the information acquisition sub-module is used for acquiring picture information and user behavior information in the user information;
the image detection module is used for carrying out image detection on the user based on the picture information to obtain a picture fraud risk degree corresponding to the user;
the behavior detection module is used for detecting the behavior of the user based on preset behavior conditions and the user behavior information to obtain a behavior fraud risk degree corresponding to the user;
and the blacklist detection module is used for carrying out blacklist detection on the user based on a blacklist database to obtain a blacklist fraud risk degree corresponding to the user.
13. The apparatus of claim 10, the information extraction module comprising:
and the risk degree judging module is used for extracting text information, image characteristics and user behavior auxiliary information corresponding to the user information through the characteristic extraction model when the fraud risk degree is lower than a preset risk degree threshold.
14. The apparatus of claim 13, the risk degree judging module comprising a feature extraction module, the feature extraction module comprising:
the text recognition module is used for recognizing the text of the user information based on the text recognition model in the feature extraction model to obtain text information corresponding to the user information;
the image feature module is used for extracting image features corresponding to the user information based on an image processing model in the feature extraction model;
and the behavior splicing module is used for splicing the user behavior information in the user information through the characteristic splicing model in the characteristic extraction model to obtain spliced user behavior auxiliary information.
15. The apparatus of claim 10, the fusion module comprising:
the fusion characteristic item module is used for acquiring fusion characteristic items based on the text information, the image characteristics and the user behavior auxiliary information;
And the first response module is used for acquiring a first model response of the multi-mode large model to the fusion characteristic item and the user behavior auxiliary information so as to generate a self-certification result corresponding to the user.
16. The apparatus of claim 15, the fused feature item module comprising:
the information splicing module is used for splicing the text information, the image characteristics and the user behavior auxiliary information into a triplet characteristic item;
the feature embedding module is used for carrying out feature embedding processing on the triplet feature items to obtain triplet embedding vectors corresponding to the triplet feature items;
the position coding module is used for carrying out position coding processing on the triplet embedded vector to obtain a triplet embedded vector after position coding;
and the fusion processing module is used for carrying out fusion processing on the preset feature fusion marking vector and the triplet embedded vector after position coding based on the preset feature fusion marking vector to obtain a fusion feature item after fusion processing.
17. The apparatus of claim 10, further comprising:
and the second response module is used for acquiring a second model response of the multi-mode large model to the text information based on the text information, wherein the second model response is mining information corresponding to the text information.
18. The apparatus of claim 10, further comprising:
and the third response module is used for acquiring a third model response of the multi-mode large model to the image features based on the image features, wherein the third model response is an image type identification result corresponding to the image features.
19. An electronic device, comprising a processor and a memory;
the processor is connected with the memory;
the memory is used for storing executable program codes;
the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory for performing the method according to any one of claims 1 to 9.
20. A computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of any of claims 1 to 9.
CN202311724045.9A 2023-12-14 2023-12-14 Information self-certification method and device, electronic equipment and medium Pending CN117541379A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311724045.9A CN117541379A (en) 2023-12-14 2023-12-14 Information self-certification method and device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311724045.9A CN117541379A (en) 2023-12-14 2023-12-14 Information self-certification method and device, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN117541379A true CN117541379A (en) 2024-02-09

Family

ID=89790000

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311724045.9A Pending CN117541379A (en) 2023-12-14 2023-12-14 Information self-certification method and device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN117541379A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination