CN114971658B - Anti-fraud propaganda method, system, electronic equipment and storage medium - Google Patents

Anti-fraud propaganda method, system, electronic equipment and storage medium

Info

Publication number
CN114971658B
CN114971658B (application CN202210902155.9A)
Authority
CN
China
Prior art keywords
target test
test object
dimensional semantic
fraud
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210902155.9A
Other languages
Chinese (zh)
Other versions
CN114971658A (en)
Inventor
郑华东
肖哲明
吴海波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Anxun Information Technology Co ltd
Original Assignee
Sichuan Anxun Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Anxun Information Technology Co ltd filed Critical Sichuan Anxun Information Technology Co ltd
Priority to CN202211600462.8A (published as CN115829592A)
Priority to CN202210902155.9A (granted as CN114971658B)
Publication of CN114971658A
Application granted
Publication of CN114971658B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 - Commerce
    • G06Q30/018 - Certifying business or products
    • G06Q30/0185 - Product, service or business identity fraud
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 - Querying
    • G06F16/245 - Query processing
    • G06F16/2455 - Query execution
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 - Facial expression recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Child & Adolescent Psychology (AREA)
  • Acoustics & Sound (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Social Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Databases & Information Systems (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Finance (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to an anti-fraud propaganda method, system, electronic device and storage medium. The anti-fraud propaganda method comprises the following steps: providing at least one simulation test script for a target test object; acquiring behavior feature data, facial expression feature data and/or sound feature data of the target test object for at least one two-dimensional semantic event and/or multi-dimensional semantic event in the simulation test script; determining, based on feature analysis of the behavior feature data, facial expression feature data and/or sound feature data, the emotional state expression of the target test object for the two-dimensional semantic event and/or the multi-dimensional semantic event; determining the fraud-prone type and/or fraud risk level of the target test object according to preset weights of the emotional state expressions corresponding to the two-dimensional semantic events and the multi-dimensional semantic events respectively; and providing corresponding anti-fraud propaganda information in response to the fraud-prone type and/or fraud risk level of the target test object.

Description

Anti-fraud propaganda method, system, electronic equipment and storage medium
Technical Field
The present invention relates to the field of network information security technologies, and in particular to an anti-fraud propaganda method and system, an electronic device, and a storage medium.
Background
Fraud is a highly active type of criminal activity in every country of the world. With economic development and technological progress, more and more fraud has shifted from offline to online channels, i.e., online fraud. Online fraud is the act, occurring in cyberspace, of obtaining public or private property by fabricating facts or concealing the truth with the aim of illegal possession. Online fraud and ordinary fraud take property in different ways: ordinary fraud is carried out through person-to-person conversation, whereas online fraud achieves its purpose mainly through person-to-machine interaction. It is this mode of operation that gives online fraud certain unique characteristics.
In terms of behavioral pattern, online fraud typically includes two types: telecom fraud and online pyramid selling. Telecom fraud refers to criminals fabricating false information, setting up scams, and carrying out remote, non-contact fraud against victims by telephone, short message and the Internet. Online pyramid selling is an upgraded form of traditional pyramid schemes, in which pyramid-style fraud is carried out by means of networks. Compared with telecom fraud it is more covert: it exploits ordinary people's eagerness to do business and make money in order to recruit downlines, spreads very quickly, and involves a large and widely distributed group of victims, often causing serious harm to society.
CN110222992A discloses an online fraud early-warning method and device based on portraits of deceived groups. The early-warning method comprises: acquiring online fraud case samples, processing the case samples, and extracting victim information from the fraud cases; after processing the victim information, dividing it into test features and training features; constructing an initial fraud early-warning model, training it on the training features, and verifying and evaluating it on the test features; obtaining the verification and evaluation result, and generating a target early-warning model when the result is detected to meet a preset requirement; and acquiring a user's personal information, predicting whether the user is likely to be deceived according to the target model, and generating a fraud early warning if so.
CN111915468A discloses an active network anti-fraud inspection and early-warning system, comprising: an automatic inspection module configured to construct a fraud-clue clustering space; an online screening module configured to obtain, through topological sorting and conditional distribution sampling, the full-probability joint distribution of first information of principal organizers and affiliated members within suspected fraud clues; a credibility evaluation module configured to evaluate suspicious fraud clues through a layered non-standard network comprehensive evaluation method; and an active early-warning module configured to actively push suspicious fraud clues whose weight exceeds a set early-warning threshold to the mobile phones of anti-fraud personnel or to case investigation systems, and to enable electronic supervision and restrictions for investigating those clues.
Existing network security awareness campaigns include paper-based publicity, such as leaflets, posters and promotional giveaways; online publicity, such as official accounts, short-video platforms and other social platforms; and special campaigns, such as the national network security publicity week and the national anti-fraud day. However, although existing anti-fraud publicity takes many forms, its content tends to be uniform and lacking in innovation; and although it includes commentary on a number of typical cases, detailed explanation is generally missing, so the strength and reach of anti-fraud publicity are not effective enough.
Furthermore, on the one hand, because of differences in understanding among those skilled in the art, and on the other hand, because the applicant studied a large amount of literature and patents when making the present invention but cannot list every detail and item of content, this by no means implies that the present invention lacks these prior art features; rather, the present invention may possess all the features of the prior art, and the applicant reserves the right to add related prior art to the background section.
Disclosure of Invention
In view of the deficiencies of the prior art, the present invention provides an anti-fraud propaganda method, system, electronic device and storage medium, aiming to solve at least one or more of the problems existing in the prior art.
In order to achieve the above object, the present invention provides an anti-fraud propaganda method, comprising:
providing at least one simulation test script for the target test object;
acquiring behavior feature data and/or facial expression feature data of the target test object for at least one two-dimensional semantic event and/or multi-dimensional semantic event in the simulation test script;
determining, based on feature analysis and extraction of the behavior feature data and/or facial expression feature data of the target test object, the emotional state expression of the target test object for the two-dimensional semantic event and/or the multi-dimensional semantic event;
determining the fraud-prone type and/or fraud risk level of the target test object according to preset weights of the emotional state expressions corresponding to the two-dimensional semantic events and the multi-dimensional semantic events respectively;
providing the target test object with corresponding anti-fraud propaganda information in response to the fraud-prone type and/or fraud risk level.
Preferably, the present invention also relates to an anti-fraud propaganda method, comprising:
providing at least one simulation test script for the target test object;
acquiring sound feature data of the target test object for at least one two-dimensional semantic event and/or multi-dimensional semantic event in the simulation test script;
determining, based on feature analysis and extraction of the sound feature data of the target test object, the emotional state expression of the target test object for the two-dimensional semantic event and/or the multi-dimensional semantic event;
determining the fraud-prone type and/or fraud risk level of the target test object according to preset weights of the emotional state expressions corresponding to the two-dimensional semantic events and the multi-dimensional semantic events respectively;
providing the target test object with corresponding anti-fraud propaganda information in response to the fraud-prone type and/or fraud risk level.
Preferably, the anti-fraud propaganda method of the present invention further comprises:
acquiring the response duration of the target test object's feedback behavior for a two-dimensional semantic event and/or a multi-dimensional semantic event;
determining the fraud-prone type and/or fraud risk level of the target test object according to preset weights of the response durations corresponding to the two-dimensional semantic events and the multi-dimensional semantic events respectively.
Preferably, before providing the target test object with at least one simulation test scenario, the method further comprises:
acquiring comprehensive attribute data of a target test object from at least one third-party platform;
determining the identity type of the target test object according to the comprehensive attribute data;
determining at least one type of the simulation test scenario based on the identity type.
Preferably, said determining at least one type of said simulation test scenario based on said identity type comprises:
and determining at least one simulation test script according to the matching corresponding relation between the identity type of the target test object and the simulation test script.
Preferably, the determining at least one type of simulation test scenario based on identity type further comprises:
and creating a task execution stream corresponding to at least one simulation test script, wherein the task execution stream comprises at least one test link corresponding to a two-dimensional semantic event and/or a multi-dimensional semantic event.
Preferably, the present invention provides an anti-fraud promotion system comprising:
the statistical unit is configured to acquire comprehensive attribute data of the target test object and determine the identity type of the target test object based on the comprehensive attribute data;
a processing unit configured to provide at least one simulation test scenario to a target test object according to its identity type;
the monitoring unit is configured to acquire behavior feature data and/or facial expression feature data of the target test object for at least one two-dimensional semantic event and/or multi-dimensional semantic event in the simulation test script, so that the processing unit determines the emotional state expression of the target test object according to the behavior feature data and/or facial expression feature data, and determines the corresponding fraud-prone type and/or fraud risk level according to the emotional state expression;
a management unit configured to provide the target test object with corresponding anti-fraud propaganda information in response to the fraud-prone type and/or fraud risk level.
Preferably, the present invention provides an anti-fraud promotion system comprising:
the statistical unit is configured to acquire comprehensive attribute data of the target test object and determine the identity type of the target test object based on the comprehensive attribute data;
the processing unit is configured to provide at least one simulation test script for the target test object according to the identity type of the target test object;
the monitoring unit is configured to acquire sound feature data of the target test object for at least one two-dimensional semantic event and/or multi-dimensional semantic event in the simulation test script, so that the processing unit determines the emotional state expression of the target test object according to the sound feature data and determines the corresponding fraud-prone type and/or fraud risk level;
a management unit configured to provide the target test object with corresponding anti-fraud propaganda information in response to the fraud-prone type and/or fraud risk level.
Preferably, the present invention provides an anti-fraud electronic device, comprising:
one or more processors;
a memory for storing one or more computer programs;
the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the anti-fraud propaganda method described above.
Preferably, the present invention provides a storage medium containing computer-executable instructions for performing the anti-fraud method as described above when executed by a computer processor.
Drawings
FIG. 1 is a schematic flow diagram of an anti-fraud method according to a preferred embodiment of the present invention;
FIG. 2 is a schematic structural diagram of an anti-fraud system of a preferred embodiment provided by the present invention;
FIG. 3 is a schematic structural diagram of an anti-fraud electronic device of a preferred embodiment provided by the present invention.
List of reference numerals
1: a statistical unit; 2: a processing unit; 3: a monitoring unit; 4: a management unit; 5: an audio communication unit; 6: an information identification unit; 7: a linking unit; 100: an electronic device; 11: an obtaining submodule; 12: a determining submodule; 20: a creating submodule; 71: a first linking submodule; 72: a second linking submodule; 110: a processor; 120: a memory; 130: a communication bus; 140: a communication interface.
Detailed Description
The following detailed description is made with reference to the accompanying drawings.
Example 1
FIG. 1 is a flow chart illustrating an anti-fraud method of the present invention, according to a preferred embodiment. In particular, the anti-fraud method of the present invention is particularly applicable to computer devices and is implemented by a combination of hardware and software. Preferably, the anti-fraud method of the present invention may comprise:
s1: at least one simulated test scenario is provided for the target test subject.
S2: and acquiring behavior feature data and/or facial expression feature data of the target test object aiming at least one two-dimensional semantic event and/or multi-dimensional semantic event in the simulation test script.
S3: and performing feature analysis extraction on the behavior feature data and/or facial expression feature data of the target test object to determine the emotional state expression of the target test object aiming at the two-dimensional semantic event and/or the multi-dimensional semantic event.
S4: and determining the cheating type or cheating risk level of the target test object according to the preset weight of the emotional state expression corresponding to the two-dimensional semantic event and the multi-dimensional semantic event respectively.
S5: providing the target test object with anti-fraud promotional information corresponding to the fraud-prone type or fraud-prone risk level.
According to a preferred embodiment, in the present invention, before step S1 the method further includes:
acquiring comprehensive attribute data of a target test object;
determining the identity type of the target test object according to the comprehensive attribute data;
determining at least one type of the simulation test scenario based on the identity type.
According to a preferred embodiment, the comprehensive attribute data of the target test object may be various kinds of information such as age, gender, educational background, occupation, place of residence, native place, family or social relationship graph, consumption behavior records and hobbies. Specifically, the comprehensive attribute data of the target test object can be acquired through an external third-party platform. The third-party platform can be, for example, a campus platform, a human resources platform, a communication platform, an online social or game platform, various registered service platforms, and the like. Besides acquiring the comprehensive attribute data through the third-party platform, the target test object can supplement and provide other types of data so as to make up for missing or erroneous data on the third-party platform side.
According to a preferred embodiment, at least one label type corresponding to the target test object is determined according to the comprehensive attribute data of the target test object, so as to determine the identity type of the target test object. Specifically, the identity type of the target test object may be a user tag, for example a classification by educational background, by age, or by occupation.
In particular, in the present invention, the identity type of the target test object may be any of various types of user tags. Preferably, the identity type of the target test object is a set of various types of user tags. Therefore, the personal attributes of the target test object can be enriched, and the system can reasonably distribute the simulation test script and the matched anti-fraud propaganda information according to the relevance among the user labels of the target test object.
In some alternative embodiments, the identity type of the target test object may take the form of a portrait label created for the target test object. In particular, one possible presentation of the portrait label is: in the middle, a representation of the target test object; on one side, the user tags, such as the age, gender, educational background or occupation mentioned above; and on the other side, the tag values, such as 45 years old, male, doctorate, and university teacher. Preferably, showing the identity type of the target test object by way of a portrait label is more intuitive.
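As an illustration of the portrait-label idea, the following minimal Python sketch models the identity type as a mapping from user tags to tag values. The class and field names are assumptions for demonstration and are not taken from the patent.

```python
# A minimal sketch of a "portrait label": tag -> value pairs attached to a subject.
from dataclasses import dataclass, field

@dataclass
class PortraitLabel:
    subject_id: str
    tags: dict = field(default_factory=dict)   # e.g. {"age": 45, "gender": "male", ...}

profile = PortraitLabel(
    "subject-001",
    {"age": 45, "gender": "male", "education": "doctorate", "occupation": "university teacher"},
)
print(profile)
```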
According to a preferred embodiment, based on a preset distribution rule, the identity type of the target test object can be matched against the simulation test scripts in the system database or system material library, and at least one simulation test script meeting the condition is determined by calculating the respective matching degrees. For example, a script whose matching degree is greater than a certain threshold may be regarded as a qualified simulation test script. After at least one simulation test script is determined for the target test object according to its identity type, the script is pushed to the target test object for its anti-fraud simulation test.
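The matching step could, for instance, be sketched as a simple tag-overlap score compared against a threshold, as in the hypothetical Python fragment below; the scoring rule, the threshold value of 0.6 and the example script library are all assumptions, since the patent does not specify how the matching degree is computed.

```python
# Illustrative only: pick simulation test scripts whose matching degree with the
# subject's identity tags exceeds a threshold. Rule and values are assumptions.

def matching_degree(identity_tags: set, script_tags: set) -> float:
    return len(identity_tags & script_tags) / max(len(script_tags), 1)

def select_scripts(identity_tags, script_library, threshold=0.6):
    return [s for s in script_library
            if matching_degree(identity_tags, s["tags"]) >= threshold]

library = [
    {"name": "healthcare-consumption", "tags": {"age_50_plus", "healthcare"}},
    {"name": "part-time-order-brushing", "tags": {"student", "part_time"}},
]
print(select_scripts({"age_50_plus", "healthcare", "investment"}, library))
```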
According to a preferred embodiment, the simulation test script may be understood as a designed text that, in the course of simulating an actual fraud, induces the deceived party to fall into a fraudster's trap. In particular, the simulation test script may include a script type, a scenario, audio-video, text information, and the like. Specifically, script types include, but are not limited to, telephone fraud scripts, SMS fraud scripts, mail fraud scripts, online chat fraud scripts, software fraud scripts and the like. Scenario types include, but are not limited to, online order-brushing, online loans, online part-time jobs, consumption and financing, betting, game accounts, online shopping, fictitious dangerous situations, marriage and dating, and the like.
Specifically, taking a target test object aged 50 or older as an example, such an object may pay close attention to content such as health care and investment and financing. For such target test objects, the scenario type of the simulation test script may be health-care consumption, online shopping, fictitious dangerous situations, lottery winnings, and the like.
According to a preferred embodiment, before providing the at least one simulation test script for the target test object, the method may comprise: creating a task execution flow corresponding to each simulation test script, and setting a plurality of test nodes in the task execution flow. In particular, the task execution flow represents the overall procedure for completing a simulation test script, and the test nodes represent its individual steps. Preferably, each step of completing the simulation test script may be implemented as a test node.
Further, the transmission tasks and transmission sequence of the script information can be established according to the communication channel through which the simulation test script is delivered. For each piece of script information, the corresponding test message may be added automatically or manually, and a program matched with each test message is automatically embedded to complete generation of the whole task execution flow. In particular, the task execution flows corresponding to the respective simulation test scripts may be sent to the target test objects in forms including, but not limited to, short messages, emails, voice, and the like. In particular, the simulation test script may be a voice script, which may include a plurality of voice messages; it may also be a short-message script, a WeChat script, an email script and the like, which include a plurality of text messages.
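One possible shape for such a task execution flow is sketched below as an ordered list of test nodes, each tied to a delivery channel and a two-dimensional or multi-dimensional semantic event. The field names and example messages are illustrative assumptions only.

```python
# Hypothetical task execution flow: an ordered list of test nodes per script.
task_flow = [
    {"node": 1, "channel": "sms",   "event_kind": "multi-dimensional",
     "message": "Do you remember the restaurant we went to last New Year…?"},
    {"node": 2, "channel": "sms",   "event_kind": "two-dimensional",
     "message": "Click this link to claim the refund: <simulated phishing link>"},
    {"node": 3, "channel": "voice", "event_kind": "two-dimensional",
     "message": "Please transfer the deposit to the account I just sent you."},
]

for node in task_flow:   # nodes are delivered in sequence over the chosen channel
    print(f"node {node['node']} via {node['channel']}: {node['message']}")
```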
According to a preferred embodiment, after the simulation test scenario is sent to the corresponding target test object, the feedback information of the target test object in the simulation test scenario is monitored in real time and analyzed and identified. Specifically, the feedback information of the target test object may include voice feedback information, text feedback information, and the like. Further, by performing recognition processing on the feedback information of the target test object, a reply result of the target test object to the simulation test scenario can be obtained. The feedback information of the target test object for the simulation test scenario may include, for example, whether to click a link or download software, whether to enter a transfer process or a remittance process, or whether to provide text information such as an account number and a password.
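By way of illustration, the monitored feedback for one test node might be recorded in a structure like the hypothetical one below, covering the behaviors listed above (clicking a link, downloading software, entering a transfer flow, supplying account details); the field names are assumptions, not from the patent.

```python
# Hypothetical per-node feedback record for a target test object.
feedback_record = {
    "node": 2,
    "channel": "sms",
    "clicked_link": False,
    "downloaded_file": False,
    "entered_transfer_flow": False,
    "provided_credentials": False,
    "reply_text": "Who is this? I don't recognise the number.",
    "response_seconds": 42.0,
}
print(feedback_record)
```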
According to a preferred embodiment, the feedback information of the target test object may be instant feedback information or feedback information of a preset interval. When the simulation test scenario is a voice scenario, the feedback behavior information of the target test object may be an instant message feedback in a voice interaction scene. When the simulation test scenario is a short message scenario, the feedback behavior information of the target test object may be feedback information at a predetermined interval, such as 0.5 hour. Specifically, if the simulated test scenario is a voice scenario, the communication duration, communication content, and whether the target test object has a behavior instruction conforming to the fraudster during communication can be recorded.
According to a preferred embodiment, the identity type of the target test object, the simulation test script corresponding to that identity type, and the association between the target test object's behavior information for each simulation test script are analyzed according to preset algorithm rules in the system, so that the fraud-prone type and fraud risk level of the target test object can be determined. In particular, the fraud-prone type may be understood as any one or combination of script types or scenario types to which the object is susceptible, such as online order-brushing, online investment, etc. Further, the fraud risk level may be understood as representing the degree of risk that the target test object may be deceived. Specifically, with a full score of 100: a score ≥ 90 may be the safety level; 80 ≤ score < 90 a first-level risk; 70 ≤ score < 80 a second-level risk; 60 ≤ score < 70 a third-level risk; and score < 60 a high-risk level.
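The grading bands just described translate directly into a small lookup, sketched below in Python; only the band boundaries come from the text, while the function name and return labels are illustrative.

```python
# Grading bands as described above (full score 100); function name is an assumption.
def fraud_risk_level(score: float) -> str:
    if score >= 90:
        return "safety level"
    if score >= 80:
        return "first-level risk"
    if score >= 70:
        return "second-level risk"
    if score >= 60:
        return "third-level risk"
    return "high-risk level"

print(fraud_risk_level(84))   # -> "first-level risk"
```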
According to a preferred embodiment, after determining the fraud-prone type and fraud-prone risk level of the target test object, the target test object may be provided with anti-fraud promotional information corresponding to the fraud-prone type and/or fraud-prone risk level thereof for assisting the target test object in preventing similar fraud means. In particular, the anti-fraud promotional information may be picture, audio-visual, textual information, etc. contained in the system material library.
According to a preferred embodiment, in existing anti-fraud test platforms the result of a user's fraud risk rating is generally determined only from the processing result of the target test object at each test node. This inevitably introduces a large error into the determination and prevents accurate rating of the target test object's fraud-prone type, so that anti-fraud information matching the target test object cannot be provided accurately.
For example, in a simulation test script in short-message form, suppose a certain test node is set to send a web link (simulating a phishing link) to the interaction interface of the target test object by short message. The monitoring module then supervises and reports the behavior of the target test object in this process, specifically whether the web link is clicked, and the fraud-prone type and/or fraud risk level of the target test object is determined according to whether the link is clicked.
In effect, this result-oriented judgment ignores many objectively present factors, the key one being the emotional performance of the target test object during the simulated interaction. For the above example, even if two different target test objects both choose not to click the web link (simulating a phishing link), if their actual psychological states or moods differ, then their chances of potentially being deceived are in fact different.
For example, suppose an elderly test object is in doubt, hesitant or even anxious while choosing not to click the web link (simulating a phishing link). Although it chooses not to click, a large uncertainty may remain in its mind; it may have refused quickly only because it realizes that the event is not really happening and cannot have serious consequences, or because the scene or content of the simulation test script is limited. In that case, if the final fraud risk level is determined to be first-level based only on the behavioral data of the target test object, the actual fraud risk level may well be second-level or even higher.
If only the final processing result is taken as the basis of judgment, the fraud risk level of the target test object will be judged incorrectly. A misleading result can lead the target test object to blindly trust its own thinking and recognition ability and fail to correctly realize the potential fraud risk, and it can also lead the system to recommend anti-fraud propaganda information that does not match the target test object's real fraud risk level, so the teaching effect of the immersive simulated interactive experience is greatly reduced. If the target test object later encounters a second simulated fraud, or even a real fraud, carried out in a similar manner (for example with only a sentence or a phrase replaced), its probability of being deceived remains high.
For example, if only relatively basic teaching information is provided to a target test object whose actual fraud risk level is higher, the object will not experience more complex and varied fraud patterns and cannot improve its own anti-fraud skills, and the basic material wastes its limited experience and learning time. Even if the target test object can additionally browse and learn more varied and intricate fraud cases and the corresponding anti-fraud skills, it must spend more time and energy to learn and memorize them, which runs against the intelligent recommendation trend of existing online learning platforms and reduces the user experience. Moreover, after going through the simulated interactive experience without feeling a 'fraud experience' matching its actual fraud risk level, the target test object may blindly conclude that it already has strong anti-fraud awareness, reduce the frequency with which it actively seeks out similar anti-fraud experience and prevention knowledge in the future, and thus fail to keep up with newly emerging fraud methods however much time it has already spent.
In view of these possible problems in the prior art, the present invention aims to provide an anti-fraud propaganda method which determines the emotional state expression of a target test object for at least one two-dimensional semantic event and/or multi-dimensional semantic event in a simulation test script by analyzing the behavior feature data and/or facial expression feature data of the target test object for those events, and determines the fraud-prone type or fraud risk level of the target test object according to preset weights of the emotional state expressions corresponding to the two-dimensional semantic events and the multi-dimensional semantic events respectively.
According to a preferred embodiment, in the present invention, the emotional state data of the target test subject in experiencing the simulated test scenario, in particular the emotional performance for each test node in the simulated test scenario, needs to be monitored. In particular, image acquisition technology can be adopted to perform feature extraction on facial expression features, limb behavior features and the like of the target test object.
According to a preferred embodiment, the prior art also includes methods of emotion recognition based on the voice of the test subject, as well as more accurate methods based on brain-wave signals. In consideration of practical application scenarios (such as a mobile terminal device carrying the software platform, or a community promotion station), emotion recognition is preferably performed by image acquisition, optionally combined with voice recognition. Specifically, when the carrier of the invention is, for example, a computer device, the system can turn on the camera permission for image acquisition, so that the emotional state of the target test object can be confirmed through recognition of facial expression features and/or limb behavior features. When the simulation test script is a voice script, the voice messages of the target test object can also be captured through a microphone, so that the emotional state of the target test object can be confirmed through the system's recognition of those voice messages.
According to a preferred embodiment, after the facial expression features and the limb behavior features of the target test object in the simulation test script are acquired, the facial expression features and the limb behavior features of the target test object are analyzed to confirm the emotional state of the target test object. In particular, the principle of analyzing the facial expression characteristics and the limb behavior characteristics of the target test subject to confirm the emotional state of the target test subject is that when one person exhibits different emotional states, it is often accompanied by various involuntary reaction changes in skin conductance, heart rate, respiratory rate, blood pressure, telangiectasia, muscle movement, and pupillary dilation, which may be related to various factors related to changes in emotional state, such as anxiety, hesitation, distraction, anger, and the like.
Specifically, for example, when a person is in one of these emotional states, he or she usually shows uncontrolled facial muscle movements (e.g., some people bite their lips repeatedly when puzzled or hesitant) and pupil movements (e.g., pupils dilate involuntarily when lying). Taking limb behavior features as an example, when a person is excited, he or she may involuntarily clap hands or laugh; when a person is confused, he or she may involuntarily scratch the head or rest the chin on a hand.
Preferably, the above extraction confirmation of the facial expression features and the limb behavior features of the target object can be derived from the result of model training on sample data. Specifically, the prior art has made a great deal of research on the correlation between human emotion and external expressions (including, for example, facial expressions and body behaviors of the present invention), and for this reason, the present invention may pre-store a great deal of feature sample data related to emotional states, facial features and body features, and after obtaining facial expression features and body behavior features of a target test object in a simulation test scenario, key feature points in the facial expression features of the target test object may be extracted through an image analysis and recognition technique and compared with the feature sample data in a system database to determine the emotional state or the emotional state range of the target test object.
In particular, the key feature points in the facial expression features of the target test object are, for example, one or more feature points of the lip region, such as the mouth corners, the lip peaks and the lip bead. Since facial feature extraction technology in the prior art is mature, it is not described in detail here. The emotional state range may represent a specific emotional state of the target test object, such as mild anxiety or severe anxiety, and may also represent a general range of mood for indicating the broad emotional state of the target test object: for example, the target test object may be in a positive mood, including relaxed, happy or even excited, or in a negative mood, including confused, anxious or even angry.
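As a highly simplified sketch of the comparison step, the fragment below matches an extracted feature vector against pre-stored emotion samples by nearest distance. The feature dimensions, sample values and emotion names are invented for illustration and do not reflect the patent's actual sample data or recognition model.

```python
# Toy nearest-sample matching of facial key-point features to an emotional state.
import math

emotion_samples = {            # pre-stored feature sample data (invented values)
    "relaxed":  [0.10, 0.05, 0.02],
    "hesitant": [0.45, 0.30, 0.20],
    "anxious":  [0.80, 0.55, 0.40],
}

def nearest_emotion(features):
    return min(emotion_samples,
               key=lambda e: math.dist(features, emotion_samples[e]))

print(nearest_emotion([0.50, 0.28, 0.22]))   # -> "hesitant"
```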
According to a preferred embodiment, recognition of the voice messages of the target test object determines its emotional state by analyzing voice characteristics such as pitch (fundamental frequency), volume (decibels), speech rate and word pauses. In particular, when a person is anxious, hesitant, happy or angry, his or her speech is usually accompanied by involuntary changes in pitch, volume, speech rate and pauses as described above, so the possible emotional state of the target test object can be determined by analyzing this voice feature information. Methods of recognizing emotion through voice characteristics are mature and are therefore not described here in detail.
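For illustration only, a coarse rule-based sketch of how pitch, volume, speech rate and pause statistics might be folded into a rough emotional label is given below; the thresholds are arbitrary example values and this is not the patent's recognition method.

```python
# Arbitrary illustrative thresholds; not the patent's voice-emotion method.
def voice_emotion(pitch_hz: float, volume_db: float, words_per_min: float, pause_ratio: float) -> str:
    if pause_ratio > 0.35 and words_per_min < 90:
        return "hesitant"
    if pitch_hz > 220 and volume_db > 70 and words_per_min > 160:
        return "agitated"
    return "normalized"

print(voice_emotion(pitch_hz=180, volume_db=60, words_per_min=80, pause_ratio=0.4))  # -> "hesitant"
```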
According to a preferred embodiment, the simulation test scenario comprises a plurality of test nodes, each test node corresponds to a two-dimensional semantic event or a multi-dimensional semantic event, i.e. a complete simulation test scenario comprises a plurality of semantic events.
According to a preferred embodiment, in the present invention a two-dimensional semantic event may specifically refer to a two-dimensional question. The answer to a two-dimensional question is usually relative and uniquely determined, such as yes or no, correct or incorrect, accept or reject. Specifically, test nodes in the simulation test script are used to simulate questions such as whether you agree to receive a file, agree to transfer or remit money to the other party, click a web link shared by the other party (simulating a phishing link), or receive a compressed file package shared by the other party (simulating a virus file). Generally, at such test nodes the decision of the target test object has a decisive effect on the direction of the subsequent script story; that is, the decision made by the target test object for such a two-dimensional semantic event largely determines whether the target test object is deceived (for example, by an action such as transferring or remitting money), and the emotional state presented by the target test object while making that decision may indicate its potential fraud risk level.
Specifically, for example, suppose that when facing a two-dimensional semantic event such as whether to agree to transfer or remit money to the other party, the target test object shows anxiety and hesitation at the test node. Although it may eventually choose to refuse, the hesitation and uncertainty it presents in the process mean that its real probability of being deceived may be higher; when it later encounters a simulation test, or a real fraud, with the same core content presented in a different form, it may no longer choose to refuse.
According to a preferred embodiment, the multi-dimensional semantic event can specifically refer to a multi-dimensional question. Answers to multi-dimensional questions are generally subjective and free. In particular, a multi-dimensional semantic event may be understood as being used to advance a two-dimensional semantic event, or to carry the script's story from one stage to the next. Specifically, a typical multi-dimensional semantic event in a simulation test script is, for example, "Do you remember which restaurant we went to together last New Year…". The significance of such multi-dimensional semantic events is that when a fraudster pretends to be a relative or friend in order to defraud the target test object, he will usually hold some kind of familiar conversation (small talk) with the target test object to lower the other party's guard and win their trust; this may be all the more necessary for contacts who have not been reached for a long time, since in most ordinary situations it would clearly be inappropriate to ask the target test object directly for a transfer or remittance on the pretext of needing financial help. In the course of the fraud, the fraudster in fact intends to lead up to the final purpose through a series of multi-dimensional semantic events like the above, that is, to the above-mentioned two-dimensional semantic events, e.g. asking the target test object to transfer or remit money, or, for example when impersonating a financial institution (bank), asking for the target test object's personal information.
Generally, at this type of test node, although the question of the multi-dimensional semantic event and the answer of the target test object may not have a decisive effect on the direction of the script story or directly affect whether the target test object is deceived, the potential auxiliary driving effect cannot be ignored. For example, when a fraudster impersonating a relative or friend holds a "familiar conversation" with the target test object, the emotional expression of the target test object for the current multi-dimensional semantic event (for example, the above-mentioned "Do you remember which restaurant we went to together last New Year…") may indicate whether its trust is psychologically increasing or decreasing, and this potential trust level may be related to whether the target test object refuses when the fraudster later makes a transfer or remittance request.
Specifically, suppose the target test object shows a positive emotion when facing the aforementioned multi-dimensional semantic event "Do you remember which restaurant we went to together last New Year…", and its answer also shows that it is moving along the wrong (i.e. ultimately largely deceived) story route. Then its final actual risk of being deceived may be relatively higher, not only because it eventually performs a behavior such as a transfer or remittance, but more importantly because it probably trusted the other party from the beginning and never raised a doubt. If the same core content is applied later, in a simulated or real fraud of a different form, the target test object may show a more robust judgment than this time precisely because it has experienced the same event, whereas without such an experience its judgment on other similar events could have had consequences worse than expected. In particular, the fraud risk level of the target test object can be indirectly estimated by combining the progression of a plurality of multi-dimensional semantic events and judging the emotional expression of the target test object for each of them.
In particular, for simulation test scripts of different scenario types (such as order-brushing, loans, consumption and financing, or dating) and forms (such as web-chat fraud scripts, mail fraud scripts or SMS fraud scripts), different forms of two-dimensional semantic events and multi-dimensional semantic events can be set through big-data analysis, combined with the actual scene and content requirements, based on the acquired personal information of the target test object such as age, gender, educational background, occupation, address, place of residence, family or social relationship graph, consumption behavior records and hobbies, and can be manually corrected by background personnel to better match the actual scene and the personal situation of the target test object.
According to a preferred embodiment, when the emotional state representation corresponding to the target test object is determined based on the feature extraction and analysis results of the target test object for the facial expression features and the limb behavior features of the two-dimensional semantic events and the multi-dimensional semantic events in the simulation test script, the assignment correction can be performed for each test node according to the specific emotional state representation. Specifically, in the present invention, the emotional state expression of the target test subject can be roughly classified into three categories including positive emotions, negative emotions, and normalized emotions. Positive emotions may include, for example, smiling, happy, or excited. Negative emotions may include, for example, confusion, anxiety, or anger. The normalized mood may be intermediate between the positive mood and the negative mood, i.e. it may be understood that there is no significant fluctuating change compared to the positive mood and the negative mood. It should be understood that the emotion classification presented here is only illustrated as a preferred example, and may be actually divided into more complex emotion classifications, and the specific classification method is not limited to the above form, and a more preferred way is to combine the emotion expression of the testee in the sample data for a specific semantic event.
According to a preferred embodiment, the specific way of performing weight correction on the fraud-prone type or fraud risk level of the target test object according to the preset weights of the emotional states corresponding to the two-dimensional semantic events and the multi-dimensional semantic events is as follows. As described above, the behavior data of the target test object at each test node corresponds to a certain score, and the cumulative result of these scores corresponds to the fraud risk level of the target test object: for example, a score ≥ 90 may be the safety level; 80 ≤ score < 90 a first-level risk; 70 ≤ score < 80 a second-level risk; 60 ≤ score < 70 a third-level risk; and score < 60 a high-risk level.
According to a preferred embodiment, different correction coefficients may be assigned to different emotion classes. Alternatively, once the approximate emotional state (e.g., a negative emotion) is determined, a correction coefficient interval is assigned, and different degrees of emotion within that interval have different correction coefficients, such as 0.8 for general doubt and 0.5 for severe doubt. In particular, as described above, a two-dimensional semantic event may have a more direct impact on the final rating of the target test object's fraud risk level, while a multi-dimensional semantic event has a relatively weaker impact.
Furthermore, when a simulation test script contains a plurality of two-dimensional semantic events and multi-dimensional semantic events, the score proportion or correction coefficient of the two-dimensional semantic events can be set greater than that of the multi-dimensional semantic events. The specific score proportions can be determined according to the respective numbers of two-dimensional and multi-dimensional semantic events and the specific content of those events. In particular, for a normalized emotion, since the target test object shows no significant emotional state change, the correction coefficient may be 1; in other words, when the target test object shows no significant emotional change toward the semantic event at a certain test node, no correction need be made. It should be understood that the above examples of correction coefficients are for illustration only; in practice the selection of correction coefficients may follow a more complex scheme to improve accuracy.
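A toy version of this correction scheme is sketched below: each test node's base score is scaled by a coefficient chosen from its recognized emotional state, and two-dimensional events carry a larger share of the total than multi-dimensional ones. All numeric values (coefficients, shares, scores) are illustrative assumptions.

```python
# Illustrative emotion-based correction with a larger share for 2-D events.
CORRECTION = {"normalized": 1.0, "mild_doubt": 0.8, "severe_doubt": 0.5}
EVENT_SHARE = {"two-dimensional": 0.7, "multi-dimensional": 0.3}

def corrected_score(nodes):
    total = 0.0
    for n in nodes:   # n: {"kind", "base_score" (out of 100), "emotion"}
        group = [m for m in nodes if m["kind"] == n["kind"]]
        share = EVENT_SHARE[n["kind"]] / len(group)      # split the group's share evenly
        total += n["base_score"] * CORRECTION[n["emotion"]] * share
    return total

nodes = [
    {"kind": "two-dimensional",   "base_score": 100, "emotion": "mild_doubt"},   # refused, but hesitant
    {"kind": "multi-dimensional", "base_score": 100, "emotion": "normalized"},
]
print(corrected_score(nodes))   # 100*0.8*0.7 + 100*1.0*0.3 = 86.0
```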
Specifically, for example, in some optional embodiments, when the two-dimensional semantic event is "whether to transfer money to the other party" and the target test object chooses to refuse, the fraud risk level determined from the behavior data "refuse" alone would be the safety level (score ≥ 90). If, according to the facial expression features and/or limb behavior features of the target test object at that moment, it is determined that the target test object showed a specific emotional state (e.g., hesitation) toward the two-dimensional semantic event, and the result is corrected by the correction coefficient corresponding to that emotional state, the fraud risk level actually corresponding to the target test object may be a second-level risk (70 ≤ score < 80), because it showed obvious hesitation in the process.
On the other hand, when the two-dimensional semantic event is "whether to transfer money to the other party" and the target test object chooses to agree, the fraud risk level determined from the behavior data "agree" alone would be a third-level risk (60 ≤ score < 70). If, according to the facial expression features and/or limb behavior features of the target test object at that moment, it is determined that the target test object showed a specific emotional state (e.g., firmness) toward the two-dimensional semantic event, and the result is corrected by the corresponding correction coefficient, the fraud risk level actually corresponding to it may be the high-risk level (score < 60), because it showed no hesitation in the process and, in particular, finally performed the "agree" behavior.
As a more preferred embodiment, when the target test object performs an "agree" action for a two-dimensional semantic event and/or a multi-dimensional semantic event, especially for a decisive event among the two-dimensional semantic events, such as agreeing to transfer money, agreeing to download a shared file, or agreeing to input account information, the fraud risk level of the target test object can be determined as the high-risk level regardless of its emotional state expression. Therefore, in the present invention, determining and correcting the fraud-prone type and/or fraud risk level of the target test object in response to its emotional state expression toward the two-dimensional semantic event and/or the multi-dimensional semantic event is particularly suitable for the case in which the target test object performs a "refuse" action for the two-dimensional semantic event and/or multi-dimensional semantic event, especially for a decisive event among the two-dimensional semantic events (such as the above-mentioned agreement to transfer money): in this case the target test object may still lack good anti-fraud awareness, so even though it selects "refuse", its potential risk of being deceived is actually higher than expected.
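The override just described can be sketched as a simple guard applied before the normal grading, as in the hypothetical fragment below; the event names and helper are assumptions for illustration.

```python
# Consenting to a decisive 2-D event pins the rating at high risk regardless of emotion.
DECISIVE_EVENTS = {"agree_transfer", "agree_download_file", "agree_account_input"}

def final_level(decisions: dict, corrected_score: float) -> str:
    if any(decisions.get(event) == "agree" for event in DECISIVE_EVENTS):
        return "high-risk level"                       # override, emotion ignored
    bands = [(90, "safety level"), (80, "first-level risk"),
             (70, "second-level risk"), (60, "third-level risk")]
    return next((name for lo, name in bands if corrected_score >= lo), "high-risk level")

print(final_level({"agree_transfer": "refuse"}, 86.0))   # -> first-level risk
print(final_level({"agree_transfer": "agree"},  95.0))   # -> high-risk level
```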
In particular, a specific emotional state may refer, for example, to the general emotional performance of test subjects toward a certain two-dimensional semantic event in most of the sample data; for example, for the two-dimensional semantic event "transfer or remit money to the other party", subjects who choose to refuse often show either hesitation or firmness.
According to a preferred embodiment, the corresponding emotional state expression is determined by acquiring the facial expression features and limb behavior features of the target test object when facing the two-dimensional semantic events and multi-dimensional semantic events, and the fraud-prone type and fraud risk level of the target test object are corrected according to that emotional state expression. This improves the accuracy of evaluating the target test object's fraud risk level, identifies its potential risk of being deceived, and makes it possible to provide the target test object with anti-fraud propaganda teaching material that matches its actual fraud risk level, thereby improving its anti-fraud awareness.
According to a preferred embodiment, the present invention may further determine the corresponding emotional state of the target test object based on feature analysis of the voice information of the target test object in the simulation test scenario, in particular when the simulation test scenario is a voice test scenario, so that the fraud-prone type or fraud risk level of the target test object can be corrected according to that emotional state. This may be embodied as:
acquiring sound feature data of the target test object for at least one two-dimensional semantic event and/or multi-dimensional semantic event in the simulation test script;
performing feature analysis and extraction on the sound feature data of the target test object to determine the emotional state expression of the target test object towards the two-dimensional semantic event and/or the multi-dimensional semantic event;
determining the fraud-prone type or fraud risk level of the target test object according to the preset weights of the emotional state expression corresponding to the two-dimensional semantic events and the multi-dimensional semantic events respectively;
providing the target test object with anti-fraud propaganda information corresponding to the fraud-prone type or fraud risk level.
In particular, the specific way of identifying the emotional state expression of the target test object towards the two-dimensional semantic events and the multi-dimensional semantic events from the sound feature information, and of applying weight correction to the fraud-prone type or fraud risk level of the target test object according to the preset weights of the emotional state corresponding to the two-dimensional semantic events and the multi-dimensional semantic events, is similar to the method described above for identifying the emotional state of the target test object from facial expression features and body behavior features and correcting the fraud-prone type or fraud risk level accordingly. A detailed description is therefore omitted here; reference can be made to the method steps above.
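By way of illustration only, the following sketch shows one possible way of reducing raw sound feature data to a coarse emotional-state label using simple acoustic features. The features (short-time energy, zero-crossing rate, pause ratio) and thresholds are assumptions for the example, not the feature set prescribed by the invention; a practical system would more likely use a trained speech-emotion model.

import numpy as np

def voice_emotion_label(samples: np.ndarray, sample_rate: int) -> str:
    """Derive a coarse emotional-state label from mono PCM samples.

    Features and thresholds here are illustrative assumptions only.
    """
    frame_len = int(0.025 * sample_rate)                          # 25 ms frames
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)

    energy = (frames ** 2).mean(axis=1)                           # short-time energy per frame
    zcr = (np.abs(np.diff(np.sign(frames), axis=1)) > 0).mean()   # overall zero-crossing rate
    pause_ratio = float((energy < 0.1 * energy.mean()).mean())    # share of near-silent frames

    # Many pauses or strongly fluctuating energy are read as hesitation;
    # steady, voiced speech (low zero-crossing rate) is read as firmness.
    if pause_ratio > 0.4 or energy.std() > energy.mean():
        return "hesitation"
    if zcr < 0.3:
        return "firmness"
    return "neutral"

# Usage with a synthetic signal: a steady 200 Hz tone is labelled as firm speech.
sr = 16000
t = np.arange(sr) / sr
print(voice_emotion_label(np.sin(2 * np.pi * 200 * t), sr))       # -> firmness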
According to a preferred embodiment, in addition to determining and correcting the corresponding fraud-prone type and fraud risk level by analyzing and identifying the emotional state expression of the target test object towards the two-dimensional semantic event and/or multi-dimensional semantic event of each test link in the simulation test script, the corresponding fraud-prone type and fraud risk level can also be determined and corrected in combination with the response duration of the target test object for the two-dimensional semantic event and/or multi-dimensional semantic event of each test link.
Specifically, determining and correcting the corresponding fraud-prone type and fraud risk level in combination with the response duration of the target test object for the two-dimensional semantic event and/or multi-dimensional semantic event of each test link includes:
acquiring the response duration of the target test object for at least one two-dimensional semantic event and/or multi-dimensional semantic event in the simulation test script;
determining the fraud-prone type and/or fraud risk level of the target test object according to the preset weights of the response duration corresponding to the two-dimensional semantic events and the multi-dimensional semantic events respectively.
According to a preferred embodiment, the response duration of the target test object for each two-dimensional semantic event and/or multi-dimensional semantic event may reflect the confidence of the target test object in the current test scenario and, combined with the feedback result of the target test object for each two-dimensional semantic event and/or multi-dimensional semantic event, may to some extent represent the potential fraud risk of the target test object. In particular, corresponding score weights may be set for different response durations. When or after the fraud-prone type and/or fraud risk level of the target test object is determined from the emotional state expression of the target test object for each two-dimensional semantic event and/or multi-dimensional semantic event, the fraud-prone type and/or fraud risk level may be further corrected with reference to the response duration of the target test object for each two-dimensional semantic event and/or multi-dimensional semantic event. Preferably, the determination of the corresponding fraud-prone type and/or fraud risk level based on the emotional state expression and based on the response duration may be performed simultaneously, simply by setting the same or different weights for each emotional state expression, the corresponding behavior feedback result, and the response duration. Further, based on the different priorities of the two-dimensional semantic events and/or multi-dimensional semantic events, the score weight carried by the same response duration differs between the two-dimensional semantic events and the multi-dimensional semantic events.
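By way of illustration only, the simultaneous weighted scoring described above can be sketched as follows. The event weights, signal weights, per-event scores, and response-duration bands are assumptions introduced for the example; the only constraint carried over from the description is that two-dimensional events receive a larger preset weight than multi-dimensional events.

# Weighted combination of emotional state, behaviour feedback and response duration
# per event; all weights and scores below are illustrative assumptions.

# Per-event weights: two-dimensional events outweigh multi-dimensional ones.
EVENT_WEIGHT = {"two_dimensional": 0.7, "multi_dimensional": 0.3}

# Relative weights of the three signals within one event.
SIGNAL_WEIGHT = {"behavior": 0.5, "emotion": 0.3, "response_time": 0.2}

def response_time_score(seconds: float) -> float:
    """Quick, confident answers score higher than long deliberation (assumed bands)."""
    if seconds < 3:
        return 1.0
    if seconds < 10:
        return 0.6
    return 0.3

def event_score(behavior_score: float, emotion_score: float, seconds: float) -> float:
    return (SIGNAL_WEIGHT["behavior"] * behavior_score
            + SIGNAL_WEIGHT["emotion"] * emotion_score
            + SIGNAL_WEIGHT["response_time"] * response_time_score(seconds))

def overall_score(events: list[dict]) -> float:
    """events: [{'kind': 'two_dimensional', 'behavior': 1.0, 'emotion': 0.4, 'seconds': 12.0}, ...]"""
    total = sum(EVENT_WEIGHT[e["kind"]] * event_score(e["behavior"], e["emotion"], e["seconds"])
                for e in events)
    weight_sum = sum(EVENT_WEIGHT[e["kind"]] for e in events)
    return 100.0 * total / weight_sum        # normalise to the 0-100 band used above

events = [
    {"kind": "two_dimensional", "behavior": 1.0, "emotion": 0.4, "seconds": 12.0},
    {"kind": "multi_dimensional", "behavior": 0.6, "emotion": 0.7, "seconds": 2.0},
]
print(round(overall_score(events), 1))       # -> 68.9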
Example 2
According to a preferred embodiment, as shown in FIG. 2, the present invention provides an anti-fraud propaganda system based on the above-mentioned anti-fraud propaganda method, and the actual carrier of the anti-fraud propaganda system may be, for example, a computer device. Specifically, an anti-fraud propaganda system of the present invention may include a statistical unit 1, a processing unit 2, a monitoring unit 3, and a management unit 4 communicatively connected to each other. In particular:
the statistical unit 1 is used for acquiring comprehensive attribute data of the target test object and determining the identity type of the target test object based on the comprehensive attribute data.
The processing unit 2 is used for providing at least one simulation test script to the target test object according to the identity type of the target test object.
The monitoring unit 3 is used for acquiring behavior feature data and/or facial expression feature data of the target test object for at least one two-dimensional semantic event and/or multi-dimensional semantic event in the simulation test script, so that the processing unit 2 determines the emotional state expression of the target test object according to the behavior feature data and/or facial expression feature data of the target test object, and determines the fraud-prone type and/or fraud risk level corresponding thereto according to the emotional state expression of the target test object.
The management unit 4 is used for providing the target test object with anti-fraud propaganda information corresponding to the fraud-prone type or fraud risk level of the target test object.
According to a preferred embodiment, the statistical unit 1 may interface with external third-party platforms for obtaining comprehensive attribute data related to the target test object from each third-party platform, or may extract the comprehensive attribute data of the target test object from the third-party platforms through a preset algorithm. The third-party platform may be, for example, a campus platform, a human resources platform, an online social platform, a game competition platform, or various registration service platforms.
According to a preferred embodiment, the statistical unit 1 may determine at least one tag type corresponding to the target test object according to the comprehensive attribute data associated with the target test object. In particular, the identity type of the target test object may be a user tag. For example, when the identity type of the target test object is divided by professional identity, the identity type may be student, teacher, or company employee. When the identity type of the target test object is divided by age, the identity type may be child, youth, middle-aged, elderly, and the like. In particular, in some alternative embodiments, the identity type of the target test object may take the form of a portrait label created for the target test object, as described above.
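By way of illustration only, the derivation of user tags from comprehensive attribute data can be sketched as follows. The field names ('age', 'occupation') and the tag boundaries are assumptions introduced for the example, since the description only requires that identity types or user tags be derived from the comprehensive attribute data.

def identity_tags(attrs: dict) -> list[str]:
    """Derive user tags from comprehensive attribute data.

    Field names and tag boundaries are illustrative assumptions,
    not rules disclosed by the invention.
    """
    tags = []

    occupation = attrs.get("occupation", "")
    if occupation in ("student", "teacher", "company_employee"):
        tags.append(occupation)

    age = attrs.get("age")
    if age is not None:
        if age < 18:
            tags.append("child")
        elif age < 45:
            tags.append("young")
        elif age < 60:
            tags.append("middle_aged")
        else:
            tags.append("elderly")

    return tags

# Example: a 20-year-old student is tagged ['student', 'young'], which can then be
# matched against simulation test scripts aimed at campus fraud scenarios.
print(identity_tags({"age": 20, "occupation": "student"}))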
According to a preferred embodiment, the statistical unit 1 may interface with the processing unit 2 to send it the comprehensive attribute data of the target test object. The processing unit 2 may analyze and process the comprehensive attribute data according to a preset algorithm rule to find at least one simulation test script that matches the identity type or user tag of the target test object.
According to a preferred embodiment, the processing unit 2 may analyze and identify the behavior feature data and/or facial expression feature data acquired by the monitoring unit 3 for the target test object on at least one two-dimensional semantic event and/or multi-dimensional semantic event in the simulation test script, to determine the emotional state expression of the target test object, and determine the fraud-prone type and/or fraud risk level of the target test object according to that emotional state expression. Specifically, the processing unit 2 may analyze, according to preset algorithm rules in the system, the association between the identity type of the target test object, the simulation test script corresponding to that identity type, and the behavior features and/or expression information of the target test object on each simulation test script, so as to determine the fraud-prone type and/or fraud risk level of the target test object. In particular, a fraud-prone type may be understood as any one or a combination of fraud script types or script scenes, such as online order-brushing scams, online investment scams, and the like. The fraud risk level includes, for example, the safety level, the primary risk level, the secondary risk level, the tertiary risk level, and the high risk level.
According to a preferred embodiment, the management unit 4 can output anti-fraud propaganda information matching the fraud-prone type or fraud risk level of each target test object, to help the target test object guard against similar fraud techniques. Specifically, the management unit 4 may send the target test object its test results for the various types of simulation test scripts, including informing the target test object of the evaluation result of its fraud-prone type and sending it anti-fraud teaching and propaganda information for the corresponding fraud type or fraud scenario. In particular, the anti-fraud teaching and propaganda information can be picture, audio-visual, and text information contained in the system material library.
According to a preferred embodiment, as shown in fig. 2, the statistical unit 1 may comprise an acquisition submodule 11 and a determination submodule 12. In particular, the acquisition submodule 11 may interface with at least one third-party platform for obtaining the comprehensive attribute data of the target test object. The determination submodule 12 is configured to determine the corresponding identity type according to the comprehensive attribute data of each target test object.
According to a preferred embodiment, the processing unit 2 may comprise a creation sub-module 20, as shown in fig. 2. Specifically, the creation sub-module 20 is configured to create a task execution flow corresponding to each simulation test script, and a plurality of test nodes may be set up in the task execution flow. Further, the creation sub-module 20 may establish the scenario-information delivery tasks and their delivery order according to the communication channel through which the simulation test script is accessed. For each piece of scenario information, the corresponding test message may be added automatically or manually, and a program matching each test message is embedded automatically, thereby completing the generation of the whole task execution flow.
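By way of illustration only, a task execution flow with ordered test nodes might be represented as follows. The node fields, channel names, and example messages are assumptions introduced for the sketch rather than a disclosed data format.

from dataclasses import dataclass, field

@dataclass
class TestNode:
    """One test link in a task execution flow (fields are illustrative assumptions)."""
    order: int          # delivery order within the flow
    channel: str        # "voice", "sms" or "email"
    message: str        # scenario information / test message sent to the test object
    event_kind: str     # "two_dimensional" or "multi_dimensional"

@dataclass
class TaskExecutionFlow:
    script_id: str
    nodes: list[TestNode] = field(default_factory=list)

    def add_node(self, node: TestNode) -> None:
        self.nodes.append(node)
        self.nodes.sort(key=lambda n: n.order)   # keep the delivery order

# Example: a two-node SMS script for an order-brushing scenario (illustrative content).
flow = TaskExecutionFlow(script_id="order_brushing_demo")
flow.add_node(TestNode(1, "sms", "Part-time job: like products and earn commission. Reply YES to join.", "multi_dimensional"))
flow.add_node(TestNode(2, "sms", "Pay a 200 yuan deposit via this link to unlock higher-paying tasks.", "two_dimensional"))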
According to a preferred embodiment, as shown in FIG. 2, the anti-fraud propaganda system may further comprise an audio communication unit 5 and a link unit 7. The task execution flow created by the creation sub-module 20 according to the fraud-prone type of the target test object may be transmitted to the audio communication unit 5 and the link unit 7, respectively, so that the task execution flow of the simulation test script is delivered to the corresponding target test object through the audio communication unit 5 and the link unit 7.
According to a preferred embodiment, the audio communication unit 5 may transmit the task execution flow of the simulation test script input by the creation sub-module 20 to the target test object. Specifically, the audio communication unit 5 may generate and execute corresponding voice scenario information in accordance with the task execution flow of the simulation test script.
Further, the present invention also includes an information recognition unit 6 in communication with the audio communication unit 5. The audio communication unit 5 may generate a plurality of voice scripts based on the task execution flow of the simulation test script and sequentially send each test node in the task execution flow to the corresponding target test object; it may also dial out through the voice gateway, interact with the information recognition unit 6, and conduct a voice question-and-answer with the target test object, thereby completing the simulation test of the voice script. Further, the information recognition unit 6 may analyze the feedback voice information of the target test object and generate a corresponding interactive statement according to the analysis result, that is, a voice recognition result in text form, which may represent the expression information of the user. The information recognition unit 6 may send the voice recognition result to the monitoring unit 3, so that the monitoring unit 3 determines the feedback behavior information corresponding to each target test object.
According to a preferred embodiment, the link unit 7 may transmit the task execution flow of the simulation test script input by the creation sub-module 20 to the target test object. Specifically, the link unit 7 may generate and execute corresponding text scenario information (including, for example, short message scenario information, WeChat scenario information, or email scenario information) according to the task execution flow of the simulation test script. The text scenario information may include a plurality of pieces of link information. Further, the link unit 7 may generate a plurality of text scripts based on the task execution flow of the simulation test script, sequentially send each test node in the task execution flow to the corresponding target test object, and receive replies through the links between the test nodes and the target test object, thereby completing the text script test.
According to a preferred embodiment, as shown in fig. 2, the linking unit 7 may include a first linking submodule 71 and a second linking submodule 72. Specifically, the first linking submodule 71 may be a short message linking module. The second linking submodule 72 may be a mail linking module.
According to a preferred embodiment, the first linking sub-module 71 may receive the task execution flow corresponding to the short message scenario information in the simulation test script sent by the creation sub-module 20, and send the test information of each test node in the task execution flow to the target test object, so as to interact with the target test object in a short-message fraud scene while collecting the behavior data of each target test object. Further, if the target test object performs an operation of opening a linked website in the short message scenario information, that operation is collected by the monitoring unit 3, and the monitoring unit 3 determines the feedback behavior information of each target test object.
According to a preferred embodiment, the second linking sub-module 72 may receive the task execution flow corresponding to the email scenario information in the simulation test script sent by the creation sub-module 20, and send the test information of each test node in the task execution flow to the target test object, so as to interact with the target test object in an email fraud scene while collecting the behavior data of each target test object. Further, if the target test object performs an operation of opening a linked website in the email scenario information, that operation is collected by the monitoring unit 3, and the monitoring unit 3 determines the feedback behavior information of each target test object. It should be understood that the link unit 7 may also comprise other types of modules.
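By way of illustration only, the interplay between the link unit and the monitoring unit described above can be sketched as follows. The class interfaces and method names are assumptions based on this description, not a disclosed implementation, and real short message or mail gateways would sit behind the sender callables.

class MonitoringUnit:
    """Collects feedback behaviour per target test object (interface assumed)."""
    def __init__(self):
        self.feedback = {}

    def record(self, subject_id: str, node_order: int, action: str) -> None:
        self.feedback.setdefault(subject_id, []).append((node_order, action))

class LinkUnit:
    """Dispatches text scenario nodes and reports link clicks to the monitoring unit."""
    def __init__(self, monitoring: MonitoringUnit, senders: dict):
        self.monitoring = monitoring
        self.senders = senders           # e.g. {"sms": send_sms, "email": send_email}

    def dispatch(self, subject_id: str, flow) -> None:
        for node in flow.nodes:          # nodes are already in delivery order
            self.senders[node.channel](subject_id, node.message)

    def on_link_opened(self, subject_id: str, node_order: int) -> None:
        # Called when the test object opens a linked website in the test message.
        self.monitoring.record(subject_id, node_order, "opened_link")

# Usage with stub senders (real gateways would stand behind these callables):
monitor = MonitoringUnit()
link_unit = LinkUnit(monitor, {"sms": lambda sid, msg: print(f"SMS to {sid}: {msg}"),
                               "email": lambda sid, msg: print(f"Email to {sid}: {msg}")})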
According to a preferred embodiment, the monitoring unit 3 may receive behavior data of the target test objects for the simulation test scenarios. Specifically, the monitoring unit 3 may receive the voice recognition results on the voice scenario of each target test object transmitted by the information recognition unit 6. The monitoring unit 3 may receive the operation behavior data about the short message scenario of each target test object sent by the first linking sub-module 71. The monitoring unit 3 may receive the operation behavior data on the mail scenario of each target test object transmitted by the second link sub-module 72.
According to a preferred embodiment, after the monitoring unit 3 receives the behavior data or feedback information of each target test object for each simulation test script, the behavior data or feedback information may be associated with the identity type that the statistical unit 1 determined from the target test object's data, so as to form the fraud-prone-type attribute information of the target test object. Further, the processing unit 2 may perform an integrated evaluation of the attribute information from each type of simulation test script to comprehensively determine the fraud-prone type of the target test object.
According to a preferred embodiment, the invention also relates to an intelligent platform applied to anti-fraud knowledge propaganda. Further, the intelligent platform may comprise an intranet terminal, an operation terminal, a server, and a gateway.
Specifically, the server may include the above-described statistical unit 1, processing unit 2, monitoring unit 3, management unit 4, audio communication unit 5, information recognition unit 6, and link unit 7. In particular, the statistical unit 1, the processing unit 2, the management unit 4, the audio communication unit 5, the information recognition unit 6, and the first linking submodule 71 of the link unit 7 may constitute the private-network part of the server, which can interface with the anti-fraud systems of law enforcement and supervisory authorities as well as with the voice gateway in the audio communication unit 5 and/or the short message gateway in the first linking submodule 71. The monitoring unit 3 and the second linking submodule 72 of the link unit 7 may form the internet part of the server, which may be used to obtain system feedback data in real time.
According to a preferred embodiment, the operation terminal is configured to provide an anti-fraud propaganda center, a material knowledge base, and the like. Specifically, the anti-fraud propaganda center can, in combination with the simulation test scripts, provide practical, scenario-based, and experiential propaganda for communities, addressing problems such as the monotony of existing network-security and anti-fraud propaganda formats and their limited training effect. In particular, in the anti-fraud propaganda center interface, propaganda personnel can select different anti-fraud propaganda objects or target test objects according to different labels, for example according to industry, job, gender, age, and the like. Furthermore, each label can cover various anti-fraud propaganda materials; after selecting a label, a user can choose at least one material suitable for anti-fraud propaganda, freely combine and match the selected materials and content, and generate corresponding courseware with one click. Preferably, the labels are dynamically updated according to the latest fraud trends within a predetermined period.
According to a preferred embodiment, the operation terminal can also be used for anti-fraud information push. Specifically, the operation terminal can push fraud trends, fraud techniques, fraud cases, and the like within its jurisdiction at fixed times every day. By regularly pushing anti-fraud information, propaganda can be delivered precisely, and early warning and handling of fraud incidents affecting people in the jurisdiction can be achieved.
Example 3
According to a preferred embodiment, the present invention provides an electronic device 100 for anti-fraud knowledge propaganda. Specifically, as shown in fig. 3, the electronic device 100 may include: one or more processors 110, a memory 120, and a communication bus 130 for connecting at least the processor 110 and the memory 120.
According to a preferred embodiment, the memory 120 is configured to store the computer-readable programs that implement the various functions in the embodiments of the present invention.
According to a preferred embodiment, the processor 110 is configured to execute the computer-readable programs stored in the memory 120, so as to implement various functional applications and data processing, in particular the anti-fraud propaganda method of the present embodiment.
According to a preferred embodiment, the Processor 110 includes, but is not limited to, a CPU (Central Processing Unit), an MPU (Micro Processor Unit), an MCU (Micro Control Unit), and an SOC (System on Chip).
According to a preferred embodiment, memory 120 includes, but is not limited to, volatile memory (e.g., DRAM or SRAM) and non-volatile memory (e.g., FLASH, optical disks, floppy disks, mechanical hard disks, etc.).
In accordance with a preferred embodiment, communication bus 130 includes, but is not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
According to a preferred embodiment, as shown in FIG. 3, the electronic device 100 may further comprise at least one communication interface 140. Specifically, the electronic device 100 may be communicatively coupled to at least one external device through the communication interface 140. In addition, the electronic device 100 may also be communicatively coupled to at least one external network via a network adapter. The network adapter is communicatively coupled to a communication bus 130.
Example 4
The present invention also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform the anti-fraud propaganda method of Embodiment 1.
According to a preferred embodiment, the computer storage media of embodiment 4 of the present invention can take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium includes, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing.
More specific examples of computer readable storage media according to a preferred embodiment include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
According to a preferred embodiment, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
According to a preferred embodiment, program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
In accordance with a preferred embodiment, computer program code for carrying out operations of embodiments of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Python, Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the remote computer case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It should be noted that the above-mentioned embodiments are exemplary, and those skilled in the art, having the benefit of this disclosure, may devise various solutions that fall within the scope of this disclosure and of the invention. It should also be understood that the present specification and figures are illustrative only and are not intended to limit the claims. The scope of the invention is defined by the claims and their equivalents. The present description contains several inventive concepts, such as those introduced by "preferably", "according to a preferred embodiment", or "optionally", each indicating that the respective paragraph discloses a separate concept; the applicant reserves the right to file divisional applications according to each inventive concept.

Claims (10)

1. An anti-fraud promotion method, comprising:
providing at least one simulated test scenario for the target test subject;
acquiring behavior feature data and/or facial expression feature data of a target test object for at least one two-dimensional semantic event and/or multi-dimensional semantic event in the simulation test script, wherein the two-dimensional semantic event represents a two-dimensional problem with an objective determination result, and the multi-dimensional semantic event represents a multi-dimensional problem with a subjective uncertain result;
determining emotional state expression of a target test object for the two-dimensional semantic event and/or the multi-dimensional semantic event based on feature analysis extraction of behavioral feature data and/or facial expression feature data of the target test object;
determining the cheating type and/or cheating risk level of the target test object according to the preset weight of the emotional state expression corresponding to the two-dimensional semantic event and the multi-dimensional semantic event respectively;
providing corresponding fraud prevention promotional information for the target test object in response to the fraud type and/or fraud risk level,
wherein, when one simulation test script comprises a plurality of two-dimensional semantic events and multi-dimensional semantic events, the preset weights are set such that the preset weight occupied by the two-dimensional semantic events is greater than the preset weight occupied by the multi-dimensional semantic events.
2. An anti-fraud promotion method, comprising:
providing at least one simulated test scenario for the target test subject;
acquiring sound characteristic data of a target test object aiming at least one two-dimensional semantic event and/or multi-dimensional semantic event in the simulation test script, wherein the two-dimensional semantic event represents a two-dimensional problem with an objective determination result, and the multi-dimensional semantic event represents a multi-dimensional problem with a subjective uncertain result;
determining emotional state expression of a target test object for the two-dimensional semantic event and/or the multi-dimensional semantic event based on feature analysis extraction of sound feature data of the target test object;
determining the cheating type and/or cheating risk level of the target test object according to the preset weight of the emotional state expression corresponding to the two-dimensional semantic event and the multi-dimensional semantic event respectively;
providing corresponding fraud prevention promotional information for the target test object in response to the fraud type and/or fraud risk level,
wherein, when one simulation test script comprises a plurality of two-dimensional semantic events and multi-dimensional semantic events, the preset weights are set such that the preset weight occupied by the two-dimensional semantic events is greater than the preset weight occupied by the multi-dimensional semantic events.
3. The anti-fraud method of claim 2, further comprising:
acquiring the response duration of the target test object aiming at the feedback behavior of the two-dimensional semantic event and/or the multi-dimensional semantic event;
and determining the cheat type and/or cheat risk level of the target test object according to the preset weight of the response duration corresponding to the two-dimensional semantic event and the multi-dimensional semantic event respectively.
4. The anti-fraud method of claim 2, wherein, before said providing the target test object with the at least one simulated test script, the method comprises:
acquiring comprehensive attribute data of a target test object from at least one third-party platform;
determining the identity type of the target test object according to the comprehensive attribute data;
determining at least one type of the simulation test scenario based on the identity type.
5. The anti-fraud method of claim 4, wherein said determining at least one type of the simulated test scenario based on the identity type comprises:
and determining at least one simulation test script according to the matching corresponding relation between the identity type of the target test object and the simulation test script.
6. The anti-fraud method of claim 5, wherein said determining at least one type of the simulated test scenario based on the identity type further comprises:
and creating a task execution stream corresponding to at least one simulation test script, wherein the task execution stream comprises at least one test link corresponding to the two-dimensional semantic event and/or the multi-dimensional semantic event.
7. An anti-fraud promotion system, comprising:
the statistical unit (1) is configured to acquire comprehensive attribute data of the target test object and determine the identity type of the target test object based on the comprehensive attribute data;
a processing unit (2) configured to provide at least one mock test scenario to the target test subject according to its identity type;
a monitoring unit (3) configured to acquire behavioral characteristic data and/or facial expression characteristic data of a target test object aiming at least one two-dimensional semantic event and/or multi-dimensional semantic event in the simulation test scenario, so that the processing unit (2) determines an emotional state expression of the target test object according to the behavioral characteristic data and/or facial expression characteristic data and determines an easy-cheating type and/or an easy-cheating risk level corresponding to the emotional state expression according to the emotional state expression, wherein the two-dimensional semantic event represents a two-dimensional problem with objective determination results, and the multi-dimensional semantic event represents a multi-dimensional problem with subjective uncertain results;
a management unit (4) configured to provide corresponding anti-fraud promotional information for said target test object in response to said fraud type and/or fraud risk level,
wherein, when one simulation test script comprises a plurality of two-dimensional semantic events and multi-dimensional semantic events, the preset weights are set such that the preset weight occupied by the two-dimensional semantic events is greater than the preset weight occupied by the multi-dimensional semantic events.
8. An anti-fraud promotion system, comprising:
the statistical unit (1) is configured to acquire comprehensive attribute data of the target test object and determine the identity type of the target test object based on the comprehensive attribute data;
a processing unit (2) configured to provide at least one mock test script to the target test object according to its identity type;
a monitoring unit (3) configured to acquire sound feature data of a target test object aiming at least one two-dimensional semantic event and/or multi-dimensional semantic event in the simulation test scenario, so that the processing unit (2) determines an emotional state expression of the target test object according to the sound feature data, and determines an easy-cheating type and/or an easy-cheating risk level corresponding to the emotional state expression according to the emotional state expression, wherein the two-dimensional semantic event represents a two-dimensional problem with objective determination results, and the multi-dimensional semantic event represents a multi-dimensional problem with subjective uncertain results;
a management unit (4) configured to provide the target test object with corresponding anti-fraud promotional information in response to the fraud type and/or fraud risk level,
wherein, when one simulation test script comprises a plurality of two-dimensional semantic events and multi-dimensional semantic events, the preset weights are set such that the preset weight occupied by the two-dimensional semantic events is greater than the preset weight occupied by the multi-dimensional semantic events.
9. An electronic device, comprising:
one or more processors (110);
a memory (120) for storing one or more computer programs;
wherein the one or more computer programs, when executed by said one or more processors (110), cause said one or more processors (110) to implement the anti-fraud method of any one of claims 1-6.
10. A storage medium containing computer-executable instructions which, when executed by a computer processor, perform the anti-fraud method of any one of claims 1-6.
CN202210902155.9A 2022-07-29 2022-07-29 Anti-fraud propaganda method, system, electronic equipment and storage medium Active CN114971658B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211600462.8A CN115829592A (en) 2022-07-29 2022-07-29 Anti-fraud propaganda method and system thereof
CN202210902155.9A CN114971658B (en) 2022-07-29 2022-07-29 Anti-fraud propaganda method, system, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210902155.9A CN114971658B (en) 2022-07-29 2022-07-29 Anti-fraud propaganda method, system, electronic equipment and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202211600462.8A Division CN115829592A (en) 2022-07-29 2022-07-29 Anti-fraud propaganda method and system thereof

Publications (2)

Publication Number Publication Date
CN114971658A CN114971658A (en) 2022-08-30
CN114971658B true CN114971658B (en) 2022-11-04

Family

ID=82970220

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210902155.9A Active CN114971658B (en) 2022-07-29 2022-07-29 Anti-fraud propaganda method, system, electronic equipment and storage medium
CN202211600462.8A Withdrawn CN115829592A (en) 2022-07-29 2022-07-29 Anti-fraud propaganda method and system thereof

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202211600462.8A Withdrawn CN115829592A (en) 2022-07-29 2022-07-29 Anti-fraud propaganda method and system thereof

Country Status (1)

Country Link
CN (2) CN114971658B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117456981B (en) * 2023-12-25 2024-03-05 北京秒信科技有限公司 Real-time voice wind control system based on RNN voice recognition

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113688221A (en) * 2021-09-08 2021-11-23 中国平安人寿保险股份有限公司 Model-based dialect recommendation method and device, computer equipment and storage medium
CN113726942A (en) * 2021-08-31 2021-11-30 深圳壹账通智能科技有限公司 Intelligent telephone answering method, system, medium and electronic terminal
CN114120425A (en) * 2021-12-08 2022-03-01 云知声智能科技股份有限公司 Emotion recognition method and device, electronic equipment and storage medium
CN114119030A (en) * 2021-11-10 2022-03-01 恒安嘉新(北京)科技股份公司 Fraud prevention method and device, electronic equipment and storage medium
CN114117207A (en) * 2021-11-10 2022-03-01 恒安嘉新(北京)科技股份公司 System, method, electronic device and storage medium for preventing fraud

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008064431A1 (en) * 2006-12-01 2008-06-05 Latrobe University Method and system for monitoring emotional state changes
CN108764010A (en) * 2018-03-23 2018-11-06 姜涵予 Emotional state determines method and device
US20220188837A1 (en) * 2020-12-10 2022-06-16 Jpmorgan Chase Bank, N.A. Systems and methods for multi-agent based fraud detection
CN114420133A (en) * 2022-02-16 2022-04-29 平安科技(深圳)有限公司 Fraudulent voice detection method and device, computer equipment and readable storage medium
CN114745720A (en) * 2022-03-23 2022-07-12 中国人民解放军战略支援部队信息工程大学 Voice-variant fraud telephone detection method and device

Also Published As

Publication number Publication date
CN114971658A (en) 2022-08-30
CN115829592A (en) 2023-03-21

Similar Documents

Publication Publication Date Title
Fitzpatrick et al. Automatic detection of verbal deception
CN109256136B (en) Voice recognition method and device
CN109960723B (en) Interaction system and method for psychological robot
CN108536681A (en) Intelligent answer method, apparatus, equipment and storage medium based on sentiment analysis
TW200837717A (en) Apparatus and method to reduce recognization errors through context relations among dialogue turns
CN110610705A (en) Voice interaction prompter based on artificial intelligence
CN110895568B (en) Method and system for processing court trial records
Hijjawi et al. ArabChat: An arabic conversational agent
CN109462603A (en) Voiceprint authentication method, equipment, storage medium and device based on blind Detecting
CN113343058B (en) Voice session supervision method, device, computer equipment and storage medium
CN113035232B (en) Psychological state prediction system, method and device based on voice recognition
WO2020252982A1 (en) Text sentiment analysis method and apparatus, electronic device, and non-volatile computer readable storage medium
CN108109445A (en) Teaching class feelings monitoring method
CN114971658B (en) Anti-fraud propaganda method, system, electronic equipment and storage medium
KR20200107501A (en) Device and Method of Scoring Emotion for Psychological Consultation
CN108109446A (en) Teaching class feelings monitoring system
US20210225364A1 (en) Method and system for speech effectiveness evaluation and enhancement
Chae et al. Dialogue chain-of-thought distillation for commonsense-aware conversational agents
CN114334163A (en) Depression diagnosis dialogue data set generation method, electronic device and storage medium
CN112669936A (en) Social network depression detection method based on texts and images
Hijjawi et al. The enhanced arabchat: An arabic conversational agent
JP7273563B2 (en) Information processing device, information processing method, and program
Sruti et al. Crime awareness and registration system using chatbot
Dove Predicting individual differences in vulnerability to fraud
CN115294635A (en) System, method and electronic equipment applied to anti-fraud knowledge propaganda

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant