CN117422547B - Auditing device and method based on intelligent dialogue system and micro expression recognition - Google Patents

Auditing device and method based on intelligent dialogue system and micro expression recognition

Info

Publication number
CN117422547B
CN117422547B (application CN202311734324.3A)
Authority
CN
China
Prior art keywords
target user
voice information
score
credit
micro
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311734324.3A
Other languages
Chinese (zh)
Other versions
CN117422547A (en)
Inventor
储阳
周雪平
邓日晓
聂璇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Sanxiang Bank Co Ltd
Original Assignee
Hunan Sanxiang Bank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Sanxiang Bank Co Ltd filed Critical Hunan Sanxiang Bank Co Ltd
Priority to CN202311734324.3A priority Critical patent/CN117422547B/en
Publication of CN117422547A publication Critical patent/CN117422547A/en
Application granted granted Critical
Publication of CN117422547B publication Critical patent/CN117422547B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/03 Credit; Loans; Processing thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/10 Office automation; Time management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/26 Speech to text systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention relates to an auditing device and method based on an intelligent dialogue system and micro-expression recognition, in the technical field of artificial intelligence. The auditing device comprises an acquisition module for acquiring voice information and facial image data of a target user; a storage module for storing data; a processing module for processing data; a determining module for determining the credibility score and the emotion change score of the target user; a judging module for sequentially judging first abnormal voice information and second abnormal voice information; a calculation module for calculating the credit score of the target user; and an evaluation module for evaluating the credit grade of the target user according to the credit score. By sequentially judging abnormal voice information and scoring the target user's credit with the influence factors that different categories of abnormal voice information exert on the credibility score and the emotion change score, the invention obtains an evaluation result that can replace manual auditing, improving both the accuracy of the auditing conclusion and the auditing efficiency.

Description

Auditing device and method based on intelligent dialogue system and micro expression recognition
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an auditing device and method based on an intelligent dialogue system and micro-expression recognition.
Background
On-site investigation is required in small and micro-credit auditing, but loan officers are in short supply and their practices are hard to standardize. In the ChatGPT era, how to judge the authenticity of answers given in an intelligent dialogue by means of intelligent technology is a major challenge.
A financial institution's operation of a small and micro-credit business is shaped by three factors: scale, efficiency and risk. Since the birth of internet finance, customer credit investigation based on big data has greatly improved lenders' credit-granting and disbursement efficiency, allowing the disbursement scale to expand rapidly in a short time while the defect rate is well controlled and loan quality is ensured. However, problems remain: big data coverage is limited and its dimensions are single, the small and micro-credit products of each lender carry small credit lines and are easily homogenized, and the cost of using the data keeps rising. As credit interest rates fall and risk assets are exposed, lending institutions have an increasingly strong need to expand the dimensions of customer information acquisition and to discern the authenticity of the information customers provide.
Patent document CN108765131A discloses a micro-expression-based credit auditing method, device, terminal and readable storage medium. The auditing method comprises: acquiring a credit micro-expression sample set and constructing a micro-expression fraud recognition model from it; when a credit auditing instruction is received, acquiring an original video stream of the applicant's credit question-and-answer session, the stream containing the applicant's micro-expressions during the session; inputting the original video stream into the micro-expression fraud recognition model for micro-expression recognition to obtain a micro-expression recognition conclusion; and generating corresponding credit decision suggestion information from that conclusion. That invention analyzes the credit applicant's micro-expressions with the fraud recognition model to determine the applicant's real inner state and judge whether the applicant is lying, thereby detecting fraud, reducing the workload of manual auditing, and helping to improve the efficiency and accuracy of credit auditing.
However, the prior-art auditing method draws its credit auditing conclusion from micro-expression recognition alone and is not accurate enough, and the auditing device cannot replace manual auditing, so auditing efficiency remains low.
Disclosure of Invention
Therefore, the invention provides an auditing device and method based on an intelligent dialogue system and micro-expression recognition, which are used for solving the problems of inaccurate auditing conclusion and low auditing efficiency in the prior art.
In order to achieve the above object, according to one aspect of the present invention, there is provided an auditing apparatus based on an intelligent dialogue system and micro-expression recognition, the apparatus comprising:
the acquisition module is used for acquiring voice information of a target user in the interaction process and facial image data when the voice information is output;
the storage module is connected with the acquisition module and used for storing the voice information and the facial image data acquired by the acquisition module, and storing preset knowledge base text data and a preset micro expression change data model;
the processing module is connected with the storage module and used for carrying out data processing on the voice information and the facial image data; the processing module comprises: the voice recognition unit is used for converting the voice information of the target user into voice text data; the micro-expression recognition unit is used for carrying out time sequence analysis on the facial image data of the target user and extracting micro-expression features;
the determining module is respectively connected with the storage module and the processing module and is used for determining the credibility score of the target user according to the comparison result of the semantic content of the voice text data corresponding to the voice information and the standard semantics stored in the preset knowledge base, calculating the emotion change index according to the change curves of a plurality of facial features in the facial image data in the acquisition time, and comparing the emotion change index of the target user with the preset standard emotion change index to determine the emotion change score of the target user;
The judging module is connected with the storage module and used for judging first abnormal voice information according to the volume and judging second abnormal voice information according to the volume change rate of the first abnormal voice information;
the calculation module is respectively connected with the determination module and the judgment module and is used for calculating a first credit score according to the credibility score and the emotion change score corresponding to the first abnormal voice information, calculating a second credit score according to the credibility score and the emotion change score corresponding to the second abnormal voice information, calculating a third credit score according to the credibility score and the emotion change score corresponding to the normal voice information, and calculating the credit score of the target user from the first credit score, the second credit score and the third credit score by a weighted average method;
and the evaluation module is connected with the calculation module and used for evaluating the credit grade of the target user according to the credit score.
Further, the determining module reads the voice text data of the target user in the processing module and classifies it according to semantic content; the classified voice text data are the first voice text data Dh1, the second voice text data Dh2, …, and the nth voice text data Dhn of the target user;
the determining module reads the knowledge base text data preset in the storage module and classifies it according to standard semantics; the classified knowledge base text data are the first-class standard knowledge base text data Dh10, the second-class standard knowledge base text data Dh20, …, and the nth-class standard knowledge base text data Dhn0;
a voice text data credibility calculation model ki = Dhi/Dhi0 is set in the determining module, and the determining module calculates the voice text data credibility of the target user as k = Σ(Dhi/Dhi0) (where i = 1, 2, …, n), with k ∈ [0, n];
the determining module determines the credibility level Kx of the target user according to the comparison result between the semantic content of the voice text data corresponding to the voice information and the standard semantics stored in the preset knowledge base:
when 0 ≤ k < 0.6n, the determining module determines that the credibility level of the voice text data of the target user is low, marked Kx3;
when 0.6n ≤ k < 0.8n, the determining module determines that the credibility level of the voice text data of the target user is medium, marked Kx2;
when 0.8n ≤ k ≤ n, the determining module determines that the credibility level of the voice text data of the target user is high, marked Kx1;
each credibility level corresponds to a credibility score interval: Kx1 corresponds to the score interval SKx1 = [80, 100], Kx2 to SKx2 = [60, 80), and Kx3 to SKx3 = [0, 60); the credibility score SKx is taken from SKx1, SKx2 or SKx3 accordingly.
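A minimal sketch of this credibility calculation, assuming hypothetical function and variable names (the patent does not specify how a single score is chosen inside each interval, so the sketch returns the interval itself):

```python
def credibility_score(match_amounts, standard_amounts):
    """Compute k = sum(Dhi / Dhi0) over the n semantic classes and map it to
    the credibility level Kx and its score interval SKx.

    match_amounts[i]    -> Dhi  (user's voice text data matching class i)
    standard_amounts[i] -> Dhi0 (standard knowledge-base text data of class i)
    """
    n = len(standard_amounts)
    k = sum(d / d0 for d, d0 in zip(match_amounts, standard_amounts))  # k in [0, n]
    if k >= 0.8 * n:
        return k, "Kx1", (80, 100)  # high credibility,   SKx1 = [80, 100]
    if k >= 0.6 * n:
        return k, "Kx2", (60, 80)   # medium credibility, SKx2 = [60, 80)
    return k, "Kx3", (0, 60)        # low credibility,    SKx3 = [0, 60)
```

For example, a user whose answers fully match the standard text of every class gets k = n and level Kx1.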
Further, the facial image data in the storage module are facial micro-expression data of the target user arranged in time order, and the processing module reads the facial image data from the storage module and classifies them by facial feature into eye micro-expression data Wyi(t), nose micro-expression data Wbi(t) and mouth micro-expression data Wzi(t);
the micro-expression change data models preset in the storage module are the eye micro-expression data model Wyi0(t), the nose micro-expression data model Wbi0(t) and the mouth micro-expression data model Wzi0(t);
the determining module reads the classified facial image data of the target user and performs dispersion analysis, generating the eye micro-expression change curve Wy(t), the nose micro-expression change curve Wb(t) and the mouth micro-expression change curve Wz(t);
the determining module reads the micro-expression change data models preset in the storage module and generates the eye micro-expression change standard curve Wy0(t), the nose micro-expression change standard curve Wb0(t) and the mouth micro-expression change standard curve Wz0(t);
the determining module calculates the average difference of the eye micro-expression change curve ΔWy = |Wy(t) − Wy0(t)|/(t − t0), the average difference of the nose micro-expression change curve ΔWb = |Wb(t) − Wb0(t)|/(t − t0), and the average difference of the mouth micro-expression change curve ΔWz = |Wz(t) − Wz0(t)|/(t − t0), where t0 is the initial time of acquiring the image data and t is the actual time of acquiring the image data;
and the determining module calculates the emotion change index of the target user as Q = ΔWy + ΔWb + ΔWz.
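Read as sampled curves, the average differences and the emotion change index Q can be sketched as follows; the names are hypothetical, and the patent's ΔW formula is interpreted here as the accumulated absolute curve difference normalized by the acquisition window t − t0:

```python
def region_deviation(observed, standard, t0, t):
    """ΔW = |W(t) − W0(t)| / (t − t0): accumulated absolute difference between
    an observed micro-expression curve and its standard curve, normalized by
    the acquisition window length (curves given as equal-length sample lists)."""
    return sum(abs(w - w0) for w, w0 in zip(observed, standard)) / (t - t0)

def emotion_change_index(eye, eye0, nose, nose0, mouth, mouth0, t0, t):
    """Q = ΔWy + ΔWb + ΔWz over the eye, nose and mouth regions."""
    return (region_deviation(eye, eye0, t0, t)
            + region_deviation(nose, nose0, t0, t)
            + region_deviation(mouth, mouth0, t0, t))
```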
Further, a standard emotion change index Q0 is set in the determining module, and the determining module determines the emotion change level of the target user, marked Qx:
when 0 ≤ Q/Q0 < 1, the determining module determines that the emotion change of the target user is small, marked Qx1;
when 1 ≤ Q/Q0 < 2, the determining module determines that the emotion change of the target user is moderate, marked Qx2;
when Q/Q0 ≥ 2, the determining module determines that the emotion change of the target user is large, marked Qx3;
each emotion change level corresponds to an emotion change score interval: Qx1 corresponds to the score interval SQx1 = [80, 100], Qx2 to SQx2 = [60, 80), and Qx3 to SQx3 = [0, 60); the emotion change score SQx is taken from SQx1, SQx2 or SQx3 accordingly.
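The level mapping above can be sketched as follows (function name and the returned interval representation are hypothetical):

```python
def emotion_change_level(q, q0):
    """Map the emotion change index Q against the standard index Q0 to the
    emotion change level Qx and its score interval SQx."""
    ratio = q / q0
    if ratio < 1:
        return "Qx1", (80, 100)  # small change,    SQx1 = [80, 100]
    if ratio < 2:
        return "Qx2", (60, 80)   # moderate change, SQx2 = [60, 80)
    return "Qx3", (0, 60)        # large change,    SQx3 = [0, 60)
```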
Further, the voice information in the storage module is voice data of the target user arranged according to a time sequence, and the judging module receives the voice information of the storage module and judges abnormal voice information, wherein the abnormal voice information comprises first abnormal voice information and second abnormal voice information;
the judging module comprises a first judging unit and a second judging unit, wherein,
the first judging unit is configured to judge first abnormal voice information according to volume, and includes:
the first judging unit presets a first standard volume Fb10 and a second standard volume Fb20, where Fb10 > Fb20 > 0;
when the volume Fb of the voice information satisfies Fb20 ≤ Fb ≤ Fb10, the first judging unit judges the voice information to be normal voice information;
when Fb > Fb10 or Fb < Fb20, the first judging unit judges the voice information to be first abnormal voice information;
The second judging unit is configured to judge second abnormal voice information according to a volume change rate of the first abnormal voice information, and includes:
a volume change rate calculation model ΔFb(ti) = [Fb(ti) − Fb(t(i−1))]/[ti − t(i−1)] is preset in the second judging unit, where Fb(ti) is the volume of the first abnormal voice information at the ith moment and Fb(t(i−1)) is its volume at the (i−1)th moment;
and a standard volume change rate ΔFb0 is preset in the second judging unit; when ΔFb(ti) > ΔFb0, the second judging unit judges the voice information at the ith moment to be second abnormal voice information.
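The two-stage screening can be sketched as follows, assuming the speech arrives as (time, volume) samples, which is a simplification of the patent's per-segment judgment; all names are hypothetical:

```python
def classify_speech(samples, fb20, fb10, dfb0):
    """Two-stage abnormal-speech screening.

    samples: list of (t, Fb) pairs in time order.
    fb20, fb10: second and first standard volumes, 0 < fb20 < fb10.
    dfb0: standard volume change rate.
    Stage 1 flags volumes outside [fb20, fb10] as first abnormal; stage 2
    promotes a flagged sample to second abnormal when its volume change rate
    relative to the previous sample exceeds dfb0.
    """
    labels = []
    for i, (t, fb) in enumerate(samples):
        if fb20 <= fb <= fb10:
            labels.append("normal")
            continue
        label = "first_abnormal"
        if i > 0:
            t_prev, fb_prev = samples[i - 1]
            if (fb - fb_prev) / (t - t_prev) > dfb0:  # ΔFb(ti) > ΔFb0
                label = "second_abnormal"
        labels.append(label)
    return labels
```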
Further, a credit score calculation model is preset in the calculation module
wherein S is the credit score of the target user, S1 is the first credit score, S2 is the second credit score, S3 is the third credit score, α1 is the first credibility score adjustment coefficient corresponding to the credibility score of the first abnormal voice information, β1 is the first emotion change score adjustment coefficient corresponding to the emotion change score of the first abnormal voice information, α2 is the second credibility score adjustment coefficient corresponding to the credibility score of the second abnormal voice information, and β2 is the second emotion change score adjustment coefficient corresponding to the emotion change score of the second abnormal voice information, with 0 < α2 < α1 and 0 < β2 < β1 < 1.
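The credit score formula itself does not survive in this text, so the following is only one plausible reconstruction consistent with the stated coefficient constraints (0 < α2 < α1, 0 < β2 < β1 < 1): the adjustment coefficients damp the scores of the abnormal segments, and the three partial credit scores are combined by a weighted average. All names and default values are assumptions, not the patent's actual formula:

```python
def credit_score(sk1, sq1, sk2, sq2, sk3, sq3,
                 a1=0.8, b1=0.8, a2=0.5, b2=0.5,
                 weights=(1 / 3, 1 / 3, 1 / 3)):
    """skN/sqN: credibility and emotion change scores for first abnormal (1),
    second abnormal (2) and normal (3) voice information; a*/b* stand in for
    the adjustment coefficients alpha/beta, 0 < a2 < a1 and 0 < b2 < b1 < 1."""
    s1 = (a1 * sk1 + b1 * sq1) / 2  # first credit score S1 (damped)
    s2 = (a2 * sk2 + b2 * sq2) / 2  # second credit score S2 (damped more)
    s3 = (sk3 + sq3) / 2            # third credit score S3 (undamped)
    return weights[0] * s1 + weights[1] * s2 + weights[2] * s3  # weighted average S
```

With perfect component scores of 100 everywhere, the damping of the abnormal segments pulls S below 80, which illustrates why the coefficients are ordered a2 < a1 and b2 < b1.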
Further, credit grades are preset in the evaluation module, comprising a first credit grade, a second credit grade and a third credit grade, whose corresponding credit score intervals are (80, 100], (60, 80] and [0, 60] respectively;
when the credit score of the target user S ∈ (80, 100], the evaluation module evaluates the credit grade of the target user as the first credit grade;
when S ∈ (60, 80], the evaluation module evaluates the credit grade of the target user as the second credit grade;
and when S ∈ [0, 60], the evaluation module evaluates the credit grade of the target user as the third credit grade.
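The grade mapping can be sketched as follows; the boundary handling (which grade receives a score of exactly 80 or 60) is an assumption, since the source intervals leave it ambiguous:

```python
def credit_grade(s):
    """Map credit score S in [0, 100] to the three preset credit grades,
    assuming the partition (80, 100] / (60, 80] / [0, 60]."""
    if s > 80:
        return "first"
    if s > 60:
        return "second"
    return "third"
```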
Further, the evaluation module generates corresponding auditing decision suggestion information according to the evaluation result.
On the other hand, the invention also provides an auditing method based on the intelligent dialogue system and the micro-expression recognition, which comprises the following steps:
S01, collecting voice information of a target user in the interaction process and facial image data when the voice information is output;
S02, storing the voice information and the facial image data, and storing preset knowledge base text data and a preset micro expression change data model;
S03, performing data processing on the voice information and the facial image data, comprising: converting the voice information of the target user into voice text data, performing time sequence analysis on the facial image data of the target user, and extracting micro-expression features;
S04, determining the credibility score of the voice text data and the emotion change score of the target user;
S05, judging first abnormal voice information according to the volume and judging second abnormal voice information according to the volume change rate of the first abnormal voice information;
S06, calculating a first credit score according to the credibility score and the emotion change score corresponding to the first abnormal voice information, calculating a second credit score according to the credibility score and the emotion change score corresponding to the second abnormal voice information, calculating a third credit score according to the credibility score and the emotion change score corresponding to the normal voice information, and calculating the credit score of the target user from the first credit score, the second credit score and the third credit score by a weighted average method;
S07, evaluating the credit rating of the target user according to the credit score.
Further, step S04 comprises:
S041, classifying the voice text data according to semantic content and calculating the credibility of the voice text data of the target user; determining the credibility score of the target user according to the comparison result between the calculated semantic content of the target user's voice text data and the standard semantics stored in the preset knowledge base;
S042, classifying the facial image data of the target user according to facial features, performing dispersion analysis on the classified facial image data, generating change curves of a plurality of facial features over the acquisition time, calculating the emotion change index, and comparing the emotion change index of the target user with the preset standard emotion change index to determine the emotion change score of the target user.
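Steps S04 to S07 can be strung together in a deliberately simplified end-to-end sketch. This is a hypothetical illustration only: interval midpoints stand in for the per-interval scores the patent leaves unspecified, and the abnormal-speech weighting of S05 and S06 is collapsed into a plain average:

```python
def level_score(ratio, low_cut, high_cut):
    # map a normalized ratio to the midpoint of its score interval
    if ratio < low_cut:
        return 30.0  # [0, 60)
    if ratio < high_cut:
        return 70.0  # [60, 80)
    return 90.0      # [80, 100]

def audit_pipeline(k, n, q, q0):
    sk = level_score(k / n, 0.6, 0.8)                # S04: credibility score from k
    r = q / q0                                        # S04: emotion change ratio
    sq = 90.0 if r < 1 else (70.0 if r < 2 else 30.0)
    s = (sk + sq) / 2                                 # S06 collapsed to a plain average
    grade = "first" if s > 80 else ("second" if s > 60 else "third")  # S07
    return s, grade
```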
Compared with the prior art, the invention has the beneficial effects that the voice information of the target user is converted into voice text data through the voice recognition unit, and the microexpressive characteristics are extracted through the microexpressive recognition unit, so that the quality of the data to be analyzed is improved, and the accuracy of the judging result can be effectively improved; the credibility score and the emotion change score of the target user are determined through the determining module, so that the auditing result of the target user is more objective and specific; the first abnormal voice information and the second abnormal voice information are sequentially judged by the judging module, so that the accuracy of judging the abnormal voice information is improved; the final credit score is carried out on the target user through the calculation module according to the credibility score and the emotion change score corresponding to the abnormal voice, so that the accuracy of the credit score is improved; and the evaluation module is used for evaluating the credit grade of the target user according to the credit score, so that the accuracy of the judgment conclusion of the auditing device is improved.
In particular, the reliability level determination accuracy is improved by classifying the voice text data according to semantic content by the determination module and comparing the voice text data with the corresponding classified standard knowledge base text data in the knowledge base; classifying the facial image data through a determining module, extracting micro-expression characteristics, calculating to obtain a micro-expression change curve, and comparing the micro-expression change curve with the data of a micro-expression change data model to determine the emotion change grade of the user, so that the accuracy of determining the emotion change grade of the user is improved; and determining the credibility score and the emotion change score of the target user through the determining module, so that the auditing result of the target user is more objective and specific.
In particular, the judgment module judges the first abnormal voice information according to the volume, and judges the first abnormal voice information for the second time according to the volume change rate to obtain the second abnormal voice information, so that the accuracy of judging different abnormal voice information is improved, and the accuracy of scoring the credit of the target user can be improved.
In particular, according to the judgment result of the voice information, the calculation module adjusts the credibility score and the emotion change score of the user according to the influence coefficient of the credibility score and the influence coefficient of the emotion change score of the target user under the condition of abnormal voice information, and further calculates the credit score of the target user according to a weighted average method, so that the scoring process is more objective and specific, and the scoring result is high in accuracy.
In particular, the evaluation module is used for evaluating the credit grade of the target user according to the credit score, so that the accuracy of judging the conclusion by the auditing device is improved.
In particular, the auditing decision proposal information is generated, which is helpful for target users to perfect auditing conditions and improve auditing accuracy.
In particular, the auditing method classifies the voice information by sequentially judging abnormal voice information, scores the target user's credit by combining the credibility score that the abnormal voice information determines for the voice text information with the influence factors of the emotion change score determined by micro-expression recognition, and obtains an evaluation result; it can largely replace manual auditing and improves both the accuracy of the auditing conclusion and the auditing efficiency.
Drawings
FIG. 1 is a diagram of an auditing apparatus based on intelligent dialogue system and micro-expression recognition according to the present invention;
FIG. 2 is a schematic diagram of a judging module structure of an auditing device based on intelligent dialogue system and micro-expression recognition according to the present invention;
fig. 3 is a flowchart of an auditing method based on intelligent dialogue system and micro-expression recognition according to the present invention.
Detailed Description
In order that the objects and advantages of the invention will become more apparent, the invention will be further described with reference to the following examples; it should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are merely for explaining the technical principles of the present invention, and are not intended to limit the scope of the present invention.
It should be noted that, in the description of the present invention, terms such as "upper," "lower," "left," "right," "inner," "outer," and the like indicate directions or positional relationships based on the directions or positional relationships shown in the drawings, which are merely for convenience of description, and do not indicate or imply that the apparatus or elements must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention.
Furthermore, it should be noted that, in the description of the present invention, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be either fixedly connected, detachably connected, or integrally connected, for example; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
An auditing device and method based on intelligent dialogue system and micro expression recognition, as shown in fig. 1-3, can be implemented as follows:
specifically, as shown in fig. 1, the auditing device comprises an acquisition module used for acquiring voice information of a target user in the interaction process and facial image data when the voice information is output;
the storage module is connected with the acquisition module and used for storing the voice information and the facial image data acquired by the acquisition module and storing preset knowledge base text data and a preset micro expression change data model;
the processing module is connected with the storage module and used for carrying out data processing on the voice information and the facial image data; the processing module comprises: the voice recognition unit is used for converting voice information of the target user into voice text data; the micro-expression recognition unit is used for carrying out time sequence analysis on the facial image data of the target user and extracting micro-expression features;
the determining module is respectively connected with the storage module and the processing module and is used for determining the credibility score of the target user according to the comparison result of the semantic content of the voice text data corresponding to the voice information and the standard semantics stored in the preset knowledge base, calculating the emotion change index according to the change curves of a plurality of facial features in the facial image data in the acquisition time, and comparing the emotion change index of the target user with the preset standard emotion change index to determine the emotion change score of the target user;
The judging module is connected with the storage module and used for judging the first abnormal voice information according to the volume and judging the second abnormal voice information according to the volume change rate of the first abnormal voice information;
the calculation module is respectively connected with the determination module and the judgment module and is used for calculating a first credit score according to the credibility score and the emotion change score corresponding to the first abnormal voice information, calculating a second credit score according to the credibility score and the emotion change score corresponding to the second abnormal voice information, calculating a third credit score according to the credibility score and the emotion change score corresponding to the normal voice information, and calculating the credit score of the target user according to a weighted average method;
and the evaluation module is connected with the calculation module and used for evaluating the credit rating of the target user according to the credit score.
The voice information of the target user is converted into voice text data by the voice recognition unit, and the micro-expression features are extracted by the micro-expression recognition unit, which improves the quality of the data to be analyzed and thus the accuracy of the judgment result; the credibility score and the emotion change score of the target user are determined by the determining module, making the auditing result of the target user more objective and specific; the first abnormal voice information and the second abnormal voice information are judged in sequence by the judging module, improving the accuracy of abnormal voice information judgment; the calculation module computes the final credit score of the target user from the credibility scores and emotion change scores corresponding to the abnormal voice information, improving the accuracy of the credit score; and the evaluation module evaluates the credit grade of the target user according to the credit score, improving the accuracy of the judgment conclusion of the auditing device.
Specifically, the determining module reads the voice text data of the target user from the processing module and classifies it according to semantic content, the classified voice text data being the first-type voice text data Dh1, the second-type voice text data Dh2, …, and the nth-type voice text data Dhn of the target user;
the determining module reads the knowledge base text data preset in the storage module and classifies it according to standard semantics, the classified knowledge base text data being the first-type standard text data Dh10, the second-type standard text data Dh20, …, and the nth-type standard text data Dhn0;
a voice text data credibility calculation model ki = Dhi/Dhi0 of the target user is set in the determining module, and the determining module calculates the voice text data credibility of the target user as k = Σki = Σ(Dhi/Dhi0), where i = 1, 2, …, n and k ∈ [0, n];
the determining module determines the credibility level Kx of the target user according to the comparison result of the semantic content of the voice text data corresponding to the voice information and the standard semantic stored in the preset knowledge base,
When k is more than or equal to 0 and less than 0.6n, the determining module determines that the reliability level of the voice text data of the target user is low, and the voice text data is marked as Kx3;
when k is more than or equal to 0.6n and less than 0.8n, the determining module determines that the credibility level of the voice text data of the target user is medium, marked as Kx2;
when k is more than or equal to 0.8n and less than or equal to n, the determining module determines that the reliability level of the voice text data of the target user is high, and the voice text data is marked as Kx1;
each credibility level corresponds to a credibility score interval, wherein Kx1 corresponds to the score interval SKx1 = [80, 100], Kx2 to SKx2 = [60, 80), and Kx3 to SKx3 = [0, 60); the credibility score SKx is taken from SKx1, SKx2 or SKx3.
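By way of illustration, the credibility calculation and level mapping described above can be sketched in Python; the function name and list-based data layout are illustrative assumptions and not part of the claimed device:

```python
def credibility_level(dh, dh0):
    """Compute ki = Dhi/Dhi0 per semantic class, sum them to k, and map k
    to a credibility level per the 0.6n and 0.8n thresholds above."""
    n = len(dh0)
    k = sum(dhi / dhi0 for dhi, dhi0 in zip(dh, dh0))
    if k >= 0.8 * n:
        return k, "Kx1"  # high credibility, score interval SKx1 = [80, 100]
    if k >= 0.6 * n:
        return k, "Kx2"  # medium credibility, score interval SKx2 = [60, 80)
    return k, "Kx3"      # low credibility, score interval SKx3 = [0, 60)
```

Here `dh[i]` stands for the target user's (i+1)-th-type voice text data and `dh0[i]` for the corresponding standard knowledge base text data; the exact measure by which the two quantities are compared is left open in the source.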
Specifically, the facial image data in the storage module are facial micro-expression data of a target user arranged according to a time sequence, the processing module reads the facial image data of the storage module and classifies the facial image data of the target user according to facial features, and the facial image data are eye micro-expression data Wyi (t), nose micro-expression data Wbi (t) and mouth micro-expression data Wzi (t) respectively;
the micro-expression change data models preset in the storage module are the eye micro-expression data model Wyi0(t), the nose micro-expression data model Wbi0(t) and the mouth micro-expression data model Wzi0(t), respectively;
The determining module reads the classified facial image data of the target user and performs discrete degree analysis to respectively generate eye micro-expression change curves Wy (t), nose micro-expression change curves Wb (t) and mouth micro-expression change curves Wz (t);
the determining module reads a microexpressive change data model preset in the storage module, and generates an eye microexpressive change standard curve Wy0 (t), a nose microexpressive change standard curve Wb0 (t) and a mouth microexpressive change standard curve Wz0 (t) respectively;
the determining module calculates the average difference of the eye micro-expression change curve ΔWy = |Wy(t) − Wy0(t)|/(t − t0), the average difference of the nose micro-expression change curve ΔWb = |Wb(t) − Wb0(t)|/(t − t0), and the average difference of the mouth micro-expression change curve ΔWz = |Wz(t) − Wz0(t)|/(t − t0), wherein t0 is the initial time of image data acquisition and t is the actual time of image data acquisition;
the determining module calculates the emotion change index of the target user as Q = ΔWy + ΔWb + ΔWz.
Specifically, a standard emotion change index Q0 is set in the determining module, and the determining module determines the emotion change level of the target user, denoted Qx:
when Q/Q0 is more than or equal to 0 and less than 1, the determining module determines that the emotion change of the target user is small, and marks the emotion change as Qx1;
when Q/Q0 is more than or equal to 1 and less than 2, the determining module determines that the emotion change of the target user is medium, marked as Qx2;
when Q/Q0 is more than or equal to 2, the determining module determines that the emotion change of the target user is large, and marks Qx3;
each emotion change level corresponds to an emotion change score interval, wherein Qx1 corresponds to the score interval SQx1 = [80, 100], Qx2 to SQx2 = [60, 80), and Qx3 to SQx3 = [0, 60); the emotion change score SQx is taken from SQx1, SQx2 or SQx3.
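The emotion change index and its level mapping can likewise be sketched. The dictionary layout and the sampling of each curve at a single time t are simplifying assumptions for illustration (the source compares whole change curves against standard curves over the acquisition window):

```python
def emotion_change_index(curves, standards, t0, t):
    """Q = ΔWy + ΔWb + ΔWz, with each ΔW = |W(t) - W0(t)| / (t - t0)."""
    return sum(abs(curves[part] - standards[part]) / (t - t0)
               for part in ("eye", "nose", "mouth"))

def emotion_change_level(q, q0):
    """Map Q/Q0 to the emotion change level Qx per the thresholds above."""
    ratio = q / q0
    if ratio < 1:
        return "Qx1"  # small change, score interval SQx1 = [80, 100]
    if ratio < 2:
        return "Qx2"  # medium change, score interval SQx2 = [60, 80)
    return "Qx3"      # large change, score interval SQx3 = [0, 60)
```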
The voice text data is classified according to semantic content through the determining module and compared with the corresponding classified standard knowledge base text data in the knowledge base, so that the accuracy of reliability level determination is improved; classifying the facial image data through a determining module, extracting micro-expression characteristics, calculating to obtain a micro-expression change curve, and comparing the micro-expression change curve with the data of a micro-expression change data model to determine the emotion change grade of the user, so that the accuracy of determining the emotion change grade of the user is improved; and determining the credibility score and the emotion change score of the target user through the determining module, so that the auditing result of the target user is more objective and specific.
Specifically, the voice information in the storage module is voice data of a target user arranged according to a time sequence, and the judging module receives the voice information of the storage module and judges abnormal voice information, wherein the abnormal voice information comprises first abnormal voice information and second abnormal voice information;
As shown in fig. 2, the judging module includes a first judging unit and a second judging unit, wherein,
the first judging unit is configured to judge first abnormal voice information according to volume, and includes:
the first judging unit presets a first standard volume Fb10 and a second standard volume Fb20, wherein Fb10 > Fb20 > 0;
when the volume Fb of the voice information satisfies Fb20 ≤ Fb ≤ Fb10, the first judging unit judges that the voice information is normal voice information;
when Fb > Fb10 or Fb < Fb20, the first judging unit judges that the voice information is first abnormal voice information;
the second judging unit is configured to judge second abnormal voice information according to a volume change rate of the first abnormal voice information, and includes:
a sound change rate calculation model AFb(ti) = [Fb(ti) − Fb(t(i−1))]/[ti − t(i−1)] is preset in the second judging unit, wherein Fb(ti) is the volume of the first abnormal voice information at the i-th moment, and Fb(t(i−1)) is the volume of the first abnormal voice information at the (i−1)-th moment;
a standard volume change rate AFb0 is preset in the second judging unit; when AFb(ti) > AFb0, the second judging unit judges that the voice information at the i-th moment is second abnormal voice information.
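The two-stage abnormal voice judgment can be sketched as follows. How samples are paired when computing the change rate is not fully specified in the source, so the pairing below (rate between successive first-abnormal samples) is an assumption:

```python
def classify_volumes(samples, fb10, fb20, afb0):
    """samples: list of (time, volume) in time order.
    A volume outside [Fb20, Fb10] is first abnormal; a first-abnormal
    sample whose volume change rate from the previous first-abnormal
    sample exceeds AFb0 is judged second abnormal."""
    labels = []
    prev = None  # previous first-abnormal (time, volume)
    for t, v in samples:
        if fb20 <= v <= fb10:
            labels.append("normal")
            continue
        label = "abnormal1"
        if prev is not None and (v - prev[1]) / (t - prev[0]) > afb0:
            label = "abnormal2"
        prev = (t, v)
        labels.append(label)
    return labels
```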
The judging module judges the first abnormal voice information according to the volume and then judges it a second time according to the volume change rate to obtain the second abnormal voice information, which improves the accuracy of distinguishing different abnormal voice information and can thereby improve the accuracy of the credit score of the target user.
Specifically, a credit score calculation model is preset in a calculation module
Wherein S is the credit score of the target user, S1 is the first credit score, S2 is the second credit score, and S3 is the third credit score; α1 and β1 are the credibility score adjustment coefficient and the emotion change score adjustment coefficient corresponding to the first abnormal voice information, and α2 and β2 are the credibility score adjustment coefficient and the emotion change score adjustment coefficient corresponding to the second abnormal voice information, wherein 0 < α2 < α1 < 1 and 0 < β2 < β1 < 1.
According to the judgment result of the voice information, the calculation module adjusts the credibility score and the emotion change score of the user according to the influence coefficient of the credibility score and the influence coefficient of the emotion change score of the target user under the condition of abnormal voice information, and further calculates the credit score of the target user according to a weighted average method, so that the scoring process is more objective and specific, and the scoring result is high in accuracy.
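The credit score calculation model itself appears as a formula image in the source and is not reproduced here. The combination form below — a coefficient-adjusted mean of the credibility and emotion change scores per voice segment, averaged over segments — and the sample coefficient values are therefore assumptions, consistent only with the stated constraints 0 < α2 < α1 and 0 < β2 < β1 < 1:

```python
# Hypothetical adjustment coefficients (α, β) per voice-information class;
# the patent constrains only their ordering, not their values.
COEFF = {"normal": (1.0, 1.0), "abnormal1": (0.8, 0.7), "abnormal2": (0.6, 0.5)}

def credit_score(segments):
    """segments: list of (label, SKx, SQx) per voice segment.
    Each segment score is (α·SKx + β·SQx)/2; S is their plain average,
    one possible reading of the 'weighted average method'."""
    scores = []
    for label, skx, sqx in segments:
        a, b = COEFF[label]
        scores.append((a * skx + b * sqx) / 2)
    return sum(scores) / len(scores)
```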
Specifically, credit grades are preset in the evaluation module, including a first credit grade, a second credit grade and a third credit grade, wherein the credit score interval corresponding to the first credit grade is (80, 100), that corresponding to the second credit grade is (60, 80), and that corresponding to the third credit grade is [0, 60];
when the credit score of the target user satisfies S ∈ (80, 100), the evaluation module evaluates the credit grade of the target user as the first credit grade;
when the credit score of the target user satisfies S ∈ (60, 80), the evaluation module evaluates the credit grade of the target user as the second credit grade;
when the credit score of the target user satisfies S ∈ [0, 60], the evaluation module evaluates the credit grade of the target user as the third credit grade.
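The grade mapping can be sketched directly. Note that the intervals in the source leave the boundary score 80 formally unassigned, so the half-open reading below is a choice made for the sketch:

```python
def credit_grade(s):
    """Map the credit score S to the preset credit grade."""
    if s > 80:      # S in (80, 100): first credit grade
        return "first"
    if s > 60:      # S in (60, 80): second credit grade
        return "second"
    return "third"  # S in [0, 60]: third credit grade
```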
Specifically, the evaluation module generates corresponding auditing decision suggestion information according to the evaluation result.
The credit rating of the target user is evaluated by utilizing the evaluation module according to the credit score, so that the accuracy of the judgment conclusion of the auditing device is improved; and generating auditing decision suggestion information, thereby being beneficial to improving auditing conditions of target users and improving auditing accuracy.
Specifically, as shown in fig. 3, the auditing method includes:
S01, collecting voice information of a target user in the interaction process and face image data when the voice information is output;
s02, storing voice information and facial image data, and storing preset knowledge base text data and a preset micro expression change data model;
s03, carrying out data processing on the voice information and the facial image data, wherein the data processing comprises the following steps: converting voice information of a target user into voice text data, performing time sequence analysis on facial image data of the target user, and extracting micro expression features;
s04, determining a voice text data credibility score and an emotion change score, comprising:
s041, classifying the voice text data according to semantic content and calculating to obtain the credibility of the voice text data of the target user; determining the credibility score of the target user according to the semantic content of the voice text data of the target user obtained through calculation and a standard semantic comparison result stored in a preset knowledge base;
s042, classifying the facial image data of the target user according to the facial features, performing discrete degree analysis on the classified facial image data of the target user, generating a plurality of change curves of the facial features in the acquisition time, calculating to obtain emotion change indexes, and comparing the emotion change indexes of the target user with preset standard emotion change indexes to determine emotion change scores of the target user;
S05, judging first abnormal voice information according to the volume and judging second abnormal voice information according to the volume change rate of the first abnormal voice information;
s06, calculating a first credit score according to the credibility score and the emotion change score corresponding to the first abnormal voice information, calculating a second credit score according to the credibility score and the emotion change score corresponding to the second abnormal voice information, calculating a third credit score according to the credibility score and the emotion change score corresponding to the normal voice information, and calculating the first credit score, the second credit score and the third credit score according to a weighted average method to obtain the credit score of the target user;
s07, evaluating the credit rating of the target user according to the credit score.
According to the auditing method, the abnormal voice information is judged in sequence to classify the voice information; the credibility score determined from the voice text information is combined with the abnormal voice information classification and with the emotion change score of the target user determined by micro-expression recognition to produce the credit score and the evaluation result. This can largely replace manual auditing and improves both the accuracy of the auditing conclusion and the auditing efficiency.
Thus far, the technical solution of the present invention has been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of protection of the present invention is not limited to these specific embodiments. Equivalent modifications and substitutions for related technical features may be made by those skilled in the art without departing from the principles of the present invention, and such modifications and substitutions will be within the scope of the present invention.
The foregoing description is only of the preferred embodiments of the invention and is not intended to limit the invention; various modifications and variations of the present invention will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. An auditing device based on intelligent dialogue system and micro-expression recognition, which is characterized by comprising:
the acquisition module is used for acquiring voice information of a target user in the interaction process and facial image data when the voice information is output;
the storage module is used for storing the voice information and the facial image data acquired by the acquisition module, and storing preset knowledge base text data and a preset micro expression change data model;
The processing module is connected with the storage module and used for carrying out data processing on the voice information and the facial image data; the processing module comprises: the voice recognition unit is used for converting the voice information of the target user into voice text data; the micro-expression recognition unit is used for carrying out time sequence analysis on the facial image data of the target user and extracting micro-expression features;
the determining module is respectively connected with the storage module and the processing module and is used for determining the credibility score of the target user according to the comparison result of the semantic content of the voice text data corresponding to the voice information and the standard semantics stored in the preset knowledge base, calculating the emotion change index according to the change curves of a plurality of facial features in the facial image data in the acquisition time, and comparing the emotion change index of the target user with the preset standard emotion change index to determine the emotion change score of the target user;
the judging module is connected with the storage module and used for judging first abnormal voice information according to the volume and judging second abnormal voice information according to the volume change rate of the first abnormal voice information;
The calculation module is respectively connected with the determination module and the judgment module and is used for calculating a first credit score according to the credibility score and the emotion change score corresponding to the first abnormal voice information, calculating a second credit score according to the credibility score and the emotion change score corresponding to the second abnormal voice information, calculating a third credit score according to the credibility score and the emotion change score corresponding to the normal voice information, and calculating the credit score of the target user according to a weighted average method;
the evaluation module is connected with the calculation module and used for evaluating the credit grade of the target user according to the credit score;
the voice information in the storage module is voice data of the target user arranged according to a time sequence, the judging module receives the voice information of the storage module and judges abnormal voice information, and the abnormal voice information comprises first abnormal voice information and second abnormal voice information;
the judging module comprises a first judging unit and a second judging unit, wherein,
The first judging unit is configured to judge first abnormal voice information according to volume, and includes:
the first judging unit presets a first standard volume Fb10 and a second standard volume Fb20, wherein Fb10 > Fb20 > 0;
when the volume Fb of the voice information satisfies Fb20 ≤ Fb ≤ Fb10, the first judging unit judges that the voice information is normal voice information;
when Fb > Fb10 or Fb < Fb20, the first judging unit judges that the voice information is first abnormal voice information;
the second judging unit is configured to judge second abnormal voice information according to a volume change rate of the first abnormal voice information, and includes:
presetting a sound change rate calculation model AFb(ti) = [Fb(ti) − Fb(t(i−1))]/[ti − t(i−1)] in the second judging unit, wherein Fb(ti) is the volume of the first abnormal voice information at the i-th moment, and Fb(t(i−1)) is the volume of the first abnormal voice information at the (i−1)-th moment;
and presetting a standard volume change rate AFb0 in the second judging unit, and judging the voice information at the ith moment as second abnormal voice information by the second judging unit when AFb (ti) is more than AFb 0.
2. The auditing device based on intelligent dialogue system and micro-expression recognition according to claim 1, wherein the determining module reads the voice text data of the target user in the processing module and classifies the voice text data according to semantic content, and the classified voice text data are respectively the first type voice text data Dh1, the second type voice text data Dh2, … … and the nth type voice text data Dhn of the target user;
the determining module reads knowledge base text data preset in the storage module and classifies the knowledge base text data according to standard semantics, wherein the classified knowledge base text data are the first-type standard knowledge base text data Dh10, the second-type standard knowledge base text data Dh20, …, and the nth-type standard knowledge base text data Dhn0, respectively;
setting a voice text data credibility calculation model ki = Dhi/Dhi0 of the target user in the determining module, and calculating, by the determining module, the voice text data credibility of the target user as k = Σki = Σ(Dhi/Dhi0) according to the credibility calculation model, wherein i = 1, 2, …, n and k ∈ [0, n]; the determining module determines the credibility level Kx of the target user according to the comparison result of the semantic content of the voice text data corresponding to the voice information with the standard semantics stored in the preset knowledge base,
When k is more than or equal to 0 and less than 0.6n, the determining module determines that the credibility level of the voice text data of the target user is low, and the voice text data is marked as Kx3;
when k is more than or equal to 0.6n and less than 0.8n, the determining module determines that the credibility level of the voice text data of the target user is medium, marked as Kx2;
when k is more than or equal to 0.8n and less than or equal to n, the determining module determines that the reliability level of the voice text data of the target user is high, and the voice text data is marked as Kx1;
each credibility level corresponds to a credibility score interval, wherein Kx1 corresponds to the score interval SKx1 = [80, 100], Kx2 to SKx2 = [60, 80), and Kx3 to SKx3 = [0, 60); the credibility score SKx is taken from SKx1, SKx2 or SKx3.
3. The auditing device based on intelligent dialogue system and micro-expression recognition according to claim 2, wherein the facial image data in the storage module is facial micro-expression data of the target user arranged according to a time sequence, and the processing module reads the facial image data of the storage module and classifies the facial image data of the target user according to facial features, which are eye micro-expression data Wyi (t), nose micro-expression data Wbi (t) and mouth micro-expression data Wzi (t), respectively;
The micro-expression change data models preset in the storage module are the eye micro-expression data model Wyi0(t), the nose micro-expression data model Wbi0(t) and the mouth micro-expression data model Wzi0(t), respectively;
the determining module reads the classified facial image data of the target user and performs discrete degree analysis to respectively generate eye micro-expression change curves Wy (t), nose micro-expression change curves Wb (t) and mouth micro-expression change curves Wz (t);
the determining module reads the microexpressive change data model preset in the storage module, and generates an eye microexpressive change standard curve Wy0 (t), a nose microexpressive change standard curve Wb0 (t) and a mouth microexpressive change standard curve Wz0 (t) respectively;
the determining module calculates the average difference of the eye micro-expression change curve ΔWy = |Wy(t) − Wy0(t)|/(t − t0), the average difference of the nose micro-expression change curve ΔWb = |Wb(t) − Wb0(t)|/(t − t0), and the average difference of the mouth micro-expression change curve ΔWz = |Wz(t) − Wz0(t)|/(t − t0), wherein t0 is the initial time of acquiring the image data and t is the actual time of acquiring the image data;
and the determining module calculates the emotion change index of the target user as Q = ΔWy + ΔWb + ΔWz.
4. The auditing apparatus based on intelligent dialogue system and micro-expression recognition according to claim 3, wherein the determining module sets a standard emotion change index Q0, the determining module determines the emotion change level of the target user, denoted as Qx,
when Q/Q0 is more than or equal to 0 and less than 1, the determining module determines that the emotion change of the target user is small, and marks Qx1;
when Q/Q0 is more than or equal to 1 and less than 2, the determining module determines that the emotion change of the target user is medium, marked as Qx2;
when Q/Q0 is more than or equal to 2, the determining module determines that the emotion change of the target user is large, and marks Qx3;
each of the emotion change levels corresponds to an emotion change score interval, wherein Qx1 corresponds to the score interval SQx1 = [80, 100], Qx2 to SQx2 = [60, 80), and Qx3 to SQx3 = [0, 60); the emotion change score SQx is taken from SQx1, SQx2 or SQx3.
5. The auditing device based on intelligent dialogue system and micro-expression recognition according to claim 4, wherein a credit score calculation model is preset in the calculation module
Wherein S is the credit score of the target user, S1 is the first credit score, S2 is the second credit score, and S3 is the third credit score; α1 is the first credibility score adjustment coefficient corresponding to the first abnormal voice information, β1 is the first emotion change score adjustment coefficient corresponding to the first abnormal voice information, α2 is the second credibility score adjustment coefficient corresponding to the second abnormal voice information, and β2 is the second emotion change score adjustment coefficient corresponding to the second abnormal voice information, wherein 0 < α2 < α1 < 1 and 0 < β2 < β1 < 1.
6. The auditing device based on intelligent dialogue system and micro-expression recognition according to claim 5, wherein the credit levels are preset in the evaluation module, and include a first credit level, a second credit level and a third credit level, wherein the credit score interval corresponding to the first credit level is (80, 100), the credit score interval corresponding to the second credit level is (60, 80), and the credit score interval corresponding to the third credit level is [0, 60];
when the credit score of the target user satisfies S ∈ (80, 100), the evaluation module evaluates the credit grade of the target user as the first credit grade;
when the credit score S e (60, 80) of the target user, the evaluation module evaluates the credit rating of the target user as a second credit rating;
and when the credit score S epsilon [0, 60] of the target user, the evaluation module evaluates the credit grade of the target user as a third credit grade.
7. The auditing device based on intelligent dialogue system and micro-expression recognition according to claim 6, wherein the evaluation module generates corresponding auditing decision suggestion information according to the evaluation result.
8. An intelligent dialog system and micro-expression recognition based auditing method applied to the intelligent dialog system and micro-expression recognition based auditing device as claimed in any one of claims 1-7, comprising:
s01, collecting voice information of a target user in the interaction process and face image data when the voice information is output;
s02, storing the voice information and the facial image data, and storing preset knowledge base text data and a preset micro expression change data model;
s03, carrying out data processing on the voice information and the face image data, wherein the data processing comprises the following steps: converting the voice information of the target user into voice text data, performing time sequence analysis on the facial image data of the target user, and extracting micro expression features;
s04, determining the credibility score and the emotion change score of the voice text data;
s05, judging first abnormal voice information according to the volume and judging second abnormal voice information according to the volume change rate of the first abnormal voice information;
s06, calculating a first credit score according to the credibility score and the emotion change score corresponding to the first abnormal voice information, calculating a second credit score according to the credibility score and the emotion change score corresponding to the second abnormal voice information, calculating a third credit score according to the credibility score and the emotion change score corresponding to the normal voice information, and calculating the credit score of the target user from the first credit score, the second credit score and the third credit score according to a weighted average method;
S07, evaluating the credit grade of the target user according to the credit score;
wherein determining the speech text data confidence score and emotion change score comprises:
s041, classifying the voice text data according to semantic content and calculating to obtain the credibility of the voice text data of the target user; determining the credibility score of the target user according to the semantic content of the voice text data of the target user obtained through calculation and a standard semantic comparison result stored in a preset knowledge base;
s042, classifying the facial image data of the target user according to the facial features, performing discrete degree analysis on the classified facial image data of the target user, generating a plurality of change curves of the facial features in the acquisition time, calculating to obtain emotion change indexes, and comparing the emotion change indexes of the target user with preset standard emotion change indexes to determine emotion change scores of the target user.
CN202311734324.3A 2023-12-18 2023-12-18 Auditing device and method based on intelligent dialogue system and micro expression recognition Active CN117422547B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311734324.3A CN117422547B (en) 2023-12-18 2023-12-18 Auditing device and method based on intelligent dialogue system and micro expression recognition

Publications (2)

Publication Number Publication Date
CN117422547A (en) 2024-01-19
CN117422547B (en) 2024-04-02

Family

ID=89525157

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311734324.3A Active CN117422547B (en) 2023-12-18 2023-12-18 Auditing device and method based on intelligent dialogue system and micro expression recognition

Country Status (1)

Country Link
CN (1) CN117422547B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107704834A (en) * 2017-10-13 2018-02-16 上海壹账通金融科技有限公司 Micro-expression face review assistance method, device and storage medium
JP2018194590A (en) * 2017-05-12 2018-12-06 株式会社ブレインチャイルド Credit score optimizing method, device and system
CN111275444A (en) * 2020-01-14 2020-06-12 深圳壹账通智能科技有限公司 Contract signing-based double recording method and device, terminal and storage medium
WO2021000678A1 (en) * 2019-07-04 2021-01-07 平安科技(深圳)有限公司 Business credit review method, apparatus, and device, and computer-readable storage medium
CN112215700A (en) * 2020-10-13 2021-01-12 中国银行股份有限公司 Credit face audit method and device
KR20230087268A (en) * 2021-12-09 2023-06-16 주식회사 카카오뱅크 Method for operating credit scoring model using autoencoder
KR20230123328A (en) * 2022-02-16 2023-08-23 강도형 Method and apparatus for providing information on emotion of user
CN116776235A (en) * 2023-06-20 2023-09-19 平安科技(深圳)有限公司 Emotion adjustment instruction recommending method, device, equipment and medium
CN117149979A (en) * 2023-09-15 2023-12-01 天元大数据信用管理有限公司 Method and device for constructing intelligent question-answering and review module before loan

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9930186B2 (en) * 2015-10-14 2018-03-27 Pindrop Security, Inc. Call detail record analysis to identify fraudulent activity
CN107392757A (en) * 2017-07-24 2017-11-24 重庆小雨点小额贷款有限公司 Signal auditing method and device
US11216784B2 (en) * 2020-01-29 2022-01-04 Cut-E Assessment Global Holdings Limited Systems and methods for automating validation and quantification of interview question responses

Also Published As

Publication number Publication date
CN117422547A (en) 2024-01-19

Similar Documents

Publication Publication Date Title
Hansen et al. Speaker recognition by machines and humans: A tutorial review
Ganapathiraju et al. Applications of support vector machines to speech recognition
US7904295B2 (en) Method for automatic speaker recognition with hurst parameter based features and method for speaker classification based on fractional brownian motion classifiers
US7603275B2 (en) System, method and computer program product for verifying an identity using voiced to unvoiced classifiers
US20070129941A1 (en) Preprocessing system and method for reducing FRR in speaking recognition
KR100406307B1 (en) Voice recognition method and system based on voice registration method and system
US20030110038A1 (en) Multi-modal gender classification using support vector machines (SVMs)
US20110161083A1 (en) Methods and systems for assessing and improving the performance of a speech recognition system
EP2711923B1 (en) Methods and systems for assessing and improving the performance of a speech recognition system
CN1291324A (en) System and method for detecting a recorded voice
CN110349586B (en) Telecommunication fraud detection method and device
Alexander Forensic automatic speaker recognition using Bayesian interpretation and statistical compensation for mismatched conditions
Mahesha et al. Support vector machine-based stuttering dysfluency classification using GMM supervectors
CN112015874A (en) Student mental health accompany conversation system
CN111489736B (en) Automatic scoring device and method for seat speaking operation
CN117422547B (en) Auditing device and method based on intelligent dialogue system and micro expression recognition
KR100864828B1 (en) System for obtaining speaker's information using the speaker's acoustic characteristics
Kartik et al. Multimodal biometric person authentication system using speech and signature features
JPWO2020003413A1 (en) Information processing equipment, control methods, and programs
CN114220419A (en) Voice evaluation method, device, medium and equipment
CN114360553A (en) Method for improving voiceprint safety
Hannani et al. Text-independent speaker verification
CN111091836A (en) Intelligent voiceprint recognition method based on big data
CN106971725B (en) Voiceprint recognition method and system with priority
Skosan Histogram equalization for robust text-independent speaker verification in telephone environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant