CN111553189A - Data verification method and device based on video information and storage medium - Google Patents

Data verification method and device based on video information and storage medium

Info

Publication number
CN111553189A
CN111553189A
Authority
CN
China
Prior art keywords
behavior
main body
video
verification
video information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010236490.0A
Other languages
Chinese (zh)
Inventor
李伟
赵之砚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
OneConnect Smart Technology Co Ltd
OneConnect Financial Technology Co Ltd Shanghai
Original Assignee
OneConnect Financial Technology Co Ltd Shanghai
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by OneConnect Financial Technology Co Ltd Shanghai filed Critical OneConnect Financial Technology Co Ltd Shanghai
Priority to CN202010236490.0A priority Critical patent/CN111553189A/en
Publication of CN111553189A publication Critical patent/CN111553189A/en
Priority to PCT/CN2021/071987 priority patent/WO2021196831A1/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/40 - Scenes; Scene-specific elements in video content
    • G06V 20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 - Commerce
    • G06Q 30/018 - Certifying business or products
    • G06Q 30/0185 - Product, service or business identity fraud
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00 - Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/08 - Insurance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 - Facial expression recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 - Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Development Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Marketing (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Social Psychology (AREA)
  • Technology Law (AREA)
  • Psychiatry (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a data verification method, a device and a storage medium based on video information, wherein the method comprises the following steps: collecting data to be verified, wherein the data to be verified comprises first video information containing a behavior main body; according to the first video information, performing real person identification and first identity verification on the behavior main body in the data to be verified respectively, to judge whether the behavior main body is a real person and whether the behavior main body is consistent with a pre-stored certificate photo; pushing a verification question to the behavior main body, and collecting second video information while the behavior main body answers the verification question; and respectively carrying out second identity verification and micro-expression analysis on the behavior main body according to the second video information, to judge whether the behavior main body is consistent with the behavior main body in the first video and whether the behavior main body shows cheating behavior while answering the verification question. The invention uses image and voice recognition technology to judge whether an answerer cheats when answering the health advice.

Description

Data verification method and device based on video information and storage medium
Technical Field
The present invention relates to the field of video processing technologies, and in particular, to a data verification method and apparatus based on video information, and a storage medium.
Background
The current health advice is mainly filled in as follows: operating in an APP, a WeChat official account, or an H5 link, the applicant manually selects reply items for the content of the health advice.
The applicant (or the insured) is required to fill out the health advice truthfully when purchasing insurance. An applicant who is not in good health may have to pay a higher premium or may be refused coverage, so in practice the following situations arise in which the health advice is not filled out truthfully:
1. The applicant deliberately conceals a medical history and does not fill out the health advice truthfully;
2. The applicant, out of carelessness, does not read the health advice carefully or only glances at it, assumes that he or she (or the insured) is in good health and has no problems, and fills it in casually;
3. The insurance agent, in order to meet application targets, fills in the health advice on behalf of the applicant.
As a result, the underwriter cannot correctly classify and assess the policy, and the insurance company has to delay the policy, verify the client's relevant data, have the health advice re-filled, and underwrite again, which reduces the insurance company's working efficiency.
Therefore, a method for verifying data information is needed to prevent the health advice from being filled in untruthfully.
Disclosure of Invention
In view of the above problems, it is an object of the present invention to provide a data verification method, apparatus and storage medium based on video information. The method can be used throughout the health advice answering process; it adopts image and voice detection and recognition technology to prevent malicious applicants from defrauding insurance, moves underwriting work earlier in the process, and can save a large amount of the economic cost and time cost that survey, underwriting and similar work would otherwise require in manpower and material resources.
According to an aspect of the present invention, there is provided a data verification method based on video information, comprising the steps of:
S110: acquiring data to be verified, wherein the data to be verified comprises first video information containing a behavior main body;
S120: according to the first video information, performing real person identification and first identity verification on the behavior main body in the data to be verified respectively, to judge whether the behavior main body is a real person and whether the behavior main body is consistent with a pre-stored certificate photo; if the behavior main body is a real person and is consistent with the pre-stored certificate photo, performing S130; if the behavior main body is not a real person and/or is not consistent with the pre-stored certificate photo, performing S110;
S130: sequentially pushing at least one verification question to the behavior main body, and collecting second video information while the behavior main body answers the verification question;
S140: according to the second video information, respectively carrying out second identity verification and micro-expression analysis on the behavior main body, to judge whether the behavior main body is consistent with the behavior main body in the first video and whether the behavior main body shows cheating behavior in the process of answering the verification question; if the behavior main body is not consistent with the behavior main body in the first video, stopping pushing new verification questions to the behavior main body; and if the behavior main body is consistent with the behavior main body in the first video but the micro-expression analysis shows that the behavior main body has cheating behavior in the process of answering the verification question, marking the verification question as an abnormal question.
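For orientation only, the control flow of steps S110 to S140 can be summarized in the following minimal Python sketch; every function name in it is a hypothetical placeholder for the detection components described in the embodiments below, not part of the claimed method:

    def verify_applicant(questions, collect_first_video, is_real_person, matches_id_photo,
                         push_question, collect_answer_video, same_person, shows_deception):
        # S110/S120: repeat first-video collection until a real person whose face
        # matches the pre-stored certificate photo is captured.
        while True:
            first_video = collect_first_video()
            if is_real_person(first_video) and matches_id_photo(first_video):
                break
        abnormal_questions = []
        # S130/S140: push verification questions one by one and analyse each answer video.
        for question in questions:
            push_question(question)
            answer_video = collect_answer_video()
            if not same_person(first_video, answer_video):
                break                                   # person changed: stop pushing new questions
            if shows_deception(answer_video):
                abnormal_questions.append(question)     # mark as an abnormal question
        return abnormal_questions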
The real person identification is carried out on the behavior main body in the data to be verified according to the first video information so as to judge whether the behavior main body is a real person, and the method comprises the following processes:
capturing at least two thousand first face pictures from the first image information in the first video information through frame extraction, and evaluating the first face pictures with a silent multi-frame liveness detection model;
acquiring the heart rate of the behavior main body at the moment each first face picture was captured, by means of RPPG (remote photoplethysmography) heartbeat detection;
if the heart rates are all within the set range, calculating a heart rate average fluctuation value from the heart rates, and if the heart rates are all smaller than the heart rate average fluctuation value and the result calculated by the silent multi-frame liveness detection model is a real person, judging that the behavior main body is a real person;
and if the heart rate is not in the set range, judging that the behavior subject is not a real person.
The calculation process of the heart rate average fluctuation value comprises the following steps: dividing the first face pictures into M groups according to the intercepted time sequence, wherein each group comprises N first face pictures, subtracting the minimum heart rate value from the maximum heart rate value in each group to obtain a heart rate difference value, adding the heart rate difference values of each group, and then averaging, wherein the average value is the heart rate average fluctuation value.
The first identity verification is carried out on the behavior main body according to the first video information, and whether the behavior main body is consistent with a pre-stored certificate photo or not is judged, wherein the method comprises the following processes:
adopting a face image detection algorithm model to perform quality detection on each first face picture, selecting at least one first face picture meeting preset quality conditions, and storing the first face picture as a standard face picture;
and comparing the standard face picture with the certificate photo prestored in the behavior main body to obtain the similarity between the standard face picture and the certificate photo, wherein if the similarity is higher than the preset similarity of the person and the certificate, the first identity verification is passed.
The second identity verification and the micro-expression analysis are respectively carried out on the behavior main body according to the second video information so as to judge whether the behavior main body is consistent with the behavior main body in the first video or not and judge whether the behavior main body has cheating behaviors in the process of answering the verification question or not, and the method comprises the following steps:
frame extraction is carried out on second image information in the second video information within set time to obtain a second face picture, the second face picture is compared with the standard face picture, and if the comparison result is the same, the behavior main body is consistent with the behavior main body in the first video;
and inputting the second face picture into an expression classification model based on a convolutional neural network for micro-expression analysis, and if the micro-expression analysis result shows deceptive behavior, marking the verification question as an abnormal question.
Further, the method also comprises the step of obtaining and recording answers of the behavior main body to the verification questions according to the second video information.
The S130 further includes: broadcasting the verification question through a loudspeaker.
According to another aspect of the present invention, there is provided a data verification system based on video information, including:
the first video acquisition unit is used for acquiring data to be verified, wherein the data to be verified comprises first video information containing a behavior main body;
the real person identification and first identity verification unit is used for respectively carrying out real person identification and first identity verification on the behavior main body in the data to be verified according to the first video information, so as to judge whether the behavior main body is a real person and whether the behavior main body is consistent with a pre-stored certificate photo; if the behavior main body is a real person and is consistent with the pre-stored certificate photo, second video acquisition and verification question pushing are performed; if the behavior main body is not a real person and/or is not consistent with the pre-stored certificate photo, the first video acquisition unit is triggered again;
the verification question pushing and second video collecting unit is used for sequentially pushing at least one verification question to the behavior main body and collecting second video information while the behavior main body answers the verification question;
the deception behavior judging unit is used for carrying out second identity verification and micro-expression analysis on the behavior main body according to the second video information, to judge whether the behavior main body is consistent with the behavior main body in the first video and whether the behavior main body has deceptive behavior in the process of answering the verification question; if the behavior main body is not consistent with the behavior main body in the first video, pushing of new verification questions to the behavior main body is stopped; and if the behavior main body is consistent with the behavior main body in the first video but the micro-expression analysis shows that the behavior main body has deceptive behavior in the process of answering the verification question, the verification question is marked as an abnormal question.
According to another aspect of the present invention, there is provided an electronic apparatus including: a memory in which a computer program is stored, and a processor, the computer program, when being executed by the processor, implementing the steps of the above-mentioned video information based data authentication method.
According to another aspect of the present invention, there is provided a computer-readable storage medium having stored therein a video-information-based data authentication program, which when executed by a processor, implements the steps of the above-described video-information-based data authentication method.
By using the data verification method, the data verification device and the storage medium based on video information, AI technical capabilities such as face recognition, face comparison, voice recognition, semantic understanding, intelligent recording and micro-expression analysis are integrated, the operation flow of the conventional health advice is changed, and the health advice data are verified and filled in by video. This has the following advantages: 1. it prevents others from answering in the applicant's name and ensures that the real person answers; 2. it uses micro-expression anti-fraud technology to screen out false responses; 3. the whole health advice reply is recorded throughout and stored for a long time, providing a basis for any disputes that may arise in the future; 4. each health advice question is broadcast to the applicant (or the insured), reducing wrong answers given by the applicant (or the insured) due to carelessness or inattention.
For insurance companies, malicious applications and insurance fraud are prevented and underwriting work is moved earlier in the process, so a large amount of the economic cost and time cost of survey, underwriting and similar work performed with manpower and material resources can be saved. The daily work of compliance staff is greatly facilitated. Automatic quality inspection saves preliminary review time for quality-control personnel and improves working efficiency.
For the applicant (or the insured), video can be used to verify identity and prevent filling in on another person's behalf, reducing the possibility of the applicant (or the insured) giving false responses for various reasons, and avoiding wrong answers caused by misunderstanding or carelessness, thereby avoiding subsequent problems such as difficulties with claim settlement, termination of the insurance contract and non-refund of premiums, effectively safeguarding the interests of the applicant (or the insured) and making the insurance application clear and transparent. If a dispute arises after the policy is issued, the recorded health advice video file can also serve as supporting evidence of the application conditions at that time, so that the relevant departments can make a correct judgment. The data verification method based on video information thus contributes to fair and trustworthy insurance applications and helps the public understand and accept them. The data verification method based on video information can also be applied to other occasions where it is necessary to verify whether a respondent is being deceptive when answering questions.
To the accomplishment of the foregoing and related ends, one or more aspects of the invention comprise the features hereinafter fully described and particularly pointed out in the claims. The following description and the annexed drawings set forth in detail certain illustrative aspects of the invention. These aspects are indicative, however, of but a few of the various ways in which the principles of the invention may be employed. Further, the present invention is intended to include all such aspects and their equivalents.
Drawings
Other objects and results of the present invention will become more apparent and more readily appreciated as the same becomes better understood by reference to the following description and appended claims, taken in conjunction with the accompanying drawings. In the drawings:
fig. 1 is a flowchart of a data verification method based on video information according to embodiment 1 of the present invention;
fig. 2 is a schematic diagram of a logical structure of a data verification system based on video information according to embodiment 2 of the present invention;
fig. 3 is a schematic diagram of a logic structure of an electronic device according to embodiment 3 of the present invention.
The same reference numbers in all figures indicate similar or corresponding features or functions.
Detailed Description
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of one or more embodiments. It may be evident, however, that such embodiment(s) may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing one or more embodiments.
Explanation of terms:
Silent multi-frame liveness detection model: based on a recent deep convolutional neural network and obtained by training on hundreds of millions of real-face and non-real-face images. A non-real face picture has certain characteristics, such as moire fringes, picture reflections, distortion and background anomalies; the silent multi-frame liveness detection model examines multiple pictures extracted from a video to judge whether they were shot of a real person.
RPPG heartbeat detection: remote Photoplethysmography (RPPG) uses reflected ambient light to measure subtle changes in the brightness of the skin. Subtle changes in skin brightness are due to blood flow caused by the beating of the heart. Generally by RPPG we can get a BVP (blood volume pulse) like signal from which the heart rate can be predicted.
FFmpeg: FFmpeg is a set of open source computer programs that can be used to record, convert digital audio, video, and convert them into streams. It provides a complete solution for recording, converting and streaming audio and video.
Specific embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
Example 1
Fig. 1 is a flowchart of a data verification method based on video information according to embodiment 1 of the present invention.
As shown in fig. 1, the data verification method based on video information provided in this embodiment includes the following steps:
s110: collecting data to be verified, wherein the data to be verified comprises first video information containing behavior bodies;
the behavioral subjects in the data to be verified may be answerers who will reply to the health advice. The front camera is arranged on the display screen, shoots first video information of the answer person and can display the first video information on the display screen, and the first video information is used for real person identification and identity verification of the answer person.
S120: according to the first video information, respectively carrying out real person identification and first identity verification on the answerer, and judging whether the answerer is a real person or not and whether the answerer is consistent with a prestored certificate photo or not; if the answerer is a real person and is consistent with the prestored certificate photo, s130 is carried out; if the answerer is not a real person and/or is not consistent with the pre-stored certificate photo, re-performing s 110;
in this step, it is determined whether the person is a real person who records video, and whether the person on the submitted identity document is recording video, and the following health question answering stage can be performed if both conditions are satisfied. If a condition is not satisfied to indicate that the answerer has a fraud tendency, the following health questions are not displayed, the health advice questions are stopped, and the first video recording of other answerers is carried out.
S130: sequentially pushing verification questions to the answerers, and collecting second video information when the answerers answer the verification questions;
the verification question can be a single health question, the single health question is displayed on a display screen by screen, and second video information when the answerer answers the single health question is collected and displayed on the display screen. The display screen only displays one healthy question, the front camera collects videos of the answerer when the answerer answers the healthy question and the videos serve as second video information, and the second video information and the healthy question are displayed on the display screen at the same time. After answering the current health question, the answerer displays the next health question.
S140: according to the second video information, respectively carrying out second identity verification and micro-expression analysis on the answerers so as to judge whether the answerers are consistent with the answerers in the first video and whether the answerers have cheating behaviors in the process of answering the health questions; if the answerer is inconsistent with the answerer in the first video, stopping pushing a new verification question to the answerer; and if the answerer is consistent with the answerer in the first video and the micro-expression analysis answerer has cheating behavior in the process of answering the health question, marking the health question as an abnormal question.
And correspondingly acquiring second video information when the answerer answers each health question, judging whether the answerer has deception behavior when answering each health question, and recording and storing the second video information. The auditor can conveniently check the health notice in a targeted manner. After all questions are answered, all the second videos are stored, so that the video files recorded by the answerers during answering can be stored for a long time, and powerful auxiliary evidence is provided for disputes which may be generated in the future.
Specifically, in step S120:
firstly, a multimedia video processing tool FFmpeg is adopted to separate image information and voice information of first video information to obtain first image information and first voice information.
According to the first video information, real person identification is carried out on the answerer, whether the answerer is a real person or not is judged, and the method comprises the following processes:
at least twenty thousand first face pictures are captured from the first image information using a frame extraction technique, and the first face pictures are evaluated with the silent multi-frame liveness detection model to judge whether a real person is being filmed. Silent multi-frame liveness detection effectively prevents other people from passing off face capture with paper photographs, mobile phone videos and similar means, so that face capture from a real, live person is truly achieved.
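The frame extraction step itself is not detailed; as one possible illustration (an assumption, not the patented implementation), frames can be decoded with OpenCV and the largest detected face cropped from every few frames:

    import cv2

    def extract_face_pictures(video_path, frame_step=5):
        # Detect faces with a stock Haar cascade and keep the largest face crop
        # from every frame_step-th decoded frame.
        detector = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        capture = cv2.VideoCapture(video_path)
        faces, index = [], 0
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            if index % frame_step == 0:
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
                if len(boxes) > 0:
                    x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])
                    faces.append(frame[y:y + h, x:x + w])
            index += 1
        capture.release()
        return faces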
The heart rate of the answerer at the moment each first face picture was captured is acquired by RPPG (remote photoplethysmography) heartbeat detection, as a further check on whether a real person is being filmed.
If the acquired heart rates are all within the set range, the heart rate average fluctuation value is calculated from the heart rate values; if the heart rates are all smaller than the heart rate average fluctuation value and the result of the silent multi-frame liveness detection model is a real person, the answerer is judged to be a real person and real person identification passes.
If an acquired heart rate is not within the set range, the answerer is judged not to be a real person, the heart rate average fluctuation value is no longer calculated, and filling in of the health advice is terminated. The set heart rate range may be 50-160 beats per minute.
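The RPPG computation is only described at the level of the BVP-like signal; purely as an illustration of how a heart rate value could be read off such a signal, the sketch below takes a per-frame mean skin-brightness trace and picks the dominant frequency inside the 50-160 beats-per-minute band (the band limits come from the text above, while the FFT approach is an assumption):

    import numpy as np

    def estimate_heart_rate(brightness, fps):
        # brightness: sequence of per-frame mean brightness of a skin region
        # fps: frame rate of the video in frames per second
        signal = np.asarray(brightness, dtype=float)
        signal = signal - signal.mean()                        # remove the DC component
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)      # frequencies in Hz
        band = (freqs >= 50 / 60.0) & (freqs <= 160 / 60.0)    # 50-160 beats per minute
        if not np.any(band):
            return None
        peak_hz = freqs[band][np.argmax(spectrum[band])]
        return peak_hz * 60.0                                  # convert Hz to beats per minute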
The calculation process of the heart rate average fluctuation value comprises the following steps:
dividing the first face pictures into M groups according to the intercepted time sequence, wherein each group comprises N first face pictures, subtracting the minimum heart rate value from the maximum heart rate value in each group to obtain heart rate difference values, adding the heart rate difference values of each group and then averaging, and the average value is the heart rate average fluctuation value. The calculation formula is as follows:
A = (1/M) * Σ_{i=1}^{M} ( max(H_{i,1}, ..., H_{i,N}) - min(H_{i,1}, ..., H_{i,N}) )
wherein: M is the number of groups (M may be greater than 10,000), and each group contains N face pictures; H_{i,1}, H_{i,2}, ..., H_{i,N} are the heart rate values obtained by RPPG heartbeat detection at the moment each face picture in the i-th group was captured; and A is the heart rate average fluctuation value.
Combining the two detection modes, heartbeat detection and the silent multi-frame liveness detection model, improves the reliability of real person identification.
Real person identification of the answerer according to the first video information, i.e. judging whether the answerer is a real person, can also be performed by means of speech recognition and lip-language recognition, which specifically includes:
performing voice recognition on the first voice information to obtain semantic information corresponding to the first voice information; performing framing processing on the first image information to obtain the lip position in each frame of image after framing; performing lip language identification on the position of a lip in each frame of image to obtain semantic information corresponding to the lip language of each frame of image; and calculating the similarity value of the semantic information corresponding to the first voice information and the semantic information corresponding to the lip language recognition by using a time alignment algorithm, judging whether the answerer is a real person or not according to the similarity value, and if the answerer is the real person, identifying the real person to pass.
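The time alignment algorithm used to compare the two recognition results is not specified at this level of detail; as a deliberately simple stand-in, the recognized speech text and the recognized lip-language text can be compared with a sequence matcher, with the threshold value below being an assumption:

    from difflib import SequenceMatcher

    def speech_lip_consistency(speech_text, lip_text, threshold=0.8):
        # Returns the similarity between the text recognized from the audio and the
        # text recognized from the lip movements, plus a pass/fail flag.
        similarity = SequenceMatcher(None, speech_text, lip_text).ratio()
        return similarity, similarity >= threshold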
According to the first video information, performing first identity verification on the answerer, and judging whether the answerer is consistent with a prestored certificate photo or not, wherein the method comprises the following processes:
a face image detection algorithm model can be adopted to detect the quality of each first face picture, or a number of first face pictures can be randomly extracted; at least one first face picture meeting the preset quality conditions is selected and stored as the standard face picture. The standard face picture is compared with the certificate photo pre-stored for the answerer to obtain the similarity between the standard face picture and the certificate photo, and if the similarity is higher than the preset person-certificate similarity threshold, the first identity verification passes.
The face image detection algorithm model can examine the whole face image, and the preset quality conditions may be: whether face features are present, whether the face proportion meets the requirement (20%-70%), and whether the overall image resolution meets the set requirement. The purpose of this detection is to determine whether an extracted first face picture is suitable for comparison and to provide a higher-quality picture as the standard face picture.
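Interpreting the face proportion as the ratio of the detected face area to the whole image area, the quality conditions quoted above can be checked along the following lines (the minimum resolution figures are assumptions; only the 20%-70% proportion comes from the text):

    def meets_quality_conditions(frame_shape, face_box, min_width=640, min_height=480):
        # frame_shape: (height, width) of the whole picture; face_box: (x, y, w, h) of the face.
        frame_height, frame_width = frame_shape[:2]
        if frame_width < min_width or frame_height < min_height:
            return False                                  # whole-image resolution requirement
        _, _, face_width, face_height = face_box
        if face_width == 0 or face_height == 0:
            return False                                  # no usable face features
        face_ratio = (face_width * face_height) / float(frame_width * frame_height)
        return 0.20 <= face_ratio <= 0.70                 # face proportion requirement (20%-70%)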
In this step, it is determined whether the person on the identity document is the person appearing in the recording who will answer the health questions. If the person in the recording is a real person and is the person on the identity document, the subsequent health question answering stage can be carried out.
In step S130, while the display screen shows a health question, the displayed health question can automatically be read out through the loudspeaker. Each health question is thus heard as well as seen, which avoids the problem of some answerers not reading a question carefully and answering at random.
Specifically, in step S140:
firstly, the multimedia video processing tool FFmpeg is adopted to separate the image information and the voice information of the second video information to obtain the second image information and the second voice information.
According to the second video information, respectively carrying out second identity verification and micro-expression analysis on the answerers so as to judge whether the answerers are consistent with the answerers in the first video and whether the answerers have cheating behaviors in the process of answering the health questions, wherein the method comprises the following steps:
frame extraction is performed on the second image information in the second video information within a set time to obtain a second face picture, and the second face picture is compared with the standard face picture obtained in step S120 for the second identity verification. If the comparison results differ, the answerer has been switched during the answering of the health questions; pushing of health questions to that answerer is stopped and the answering of the health advice is terminated. If the comparison results are the same, the answerer is consistent with the answerer in the first video, no substitution has occurred midway, and answering of the health advice is not stopped. At the same time, the second face picture is input into an expression classification model based on a convolutional neural network for micro-expression analysis, to judge whether the answerer shows deceptive behavior while answering the health question; if the micro-expression analysis result corresponding to the second face picture indicates deceptive behavior, there is a tendency to deceive, and the health question is marked as an abnormal question and stored.
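The expression classification model is characterized only as being based on a convolutional neural network; a minimal PyTorch sketch of such a classifier is given below, where the input size, layer sizes and the two-class (deceptive / non-deceptive) output are assumptions rather than details taken from the patent:

    import torch
    import torch.nn as nn

    class ExpressionClassifier(nn.Module):
        # Tiny CNN over 48x48 grayscale face crops, producing two logits
        # (deceptive expression / non-deceptive expression).
        def __init__(self, num_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 12 * 12, num_classes)

        def forward(self, x):                  # x: (batch, 1, 48, 48)
            x = self.features(x)
            return self.classifier(x.flatten(1))

    # Usage sketch: probabilities for one preprocessed face tensor of shape (1, 1, 48, 48).
    # probabilities = torch.softmax(ExpressionClassifier()(face_tensor), dim=1)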
The set time within which frames are extracted can be determined according to the actual situation. In this application, a frame may be extracted when the answerer begins to answer (that is, when the answerer starts speaking after the health question is displayed) and when the answer to the health question is finished, or a frame may be extracted every 2 seconds.
When answering, a person usually reveals more than one kind of information; language, micro-expressions, body movements, unconscious touching and the like reflect the person's intention more clearly, so collecting these channels of information at the same time and cross-checking them helps the system judge more accurately whether the person is answering questions honestly.
If the way the answerer answers a health question violates the answering habits of ordinary people, the question is marked as an abnormal question. After the entire health advice has been asked and answered, the auditor can see from the recorded abnormal questions at which health questions the answerer's emotional and psychological changes were large, where the answers may have been falsified, and in which link a tendency to deceive appeared. If a substitution of the answerer occurs, answering of the health advice is stopped. By checking the recorded abnormal questions, the auditor can easily find out whether the answerer gave false answers to particular health advice questions and, if so, can carry out data review and other follow-up work.
The method further comprises obtaining and recording the answerer's answer to each single health question according to the second video information, which includes the following:
and performing voice recognition on second voice information in the second video information, obtaining semantic information corresponding to the second voice information through a semantic recognition engine, matching the semantic information with answers corresponding to single health problems prestored in a database through a deep learning neural network model to obtain the matching rate of the semantic information and the answers, and storing the semantic information and the matching rate.
The answer of a single health question prestored in the database comprises a plurality of answers which can be applied to the health question. The matching rate can indicate the similarity between the answers answered by the answerers and the answers prestored in the database, the answers with high similarity indicate that the answers of the answerers conform to the insurance conditions of the health questions, the answers with low similarity indicate that the answers of the answerers need to be further audited by an insurance auditor, and the insurance auditor can determine which insurance condition of the client needs to be further audited according to the stored matching rate of the answers of each health question, so that the work efficiency of the auditor is improved, and the work intensity is reduced.
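The matching model itself is described only as a deep learning neural network; as a simple illustration of how a matching rate between the recognized answer and the pre-stored answers could be produced, a TF-IDF cosine similarity can be used (an assumption, not the patented model):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def matching_rate(recognized_answer, stored_answers):
        # Returns the highest similarity between the recognized answer and the answers
        # pre-stored in the database for this health question.
        vectors = TfidfVectorizer().fit_transform([recognized_answer] + list(stored_answers))
        scores = cosine_similarity(vectors[0:1], vectors[1:])[0]
        return float(scores.max())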
According to the second video information, obtaining and recording answers of answering persons to a single health question, and adopting a lip language identification mode, wherein the method specifically comprises the following steps: and performing lip language identification on the second image information, acquiring lip actions through a face identification model, matching the lip actions with a lip language model of a single health question answer prestored in a database through a lip language identification algorithm model established by a deep learning neural network, obtaining the lip language model matching rate of the lip actions and the answer acquired through the face identification model, and storing the lip actions and the matching rate.
The lip language recognition algorithm model mainly adopts a deep learning model algorithm based on time series recognition, such as RNN (recurrent neural network) + LSTM (long-short term memory network).
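On a similarly illustrative basis, a minimal PyTorch sketch of an LSTM sequence classifier over per-frame lip-region feature vectors is shown below; the feature dimension, hidden size and class count are assumptions:

    import torch.nn as nn

    class LipSequenceClassifier(nn.Module):
        # LSTM over a sequence of per-frame lip-region feature vectors.
        def __init__(self, feature_dim=128, hidden_dim=256, num_classes=10):
            super().__init__()
            self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, num_classes)

        def forward(self, x):                  # x: (batch, time, feature_dim)
            _, (h_n, _) = self.lstm(x)
            return self.head(h_n[-1])          # classify from the last hidden state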
In this application, the health question to be answered and the video and audio of the answerer recorded by the camera are displayed on the display screen at the same time. While the answerer reads and listens to the health question, information such as the answerer's face, voice, lip movements and expression is collected for real person identification, identity verification, confirmation of the answer content and identification of deceptive responses, which improves the efficiency of insurance application and underwriting review.
Example 2
Fig. 2 is a schematic diagram of the logical structure of a data verification system based on video information according to embodiment 2 of the present invention.
As shown in fig. 2, the data verification system based on video information provided in this embodiment includes: a first video acquisition unit 201, a real person identification and first identity verification unit 202, a verification problem pushing and second video acquisition unit 203 and a fraud judgment unit 204.
The first video acquisition unit 201 is used for acquiring data to be verified, wherein the data to be verified comprises first video information containing a behavior main body;
the real person identification and first identity verification unit 202 is used for respectively carrying out real person identification and first identity verification on the behavior main body in the data to be verified according to the first video information, and judging whether the behavior main body is a real person and whether it is consistent with a pre-stored certificate photo; if the behavior main body is a real person and is consistent with the pre-stored certificate photo, second video acquisition and verification question pushing are performed; if the behavior main body is not a real person and/or is not consistent with the pre-stored certificate photo, the first video acquisition unit is triggered again;
the verification question pushing and second video collecting unit 203 is used for sequentially pushing at least one verification question to the behavior main body and collecting second video information when the behavior main body answers the verification question;
the fraud judgment unit 204 is used for carrying out second identity verification and micro-expression analysis on the behavior main body according to the second video information, judging whether the behavior main body is consistent with the behavior main body in the first video, and judging whether the behavior main body shows fraud in the process of answering the verification question; if the behavior main body is not consistent with the behavior main body in the first video, pushing of new verification questions to the behavior main body is stopped; and if the behavior main body is consistent with the behavior main body in the first video but the micro-expression analysis shows that the behavior main body has fraud behavior in the process of answering the verification question, the verification question is marked as an abnormal question.
Example 3
Fig. 3 is a schematic diagram of a logic structure of an electronic device according to embodiment 3 of the present invention.
As shown in fig. 3, an electronic device 1 includes a memory 3 and a processor 2; the memory stores a computer program 4, and the computer program 4 implements the steps of the data verification method based on video information in embodiment 1 when executed by the processor 2.
Example 4
A computer-readable storage medium, which includes therein a video-information-based data verification program, and when the video-information-based data verification program is executed by a processor, the steps of the video-information-based data verification method in embodiment 1 are implemented.
The data verification method, apparatus, and storage medium based on video information according to the present invention are described above by way of example with reference to fig. 1, 2, and 3. However, it will be understood by those skilled in the art that various modifications may be made to the method, apparatus and storage medium for data verification based on video information as set forth in the above description without departing from the scope of the invention. Therefore, the scope of the present invention should be determined by the contents of the appended claims.

Claims (10)

1. A data verification method based on video information is characterized by comprising the following steps:
S110: acquiring data to be verified, wherein the data to be verified comprises first video information containing a behavior main body;
S120: according to the first video information, performing real person identification and first identity verification on the behavior main body in the data to be verified respectively, to judge whether the behavior main body is a real person and whether the behavior main body is consistent with a pre-stored certificate photo; if the behavior main body is a real person and is consistent with the pre-stored certificate photo, performing S130; if the behavior main body is not a real person and/or is not consistent with the pre-stored certificate photo, performing S110;
S130: sequentially pushing at least one verification question to the behavior main body, and collecting second video information while the behavior main body answers the verification question;
S140: according to the second video information, respectively carrying out second identity verification and micro-expression analysis on the behavior main body, to judge whether the behavior main body is consistent with the behavior main body in the first video and whether the behavior main body shows cheating behavior in the process of answering the verification question; if the behavior main body is not consistent with the behavior main body in the first video, stopping pushing new verification questions to the behavior main body; and if the behavior main body is consistent with the behavior main body in the first video but the micro-expression analysis shows that the behavior main body has cheating behavior in the process of answering the verification question, marking the verification question as an abnormal question.
2. The data verification method based on video information as claimed in claim 1, wherein the performing real person identification on the behavior subject in the data to be verified according to the first video information to determine whether the behavior subject is a real person comprises the following processes:
capturing at least two thousand first face pictures from the first image information in the first video information through frame extraction, and evaluating the first face pictures with a silent multi-frame liveness detection model;
acquiring the heart rate of the behavior main body at the moment each first face picture was captured, by means of RPPG (remote photoplethysmography) heartbeat detection;
if the heart rates are all within the set range, calculating a heart rate average fluctuation value from the heart rates, and if the heart rates are all smaller than the heart rate average fluctuation value and the result calculated by the silent multi-frame liveness detection model is a real person, judging that the behavior main body is a real person;
and if the heart rate is not in the set range, judging that the behavior subject is not a real person.
3. The video-information-based data verification method according to claim 2, wherein the calculation process of the heart rate average fluctuation value includes:
dividing the first face pictures into M groups according to the intercepted time sequence, wherein each group comprises N first face pictures, subtracting the minimum heart rate value from the maximum heart rate value in each group to obtain a heart rate difference value, adding the heart rate difference values of each group, and then averaging, wherein the average value is the heart rate average fluctuation value.
4. The data verification method based on video information as claimed in claim 2, wherein the performing of the first identity verification on the behavior main body according to the first video information and determining whether the behavior main body is consistent with the pre-stored certificate photo comprises the following processes:
adopting a face image detection algorithm model to perform quality detection on each first face picture, selecting at least one first face picture meeting preset quality conditions, and storing the first face picture as a standard face picture;
and comparing the standard face picture with the certificate photo prestored in the behavior main body to obtain the similarity between the standard face picture and the certificate photo, wherein if the similarity is higher than the preset similarity of the person and the certificate, the first identity verification is passed.
5. The data verification method based on video information as claimed in claim 4, wherein the performing second identity verification and micro-expression analysis on the behavior principal according to the second video information to determine whether the behavior principal is consistent with the behavior principal in the first video and whether the behavior principal has fraud behavior in answering the verification question comprises the following steps:
frame extraction is carried out on second image information in the second video information within set time to obtain a second face picture, the second face picture is compared with the standard face picture, and if the comparison result is the same, the behavior main body is consistent with the behavior main body in the first video;
and inputting the second face picture into an expression classification model based on a convolutional neural network for micro-expression analysis, and if the micro-expression analysis result shows deceptive behavior, marking the verification question as an abnormal question.
6. The video-information-based data verification method of claim 1, further comprising obtaining and recording answers from the behavioral entity to the verification questions based on the second video information.
7. The video-information-based data verification method according to claim 1, wherein said S130 further comprises: broadcasting the verification question through a loudspeaker.
8. A data verification system based on video information, comprising:
the first video acquisition unit is used for acquiring data to be verified, wherein the data to be verified comprises first video information containing a behavior main body;
the real person identification and first identity verification unit is used for respectively carrying out real person identification and first identity verification on the behavior main body in the data to be verified according to the first video information, so as to judge whether the behavior main body is a real person and whether the behavior main body is consistent with a pre-stored certificate photo; if the behavior main body is a real person and is consistent with the pre-stored certificate photo, second video acquisition and verification question pushing are performed; if the behavior main body is not a real person and/or is not consistent with the pre-stored certificate photo, the first video acquisition unit is triggered again;
the verification question pushing and second video collecting unit is used for sequentially pushing at least one verification question to the behavior main body and collecting second video information when the behavior main body answers the verification question;
a deception behavior judging unit, configured to perform second identity verification and micro-expression analysis on the behavior main body according to the second video information, to judge whether the behavior main body is consistent with the behavior main body in the first video and whether the behavior main body has deceptive behavior in the process of answering the verification question; if the behavior main body is not consistent with the behavior main body in the first video, stop pushing new verification questions to the behavior main body; and if the behavior main body is consistent with the behavior main body in the first video but the micro-expression analysis shows that the behavior main body has deceptive behavior in the process of answering the verification question, mark the verification question as an abnormal question.
9. An electronic device, comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, carries out the steps of the video-information-based data authentication method according to any one of claims 1 to 7.
10. A computer-readable storage medium, in which a video-information-based data verification program is stored, which, when executed by a processor, implements the steps of the video-information-based data verification method according to any one of claims 1 to 7.
CN202010236490.0A 2020-03-30 2020-03-30 Data verification method and device based on video information and storage medium Pending CN111553189A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010236490.0A CN111553189A (en) 2020-03-30 2020-03-30 Data verification method and device based on video information and storage medium
PCT/CN2021/071987 WO2021196831A1 (en) 2020-03-30 2021-01-15 Data verification method based on video information, device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010236490.0A CN111553189A (en) 2020-03-30 2020-03-30 Data verification method and device based on video information and storage medium

Publications (1)

Publication Number Publication Date
CN111553189A true CN111553189A (en) 2020-08-18

Family

ID=72002050

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010236490.0A Pending CN111553189A (en) 2020-03-30 2020-03-30 Data verification method and device based on video information and storage medium

Country Status (2)

Country Link
CN (1) CN111553189A (en)
WO (1) WO2021196831A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112752051A (en) * 2020-12-31 2021-05-04 广州市中崎商业机器股份有限公司 Intelligent recording device, control method and device thereof, and storage medium
WO2021196831A1 (en) * 2020-03-30 2021-10-07 深圳壹账通智能科技有限公司 Data verification method based on video information, device, and storage medium
CN113556518A (en) * 2021-09-23 2021-10-26 成都派沃特科技股份有限公司 Video data scheduling method, device, equipment and storage medium
CN114863515A (en) * 2022-04-18 2022-08-05 厦门大学 Human face living body detection method and device based on micro-expression semantics

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107122709B (en) * 2017-03-17 2020-12-04 上海云从企业发展有限公司 Living body detection method and device
TWI711980B (en) * 2018-02-09 2020-12-01 國立交通大學 Facial expression recognition training system and facial expression recognition training method
CN109815794A (en) * 2018-12-14 2019-05-28 北京飞搜科技有限公司 Recognition of face is counter to cheat method, apparatus and electronic equipment
CN109697665A (en) * 2018-12-15 2019-04-30 深圳壹账通智能科技有限公司 Loan checking method, device, equipment and medium based on artificial intelligence
CN109711312A (en) * 2018-12-20 2019-05-03 四川领军智能科技有限公司 A kind of testimony of a witness verifying system and method based on silent In vivo detection recognition of face
CN109508706B (en) * 2019-01-04 2020-05-05 江苏正赫通信息科技有限公司 Silence living body detection method based on micro-expression recognition and non-sensory face recognition
CN110889332A (en) * 2019-10-30 2020-03-17 中国科学院自动化研究所南京人工智能芯片创新研究院 Lie detection method based on micro expression in interview
CN111553189A (en) * 2020-03-30 2020-08-18 深圳壹账通智能科技有限公司 Data verification method and device based on video information and storage medium

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021196831A1 (en) * 2020-03-30 2021-10-07 深圳壹账通智能科技有限公司 Data verification method based on video information, device, and storage medium
CN112752051A (en) * 2020-12-31 2021-05-04 广州市中崎商业机器股份有限公司 Intelligent recording device, control method and device thereof, and storage medium
CN113556518A (en) * 2021-09-23 2021-10-26 成都派沃特科技股份有限公司 Video data scheduling method, device, equipment and storage medium
CN113556518B (en) * 2021-09-23 2021-12-17 成都派沃特科技股份有限公司 Video data scheduling method, device, equipment and storage medium
CN114863515A (en) * 2022-04-18 2022-08-05 厦门大学 Human face living body detection method and device based on micro-expression semantics

Also Published As

Publication number Publication date
WO2021196831A1 (en) 2021-10-07

Similar Documents

Publication Publication Date Title
CN111553189A (en) Data verification method and device based on video information and storage medium
CN109522815B (en) Concentration degree evaluation method and device and electronic equipment
Niessen et al. Detecting careless respondents in web-based questionnaires: Which method to use?
EP3807792B1 (en) Authenticating an identity of a person
CN112328999B (en) Double-recording quality inspection method and device, server and storage medium
JP4401079B2 (en) Subject behavior analysis
US20100266213A1 (en) Method of assessing people's self-presentation and actions to evaluate personality type, behavioral tendencies, credibility, motivations and other insights through facial muscle activity and expressions
US20140240507A1 (en) Online Examination Proctoring System
CN109492595B (en) Behavior prediction method and system suitable for fixed group
CN110738114A (en) student identity safety verification system for online education
KR20180050968A (en) on-line test management method
CN208351494U (en) Face identification system
CN112215700A (en) Credit face audit method and device
CN115511329A (en) Electric power operation compliance monitoring system and method
KR100998617B1 (en) Central control type cbt system and method thereof
CN113794759B (en) Examination cloud platform system based on block chain
CN110705270A (en) Voice monitoring online examination method and device based on five-determination technology
CN110378587A (en) Intelligent quality detecting method, system, medium and equipment
CN111914763B (en) Living body detection method, living body detection device and terminal equipment
Ivanova et al. Enhancing trust in eassessment-the tesla system solution
CN114971658B (en) Anti-fraud propaganda method, system, electronic equipment and storage medium
CN112991076A (en) Information processing method and device
CN113542668A (en) Monitoring system and method based on 3D camera
CN113485668B (en) Intelligent account opening method and system
CN111767845A (en) Certificate identification method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40033239

Country of ref document: HK

SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination